Behind the AI analysis that renewed interest in the Palme Assassination

Forty years after the assassination of Swedish Prime Minister Olof Palme, a new wave of attention has returned to one of Europe’s most debated cold cases. Part of that renewed interest comes from an unexpected place: modern AI applied to an unusually large, complex body of investigative material. This is the story behind that breakthrough, told through three perspectives.
Simon Lundell and the groundwork
When Swedish prosecutors announced in the summer of 2020 that the Palme investigation would be closed and identified the so-called “Skandia Man” as the likely perpetrator, many saw it as the final chapter in a decades-long process.
For programmer Simon Lundell, it marked a beginning.
He remembers the press conference as an anticlimax. After decades of speculation, and an archive that had grown to hundreds of thousands of pages, he felt the conclusion wasn’t enough.
“It felt like there was much more to be done,” Lundell says. “Something lit a spark in me, a drive to keep digging and pick up where others had stopped.”
Discussions about the Palme assassination were abundant online, but often circular. The same facts were repeated in forums and podcasts without a shared structure. Lundell began collecting documents, references, and prior reporting. Soon, he built a digital platform, similar to Wikipedia, but focused solely on the Palme case. Here, information was structured, linked, and made searchable.
“It became a hub for the community who wanted to work with the material,” he says.
A group of four
Before long, he joined forces with researchers Jonas Nyman and Mattias Davidsson, and engineer Jerry Dahlsberg. Over six years, the group combined digital analysis with traditional fieldwork.
They visited archives, reviewed historical footage, transcribed radio broadcasts, and conducted and reviewed material from around 100 interviews with individuals connected to the case, ranging from witnesses to people who could clarify timelines or contextual details.
“You could say we’re regulars at the Military Archives in Stockholm,” Lundell says.
Their work focused on testing claims that had long been considered settled, including alibis that had led investigators to dismiss certain individuals. The method was thorough: aligning timelines, comparing statements, and distinguishing documented facts from later retellings.
However, accessing official material proved difficult. Large portions of the investigative archive remain classified.
“The investigation is enormous, and much of it is still under restriction,” Lundell says, adding that he has formally requested access to roughly 500,000 pages.
A self-built AI agent
To manage the growing dataset, Lundell built a small computer cluster equipped with GPUs: specialized processors particularly well suited for AI because they can perform many calculations simultaneously. The group also developed an AI agent to help structure and analyze the material.
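The article does not describe how the agent structures the material, but a minimal sketch of the kind of step it implies, extracting rough entities from documents and building a reverse index so related pages can be found, might look like this (all document names and contents are invented for illustration):

```python
import re
from collections import defaultdict

# Toy documents standing in for scanned investigation pages (hypothetical data).
documents = {
    "doc_001": "Witness observed a man on Sveavagen at 23:21 near Tunnelgatan.",
    "doc_002": "Interview: the man left Sveavagen and was seen on Luntmakargatan.",
}

def index_by_entity(docs):
    """Build a reverse index from capitalized tokens (crude 'entities') to document ids."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for token in re.findall(r"\b[A-Z][a-z]+\b", text):
            index[token].add(doc_id)
    return index

index = index_by_entity(documents)
print(sorted(index["Sveavagen"]))  # all documents mentioning the same street
```

A real pipeline would use proper named-entity recognition rather than a capitalization heuristic, but the principle is the same: once every mention is indexed, cross-referencing half a million pages becomes a lookup instead of a rereading.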
But as the data grew larger, the limits became apparent.
“We gathered everything into a cluster I run from my garage,” he says. “But eventually, we needed more computing power.”
By coincidence, the AI infrastructure company Airon was located near Lundell's hometown and his server-filled garage. That proximity led to a new collaboration, and six years of groundwork were about to meet industrial-scale AI capacity.
Robert Lidberg and the analysis
As the analytical ambitions expanded, the group's technical limitations became visible. Running large AI models, comparing outputs, refining assumptions, and rebuilding graph structures require more than a few powerful computers.
Lidberg is CEO and co-founder of Airon, a company started out of a conviction that Europe needs its own dedicated AI infrastructure. Not only for commercial use, but for research, public institutions and sensitive applications where control and security matter.
“When we founded Airon, it was clear that advanced AI would eventually run into infrastructure limits,” Lidberg says. “You can’t build serious AI capabilities without also building the factories that convert electricity into computational power.”
In the TV4 documentary series, we can follow how the group’s investigation progressed to large-scale computational analysis. According to Lidberg, this is where many projects stall. The group’s AI agent was developed and trained on years of collected material. But large-scale text analysis and the building of relational graphs place heavy demands on the infrastructure.
“If you want to re-run models, test different assumptions and compare results over time, you need an environment that behaves consistently,” he says.
Airon’s facilities are designed from the outset for these kinds of AI workloads.
“Many underestimate how much infrastructure is required once you begin working seriously with AI,” Lidberg says. “With our non-shared infrastructure, we provide predictable throughput and guaranteed output.” In this case, he adds, that predictability proved decisive for moving the analysis forward.
New insights
With more capacity at their disposal, the group could continue their work. Larger datasets were processed systematically. Timelines could be reconstructed. Relational networks connecting individuals, locations and events were rebuilt as new hypotheses emerged.
One of the new leads presented is a reassessment of a person of interest often referred to as “Lieutenant X”. According to Simon Lundell and his colleagues, he was previously dismissed from the official investigation because he was considered to have an alibi for the night of the assassination. The group argues that the alibi does not hold up when the timeline is reconstructed in detail, and they say their review suggests he may have been in Stockholm that night rather than where he claimed to be.
The review also points to potential links to a military extremist network, including individuals connected to paratrooper units, and suggests that several people with backgrounds in law enforcement may have coordinated to uphold the alibi, with some of them still alive today.
“This shows you don’t necessarily need a massive organization to take on work at this scale,” says Robert Lidberg. “What matters is having the right people, the right tools, and AI infrastructure that can handle the task.”
Jonas Lindh and what comes next

If the first phase was about groundwork and the second about infrastructure, the third is about implications. How do we move from here?
Jonas Lindh, AI Engineer and Head of R&D at Airon, approaches the question with experience from both technology and national security. Before joining Airon, he worked as a data scientist at the Swedish Security Service. He holds a PhD in language technology, specializing in speech and audio analysis used in police investigations.
“As investigative archives grow larger and more complex, it becomes harder to manage everything manually,” Lindh says. “AI can help process and cross-reference information. But it does not replace legal standards or human responsibility.”
Lindh believes workflows will gradually change. Much of today’s work still involves manual review: reading reports, comparing documents, tracking inconsistencies across large volumes of material. AI can structure those flows, building timelines, identifying recurring entities, and highlighting patterns that would otherwise take months to detect, or go undetected altogether. In the future, the massive amounts of data handled in such cases will be processed and structured by AI, while human analysts concentrate on the details. This hybrid approach combines the strengths of human and machine.
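As a toy illustration of the timeline consistency checking described here, the sketch below flags a person recorded in two different places too close together in time to allow travel between them. The names, sightings, and travel threshold are all invented for the example:

```python
from datetime import datetime

# Hypothetical sightings extracted from documents: (person, place, time).
sightings = [
    ("X", "Stockholm", datetime(1986, 2, 28, 23, 15)),
    ("X", "Uppsala",   datetime(1986, 2, 28, 23, 30)),
    ("Y", "Stockholm", datetime(1986, 2, 28, 23, 20)),
]

MAX_TRAVEL_MINUTES = 45  # assumed minimum travel time between distinct places

def flag_conflicts(events):
    """Flag consecutive sightings of the same person in different places
    that are closer together in time than the travel threshold allows."""
    conflicts = []
    last_seen = {}
    for person, place, t in sorted(events, key=lambda e: e[2]):
        if person in last_seen:
            prev_place, prev_t = last_seen[person]
            gap_minutes = (t - prev_t).total_seconds() / 60
            if prev_place != place and gap_minutes < MAX_TRAVEL_MINUTES:
                conflicts.append((person, prev_place, place, gap_minutes))
        last_seen[person] = (place, t)
    return conflicts

print(flag_conflicts(sightings))  # "X" cannot be in both cities 15 minutes apart
```

A check like this does not decide anything on its own; it surfaces candidate inconsistencies for a human analyst to examine, which is exactly the division of labor Lindh describes.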
Lindh points out that parts of Europe are already introducing AI and automated biometrics in regulated operational contexts. For example, biometric passports, containing facial images and fingerprints, are used in automated border control systems where a traveller’s stored biometric data is checked against the chip in their passport at e-gates. These processes operate within clearly defined legal and procedural frameworks and are designed to enhance security while respecting privacy and regulatory requirements.
The infrastructure question
One reason AI is not yet widely used inside public authorities is practical: running modern AI models is not like running traditional IT systems. It requires a different kind of infrastructure: high-density compute, specialized cooling, stable power delivery, and environments that can handle heavy workloads over time without performance swings.
“AI has moved faster than most infrastructure planning,” Lindh says. “Many discover that owning GPUs is only the beginning. They struggle to understand how to use them effectively, how to scale as demand grows, and how to handle the complexity behind it. High-performance GPUs require substantial power and advanced liquid cooling, something most existing facilities were never designed for.”
At the same time, traditional cloud solutions are available. However, when sensitive data is involved, questions arise around governance, legal jurisdiction, and control.
“For me, it’s hard to justify running sensitive data in a shared environment where you don’t fully control how and where workloads are executed, especially as it is unencrypted on the GPUs,” Lindh says.
In his view, the answer lies in purpose-built environments. Instead of retrofitting older buildings or facing operational limits, institutions can work within infrastructure already adapted to strict demands, such as Airon’s facilities.
Beyond the documentary
Lindh believes that once legal and technical bottlenecks are addressed, the next challenge becomes organizational: training, workflows, and competence. He points to Svea, an initiative focused on enabling public-sector organizations in Sweden to use advanced AI in secure, controlled environments, as an example of how this can work in practice. In Svea, Airon provides the underlying infrastructure, and the setup is designed to meet public-sector requirements around governance, isolation and handling of sensitive information.
“We actually already have the technical capacity in place,” he says. “The next step is building the knowledge side, training and how public authorities develop competence around these tools. That’s when the day-to-day work of investigators and analysts can really change.”
For Lindh, the documentary is not only about revisiting a historical case. It is a demonstration of what becomes possible when structured fieldwork meets AI, provided the conditions are right.
“This is definitely only the beginning,” he adds. “We’re excited to be part of how this develops going forward.”