Sweden’s Suspicion Machine
Behind a veil of secrecy, the social security agency deploys discriminatory algorithms in search of a fraud epidemic it has invented
Sweden is regularly hailed as a model welfare state. It tops global transparency indexes and retains high levels of public trust. Behind this reputation for openness, the country’s Social Insurance Agency (Försäkringskassan) has silently conducted large-scale experiments with algorithms that score hundreds of thousands of people on benefits, supposedly predicting whether they will commit fraud.
Sources within the Social Insurance Agency — tasked with running Sweden’s social security — describe these algorithms as its “best-kept secret.” The benefit recipients who find themselves subject to sometimes humiliating investigations, or who have benefits suspended, have no idea that they have been flagged by an algorithm.
In October 2021, we sent a freedom-of-information request to the Social Insurance Agency attempting to find out more. It immediately rejected our request. Over the next three years, we exchanged hundreds of emails and sent dozens of freedom-of-information requests, nearly all of which were rejected. We went to court, twice, and spoke to half a dozen public authorities.
Lighthouse Reports and Svenska Dagbladet obtained an unpublished dataset containing thousands of applicants to Sweden’s temporary child support scheme, which supports parents taking care of sick children. Each of them had been flagged as suspicious by a predictive algorithm deployed by the Social Insurance Agency. Analysis of the dataset revealed that the agency’s fraud prediction algorithm discriminated against women, migrants, low-income earners and people without a university education.
Months of reporting — including conversations with confidential sources — demonstrate how the agency has deployed these systems without scrutiny despite objections from regulatory authorities and even its own data protection officer.
METHODS
Our Suspicion Machines series has investigated welfare surveillance algorithms in more than eight countries. We did not anticipate that Sweden would be the most difficult.
Over the course of three years, Lighthouse made extensive use of freedom-of-information laws in Sweden, requesting technical documentation and evaluations similar to those we had received in our previous investigations in France, Spain and the Netherlands.
The refusals were relentless. The agency declined to disclose even the most basic material, arguing that doing so would allow fraudsters to evade detection. It refused to confirm whether its algorithms were trained on random samples, or even how many people had been flagged by an algorithm in total. It also refused to disclose basic statistics about how it arrived at its estimates of welfare fraud. In one email chain, a high-level official at the agency, having apparently forgotten to remove one of our reporters from CC, wrote of him: “let’s hope we are done with him!”
To test the extent of the stonewalling, Lighthouse asked for information that the Social Insurance Agency had published in its annual reports. It refused to provide the information, claiming that it was confidential.
There were nonetheless strong indications that the agency’s use of fraud prediction algorithms was deeply problematic. A 2016 report from Sweden’s Integrity Committee described the practice as “citizen profiling” and warned of extreme risks to citizens’ personal integrity. Meanwhile, a redacted 2020 note from the agency’s data protection officer questioned the legality of the system.
A 2018 report from the ISF, an independent supervisory authority that oversees the Social Insurance Agency, opened a new route of inquiry. The report concluded, based on a dataset the watchdog had received, that the agency’s algorithm for predicting fraud in parental assistance benefits did not treat applicants equally. The Social Insurance Agency rejected the conclusions of its supervisory authority and questioned the validity of its analysis.
Obtaining the dataset underlying the ISF report made in-depth reporting possible. The dataset contained records of more than 6,000 people flagged for investigation by the algorithm in 2017, along with their demographic characteristics. With the support of eight academic experts, Lighthouse and Svenska Dagbladet ran a series of statistical fairness tests to assess which groups were disparately impacted. The analysis found that women, migrants, low-income earners and people without a university education were overselected by the model. It also found that people from these groups who had done nothing wrong were more likely to be wrongly labeled as suspicious by the system.
This methodology describes our analysis, and the underlying code and data are now available on GitHub.
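To make the fairness tests concrete, here is a minimal sketch of two of the disparity metrics such an analysis typically rests on: the rate at which each group is selected for investigation, and the rate at which innocent members of each group are wrongly flagged. This is an illustration only, not our published analysis; the dataframe, column names ("flagged", "wrongdoing", "gender") and toy values are hypothetical, and the real tests, code and data are in the methodology and GitHub repository referenced above.

```python
# Illustrative sketch of group-fairness checks (hypothetical data and columns).
import pandas as pd

# One row per person scored by the model: whether the algorithm flagged them,
# whether any wrongdoing was ultimately found, and a demographic attribute.
df = pd.DataFrame({
    "flagged":    [1, 0, 0, 1, 0, 1, 1, 0, 1, 0],
    "wrongdoing": [0, 0, 0, 1, 0, 0, 0, 0, 1, 0],
    "gender":     ["F", "F", "M", "F", "M", "F", "M", "M", "F", "M"],
})

def selection_rate(group: pd.DataFrame) -> float:
    """Share of the group selected (flagged) for investigation."""
    return group["flagged"].mean()

def false_positive_rate(group: pd.DataFrame) -> float:
    """Share of the group's innocent members (no wrongdoing) who were still flagged."""
    innocent = group[group["wrongdoing"] == 0]
    return innocent["flagged"].mean()

for label, metric in [("selection rate", selection_rate),
                      ("false positive rate", false_positive_rate)]:
    rates = {g: round(float(metric(grp)), 2) for g, grp in df.groupby("gender")}
    # Demographic-parity-style ratio: lowest group rate over highest group rate.
    # Values far below 1.0 indicate one group is disproportionately targeted.
    ratio = min(rates.values()) / max(rates.values())
    print(f"{label}: {rates} (min/max ratio: {ratio:.2f})")
```

The published analysis runs a fuller suite of such tests across several demographic variables; this sketch only shows the general shape of comparing per-group rates and their ratios.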
STORYLINES
Lighthouse and Svenska Dagbladet produced a three-part series based on our joint reporting and analysis.
The first story reveals how Sweden’s social security agency has deployed machine learning at an industrial scale and largely in secret. It recounts the experiences of parents who were left without money to cover basic essentials. Those with the highest risk scores are investigated by fraud investigators who have enormous powers and who work in a corner of the agency’s offices locked off from other employees.
It shows how vulnerable groups that have historically faced discrimination are also the groups more likely to be wrongly selected for investigation.
In response to our findings, Anders Viseth, who oversees the agency’s fraud algorithm, denied wrongdoing. He further argued that being put under investigation is not a disadvantage because a human investigator always makes the final decision. This is despite the fact that benefit payments are delayed while recipients undergo invasive investigations.
The second story examines the agency’s claims of large-scale fraud — one of the primary justifications for the use of AI and the secrecy surrounding it. These estimates have been published by the media, highlighted in annual reports and have driven much of the public debate. Analysis of the methodology revealed that it was rooted in baseless assumptions, including a definition of fraud that failed to check whether mistakes were intentional.
In reality, data we obtained from both the agency and the Swedish criminal justice system show that very few cases in which the social security agency alleges fraud ever reach the courts. And even when they do, courts rarely find that a defendant has intentionally committed fraud.
The final piece interrogates the agency’s lack of transparency and accountability. Virginia Dignum, a professor at Umeå University and one of the 38 experts selected for the United Nations’ expert group on AI, sharply criticizes the agency’s arguments against transparency.
David Nolan from Amnesty’s Algorithmic Accountability Lab criticized the lack of redress for citizens profiled by the system.
“The opacity of the system means most individuals are not aware that fraud control algorithms were used to flag them for further investigation,” Nolan said. “How are individuals expected to effectively challenge a decision made about them – as is their right – when they are likely to be unaware that they are the subject of an automated process?”
When asked whether the agency should be more transparent, its fraud algorithm supervisor, Viseth, responded: “I don’t think we need to be.”