Swedish Prime Minister Faces Criticism for Using ChatGPT in Political Decision-Making
Sweden’s Prime Minister Ulf Kristersson is under increasing scrutiny after revealing that he regularly consults artificial intelligence tools, including ChatGPT and the French AI platform Le Chat, to gather “second opinions” on political matters and international policy strategies. The revelation, made in an interview with the business daily Dagens industri, has sparked debate about the role of AI in governmental decision-making and democratic accountability.
Kristersson, who leads Sweden’s center-right coalition government, explained that he uses AI tools mainly to explore alternative viewpoints and assess what other countries have done on key policy issues. “If for nothing else than for a second opinion. What have others done? And should we think the complete opposite?” he said. His approach aims to widen the scope of political insight by leveraging the broad data and perspectives that AI can provide.
However, the Prime Minister’s admission has drawn sharp criticism from academics, cybersecurity experts, the media, and AI ethicists. Simone Fischer-Hübner, a cybersecurity researcher at Karlstad University, cautioned that AI systems are fundamentally unfit for processing sensitive or classified government information, highlighting risks to data security and reliability. The editorial board of the Swedish newspaper Aftonbladet accused Kristersson of succumbing to “AI psychosis,” condemning his reliance on AI platforms predominantly controlled by foreign technology companies.
Virginia Dignum, a professor specializing in responsible AI at Umeå University, voiced a particularly strong concern, stating in an interview with Dagens Nyheter, “We didn’t vote for ChatGPT.” She warned that escalating dependence on AI tools for even seemingly simple political decisions could foster overconfidence in the technology’s outputs and ultimately undermine human judgment. “It is a slippery slope,” she cautioned, emphasizing the need for critical scrutiny and ethical boundaries in the use of AI by elected officials.
Kristersson’s spokesperson, Tom Samuelsson, responded to the controversy by assuring the public that no sensitive government information is shared with AI platforms. The AI tools are used “more as a ballpark” to gain broader context rather than to make direct policy decisions. Despite this clarification, many remain concerned about the consequences of integrating AI insights into governance, fearing it may set a precarious precedent and embed AI biases in public policy.
The controversy surrounding Sweden’s Prime Minister underscores a broader global debate about the intersection of artificial intelligence and politics. As AI technologies become increasingly sophisticated and accessible, questions arise about ethical limits, transparency, and democratic accountability when AI systems influence policy decisions. The incident serves as a catalyst for governments worldwide to reconsider how elected officials should interact with AI tools while safeguarding public trust and national security.
Experts advocate for establishing clear ethical frameworks and guidelines to govern the use of AI in government, ensuring that elected representatives remain the ultimate decision-makers while benefiting from AI as a supportive resource. The Swedish case highlights the urgent need for such measures in an era where AI’s role in public administration is expanding rapidly.