The researchers are examining the issue of accountability from a philosophical and legal perspective in relation to automated or AI-controlled decision making in public administration.

Even now, many decision-making processes in public administration are based on simple algorithms. Tax bills, entitlement to social welfare payments, and qualification for citizenship are already among the decisions made by algorithms.

Strides in AI technology have led to the development of systems based on increasingly advanced algorithms. These systems have limited transparency, can process large quantities of data, are flexible, and can to some extent design themselves.

If automated or AI-controlled decision-making systems become more common, they might conceivably replace a large proportion of human decision making in public administration. This prospect raises fundamental philosophical issues about accountability: who or what should be held accountable for wrong decisions and why?

The researchers in this interdisciplinary project distinguish three scenarios:

  1. Artificial systems based on simple algorithms. Although humans are generally held accountable in these situations, the legal position in Sweden remains unclear. 
  2. Artificial systems based on self-learning and self-developing algorithms. In this case, entirely novel legal and philosophical questions must be addressed. For instance, decisions made by AI systems at this level cannot necessarily be understood afterwards by humans. Yet someone or something must be accountable.
  3. A hypothetical scenario in which AI systems exhibit behavior that cannot be distinguished from human behavior, or even possess something that can be equated with human consciousness. Would it be possible to hold a hypothetical entity of this kind accountable for its decisions from a legal and philosophical perspective?

The researchers aim to examine accountability from a philosophical and legal viewpoint in all three cases. They will determine whether new rules governing accountability for autonomous systems are needed, or whether existing rules should be modified. To do so, they will analyze both how accountability works under Swedish law and how moral accountability and moral agency can be understood by means of philosophical reasoning.

The analysis is expected to yield specific proposals for modified and new categories of accountability that can be applied both generally and, more specifically, in a Swedish legal context. The philosophical and legal analysis is also expected to yield a deeper understanding of the ethical, legal and societal implications of introducing AI systems in public administration.

Affiliated with WASP-HS

This research project is affiliated with WASP-HS and generously funded by the Marianne and Marcus Wallenberg Foundation.

Principal Investigator(s)

Sandra Friberg
Associate Professor of Private Law, Senior Lecturer in Private Law at the Faculty of Law


Oliver Li
Researcher at the Faculty of Theology

Lars Karlander
Researcher at Centre for Multidisciplinary Research on Religion and Society (CRS)

Johan Eddebo
Researcher at Centre for Multidisciplinary Research on Religion and Society (CRS)

Yulia Razmetaeva, Uppsala University
Visiting researcher at Department of Law, Part-time fixed-term lecturer at Department of Philosophy



2022

  • September, Yulia Razmetaeva and Natalia Satokhina publish "AI-Based Decisions and Disappearance of Law" in the Masaryk University Journal of Law and Technology.
  • October, an op-ed by Johan Eddebo and Claes Wrangel on algorithmic control mechanisms in the digital sphere is published in Aftonbladet.
  • October, Johan Eddebo and Mika Hietanen publish "Towards a Definition of Hate Speech—With a Focus on Online Contexts" in the Journal of Communication Inquiry.
  • October, Johan Eddebo gives a guest lecture at Uppsala University's Faculty of Law on the effects of digitalization on the fundamental principles of the modern legal regime. Uppsala, 24 October 2022.
  • November, Yulia Razmetaeva presents research at the conference on Artificial Intelligence and Human Rights (Vilnius, Mykolas Romeris University Law School and Infolex, online). Her presentation, on invisible AI interference and human rights, included a scheme of how AI in public decision making affects fundamental rights. A recording of the conference is available online.
  • December, Oliver Li participates in the seminar "Offentlig upphandling av digitala system" (Public procurement of digital systems), SNS, Stockholm, 15 November 2022.
  • December, Johan Eddebo presents research on the theme of artificial and natural agency at the conference "Military Imaginaries of AI and the Anthropocene", Uppsala University, 8–9 December.
  • December, Oliver Li participates in the workshop "AI4 research", 5 December 2022, Uppsala University, Humanistiska Teatern.


2023

  • March, Johan Eddebo presents research on the theme of algorithmic and artificial governance of discourses in digital media at The 13th Asian Conference on Ethics, Religion & Philosophy, Tokyo, 31 March – 3 April.
  • April, Oliver Li publishes his monograph Gud i allt och allt i Gud: En religionsfilosofisk undersökning av panenteism (God in everything and everything in God: a philosophy-of-religion study of panentheism) with Makadam förlag.
  • April, Yulia Razmetaeva publishes the text "Sacralization of AI" in Talk About: Law and Religion Blog, at the Center for the Rule of Law and Religion Studies, Yaroslav the Wise National Law University, Ukraine.
  • May, Johan Eddebo's presentation from The 13th Asian Conference on Ethics, Religion & Philosophy is published in the conference proceedings.
  • May, Johan Eddebo publishes the text "AI and commodification of religion" in Talk About: Law and Religion Blog, at the Center for the Rule of Law and Religion Studies, Yaroslav the Wise National Law University, Ukraine.
Last updated: 2023-05-26