Technology-related Risks to the Right to Asylum: Epistemic Vulnerability Production in Automated Credibility Assessment

Research output: Contribution to journal › Article › Scientific › peer-review


This paper examines the risks that artificial intelligence may pose to the enjoyment of the fundamental right to asylum. It analyses, at a theoretical level, how understandings of digitally acquired data produce vulnerability in the asylum procedure. The EU Commission's draft AI Act has been criticised for a weak understanding of fundamental rights, even though the regulation aims to minimise risks to such rights when AI systems are used. This paper attempts to supply the missing account of the negative implications that AI can have for the right to asylum. Such an analysis is pivotal if the safeguards proposed in the AI Act are to be implemented meaningfully. The paper argues that giving meaning to digitally acquired data is an implicit, collective practice characterised by overconfidence in such data, which may in practice place a heightened burden of proof on the asylum applicant.
Original language: English
Journal: European Journal of Law and Technology
Issue number: 3
Number of pages: 28
Publication status: Published - 30 Dec 2022
MoE publication type: A1 Journal article-refereed

Fields of Science

  • 513 Law
  • asylum procedure
  • automated decision-making
  • credibility assessment
  • vulnerability
  • AI regulation
  • oversight