Bots and AI-Related Technologies, Legitimate Interest, and Fair Processing Under the General Data Protection Regulation

Research output: Thesis › Doctoral Thesis › Monograph

Abstract

This thesis examines whether the GDPR can efficiently and effectively promote its goals and aims when personal data is processed by bots and AI-related technologies, specifically when that data is processed on the basis of legitimate interest under art. 6(1)(f), as interpreted through the principle of fairness under art. 5(1)(a). To do this, I break the topic into three research questions: (1) how should we understand the concept of fairness under art. 5(1)(a); (2) can the concept of fairness, as understood under art. 5(1)(a), help to address the shortcomings of legitimate interest processing under art. 6(1)(f); and (3) is the use of legitimate interest processing, as interpreted through the lens of fairness, an efficient and effective tool for supporting the GDPR’s goals and aims when personal data is processed by bots and AI-related technologies? As these questions cover legal, philosophical and technical issues, I adopt a socio-legal approach and incorporate sources from each field.

For the first question, I look at the existing interpretation of fairness and find that the concept has (at best) relied on innate or implicit judgements and (at worst) been used as mere window-dressing. I therefore use Rawls’ original-position thought experiment to develop a flexible test that provides a framework for explicit and open discussion of whether something should be considered fair. I then compare this test to other uses of fairness in law and, finding it compatible, propose it as a test for evaluating fairness under art. 5(1)(a) of the GDPR.

For the second question, I examine the existing guidance and find that, among other things, the balancing act required by art. 6(1)(f) struggles to deal with the inherent subjectivity of the interests involved. This leads to inconsistency between guidelines, difficulties in identifying and weighing interests, and uncertainty as to how the balancing act should be calibrated. I argue that by using the fairness test outlined above as an interpretive lens for the legitimate interest balancing test, we can accommodate this subjectivity in a more reliable and structured manner, which helps to resolve or mitigate the issues identified and address the shortcomings of art. 6(1)(f).

Finally, I look at how this test might operate when personal data is processed by bots and AI-related technologies. I first consider the nature of these technologies and what existing conversations can tell us about them, including, inter alia, an examination of existing discussions on the ethical use of such technologies and the factors that may be relevant to a balancing test. I then consider how the legitimate interest fairness test developed above might be applied when personal data is processed by bots and AI-related technologies, and how the circumstances relevant to this context might be taken into account. Lastly, I evaluate the test and conclude that, if used properly, it can support the GDPR's goals when bots and AI-related technologies are used to process personal data, noting the factors that must be considered and the strengths and weaknesses of the approach described.
Original language: English
Awarding institution
  • Faculty of Law
Supervisor/Advisor
  • Bräutigam, Tobias, Supervisor
  • Korpisaari (ex. Tiilikka), Päivi, Supervisor
Award date: 17 Dec 2021
Place of publication: Helsinki
Publisher
Print ISBN: 978-951-51-7727-8
Electronic ISBN: 978-951-51-7728-5
Status: Published - 17 Dec 2021
OKM publication type: G4 Doctoral dissertation (monograph)

Fields of Science

  • 513 Law
