Overview of the “Voight-Kampff” Generative AI Authorship Verification Task at PAN and ELOQUENT 2024

Janek Bevendorf, Matti Wiegmann, Jussi Jerker Karlgren, Luise Dürlich, Evangelia Gogoulou, Aarne Talman, Efstathios Stamatatos, Martin Potthast, Benno Stein

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

Abstract

The “Voight-Kampff” Generative AI Authorship Verification task aims to determine whether a text was generated by an AI or written by a human. As in its fictional inspiration, the Voight-Kampff task structures AI detection as a builder-breaker challenge: the builders, participants in the PAN lab, submit software to detect AI-written text, and the breakers, participants in the ELOQUENT lab, submit AI-written text with the goal of fooling the builders. We formulate the task in a way that is reminiscent of a traditional authorship verification problem, where, given a pair of texts, their human or machine authorship is to be inferred. For this first task installment, we further restrict the problem so that each pair is guaranteed to contain one human and one machine text. Hence the task description reads: Given two texts, one authored by a human, one by a machine: pick out the human. In total, we evaluated 43 detection systems (30 participant submissions and 13 baselines), ranging from linear classifiers to perplexity-based zero-shot systems. We tested them on 70 individual test set variants organized into 14 base collections, each designed around different constraints such as short texts, Unicode obfuscations, or language switching. The top systems achieve very high scores; while not perfect, they prove sufficiently robust across a wide range of specialized testing regimes. The code used for creating the datasets and evaluating the systems, the baselines, and the data are available on GitHub.
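The perplexity-based zero-shot systems mentioned in the abstract can be illustrated with a minimal sketch: score both texts of a pair under a language model and pick as human-written the one the model finds more surprising (higher perplexity), on the common assumption that machine-generated text scores as less surprising. The toy Laplace-smoothed unigram model below is only an illustrative stand-in for the neural language models real systems use; all function names and the decision rule are assumptions for this sketch, not the paper's method.

```python
import math
from collections import Counter

def train_unigram(corpus_tokens):
    """Fit a Laplace-smoothed unigram model; returns a log-probability function."""
    counts = Counter(corpus_tokens)
    total = sum(counts.values())
    vocab = len(counts) + 1  # reserve one slot of probability mass for unseen tokens
    def logprob(token):
        return math.log((counts[token] + 1) / (total + vocab))
    return logprob

def perplexity(tokens, logprob):
    """Per-token perplexity of a token sequence under the model."""
    return math.exp(-sum(logprob(t) for t in tokens) / len(tokens))

def pick_human(text_a, text_b, logprob):
    """Return 'A' or 'B': the text with *higher* perplexity, assumed human.

    Heuristic only: machine text tends to score lower under the scoring LM.
    """
    ppl_a = perplexity(text_a.split(), logprob)
    ppl_b = perplexity(text_b.split(), logprob)
    return "A" if ppl_a > ppl_b else "B"

lm = train_unigram("the cat sat on the mat the dog sat".split())
verdict = pick_human("quixotic zephyr gallivants", "the cat sat", lm)
```

Here the out-of-vocabulary text scores higher perplexity and is picked as "A"; a real detector would instead score both texts with a pretrained neural LM and may also normalize for length and domain.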

Original language: English
Title of host publication: Working Notes of the Conference and Labs of the Evaluation Forum (CLEF 2024)
Editors: Guglielmo Faggioli, Nicola Ferro, Petra Galuščáková, Alba García Seco de Herrera
Number of pages: 21
Place of publication: Aachen
Publisher: CEUR-WS.org
Publication date: 2024
Pages: 2486-2506
Publication status: Published - 2024
MoE publication type: A4 Article in conference proceedings
Event: Conference and Labs of the Evaluation Forum - Grenoble, France
Duration: 9 Sept 2024 - 12 Sept 2024
Conference number: 15

Publication series

Name: CEUR Workshop Proceedings
Publisher: CEUR-WS.org
Volume: 3740
ISSN (Print): 1613-0073

Fields of Science

  • 6121 Languages
  • 113 Computer and information sciences
