Abstract
The growth of Web-accessible dictionaries and term data has led to a proliferation of platforms distributing the same lexical resources in different combinations and packagings. Finding the right word or translation is like finding a needle in a haystack, and the sheer quantity of the data is undercut by the doubtful quality of the resources. Our aim is to cut down the quantity and raise the quality by matching and aggregating entries within and across dictionaries. In this exploratory paper, our goal is to see how far we can get by using information extracted from the dictionaries themselves. Our hypothesis is that the more limited quantity of data in dictionaries is compensated for by their richer structure and more concentrated information content. We hope to take advantage of the structure of dictionaries by basing quality criteria and measures on linguistic and terminological considerations. Our plan is to derive quality criteria for recognizing well-constructed dictionary entries from a model dictionary, and then to convert these criteria into language-independent, frequency-based measures. As a model dictionary we use the Princeton WordNet. The measures derived from it are tested against data extracted from BabelNet.
| Original language | English |
|---|---|
| Title of host publication | 15th International Semantic Web Conference (ISWC 2016): the Fourth International Workshop on Linked Data for Information Extraction (LD4IE 2016) |
| Editors | Anna Lisa Gentile, Claudia d'Amato, Ziqi Zhang, Heiko Paulheim |
| Number of pages | 12 |
| Volume | 1699 |
| Place of publication | Kobe |
| Publisher | CEUR-WS.org |
| Publication date | 4 Oct 2016 |
| Pages | 51-62 |
| Status | Published - 4 Oct 2016 |
| MoE publication type | A4 Article in a conference publication |
| Event | International Semantic Web Conference - Kobe, Japan. Duration: 17 Oct 2016 → 21 Oct 2016. Conference number: 15. https://iswc2016.semanticweb.org/ |
Publication series

| Name | CEUR Workshop Proceedings |
|---|---|
| ISSN (electronic) | 1613-0073 |
Fields of science

- 113 Computer and information sciences
- 6160 Other humanities