Abstract
The growth of Web-accessible dictionaries and term data has led to a proliferation of platforms distributing the same lexical resources in different combinations and packagings. Finding the right word or translation is like finding a needle in a haystack: the sheer quantity of the data is undercut by the doubtful quality of the resources. Our aim is to cut down the quantity and raise the quality by matching and aggregating entries within and across dictionaries. In this exploratory paper, our goal is to see how far we can get using information extracted from the dictionaries themselves. Our hypothesis is that the more limited quantity of data in dictionaries is compensated by their richer structure and more concentrated information content. We hope to take advantage of this structure by basing quality criteria and measures on linguistic and terminological considerations. The plan of campaign is to derive quality criteria for recognizing well-constructed dictionary entries from a model dictionary, and then to convert these criteria into language-independent frequency-based measures. As the model dictionary we use the Princeton WordNet; the measures derived from it are tested against data extracted from BabelNet.
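The abstract only names the matching step; as a hedged illustration (not the authors' implementation), the sketch below pairs entries across two resources by normalized edit distance over their glosses, echoing the "Edit distance" keyword listed under Fields of Science. The function names, sample data, and the 0.8 threshold are all hypothetical.

```python
# Minimal sketch of cross-dictionary entry matching by edit distance.
# Everything here (data layout, threshold, function names) is an
# illustrative assumption, not the paper's actual method.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def similarity(a: str, b: str) -> float:
    """Edit distance normalized to [0, 1]; 1.0 means identical strings."""
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))

SIM_THRESHOLD = 0.8  # hypothetical cut-off for treating two entries as one

def match_entries(dict_a, dict_b):
    """Pair entries whose lemmas agree and whose glosses are near-duplicates."""
    return [(ea, eb)
            for ea in dict_a for eb in dict_b
            if ea["lemma"] == eb["lemma"]
            and similarity(ea["gloss"], eb["gloss"]) >= SIM_THRESHOLD]

if __name__ == "__main__":
    wordnet_like = [{"lemma": "bank",
                     "gloss": "a financial institution that accepts deposits"}]
    babelnet_like = [{"lemma": "bank",
                      "gloss": "a financial institution accepting deposits"}]
    print(match_entries(wordnet_like, babelnet_like))
```

Aggregating matched pairs into a single entry, and scoring the result against frequency-based quality measures, would build on top of a matcher of this kind.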
| Original language | English |
| --- | --- |
| Title of host publication | 15th International Semantic Web Conference (ISWC 2016): the Fourth International Workshop on Linked Data for Information Extraction (LD4IE 2016) |
| Editors | Anna Lisa Gentile, Claudia d'Amato, Ziqi Zhang, Heiko Paulheim |
| Number of pages | 12 |
| Volume | 1699 |
| Place of Publication | Kobe |
| Publisher | CEUR-WS.org |
| Publication date | 4 Oct 2016 |
| Pages | 51-62 |
| Publication status | Published - 4 Oct 2016 |
| MoE publication type | A4 Article in conference proceedings |
| Event | International Semantic Web Conference (no. 15), Kobe, Japan, 17-21 Oct 2016, https://iswc2016.semanticweb.org/ |
Publication series
| Name | CEUR Workshop Proceedings |
| --- | --- |
| ISSN (Electronic) | 1613-0073 |
Fields of Science
- 113 Computer and information sciences
  - Information extraction
  - Linked data
  - Edit distance
- 6160 Other humanities
  - Quality checking
  - Terminology
  - Aggregation