Abstract
The growth of web-accessible dictionary and term data has led to a proliferation
of platforms distributing the same lexical resources in different combinations
and packagings. Finding the right word or translation is like finding a needle in
a haystack. The quantity of the data is undercut by the redundancy and doubtful quality of the resources. In this paper, we develop ways to assess the quality of multilingual lexical web and linked data resources by internal consistency. Concretely, we deconstruct Princeton WordNet [1] into its component word senses or word labels, with the properties they have or inherit from their synsets, and see to what extent these properties allow reconstructing the synsets they came from. The methods developed should then be applicable to the aggregation of term data coming from different term sources: to find which entries coming from different sources could be similarly pooled together, to cut redundancy and improve coverage and reliability. The multilingual dictionary BabelNet [2] can be used for evaluation. We restrict our current research to dictionary data and to improving language models, rather than introducing external sources.
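The deconstruct-then-reconstruct idea in the abstract can be illustrated with a toy sketch. This is not the paper's actual algorithm or data; the miniature synset fragment, the choice of the gloss as the sole inherited property, and all function names are assumptions made for illustration. The sketch flattens synsets into independent sense records, re-pools senses that share an inherited property, and scores how many original synsets are recovered exactly.

```python
# Toy sketch (illustrative assumption, not the paper's method): deconstruct
# synsets into per-sense records, reconstruct synsets by grouping senses on a
# shared inherited property (here: the gloss), then score the reconstruction.
from collections import defaultdict

# Hypothetical miniature WordNet-like fragment: synset id -> (gloss, lemmas).
SYNSETS = {
    "dog.n.01": ("a domesticated canid", ["dog", "domestic_dog"]),
    "frump.n.01": ("a dull unattractive unpleasant person", ["frump", "dog"]),
    "cat.n.01": ("feline mammal", ["cat", "true_cat"]),
}

def deconstruct(synsets):
    """Flatten synsets into independent sense records, each carrying the
    properties it inherits from its synset (here only the gloss)."""
    senses = []
    for sid, (gloss, lemmas) in synsets.items():
        for lemma in lemmas:
            senses.append({"lemma": lemma, "gloss": gloss, "source": sid})
    return senses

def reconstruct(senses):
    """Re-pool senses that share the same inherited property (the gloss)."""
    pools = defaultdict(set)
    for sense in senses:
        pools[sense["gloss"]].add(sense["lemma"])
    return pools

def score(synsets, pools):
    """Fraction of original synsets whose membership is recovered exactly."""
    recovered = sum(
        1 for gloss, lemmas in synsets.values()
        if pools.get(gloss) == set(lemmas)
    )
    return recovered / len(synsets)

senses = deconstruct(SYNSETS)
pools = reconstruct(senses)
print(score(SYNSETS, pools))  # -> 1.0 on this toy fragment
```

On real data the interesting cases are the failures: senses whose properties place them in a different pool than their source synset, which is where the internal-consistency signal for quality assessment comes from.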
Original language | English |
---|---|
Title of host publication | 15th International Semantic Web Conference (ISWC 2016): the 11th International Workshop on Ontology Matching |
Editors | Pavel Shvaiko, Jérôme Euzenat, Ernesto Jiménez-Ruiz, Michelle Cheatham, Oktie Hassanzadeh, Ryutaro Ichise |
Number of pages | 2 |
Volume | 1766 |
Publication date | 22 Oct 2016 |
Pages | 241-242 |
Publication status | Published - 22 Oct 2016 |
MoE publication type | A4 Article in conference proceedings |
Event | 15th International Semantic Web Conference, Kobe, Japan, 17 Oct 2016 – 21 Oct 2016 |
Publication series
Name | CEUR workshop proceedings |
---|---|
ISSN (Electronic) | 1613-0073 |
Fields of Science
- 113 Computer and information sciences
- Information extraction
- Linked data
- Edit distance
- 6160 Other humanities
- Quality checking
- Terminology
- Aggregation