To compare learning algorithms that differ by the adopted statistical paradigm, model class, or search heuristic, it is common to evaluate their performance on training data of varying size. Measuring the performance is straightforward if the data are generated from a known model, the ground truth. However, when the study concerns real-world data, the current methodology is limited to estimating predictive performance, typically by cross-validation. This work introduces a method to compare algorithms’ ability to learn the model structure when no ground truth is given. The idea is to identify a partial structure on which the algorithms agree, and to measure performance relative to that structure on subsamples of the data. The method is instantiated for structure learning in Bayesian networks, with performance measured by the structural Hamming distance. It is tested using benchmark ground truth networks and algorithms that maximize various scoring functions. The results show that the method can produce evaluation outcomes close to those one would obtain if the ground truth were available.
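The abstract measures performance by the structural Hamming distance (SHD) between network structures. As a minimal sketch, assuming graphs are represented as sets of directed edges `(parent, child)`, SHD can be computed by counting edge additions, deletions, and reversals (here a reversal counts as one operation, a common convention):

```python
def shd(edges_a, edges_b):
    """Structural Hamming distance between two directed graphs.

    edges_a, edges_b: iterables of (parent, child) pairs.
    A reversed edge counts as a single operation.
    """
    a, b = set(edges_a), set(edges_b)
    # Edges present in a but reversed in b (counted once, not as add+delete).
    reversals = {(v, u) for (u, v) in a - b} & (b - a)
    # Symmetric difference counts each reversal twice; correct for that.
    return len(a ^ b) - len(reversals)

# Example: graphs differ by one edge reversal only.
g1 = {(1, 2), (2, 3)}
g2 = {(2, 1), (2, 3)}
print(shd(g1, g2))  # 1
```

In an evaluation like the one described, such a distance would be computed between each algorithm's output on a subsample and the agreed-upon partial structure.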
|Journal||JMLR Workshop and Conference Proceedings|
|Status||Published - 2018|
|OKM publication type||A4 Article in conference proceedings|
|Event||International Conference on Artificial Intelligence and Statistics - Playa Blanca, Lanzarote, Canary Islands, Spain|
Duration: 9 April 2018 → 11 April 2018
- 113 Computer and information sciences