Abstract
Model selection is one of the most central tasks in supervised learning. Validation set methods are the standard way to accomplish this task: models are trained on training data, and the model with the smallest loss on the validation data is selected. However, it is generally not obvious how much validation data is required to make a reliable selection, which is essential when labeled data are scarce or expensive. We propose a bootstrap-based algorithm, bootstrap validation (BSV), that uses the bootstrap to adjust the validation set size and to find the best-performing model within a tolerance parameter specified by the user. We find that BSV works well in practice and can be used as a drop-in replacement for validation set methods or k-fold cross-validation. The main advantage of BSV is that less validation data is typically needed, so more data can be used to train the model, resulting in better approximations and efficient use of validation data.
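The abstract describes selecting the best model on a validation set and using the bootstrap to judge whether the selection is reliable within a user-specified tolerance. The following is a minimal, hypothetical sketch of that idea (not the paper's actual BSV algorithm): validation losses are resampled with replacement, and the fraction of bootstrap replicates in which each model wins estimates how stable the selection is; `tol`, `n_boot`, and the function name are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_select(val_losses, tol=0.05, n_boot=1000):
    """Sketch of bootstrap-based model selection (illustrative, not BSV itself).

    val_losses: array of shape (n_models, n_val) with per-example
    validation losses for each candidate model.
    Returns the index of the winning model and a flag indicating
    whether it won in at least (1 - tol) of the bootstrap replicates.
    """
    val_losses = np.asarray(val_losses)
    n_models, n_val = val_losses.shape
    wins = np.zeros(n_models)
    for _ in range(n_boot):
        # Resample validation examples with replacement.
        idx = rng.integers(0, n_val, size=n_val)
        wins[np.argmin(val_losses[:, idx].mean(axis=1))] += 1
    best = int(np.argmax(wins))
    reliable = wins[best] / n_boot >= 1 - tol
    # If not reliable, the validation set is too small to distinguish
    # the candidates; one would collect more validation data.
    return best, reliable
```

In this toy setup, a clearly dominant model wins every replicate, so the selection is flagged as reliable; when two models' losses overlap heavily, `reliable` becomes False, signaling that more validation data is needed.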
Original language | English |
---|---|
Journal | Statistical analysis and data mining |
Volume | 16 |
Issue | 2 |
Pages | 162-186 |
Number of pages | 25 |
ISSN | 1932-1872 |
DOIs - permanent links | |
Status | Published - Apr. 2023 |
OKM publication type | A1 Original article in a scientific journal, peer-reviewed |
Fields of science
- 113 Computer and information sciences