## Abstract

Mathematical models of observational phenomena are at the core of experimental sciences. By learning the parameters of such models from typically noisy observations, we can interpret and predict the phenomena under investigation. This process, however, assumes that the model itself is correct and that we are only uncertain of its parameters. In practice, this is rarely true; rather, the model is a simplification of the actual generative process. One proposed remedy is a post hoc investigation of how the model differs from reality, by explicitly modeling the discrepancy between the two. In this paper, we use transformed Gaussian processes as flexible models for this discrepancy. Our formulation relaxes the assumption on the correctness of the model by assuming it is only correct in expectation, and it directly supports both additive and multiplicative corrections, treated separately in the literature, using suitable transformations. We demonstrate the approach in two example cases: modeling human growth (the relation between age and height) and modeling risk attitude (the relation between reward and utility). The former provides a simple example, while the latter highlights the importance of the transformations in obtaining meaningful information about the discrepancy.
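The additive and multiplicative corrections mentioned above can be illustrated with a minimal sketch. This is not the paper's implementation; it is a toy example, assuming a squared-exponential kernel with fixed hyperparameters and a deliberately misspecified parametric model `f`. An additive discrepancy fits a GP to the residual `y - f(x)`, while a multiplicative discrepancy `y = f(x) * exp(delta(x))` is handled through a log transform, reducing it to a GP on `log y - log f(x)` (valid when both are positive):

```python
import numpy as np

def rbf_kernel(x1, x2, lengthscale=1.0, variance=1.0):
    # Squared-exponential covariance between two sets of 1-D inputs.
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior_mean(x, r, xs, noise_var=1e-2):
    # Standard GP regression posterior mean for residuals r at test inputs xs.
    K = rbf_kernel(x, x) + noise_var * np.eye(len(x))
    Ks = rbf_kernel(xs, x)
    return Ks @ np.linalg.solve(K, r)

# Hypothetical misspecified parametric model (kept positive for the log transform).
f = lambda x: 2.0 + 0.5 * x

rng = np.random.default_rng(0)
x = np.linspace(0.1, 5.0, 40)
# Synthetic "true" process: the parametric model plus a smooth deviation.
y = f(x) + np.sin(x) + 0.05 * rng.standard_normal(40)

xs = np.linspace(0.1, 5.0, 100)

# Additive correction: y = f(x) + delta(x), so fit a GP to y - f(x).
delta_add = gp_posterior_mean(x, y - f(x), xs)

# Multiplicative correction via a log transform: y = f(x) * exp(delta(x)),
# so fit a GP to log(y) - log(f(x)).
delta_mul = gp_posterior_mean(x, np.log(y) - np.log(f(x)), xs)
```

Here `delta_add` should approximately recover the smooth deviation `sin(x)`, and `f(xs) * exp(delta_mul)` the corrected prediction; the choice of transformation determines on which scale the discrepancy is interpretable.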

| Original language | English |
|---|---|
| Journal | Proceedings of Machine Learning Research |
| Volume | 222 |
| Pages (from-to) | 991-1006 |
| Number of pages | 16 |
| ISSN | 2640-3498 |
| Publication status | Published - 2023 |
| MoE publication type | A4 Article in conference proceedings |
| Event | 15th Asian Conference on Machine Learning, ACML 2023, Istanbul, Turkey; 11 Nov 2023 – 14 Nov 2023 |

### Bibliographical note

Publisher Copyright: © 2023 A. Nioche, V. Tanskanen, M. Hartmann & A. Klami.

## Fields of Science

- Applied machine learning
- Cognitive modeling
- Gaussian process
- Model discrepancy
- 113 Computer and information sciences