### Abstract

The normalized maximum likelihood model achieves the minimax coding (log-loss) regret for data of fixed sample size n. However, it is a batch strategy, i.e., it requires that n be known in advance. Furthermore, it is computationally infeasible for most statistical models, and several computationally feasible alternative strategies have been devised. We characterize the achievability of asymptotic minimaxity by batch strategies (i.e., strategies that depend on n) as well as online strategies (i.e., strategies independent of n). On one hand, we conjecture that for a large class of models, no online strategy can be asymptotically minimax. We prove that this holds under a slightly stronger definition of asymptotic minimaxity. Our numerical experiments support the conjecture about non-achievability by so-called last-step minimax algorithms, which are independent of n. On the other hand, we show that in the multinomial model, a Bayes mixture defined by the conjugate Dirichlet prior with a simple dependency on n achieves asymptotic minimaxity for all sequences, thus providing a simpler asymptotic minimax strategy compared to earlier work by Xie and Barron. The numerical results also demonstrate superior finite-sample behavior by a number of novel batch and online algorithms.

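The quantities compared in the abstract can be illustrated numerically for the simplest multinomial case, the Bernoulli model. The sketch below is not taken from the paper: it computes the exact NML (minimax) regret for sample size n, the worst-case regret of a fixed Jeffreys-prior (Dirichlet(1/2, 1/2)) Bayes mixture, which stands in for the n-dependent Dirichlet mixtures the paper actually studies, and the regret of a horizon-independent last-step minimax predictor. All function names are our own.

```python
import math


def log_ml(k, n):
    """Log maximized Bernoulli likelihood of a length-n binary sequence with k ones."""
    ll = 0.0
    if k > 0:
        ll += k * math.log(k / n)
    if k < n:
        ll += (n - k) * math.log((n - k) / n)
    return ll


def nml_regret(n):
    """Minimax (NML) regret: log of the sum of maximized likelihoods over all
    2^n sequences, grouped by the number of ones k."""
    return math.log(sum(math.comb(n, k) * math.exp(log_ml(k, n))
                        for k in range(n + 1)))


def log_beta(a, b):
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)


def jeffreys_worst_regret(n):
    """Worst-case regret of the Bayes mixture under the fixed Jeffreys
    (Dirichlet(1/2, 1/2)) prior; the maximum is attained near the boundary
    counts k = 0 and k = n."""
    return max(log_ml(k, n)
               - (log_beta(k + 0.5, n - k + 0.5) - log_beta(0.5, 0.5))
               for k in range(n + 1))


def last_step_minimax_regret(x):
    """Regret of a last-step minimax predictor on the binary sequence x.
    At each step it predicts q(a | x^t) proportional to the maximized
    likelihood of the extended sequence (x^t, a); no horizon n is used."""
    log_loss, k = 0.0, 0
    for t, bit in enumerate(x):
        w1 = math.exp(log_ml(k + 1, t + 1))  # weight for next symbol = 1
        w0 = math.exp(log_ml(k, t + 1))      # weight for next symbol = 0
        p1 = w1 / (w0 + w1)
        log_loss -= math.log(p1 if bit else 1.0 - p1)
        k += bit
    # regret relative to the maximum-likelihood code length for x
    return log_loss + log_ml(k, len(x))


if __name__ == "__main__":
    n = 100
    print("NML regret:", nml_regret(n))
    print("Jeffreys mixture, worst case:", jeffreys_worst_regret(n))
    print("Last-step minimax on all-ones:", last_step_minimax_regret([1] * n))
```

Since NML is exactly minimax for each fixed n, the fixed Jeffreys mixture's worst-case regret is necessarily larger; the NML regret itself is close to its asymptotic value (1/2) log(nπ/2).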
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the 5th Asian Conference on Machine Learning |
| Editors | Cheng Soon Ong, Tu Bao Ho |
| Publication date | Nov 2013 |
| Pages | 181-193 |
| Publication status | Published - Nov 2013 |
| MoE publication type | A4 Article in conference proceedings |
| Event | Asian Conference on Machine Learning, Canberra, Australia |

### Publication series

| Name | Journal of Machine Learning Research: Workshop and Conference Proceedings |
|---|---|
| Volume | 29 |
| ISSN (Print) | 1938-7228 |

### Fields of Science

- 113 Computer and information sciences

### Cite this

Watanabe, K., Roos, T., & Myllymäki, P. (2013). Achievability of asymptotic minimax regret in online and batch prediction. In C. S. Ong & T. B. Ho (Eds.), *Proceedings of the 5th Asian Conference on Machine Learning* (pp. 181-193). (Journal of Machine Learning Research: Workshop and Conference Proceedings; Vol. 29).

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review