Achievability of Asymptotic Minimax Regret in Online and Batch Prediction

Kazuho Watanabe, Teemu Roos, Petri Myllymäki

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-reviewed

Abstract

The normalized maximum likelihood model achieves the minimax coding (log-loss) regret for data of fixed sample size n. However, it is a batch strategy, i.e., it requires that n be known in advance. Furthermore, it is computationally infeasible for most statistical models, and several computationally feasible alternative strategies have been devised. We characterize the achievability of asymptotic minimaxity by batch strategies (i.e., strategies that depend on n) as well as online strategies (i.e., strategies independent of n). On one hand, we conjecture that for a large class of models, no online strategy can be asymptotically minimax. We prove that this holds under a slightly stronger definition of asymptotic minimaxity. Our numerical experiments support the conjecture about non-achievability by so-called last-step minimax algorithms, which are independent of n. On the other hand, we show that in the multinomial model, a Bayes mixture defined by the conjugate Dirichlet prior with a simple dependency on n achieves asymptotic minimaxity for all sequences, thus providing a simpler asymptotic minimax strategy compared to earlier work by Xie and Barron. The numerical results also demonstrate superior finite-sample behavior by a number of novel batch and online algorithms.
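As a concrete illustration of the quantities the abstract contrasts (not taken from the paper), the following Python sketch computes the exact NML regret, log C_n, for the Bernoulli model, and compares it with the regret of a Bayes mixture under a symmetric Beta(a, a) prior on a worst-case all-zeros sequence. The fixed choice a = 1/2 (the Jeffreys prior studied by Xie and Barron) and all function names are illustrative assumptions; the paper's n-dependent Dirichlet prior is not reproduced here.

    import math

    def log_nml_normalizer(n):
        # log of the Shtarkov sum C_n = sum_k C(n, k) (k/n)^k ((n-k)/n)^(n-k);
        # log C_n is the (sequence-independent) regret of the NML code.
        terms = []
        for k in range(n + 1):
            log_binom = math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            log_ml = (k * math.log(k / n) if k > 0 else 0.0) \
                   + ((n - k) * math.log((n - k) / n) if k < n else 0.0)
            terms.append(log_binom + log_ml)
        m = max(terms)  # log-sum-exp for numerical stability
        return m + math.log(sum(math.exp(t - m) for t in terms))

    def beta_mixture_regret_all_zeros(n, a):
        # Regret of the Beta(a, a) Bayes mixture on the all-zeros sequence:
        # the maximized likelihood of 0^n is 1, and the mixture assigns it
        # m(0^n) = B(a + n, a) / B(a, a), so the regret is -log m(0^n).
        log_m = (math.lgamma(a + n) + math.lgamma(a) - math.lgamma(2 * a + n)) \
              - (2 * math.lgamma(a) - math.lgamma(2 * a))
        return -log_m

    for n in (10, 100, 1000):
        print(n, log_nml_normalizer(n), beta_mixture_regret_all_zeros(n, 0.5))

Running this shows the gap between the Jeffreys mixture's boundary regret and the NML regret settling to a small positive constant as n grows, consistent with Xie and Barron's observation that a fixed conjugate prior is not asymptotically minimax over all sequences.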
Original language: English
Title of host publication: Proceedings of the 5th Asian Conference on Machine Learning
Editors: Cheng Soon Ong, Tu Bao Ho
Publication date: Nov 2013
Pages: 181-193
Publication status: Published - Nov 2013
MoE publication type: A4 Article in conference proceedings
Event: Asian Conference on Machine Learning - Canberra, Australia

Publication series

Name: Journal of Machine Learning Research: Workshop and Conference Proceedings
Volume: 29
ISSN (Print): 1938-7228

Fields of Science

  • 113 Computer and information sciences

Cite this

Watanabe, K., Roos, T., & Myllymäki, P. (2013). Achievability of Asymptotic Minimax Regret in Online and Batch Prediction. In C. S. Ong & T. B. Ho (Eds.), Proceedings of the 5th Asian Conference on Machine Learning (pp. 181-193). (Journal of Machine Learning Research: Workshop and Conference Proceedings; Vol. 29).
@inproceedings{watanabe2013achievability,
title = "Achievability of Asymptotic Minimax Regret in Online and Batch Prediction",
author = "Kazuho Watanabe and Teemu Roos and Petri Myllym{\"a}ki",
editor = "Ong, {Cheng Soon} and Ho, {Tu Bao}",
booktitle = "Proceedings of the 5th Asian Conference on Machine Learning",
series = "Journal of Machine Learning Research: Workshop and Conference Proceedings",
volume = "29",
pages = "181--193",
year = "2013",
month = nov,
language = "English",
keywords = "113 Computer and information sciences",
}
