Olivier Catoni (auth.), Jean Picard (eds.). ISBN: 3540225722, 9783540225720
Statistical learning theory is aimed at analyzing complex data with necessarily approximate models. This book is intended for an audience with a graduate background in probability theory and statistics. It will be useful to any reader wondering why it may be a good idea to use, as is often done in practice, a notoriously “wrong” (i.e. over-simplified) model to predict, estimate or classify. This point of view takes its roots in three fields: information theory, statistical mechanics, and PAC-Bayesian theorems. Results on the large deviations of trajectories of Markov chains with rare transitions are also included. They are meant to provide a better understanding of the stochastic optimization algorithms commonly used in computing estimators. The author focuses on non-asymptotic bounds on the statistical risk, allowing one to choose adaptively between rich and structured families of models and the corresponding estimators. Two mathematical objects pervade the book: entropy and Gibbs measures. The goal is to show how to turn them into versatile and efficient technical tools that will stimulate further studies and results.
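As a rough sketch of the second of these objects (the notation β, π and r_n is introduced here for illustration and is not quoted from the book), a Gibbs measure built on the parameter space re-weights a prior π by the exponentiated empirical risk r_n at inverse temperature β:

\[
  \rho_\beta(d\theta) \;=\; \frac{\exp\bigl(-\beta\, r_n(\theta)\bigr)\,\pi(d\theta)}{\int \exp\bigl(-\beta\, r_n(\theta')\bigr)\,\pi(d\theta')} .
\]

Entropy enters through the Kullback–Leibler divergence between such a posterior ρ and the prior π, which quantifies the complexity of a randomized estimator and is the quantity appearing in PAC-Bayesian risk bounds.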
Table of contents :
Introduction….Pages 1-4
1. Universal lossless data compression….Pages 5-54
2. Links between data compression and statistical estimation….Pages 55-69
3. Non cumulated mean risk….Pages 71-95
4. Gibbs estimators….Pages 97-154
5. Randomized estimators and empirical complexity….Pages 155-197
6. Deviation inequalities….Pages 199-222
7. Markov chains with exponential transitions….Pages 223-260
References….Pages 261-265
Index….Pages 267-269
List of participants and List of short lectures….Pages 271-273