Robert E. Schapire
ISBN: 0262193256, 9780262193252, 9780585312705
Approaches to building machines that can learn from experience abound, ranging from connectionist learning algorithms and genetic algorithms to statistical mechanics and a learning system based on Piaget’s theories of early childhood development. This monograph presents results derived from the mathematically oriented framework of computational learning theory. Focusing on the design of efficient learning algorithms and their performance, it develops a sound theoretical foundation for studying and understanding machine learning. Since many of the results concern the fundamental problem of learning a concept from examples, Schapire begins with a brief introduction to the Valiant model, which has generated much of the research on this problem. Four self-contained chapters then consider different aspects of machine learning. Their contributions include a general technique for dramatically improving the error rate of a “weak” learning algorithm, which can also be used to improve the space efficiency of many known learning algorithms; a detailed exploration of a powerful statistical method for efficiently inferring the structure of certain kinds of Boolean formulas from random examples of the formula’s input-output behavior; the extension of a standard model of concept learning to accommodate concepts that exhibit uncertain or probabilistic behavior, together with a variety of tools and techniques for designing efficient learning algorithms in such a probabilistic setting; and a description of algorithms that a robot can use to infer the “structure” of its environment through experimentation.

Robert E. Schapire received his doctorate from the Massachusetts Institute of Technology. He is now a member of the Artificial Intelligence Principles Research Department at AT&T Bell Laboratories.
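The first of these results is the boosting theorem: any algorithm whose hypotheses do even slightly better than random guessing can be converted into one whose error is arbitrarily small. The sketch below is a minimal illustration of that idea, assuming a toy one-dimensional data set, a decision-stump weak learner, and the later reweight-and-vote style of boosting; it is not the specific construction given in the monograph.

```python
# A minimal, illustrative sketch of the boosting idea, assuming a toy
# one-dimensional data set and a decision-stump weak learner; this is the
# familiar reweight-and-vote scheme, not the monograph's own construction.
import numpy as np

def train_stump(X, y, w):
    """Weak learner: pick the threshold/sign stump with lowest weighted error."""
    best = None
    for thresh in np.unique(X):
        for sign in (1, -1):
            pred = np.where(X >= thresh, sign, -sign)
            err = w[pred != y].sum()
            if best is None or err < best[0]:
                best = (err, thresh, sign)
    return best  # (weighted error, threshold, sign)

def boost(X, y, rounds=30):
    """Run the weak learner on reweighted examples and keep a weighted vote."""
    w = np.full(len(X), 1.0 / len(X))          # start from the uniform distribution
    ensemble = []
    for _ in range(rounds):
        err, thresh, sign = train_stump(X, y, w)
        err = min(max(err, 1e-12), 1 - 1e-12)  # guard the log below
        alpha = 0.5 * np.log((1 - err) / err)  # more accurate stumps get more say
        pred = np.where(X >= thresh, sign, -sign)
        w *= np.exp(-alpha * y * pred)         # upweight the examples this stump missed
        w /= w.sum()
        ensemble.append((alpha, thresh, sign))
    return ensemble

def predict(ensemble, X):
    """Weighted-majority vote of all the weak hypotheses."""
    votes = sum(a * np.where(X >= t, s, -s) for a, t, s in ensemble)
    return np.where(votes >= 0, 1, -1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, 200)
    y = np.where(np.abs(X) > 0.5, 1, -1)       # no single stump can fit this target
    ensemble = boost(X, y)
    print("training error:", float(np.mean(predict(ensemble, X) != y)))
```

Each round reweights the sample toward the examples the previous stump got wrong, so the combined weighted-majority vote corrects mistakes that no single weak hypothesis could avoid on its own.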