Combining pattern classifiers : methods and algorithms / Ludmila I. Kuncheva

Bibliographic Details
Author / Creator: Kuncheva, Ludmila I. (Ludmila Ilieva), 1959-
Imprint: Hoboken, NJ : J. Wiley, 2004.
Description: xx, 350 p. : ill. ; 24 cm.
Language: English
Format: Print Book
URL for this record: http://pi.lib.uchicago.edu/1001/cat/bib/5541514
ISBN: 0471210781 (cloth)
Notes: "A Wiley-Interscience publication."
Includes bibliographical references (p. 329-345) and index.
Table of Contents:
  • Preface
  • Acknowledgments
  • Notations and Acronyms
  • 1. Fundamentals of Pattern Recognition
  • 1.1. Basic Concepts: Class, Feature, Data Set
  • 1.2. Classifier, Discriminant Functions, Classification Regions
  • 1.3. Classification Error and Classification Accuracy
  • 1.4. Experimental Comparison of Classifiers
  • 1.5. Bayes Decision Theory
  • 1.6. A Taxonomy of Classifier Design Methods
  • 1.7. Clustering
  • Appendix
  • 2. Base Classifiers
  • 2.1. Linear and Quadratic Classifiers
  • 2.2. Nonparametric Classifiers
  • 2.3. The k-nearest Neighbor Rule
  • 2.4. Tree Classifiers
  • 2.5. Neural Networks
  • Appendix
  • 3. Multiple Classifier Systems
  • 3.1. Philosophy
  • 3.2. Terminologies and Taxonomies
  • 3.3. To Train or Not to Train?
  • 3.4. Remarks
  • 4. Fusion of Label Outputs
  • 4.1. Types of Classifier Outputs
  • 4.2. Majority Vote
  • 4.3. Weighted Majority Vote
  • 4.4. "Naïve"-Bayes Combination
  • 4.5. Multinomial Methods
  • 4.6. Probabilistic Approximation
  • 4.7. SVD Combination
  • 4.8. Conclusions
  • Appendix
  • 5. Fusion of Continuous-Valued Outputs
  • 5.1. How Do We Get Probability Outputs?
  • 5.2. Class-Conscious Combiners
  • 5.3. Class-Indifferent Combiners
  • 5.4. Where Do the Simple Combiners Come From?
  • Appendix
  • 6. Classifier Selection
  • 6.1. Preliminaries
  • 6.2. Why Classifier Selection Works
  • 6.3. Estimating Local Competence Dynamically
  • 6.4. Pre-estimation of the Competence Regions
  • 6.5. Selection or Fusion?
  • 6.6. Base Classifiers and Mixture of Experts
  • 7. Bagging and Boosting
  • 7.1. Bagging
  • 7.2. Boosting
  • 7.3. Bias-Variance Decomposition
  • 7.4. Which is Better: Bagging or Boosting?
  • Appendix
  • 8. Miscellanea
  • 8.1. Feature Selection
  • 8.2. Error Correcting Output Codes (ECOC)
  • 8.3. Combining Clustering Results
  • Appendix
  • 9. Theoretical Views and Results
  • 9.1. Equivalence of Simple Combination Rules
  • 9.2. Added Error for the Mean Combination Rule
  • 9.3. Added Error for the Weighted Mean Combination
  • 9.4. Ensemble Error for Normal and Uniform Distributions
  • 10. Diversity in Classifier Ensembles
  • 10.1. What is Diversity?
  • 10.2. Measuring Diversity in Classifier Ensembles
  • 10.3. Relationship Between Diversity and Accuracy
  • 10.4. Using Diversity
  • 10.5. Conclusions: Diversity of Diversity
  • Appendix A. Equivalence Between the Averaged Disagreement Measure D_av and the Kohavi-Wolpert Variance KW
  • Appendix B. Matlab Code for Some Overproduce and Select Algorithms
  • References
  • Index