
Pattern Classifiers and Trainable Machines
74.97 CHF
Free shipping

Delivery time: 21 working days

  • Item no. 10362911


Description

Table of contents:

1 Introduction and Overview
  1.1 Basic Definitions
  1.2 Trainable Classifiers and Training Theory
  1.3 Assumptions and Notation
  1.4 Illustrative Training Process
  1.5 Linear Discriminant Functions
  1.6 Expanding the Feature Space
  1.7 Binary-Input Classifiers
  1.8 Weight Space Versus Feature Space
  1.9 Statistical Models
  1.10 Evaluation of Performance
2 Linearly Separable Classes
  2.1 Introduction
  2.2 Convex Sets, Summability, and Linear Separability
  2.3 Notation and Terminology
  2.4 The Perceptron and the Proportional Increment Training Procedure (see the sketch after this list)
  2.5 The Fixed Fraction Training Procedure
  2.6 A Multiclass Training Procedure
  2.7 Synthesis by Game Theory
  2.8 Simplifying Techniques
  2.9 Illustrative Example
  2.10 Gradient Descent
  2.11 Conditions for Ensuring Desired Convergence
  2.12 Gradient Descent for Designing Classifiers
  2.13 The Ho-Kashyap Procedure
3 Nonlinear Classifiers
  3.1 Introduction
  3.2 Φ-Classifiers
  3.3 Bayes Estimation: Parametric Training
  3.4 Smoothing Techniques: Nonparametric Training
  3.5 Bar Graphs
  3.6 Parzen Windows and Potential Functions
  3.7 Storage Economies
  3.8 Fixed-Base Bar Graphs
  3.9 Sample Sets and Prototypes
  3.10 Close Opposed Pairs of Prototypes
  3.11 Locally Trained Piecewise Linear Classifiers
4 Loss Functions and Stochastic Approximation
  4.1 Introduction
  4.2 A Loss Function for the Proportional Increment Procedure
  4.3 The Sample Gradient
  4.4 The Use of Prior Knowledge
  4.5 Loss Functions and Gradients of Some Important Training Procedures
  4.6 Loss Functions Compared
  4.7 Unequal Costs of Category Decisions
  4.8 Stochastic Approximation
  4.9 Gradients for Various Constituent Densities and Hyperplanes
  4.10 Conclusion
5 Linear Classifiers for Nonseparable Classes
  5.1 Modifications of Gradient Descent
  5.2 Normalization, Origin Selection, and Initial Vector
  5.3 The Window Training Procedure
  5.4 The Minimum Mean Square Error Training Procedure
  5.5 The Equalized Error Training Procedure
  5.6 Accounting for Unequal Costs
  5.7 An Application
  5.8 Summary
6 Markov Chain Training Models for Nonseparable Classes
  6.1 Introduction
  6.2 The Problem of Analyzing a Stochastic Difference Equation
  6.3 Examples of Single-Feature Classifiers
  6.4 A Single-Feature Classifier with Constant Increment Training
  6.5 Basic Properties of Learning Dynamics
  6.6 Ergodicity and Stability in the Large
  6.7 Train-Work Schedules: Two-Mode Classes
  6.8 Optimal Finite Memory Learning
  6.9 Multidimensional Feature Space
7 Continuous-State Models
  7.1 Introduction
  7.2 The Centroid Equation
  7.3 Proof that ?(n) = O(?)U for n? ? t ?
  7.4 The Covariance Equation
  7.5 Learning Curves and Variance Curves
  7.6 Normalization with Respect to t
  7.7 Illustrative Examples
  7.8 Shapes of Learning Curves in Single-Feature Classifiers
  7.9 How Close Are the Equal Error and Minimum Error Points?
  7.10 Asymptotic Stability in the Large
Appendix A Vectors and Matrices
  A.1 Vector Inequalities and Other Vector Notation
  A.2 Permutation Matrices
Appendix B Proof of Convergence for the Window Procedure
Appendix C Proof of Convergence for the Equalized Error Procedure
  C.2 Proof of Theorem 5.3
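
To give a flavor of the material, here is a minimal sketch (ours, not the book's) of perceptron training with proportional increments, the topic of Section 2.4. It assumes two linearly separable classes with labels in {-1, +1}; the function name train_perceptron, the learning rate, and the toy data are illustrative only.

    import numpy as np

    def train_perceptron(X, y, rate=1.0, max_epochs=100):
        # X: (n_samples, n_features) array; y: labels in {-1, +1}.
        # Returns an augmented weight vector w so that sign(w . [x, 1])
        # predicts the label, assuming the classes are linearly separable.
        Xa = np.hstack([X, np.ones((X.shape[0], 1))])  # append bias feature
        w = np.zeros(Xa.shape[1])
        for _ in range(max_epochs):
            errors = 0
            for xi, yi in zip(Xa, y):
                if yi * (w @ xi) <= 0:      # misclassified or on the boundary
                    w += rate * yi * xi     # increment proportional to the sample
                    errors += 1
            if errors == 0:                 # converged: every sample correct
                break
        return w

    # Toy usage: two separable point clouds in the plane.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-2.0, 0.5, (20, 2)), rng.normal(2.0, 0.5, (20, 2))])
    y = np.array([-1] * 20 + [1] * 20)
    w = train_perceptron(X, y)
    Xa = np.hstack([X, np.ones((40, 1))])
    print("training accuracy:", np.mean(np.sign(Xa @ w) == y))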

Properties

Width: 155 mm
Height: 235 mm
Pages: 336
Language: English
Authors: G. N. Wassel, J. Sklansky

