Lecture 19: Learning from Examples
Learning Objectives
Define supervised, unsupervised, and reinforcement learning
Build decision trees with ID3
Apply linear regression and logistic regression
Use k-NN, SVMs, and ensemble methods
Understand model selection and regularization
Forms of Learning
Supervised: learn from labeled (x, y) pairs
Unsupervised: find structure in unlabeled inputs x
Reinforcement: learn from rewards received from the environment
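The three settings differ in what the data provides; a minimal illustration of the data shapes (the values are toy examples, not from the text):

```python
# Supervised: labeled (x, y) pairs.
supervised = [((5.1, 3.5), "setosa"), ((6.2, 2.9), "virginica")]

# Unsupervised: inputs x only, no labels.
unsupervised = [(5.1, 3.5), (6.2, 2.9)]

# Reinforcement: the agent observes (state, action, reward, next_state)
# transitions from the environment rather than a fixed labeled dataset.
reinforcement_transition = ("s0", "a", -1.0, "s1")
```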
Information Gain
Entropy: H(S) = −Σᵢ pᵢ log₂ pᵢ
Gain(S, A): H(S) − Σᵥ (|Sᵥ|/|S|) H(Sᵥ)
Choose: the attribute with maximum gain
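The two formulas above can be sketched directly; this is a minimal version of the ID3 splitting criterion, and the tiny weather-style dataset and attribute names are illustrative assumptions:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """H(S) = -sum_i p_i log2 p_i over the class proportions in S."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(examples, attr, label="label"):
    """Gain(S, A) = H(S) - sum_v |S_v|/|S| * H(S_v)."""
    labels = [e[label] for e in examples]
    n = len(examples)
    remainder = 0.0
    for v in {e[attr] for e in examples}:        # each value of attribute A
        subset = [e[label] for e in examples if e[attr] == v]
        remainder += len(subset) / n * entropy(subset)
    return entropy(labels) - remainder

examples = [
    {"outlook": "sunny", "windy": False, "label": "no"},
    {"outlook": "sunny", "windy": True, "label": "no"},
    {"outlook": "rain", "windy": False, "label": "yes"},
    {"outlook": "rain", "windy": True, "label": "no"},
    {"outlook": "overcast", "windy": False, "label": "yes"},
]
# ID3's choice: the attribute with maximum gain.
best = max(["outlook", "windy"], key=lambda a: information_gain(examples, a))
```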
k-Nearest Neighbors
Nonparametric: store the training data; no model is fit in advance
Predict: majority vote of the k nearest neighbors
Distance: Euclidean (or another metric, e.g. Manhattan)
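A minimal sketch of k-NN with Euclidean distance and majority vote; the two-cluster toy points are illustrative:

```python
from collections import Counter
from math import dist  # Euclidean distance (Python 3.8+)

def knn_predict(train, query, k=3):
    """Majority vote among the k training points nearest to `query`."""
    nearest = sorted(train, key=lambda point: dist(point[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# "Training" is just storing the data (nonparametric).
train = [((0, 0), "A"), ((1, 0), "A"), ((0, 1), "A"),
         ((5, 5), "B"), ((6, 5), "B"), ((5, 6), "B")]
label = knn_predict(train, (0.5, 0.5), k=3)
```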
Support Vector Machines
Max margin: separate the classes with the maximum-margin hyperplane
Kernel trick: compute inner products in an implicit high-dimensional feature space
Soft margin: allow some misclassification via slack variables
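The kernel trick can be illustrated concretely: the quadratic polynomial kernel k(x, z) = (x · z)² equals an inner product in a higher-dimensional feature space, computed without ever constructing that space. A minimal sketch for 2-D inputs (the feature map φ shown is the standard one for this kernel):

```python
from math import sqrt

def kernel(x, z):
    """k(x, z) = (x . z)^2, computed directly in the input space."""
    return sum(a * b for a, b in zip(x, z)) ** 2

def phi(x):
    """Explicit quadratic feature map for 2-D x: (x1^2, sqrt(2) x1 x2, x2^2)."""
    x1, x2 = x
    return (x1 * x1, sqrt(2) * x1 * x2, x2 * x2)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

x, z = (1.0, 2.0), (3.0, 0.5)
# kernel(x, z) and dot(phi(x), phi(z)) agree, but kernel() never
# builds the 3-D feature vectors -- that is the trick.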
Ensemble Learning
Bagging: train on bootstrap samples, aggregate by voting or averaging
Random forest: bagging plus random feature subsets at each split
Boosting: sequentially reweight misclassified examples (e.g. AdaBoost)
Stacking: a meta-learner combines the base models' predictions
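Bagging, the first entry above, can be sketched in a few lines: resample the training set with replacement (bootstrap), fit a weak learner on each resample, and aggregate by majority vote. The base learner here is a one-dimensional threshold stump, chosen for brevity; the dataset is a toy assumption:

```python
import random
from collections import Counter

def train_stump(data):
    """Return the threshold t minimizing training error for the rule
    'predict 1 iff x > t' on (x, label) pairs."""
    return min((x for x, _ in data),
               key=lambda t: sum((x > t) != bool(y) for x, y in data))

def bagged_predict(stumps, x):
    """Aggregate: majority vote over the stumps' predictions."""
    votes = Counter(int(x > t) for t in stumps)
    return votes.most_common(1)[0][0]

random.seed(0)
data = [(x / 10, int(x >= 5)) for x in range(10)]   # labels flip at 0.5
stumps = [train_stump(random.choices(data, k=len(data)))  # bootstrap sample
          for _ in range(25)]
```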
Summary
Decision trees: ID3, information gain
Linear models: linear and logistic regression
k-NN: nonparametric; SVM: max margin with kernels
Ensembles: bagging, boosting
References
Russell & Norvig, AIMA 4e, Ch. 19
Chapter PDF: chapters/chapter-19.pdf
aima-python: learning.ipynb