Lecture 20: Learning Probabilistic Models
Learning Objectives¶
Learn Bayesian network parameters (maximum likelihood and Bayesian estimation)
Apply the EM algorithm when some variables are hidden
Learn HMM parameters (Baum-Welch)
Learn network structure from data
Bayesian Parameter Learning¶
Prior: P(θ) encodes beliefs about the parameters before seeing data
Posterior: P(θ|D) ∝ P(D|θ) P(θ) (Bayes' rule; P(D|θ) is the likelihood)
Predict: P(x|D) = ∫ P(x|θ) P(θ|D) dθ (average predictions over the posterior)
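The three steps above can be carried out in closed form for conjugate prior/likelihood pairs. As a minimal sketch (the coin-flip setup and function names are my own illustration, not from the lecture), consider a Bernoulli parameter θ with a Beta(a, b) prior: the posterior is Beta(a + h, b + N − h) for h heads in N flips, and the predictive P(x=1|D) is the posterior mean.

```python
# Sketch (an illustrative example, not from the lecture): Bayesian learning
# of a Bernoulli parameter theta with a conjugate Beta(a, b) prior.
# Posterior: P(theta|D) ∝ P(D|theta) P(theta) = Beta(a + h, b + N - h).
# Predictive: P(x=1|D) = ∫ theta P(theta|D) dtheta = (a + h) / (a + b + N).

def beta_bernoulli_update(a, b, data):
    """Return posterior Beta parameters after observing 0/1 data."""
    h = sum(data)                      # number of heads
    return a + h, b + len(data) - h

def predictive_prob_heads(a, b):
    """Posterior predictive P(x=1|D) is the Beta mean a/(a+b)."""
    return a / (a + b)

a, b = beta_bernoulli_update(1, 1, [1, 1, 0, 1])  # uniform Beta(1,1) prior
print(predictive_prob_heads(a, b))  # (1+3)/(2+4) ≈ 0.667
```

Note how the predictive integrates out θ rather than committing to a point estimate: with little data it stays close to the prior mean, unlike the ML estimate h/N.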
EM Algorithm¶
Hidden variables: some variables Z are never observed in the data
E-step: compute the posterior P(Z|X,θ) over the hidden variables given the current θ
M-step: θ ← argmax_θ E_{Z∼P(Z|X,θ_old)}[log P(X,Z|θ)] (expected complete-data log-likelihood)
Convergence: each iteration never decreases the likelihood, but EM may reach only a local optimum
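The E/M alternation above can be sketched on the classic two-coin problem (my choice of example; the data, m=10, and the uniform prior over coin choice are assumptions): each trial secretly uses coin A or B (the hidden Z) and records heads out of m flips, and EM learns the two biases.

```python
# Sketch (illustrative example, not from the lecture): EM for a mixture of
# two biased coins.  Hidden Z = which coin each trial used; the coin choice
# is assumed uniform.  We learn the biases theta_a and theta_b.
trials = [5, 9, 8, 4, 7]   # heads observed in each 10-flip trial
m = 10

def em_two_coins(theta_a, theta_b, iters=20):
    for _ in range(iters):
        # Expected heads/tails counts attributed to each coin
        h_a = t_a = h_b = t_b = 0.0
        for h in trials:
            # E-step: responsibility P(Z=A | trial, theta) via Bayes' rule
            la = theta_a**h * (1 - theta_a)**(m - h)
            lb = theta_b**h * (1 - theta_b)**(m - h)
            ra = la / (la + lb)
            h_a += ra * h;        t_a += ra * (m - h)
            h_b += (1 - ra) * h;  t_b += (1 - ra) * (m - h)
        # M-step: maximize the expected complete-data log-likelihood
        theta_a = h_a / (h_a + t_a)
        theta_b = h_b / (h_b + t_b)
    return theta_a, theta_b

print(em_two_coins(0.6, 0.5))  # coin A absorbs the high-heads trials
```

Because the updates depend on the starting point (here 0.6 and 0.5), different initializations can land in different local optima, which is exactly the convergence caveat above.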
EM: Mixture of Gaussians¶
Components: the data is modeled as a mixture of K Gaussians
Hidden: which component generated each data point
E-step: soft assignment of each point to the components (responsibilities)
M-step: update mixing weights, means, and covariances from the weighted points
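The E- and M-steps above can be written in a few lines of NumPy. This is a 1-D simplification I chose for brevity (so covariances reduce to variances); the synthetic data and initialization are assumptions.

```python
import numpy as np

# Sketch (1-D simplification, illustrative data): EM for a mixture of
# K Gaussians, learning mixing weights pi, means mu, and variances var.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 0.5, 100), rng.normal(3, 1.0, 100)])

K = 2
pi = np.full(K, 1 / K)
mu = np.array([-1.0, 1.0])
var = np.ones(K)

def normal_pdf(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

for _ in range(50):
    # E-step: responsibilities r[n, k] = P(Z=k | x_n, theta), a soft assignment
    dens = pi * normal_pdf(x[:, None], mu, var)      # shape (N, K)
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from the responsibility-weighted data
    nk = r.sum(axis=0)                               # effective counts
    pi = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk

print(mu)  # the component means should approach -2 and 3
```

Hard assignment of each point to its nearest mean would give k-means; the soft responsibilities are what make this EM.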
Summary¶
ML: maximize the likelihood P(D|θ)
Bayesian: maintain a posterior distribution over the parameters
EM: alternate E- and M-steps when variables are hidden
HMM: Baum-Welch (EM specialized to HMMs)