Syllabus (and Slides)

8/27 intro, HMM review (readings: ch 1)

8/29 writing; HMM review 2

Unit 1: INFERENCE

9/3 decoding 1 (readings: ch 2.1, 2.2)

9/5 decoding 2 (readings: ch 2.3)

9/10 constituent parsing

9/12 dependency parsing

9/17 probability distributions

9/19 soft inference (readings: ch 5.1, 5.2)

9/24 soft inference (cont'd)

9/26 minimum Bayes risk decoding

10/1 approximate inference: local search

10/3 approximate inference: Markov chain Monte Carlo
Optional reading: David MacKay. Introduction to Monte Carlo Methods.

10/8 Lagrangian relaxation
Optional reading: Rush and Collins (2012). A Tutorial on Dual Decomposition and Lagrangian Relaxation for Inference in Natural Language Processing. JAIR.

10/10 interlude: experimentation (readings: appendix B) 

10/15 supervised learning basics (readings: ch 3.3)

10/17 no class

Unit 2: SUPERVISED LEARNING

10/22 supervised learning basics (cont'd) (readings: ch 3.3)

10/24 generative models

10/29 conditional models (readings: ch 3.4, 3.5)

10/31 large-margin training

Unit 3: UNSUPERVISED LEARNING

11/5 learning from incomplete data

11/7 [peer paper reading workshop]

11/12 [guest lecture: planning] 

11/14 expectation-maximization (readings: ch 4.1); code! (a minimal sketch appears after the schedule)

11/19 expectation-maximization (cont'd) 

11/21 guest lecture: gene finding

11/26 unsupervised learning with features (readings: ch 4.2)

11/28 no class

12/3 Bayesian inference

12/5 Bayesian models: examples
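
A note on the 11/14 "code!" item: below is a minimal sketch of EM for the classic two-coin mixture problem, to give a flavor of what that lecture covers. The data, the initial parameters, and the fixed uniform choice of coin are illustrative assumptions, not course materials.

    # Minimal EM sketch for a two-coin mixture (illustrative only).
    # Each trial reports the number of heads in `flips` tosses of one of
    # two coins with unknown biases; which coin was tossed is hidden.
    # The coin choice is assumed uniform and held fixed, so only the
    # two biases are re-estimated.

    flips = 10
    heads = [5, 9, 8, 4, 7]   # hypothetical head counts per trial
    theta = [0.6, 0.5]        # initial guesses for the two coin biases

    for _ in range(50):
        # E-step: posterior responsibility of each coin for each trial,
        # accumulated as expected [heads, tails] counts per coin
        counts = [[0.0, 0.0], [0.0, 0.0]]
        for h in heads:
            like = [t ** h * (1 - t) ** (flips - h) for t in theta]
            z = sum(like)
            for i, l in enumerate(like):
                counts[i][0] += (l / z) * h
                counts[i][1] += (l / z) * (flips - h)
        # M-step: re-estimate each bias from its expected counts
        theta = [c[0] / (c[0] + c[1]) for c in counts]

    print(theta)   # the biases separate, roughly [0.80, 0.52]
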


The wiki used last time this course was offered can be found here.

How to use LaTeX on Google Sites.

Book: Linguistic Structured Prediction (LSP)