Changelog:
Course title and contents updated in October 2017. Old title: "Probabilistic reasoning for AI".
Preamble:
A Probabilistic Graphical Model (PGM) is a probabilistic model in which a graph expresses the dependence structure between the random variables, represented by the nodes of the graph. PGMs are commonly used in probability theory, statistics and machine learning. This course takes a deep dive into PGMs, with a bias towards applications in machine learning.
Objectives:
The course provides a comprehensive introduction to probabilistic graphical models. At the end of the course, the student should be able to model problems using graphical models, design inference algorithms, and learn the structure of a graphical model from data.
Course Contents:
Fundamentals: Fundamentals of Probability Theory - Views of Probability, Random Variables and Joint Distributions, Conditional Probability, Conditional Independence, Expectation and Variance, Probability Distributions - Conjugate Priors, Introduction to the Exponential Family; Fundamentals of Graph Theory - Paths, Cliques, Subgraphs, Cycles and Loops.
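To make the conjugate-prior topic concrete, here is a minimal sketch in plain Python (the prior parameters and observations are illustrative, not part of the course material): the Beta distribution is conjugate to the Bernoulli likelihood, so posterior updating reduces to adding observed counts to the prior pseudo-counts.

    # Beta-Bernoulli conjugacy: a Beta(alpha, beta) prior plus Bernoulli
    # observations yields a Beta posterior with the counts added in.
    def beta_bernoulli_update(alpha, beta, observations):
        """Return the Beta posterior parameters after Bernoulli observations."""
        heads = sum(observations)
        tails = len(observations) - heads
        return alpha + heads, beta + tails

    # Illustrative numbers: prior Beta(2, 2), then 7 successes and 3 failures.
    a_post, b_post = beta_bernoulli_update(2, 2, [1, 1, 1, 0, 1, 1, 0, 1, 1, 0])
    print(a_post, b_post)              # Beta(9, 5)
    print(a_post / (a_post + b_post))  # posterior mean, about 0.643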
Graphical Models: Introduction - Directed Models (Bayesian Networks), Undirected Models (Markov Random Fields), Dynamic Models (Hidden Markov Models & Kalman Filters) and Factor Graphs; Conditional Independence (Bayes Ball Theorem and D-Separation), Markov Blanket, Factorization (Hammersley-Clifford Theorem), Equivalence (I-Maps & Perfect Maps); Factor Graphs - Representation, Relation to Bayesian Networks and Markov Random Fields.
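As a small illustration of the factorization a directed model encodes, the sketch below (plain Python; the network structure and all CPT values are assumed for the example, following the classic rain/sprinkler/wet-grass network rather than anything specific to the course) writes the joint as P(R, S, W) = P(R) P(S | R) P(W | R, S) and answers a query by enumeration.

    import itertools

    # Illustrative CPTs for the rain/sprinkler/wet-grass Bayesian network.
    P_R = {True: 0.2, False: 0.8}                    # P(Rain)
    P_S = {True: {True: 0.01, False: 0.99},          # P(Sprinkler | Rain)
           False: {True: 0.40, False: 0.60}}
    P_W = {(True, True): 0.99, (True, False): 0.80,  # P(Wet | Rain, Sprinkler)
           (False, True): 0.90, (False, False): 0.01}

    def joint(r, s, w):
        # The factorization the DAG encodes: P(R) P(S | R) P(W | R, S).
        pw = P_W[(r, s)] if w else 1.0 - P_W[(r, s)]
        return P_R[r] * P_S[r][s] * pw

    # Inference by enumeration: P(Rain = True | Wet = True).
    num = sum(joint(True, s, True) for s in (True, False))
    den = sum(joint(r, s, True)
              for r, s in itertools.product((True, False), repeat=2))
    print(num / den)  # about 0.354 with these illustrative numbers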
Inference in Graphical Models: Exact Inference - Variable Elimination, Elimination Orderings, Relation to Dynamic Programming, Dealing with Evidence, Forward-Backward Algorithm, Viterbi Algorithm; Junction Tree Algorithm; Belief Propagation (Sum-Product); Approximate Inference - Variational Methods (Mean Field, Kikuchi & Bethe Approximations), Expectation Propagation, Gaussian Belief Propagation; MAP Inference - Max-Product, Graph Cuts, Linear Programming Relaxations for MAP (Tree-Reweighted Belief Propagation, MPLP); Sampling - Markov Chain Monte Carlo, Metropolis-Hastings, Gibbs Sampling (Collapsing & Blocking), Particle Filtering.
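The forward-backward algorithm above is sum-product message passing specialized to a chain; the sketch below runs it on a two-state HMM (all parameters and the observation sequence are assumed for illustration) to compute the smoothed posteriors P(z_t | x_{1:T}).

    # Forward-backward on a two-state HMM with illustrative parameters.
    init = [0.6, 0.4]                 # P(z_1)
    trans = [[0.7, 0.3], [0.4, 0.6]]  # P(z_{t+1} | z_t)
    emit = [[0.9, 0.1], [0.2, 0.8]]   # P(x_t | z_t), two observation symbols
    obs = [0, 0, 1, 0, 1]             # assumed observation sequence

    T, K = len(obs), len(init)

    # Forward pass: alpha[t][k] = P(x_{1:t}, z_t = k).
    alpha = [[init[k] * emit[k][obs[0]] for k in range(K)]]
    for t in range(1, T):
        alpha.append([emit[k][obs[t]] *
                      sum(alpha[t - 1][j] * trans[j][k] for j in range(K))
                      for k in range(K)])

    # Backward pass: beta[t][k] = P(x_{t+1:T} | z_t = k).
    beta = [[1.0] * K for _ in range(T)]
    for t in range(T - 2, -1, -1):
        beta[t] = [sum(trans[k][j] * emit[j][obs[t + 1]] * beta[t + 1][j]
                       for j in range(K)) for k in range(K)]

    # Smoothed marginals: P(z_t | x_{1:T}) is proportional to alpha * beta.
    for t in range(T):
        unnorm = [alpha[t][k] * beta[t][k] for k in range(K)]
        z = sum(unnorm)
        print([p / z for p in unnorm])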
Learning in Graphical Models: Parameter Estimation - Expectation Maximization, Maximum Likelihood Estimation, Maximum Entropy, Pseudolikelihood, Bayesian Estimation, Conditional Likelihood, Structured Prediction; Learning with Approximate Inference; Learning with Latent Variables; Structure Learning - Structure Search, L1 Priors.
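With complete data, maximum-likelihood parameter estimation for a discrete Bayesian network decomposes into normalized counts per CPT. The sketch below estimates the parameters of an assumed two-node network A -> B from a toy dataset (both the structure and the data are illustrative).

    from collections import Counter

    # Toy complete dataset of (a, b) observations for an assumed network A -> B.
    data = [(0, 0), (0, 1), (0, 0), (1, 1), (1, 1), (1, 0), (0, 0), (1, 1)]

    # MLE for P(A): relative frequency of each value of a.
    n = len(data)
    p_a = {a: c / n for a, c in Counter(a for a, _ in data).items()}

    # MLE for P(B | A): joint counts normalized within each parent value a.
    pair_counts = Counter(data)
    parent_counts = Counter(a for a, _ in data)
    p_b_given_a = {(a, b): c / parent_counts[a]
                   for (a, b), c in pair_counts.items()}

    print(p_a)          # {0: 0.5, 1: 0.5}
    print(p_b_given_a)  # {(0, 0): 0.75, (0, 1): 0.25, (1, 1): 0.75, (1, 0): 0.25}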
Text Books:
- Koller, D. and Friedman, N. (2009). Probabilistic Graphical Models: Principles and Techniques. MIT Press.
Reference Books:
- Jensen, F. V. and Nielsen, T. D. (2007). Bayesian Networks and Decision Graphs. Information Science and Statistics. Springer, 2nd edition.
- Murphy, K. P. (2012). Machine Learning: A Probabilistic Perspective. MIT Press, 4th printing.
- Barber, D. (2011). Bayesian Reasoning and Machine Learning. Cambridge University Press, 1st edition.
- Bishop, C. M. (2011). Pattern Recognition and Machine Learning. Information Science and Statistics. Springer, 2nd printing.
- Wainwright, M. and Jordan, M. (2008). Graphical Models, Exponential Families, and Variational Inference. Foundations and Trends in Machine Learning, 1:1–305.