CS B553: Probabilistic Approaches to Artificial Intelligence**
Spring 2013

Instructor: Prof. David Crandall
Tuesday/Thursday, 11:15-12:30
122 Informatics East

(**This course is officially listed in the University Registrar's system as
"Neural and Genetic Approaches to Artificial Intelligence," but that title is
not an accurate description of the course content this semester.)

Overview and objectives:

Uncertainty is a fact of everyday life, caused in part by incomplete and noisy
observations, imperfect models, and the (apparent) nondeterminism of the social
and physical world. Much (and in some cases most) recent work across a range of
computing disciplines (including artificial intelligence, robotics, computer
vision, natural language processing, data mining, information retrieval, and
bioinformatics) has used probabilistic frameworks to explicitly address this
uncertainty. This course will introduce the statistical, mathematical, and
computational foundations of these frameworks, with a particular focus on
Probabilistic Graphical Models, a popular and very general framework. We will
also cover related topics in optimization and probability theory. We will study
applications of these techniques across a range of AI disciplines, with perhaps
a bias towards computer vision, and students will be encouraged to choose final
projects that align with their own research interests.

Prerequisites:
CS B551 (Introduction to Artificial Intelligence), or permission of the
instructor. The course will require some mathematical maturity, especially with
linear algebra, probability theory, and basic calculus, although we will review
the key mathematical concepts as we go along. The course will also assume
proficiency with a modern general-purpose programming language such as Python,
Java, or C.

Grading:
Grading will be based on approximately six assignments, a final project, and
occasional in-class quizzes. The assignments will include both programming and
pen-and-paper problems.

Text and resources:
Koller and Friedman, Probabilistic Graphical Models: Principles and Techniques,
MIT Press, 2009. We will also read research papers and selected chapters from
other books.

Topics will include:
- Review of probability theory and basic calculus
- Graphical model frameworks: Bayes networks, Markov networks
- Exact inference: Variable elimination, conditioning, clique trees
- Approximate inference: Belief propagation, graph cuts, particle-based inference
- Inference as optimization
- Optimization techniques: Gradient descent, Newton methods, constrained
  optimization, stochastic optimization, genetic algorithms
- Learning: maximum likelihood and MAP parameter estimation, structure
  learning, Expectation-Maximization
- Temporal models: Markov chains, Hidden Markov Models
- Applications