EE 67074: AI Planning

From Graph Search to Reinforcement Learning


Realizing the dream of autonomy requires systems that learn to make good decisions. Decision-making is a fundamental challenge in an enormous range of tasks, including robotics, transportation systems, and smart manufacturing. This class provides a solid introduction to the field of AI planning and decision-making, with a focus on robotic applications. The lectures start from AI planning methods for deterministic systems and progress to learning near-optimal decisions from past experience in a real world full of uncertainty. This course is intended for graduate students interested in robotics, autonomy, control, and learning.

Course outline

  • Deterministic decision-making: graph search, automated planning, dynamic programming, model predictive control

  • Decision-making under uncertainty: Markov chains, Markov decision processes (MDPs), hidden Markov models, partially observable MDPs (POMDPs)

  • Reinforcement learning: model-based RL, policy gradients, value-function-based methods, actor-critic methods

We will cover these topics through a combination of lectures, assignments, and programming-based projects.
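As a taste of the kind of material covered, the sketch below shows value iteration, a classic dynamic-programming method for solving an MDP. The two-state MDP here (its transition probabilities and rewards) is entirely made up for illustration and is not taken from the course material.

```python
# Illustrative sketch: value iteration on a toy 2-state MDP.
# P[s][a] is a list of (probability, next_state, reward) outcomes;
# the specific numbers below are invented for demonstration only.
P = {
    0: {0: [(1.0, 0, 0.0)], 1: [(0.8, 1, 1.0), (0.2, 0, 0.0)]},
    1: {0: [(1.0, 1, 0.5)], 1: [(1.0, 0, 0.0)]},
}
gamma = 0.9  # discount factor

# Repeatedly apply the Bellman optimality update until the values stop changing.
V = {s: 0.0 for s in P}
for _ in range(1000):
    new_V = {
        s: max(
            sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
            for outcomes in actions.values()
        )
        for s, actions in P.items()
    }
    converged = max(abs(new_V[s] - V[s]) for s in P) < 1e-8
    V = new_V
    if converged:
        break

# Extract the greedy policy with respect to the converged values.
policy = {
    s: max(actions, key=lambda a: sum(p * (r + gamma * V[s2])
                                      for p, s2, r in actions[a]))
    for s, actions in P.items()
}
print(V, policy)
```

Because the Bellman update is a contraction for gamma < 1, the value estimates converge geometrically; the greedy policy extracted at the end is optimal for this toy model.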


Textbooks

  • (Recommended) Artificial Intelligence: A Modern Approach, Stuart J. Russell and Peter Norvig

  • (Recommended) Principles of Robot Motion: Theory, Algorithms, and Implementations, Howie Choset, Kevin Lynch, Seth Hutchinson, George Kantor, Wolfram Burgard

  • (Recommended) Reinforcement Learning: An Introduction, Sutton and Barto, 2nd Edition

Mengxue Hou
Assistant Professor, Electrical Engineering

My research interests include robotic autonomy, mobile sensor networks, and human-robot interaction. I aim to devise practical, computationally efficient, and provably correct algorithms that make robotic systems cognizant, taskable, and adaptive, so that they can collaborate with human operators and co-exist in complex, ever-changing, and unknown environments.