|Lecture 1: Introduction
Course information. The scope of artificial intelligence, its goals and achievements. Computer systems as intelligent agents. Types of environments, agents, and performance measures.
Reading: R&N, Sections 1.1-2.5 in 3e (1.1-2.5 in 2e).
|Lecture 2: Problem solving as search in a graph
Formulating problems as search in a graph. Measuring the performance of search techniques: completeness, optimality, time and space complexity. Blind search strategies: breadth-first search, uniform-cost search, depth-first search, depth-limited search, iterative deepening search, bidirectional search. Efficient memory management.
Reading: Sections 3.1-3.4 (3.1-3.7).
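To make the blind-search strategies concrete, here is a minimal Python sketch of breadth-first search on a graph given as an adjacency dict (illustrative only, not part of the assigned reading; the function name and graph representation are our own):

```python
from collections import deque

def breadth_first_search(graph, start, goal):
    """Return a shortest path (by number of edges) from start to goal, or None.

    graph: dict mapping each node to a list of its neighbors.
    """
    frontier = deque([start])
    parent = {start: None}          # doubles as the explored set
    while frontier:
        node = frontier.popleft()
        if node == goal:
            # Reconstruct the path by following parent pointers back to start.
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for neighbor in graph[node]:
            if neighbor not in parent:
                parent[neighbor] = node
                frontier.append(neighbor)
    return None
```

Swapping the FIFO queue for a stack gives depth-first search; adding a priority queue keyed on path cost gives uniform-cost search.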
|Lecture 3: Heuristic search strategies
Guiding search using heuristic information. Greedy best-first search, A* and its properties, IDA*. The nature and origin of heuristics. Admissible evaluation functions. The effect of heuristic error.
Reading: Sections 3.5-3.6 (4.1-4.2).
Handout: Homework Assignment #1
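A minimal A* sketch in Python may help fix the ideas (illustrative only; the graph encoding and zero heuristic in the example are our own assumptions, and any heuristic passed in should be admissible for the optimality guarantee discussed in lecture):

```python
import heapq

def a_star(graph, start, goal, h):
    """A* search. graph: dict mapping node -> list of (neighbor, edge_cost).

    h(n) is a heuristic estimate of the cost from n to goal; if h is
    admissible (never overestimates), the returned path is optimal.
    """
    open_list = [(h(start), 0, start, [start])]   # entries: (f, g, node, path)
    best_g = {start: 0}
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g
        if g > best_g.get(node, float('inf')):
            continue                               # stale queue entry
        for neighbor, cost in graph[node]:
            g2 = g + cost
            if g2 < best_g.get(neighbor, float('inf')):
                best_g[neighbor] = g2
                heapq.heappush(open_list,
                               (g2 + h(neighbor), g2, neighbor, path + [neighbor]))
    return None, float('inf')
```

With h(n) = 0 for all n, this reduces to uniform-cost search.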
|Lecture 4: Abstraction and hierarchical search techniques
Multi-level search, using abstraction to speed up search, hierarchical A*. Choosing good abstractions, computing abstract distances using pattern databases, disjoint pattern databases.
Reading: "Hierarchical A*: Searching abstraction hierarchies efficiently," R.C. Holte, M.B. Perez, R.M. Zimmer, and A.J. MacDonald, Proc. of the 13th National Conference on Artificial Intelligence, 530-535, 1996. "Disjoint pattern database heuristics," R.E. Korf and A. Felner, Artificial Intelligence, 134:9-22, 2002. (optional) "Multiple pattern databases," R.C. Holte, J. Newton, A. Felner, R. Meshulam, and D. Furcy, Proc. of the 14th International Conference on Automated Planning and Scheduling, 122-131, 2004.
|Lecture 5: Search and constraint satisfaction I
Constraint satisfaction problems (CSPs). Backtracking search for CSPs. Constraint propagation, arc consistency and K-consistency.
Reading: Sections 6.1-6.3 (5.1-5.3).
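The plain backtracking algorithm for CSPs can be sketched in a few lines of Python (illustrative only; this version handles binary "neighbors must differ" constraints, as in map coloring, and omits the propagation and ordering heuristics covered in lecture):

```python
def backtrack(assignment, variables, domains, neighbors):
    """Backtracking search for a CSP whose only constraint is that
    neighboring variables take different values (e.g., map coloring)."""
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        # Consistency check against already-assigned neighbors.
        if all(assignment.get(n) != value for n in neighbors[var]):
            assignment[var] = value
            result = backtrack(assignment, variables, domains, neighbors)
            if result is not None:
                return result
            del assignment[var]        # undo and try the next value
    return None
```

Variable- and value-ordering heuristics (MRV, least-constraining value) and constraint propagation plug into the variable and value selection steps above.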
|Lecture 6: Search and constraint satisfaction II
Constraint satisfaction techniques continued. Intelligent backtracking. The tradeoff between constraint propagation and search. Local search for CSPs. Complexity and problem structure.
Reading: Sections 6.5-6.6 (5.4-5.5). (optional) "Backjump-based backtracking for constraint satisfaction problems," R. Dechter and D. Frost, Artificial Intelligence, 136:147-188, 2002.
|Lecture 7: Beyond classical search
Local search algorithms: hill-climbing search, simulated annealing search, genetic algorithms. Searching with partial information. Searching in unknown environments.
Reading: Sections 4.1-4.4 (4.3-4.5).
Handout: Homework Assignment #2
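Steepest-ascent hill climbing, the simplest of the local search algorithms above, fits in a short Python sketch (illustrative only; the function names and the integer example are our own):

```python
def hill_climbing(initial, neighbors, value, max_steps=1000):
    """Steepest-ascent hill climbing: repeatedly move to the best
    neighbor until no neighbor improves on the current state."""
    current = initial
    for _ in range(max_steps):
        candidates = neighbors(current)
        if not candidates:
            return current
        best = max(candidates, key=value)
        if value(best) <= value(current):
            return current             # local maximum reached
        current = best
    return current
```

Simulated annealing replaces the greedy move with a randomized one that sometimes accepts downhill moves, which is what lets it escape the local maxima this version gets stuck on.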
|Lecture 8: Resource-bounded search
Handling the time and space complexity of heuristic search. Recursive best-first search (RBFS). Anytime A*.
Reading: "Anytime heuristic search," E.A. Hansen and R. Zhou, Journal of Artificial Intelligence Research, 28:267-297, 2007.
|Lecture 9: Adversarial search
Multi-agent environments and games. Game-playing as a search problem. Optimal decisions in two-player games. Imperfect, real-time decisions. Mini-max and alpha-beta pruning. Games that include an element of chance.
Reading: Sections 5.1-5.7 (6.1-6.6).
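Minimax with alpha-beta pruning can be written generically over a successor function and a static evaluation function; the sketch below is illustrative only (the nested-list game tree in the example is our own encoding):

```python
def alphabeta(state, depth, alpha, beta, maximizing, successors, evaluate):
    """Depth-limited minimax with alpha-beta pruning.

    successors(state) -> list of child states (empty at terminal states);
    evaluate(state)   -> static value from MAX's point of view.
    """
    children = successors(state)
    if depth == 0 or not children:
        return evaluate(state)
    if maximizing:
        value = float('-inf')
        for child in children:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, successors, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                  # beta cutoff: MIN will avoid this node
        return value
    else:
        value = float('inf')
        for child in children:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, successors, evaluate))
            beta = min(beta, value)
            if alpha >= beta:
                break                  # alpha cutoff: MAX will avoid this node
        return value
```

With perfect move ordering, pruning lets the search examine roughly the square root of the nodes plain minimax would visit.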
|Lecture 10: Principles of knowledge representation and reasoning
Agents that reason logically. Principles of knowledge representation. The correspondence between symbolic structures and states of the world. The syntax and semantics of propositional logic and first-order logic.
Reading: Sections 7.1-7.4, 8.1-8.2 (7.1-7.4, 8.1-8.2).
|Lecture 11: Inference in propositional logic
Knowledge representation and reasoning in propositional logic, theorem proving using satisfiability, local search strategies, hardness of satisfiability. Forward and backward chaining. Inference using resolution.
Reading: Sections 7.5-7.7 (7.5-7.7). "Local search strategies for satisfiability testing," B. Selman, H. Kautz, and B. Cohen, DIMACS Series in Discrete Mathematics and Theoretical Computer Science, 26:521-532, 1996.
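A WalkSAT-style local search for satisfiability can be sketched as follows (illustrative only; the DIMACS-like clause encoding and parameter defaults are our own, and real implementations score flips by how few satisfied clauses they break rather than rescanning the whole formula):

```python
import random

def walksat(clauses, n_vars, max_flips=10000, p=0.5):
    """Local search for CNF satisfiability.

    clauses: list of clauses; a clause is a list of nonzero ints, where
    literal k means variable |k| with polarity sign(k).
    Returns a satisfying assignment (dict var -> bool) or None on failure.
    """
    assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
    sat = lambda lit: assign[abs(lit)] == (lit > 0)
    for _ in range(max_flips):
        unsatisfied = [c for c in clauses if not any(sat(l) for l in c)]
        if not unsatisfied:
            return assign
        clause = random.choice(unsatisfied)
        if random.random() < p:
            var = abs(random.choice(clause))       # random-walk move
        else:
            # Greedy move: flip the variable that satisfies the most clauses.
            def score(v):
                assign[v] = not assign[v]
                s = sum(any(sat(l) for l in c) for c in clauses)
                assign[v] = not assign[v]
                return s
            var = max((abs(l) for l in clause), key=score)
        assign[var] = not assign[var]
    return None
```

Failure to find a model does not prove unsatisfiability, which is why local search is incomplete.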
|Lecture 12: Inference in first-order logic
Knowledge representation and reasoning in first-order logic. Representing change in the world. Matching and unification. Forward and backward chaining. Inference using resolution.
Reading: Sections 9.1-9.6 (9.1-9.6).
Handout: Homework Assignment #3
|Review Session
The review session will be in class during the normal lecture time. This is an opportunity to go over sample midterm problems and discuss general questions.
|Midterm Exam
See sample exams
|Lecture 13: Reasoning with uncertain information using Bayesian networks
The sources of uncertainty. Different approaches to representing and reasoning with uncertain information. Encoding probabilistic knowledge. Bayes' rule and Bayesian inference. Introduction to Bayesian networks. Graphical models and characterization of independence. The semantics of Bayesian networks. Query types and the complexity of evaluation.
Reading: Sections 13.1-13.5, 14.1-14.3 (13.1-13.5, 14.1-14.3).
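Bayesian inference over a discrete set of hypotheses is just Bayes' rule followed by normalization; a minimal sketch (illustrative only; the medical-test numbers in the example are made up):

```python
def bayes_rule(prior, likelihood):
    """Posterior P(H | e) from prior P(H) and likelihood P(e | H),
    for a discrete set of hypotheses H and a single piece of evidence e."""
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnormalized.values())        # P(e), the normalizing constant
    return {h: p / z for h, p in unnormalized.items()}
```

For example, a 1% prior on a disease and a test with a 90% true-positive and 5% false-positive rate yields a posterior of about 15% after a positive result, a standard illustration of why rare conditions stay unlikely even after positive evidence.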
|Lecture 14: Bayesian networks - exact inference using variable elimination
Query types and the complexity of evaluation. The variable elimination algorithm. Dealing with evidence. Complexity and its dependence on the topology of the graph.
Reading: Sections 14.4 (14.4). (optional) "Bucket elimination: A unifying framework for probabilistic inference," R. Dechter, Uncertainty in Artificial Intelligence, 211-219, 1996.
Handout: Homework Assignment #4
|Lecture 15: Bayesian networks - approximate inference and RPMs
Inference in multiply connected belief networks. Clustering methods, cutset conditioning, and stochastic simulation. Relational probability models.
Reading: Sections 14.5-14.6 (14.5-14.6). (optional) "Learning probabilistic relational models," L. Getoor, N. Friedman, D. Koller, and A. Pfeffer, In Relational Data Mining, S. Dzeroski and N. Lavrac, Eds, Springer-Verlag, 2001.
|Lecture 16: The foundations of utility theory
Making optimal decisions by maximizing utility. The axioms of decision theory. Utility scales and utility assessment. Human judgment under uncertainty.
Reading: Sections 16.1-16.4 (16.1-16.4).
|Lecture 17: Decision networks and the value of information
Decision tree methods. The semantics of decision networks. Evaluating decision networks. The value of information.
Reading: Sections 16.5-16.7 (16.5-16.7), "Evaluating influence diagrams," R. Shachter, Operations Research, 34:871-882, 1986. (optional) "A new perspective on algorithms for optimizing policies under uncertainty," R. Dechter, Proc. of the Conference on AI Planning Systems, 2000. "Influence diagrams with memory states: Representation and algorithms," X. Wu, A. Kumar, and S. Zilberstein, Proc. of the 2nd International Conference on Algorithmic Decision Theory, 2011.
Handout: Homework Assignment #5
|Lecture 18: Classical planning
What is planning? Representations of operators and plans. Planning as state-space search, state progression, goal regression, and means-ends analysis. Heuristics for state-space search. Search in the space of partially ordered plans. Planning graphs. The complexity of planning. Sound and complete planning algorithms. Planning with partially instantiated operators.
Reading: Sections 10.1-10.5 (11.1-11.5).
|Lecture 19: Planning under uncertainty
Formulating planning problems using Markov decision processes. Generating optimal action selection policies using value iteration and policy iteration. Solving Markov decision problems using heuristic search. The LAO* algorithm. Exploiting structured representations in stochastic planning.
Reading: Sections 17.1-17.3 (17.1-17.3), "LAO*: A heuristic search algorithm that finds solutions with loops," E. Hansen and S. Zilberstein, Artificial Intelligence, 129(1-2):35-62, 2001. (optional) "Optimal and approximate stochastic planning using decision diagrams," J. Hoey, R. St-Aubin, A. Hu, and C. Boutilier, University of British Columbia, Technical Report 00-05, June 2000.
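Value iteration for a finite MDP can be sketched in a few lines (illustrative only; the callback-style interface for transitions and rewards, and the two-state chain in the example, are our own assumptions):

```python
def value_iteration(states, actions, transition, reward, gamma=0.9, eps=1e-6):
    """Compute near-optimal state values for a finite MDP.

    actions(s)       -> list of actions available in s (empty if terminal);
    transition(s, a) -> list of (probability, next_state) pairs;
    reward(s, a)     -> expected immediate reward.
    """
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            if not actions(s):            # terminal state keeps value 0
                continue
            # Bellman backup: best one-step lookahead value over actions.
            best = max(
                reward(s, a) + gamma * sum(p * V[s2] for p, s2 in transition(s, a))
                for a in actions(s)
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V
```

Policy iteration alternates a cheaper policy-evaluation sweep with greedy policy improvement instead of backing up over all actions every pass; LAO* avoids touching states unreachable under the best policy found so far.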
|Lecture 20: Planning under partial observability
Partially observable domains, belief states and belief-state MDPs, value-iteration for POMDPs. Representing policies as finite-state controllers. Bounded policy iteration for POMDPs. Optimizing bounded-memory policies using non-linear programming.
Reading: "Planning and acting in partially observable stochastic domains," L.P. Kaelbling, M.L. Littman, and A.R. Cassandra, Artificial Intelligence, 101(1-2):99-134, 1998. (optional) "Solving POMDPs using quadratically constrained linear programs," C. Amato, D.S. Bernstein, and S. Zilberstein, Proc. of the 20th International Joint Conference on Artificial Intelligence, 2007.
|Lecture 21: Reinforcement learning
How to get intelligent systems to learn from success and failure, reward and punishment. Adaptive dynamic programming, temporal difference learning, learning action-value functions. RTDP and symbolic RTDP.
Reading: Sections 21.1-21.7 (21.1-21.6). (optional) "Symbolic generalization for on-line planning," Z. Feng, E. Hansen, and S. Zilberstein, Proc. of the 19th Conference on Uncertainty in Artificial Intelligence, 2003.
Handout: Homework Assignment #6
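Learning action-value functions by temporal difference is captured by tabular Q-learning; a minimal sketch (illustrative only; the environment interface and default hyperparameters are our own assumptions):

```python
import random

def q_learning(env_step, reset, actions, episodes=500,
               alpha=0.5, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning with an epsilon-greedy behavior policy.

    env_step(s, a) -> (next_state, reward, done); reset() -> start state;
    actions: list of actions available in every state.
    """
    Q = {}
    for _ in range(episodes):
        s, done = reset(), False
        while not done:
            if random.random() < epsilon:
                a = random.choice(actions)                          # explore
            else:
                a = max(actions, key=lambda x: Q.get((s, x), 0.0))  # exploit
            s2, r, done = env_step(s, a)
            best_next = max(Q.get((s2, x), 0.0) for x in actions)
            # One-step temporal-difference update toward the bootstrapped target.
            old = Q.get((s, a), 0.0)
            target = r if done else r + gamma * best_next
            Q[(s, a)] = old + alpha * (target - old)
            s = s2
    return Q
```

Because the update bootstraps from max over next-state values regardless of the action actually taken, Q-learning is off-policy; SARSA, the on-policy variant, uses the action the behavior policy actually selects.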
|Lecture 22: Multi-agent planning under uncertainty: Decentralized POMDPs
Modeling collaborative multi-agent systems as decentralized MDPs. The complexity of planning. Dynamic programming algorithm for finite-horizon DEC-POMDPs. Memory-bounded approximation methods, the MBDP algorithm and its descendants.
Reading: "The complexity of decentralized control of Markov decision processes," D.S. Bernstein, R. Givan, N. Immerman, and S. Zilberstein, Mathematics of Operations Research, 27(4):819-840, 2002. (optional) "Dynamic programming for partially observable stochastic games," E.A. Hansen, D.S. Bernstein, and S. Zilberstein, Proc. of the 19th National Conference on Artificial Intelligence, 2004. (optional) "Memory-bounded dynamic programming for DEC-POMDPs," S. Seuken and S. Zilberstein, Proc. of the 20th International Joint Conference on Artificial Intelligence, 2007.
|Lecture 23: Computational models of bounded rationality
Reasoning with limited computational resources. Simon's notion of "satisficing" and various ways to formalize it. The value of computation and ways to factor it into a decision-making process. Anytime algorithms and deliberation scheduling.
Reading: "Using anytime algorithms in intelligent systems," S. Zilberstein, AI Magazine, 17(3):73-83, 1996. (optional) "From substantive to procedural rationality," In Herbert Simon, Models of Bounded Rationality, Volume 2, MIT Press, 1982.
|Lecture 24: Summary and review for final exam
Course summary. Historical perspective on AI research: The legacy of Alan Turing. Discussion of selected problems from final exams.
Reading: (optional) "Computing machinery and intelligence," A.M. Turing, Mind, 59:433-460, 1950.
|Final Exam
The final exam will take place on Thursday, May 4, from 10:30 to 12:30 in South College Room W245.
See sample exams
© 2018 Shlomo Zilberstein.