University of Massachusetts Amherst
College of Information and Computer Sciences

COMPSCI 683
Artificial Intelligence
Spring 2020

Shlomo Zilberstein

Tentative Schedule


1/21

Lecture 1: Introduction [notes]
Course information. The scope of artificial intelligence, its goals and achievements. Computer systems as intelligent agents. Types of environments, agents, and performance measures.
Reading: R&N, Sections 1.1-2.5 in 3e (1.1-2.5 in 2e).


1/23

Lecture 2: Problem solving as search in a graph [notes]
Formulating problems as search in a graph. Measuring the performance of search techniques: completeness, optimality, time and space complexity. Blind search strategies: breadth-first search, uniform-cost search, depth-first search, depth-limited search, iterative deepening search, bidirectional search. Efficient memory management.
Reading: Sections 3.1-3.4 (3.1-3.7).


1/28

Lecture 3: Heuristic search strategies [notes]
Guiding search using heuristic information. Greedy best-first search, A* and its properties, IDA*. The nature and origin of heuristics. Admissible evaluation functions. The effect of heuristic error.
Reading: Sections 3.5-3.6 (4.1-4.2).
Handout: Homework Assignment #1


1/30

Lecture 4: Abstraction and hierarchical search techniques [notes]
Multi-level search, using abstraction to speed up search, hierarchical A*. Choosing good abstractions, computing abstract distances using pattern databases, disjoint pattern databases.
Reading: "Hierarchical A*: Searching abstraction hierarchies efficiently," R.C. Holte, M.B. Perez, R.M. Zimmer, and A.J. MacDonald, Proc. of the 13th National Conference on Artificial Intelligence, 530-535, 1996. "Disjoint pattern database heuristics," R.E. Korf and A. Felner, Artificial Intelligence, 134:9-22, 2002. (optional) "Multiple pattern databases," R.C. Holte, J. Newton, A. Felner, R. Meshulam, and D. Furcy, Proc. of the 14th International Conference on Automated Planning and Scheduling, 122-131, 2004.


2/4

Lecture 5: Search and constraint satisfaction I [notes]
Constraint satisfaction problems (CSPs). Backtracking search for CSPs. Constraint propagation, arc consistency and K-consistency.
Reading: Sections 6.1-6.3 (5.1-5.3).


2/6

Lecture 6: Search and constraint satisfaction II [notes]
Constraint satisfaction techniques continued. Intelligent backtracking. The tradeoff between constraint propagation and search. Local search for CSPs. Complexity and problem structure.
Reading: Sections 6.5-6.6 (5.4-5.5). (optional) "Backjump-based backtracking for constraint satisfaction problems," R. Dechter and D. Frost, Artificial Intelligence, 136:147-188, 2002.


2/11

Lecture 7: Beyond classical search [notes]
Local search algorithms: hill-climbing search, simulated annealing search, genetic algorithms. Searching with partial information. Searching in unknown environments.
Reading: Sections 4.1-4.4 (4.3-4.5).
Handout: Homework Assignment #2


2/13

Lecture 8: Resource-bounded search techniques [notes]
Handling the time and space complexity of heuristic search. Recursive best-first search (RBFS). Anytime A*.
Reading: "Anytime heuristic search," E.A. Hansen and R. Zhou, Journal of Artificial Intelligence Research, 28:267-297, 2007.


2/20

Lecture 9: Adversarial search [notes]
Multi-agent environments and games. Game playing as a search problem. Optimal decisions in two-player games. Imperfect, real-time decisions. Minimax and alpha-beta pruning. Games that include an element of chance.
Reading: Sections 5.1-5.7 (6.1-6.6).


2/25

Lecture 10: Principles of knowledge representation and reasoning [notes]
Agents that reason logically. Principles of knowledge representation. The correspondence between symbolic structures and states of the world. The syntax and semantics of propositional logic and first-order logic.
Reading: Sections 7.1-7.4, 8.1-8.2 (7.1-7.4, 8.1-8.2).


2/27

Lecture 11: Inference in propositional logic [notes]
Knowledge representation and reasoning in propositional logic, theorem proving using satisfiability, local search strategies, hardness of satisfiability. Forward and backward chaining. Inference using resolution.
Reading: Sections 7.5-7.7 (7.5-7.7). "Local search strategies for satisfiability testing," B. Selman, H. Kautz, and B. Cohen, DIMACS Series in Discrete Mathematics and Theoretical Computer Science, 26:521-532, 1996.


3/3

Lecture 12: Inference in first-order logic [notes]
Knowledge representation and reasoning in first-order logic. Representing change in the world. Matching and unification. Forward and backward chaining. Inference using resolution.
Reading: Sections 9.1-9.6 (9.1-9.6).
Handout: Homework Assignment #3


3/5

Lecture 13: Reasoning with uncertain information using Bayesian networks [notes]
The sources of uncertainty. Different approaches to representing and reasoning with uncertain information. Encoding probabilistic knowledge. Bayes' rule and Bayesian inference. Introduction to Bayesian networks. Graphical models and characterization of independence. The semantics of Bayesian networks. Query types and the complexity of evaluation.
Reading: Sections 13.1-13.5, 14.1-14.3 (13.1-13.5, 14.1-14.3).


3/10

Review Session
The review session will be in class during the normal lecture time. This is an opportunity to go over sample midterm problems and discuss general questions.


3/12

Midterm Exam
See the sample exams on the Exams page.


3/24

Lecture 14: Bayesian networks - exact inference using variable elimination [notes] [video]
Query types and the complexity of evaluation. The variable elimination algorithm. Dealing with evidence. Complexity and its dependence on the topology of the graph.
Reading: Sections 14.4 (14.4). (optional) "Bucket elimination: A unifying framework for probabilistic inference," R. Dechter, Uncertainty in Artificial Intelligence, 211-219, 1996.
Handout: Homework Assignment #4


3/26

Lecture 15: Bayesian networks - approximate inference and RPMs [notes] [video]
Inference in multiply connected belief networks. Clustering methods, cutset conditioning, and stochastic simulation. Relational probability models.
Reading: Sections 14.5-14.6 (14.5-14.6). (optional) "Learning probabilistic relational models," L. Getoor, N. Friedman, D. Koller, and A. Pfeffer, In Relational Data Mining, S. Dzeroski and N. Lavrac, Eds, Springer-Verlag, 2001.


3/31

Lecture 16: Decision-theoretic reasoning [notes] [video] [office hours]
Making optimal decisions by maximizing utility. The axioms of decision theory. Utility scales and utility assessment. Human judgment under uncertainty.
Reading: Sections 16.1-16.4 (16.1-16.4).


4/2

Lecture 17: Decision networks and the value of information [notes] [video]
Decision tree methods. The semantics of decision networks. Evaluating decision networks. The value of information.
Reading: Sections 16.5-16.7 (16.5-16.7), "Evaluating influence diagrams," R. Shachter, Operations Research, 34:871-882, 1986. (optional) "A new perspective on algorithms for optimizing policies under uncertainty," R. Dechter, Proc. of the Conference on AI Planning Systems, 2000. "Influence diagrams with memory states: Representation and algorithms," X. Wu, A. Kumar, and S. Zilberstein, Proc. of the 2nd International Conference on Algorithmic Decision Theory, 2011.
Handout: Homework Assignment #5


4/7

Lecture 18: Planning [notes] [video] [office hours]
What is planning? Representations of operators and plans. Planning as state-space search, state progression, goal regression, and means-ends analysis. Heuristics for state-space search. Search in the space of partially ordered plans. Planning graphs. The complexity of planning. Sound and complete planning algorithms. Planning with partially instantiated operators.
Reading: Sections 10.1-10.5 (11.1-11.5).


4/9

Lecture 19: Planning under uncertainty: MDPs [notes] [video]
Formulating planning problems using Markov decision processes. Generating optimal action selection policies using value iteration and policy iteration. Solving Markov decision problems using heuristic search. The LAO* algorithm. Exploiting structured representations in stochastic planning.
Reading: Sections 17.1-17.3 (17.1-17.3), "LAO*: A heuristic search algorithm that finds solutions with loops," E. Hansen and S. Zilberstein, Artificial Intelligence, 129(1-2):35-62, 2001. (optional) "Optimal and approximate stochastic planning using decision diagrams," J. Hoey, R. St-Aubin, A. Hu, and C. Boutilier, University of British Columbia, Technical Report 00-05, June 2000.


4/14

Lecture 20: Planning under partial observability: POMDPs [notes] [video] [office hours]
Partially observable domains, belief states and belief-state MDPs, value-iteration for POMDPs. Representing policies as finite-state controllers. Bounded policy iteration for POMDPs. Optimizing bounded-memory policies using non-linear programming.
Reading: "Planning and acting in partially observable stochastic domains," L.P. Kaelbling, M.L. Littman, and A.R. Cassandra, Artificial Intelligence, 101(1-2):99-134, 1998. (optional) "Solving POMDPs using quadratically constrained linear programs," C. Amato, D.S. Bernstein, and S. Zilberstein, Proc. of the 20th International Joint Conference on Artificial Intelligence, 2007.


4/16

Lecture 21: Reinforcement learning [notes] [video]
How to get intelligent systems to learn from success and failure, reward and punishment. Adaptive dynamic programming, temporal difference learning, learning action-value functions. RTDP and symbolic RTDP.
Reading: Sections 21.1-21.7 (21.1-21.6). (optional) "Symbolic generalization for on-line planning," Z. Feng, E. Hansen, and S. Zilberstein, Proc. of the 19th Conference on Uncertainty in Artificial Intelligence, 2003.
Handout: Homework Assignment #6


4/21

Lecture 22: Multi-agent planning under uncertainty: Decentralized POMDPs [notes] [video]
Modeling collaborative multi-agent systems as decentralized MDPs. The complexity of planning. Dynamic programming algorithm for finite-horizon DEC-POMDPs. Memory-bounded approximation methods, the MBDP algorithm and its descendants.
Reading: "The complexity of decentralized control of Markov decision processes," D.S. Bernstein, R. Givan, N. Immerman, and S. Zilberstein, Mathematics of Operations Research, 27(4):819-840, 2002. (optional) "Dynamic programming for partially observable stochastic games," E.A. Hansen, D.S. Bernstein, and S. Zilberstein, Proc. of the 19th National Conference on Artificial Intelligence, 2004. (optional) "Memory-bounded dynamic programming for DEC-POMDPs," S. Seuken and S. Zilberstein, Proc. of the 20th International Joint Conference on Artificial Intelligence, 2007.


4/23

[optional] Lecture 23: Computational models of bounded rationality [notes] [video] [review session]
Reasoning with limited computational resources. Simon's notion of "satisficing" and various ways to formalize it. The value of computation and ways to factor it into a decision-making process. Anytime algorithms and deliberation scheduling.
Reading: "Using anytime algorithms in intelligent systems," S. Zilberstein, AI Magazine, 17(3):73-83, 1996. (optional) "From substantive to procedural rationality," H.A. Simon, in Models of Bounded Rationality, Volume 2, MIT Press, 1982.


4/28

[optional] Lecture 24: Summary and review for final exam [video] [review session]
Course summary. Historical perspective on AI research: The legacy of Alan Turing. Discussion of selected problems from final exams.
Reading: (optional) "Computing machinery and intelligence," A.M. Turing, Mind, 59:433-460, 1950.


5/1

Final Exam
The exam will take place on Friday, May 1. Check the Exams page for further details.


© 2020 Shlomo Zilberstein.