
The Dec-POMDP Page

The decentralized partially observable Markov decision process (Dec-POMDP) is a very general model for coordination among multiple agents. It is a probabilistic model that can represent uncertainty in action outcomes, in sensing, and in communication (e.g., communication that is costly, delayed, noisy or unavailable). This web site was created to provide information about the model and the algorithms used to solve Dec-POMDPs.
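
For readers who want the formal definition, a Dec-POMDP is commonly written as the tuple below; this is one standard textbook formulation and is not tied to any particular solver described on this site:

\[
\langle I,\, S,\, \{A_i\},\, T,\, R,\, \{\Omega_i\},\, O,\, h \rangle
\]

where \(I\) is a finite set of agents, \(S\) a set of states, \(A_i\) the action set of agent \(i\), \(T(s' \mid s, \vec{a})\) the state transition probability under joint action \(\vec{a}\), \(R(s, \vec{a})\) the single shared reward, \(\Omega_i\) the observation set of agent \(i\), \(O(\vec{o} \mid s', \vec{a})\) the joint observation probability, and \(h\) the horizon. All agents optimize the same expected reward, but each must choose its actions based only on its own local observation history.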

Not sure what a (single-agent) POMDP is? Check out Tony Cassandra's tutorial.

For more details, see the links below, a recent book chapter, or the CDC-13 survey paper.

There is also a new book on Dec-POMDPs, A Concise Introduction to Decentralized POMDPs.

Also check out masplan.org for more Dec-POMDP code and resources.
Overview | Publications | Selected Talks | Downloads and Problem Descriptions | Links | Applications

This site was created and is maintained by the Resource-Bounded Reasoning Lab at the University of Massachusetts at Amherst.

For further information, contact Christopher Amato at camato AT cs DOT umass DOT edu (or camato AT csail DOT mit DOT edu).