Reinforcement Learning for Weakly-Coupled MDPs and an Application to Planetary Rover Control

Daniel S. Bernstein and Shlomo Zilberstein. Reinforcement Learning for Weakly-Coupled MDPs and an Application to Planetary Rover Control. Proceedings of the Sixth European Conference on Planning (ECP), 373-378, Toledo, Spain, 2001.

Abstract

Weakly-coupled Markov decision processes can be decomposed into subprocesses that interact only through a small set of bottleneck states. We study a hierarchical reinforcement learning algorithm designed to take advantage of this particular type of decomposability. To test our algorithm, we use a decision-making problem faced by autonomous planetary rovers. In this problem, a Mars rover must decide which activities to perform and when to traverse between science sites in order to make the best use of its limited resources. In our experiments, the hierarchical algorithm performs better than Q-learning in the early stages of learning, but unlike Q-learning it converges to a suboptimal policy. This suggests that it may be advantageous to use the hierarchical algorithm when training time is limited.
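To make the comparison concrete, here is a minimal sketch of the flat Q-learning baseline on a toy weakly-coupled MDP: two chain-shaped subprocesses joined by a single bottleneck state. The state layout, rewards, and hyperparameters are illustrative assumptions, not taken from the paper or its rover domain.

```python
import random

# Toy weakly-coupled MDP (illustrative, not the paper's rover domain):
# states 0..3 form subprocess A, state 4 is the bottleneck, states 5..8
# form subprocess B, and state 8 is the goal.
N_STATES, GOAL, ACTIONS = 9, 8, (0, 1)  # action 0 = left, 1 = right

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
    r = 1.0 if s2 == GOAL else -0.01  # small step cost favors short paths
    return s2, r, s2 == GOAL

def q_learning(episodes=500, alpha=0.5, gamma=0.95, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s = 0
        for _ in range(200):  # step cap per episode, for safety
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda a: Q[s][a])
            s2, r, done = step(s, a)
            target = r if done else r + gamma * max(Q[s2])
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
            if done:
                break
    return Q

Q = q_learning()

# Greedy rollout: the learned policy should cross the bottleneck
# (state 4) on its way from the start to the goal.
s, path = 0, [0]
while s != GOAL and len(path) < 50:
    s, _, _ = step(s, max(ACTIONS, key=lambda a: Q[s][a]))
    path.append(s)
print(path)
```

A hierarchical variant of the kind the paper studies would instead learn separate policies within each subprocess and treat the bottleneck state as the sole interface between them.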

Bibtex entry:

@inproceedings{BZecp01,
  author    = {Daniel S. Bernstein and Shlomo Zilberstein},
  title     = {Reinforcement Learning for Weakly-Coupled {MDP}s and an
               Application to Planetary Rover Control},
  booktitle = {Proceedings of the Sixth European Conference on Planning},
  year      = {2001},
  pages     = {373--378},
  address   = {Toledo, Spain},
  url       = {http://rbr.cs.umass.edu/shlomo/papers/BZecp01.html}
}

shlomo@cs.umass.edu
UMass Amherst