A POMDP Formulation of Proactive Learning

Kyle Hollins Wray and Shlomo Zilberstein. A POMDP Formulation of Proactive Learning. Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence (AAAI), Phoenix, Arizona, 2016.

Abstract

We cast Proactive Learning (PAL), a generalization of Active Learning (AL) with multiple reluctant, fallible, cost-varying oracles, as a Partially Observable Markov Decision Process (POMDP). At each time step, the agent selects an oracle to label a data point while maintaining a belief over the true correctness of its current dataset's labels. The goal is to minimize labeling costs while accounting for the value of obtaining correct labels, thus maximizing the accuracy of the resulting classifier. We prove three properties showing that our formulation leads to a structured, bounded-size set of belief points, which enables strong performance of point-based methods for solving the POMDP. We compare our method with the three original algorithms proposed by Donmez and Carbonell and with a simple baseline. We demonstrate that our approach matches or improves upon the original approach in five different oracle scenarios, each on two datasets. Finally, our algorithm provides a general, well-defined mathematical foundation to build upon.
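To make the belief-maintenance idea in the abstract concrete, here is a minimal illustrative sketch, not the paper's exact model: a Bayes-filter update of the belief that a data point's current label is correct, after querying a single fallible, reluctant oracle. The function name, parameters, and all numeric values are assumptions introduced for illustration.

```python
# Illustrative sketch only (not the paper's formulation): Bayesian update of
# P(label correct) after one query to a fallible, reluctant oracle.

def belief_update(b_correct, oracle_accuracy, answered, agreed):
    """Update the belief that the current label is correct.

    b_correct       -- prior belief P(label correct)
    oracle_accuracy -- P(oracle reports the truth | oracle answers)
    answered        -- whether the oracle responded (reluctant oracles may not)
    agreed          -- whether the oracle's answer matched the current label
    """
    if not answered:
        # A reluctant oracle gave no observation; the belief is unchanged.
        return b_correct
    if agreed:
        # Agreement is evidence the label is correct.
        num = b_correct * oracle_accuracy
        den = num + (1.0 - b_correct) * (1.0 - oracle_accuracy)
    else:
        # Disagreement is evidence the label is incorrect.
        num = b_correct * (1.0 - oracle_accuracy)
        den = num + (1.0 - b_correct) * oracle_accuracy
    return num / den

# Example: with a uniform prior (0.5), a 0.9-accurate oracle that answers
# and agrees with the current label raises the belief to 0.9.
print(belief_update(0.5, 0.9, answered=True, agreed=True))  # -> 0.9
```

In the full POMDP, such beliefs are maintained jointly over the dataset's labels, and the choice of which oracle to query trades off each oracle's cost, reluctance, and accuracy.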

Bibtex entry:

@inproceedings{WZaaai16,
  author    = {Kyle Hollins Wray and Shlomo Zilberstein},
  title     = {A {POMDP} Formulation of Proactive Learning},
  booktitle = {Proceedings of the Thirtieth AAAI Conference on
               Artificial Intelligence (AAAI)},
  year      = {2016},
  pages     = { },
  address   = {Phoenix, Arizona},
  url       = {http://rbr.cs.umass.edu/shlomo/papers/WZaaai16.html}
}
