The Resource-Bounded Reasoning Lab conducts research on the computational foundations of automated reasoning and action. We are particularly interested in the implications of uncertainty and limited computational resources for the design of autonomous agents. In most practical settings it is not feasible, or even desirable, to find the optimal action, making some form of approximate reasoning necessary. This raises a simple but fundamental question: what does it mean for an agent to be "rational" when it does not have enough knowledge or computational power to derive the best course of action?

Our overall approach to this problem is based on probabilistic reasoning and decision-theoretic principles, used both to develop planning algorithms and to monitor their execution so as to maximize the value of computation. Our meta-level control mechanisms reason explicitly about the cost of decision-making and can optimize the amount of deliberation (or "thinking") an agent does before taking action. This research spans both theoretical questions and the development of effective algorithms and applications. We have recently developed new models that address this challenge in situations involving multiple decision makers operating in collaborative or adversarial domains, and we are also working on decision-theoretic techniques to model and exploit bounded rationality and opponent models in decentralized settings.
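The idea of meta-level control over deliberation can be illustrated with a small sketch: an anytime algorithm improves its solution step by step, and a monitor stops it when the expected gain in solution quality no longer outweighs the cost of further computation. All names and numbers below are illustrative assumptions, not the lab's actual models or code.

```python
def run_with_monitoring(improve, quality, time_cost, max_steps=100):
    """Run an anytime improvement loop, stopping when the marginal
    value of computation (quality gain minus time cost) drops to zero.

    improve:   takes the current solution (or None) and returns a better one
    quality:   maps a solution to a scalar quality
    time_cost: cost charged for each additional deliberation step
    """
    solution = None
    prev_q = float("-inf")
    step = 0
    for step in range(max_steps):
        solution = improve(solution)
        q = quality(solution)
        # Marginal value of computation for this step:
        # improvement in quality minus the cost of the time spent.
        voc = (q - prev_q) - time_cost(step)
        if voc <= 0:
            break
        prev_q = q
    return solution, step

if __name__ == "__main__":
    # Toy anytime process: quality approaches 1.0 with diminishing returns,
    # so at some point another step of "thinking" is no longer worth its cost.
    sol, steps = run_with_monitoring(
        improve=lambda s: 0.5 if s is None else s + 0.5 * (1.0 - s),
        quality=lambda s: s,
        time_cost=lambda step: 0.05,  # constant cost per deliberation step
    )
    print(f"stopped after {steps} steps with quality {sol:.5f}")
```

The monitor here uses a simple myopic stopping rule; richer schemes estimate future improvement from a learned performance profile, but the core trade-off between solution quality and deliberation cost is the same.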

Current projects

Other projects