The MADP Toolbox
This is the home page of the Multiagent Decision Process (MADP) Toolbox, an object-oriented software toolbox for representing multiagent decision problems and for performing planning and learning in multiagent systems.
On this page:
- Introduction
- Download
- Installation
- Documentation
- More information
- Recent releases and features
- More details (installation, contributing etc.)
Introduction
The Multiagent decision process (MADP) Toolbox is a free C++ software toolbox for scientific research in decision-theoretic planning and learning in multiagent systems (MASs). This project was jointly started by Matthijs Spaan and me, but a growing number of people have contributed to the toolbox.
Multiagent systems are complex: they have complex interactions and can live in complex environments. To make sense of them, a field of research focuses on formal models for multiagent decision making. These models include various extensions of the Markov decision process (MDP), such as multiagent MDPs (MMDPs), decentralized MDPs (Dec-MDPs), decentralized partially observable MDPs (Dec-POMDPs), partially observable stochastic games (POSGs), etc. We refer to the collection of such models as multiagent decision processes (MADPs). (For some pointers, see here).
The MADP toolbox aims to facilitate research by providing an object-oriented library for representing multiagent problems, and for performing planning and learning in them.
MADP in action
Here are a few pointers to help you get an idea of what MADP can do.
- A couple of simple example programs.
- Watch the video that shows how Dec-POMDP strategies beat the built-in game AI in StarCraft.
- MADP is the backend for a number of solution methods on the THINC Lab solver page maintained by Ekhlas Sonu: http://lhotse.cs.uga.edu/pomdp/.
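To give an idea of what an MADP program looks like, below is a small example along the lines of the one in the MADP overview paper and manual, which solves the Dec-Tiger benchmark with exhaustive JESP for horizon 3. It must be compiled and linked against the installed MADP libraries, and the exact class names and headers should be checked against the manual for your version.

```cpp
#include <iostream>
#include "ProblemDecTiger.h"
#include "JESPExhaustivePlanner.h"

using namespace std;

int main()
{
    // The Dec-Tiger benchmark problem shipped with MADP.
    ProblemDecTiger dt;
    // Exhaustive JESP planner for horizon 3.
    JESPExhaustivePlanner jesp(3, &dt);
    jesp.Plan();
    // Print the value of the computed joint policy, and the policy itself.
    cout << jesp.GetExpectedReward() << endl;
    cout << jesp.GetJointPolicy()->SoftPrint() << endl;
    return 0;
}
```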
Download
The current version of the MADP toolbox is 0.4.1 and can be downloaded here:
Installation
The MADP toolbox uses GNU autotools, so a typical installation looks as follows. First, unpack the tarball, change into the created madp directory, and then run:

./configure
make
make install   [optional]
The MADP toolbox is being developed on Debian GNU/Linux, but should work on any recent Linux distribution.
It has been reported to work under Cygwin with the following instructions (thanks to William Rudnisky):
1. Download all libraries from the "Devel" category in the Cygwin installer.
2. Inside "prob.h", add #include <math.h> to the includes.
After this, the project builds normally.
Documentation
The MADP Toolbox comes with reasonably extensive documentation:
- The primary source of information is the manual.
- Additionally, there is html doxygen documentation (class reference), which can be built from the source code ("make htmldoc"). It is also available here or as .tar.gz.
- For a first impression, a high-level overview is given in this paper,
- or have a look at some examples.
For questions, please subscribe to the MADP-users mailing list.
More information
- Generate the html doxygen documentation (class reference) or download it here.
- Some more details are explained in the manual.
- Subscribe to the MADP-users mailing list
Recent Releases
2017-03-08: Version 0.4.1 Released
- Includes test software run with 'make check'
- Includes policy iteration with GPU support for policy evaluation
- Improved pruning in AlphaVectorPruning
2016-05-20: Version 0.4 Released
- Includes a newly written Spirit parser for .pomdp files
- Includes new code for pruning POMDP vectors, removing the dependence on Cassandra's code and the old lp_solve version
- Includes new factor graph solution code
- Generalized firefighting CGBG domain added
- Simulation class for Factored Dec-POMDPs and TOI Dec-MDPs
- Approximate BG clustering methods and kGMAA with clustering
2015-04-10: Version 0.3.1 Released
This version has many new features and algorithms:
- Updated manual
- More example programs
- Support for factored models
- Parsing ProbModelXML models
- GMAA*-ICE, DP-LPC
- Simulation and reinforcement learning
- Collaborative Graphical Bayesian Games + solvers
- etc.
MADP: Some details
Overview
The framework consists of the following parts, grouped in different libraries:
- the base library
- the parser
- a support library
- a planning library
Windows port
Windows is not supported by the MADP developers, but in 2010 Prashant Doshi and his students Christopher Jackson and Kenneth Bogert were able to port the toolbox to Windows and use it to compute policies in the game of StarCraft.
- If you are interested in reviving the Windows port, we would very much welcome this. Please email me or any of the developers.
- Watch the video that shows how Dec-POMDP strategies beat the built-in game AI.
Previous Versions
Older versions are still available:
- MADP toolbox v0.4 (.tar.gz)
- MADP toolbox v0.3.1 (.tar.gz)
- MADP toolbox v0.3 (.tar.gz)
- MADP toolbox v0.2.2 (.tar.gz)
- MADP toolbox v0.1 (.tar.gz)
Contributing to MADP
We welcome all contributions to MADP! The easiest way is to implement your favorite algorithms in the latest version of MADP and share that source with the Development Team. Alternatively, if you expect to contribute a lot, you can join the development team, to get access to the private git repository we use for our own development. Let us know if you are interested in this.
Development Team
Currently active members of the development team are:
- Matthijs Spaan
- Bas Terwijn
- João Messias
- Philipp Robbel
- and myself
The best way to contact us, e.g., if you want to contribute to MADP, is to send us an email at the developer list:
For all other questions, please send them to the MADP-users mailing list.
Contributors
The following people have contributed to MADP:
- Frans Oliehoek
- Matthijs Spaan
- Bas Terwijn
- Erwin Walraven
- João Messias
- Philipp Robbel
- Abdeslam Boularias
- Xuanjie Liu
- Julian Kooij
- Tiago Veiga
- Francisco Melo
- Timon Kanters
- Philipp Beau