Where can multiagent planning make a significant impact in applications? What's the "killer app"?
- applications that require decentralized planning or where decentralized planning is more efficient than centralized
- planetary exploration
- sensor networks
- multi-player video games (simulated participants & human)
- apps requiring privacy
- self-interested agents
- auto manufacturing - supply chain (problems with the utility function)
- It is difficult to motivate the need for decentralized decision making for many applications (including above). Often a centralized planner makes more sense.
How is coordinating humans different than coordinating agents? Are there techniques that are more appropriate for one than the other?
- can't assume humans understand situation/assumptions
- can't model humans accurately
- humans are not rational (from the perspective of the software, at least)
- humans understand the domain model better than software or have a different understanding
- more humans means more complexity, potentially overwhelming the humans
Auctions (market mechanisms) are almost always used for assigning tasks for execution or allocating resources in a multiagent system, but this is divorced from the actual planning algorithm. How can agents plan for auctions or use auctions to plan? Should they?
- if not auctions, then what?
- could alternatively centralize allocations
- The Deep Space Network is a multiagent planning problem where missions compete over the use of antennas (on Earth) for their spacecraft. Auctions have been proposed, but few people think the missions will accept it.
- How should "funny" or real money be distributed?
- The utility of allocating an antenna to a mission in one timeframe depends on allocations in other timeframes, so how should missions bid, or should there be a more complicated mechanism?
- If a mission later decides it doesn't need an antenna, should it be able to sell the antenna time? If so, missions could make a profit off of others, and that doesn't seem right.
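To make the auction idea concrete, here is a minimal sketch (not from the discussion) of a first-price sealed-bid auction allocating antenna time slots to missions. The mission names, slot labels, and bid amounts are all hypothetical.

```python
# Illustrative sketch: first-price sealed-bid auction per time slot.
# Each mission submits a bid per slot; highest bid wins and pays its bid.

def run_auction(bids):
    """bids: {slot: {mission: bid}} -> {slot: (winning_mission, price)}."""
    allocation = {}
    for slot, slot_bids in bids.items():
        winner = max(slot_bids, key=slot_bids.get)  # highest bidder
        allocation[slot] = (winner, slot_bids[winner])
    return allocation

# Hypothetical bids in "funny money" budget units.
bids = {
    "Mon 08:00-10:00": {"MissionA": 30, "MissionB": 45},
    "Mon 10:00-12:00": {"MissionA": 50, "MissionB": 20},
}
alloc = run_auction(bids)
print(alloc)
```

Note that bidding slot-by-slot like this ignores the inter-timeframe utility dependence raised above; handling that would require bids on bundles of slots (a combinatorial auction) or some more complicated mechanism.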
How far have we gotten, and how far do we need to go in evaluating multiagent planning (differently than single agent planning)?
We seem to be using many of the same metrics as single-agent planning. Still need to address:
- communication costs
- # messages
- data volume
- information content
- efficiency of information exchange
- trading flexibility, quality, and uncertainty
- idle time
- real time (related to idle time)
- agents have different evaluation functions
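As a sketch of how the communication metrics above might be instrumented, the following hypothetical logger tallies message count and data volume per agent (names and payloads are invented for illustration).

```python
# Hypothetical instrumentation for communication-cost metrics:
# number of messages and data volume (bytes) sent by each agent.
from collections import defaultdict

class CommLog:
    def __init__(self):
        self.count = defaultdict(int)   # messages sent, per agent
        self.volume = defaultdict(int)  # bytes sent, per agent

    def send(self, sender, receiver, payload: bytes):
        # In a real system this would also deliver the message;
        # here we only record the cost metrics.
        self.count[sender] += 1
        self.volume[sender] += len(payload)

log = CommLog()
log.send("agent1", "agent2", b"goto waypoint 3")
log.send("agent2", "agent1", b"ack")
print(dict(log.count), dict(log.volume))
```

Measuring information content or efficiency of exchange is harder; counts and bytes are only the most mechanical of the metrics listed.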
Should there be benchmarks or competitions?
- RoboCup Soccer & Rescue
- Trading Agent Competition
- but multiagent planning is not used much in these competitions (yet)
- Multiagent Planning problems are so varied, it is difficult to have a competition that addresses many aspects
- Chosen problems could take focus away from other kinds of problems.
- Don't want field to focus on small incremental improvements (e.g. 5% speedup, 10 more blocks)
Benchmarks are still good for evaluating different approaches.
Are distributed POMDP representations limiting? If so, how?
- of course, they have the same limitations as single agent POMDPs
- finding optimal policies is hard, but techniques are improving
Do existing approaches apply to self-interested agents (stochastic games)?
- Is this really multiagent planning? From a single agent point of view, it could be centralized planning with models of other agents. This is the path of current research.
- Current research often looks at repeated matrix games. Are other problems too hard?
How is centralized planning for multiple agents different than planning for concurrent action? Should this still be called multiagent planning?
The question was unclear; restated: much multiagent planning research is about offline centralized planning for execution by multiple agents. Because of this, and because single-agent planning research is producing planning algorithms that handle concurrency and could be (is being) applied to multiagent execution, is the problem solved, or should it migrate to single-agent planning research?
- for allocating tasks/roles, agents can be modeled as non-depletable resources for a centralized planner
- including communication with resulting belief updates as part of the problem is an inherent multiagent issue
- similarly, differences in agent observability
- reactive execution is different with multiple agents
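The "agents as non-depletable resources" view above can be sketched as a centralized planner assigning tasks to whichever agent becomes free earliest; agents are reusable rather than consumed. Task names, durations, and agent names here are hypothetical.

```python
# Sketch: centralized task allocation treating agents as reusable
# (non-depletable) resources. Each task goes to the earliest-free agent.
import heapq

def assign(tasks, agents):
    """tasks: [(name, duration)]; agents: [name] -> {task: agent}."""
    free = [(0.0, a) for a in agents]   # (time agent becomes free, agent)
    heapq.heapify(free)
    plan = {}
    for task, duration in tasks:
        t, agent = heapq.heappop(free)  # earliest-available agent
        plan[task] = agent
        heapq.heappush(free, (t + duration, agent))
    return plan

tasks = [("survey", 2.0), ("drill", 3.0), ("relay", 1.0)]
plan = assign(tasks, ["rover1", "rover2"])
print(plan)
```

What this sketch deliberately omits is exactly what the bullets above flag as inherently multiagent: communication, belief updates, and differences in observability.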
Excluding such problem types from multiagent research is a slippery slope: centralized approaches can be taken for all of these problems, and it would be extreme to say that multiagent planning is really only distributed planning.
What research questions/challenges are being ignored in multiagent planning?
- trading communication costs, computation costs, and quality
- metrics for commitment/flexibility
- self-interested and partially self-interested agents (common and competing rewards/goals for agents)
- multi-objective optimization?
- adversarial/Byzantine environments
- trust issues
Last modified: Tue Jun 14 10:41:46 Pacific Standard Time 2005