In these kinds of systems, the problem is to employ diverse capabilities to solve problems that are not only large but also multifaceted. As a simple example of distributed capability, consider the problem of establishing a distributed sensor network to monitor a large area for vehicle movements.
In this kind of problem, the overall task of monitoring cannot be done in a central location since the large area cannot be sensed from any single location. So the establishment problem is to decompose the larger monitoring task into subtasks that can be allocated appropriately to geographically distributed agents.
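As a rough sketch of this decomposition-and-allocation idea (the grid dimensions, sector size, and agent positions below are invented purely for illustration), the area might be partitioned into sectors, each allocated to the nearest sensor agent:

```python
# Sketch: decompose a large monitored area into sectors and allocate each
# sector to the geographically nearest sensor agent. The area bounds,
# sector size, and agent positions are hypothetical.

def decompose(width, height, sector):
    """Split a width x height area into square sectors of side `sector`."""
    return [(x, y, sector) for x in range(0, width, sector)
                           for y in range(0, height, sector)]

def allocate(sectors, agents):
    """Assign each sector to the agent closest to its centre."""
    assignment = {}
    for (x, y, s) in sectors:
        cx, cy = x + s / 2, y + s / 2
        nearest = min(agents, key=lambda a: (agents[a][0] - cx) ** 2 +
                                            (agents[a][1] - cy) ** 2)
        assignment[(x, y)] = nearest
    return assignment

agents = {"A": (5, 5), "B": (15, 5)}   # hypothetical sensor positions
plan = allocate(decompose(20, 10, 10), agents)
```

Each agent then monitors only its own sectors, which is exactly the decomposition the establishment problem asks for.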
The distribution of capability, information, and expertise means that no single agent can solve such tasks alone. Solving distributed problems well demands both group coherence (agents need to want to work together) and competence (agents need to know how to work together well); group coherence is hard to realize among individually-motivated agents.
In distributed problem solving, we typically assume a fair degree of coherence is already present: the agents have been designed to work together; or the payoffs to self-interested agents are only accrued through collective efforts; or social engineering has introduced disincentives for agent individualism; etc.
Distributed problem solving concentrates on competence; as anyone who has played on a team or worked on a group project can tell you, simply having the desire to work together by no means ensures a competent collective outcome.
Distributed problem solving presumes the existence of problems that need to be solved and expectations about what constitutes a solution. For example, a problem to solve might be for a team of computational agents to design an artifact, say, a car.
The solution they formulate must satisfy overall requirements (it should have four wheels, the engine should fit within the engine compartment and be powerful enough to move the car, etc.), and must exist in a particular form, such as a specification document for the assembly plant. The teamed agents formulate solutions by each tackling one or more subproblems and by translating the subproblem solutions into overall solutions.
Sometimes the problem the agents are solving is to construct a plan. And often, even when the agents are solving other kinds of problems, the question of how they should work together (how to decompose problems into subproblems, allocate these subproblems, exchange subproblem solutions, and synthesize overall solutions) is itself a problem the agents need to solve.
So distributed planning is tightly intertwined with distributed problem solving, being both a problem in itself and a means to solving a problem. One of the powerful motivations for distributed problem solving is that it is difficult to build or train agents to be competent in every possible task.
Moreover, even if it were feasible to build or train such capable agents, it would often be overkill because, at any given time, most of those capabilities would go to waste. The strategy in human systems, and adopted in many distributed problem-solving systems, is to bring together on demand combinations of specialists in different areas, combining their expertise to solve problems that are beyond their individual capabilities.
There are several motivations for distributed problem solving and distributed planning. One obvious motivation is that using distributed resources concurrently can speed up problem solving through parallelism. The possible improvement depends, of course, on the degree of parallelism inherent in the problem. One problem that permits a large amount of parallelism during planning is a classic search problem in artificial intelligence.
The problem is to find a sequence of moves that will transform an initial state into the goal state. A second motivation for distributed problem solving and planning arises when a problem requires distributed expertise or other distributed problem-solving capabilities.
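One way to picture the available parallelism is that each agent can search the subtree rooted at one of the first moves, and the shortest move sequence found in any subtree is the answer. The toy state space below (an integer that can be doubled or incremented, starting at 1 with goal 10) is an invented stand-in for a real planning problem:

```python
# Sketch: parallelism in state-space search. Each "agent" explores the
# subtree rooted at one of the first moves; the best solution found in
# any subtree is the answer. The state space is a hypothetical toy.
from collections import deque

def moves(state):
    return [("double", state * 2), ("inc", state + 1)]

def bfs(first_move, start_state, goal, limit=20):
    """Breadth-first search of one subtree; returns a move sequence or None."""
    queue = deque([(start_state, [first_move])])
    seen = {start_state}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        if len(path) >= limit:
            continue
        for name, nxt in moves(state):
            if nxt not in seen and nxt <= goal:   # prune overshooting states
                seen.add(nxt)
                queue.append((nxt, path + [name]))
    return None

# Each agent takes one of the first moves from the initial state 1;
# the searches are independent and could run concurrently.
subtrees = [bfs(name, nxt, goal=10) for name, nxt in moves(1)]
best = min((p for p in subtrees if p), key=len)
```

The two subtree searches share no state, so running them on separate agents yields a speedup bounded by the number of distinct first moves.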
For example, in concurrent engineering, a problem could involve designing and manufacturing an artifact such as a car by allowing specialised agents to formulate components and processes individually, and then combining these into a collective solution.
In an application of flexible decision-making procedures to pursuit games, the goal of the predator agents is to coordinate their actions to capture a prey agent by surrounding it on all four sides.
Like the anti-air defense domain, this domain exhibits a trade-off between decision quality and computation time: while relying on reactive rules might degrade the quality of a predator's decisions, relying on deliberative but delayed decisions might decrease the probability of capturing the prey.
The behaviours of predator agents using various strategies to surround the prey while closing in on it can be compiled into reactive rules from a set of scenarios.
As in the anti-air defense domain, performance profiles can be used to quantify the quality of decision-making depending on the number of other predator agents considered and on the depth of reasoning.
Based on performance profiles and the computed urgency of the situation, the predator agents can decide whether they should consider other agents or take the best current action without further deliberation.
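A minimal sketch of this decision, assuming a hypothetical performance profile (quality as a function of deliberation depth) and a deadline derived from the urgency of the situation; all numbers are invented:

```python
# Sketch: use a performance profile to decide how deeply to deliberate.
# The profile values, time cost per step, and deadlines are hypothetical.

def choose_depth(profile, time_per_step, deadline):
    """Pick the deliberation depth with the best expected quality that
    still finishes before the deadline; depth 0 is the reactive rule."""
    best_depth, best_quality = 0, profile[0]
    for depth, quality in enumerate(profile):
        if depth * time_per_step <= deadline and quality > best_quality:
            best_depth, best_quality = depth, quality
    return best_depth

profile = [0.4, 0.6, 0.75, 0.85, 0.9]   # decision quality after n steps
```

When the situation is urgent (a tight deadline), the agent falls back to its reactive rule at depth 0; with more slack, it reasons about more of the other predators.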
Consider an example of newspaper delivery: two or more newspapers are delivered by separate service agents, who negotiate with one another over the phone.
The expenses of the agents depend only on the number of phone calls. Several subscribers subscribe to all the newspapers, and the delivery agents negotiate over the distribution of these common subscriptions.
Each agent can opt out of the negotiation and deliver all of its own newspapers by itself. The agents are compensated according to the time of delivery: the faster, the better.
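Since opting out is always available, a simple rationality condition governs which divisions of the common subscriptions an agent will accept. A sketch, with invented payments, delivery times, and discount rates:

```python
# Sketch: the opt-out decision in the delivery negotiation. Compensation
# shrinks with delivery time (the faster, the better), and an agent only
# accepts a division that leaves it at least as well off as delivering
# everything alone. All figures are hypothetical.

def utility(payment, delivery_time, discount):
    """Compensation decreases with delivery time: the faster, the better."""
    return payment - discount * delivery_time

def accepts(utility_alone, utility_under_proposal):
    """Opting out is always available, so a rational agent rejects any
    proposal that leaves it worse off than delivering alone."""
    return utility_under_proposal >= utility_alone

alone = utility(payment=100, delivery_time=5, discount=2)   # opt-out value
joint = utility(payment=100, delivery_time=3, discount=2)   # sharing routes
```

Here the shared division delivers faster, so `accepts(alone, joint)` holds; any proposal slower than delivering alone would be rejected.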
We assume that a set of agents wants to satisfy a goal. All agents can take part in satisfying the goal, but they all need to agree on the schedule.
An agent's schedule is interdependent with those of other agents. A schedule is valid if it satisfies both local constraints and equality constraints with other agents' schedules.
For example, in meeting scheduling, a person has local constraints, such as being able to attend only one meeting at a time, while all attendees must agree on the time of a meeting, which is an equality constraint among the schedules of different agents.
Schedules are built incrementally. That is, new activities must be incorporated into an existing valid schedule to produce a new valid schedule. A key feature of incremental scheduling is that existing activities often need to be moved, or "bumped" and rescheduled, in order to successfully accommodate the new activities.
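A minimal sketch of incremental scheduling with bumping, assuming a simple slot-based schedule (the slot granularity and activity names are hypothetical):

```python
# Sketch: incremental scheduling with "bumping". A schedule maps time
# slots to activities; adding a new activity may displace an existing
# one, which is then rescheduled into a free slot. Slot granularity and
# activity names are invented for illustration.

def add_activity(schedule, activity, preferred, horizon=10):
    """Place `activity` at its preferred slot, bumping any occupant to
    the first free slot. Returns the new schedule, or None if no valid
    schedule can accommodate the bumped activity."""
    schedule = dict(schedule)            # leave the caller's schedule intact
    bumped = schedule.get(preferred)
    schedule[preferred] = activity
    if bumped is not None:
        for slot in range(horizon):
            if slot not in schedule:
                schedule[slot] = bumped  # reschedule the bumped activity
                return schedule
        return None                      # bumped activity cannot be placed
    return schedule

s = add_activity({2: "review"}, "meeting", preferred=2)
```

In the distributed version, each such change would also have to be checked against the equality constraints shared with other agents before it becomes final.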
Schedules contain private information and each agent retains ownership of its schedule. We assume this as an explicit property of the application domain.
This property eliminates a solution approach in which all information is communicated to a central scheduler that constructs a global schedule for all agents.
Instead, each agent makes its own scheduling decisions and communicates with others to ensure a valid schedule.
Importantly, the assumption of private information places limits on the information that is exchanged.
These are key features of many real-world distributed scheduling problems, and incremental scheduling is clearly an important class of problem.
Interdependencies as defined by equality constraints arise whenever multiple agents must schedule a joint activity that must be executed at the same time, e.g., scheduling a coordinated invasion in military mission planning.
An extended set of partial planning coordination mechanisms can be designed and implemented to assist in scheduling activities for teams of cooperative agents. This partial planning approach views coordination as fine-tuning local control, not replacing it: coordination occurs via a set of domain-independent mechanisms that post constraints to the local scheduler about the importance of particular tasks and the appropriate times for initiating and completing them.
By concentrating on the creation of local scheduling constraints, this avoids the sequential scheduling that occurs in the original partial planning approach when there are multiple plans.
By having separate modules for coordination and local scheduling, we can take advantage of advances in real-time scheduling to produce cooperative distributed problem-solving systems that respond to real-time deadlines.
We can also take advantage of local schedulers in which a great deal of domain scheduling expertise has already been encoded. Finally, our approach allows consideration of termination issues not typically addressed.
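In this spirit, the coordination module only posts constraints to a separate local scheduler rather than building the schedule itself. A sketch, with invented tasks, deadlines, and importance values:

```python
# Sketch: a coordination module that posts constraints (earliest start,
# deadline, importance) to a separate local scheduler, rather than
# dictating the schedule itself. Task names and numbers are invented.

constraints = {}   # task -> (earliest_start, deadline, importance)

def post_constraint(task, earliest, deadline, importance):
    """The coordination mechanism fine-tunes local control by posting
    constraints; it never builds the schedule directly."""
    constraints[task] = (earliest, deadline, importance)

def local_schedule(tasks):
    """A stand-in local scheduler: order tasks by posted deadline,
    breaking ties by importance (higher first)."""
    return sorted(tasks, key=lambda t: (constraints[t][1],
                                        -constraints[t][2]))

post_constraint("sense", earliest=0, deadline=5, importance=2)
post_constraint("report", earliest=5, deadline=8, importance=3)
post_constraint("calibrate", earliest=0, deadline=5, importance=1)
order = local_schedule(["report", "calibrate", "sense"])
```

Because the two modules meet only at the constraint interface, the stand-in scheduler here could be swapped for a real-time or domain-specialised one without changing the coordination side.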
One motivation is that beliefs or other data can be distributed. For example, following the successful solution of the distributed sensor network establishment problem just described, the problem of actually doing the distributed vehicle monitoring could in principle be centralised: each of the distributed sensor agents could transmit raw data to a central site to be interpreted into a global view.
This centralised strategy, however, could involve tremendous amounts of unnecessary communication compared to allowing the separate sensor agents to formulate local interpretations that could then be transmitted selectively.
Another motivation is that the results of problem solving or planning might need to be distributed to be acted on by multiple agents. For example, in a task involving the delivery of objects between locations, distributed delivery agents can act in parallel.
The formation of the plans that they execute could be done at a centralised site (a dispatcher) or could involve distributed problem-solving among them. Moreover, during the execution of their plans, features of the environment that were not known at planning time, or that unexpectedly change, can trigger changes in what the agents should do.
Again, all such decisions could be routed through a central coordinator, but for a variety of reasons (exploiting parallelism, sporadic coordinator availability, slow communication channels, etc.) it could be preferable for the agents to modify their plans unilaterally or with limited communication among them.
Note that, depending on the circumstances, different steps might be more or less difficult. For example, sometimes an overburdened agent begins with a bundle of separate tasks, so decomposition is unnecessary; sometimes the agent can pass tasks off to any of a number of identical agents, so allocation is trivial; and sometimes accomplishing the tasks does not yield any results that need to be synthesized in any complex way.
When an agent has many tasks to do, it should enlist the help of agents with few or no tasks. The main steps in task sharing are:

1. Task decomposition: Generate the set of tasks to potentially be passed to others. This generally involves decomposing large tasks into subtasks that could be tackled by different agents.

2. Task allocation: Assign the subtasks to appropriate agents.

3. Task accomplishment: The appropriate agents each accomplish their subtasks, which could include further decomposition and subtask assignment, recursively, to the point that an agent can accomplish the task it is handed alone.

4. Result synthesis: When an agent accomplishes its subtask, it passes the result to the appropriate agent (usually the original agent, since it knows the decomposition decisions and so is most likely to know how to compose the results into an overall solution).

When an agent has received solutions to all of the subproblems it passed down, it can compose these into a more comprehensive solution, and then pass this up as its own solution.
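The steps above can be sketched as follows; the task (summing a list), the agent pool, and the way results compose are hypothetical stand-ins for real subproblems and problem solvers:

```python
# Sketch of the four task-sharing steps: decompose, allocate, accomplish,
# synthesize. The task and the agent names are invented for illustration.

def decompose(task, n_agents):
    """Task decomposition: split the task into roughly equal subtasks."""
    chunk = max(1, len(task) // n_agents)
    return [task[i:i + chunk] for i in range(0, len(task), chunk)]

def allocate(subtasks, agents):
    """Task allocation: assign subtasks round-robin across the pool."""
    return [(agents[i % len(agents)], st) for i, st in enumerate(subtasks)]

def accomplish(subtask):
    """Task accomplishment: each agent solves its subtask (here, a sum)."""
    return sum(subtask)

def synthesize(results):
    """Result synthesis: the originator composes results into a solution."""
    return sum(results)

agents = ["a1", "a2"]
assignment = allocate(decompose([1, 2, 3, 4, 5, 6], len(agents)), agents)
answer = synthesize(accomplish(st) for _, st in assignment)
```

In a real system `accomplish` could itself decompose and reallocate, which is the recursion described in step 3.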