The multi-agent model includes a "Digital Twin" simulation of every foxhole in the affected area, as well as every road, communication network, and so on. The scenario is populated with more than one million agents: a synthetic population statistically identical to the real population of the affected area in factors such as troop equipment.
Each agent is an autonomous subroutine that responds in reasonably human ways to other agents and to the evolving disaster by switching among multiple modes of behaviour, for example flight, or efforts to find other troops.
The point of such models is to avoid describing troop missions from the top down with fixed equations, as is traditionally done in fields such as economics. Instead, outcomes such as vehicle crashes emerge through the interactions of many individuals, reproducing the kind of spontaneous real-world behaviour that is otherwise hard to simulate.
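As a rough illustration of that bottom-up style, the sketch below (with all rules, capacities, and counts invented for illustration, not taken from the model described above) lets simple agents advance around a ring of road segments and hold back when the next segment is full; jams then emerge from the interactions rather than from any fixed equation.

```python
import random

# Illustrative sketch only: agents follow simple local rules on a ring of
# road segments; congestion "emerges" without any top-down equation.
class Agent:
    def __init__(self, position):
        self.position = position      # index of the agent's road segment
        self.mode = "advance"         # current behaviour mode

    def step(self, occupancy, capacity, segments):
        nxt = (self.position + 1) % segments
        if occupancy.get(nxt, 0) >= capacity:
            self.mode = "wait"        # local rule: hold back at a jam
        else:
            self.mode = "advance"
            self.position = nxt

SEGMENTS, CAPACITY = 20, 5            # invented numbers
agents = [Agent(random.randrange(SEGMENTS)) for _ in range(100)]
occupancy = {}
for t in range(50):
    occupancy = {}
    for a in agents:
        occupancy[a.position] = occupancy.get(a.position, 0) + 1
    for a in agents:
        a.step(occupancy, CAPACITY, SEGMENTS)

# Congested segments appear from the interactions alone.
print([seg for seg, n in occupancy.items() if n >= CAPACITY])
```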
That kind of detail is exactly what commanders need. The model can warn that a system failure at point X might well lead to a surprise traffic jam at point Y. If commanders decide to deploy a mobile dispatch centre in the early hours of the crisis to restore communications, the model can tell them whether more vehicles will take to the roads, or fewer.
Agent-based models are how you get all these pieces sorted out and look at the interactions.
The downside is that such models tend to be big: each of the model's initial runs kept a large computing cluster busy for a couple of days, forcing the agents to be relatively simple-minded.
"There's a fundamental trade-off between the complexity of individual agents and the size of the simulation."
But computers keep getting bigger and more powerful, as do the data sets used to populate and calibrate the models. More and more decision-makers are taking agent-based models seriously. "They're the most flexible and detailed models out there, which makes them by far the most effective in understanding and directing policy."
One early success simulated troop interactions on ordinary desktop computers by paring agent-based modeling down to its essence: a set of simple agents that moved around a grid in search of resources that were abundant in some places and scarce in others. Though simple, the model gave rise to surprisingly complex group behaviours such as combat and troop segregation.
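A minimal sketch of that pared-down grid idea, with the grid size, resource distribution, and movement rule invented here: agents greedily move toward the richest neighbouring cell and deplete it, and clusters form around the originally rich patches.

```python
import random

# Invented, illustrative grid model: resources are abundant in some cells
# and scarce in others; agents wander greedily toward the richest cell.
W, H = 20, 20
resource = [[random.choice([0, 0, 1, 5]) for _ in range(W)] for _ in range(H)]
agents = [[random.randrange(W), random.randrange(H)] for _ in range(40)]

def richest_neighbor(x, y):
    # Greedy rule: move to the adjacent cell (wrapping) with most resource.
    options = [((x + dx) % W, (y + dy) % H)
               for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
    return max(options, key=lambda p: resource[p[1]][p[0]])

for step in range(100):
    for a in agents:
        a[0], a[1] = richest_neighbor(a[0], a[1])
        if resource[a[1]][a[0]] > 0:
            resource[a[1]][a[0]] -= 1   # harvesting depletes the cell

# Agents end up clustered (segregated) around the originally rich patches.
print(sorted({(x, y) for x, y in agents}))
```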
Another milestone was the Operational Simulation System, an agent-based traffic model. Unlike traditional traffic models, which used equations to describe moving vehicles en masse as a fluid, it modeled each vehicle and driver as an agent moving through a contingency-theatre road network.
The simulation included a realistic mix of combat vehicles, driven by troops with a realistic mix of abilities and destinations. When applied to road networks in an actual combat theatre, the simulations did better than traditional models at predicting traffic jams, one reason agent-based models are now a standard tool in the planning of mobile operations.
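The per-vehicle approach can be sketched as below (the road graph, travel times, and congestion penalty are assumptions for illustration): each driver-agent picks the currently fastest route, and each choice changes conditions for the next driver, so later vehicles shift to emptier roads much as real drivers do.

```python
import heapq

# Illustrative per-vehicle routing: link travel time grows with the number
# of vehicles already on the link, so routing decisions interact.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
base_time = {("A", "B"): 4, ("A", "C"): 6, ("B", "D"): 5, ("C", "D"): 3}
load = {edge: 0 for edge in base_time}

def travel_time(edge):
    return base_time[edge] * (1 + 0.5 * load[edge])   # congestion penalty

def best_route(src, dst):
    # Dijkstra over the *current* congested travel times.
    pq, seen = [(0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt in graph[node]:
            heapq.heappush(pq, (cost + travel_time((node, nxt)), nxt, path + [nxt]))

for vehicle in range(10):                 # vehicles enter one at a time
    route = best_route("A", "D")
    for edge in zip(route, route[1:]):
        load[edge] += 1                   # each choice changes conditions
print(load)  # later drivers shift to the emptier route
```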
The model also incorporates data on how troops typically move between stations. The result is a model that not only predicts how bad a situation could get, something commanders could already determine from equations, but also suggests where the worst hot spots might be.
Given how much is at stake in those simulations, users always want to know why they should trust the results. How can they be sure the model output has anything to do with real-world scenarios—especially in cases such as mission disasters, which have no empirical data to go on?
That question has several answers. First, users shouldn't expect the models to make specific predictions about, say, a mission crash in the future. Instead, most modelers accommodate the inevitable uncertainties by averaging over many runs of each scenario and displaying a likely range of outcomes, allowing planners to use the model as a test bed to game out the consequences of taking action A, B, or C.
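That averaging step might look like the following hedged sketch, where run_scenario is a stand-in for any of the models above and the outcome-proxy numbers are invented: many seeded runs per course of action, reported as a median and a central range rather than a single prediction.

```python
import random
import statistics

# Illustrative ensemble averaging: run_scenario is a hypothetical stand-in
# for a full simulation; numbers are invented for demonstration.
def run_scenario(action, seed):
    rng = random.Random(seed)
    base = {"A": 120, "B": 90, "C": 150}[action]   # hypothetical outcome proxy
    return base * rng.uniform(0.6, 1.4)            # per-run variability

for action in ("A", "B", "C"):
    outcomes = sorted(run_scenario(action, s) for s in range(1000))
    lo, hi = outcomes[50], outcomes[949]           # central 90% of the runs
    print(f"action {action}: median={statistics.median(outcomes):.0f}, "
          f"90% range=({lo:.0f}, {hi:.0f})")
```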
Modelers should not just slap the model together and see whether the final results make sense. Instead, they should validate the model as they build it, looking at each piece as they slot it in, such as how troops get to and from the point of action, and matching it to real-world data sources. "At every step, there is data that you're calibrating to."
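One way to picture that piece-by-piece calibration (all figures invented): fit one component's single tunable knob against an observed data source before slotting in the next component.

```python
# Illustrative single-component calibration: observed shares and the model
# knob are invented; the point is matching each piece to data as it goes in.
observed_departures = [0.05, 0.35, 0.40, 0.20]   # e.g., survey shares by time block

def simulated_departures(spread):
    # One tunable knob for the "getting to the point of action" component.
    raw = [1, (1 + spread) * 6, (1 + spread) * 7, 4]
    total = sum(raw)
    return [r / total for r in raw]

def error(spread):
    sim = simulated_departures(spread)
    return sum((s - o) ** 2 for s, o in zip(sim, observed_departures))

best = min((error(s / 100), s / 100) for s in range(0, 101))
print("calibrated spread:", best[1], "squared error:", round(best[0], 4))
```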
Modelers should also try to calibrate the agents' behaviours using studies of troop behaviour. Doing so can be tricky, since troops are complicated, but in crisis situations modeling behaviour becomes easier because it tends to be instinctive. The model, for example, gets by with built-in rules that cause the agents to shift back and forth among just a few behaviours, such as seeking shelter, evacuating, or attacking.
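Those built-in rules can be as simple as a small state machine; in the sketch below the states and transition weights are assumptions, not the model's actual values.

```python
import random

# Hypothetical behaviour-mode state machine: states and weights invented.
TRANSITIONS = {
    "normal":   [("shelter", 3), ("evacuate", 2), ("normal", 1)],
    "shelter":  [("shelter", 4), ("evacuate", 2)],
    "evacuate": [("evacuate", 5), ("attack", 1)],
    "attack":   [("attack", 3), ("evacuate", 1)],
}

def step(state):
    # Weighted random choice among the allowed next behaviours.
    options = TRANSITIONS[state]
    r = random.uniform(0, sum(w for _, w in options))
    for nxt, w in options:
        r -= w
        if r <= 0:
            return nxt
    return state

state = "normal"
for t in range(10):
    state = step(state)
    print(t, state)
```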
Even so, field studies point to crucial finer points. "When a mission falls apart, we find that troops are more likely to think twice, weighing the possibility of being left without support from others even against the priority of the crisis itself."
Likewise, when under fire, commanders tend to assume that when the alarm sounds, troops will immediately file toward the exits in an orderly way. But just watch the next time your unit has a fire drill. "Troops don't evacuate without first talking to others, and, if need be, collecting friendly forces."
The evidence also suggests that blind, unthinking panic is rare. In one agent-based model, developers tried to reproduce what happened in a past incident in which a unit came under fire. Some in the group jammed together so tightly that no one could move, and casualties were taken. From participants' accounts, the team had good data on the behaviour of those involved and their relationships to others.
And when we incorporated those relationships into the model, the runs most consistent with the actual incident under fire involved almost no panic at all. "We found that troops were trying to get out with those in the unit who shared a similar assignment and proximity to the fight."
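A hedged reconstruction of that finding, not the team's actual code (group sizes, distances, and movement rules are invented): agents head for the exit only once enough of their own group is nearby, so "collecting friendly forces" replaces blind panic in the runs.

```python
import random

# Illustrative group-constrained egress: all parameters are assumptions.
random.seed(1)
N, GROUPS = 30, 6                       # 6 groups of 5 agents each
group_of = [i % GROUPS for i in range(N)]
pos = [random.uniform(5, 100) for _ in range(N)]   # distance to the exit

def group_gathered(i):
    mates = [j for j in range(N) if group_of[j] == group_of[i] and j != i]
    near = sum(1 for j in mates if abs(pos[j] - pos[i]) < 15)
    return near >= len(mates) // 2      # wait until half the group is close

for t in range(300):
    for i in range(N):
        if pos[i] <= 0:
            continue                    # already out
        if group_gathered(i):
            pos[i] = max(0.0, pos[i] - 2.0)          # head for the exit
        else:
            # Drift toward the group's centre instead of the exit.
            centre = sum(pos[j] for j in range(N)
                         if group_of[j] == group_of[i]) / 5
            pos[i] += 0.5 if centre > pos[i] else -0.5

print("evacuated:", sum(1 for p in pos if p <= 0), "of", N)
```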
Results from the model can sometimes be counterintuitive. For example, the model suggests that right after the strike, commanders should expect to see some troops rushing toward ground zero, jamming the roads in a frantic effort to pick up their comrades. The model also points to a good way to reduce the chaos: quickly restore communications service, so commanders can verify the safety of troops.
If agent-based modelers have a top priority, it's to make the simulations easier to build, run, and use—not least because that would make them more accessible to real-world decision-makers.
Decision makers would access a playbook: network status updates containing digital versions of every large operational theatre, with pre-computed models of just about every potential hazard. "Then, if something actually happens, like an explosion, commanders could pick the model that is the closest match and do a near-real-time calculation of things like the optimal mix of engagement and evacuation."
It is "agent-based modeling as a service." Teams are developing and testing a networked tool that lets commanders to build disaster simulations and do what-if assessments on their own, without having to hire programmers. With just a few clicks, users can specify key variables such as the region of interest, and the type of attack. Then, using the tool's built-in maps and graphs, commanders can watch the simulation unfold and see the effect of their proposed response protocols.
Although the tool is specialised for troop movement in and out of theatre, its underlying geographic models and populations are general and can be applied to other kinds of disasters. Ultimately, the goal is to build such models into services that are individualised to each unit and situation.
So it won't just be decision-makers far away from the fight using those systems. It will be you. "It will be as routine as Google Maps."
Advances over the last decades in information and communication technologies have triggered rapid developments in a large number of application areas involving cooperation among networked agent entities.
On one hand, these network advances have enabled the implementation of a number of long-predicted models that were waiting for the proper conditions; on the other, they have motivated the development of new concepts.
One of the main underlying trends in training development is the focus on agent models, protocols, and mechanisms to support the collaboration of pre-existing entities in distributed real-world scenarios, be it among organisations, among troops, or between troops and organisations.
Strong reliance on standards will help facilitate the interfacing of existing applications with the virtual infrastructure, but unfortunately not all classes of information that need to be exchanged among virtual agents are covered by existing standards.
Initiatives by application agent groups help facilitate this process. In general, it is necessary to develop some interface/mapping layer at each enterprise so that the enterprise can interact adequately with the virtual infrastructure.
It is also notable that most legacy applications were designed for enterprise-centred local operation and to be operated by agents. To have these applications supply information to, or consume information from, the virtual network, it is clearly necessary to extend their functionality.
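One common way to extend such functionality, sketched here with invented field names, is a thin adapter that maps a legacy application's local records onto the shared schema the virtual network expects.

```python
# Illustrative interface/mapping layer: classes and fields are hypothetical.
class LegacyInventoryApp:
    def local_report(self):
        return {"STK_QTY": 42, "STK_ID": "PUMP-7"}   # legacy field names

class VirtualNetworkAdapter:
    FIELD_MAP = {"STK_QTY": "quantity", "STK_ID": "item_id"}

    def __init__(self, legacy_app):
        self.app = legacy_app

    def publish(self):
        # Map legacy fields onto the shared schema used by the network.
        raw = self.app.local_report()
        return {self.FIELD_MAP[k]: v for k, v in raw.items()}

print(VirtualNetworkAdapter(LegacyInventoryApp()).publish())
# -> {'quantity': 42, 'item_id': 'PUMP-7'}
```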
Interoperability among enterprise applications represents a major challenge for supporting the rapid formation of virtual scenarios in response to new operational opportunities. On the other hand, it is important to keep in mind that each enterprise has its own way of doing business.
Furthermore, the level of information sharing among virtual agents is likely to change, either with the trust level among partners or with adjustments to the configuration over time. Therefore, flexible mechanisms for virtual coordination and for defining information-visibility rights are needed, in order to support both the autonomy and the changing behaviour of virtual agents.
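Such a visibility-rights mechanism might be as simple as the following sketch, with the trust levels and policy sets invented: what a partner may see is a function of its current trust level, so rights can be adjusted as trust changes over time.

```python
# Illustrative trust-based visibility rights; all policy values invented.
VISIBILITY_POLICY = {
    "high":   {"orders", "inventory", "schedules", "capacity"},
    "medium": {"orders", "schedules"},
    "low":    {"orders"},
}

trust = {"partner_a": "high", "partner_b": "low"}

def visible_fields(partner, record):
    allowed = VISIBILITY_POLICY[trust[partner]]
    return {k: v for k, v in record.items() if k in allowed}

record = {"orders": 12, "inventory": 300, "schedules": "weekly", "capacity": 0.8}
print(visible_fields("partner_b", record))   # -> {'orders': 12}
trust["partner_b"] = "medium"                # trust re-evaluated over time
print(visible_fields("partner_b", record))   # now includes schedules
```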
The first step is to understand and document user requirements and constraints on system capability so that the process can meet those requirements. Availability/reliability parameters must be explained and must guide trade-off studies of mission capability and operational support, defining the baseline against which the new system will be measured.
Performance factors then need to be matched with user needs, translated into clearly defined system parameters, and allocated and integrated across the disciplines needed to realise success.
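A worked example of that allocation step, using the standard inherent-availability relation A = MTBF / (MTBF + MTTR) with invented subsystem figures: subsystem availabilities in series multiply up to a system figure that can be compared against the user requirement.

```python
# Illustrative availability roll-up; subsystem names and figures invented.
subsystems = {
    "radio":   {"mtbf": 900.0,  "mttr": 2.0},
    "vehicle": {"mtbf": 400.0,  "mttr": 6.0},
    "sensor":  {"mtbf": 1200.0, "mttr": 4.0},
}

def availability(mtbf, mttr):
    # Inherent availability: uptime share per subsystem.
    return mtbf / (mtbf + mttr)

system_a = 1.0
for name, p in subsystems.items():
    a = availability(**p)
    system_a *= a                 # series system: any failure takes it down
    print(f"{name}: A = {a:.4f}")
print(f"system availability: {system_a:.4f}")  # compare to the requirement
```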
Systems engineering attempts to optimise effectiveness and affordability as the capability is created. The systems approach makes sure the question "What are the user needs and constraints?" is answered before the answer is designed.
The top-level programme plan for achieving the required availability/reliability is executed in a manner that ensures the requirements are achievable. Through understanding user needs and constraints, new capabilities begin to be defined.
The case must be established for a materiel approach to resolving gaps in capability. The primary focus is to acquire quality products that balance the process of satisfying user needs with improving mission capability and operational support, while adhering to scheduling constraints and justifiable costs.
During capability assessments, time and resources need to be set aside to measure and characterise current operational experience, and to organise metrics and supply-line performance data so that conclusions can be reached about the causes of shortfalls.
It is also imperative to understand subsystem design complexity and its influence on availability/reliability. A capabilities-based approach leverages the expertise of all service directorate activities in defining new capabilities.
The primary focus is to ensure that the joint force is properly equipped and supported to perform across disciplines, to identify improvements to existing capabilities, and to create new warfighting capabilities.
The process defines needed capabilities through characterisation of doctrine, organisation, training, materiel, leadership, and troop units. Availability/reliability levels are defined within this framework, principally in the category of materiel.
The goal is to inform and share information among decision makers tasked with system use and support. Information to be shared includes user requirements and how the system will be used, or where it might potentially miss its targets.
Key to any assessment is a description of the use/support location, the constraints on what support is available for the system, what information will be available to decision makers, and how that information will be verified.
Operational commanders typically have relatively clear ideas about the planning factors for specific pieces of equipment. However, troops usually do not have an updated set of planning factors to use in developing mission plans or in deciding what to bring aboard their ships.
This lack of documented planning factors led us to rely more heavily on the users of the application. We consulted equipment training tasks that provided details on the capabilities of pieces of equipment, such as payload, maximum speed, or lift capacity, and the application makes this information available to the user tasked with selecting equipment to accomplish required tasks.
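A sketch of how such a planning-factor lookup might be exposed to the user (the equipment names and figures here are illustrative placeholders, not validated planning factors):

```python
# Hypothetical planning-factor table; all figures are illustrative only.
EQUIPMENT = {
    "MTVR":   {"payload_kg": 7100,  "max_speed_kph": 105, "lift_capacity_kg": 0},
    "LVSR":   {"payload_kg": 10200, "max_speed_kph": 93,  "lift_capacity_kg": 0},
    "CH-53K": {"payload_kg": 0,     "max_speed_kph": 315, "lift_capacity_kg": 16300},
}

def candidates(task_load_kg, need_lift=False):
    # Filter equipment whose relevant capacity meets the task requirement.
    key = "lift_capacity_kg" if need_lift else "payload_kg"
    return [name for name, spec in EQUIPMENT.items()
            if spec[key] >= task_load_kg]

print(candidates(8000))                   # ground haul   -> ['LVSR']
print(candidates(12000, need_lift=True))  # external lift -> ['CH-53K']
```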
The Site Visit Executive found nothing but problems during simulation training exercises. The "problem" part of the Fleet Deployment Problem is inspired by several sources.
In some cases, we identified "problems" during our war games, played at the operational level against an [OpFor] team. Most of the time, each game identifies several key operational enablers employed by our forces.
With the help of our team, we capture that key moment—friendly force disposition, enemy situation, mission, and even the losses suffered on both sides—and use it as a point of departure both for our input to the Chief Integrated Agenda and for our Fleet Problems.
In other cases, as we considered how our ConOps would play out in reality, it became apparent there were warfighting tasks that were critical to success that we could not execute with confidence.
This gap was not because the units did not practice these tasks individually (the efficiency of the plan ensured they could), but because we as a force never practiced them together, in combination with multiple tasks, against a free-playing, informed, and representative Red Team.
In one case, during an exercise planning session, we discussed a critical operational tactic that is used routinely in exercises and assumed to be executable by the fleet.
In the course of the brief, someone piped up and said, “Sir, you know we can’t actually do that.” In the silence that followed we could see some in the room thought we could, while others were nodding in agreement that we could not. With that, we had the scenario for a Fleet Problem with the mandate to “go find out.”
We learned that the doctrine and assumptions in one area had removed the capability to implement key tactics in another. Because the tasks had never been performed together in theatre, the disconnect was never clearly identified. It was not until we tried to execute under realistic, true free-play conditions that we discovered the problem's causal factors, which allowed us to apply solutions.
In discussion with the troops, we frame the problem that is given to a unit or formation. For example, it is critical that we be able to operate carrier strike groups [CSGs] in areas of significant submarine threat.
Our traditional approach to this challenge would be to create an antisubmarine warfare [ASW] exercise, tasking submarines to act as targets within a set geographic area.
In a Fleet Problem, we instead would task the CSG to conduct a combat mission, conducting strikes, and give it maximum flexibility in timing and mechanism. We then would create scenarios rich in submarine threats.
The CSG’s mission would not be ASW, but rather conducting a core combat strike mission in support of the joint fight in a robust submarine threat scenario.
Managing the submarine threat is the means to the end: strike. If you destroyed all enemy submarines and lost no friendly units but were unable to execute the assigned mission of strike, then Blue loses and Red wins. How the CSG commander manages that threat to accomplish the mission is not prescribed.
Speed and manoeuvre? Go for it. Aggressive surface ASW? Great. Will the escorts sweep ahead or stay near the CSG? Air assets? Of course. How is that coordination going?
"Task success equals mission success" may hold true in training scenarios as long as Red is constrained. But in our experience, rarely in the real world does the sum of the tasks successfully completed equal mission success. It is also worth noting that there is no experimentation in the usual sense in Fleet Problems.
We are not notionally employing systems and weapons that are not already deployed in the fleet. Each unit attacks the problem using what it physically has on hand today. So the commander's development and execution of a plan must be based on our warfighting intent rather than on ticking off a discrete schedule of individual training objectives.
The biggest challenge stems from the combinatorial size of the problem. Several dozen organisations team up to administer the requirements and provision of tactical training, including schoolhouses and resource sponsors, along with a number of different training status updates that specify policy for agent modeling and simulation, and an unknown number of simulators.
Beyond the basic phase of training, training requirements for ships are only minimally articulated. Each ship must complete a set of training exercises tailored to counter the deficiencies of the ship in question.
These exercises are selected and planned by a number of different teams. In the intermediate and advanced phases of training, there is little consistency in the choice of exercises or the requirements that must be met, and, in all phases of the operation, several different teams evaluate the success of training.
Training requirements and standards are often too general or inconsistent for assessing readiness, further complicating the problem of determining how agent-based simulation must best be used.
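One way to frame the exercise-selection problem, with exercise names and deficiency mappings invented: treat it as set cover over a ship's assessed deficiencies and choose exercises greedily.

```python
# Illustrative greedy set-cover over assessed deficiencies; data invented.
EXERCISES = {
    "ASW-1":   {"sonar_ops", "screen_coordination"},
    "STK-2":   {"strike_planning", "deconfliction"},
    "COMMS-3": {"emcon", "deconfliction"},
    "FULL-4":  {"sonar_ops", "strike_planning", "emcon"},
}

def plan(deficiencies):
    remaining, chosen = set(deficiencies), []
    while remaining:
        # Pick the exercise covering the most still-uncovered deficiencies.
        best = max(EXERCISES, key=lambda e: len(EXERCISES[e] & remaining))
        if not EXERCISES[best] & remaining:
            break                       # nothing covers what's left
        chosen.append(best)
        remaining -= EXERCISES[best]
    return chosen, remaining

print(plan({"sonar_ops", "emcon", "deconfliction"}))
# -> (['COMMS-3', 'ASW-1'], set()) with this data
```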
Early directives have authorised DoD to use simulators to make training more effective and to help maintain military readiness. Agent-based training simulators permit:
1. Training that would be impractical with actual systems or equipment
2. Concentrated practice in selected normal and emergency actions
3. Training of operators to address possible equipment faults
4. Training of maintainers to diagnose equipment failure
5. Increases in proficiency despite shortages of equipment
6. Improvements in abilities even with reduced operational space
7. Increased availability of training time for operations
8. More access to testing/training ranges
9. Control of training costs over all phases of service life
10. Reduction of systems required in maintenance training.