The Site Visit Executive must continually assess Marine Corps approaches to systems training and engineering, including the use of agent models, simulation results, artificial intelligence techniques, and tools that support the training process, so the Corps can stay ahead of demands for new and upgraded weapons systems.
Using agent models is not a new concept; however, digital engineering will address long-standing challenges associated with complexity, uncertainty, and rapid change in deploying defense systems. By providing for more agile and responsive development, digital training/engineering supports engineering excellence and provides a foundation for mission success in the future.
Realising end-to-end digital enterprises, automating tasks and processes, and making smarter, faster decisions all require a next frontier of technologies that transform the way agents and machines interact. Advances in artificial intelligence have given rise to behavioural technologies capable of performing tasks that traditionally required the intelligence of commanders in the field.
While live training will always remain the standard against which Marine unit readiness is measured, even live training has its limits. It costs a lot of money to ship Marines out to Twentynine Palms or other areas. It costs money to fire munitions. Some of those munitions can’t be fired in most areas.
The Marines want simulators in which commanders can lead virtual troops.
Some of the advanced weapons can’t be demonstrated where just anyone can see them in action, thus revealing our tech to adversaries.
And that is where simulations can help bridge the gap.
But first, there’s a list of things that must come to fruition.
Machines are now able to build knowledge, continuously learn, understand language and interact with agents better than traditional systems. The Site Visit Executive's vision is for agents to interact with machines to make faster, information-driven decisions and to exploit information more effectively than commanders could on their own, while developing awareness of these technologies, evaluating opportunities for pilot tests, and demonstrating options for creating operational value.
Readiness block tasks show real-time agent behaviour, in which distributed sequence cues drive the formation of interactions in the mission/function constraint space.
We present the following framework for the representational system of a distributed artificial intelligence task for solving constraint problems by individual agents. This framework serves as a guide for our product demonstration report. In the process of addressing its requirements, project developed some new concepts that hold promise for broader application to distributed constraint optimisation.
Agents sponsor a product-support case study to delineate any limitations, constraints or boundary conditions, so that obstacles to executing coordinated field-level operations are reflected.
Much of what needs to happen is in areas of applications and bandwidth, basically getting better versions of terrains and simulations that are more realistic and can accommodate as much as a division’s worth of players and an equally complex, simulated adversary.
But some items are smaller and more hands-on, like better virtual reality and augmented reality headsets.
Those headsets are key since the Marines want them to work not as they do now, with pounds of cabling in bulky indoor shooting simulators but light with long-lasting batteries that can be taken in the field and on deployment.
Application problems in distributed artificial intelligence are concerned with finding a consistent combination of agent actions. They can be formalised as distributed constraint satisfaction problems, in which the goal is to find a consistent assignment of values to variables distributed among multiple automated agents.
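A minimal sketch of that formalisation follows: each agent owns one variable, constraints are pairwise "values must differ" relations (as in graph colouring), and a backtracking search finds a consistent joint assignment. The agent names, domains, and the centralised backtracking driver are illustrative simplifications, not the distributed algorithm itself.

```python
# Sketch of a distributed constraint satisfaction problem (DCSP): each
# agent owns one variable; constraints are not-equal relations between
# neighbouring agents, as in graph colouring. A centralised backtracking
# driver stands in for the distributed protocol for illustration only.

def solve_dcsp(agent_order, domains, constraints):
    """agent_order: agents in priority order.
    domains: dict agent -> list of candidate values.
    constraints: set of frozensets {a, b} whose values must differ."""
    assignment = {}

    def consistent(agent, value):
        return all(assignment.get(other) != value
                   for other in agent_order
                   if frozenset({agent, other}) in constraints)

    def backtrack(i):
        if i == len(agent_order):
            return dict(assignment)
        agent = agent_order[i]
        for value in domains[agent]:
            if consistent(agent, value):
                assignment[agent] = value
                result = backtrack(i + 1)
                if result is not None:
                    return result
                del assignment[agent]
        return None  # no consistent combination of agent actions

    return backtrack(0)
```

Used on a triangle of three mutually constrained agents with three values each, it returns an assignment in which every constrained pair differs.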
Motivated by the fact that, in practice, components with different costs and failure-time distributions from different agents can be available for the design of the same subsystem, the state of the art is advanced by presenting a solution sequence that determines both the combined optimal design configuration and the optimal operation of different standby series/parallel systems.
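The flavour of that design problem can be sketched as follows, under simplifying assumptions: each subsystem in a series system picks one component option (name, cost, reliability) and a redundancy level, and a brute-force search maximises system reliability within a budget. Active parallel redundancy is used here as a stand-in for the paper's standby structures, and all option data are illustrative.

```python
from itertools import product

# Sketch: brute-force redundancy allocation for a series system. Each
# subsystem chooses one component option and a parallel redundancy level
# k; subsystem reliability is 1 - (1 - r)**k and system reliability is
# the product over subsystems. Active (not standby) redundancy is an
# assumed simplification.

def best_design(options_per_subsystem, budget, max_redundancy=3):
    best = (0.0, None)
    choices = [
        [(cost * k, 1 - (1 - rel) ** k, name, k)
         for (name, cost, rel) in options
         for k in range(1, max_redundancy + 1)]
        for options in options_per_subsystem
    ]
    for combo in product(*choices):
        total_cost = sum(c[0] for c in combo)
        if total_cost > budget:
            continue                      # violates the cost constraint
        reliability = 1.0
        for c in combo:
            reliability *= c[1]
        if reliability > best[0]:
            best = (reliability, [(c[2], c[3]) for c in combo])
    return best
```

Exhaustive enumeration only works for small design spaces; the report's interest is precisely in methods that scale beyond it.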
High-level programming techniques have been created for next-generation sequence alignment tools, offering both productivity and well-defined performance. Sequences are lists of tasks that change according to some pattern. A pattern-based programming framework provides agents with high-level parallel patterns.
There is the problem of parallel alignment, so you must review how popular alignment tools function under a single high-level parallel strategy. By using a high-level approach, you need not be concerned with complex aspects of parallel programming, such as task scheduling, and can achieve seamless performance tuning.
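The high-level pattern idea can be sketched with a parallel "map": the programmer supplies only a per-item function, and worker management and scheduling stay hidden inside the executor. The toy scoring function is a stand-in, not a real alignment kernel.

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch of a high-level parallel map pattern: the per-item function is
# the only thing the programmer writes; task scheduling and worker
# management are hidden by the executor.

def score_alignment(pair):
    """Toy pairwise score: count matching positions (illustrative
    stand-in for a real alignment kernel)."""
    a, b = pair
    return sum(x == y for x, y in zip(a, b))

def parallel_map(fn, items, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fn, items))
```

Swapping the executor (threads, processes, a cluster backend) tunes performance without touching the alignment logic, which is the portability the pattern-based framework is after.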
You'll need to differentiate between sequence numbers and unique sortable identifiers ordered by a specific criterion, typically generation time. True sequence numbers imply knowledge of what all other agents have done, so shared state is required. It is virtually impossible to do this in a distributed, high-scale manner.
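A minimal sketch of the alternative: an identifier with a fixed-width timestamp prefix and a random suffix sorts by generation time yet needs no shared counter among agents. This is an illustrative scheme of my own, not any particular production format.

```python
import os
import time

# Sketch: a time-sortable unique identifier. The zero-padded millisecond
# prefix makes IDs sort lexicographically by generation time; the random
# suffix makes collisions unlikely without any shared state.

def sortable_id():
    millis = int(time.time() * 1000)
    suffix = os.urandom(8).hex()         # 16 hex characters
    return f"{millis:016d}-{suffix}"
```

Two IDs generated a few milliseconds apart compare correctly as plain strings, which is all a distributed, high-scale system can realistically guarantee.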
There is an increasingly pressing need, across several applications in multiple distinct domains, for techniques able to index and mine very large collections of sequences summed into series.
We present our goals for the future in big sequence administration and mining. More effort should concentrate on parallel/distributed solutions, which have received little attention.
Many Commanders confuse the distinct concepts of parallelism and distribution. Parallel Search refers to the distribution of the search space and Distributed Unlinked Search to the distribution of mission/function constraint problems solved by agents.
A certain amount of parallelism exists in any Distributed Unlinked Search and it increases with stronger association. However, in comparison to Parallel Search, the parallel effort in Distributed Unlinked Search can be characterised by redundant mission performance tasks.
Moreover, agents in Unlinked Search can have periods of inactivity which are less frequent in Parallel Search. Since Distributed Search is the only solution for certain classes of classic mission-oriented distributed problems, we show here how one can integrate the idea of Parallel Search in Distributed Unlinked Search.
Distributed artificial intelligence is concerned with interaction, especially coordination between agents exhibiting auto behaviour. Since distributed network solution strategies are spreading very rapidly due to tech advances, commanders have pressing needs for distributed techniques in mission readiness determination.
Agents are grouped to form clusters of similar agents, and these clusters are treated as new agents in the process. This establishes groups of similar, optimal product configurations, enabling parts of the product configuration to be identified and used either to create a range of products or to choose among several proposals.
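A minimal sketch of that grouping step, under assumed inputs: agents are described by feature sets, similarity is feature overlap (Jaccard), and a greedy pass attaches each agent to the first cluster whose representative is similar enough. The threshold and features are illustrative.

```python
# Sketch: greedy grouping of agents into clusters of similar agents;
# each cluster can then be treated as a single new agent. Similarity is
# Jaccard overlap of capability features; the threshold is an assumption.

def cluster_agents(agents, threshold=0.5):
    """agents: dict name -> set of capability features."""
    clusters = []
    for name, feats in agents.items():
        for cluster in clusters:
            rep = cluster[0][1]          # compare with the first member
            overlap = len(feats & rep) / len(feats | rep)
            if overlap >= threshold:
                cluster.append((name, feats))
                break
        else:
            clusters.append([(name, feats)])
    return [[name for name, _ in c] for c in clusters]
```

Real configuration clustering would use richer similarity measures, but the structure (cluster, then treat the cluster as one agent) is the same.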
When a commander finishes the fight, they should be able to query the virtual enemy and figure out why it did what it did, how it gained a certain advantage.
And it shouldn’t take a programmer to “talk” with the simulation. Units communicate via voice and chat. That’s how simulation users must be able to talk with their simulated allies and enemies: in plain language.
These pursuits are not happening in a vacuum. They were done at a battalion level with a short prep time, far different from the large-scale Marine Expeditionary Unit or Marine Expeditionary Brigade-sized training that is typical.
That is part of a larger effort to create a “plug-and-play” type of training module that any battalion, and later smaller units, can use at home station or on deployment to conduct complex, coordinated training.
What made that work new was pairing legacy systems that had a variety of operating systems between them.
That’s another example of what needs to be fixed.
Solution agents must be evaluated: a rating is assigned to each of them depending on its consistency with the requirements imposed by the customer, the constraints determined by the experts, and the functions the product must perform.
Each solution agent determines the optimal configuration for the product concerned based on its local point of view. Then the configurations are evaluated considering elements of agent-based systems.
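The rating scheme described above can be sketched as follows, with illustrative requirement and constraint functions: a configuration that violates any expert constraint is infeasible, and feasible ones are scored by how many customer requirements they satisfy.

```python
# Sketch: rating candidate product configurations. A configuration is
# rejected if it violates any expert constraint; otherwise its rating
# counts the customer requirements it satisfies. All names are
# illustrative assumptions.

def rate_configuration(config, requirements, constraints):
    if not all(check(config) for check in constraints):
        return None                       # infeasible configuration
    return sum(req(config) for req in requirements)

def best_configuration(configs, requirements, constraints):
    rated = [(rate_configuration(c, requirements, constraints), c)
             for c in configs]
    feasible = [(score, c) for score, c in rated if score is not None]
    return max(feasible, key=lambda t: t[0], default=(None, None))[1]
```

Each solution agent would run this evaluation from its own local point of view, then the proposals would be compared across agents.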
Both agents and mission space can be either simulated or real-world entities. The distinction is important since an agent-based system can be purely a simulation, a collection of real agents living in a real mission space, or a hybrid, e.g., real-world agents living in a simulated mission space.
Any comprehensive proposed architecture must not only accommodate the possible types of agent-based systems but also support transition from one to another.
We have proposed some fundamental requirements for modeling and simulation of agent-based systems, grouped into categories based on support for 1) architectural integrity, 2) modeling agents and their mission space, and 3) computational foundation.
Several important issues must be dealt with in order to build agents capable of operating in their intended scenarios. Of particular importance are: accounting for agent and mission-space complexities (e.g., assumptions such as complete and error-free information about the mission space, an agent's complete knowledge of its mission space, and an agent's capability to fully achieve its goals); providing a well-defined model of time; supporting multiple agents; providing well-defined interfaces between the agents and the mission space; and handling exogenous events.
Marines and other services are, in many cases, using systems that were designed decades apart and creating a patchwork of methods to get the hardware to work together when it wasn’t built for that type of operation.
The new systems must be open-architecture so that new tech, new weapons and new terrain can be added on the fly, but also secure enough to operate across networks without being spied upon by those who would want a peek at our tactics.
We have included these issues within the set of requirements for architectural support of agent-based system development:
Architecture should encourage reuse, allow a layer to be exchanged with another through well-defined interfaces, be only loosely implementation-dependent where necessary, and support both procedural and declarative knowledge representation.
Support for reuse can range from component level to layers of the proposed architectures. At the component level, a sensor or its model may be made reusable. More challenging is the ability to reuse a layer or a combination of layers. To achieve reuse for layers, as it is required at the component level, well-defined interfaces are needed. Realisation of interfaces, however, is considerably more difficult.
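The well-defined interface idea can be sketched with an abstract sensor layer: any implementation honouring the interface can be swapped in without touching the layers above it. The class names and the shape of the mission-space data are illustrative assumptions.

```python
from abc import ABC, abstractmethod

# Sketch: a well-defined interface that lets one sensor-model layer be
# exchanged for another. Higher layers depend only on the interface,
# which is what makes layer-level reuse possible.

class SensorLayer(ABC):
    @abstractmethod
    def sense(self, mission_space: dict) -> dict:
        """Return observations derived from mission-space state."""

class PerfectSensor(SensorLayer):
    """Complete, error-free view of the mission space."""
    def sense(self, mission_space):
        return dict(mission_space)

class RangeLimitedSensor(SensorLayer):
    """Only entities within max_range are observed."""
    def __init__(self, max_range):
        self.max_range = max_range
    def sense(self, mission_space):
        return {k: v for k, v in mission_space.items()
                if v.get("distance", 0) <= self.max_range}
```

Replacing `PerfectSensor` with `RangeLimitedSensor` changes the agent's assumptions about mission-space information without any change to the code that consumes observations.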
Architecture should treat modeling and simulation/execution as distinct layers. The separation of modeling and simulation activities has a big impact on reusability and portability in integrated concurrent engineering. Existing commercial/research tools tend to support either depth in modeling of decision behaviors or depth in traditional simulation concerns such as production facilities, output assessments, etc.
However, few tools attempt to support constructing models with decision-agent behaviour that can be simulated in a realistic mission space with the full power of traditional simulation systems. Such tools tend to tightly bind their modeling and simulation facilities, so that models developed can only be executed by the simulation engine provided by the tool. So despite being capable of modeling agent behaviours, the models so developed cannot be tested in a realistic simulated mission space.
The Marine Corps Warfighting Lab just finished a rapid capability assessment of a pair of goggles equipped with augmented reality that allow artillery maintainers to work on three-dimensional digital models of M777 155mm howitzers.
Marines like it ... "You can tell what's missing, what's broken, what's cracked. It can't do much for us right now, but when Marines were back at the schoolhouse, this would have helped out a lot to actually see parts in the howitzer. Some Marines are very visual learners; looking at a schematic doesn't help us much."
"Within training, it runs the spectrum. It can be maintainer training, it can be infantry training, it can be gun-drill training. We were talking to some snipers earlier. This could be used on a sniper training range, where you have the snipers crawling through the grass trying to get within shot range and not be observed while they are doing so.
"Currently, how are they being observed? Through a telescope. You can augment that telescope, which uses the human eyeball, with the laser range-finding the goggles are capable of, to pick up variances in the terrain in order to better detect those snipers, which will make them better snipers because now they've got to beat technology."
The modeling and simulation subsets should be fully integrated, i.e., based on the same structural context for their modeling constructs. Separation of models from simulators also has an important secondary benefit: it opens up the possibility of replacing the simulation engine with an execution mission space, so that models are executed in real time as well as logical time.
This would make it easier to migrate agent models from simulation to actual operation after fully testing their logistics capacity. The transition of models from the design phase to the implementation phase is a key feature of simulation-based acquisition efforts.
Modular model structure supports development and testing of complex agent architectures. To avoid the pitfalls of huge models, it is necessary to adopt a modular model representation scheme. Modular construction enables verification and validation at every stage of a decomposition ordered by multiple levels.
Systematic model selection and composition based on generalisation and granularity constraints (i.e., multi-resolution) should be supported. Since model designers are generally faced with alternative choices, such as specialisation and multiple-level decomposition for a given model, it is important to be able to represent a family of models within a single exact structure.
Such model representation schemes allow model designers to compose many variations of models using a set of well-defined operations: choosing one model variation over another, or putting together a large model from alternative sets of model components.
It is key to specify basic modeling constructs and model components early. To support the flexibility demanded by any agent architecture, its modeling environment should provide basic modeling constructs as well as modeling components.
Generic modeling constructs are primitive types that can be employed to represent varying levels of model components. Such components can range from generic to highly domain-specific. For example, lower-level model components might be different kinds of generic queues, while higher-level, domain-specific model components might be processors servicing time-critical commands of a robot. The model components, i.e., agents, are "canned" components with well-defined input/output interfaces and behaviour.
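One such "canned" component can be sketched directly: a generic priority queue with a well-defined put/get interface, which a domain-specific component (say, a processor servicing time-critical commands) could reuse unchanged. The class name is illustrative.

```python
import heapq

# Sketch: a generic "canned" model component -- a priority queue with a
# well-defined put/get interface. An insertion counter breaks priority
# ties so equal-priority items come out in FIFO order.

class PriorityQueueComponent:
    def __init__(self):
        self._heap = []
        self._count = 0
    def put(self, item, priority=0):
        heapq.heappush(self._heap, (priority, self._count, item))
        self._count += 1
    def get(self):
        return heapq.heappop(self._heap)[2]
    def empty(self):
        return not self._heap
```

A time-critical command with priority 1 is serviced before a routine one with priority 5, regardless of arrival order.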
The architecture must support realistic virtual environments in which agent behaviours can be tested. It should support construction of highly responsive physical and behavioural mission spaces that stand in for the challenging real-world counterparts in which agents are designed to function. This is a particular strength of an architecture that incorporates state-of-the-art simulation capabilities.
How the modeled behavior of agents and the mission space is executed depends on the classification of the agent-based system and on support for collaborative model development and model repositories. Collaborative modeling enables dispersed modelers to develop modular model levels, both coupled and uncoupled.
In such a cooperative working setting, model repositories are essential to support efficient and systematic model reuse and integration. Such repositories offer many benefits for maintaining and using models. Repositories can be built on widely employed relational systems, making them scalable and providing standard queries for access to model content.
But many challenges remain in supporting collaborative development of level-based modular models and components within a distributed, networked mission space. For example, a lot of work is required to develop workable schemes for assigning ownership rights across enterprises, and within enterprises to the functional teams participating in a model development effort.
On deployment, Marines can’t rely on a cadre of contractors back home to run their hardware. To that end, the Corps has stood up the Simulation Professional Course and the Simulations Specialist Course.
Both give Marines in infantry units experience setting up simulations and running the games for their units. They input training objectives and can understand and put together training for the unit staff or just for their fire team back in the barracks.
Real-time distributed and parallel execution should be supported to enable execution of simulated agents, deal with model complexity, make effective use of distributed, heterogeneous computing platforms, and facilitate assessment of large systems. The simulation architecture should enable distribution of a model composed of several modules across nodes within a network.
Distribution should be automated and distribution policies could take into account load balancing requirements, mobility of agents, and other state-dependent factors. Moreover, the architecture should provide for both logical-time and real-time execution of models to accommodate the types of simulated and real agent-based systems and their interoperation.
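The logical-time half of that requirement can be sketched in a few lines: events execute in timestamp order regardless of wall-clock time, and swapping the run loop for one paced by a real clock would give real-time execution of the same model. This is a minimal illustration, not a full simulation engine.

```python
import heapq

# Sketch: a minimal logical-time event scheduler. Events fire in
# timestamp order; a sequence counter breaks ties so callables are
# never compared. Pacing the loop against a wall clock instead would
# yield real-time execution of the same model.

class LogicalTimeSimulator:
    def __init__(self):
        self._events = []
        self._seq = 0
        self.now = 0.0
    def schedule(self, delay, action):
        heapq.heappush(self._events, (self.now + delay, self._seq, action))
        self._seq += 1
    def run(self):
        while self._events:
            self.now, _, action = heapq.heappop(self._events)
            action(self)
```

An event scheduled at delay 1 runs before one scheduled at delay 2, even if it was scheduled later, which is exactly the logical-time guarantee.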
Support for information distribution comprises the set of services that attempt to reduce message-interchange traffic without impacting the accuracy of the simulation. Service sets include message filtering, in which subscribers declare their interest in receiving updates from a subset of status posters. To the extent that information filtering mitigates growth in capacity requirements for large numbers of entities, it is a critical function.
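A minimal sketch of that filtering service, with illustrative class names: subscribers declare topics of interest, and the router delivers a status post only to interested agents, so uninterested ones never see the traffic.

```python
from collections import defaultdict

# Sketch: interest-based message filtering. Subscribers declare topics;
# the router delivers each status post only to agents interested in its
# topic, reducing traffic without dropping relevant updates.

class Agent:
    def __init__(self):
        self.inbox = []

class InterestRouter:
    def __init__(self):
        self._subs = defaultdict(list)
        self.delivered = 0               # messages actually sent
    def subscribe(self, agent, topic):
        self._subs[topic].append(agent)
    def publish(self, topic, message):
        for agent in self._subs.get(topic, []):
            agent.inbox.append((topic, message))
            self.delivered += 1
```

With many entities, the delivered count grows with declared interest rather than with the raw number of posts, which is the capacity saving the text describes.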
Agent roles and responsibilities are defined by organisations and are expected to translate action into performance for current task demands. This approach, as introduced into artificial intelligence, typically considered distributed systems in which each individual possesses the potential for action based on events.
With such capabilities, individual agents in an organisation have capacity to make reasonable local decisions about what they should do given what they know about their tasks and mission space, as well as what they know about what others are likely to be doing.
Marines can create their own terrain maps and fight the simulated fight in the areas they’ll really be operating in.
But one step further is key: The enemy has to talk back.
It is precisely this last bit of knowledge--about what others are likely to be doing--that is available thanks to knowing the organisational structure. That is, if a participant knows the roles and responsibilities of others, it can make more informed decisions about what to do locally and how to interact with them.
And those video feeds that are now on every ISR platform in the real world? Simulations need them too, to be realistic. That means game designers have to have human-like activity going on in areas.
When a commander wants to zoom in on a tactical frame in the game, they’ll be able to do it just like in theater.
Which brings it to one of the more ambitious items beyond terrain and hardware: getting simulations to act more like humans.
As it works now, unit commanders set up their forces, work their mission sets and then the virtual “forces” collide and often a scripted scenario plays out.
Not too realistic.
What’s needed is simulations to act like populations might act in the real world and the same for the enemy, taking advantages, fighting and withdrawing.
This scenario in turn means agents should abide by their designated roles, so each agent must be able to focus its local decisions toward fulfilling its responsibilities.
Computational representations for organisations have been developed in terms of interest areas for agents. An individual agent's interest areas indicate what kinds of information-processing tasks it is willing and able to tackle, and to what degree.
Faced with a variety of possible actions to take, an agent would be influenced (to a degree open to modification by an experimenter) by how well those actions fit within its most preferred areas of interest.
Because each agent knows the interest areas of the others, each can identify processing tasks, or information, that would potentially be of interest to them, focusing communication so that the group eventually converges to a state where all the most important tasks are accomplished in a focused, distributed manner.
Smart design of agent interest areas is of primary importance to the success of this approach. Interest areas that are too narrow could mean that processing tasks become unevenly distributed among the agents, leading to longer delays until overall task completion.
Moreover, if a subset of the agents failed to participate, for instance because a network connecting them crashed, then some tasks would be left unaccomplished and the overall task would fail to be completed.
On the other hand, if interest areas were broadly defined so as to increase reliability and the chances that every agent would have something useful to do, then the situation could quickly deteriorate with agents duplicating effort and working at cross purposes.
In addition, by making every agent more of a “generalist,” communication among agents would explode because everyone would potentially be interested in everything.
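The trade-off described above can be sketched concretely: tasks go to the least-loaded agent whose interest areas cover them, and counting the candidate agents per task shows how broader interests inflate potential communication. Agent names, task kinds, and the load-balancing rule are illustrative assumptions.

```python
# Sketch: interest-area task allocation. Each task goes to the
# least-loaded agent whose declared interests cover the task's kind.
# The candidate count per task is a proxy for communication load:
# broader interests mean more candidates, hence more messages.

def allocate(tasks, interest_areas):
    """tasks: list of (task_name, kind).
    interest_areas: dict agent -> set of task kinds it will handle."""
    assignment, candidates_per_task = {}, {}
    for task, kind in tasks:
        candidates = [a for a, kinds in interest_areas.items()
                      if kind in kinds]
        candidates_per_task[task] = len(candidates)
        if candidates:
            # crude load balancing: fewest current assignments first
            assignment[task] = min(
                candidates,
                key=lambda a: sum(1 for v in assignment.values() if v == a))
    return assignment, candidates_per_task
```

Narrow interest areas would leave some candidate lists empty (unaccomplished tasks); making every agent a generalist would push every candidate count toward the number of agents.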
Marine Corps and other services are focused on finding ways to use augmented reality in training.
"It seems unlikely that it's going to go away. We have Marine Corps Systems Command, interested in augmented reality ... and we have all been talking about these systems and what they are capable of."
Potential performance measures to be initially considered:
1. Response Time is the total time taken to accomplish a task. It is also called the turnaround time.
2. Throughput is the number of tasks accomplished per unit time. Without a definition of a unit task, this measure is not well defined.
3. System Utilisation is the fraction of the total system capacity being used at any given time. For a given resource, it is the fraction of time the resource is busy.
4. Communication Cost is the cost of transmitting a number of bits across the channel. If time is used as the cost, it may include the connection time plus the time to transmit a number of messages across the channel.
5. Sometimes the number of bits or message packets transmitted across the communication channel may be used as the measure of communication cost.
6. Reliability refers to the probability that the system or component under consideration does not experience any failures in a given time interval. It is used to describe systems that cannot be repaired, or whose operation is so critical that no downtime for repair can be tolerated.
7. When system is composed of multiple subsystems and/or components, the reliability of each component can be used to evaluate the reliability of the total system. By using redundant components, the system reliability can be improved.
8. Availability refers to the probability that the system is operational according to its specification at a given point in time.
9. Yardsticks can be used to measure a system's potential to be repaired, for systems that can be out of service for short periods of time during repair.
10. Solution Quality refers to some objective measure of the quality of task results defined for particular task domains.
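The first three measures above can be computed directly from a task log, sketched here under the assumption of a single resource whose tasks do not overlap. The log format and observation period are illustrative.

```python
# Sketch: computing response time, throughput, and utilisation (measures
# 1-3 above) from a log of (start, finish) times on a single resource.
# Assumes non-overlapping tasks; units are illustrative.

def performance_measures(task_log, observation_period):
    response_times = [finish - start for start, finish in task_log]
    busy = sum(response_times)           # single resource, no overlap
    return {
        "mean_response_time": sum(response_times) / len(response_times),
        "throughput": len(task_log) / observation_period,
        "utilisation": busy / observation_period,
    }
```

Three tasks taking 2, 2, and 1 time units inside a 10-unit observation window give a throughput of 0.3 tasks per unit time and a utilisation of 0.5, with the idle gap between tasks counted against utilisation.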