That information, as well as other data from platforms participating in the exercise, was then pushed to a command and control post where leaders could watch updates in real time.
The technology under development in the ABMS program will give platforms the ability to receive, fuse, and act upon a massive collection of data from all domains instantaneously.
ABMS will require software and algorithms “so that artificial intelligence and machine learning can compute and connect vast amounts of data from sensors and other sources at a speed and accuracy far beyond what is currently attainable,” as well as hardware updates that include “radios, antennas, and more robust networks.”
The battlegroup's stock of radios and other tactical communications systems is slated for an upgrade. Through a series of experiments, soldier feedback is being used to determine what its future communications systems will look like on tomorrow’s battlefields.
The teams are pursuing different types of servers down to the battalion level to increase the mobility of the unit. Typically, battalions need to rely on a brigade headquarters, but integrating new servers would eliminate that need.
“The brigade would have to be out in the field for all the battalions to be able to talk to each other. This breaks that paradigm by moving some of that compute capability down to the battalions … so they can operate independently.”
There are also additional ways to connect or extend system ranges. The Army has tethered drones that can elevate radios 200 or 300 feet in the air for better connectivity and data throughput. The Army wants its tactical communications network to have modular systems so it can make quick upgrades.
“If we have modular systems, the whole system isn’t necessarily being pulled out, maybe just a part of this system of systems, and that allows us creativity and allows us modernization.” As the service moves ahead in its capability sets, it will need to build upon previously fielded equipment.
Successful applications of agent technology focus on a particular capability (e.g., communication, planning, learning) and seek practical problems that demonstrate the usefulness of this capability. The design engineer has a practical problem to solve, and cares much more about the speed and cost-effectiveness of the solution than about its elegance or sophistication.
To the engineer, it offers an overview of the kinds of design problems faced, and some examples of agent technologies that have made their way into practical application.
To the engineer it explains why agents are not just the latest technical hype, but a natural match to the characteristics of a broad class of real problems. It includes selected design and development projects that are not yet industrial strength, but that embody industrially important concepts or are being conducted in a way likely to lead to deployable technology.
It also emphasises agent applications in manufacturing and physical control over other fielded industrial applications such as information-gathering agents, network management, or business planning agents.
Like any other technology, agents are best used for problems whose characteristics require their particular capabilities. Agents are appropriate for applications that are modular, decentralised, changeable, poorly structured, and complex. In some cases, a problem may naturally exhibit or lack these characteristics, but many industrial problems can be formulated in different ways.
In these cases, attention to these characteristics during problem formulation and analysis can yield a solution that is more robust and adaptable than one supported by other technologies. Agents are pro-active objects, and share the benefits of modularity enjoyed by object technology. They are best suited to applications that fall into natural modules.
An agent has its own set of state variables, distinct from those of the environment. Some subset of the agent's state variables is coupled to some subset of the environment's state variables to provide input and output. An industrial entity is a good candidate for agency if it has a well-defined set of state variables that are distinct from those of its environment, and if its interfaces with that environment can be clearly identified.
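This state-based view can be sketched in a few lines of Python. The thermostat scenario and all names here are illustrative, not drawn from the text: the agent owns its state variables (setpoint, heater status), distinct from the environment's, and only a small coupled subset of environment state is read and written through the sense/act interface.

```python
class Environment:
    def __init__(self):
        # Environment state; only 'temperature' is coupled to the agent.
        self.state = {"temperature": 20.0, "pressure": 101.3}

class ThermostatAgent:
    def __init__(self, setpoint):
        # The agent's own state variables, distinct from the environment's.
        self.setpoint = setpoint
        self.heater_on = False

    def sense(self, env):
        # Input coupling: read only the shared subset of environment state.
        return env.state["temperature"]

    def act(self, env):
        # Output coupling: the agent's decision feeds back into the environment.
        temp = self.sense(env)
        self.heater_on = temp < self.setpoint
        if self.heater_on:
            env.state["temperature"] += 0.5

env = Environment()
agent = ThermostatAgent(setpoint=21.0)
agent.act(env)
print(agent.heater_on)  # heater switches on because 20.0 < 21.0
```

Note that the environment's `pressure` variable is never touched: a well-bounded agent interacts only through its declared couplings.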
The state-based view of the distinction between an agent and its environment suggests that functional decompositions are less well suited to agent-based systems than are physical decompositions. Functional decompositions tend to share many state variables across different functions.
If separate agents are assigned to separate functions, they must share many state variables, leading to problems of consistency and unintended interaction. A physical decomposition naturally defines distinct sets of state variables that can be managed efficiently by individual agents with limited interactions. The choice between functional and physical decomposition is often up to the system analyst.
Emphasising the physical dimension enables more modular applications. Because the agent characterizes a physical entity, that entity can be redeployed with minimal changes to the agent's code. As a result, the cost of design reconfiguration drops dramatically, and reusability increases.
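The contrast between the two decompositions can be sketched as follows, with purely hypothetical names: in the functional version, separate functions share one state dictionary, while in the physical version each machine agent owns its state exclusively.

```python
# Functional decomposition: planning and monitoring share state variables.
shared_state = {"machine_a_load": 0, "machine_b_load": 0}

def plan(state):
    state["machine_a_load"] += 1      # both functions touch the same variable,

def monitor(state):
    _ = state["machine_a_load"]       # risking unintended interaction.

# Physical decomposition: each machine agent owns its state exclusively.
class MachineAgent:
    def __init__(self, name):
        self.name = name
        self.load = 0                 # distinct state, nothing shared

    def accept_job(self):
        self.load += 1

a, b = MachineAgent("A"), MachineAgent("B")
a.accept_job()
print(a.load, b.load)  # agent A's change cannot corrupt agent B's state
```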
Decentralisation is important because an agent is more than an object; it is a pro-active object, a bounded process. It does not need to be invoked externally, but autonomously monitors its own environment and takes action as it deems appropriate. This characteristic of agents makes them particularly suited for applications that can be decomposed into stand-alone processes, each capable of doing useful things without continuous direction by some other process.
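A minimal sketch of this sense-decide-act autonomy, with hypothetical names (a real system would run the monitoring loop in its own thread or process rather than over a fixed list of readings):

```python
class ConveyorAgent:
    """A pro-active object: it decides on its own when to act."""
    def __init__(self, jam_threshold=0.9):
        self.jam_threshold = jam_threshold
        self.jams_cleared = 0

    def monitor(self, readings):
        # The agent polls its environment; no external caller tells it
        # when to clear a jam.
        for sensor_value in readings:
            if sensor_value > self.jam_threshold:
                self.clear_jam()

    def clear_jam(self):
        self.jams_cleared += 1

agent = ConveyorAgent()
agent.monitor([0.2, 0.95, 0.4, 0.97])   # agent acts twice, uninvoked
print(agent.jams_cleared)  # 2
```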
Many industrial processes can be organised in either a centralised or a decentralised fashion. Centralised organisations rely on a central authority and an elaborate bureaucracy to manage the flow of control down and the flow of information back up.
There is an alternative approach. The power of decentralisation has been made clear by the contrast in performance between the two. Modern industrial strategists seek to eliminate excessive layers of management, push decision-making down to the lowest level, and develop the vision of the "virtual enterprise," formed for a particular market opportunity from a collection of independent firms with well-defined core competencies.
Agents are well suited to modular problems because they are objects. They are well suited to decentralised problems because they are pro-active objects. These two characteristics combine to make them especially valuable when a problem is likely to change frequently.
Modularity permits the system to be modified one piece at a time. Decentralisation minimizes the impact that changing one module has on the behavior of other modules. Modularization alone is not sufficient to permit frequent changes. In a system with a single thread of control, changes to one module can cause later modules, those it invokes, to malfunction.
Decentralisation decouples the individual modules from one another, so that errors in one module impact only those modules that interact with it, leaving the rest of the system unaffected. From an industrial perspective, the ability to change a system quickly, frequently, and without damaging side effects is increasingly important to competitiveness.
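The decoupling effect can be illustrated with a toy producer/consumer pair connected only by a message queue; the names and the injected fault are invented for illustration. A defect in the producer corrupts only the messages it emits, and the consumer contains the damage instead of letting it propagate through the rest of the system.

```python
from queue import Queue

def faulty_producer(out_q):
    for i in range(3):
        out_q.put(i)
    out_q.put(None)            # a defect in this module emits a bad message...

def robust_consumer(in_q, results):
    while not in_q.empty():
        item = in_q.get()
        try:
            results.append(item * 2)   # ...which affects only this interaction;
        except TypeError:
            results.append(None)       # the rest of the system keeps running.

q, results = Queue(), []
faulty_producer(q)
robust_consumer(q, results)
print(results)  # [0, 2, 4, None]
```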
The fundamental challenge in applying agents to both planning and control is satisfying a global criterion on the basis of parallel local decisions. In spite of the benefit centralisation has in dealing with control criteria, case studies show that many users have found agents an even better approach.
Operational systems must be maintained, and it is much easier to maintain a set of well-bounded modules than to make changes to a large programme. The move toward supply chains means that the manufacturing system is geographically distributed, and agent decentralisation reduces communication bottlenecks and permits local parts of the enterprise to continue operation during temporary lapses in connectivity.
Competitiveness increasingly depends on adjusting system operation frequently to track customer requirements, benefiting from the ability of agent systems to undergo change. The ability of agents to deal with poorly structured systems is less important in the operation of an engineered system than in its design.
However, the ability of agents to deal with complex changing structures means that computers can now be applied to direct systems, such as networks of trading partners, that formerly required extensive manual attention. The increased complexity agents can direct also extends the scope of operational problems to which the agent approach can be applied.
Here we present a simple classification of the elements of agent-based systems: agents and their mission space. Both agents and the mission space can be either simulated or real-world entities. The distinction is important since an agent-based system can be purely a simulation, a collection of real agents living in a real mission space, or a hybrid, e.g., real-world agents living in a simulated mission space.
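This two-axis classification can be captured in a few lines of Python; the enum and function names are illustrative.

```python
from enum import Enum

class Realisation(Enum):
    SIMULATED = "simulated"
    REAL = "real"

def classify(agents, mission_space):
    # Both axes simulated -> pure simulation; both real -> fully real;
    # any mixture is a hybrid system.
    if agents is Realisation.SIMULATED and mission_space is Realisation.SIMULATED:
        return "pure simulation"
    if agents is Realisation.REAL and mission_space is Realisation.REAL:
        return "fully real system"
    return "hybrid"

print(classify(Realisation.REAL, Realisation.SIMULATED))  # hybrid
```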
We have suggested some fundamental requirements for modeling and simulation of agent-based systems and provide categories based on support for 1) architectural integrity, 2) modeling agents and their mission space, and 3) computational foundation. Several important issues must be dealt with in order to build agents capable of functioning in their intended scenarios.
Of particular importance here are the following issues: accounting for agent and mission space complexities (e.g., avoiding assumptions such as complete and error-free information about the mission space, an agent's complete knowledge of its mission space, or an agent's capability to fully achieve its goals), providing a well-defined model of time, supporting multiple agents, providing well-defined interfaces between the agents and the mission space, and handling exogenous events.
We have included these issues within the set of requirements for architectural support of agent-based system development:
1. Architecture must be layered to support well-known traits such as extensibility, scalability, and portability. Other key benefits offered by a layered architecture are process/storage management, persistence, and fault tolerance.
2. Architecture should encourage reuse, allow a layer to be exchanged with another through well-defined interfaces, remain only loosely implementation dependent, and support both procedural and declarative knowledge representation. Support for reuse can range from the component level to entire layers of the proposed architecture. At the component level, a sensor or its model may be made reusable. More challenging is the ability to reuse a layer or a combination of layers. To achieve reuse for layers, well-defined interfaces are needed just as they are at the component level. Realisation of such interfaces, however, is considerably more difficult.
3. Architecture should treat modeling and simulation/execution as distinct layers. The separation of modeling and simulation activities has a significant impact on reusability and portability in integrated concurrent engineering. Existing tools tend to support either depth in modeling of decision behaviours or depth in traditional simulation concerns such as production facilities and output assessments. Few tools, however, attempt to support constructing models with decision-agent behaviour that can be simulated in a realistic mission space with the full power of traditional simulation systems. Such tools tend to bind their modeling and simulation facilities so tightly that the models developed can only be executed by the simulation engine the tool provides. So despite being capable of modeling agent behaviours, the models so developed cannot be tested in a realistic simulated mission space. The modeling and simulation subsets should instead be fully integrated, i.e., based on the same structural context for their modeling constructs. Separation of models from simulators also has an important secondary benefit: it opens up the possibility of replacing the simulation engine with an execution mission space, so that models can be executed in real time as well as logical time. This would make it easier to migrate agent models from simulation to actual operation after their logic has been fully tested. Transition of models from the design phase to the implementation phase is a key feature of simulation-based acquisition efforts.
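One way to read this requirement in code, under assumed names: the model exposes only a `step()` interface, so a logical-time simulator and a real-time executor can drive the very same model without modification.

```python
import time

class CounterModel:
    """A trivial passive model: state advances once per step."""
    def __init__(self):
        self.count = 0

    def step(self):
        self.count += 1

class LogicalTimeSimulator:
    # Advances the model as fast as possible; time is just a step counter.
    def run(self, model, steps):
        for _ in range(steps):
            model.step()

class RealTimeExecutor:
    # Same model interface, but each step is pinned to wall-clock time.
    def __init__(self, period_s):
        self.period_s = period_s

    def run(self, model, steps):
        for _ in range(steps):
            model.step()
            time.sleep(self.period_s)

model = CounterModel()
LogicalTimeSimulator().run(model, 100)   # engine swapped without touching the model
print(model.count)  # 100
```

Because the model never references its driver, migrating it from the logical-time simulator to the real-time executor requires no change to the model itself.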
4. A modular model structure supports development and testing of complex agent architectures. To avoid the pitfalls of huge monolithic models, it is necessary to adopt a modular model representation scheme. Modular construction enables verification and validation at every stage of a decomposition organised into multiple levels.
5. Systematic model selection and composition based on generalisation and granularity constraints (i.e., multi-resolution) should be supported. Since model designers are generally faced with alternative choices, specialisation and multiple-level decomposition, for a given model, it is important to be able to represent a family of models in a single structure. Such model representation schemes allow model designers to compose many variations of models using a set of well-defined operations: choosing one model variation over another, or putting together a large model from alternative sets of model components.
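A minimal sketch of composing one concrete model from a represented family of variants; the component names, variants, and operations are invented for illustration.

```python
# A "family" of model variants: one entry per component, several
# resolutions or behaviours per entry.
FAMILY = {
    "sensor":   {"coarse": lambda x: round(x),
                 "fine":   lambda x: round(x, 2)},
    "dynamics": {"linear":     lambda x: x * 2,
                 "saturating": lambda x: min(x * 2, 10)},
}

def compose(choices):
    # Well-defined operation: pick one variant per component and chain them
    # into a concrete model.
    sensor = FAMILY["sensor"][choices["sensor"]]
    dynamics = FAMILY["dynamics"][choices["dynamics"]]
    return lambda x: dynamics(sensor(x))

coarse_linear = compose({"sensor": "coarse", "dynamics": "linear"})
print(coarse_linear(3.7))  # round(3.7) = 4, then 4 * 2 = 8
```

Swapping `"coarse"` for `"fine"` or `"linear"` for `"saturating"` yields a different member of the same family without rewriting any component.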
6. It is key to specify generic modeling constructs and model components. To support the flexibility demanded by any agent architecture, its modeling environment should provide basic modeling constructs as well as modeling components. Generic modeling constructs are basic types which can be employed to represent model components at varying levels. Such components can range from generic to highly domain specific. For example, lower-level model components might be different kinds of generic queues, while higher-level domain-specific model components might be processors servicing time-critical commands of a robot. The model components, i.e., agents, are "canned" components with well-defined input/output interfaces and behaviour.
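The construct/component distinction can be sketched with hypothetical names: a generic queue construct is reused inside a domain-specific, "canned" robot-command component with well-defined input/output interfaces.

```python
from collections import deque

class GenericQueue:
    """Lower-level, generic modeling construct: a plain FIFO queue."""
    def __init__(self):
        self._items = deque()

    def put(self, item):
        self._items.append(item)

    def get(self):
        return self._items.popleft()

    def __len__(self):
        return len(self._items)

class RobotCommandProcessor:
    """Higher-level, domain-specific component built on the generic construct."""
    def __init__(self):
        self.inbox = GenericQueue()   # well-defined input interface
        self.processed = []           # well-defined output interface

    def receive(self, command):
        self.inbox.put(command)

    def service(self):
        # "Canned" behaviour: drain the inbox and normalise each command.
        while len(self.inbox):
            self.processed.append(self.inbox.get().upper())

proc = RobotCommandProcessor()
proc.receive("move")
proc.receive("grip")
proc.service()
print(proc.processed)  # ['MOVE', 'GRIP']
```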
7. The architecture must support realistic virtual environments in which agent behaviours can be tested. It should support construction of responsive physical and behavioural mission spaces that stand in for the challenging real-world counterparts in which agents are designed to function. This is a particular strength of an architecture that incorporates state-of-the-art simulation capabilities.
8. The architecture should support collaborative model development and model repositories; how the modeled behaviour of agents and mission space is executed depends on the classification of the agent-based system. Collaborative modeling enables dispersed modelers to develop modular models, both coupled and uncoupled. In such a cooperative working setting, model repositories are essential to support efficient and systematic model reuse and integration. Repositories can be built on widely employed relational systems to be scalable and provide standard queries for access to model content. But many challenges remain in supporting collaborative development of multi-level modular models and components within distributed and networked mission spaces. For example, considerable work is required to develop workable schemes to assign ownership rights across and within the enterprises and functional teams participating in a model development effort.
9. Real-time, distributed, and parallel execution should be supported to enable execution of simulated agents, deal with model complexity, make effective use of distributed and distinct computing platforms, and facilitate assessment of large systems. The simulation architecture should enable distribution of a model composed of several modules across nodes within a network. Distribution should be automated, and distribution policies could take into account load-balancing requirements, mobility of agents, and other state-dependent factors. Moreover, the architecture should provide for both logical-time and real-time execution of models to accommodate the various types of simulated and real agent-based systems and their interoperation.
10. Support for information distribution comprises the set of services that attempt to reduce message interchange traffic without impacting the accuracy of the simulation. Such services include message filtering, in which subscribers declare their interest in receiving messages from only a subset of status posters. To the extent information distribution mitigates growth in capacity requirements for large numbers of entities, it is a critical factor in obtaining simulation results in reasonable time.
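Interest-based filtering can be sketched as a tiny publish/subscribe hub (all names illustrative): messages reach only subscribers that declared interest in a topic, so interchange traffic grows with declared interest rather than with the total number of entities.

```python
from collections import defaultdict

class FilteringHub:
    def __init__(self):
        self._interests = defaultdict(list)   # topic -> subscriber inboxes
        self.delivered = 0                    # count of messages actually sent

    def subscribe(self, topic, inbox):
        # A subscriber declares interest in a subset of status posters.
        self._interests[topic].append(inbox)

    def publish(self, topic, message):
        # The hub delivers only to interested subscribers; everyone else
        # generates no traffic at all.
        for inbox in self._interests[topic]:
            inbox.append(message)
            self.delivered += 1

hub = FilteringHub()
tank_inbox, radar_inbox = [], []
hub.subscribe("position", tank_inbox)
hub.subscribe("weather", radar_inbox)
hub.publish("position", {"x": 1, "y": 2})   # reaches only tank_inbox
print(hub.delivered)  # 1
```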