The ABMS will be critical to meeting mobile logistics needs and must evolve quickly rather than conform to industrial-age requirements and schedules.
ABMS needs to be a cutting-edge, constantly improving network of radars, observers, and battle management that allows limited resources to be positioned for maximum effect, exploiting the power of networked information to enable a limited force to prevail.
Artificial intelligence could pull together data from multiple sensors, each seeing the same target in different wavelengths or from a different angle, to build a composite picture more precise than its parts.
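The multi-sensor fusion described above can be illustrated with a classic technique: inverse-variance weighting of independent estimates, where the fused estimate is more precise than any single sensor's. This is a generic sketch, not the actual ABMS algorithm; the sensor names and accuracy figures are invented for the example.

```python
def fuse_estimates(estimates):
    """Inverse-variance weighted fusion of independent 1-D position
    estimates, each given as (measured_position, variance).
    The fused variance is smaller than any single sensor's variance."""
    total_weight = sum(1.0 / var for _, var in estimates)
    fused_pos = sum(pos / var for pos, var in estimates) / total_weight
    fused_var = 1.0 / total_weight
    return fused_pos, fused_var

# Three hypothetical sensors seeing the same target with different accuracies
radar = (102.0, 25.0)      # (position estimate, variance)
infrared = (98.0, 16.0)
optical = (100.5, 9.0)

pos, var = fuse_estimates([radar, infrared, optical])
print(pos, var)  # fused variance is below 9.0, the best single sensor
```

The same weighting scheme generalizes to multiple dimensions and to sensors observing in different wavelengths, which is why a composite picture can be "more precise than its parts."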
“We’re moving past just simple concepts of sensors and shooters. How do we get multiple sensors and shooters integrated so we get more out of them than an individual item could provide?”
“It’s not a specific sensor to a specific shooter. On a future battlefield… just about everything is going to be a sensor. So how do you store that data and how do you enable a smart distribution of data to the right shooter? Because we can’t build architectures that are relying upon huge pipes and just massive bandwidth to make it work.”
“If we can bring them together, you can use a sensor the Army already developed, bought and fielded to spot targets for one weapon – say, the Q-53 artillery radar – to feed targeting data into a totally different type of weapon – say, a Patriot battery.”
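The cross-system handoff the quote describes can be sketched as a thin routing layer that decouples sensors from shooters: any sensor publishes a track, and a broker matches the track's target category to whatever shooter class can engage it. The system names (Q-53, Patriot) come from the article; the routing table and logic are purely illustrative.

```python
# Hypothetical engagement table: target category -> shooter classes
# able to engage it. No specific sensor is wired to a specific shooter.
ENGAGEMENT_TABLE = {
    "artillery": ["howitzer_battery"],
    "ballistic_missile": ["patriot_battery"],
    "aircraft": ["patriot_battery", "fighter_cap"],
}

def route_track(track):
    """Return the shooters able to engage this track, regardless of
    which sensor produced it."""
    return ENGAGEMENT_TABLE.get(track["category"], [])

# A Q-53 counter-battery radar track routed to a Patriot battery:
track = {"sensor": "Q-53", "category": "ballistic_missile"}
print(route_track(track))  # ['patriot_battery']
```

The point of the indirection is exactly the one made in the earlier quote: adding a new sensor or shooter means updating the table, not building a new point-to-point link.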
The Army wants to plug in its new anti-aircraft and missile defense systems to the Network, but those technologies are at a critical juncture in their own individual test programs.
The Army is also running tests with systems including aerial scouts, long-range missile launchers and armored vehicles.
The Army has to test a sensor-to-shooter link that feeds data from a wide range of sources – in space, in the air, and on the ground – to an Army howitzer unit.
However, the Army had also wanted to experiment with new headquarters and organizations to command and control ultra-long-range artillery.
That makes the stakes even higher for Project Convergence. “You can call it an experiment, you can call it a demonstration.” “Right now, the plan is we’re going to do this every year… every fall as we continue to mature… this architecture that brings the sensors to the right shooter and through the right headquarters.”
GAO was highly critical of the ABMS program for lacking defined requirements, a program schedule, cost estimate, or affordability analysis. On the surface, this is a damning evaluation, given the terminal fates of similarly criticized programs like Future Combat System and the Joint Tactical Radio System.
But GAO’s methods of evaluation have been said to be outdated in today’s information age. The Air Force’s approach to ABMS should continue to center on a defined operational concept, not lock in technological requirements today that would only ensure obsolescence.
ABMS is a forward-leaning set of technologies that must be fluid if it is to remain relevant. It should not be managed and evaluated with industrial-era processes that demand a predetermined technical solution. That will simply guarantee obsolescence and a process too rigid to adapt to a set of dynamic threats. Instead, rigor should focus upon the operational concepts that underpin the fundamental design of ABMS.
The Air Force has articulated a decentralized vision for an ABMS that promotes networked connectivity across platforms, domains, and even services. However, the effectiveness of any tool comes down to how it is used and how well it serves that purpose.
Networking is a foundational technical capability for ABMS, but the ultimate vision cannot be realized by simply connecting sensors and moving data. In fact, it could make problems worse.
Absent a clear operational concept, the availability of information could result in extreme micromanagement, given the propensity for senior leaders to reach into the tactical realm. Alternatively, warfighters could find themselves drowning in data; universal availability of data could result in operational paralysis or operational chaos.
So far, Air Force “experiments” have focused on connectivity across platforms, domains, and services. But these highly scripted and rigid kill chains simply move data; they are not exploring the stickier problems of actual battle management. Instead, what must be demonstrated is how these ABMS technologies support future operational concepts.
ABMS will be successful only if it is completely aligned with the air defense concept of operations. It can’t just be a collection of sensors and networks randomly closing kill chains. Leaders need to understand the larger battlespace to prioritize the threats, manage available assets, and link capabilities as the battle unfolds. It is this broader functionality that will enable the services to maximize their limited assets to prevail.
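The battle-management loop just described, prioritize threats and then pair them with available assets, can be sketched as a simple greedy assignment. This is an illustration only; real weapon-target assignment uses far richer models, and the threat scores and capability labels here are invented.

```python
def assign_assets(threats, assets):
    """Pair each threat (highest severity first) with one free asset
    whose capabilities cover the threat type; returns {threat_id: asset_id}."""
    free = list(assets)           # copy so the caller's list is untouched
    plan = {}
    for threat in sorted(threats, key=lambda t: t["severity"], reverse=True):
        for asset in free:
            if threat["type"] in asset["capabilities"]:
                plan[threat["id"]] = asset["id"]
                free.remove(asset)        # one asset engages one threat
                break
    return plan

threats = [
    {"id": "T1", "type": "cruise_missile", "severity": 9},
    {"id": "T2", "type": "uav", "severity": 4},
]
assets = [
    {"id": "A1", "capabilities": {"uav"}},
    {"id": "A2", "capabilities": {"cruise_missile", "uav"}},
]
print(assign_assets(threats, assets))  # {'T1': 'A2', 'T2': 'A1'}
```

Even this toy version shows why the ordering matters: serving the most severe threat first consumes the multi-role asset, leaving the specialized one for the lesser threat.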
The Air Force must prove the value and functionality of its technical priorities in today’s systems and against future operational concepts. Instead of requirements, schedules, and cost estimates, the Air Force should develop an exercise, experimentation, and migration plan.
This plan should shift from today’s air battle management systems, like JSTARS and AWACS, to the future ABMS. Battle management is too crucial a function to retire on the promise of PowerPoint; a bridge is needed to the future.
The first step of developing this plan is envisioning what that future combat operational concept is, and then shaping battle management functions to that architecture.
Information architectures and battle management concepts must align to operational concepts to deliver the speed, accuracy, relevance, and advantage future conflict will demand.
This means modularly inserting technologies into current battle management platforms as these new capabilities mature to a minimum viable product. The Air Force cannot rely on numerous consecutive technological miracles falling into place all at once. A gradual transition is how a seemingly unwieldy program can yield value and reduce risk in ways that past efforts like FCS and JTRS did not.
Finally, the Air Force should focus its experimentation on the fusion and decision functions of a battle management system. How does the system function with multiple targets and multiple assets with varying potential effects? What about the many support assets and actions? Experiments must also incorporate current platforms and tactics to ensure both backward and forward compatibility.
If the Air Force’s ABMS program seems ambitious, it is. However, this is an endeavor driven by necessity. The realities of constrained budgets preclude the option of overwhelming combat power.
ABMS should not be assessed against traditional acquisition evaluation criteria that would ensure technological obsolescence. Instead, the Air Force must drive greater focus, vision, and alignment into its experimentation plan if it is to achieve anything close to a twenty-first-century system.
And achieving that is not just ambitious; it is imperative.
The principal battlespace management commander first determines the mission and the specific tasks required to accomplish the mission. An example of a typical set of mission tasks might be planning, establishing a command post, securing routes, providing perimeter defense, and establishing locations for providing assistance.
Once you have identified your list of targeted resources to be profiled, you are ready to begin creating a profile for each one. The idea is for the resource owner to rate the resource's importance to the organisation from an information-security perspective and relative to all other assets in the organisation.
The profile tracks information at a business and function level and is not necessarily specific to implementation decisions. For example, if you are designing a new system, you should know what types of data will be processed and what the basic functions being performed will be before you decide on technologies or placement in the network.
When you are designing your profile questionnaire, it is important to note that not every question needs to be used in the calculation of the risk sensitivity. Some questions are meant to capture other pertinent information about the resource for reporting purposes and do not directly contribute to the risk-sensitivity score.
For example, you may ask a question about where the system is hosted. The answer to this question doesn't affect the sensitivity of the asset, but you may want to prioritise assessments of third-party hosted systems because of increased risk exposure, and the answer to this item will give you the desired information about which systems are hosted internally versus externally.
You may also want to ask a couple of high-level questions about whether basic security controls are in place, for example role-based access, encryption, and audit logging. The answers to these questions may help you to focus your efforts on resources that don't meet the most basic security control requirements.
Similarly, you may want to ask if the system uses a common or central infrastructure for authentication and authorisation or logging to eliminate the need for assessing those areas any further. Systems using one-off solutions for these core security services may have more potential for risk exposure.
Such factors do not change the sensitivity of the resource, but they can help with prioritisation. For example, whether or not a vulnerability test has been performed on the resource does not affect its sensitivity, but this knowledge is important for identifying resources that may have undiscovered vulnerabilities that are readily exploitable.
You will often find yourself trying to choose between several high-sensitivity resources to assess, and these other factors can help you decide which ones to focus on first.
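The split described above, scored questions that drive the sensitivity rating versus informational questions kept only for prioritisation, can be sketched as a small weighted questionnaire. The question names, weights, and 1-to-5 answer scale are assumptions for the example, not a prescribed scheme.

```python
# Questions whose answers feed the sensitivity score, with weights.
# These names and weights are illustrative only.
SCORED_QUESTIONS = {
    "data_classification": 3,
    "business_criticality": 2,
    "user_population": 1,
}

def build_profile(resource, answers):
    """answers: scored question ids mapped to a 1 (low) - 5 (high) value,
    plus free-form informational items (hosting, vuln-test status, ...).
    Informational answers are stored but excluded from the score."""
    score = sum(weight * answers[q] for q, weight in SCORED_QUESTIONS.items())
    info = {q: v for q, v in answers.items() if q not in SCORED_QUESTIONS}
    return {"resource": resource, "sensitivity": score, "info": info}

profile = build_profile("payroll-app", {
    "data_classification": 5,
    "business_criticality": 4,
    "user_population": 2,
    "hosting": "third-party",   # informational only, used for prioritisation
    "vuln_test_done": False,    # informational only
})
print(profile["sensitivity"])  # 3*5 + 2*4 + 1*2 = 25
```

When several resources come out with similar sensitivity scores, the stored informational answers (third-party hosting, no vulnerability test yet) are what break the tie on which to assess first.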
The security risk profile questionnaire should include several questions about the resource to help determine the sensitivity and criticality of the application in comparison to others. It is essential to evaluate a resource's sensitivity on a relative scale.
Start by identifying the resource that is most crucial to the organisation, and use this as a reference point. This is important because the tendency is to rate resources too high. If you end up with all resources being rated as high sensitivity and none as moderate or low, then the scale becomes worthless.
You may not have defined the specifics of how the functions will be performed, but having business and functional requirements defined is enough to complete the security risk profile.
The best way to profile a resource is with a simple questionnaire. This questionnaire uses a series of targeted questions to measure the potential impact to the organisation of a generic security violation for the target resource.
1. Networking every shooter/sensor
Use artificial intelligence to ensure operational information is immediately sent across domains to any platform needing it.
2. Digital architecture, standards and concepts
Digital modeling/simulation technologies, trade studies and other standards development tools and processes to map out the entire architecture virtually and test how it would work in practice
3. Connectivity
Tools include line-of-sight and beyond-line-of-sight communications networks
4. Sensor integration
Hardware and software allowing different equipment to share data, achieving compatibility and interoperability through open interfaces, improved control of systems, and data processing
5. Data
Libraries of meta-tagged data points analyzed and fused using AI algorithms to inform decision-makers.
6. Secure processing
Technology for moving and sharing data across systems with different security protection levels
7. Deployment, training and support services for all devices and processing environments.
8. Technology that turns a platform into a data node, reduces latency, provides improved anti-jamming capability, and improves the speed and breadth of communications gear.
9. Apps
Design and development of apps to process, fuse and help present data to different audiences across domains.
10. Effects integration
Networked weapons integrated with existing platforms for greater combined effect, including data relay functions
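Several of the items above, networking every shooter and sensor, meta-tagged data libraries, and smart distribution without "massive bandwidth," point toward tag-based publish/subscribe rather than broadcasting everything everywhere. The sketch below is a hypothetical illustration of that pattern, not an ABMS design: subscribers register the metadata tags they need, and the broker forwards only matching data points.

```python
from collections import defaultdict

class TagBroker:
    """Toy tag-based publish/subscribe broker: data tagged with metadata
    is delivered only to subscribers that asked for one of its tags."""

    def __init__(self):
        self.subscribers = defaultdict(list)   # tag -> list of callbacks

    def subscribe(self, tag, callback):
        self.subscribers[tag].append(callback)

    def publish(self, tags, payload):
        delivered = set()
        for tag in tags:
            for cb in self.subscribers[tag]:
                if id(cb) not in delivered:    # deliver once per subscriber
                    cb(payload)
                    delivered.add(id(cb))

broker = TagBroker()
received = []
broker.subscribe("air_track", received.append)  # a shooter needing air tracks

broker.publish({"air_track", "hostile"}, {"id": 42})   # forwarded
broker.publish({"surface_track"}, {"id": 7})           # not forwarded

print(received)  # [{'id': 42}]
```

Filtering at the broker is what keeps the architecture from "relying upon huge pipes": each shooter receives only the slice of the data library relevant to it.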