While this approach allows for faster mission analysis, requires less data and computation, and demands less human judgment in the modeling choices, it also has some serious shortcomings. Because it remains divorced from a broader scenario planning context, an approach built on graphs may fail to include time/distance factors or competing demands for capabilities.
For example, if a mission set requires two C-5 squadrons for mobility, then as long as there are two mission-capable squadrons anywhere in the joint force, the mission will show as ready. The model will not account for whether those squadrons can actually reach where they are needed in time or whether they are tied up on other missions.
Another key shortcoming of this approach is the limited influence of the adversary. Enemy capabilities are captured only in historical mission data, so this method predicts future demands from past performance and cannot represent an adaptive, agent-based red force of the future.
Another approach to the basic questions of readiness takes these limitations into account, albeit at the cost of greater time and resources. This second method creates a fuller, more complex picture, by inputting the “as-is” picture of the current status of all assets into a scenario analysis tool that can model the full set of assigned missions.
Running the scenario tool then allows for variation and testing of how the current force could execute those missions under different conditions. Rather than relying on historical analysis, planners can vary the scenario to determine whether a given set of assets can truly do what the mission asks of them, or whether other capability mixes can.
This approach can answer questions like, “Can the C-5s reach the airfield in time?” or “Can the helicopters assigned to the mission fit the raid force’s M327 120mm mortars?” It also allows multiple scenarios to run concurrently against the “as-is” picture of the force. If separate missions in different theaters would overburden the same resources, then they cannot be effectively executed simultaneously, and those resources would be areas for potential investment.
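The kind of check described above, whether a nominally ready unit can actually support a mission given time, distance, and competing commitments, can be sketched in a few lines. This is a deliberately simplified illustration; the squadron names, speeds, and distances are hypothetical, not drawn from any real force data.

```python
from dataclasses import dataclass

@dataclass
class Squadron:
    name: str
    mission_capable: bool
    distance_nm: float       # distance to the needed airfield, nautical miles
    cruise_speed_kts: float  # cruise speed, knots
    committed: bool          # already tasked to another mission?

def can_support(sq: Squadron, required_hours: float) -> bool:
    """A unit counts as available for this mission only if it is
    mission-capable, uncommitted, and can arrive within the window."""
    if not sq.mission_capable or sq.committed:
        return False
    transit_hours = sq.distance_nm / sq.cruise_speed_kts
    return transit_hours <= required_hours

# Two mission-capable C-5 squadrons exist, but only one can actually support:
squadrons = [
    Squadron("C-5 Sqdn A", True, 2000, 450, committed=False),  # ~4.4 h transit
    Squadron("C-5 Sqdn B", True, 6000, 450, committed=True),   # tied up elsewhere
]
available = [s.name for s in squadrons if can_support(s, required_hours=12)]
print(available)  # only Squadron A passes the time and commitment checks
```

A purely capability-counting approach would report both squadrons as ready; adding the time/distance and commitment constraints is what distinguishes the scenario-based method.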
Even more importantly, this method allows for agent-based simulation to be combined with the breadth of data and variation that AI can provide, creating the most realistic depiction possible of adversary capabilities and courses of action.
Here, the enemy is not simply a static list of capabilities or doctrinal templates; it can react appropriately to the strategy and tactics being used in the simulation. This aspect of scenario-based tools helps military planners to take into account new tactics or new adversaries on which there may not be much historical data.
For example, how could the Navy know how to counter an emerging technology like hypersonic missiles, or the tactics of a new adversarial group? The answer is to fight against them hundreds, if not thousands, of times digitally before ever meeting them on the battlefield.
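The contrast between a doctrinal-template enemy and an adaptive one can be sketched minimally: a static red force returns the same course of action no matter what blue does, while an adaptive agent conditions its choice on blue's observed behavior. The tactic names and counter-tactic mapping below are invented purely for illustration.

```python
import random
from collections import Counter

# Hypothetical blue tactic -> red counter-tactic mapping; illustrative only.
COUNTERS = {
    "air_assault": "dispersed_air_defense",
    "armor_thrust": "atgm_ambush",
    "naval_strike": "coastal_missiles",
}

def adaptive_red(blue_history: list) -> str:
    """An adaptive red agent reacts to blue's behavior: it counters
    the tactic blue has used most often so far in the simulation."""
    if not blue_history:
        return random.choice(list(COUNTERS.values()))
    most_common = Counter(blue_history).most_common(1)[0][0]
    return COUNTERS[most_common]

def static_red(blue_history: list) -> str:
    """A doctrinal-template red ignores blue's behavior entirely."""
    return "coastal_missiles"

blue_history = ["air_assault", "air_assault", "armor_thrust"]
print(adaptive_red(blue_history))  # reacts to blue's dominant tactic
print(static_red(blue_history))    # fixed answer regardless of blue
```

Running many digital iterations against an agent like `adaptive_red` is what lets planners probe tactics for which little historical data exists.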
Scenario analysis tools form a large part of meeting the National Defense Strategy Commission’s recommendation that the Department of Defense “must use analytic tools that can measure readiness across this broad range of missions, from low-intensity, gray-zone conflicts to protracted, high-intensity fights.” In short, this more detailed approach can not only help the military be ready for the fight today but also set appropriate force posture to be ready for future fights.
However, this method is also computationally intensive. The greater the accuracy desired from a model, the more data, and the more varied the types of data, that must go into it. A full-scale scenario model, for example, would require a near real-time picture of the force, meaning actively sensing and sending data on every operating asset.
That is an incredible amount of data to manage in one system. Beyond those technical challenges, there are philosophical challenges to overcome as well. Even the most detailed and accurate model is still a model. As such, it is subject to the flaws and biases of human decision-making, as users make determinations about which models, scenarios, and parameters are most likely. Still, with careful design, AI tools can come together to meet the basic information needs of readiness.
1. Overcoming the challenges
Military decision-makers will face many challenges when incorporating AI into their readiness strategies. These challenges include, but are not limited to: who owns the data; how to validate it; where and how it is stored; the dependence of high-level simulations on lower-level simulations; the classification of data and outputs; and on what network everything should reside. A combination of general AI practices and military-specific considerations can help leaders navigate this tangle of choices and chart a path to a fundamentally new readiness system.
2. Asking the right questions
AI is not magic. As we have seen, different types of AI have different strengths and do different things well, but each comes with corresponding limitations. Real-world problems, however, are rarely encapsulated in discrete, neatly defined questions; they are complex topics with many messy, interrelated issues. Therefore, the first challenge is to find ways to render a general readiness problem into specific questions suitable for AI without losing fidelity or applicability to the real-world problem at hand. This is more a cultural challenge than a technical one. But unless that hard thinking is done up front, any solution generated by AI could be largely irrelevant to the mission problems faced in the real world.
3. The who, what, and where of data
The best starting point when dealing with such significant volumes of data is often going to be the cloud, which allows for a single, extensible repository. Increasingly, cloud providers are also integrating additional AI-enabled services that can speed data validation and other tasks.
The support available from cloud providers underlines the importance of getting all of the data in the first place. Gathering real-time statuses for every piece of equipment, infrastructure, and service member in the joint force may seem like an impossible task. However, the military may already have much of the data it needs without even knowing it. The Air Force, for example, only recently began running predictive maintenance programs on C-5, B-1, and C-130J airframes that had been producing detailed data about aircraft status that went uncollected for years. Identifying and tapping into such existing data sources can jump-start AI-enabled readiness assessments without the need for costly new systems. Previous research has shown that even the adoption of transformational technology can often be accomplished by focusing on the data an organization already has, without new capital investments.
4. Model accuracy
Another common problem for AI adoption is ensuring the accuracy of tools. Even the most advanced AI tools are still tools constructed by humans and, as such, can often mirror the judgments and biases of humans.
5. Uncovering and eliminating bias in training data
One way to help ensure the desired accuracy of an AI system is to use participatory design, a process that includes a wide array of stakeholders, not just programmers and end-users, in the design process. This can help ensure a variety of perspectives are included in a simulation and that the right performance parameters are selected. In military applications, this can be even more important, because every military decision carries with it an implicit understanding of our own tactics and doctrine.
Since the enemy does not play by the same rules, it is crucial to include a “red team” dedicated to playing devil’s advocate in the design process, to ensure AI tools are not unintentionally biased toward our own strategies and do not predict overly rosy outcomes.
6. Military-specific challenges
Design and data challenges are common to any organization pursuing a large-scale AI project. However, there will also be some challenges unique to the military that will need to be overcome.
7. Model dependencies
A complex scenario analysis tool is composed of several different models at different levels of detail. Higher-level models are dependent on lower-level models for their accuracy. For example, a force-flow model of fighter jets depends upon lower-level, more detailed models about engine performance and fuel consumption at various altitudes.
If those lower-level models are wrong, they can result in serious inaccuracies in a simulation, with aircraft flying faster than possible or never running out of fuel, or ground units walking for hundreds of miles without getting tired. In short, higher-level models cannot be accurate without getting the details of lower-level models right first.
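The dependency described above can be made concrete with a toy two-level model: a higher-level endurance calculation that inherits all of its accuracy from a lower-level fuel-burn model. The burn-rate curve and all numbers here are invented for illustration, not real aircraft performance data.

```python
def fuel_burn_rate_lbs_per_hr(altitude_ft: float) -> float:
    """Lower-level model: fuel burn as a function of altitude.
    Toy linear curve (illustrative numbers only): burn falls as altitude rises."""
    return 25000 - 0.3 * altitude_ft

def endurance_hours(fuel_lbs: float, altitude_ft: float) -> float:
    """Higher-level model: how long the aircraft can stay airborne.
    Its accuracy depends entirely on the lower-level burn-rate model."""
    return fuel_lbs / fuel_burn_rate_lbs_per_hr(altitude_ft)

# A sign error in the low-level curve (burn *rising* with altitude, say)
# would silently distort every force-flow result computed on top of it:
# aircraft would appear to fly farther, or never run out of fuel at all.
print(round(endurance_hours(fuel_lbs=50000, altitude_ft=30000), 2))
```

The point of the sketch is the stacking: a simulation never calls the low-level model directly in its outputs, so an error there surfaces only as implausible high-level behavior.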
Obtaining the most accurate baseline models possible may require gathering the technical baseline data on key weapons systems. Readiness personnel should work with their acquisition counterparts to gather or gain access to that information for current systems, and ensure that future contracts provide access to it for future systems.
8. Classification management
Perhaps the most closely held military secrets are what a military can and cannot do. So when assessing the readiness of a force against real-world mission sets, the results are naturally expected to be classified. However, many of the lower-level models may use openly available data; it is only through aggregating many of these different data points that details of military capabilities and weaknesses are revealed. As a result, the classification of such an AI-enabled system needs to be carefully managed to ensure that key vulnerabilities are not accidentally revealed.
This challenge is compounded when considering that the classification of information will determine which communications network the tools must reside on. The higher the classification, the more difficult it will be to get tools certified to operate on that network. As a result, it is likely that an AI-enabled readiness system would exist on multiple networks, from unclassified to different levels of classification. The system will need procedures and tools for moving data from low-to-high and possibly for releasing appropriately classified data from high-to-low without revealing any important information or introducing vulnerabilities to the higher-classification networks.
9. Future Solutions
AI and cognitive tools may not have the history of the tank or the cachet of the aircraft carrier, but they are undoubtedly important parts of future militaries. Understanding the benefits and common challenges of applying AI to military problems such as readiness can not only improve readiness assessments, but can also position the military to use other forms of AI more effectively.
Navigating the general and military-specific challenges is just the first step to AI adoption. Creating a structured campaign plan for AI can help deploy the right AI against the right problems, and avoid the digital equivalent of firing ineffective rounds at a target. The adoption of AI is not just like adding another team member: it can fundamentally change how humans and technology work together.
10. Resolve, Remodel, Reimagine
Determine the key readiness problem sets for AI to address. Identify the data you have related to those problem sets and resolve the issues with it; that is, organize and prepare your data to yield insights.
Change how you structure your data and your organization to make best use of the insights produced by AI. Make sure you have sufficient infrastructure and talent to manage the data and its use within the organization. Remember that outputs from AI systems may need some expert interpretation before decision makers can use them.
Finally, pilot entirely new services and tools that apply AI to even more complex or pressing problem sets. For example, AI is already aiding real-world scenario planning, helping airports respond to weather events and racing teams anticipate their competitors’ every move.