The Defense Department has a new artificial intelligence strategy that emphasises the creation and tailoring of tools for specific commands and service branches, allowing them to move into AI operations sooner rather than later.
“This strategy reflects an additional imperative, which is to translate the technology into decisions and impact in operations.” The Pentagon is already using artificial intelligence to sort through intelligence footage. That project has been described as “a pathfinder,” but the strategy is much broader than one project.
There’s a big obstacle to the Pentagon’s new strategy to speed AI to troops. Officials want to accelerate the delivery of artificial-intelligence tools from the lab to the field, but it’s hard to obtain the massive data streams that make AI work.
DoD will develop AI tools and programs to assist with everything the Pentagon does, including combat operations. Near-term projects include efforts to better spot network deviations and to predict maintenance needs before equipment fails.
The Defense Innovation Unit is aiming to better predict and accelerate repairs for Bradley Fighting Vehicles. It is working with a company that is building a virtual Bradley, using data streams from sensors on real Bradleys in the field.
Digital twins enable us to see what the best-performing Bradley would look like, because we are able to go into many of the subsystems and pull the data. From a single vehicle, we were able to pull terabytes of data.
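To make the digital-twin idea concrete, here is a minimal sketch, with entirely hypothetical vehicle IDs, subsystems, and readings, of how subsystem telemetry pulled from individual vehicles could be compared against a fleet-wide baseline, the kind of "best-performing Bradley" reference the quote describes:

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical telemetry sample from one subsystem sensor on one vehicle.
@dataclass
class Reading:
    vehicle_id: str
    subsystem: str   # e.g. "engine"
    metric: str      # e.g. "oil_temp_c"
    value: float

def fleet_baseline(readings, subsystem, metric):
    """Average a metric across the fleet to approximate the
    'best-performing' reference a digital twin provides."""
    values = [r.value for r in readings
              if r.subsystem == subsystem and r.metric == metric]
    return mean(values) if values else None

def deviation(readings, vehicle_id, subsystem, metric):
    """How far one vehicle drifts from the fleet baseline."""
    base = fleet_baseline(readings, subsystem, metric)
    own = [r.value for r in readings
           if r.vehicle_id == vehicle_id
           and r.subsystem == subsystem and r.metric == metric]
    return mean(own) - base if own and base is not None else None

readings = [
    Reading("brad-01", "engine", "oil_temp_c", 92.0),
    Reading("brad-02", "engine", "oil_temp_c", 95.0),
    Reading("brad-03", "engine", "oil_temp_c", 118.0),  # running hot
]
print(round(deviation(readings, "brad-03", "engine", "oil_temp_c"), 2))  # 16.33
```

A real twin would model many subsystems jointly, but even this toy version shows why the data volume matters: the baseline is only as good as the breadth of readings behind it.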
We aim to use industry data from external sensors to digitally recreate the vehicles’ operating environments. It’s a process that could be relevant to larger military endeavors, such as the effort to design a new combat vehicle.
If we prove the value here, imagine what we can do in that environment as we build that system out. There’s so much to learn from all the unused data in these industries; less than one percent of the data is ever actually used.
“It’s better to get in all the data right now while you’re designing the next generation of vehicles.” Massive amounts of diagnostic data play the essential role. But finding that data isn’t easy.
The main thing that the Defense Department brings to the process isn’t the machine-learning rules and methodologies. Those are increasingly coming from industry. It’s the unique data sets. So if getting that data is a problem, then DoD has a major obstacle in terms of realising its goals.
How will contractors get the data that they need if DoD is so bad at collecting and keeping it internally? Who owns a datastream that combines industry and DoD data? Getting DoD to change how it collects and makes data available on systems like aircraft is less than totally straightforward. Another challenge is getting leaders to actually accept AI-generated recommendations.
“All of the processes and procedures need to change. When we get a prediction now that says, ‘The data shows this aircraft battery was parked in an environment with unique conditions like cold, and it was there for years. You don’t have to swap out this battery on a regular cycle like you’re doing,’ how are policy, guidance, and procedures changing to allow that to occur? That is our big question for DoD. We haven’t solved that.”
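The battery example is essentially a policy question about when a model's prediction is allowed to override a fixed schedule. A minimal sketch, with assumed costs, thresholds, and a hypothetical 24-month legacy cycle, of what a condition-based replacement rule might look like:

```python
CALENDAR_SWAP_MONTHS = 24  # assumed legacy policy: swap every two years

def should_swap(battery_age_months: float,
                predicted_remaining_months: float,
                confidence: float,
                min_confidence: float = 0.9) -> bool:
    """Swap only when the model confidently predicts near-term failure;
    otherwise fall back to the legacy calendar rule."""
    if confidence >= min_confidence:
        return predicted_remaining_months < 3  # condition-based rule
    return battery_age_months >= CALENDAR_SWAP_MONTHS  # legacy fallback

# A battery stored in a cold, benign environment: old by the calendar,
# but the model confidently predicts plenty of remaining life.
print(should_swap(30, predicted_remaining_months=18, confidence=0.95))  # False
```

The interesting part is not the code, it is the `min_confidence` parameter: deciding its value, and who is authorized to set it, is exactly the unsolved policy question the quote raises.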
Artificial intelligence is code entrusted to reason through independent choices. The decisions themselves are the result of coded paths and inputs: feeding data into different parts of algorithms, weighting outcomes, and then creating an end product that is designed to be useful to the humans who consume it.
There are degrees and worlds in AI, a vast space ranging from deterministic to emergent behaviour, from online machine learning to targeting tools. It is in that complexity that care is most required and human direction most needed in form and function, and it is precisely that direction that is lacking in the new AI strategy.
Nowhere is the how or the what of AI spelled out. DoD wants to make sure investment continues in a permissive environment, but the term is an umbrella, a catch-all, with no specificity as to what it does or why it might require efforts to train new workers.
Failure to adopt AI will result in legacy systems irrelevant to defense, eroded cohesion among partners, and reduced access to markets. “We are launching a set of initiatives to incorporate AI rapidly, iteratively, and responsibly to enhance military decision-making and operations across key mission areas.
Examples include improving situational awareness and decision-making, increasing the safety of operating equipment, implementing predictive maintenance and supply, and streamlining business processes. We will prioritise the fielding of AI systems that augment the capabilities of our personnel by offloading tedious cognitive or physical tasks and introducing new ways of working.”
AI has the potential to enhance the safety of operating aircraft, ships, and vehicles in complex, rapidly changing situations by alerting operators to hidden dangers. Implementing AI in supply and maintenance operations can predict the failure of critical parts, automate diagnostics, and plan maintenance based on data and equipment condition. Similar technology will be used to guide provisioning of spare parts and optimize inventory levels. These advances will ensure appropriate inventory levels, assist in troubleshooting, and enable more rapidly deployable and adaptable forces at reduced cost.
Streamlining business processes. AI will be used to streamline business operations with the objective of reducing the time spent on highly manual, repetitive, and frequent tasks. By enabling humans to supervise automated tasks, AI has the potential to reduce the number and costs of mistakes, increase throughput and agility, and promote the allocation of DoD resources to higher-value activities and emerging mission priorities.
Artificial intelligence is about generating predictions: it “takes information you have and uses it to generate information you don’t have.” In the past, collecting and parsing data, constructing models, and employing the resident statistical expertise to offer intelligible interpretations demanded significant resources.
Unlike guesses, predictions require data. More data provide more opportunities to discover critical linkages, generating better predictions. In the past, analytic techniques constrained the amount of data that could be scoured for correlations. Consequently, these techniques relied on an analyst’s intuition and they functioned only as an average, potentially never actually yielding a correct answer.
Not so with modern techniques in artificial intelligence, techniques feasting on the immense data sets and complex interactions that would otherwise overwhelm classic statistical models. Without data, the machine of artificial intelligence would grind to a halt.
But not all data are created equal. Data must be tailored to the task at hand. An artificial intelligence trained to predict what one set of pixels in an image represents can’t necessarily help when trying to predict what a different group of pixels in a different kind of image corresponds to.
Sometimes the prediction problem is oversimplified. For example, in autonomous driving, there is only a single necessary prediction identified: “What would a soldier do?” While framing the problem this way may help an engineer move beyond a rules-based programming decision tree, to be relevant the prediction demands additional nuance.
For example, “What would a soldier do if a truck pulled out in front of him or her?” Only then can the data be searched for similar situations to generate a usable prediction. Without the nuance, the data collected is not applicable to soldiers driving their tanks on a battlefield.
Not only are data specific to the prediction, but the problems to which we can apply artificial intelligence are also situation-specific. There are “known knowns,” “known unknowns,” “unknown unknowns,” and “unknown knowns.”
“Known knowns” represent a sweet spot for artificial intelligence—the data are rich and we are confident in the predictions. In contrast, neither known unknowns nor unknown unknowns are suitable for artificial intelligence. In the former, there are insufficient data to generate a prediction—perhaps the event is too rare, as may often be the case for military planning and deliberations. In the latter, the requirement for a prediction isn’t even specified. In the case of “unknown knowns,” the data may be plentiful and we may be confident in the prediction, but the answer can be very wrong due to unrecognised gaps in the data set, such as omitted variables and counterfactuals that can contribute to problems of reverse causality.
Current artificial intelligence prediction machines represent “point solutions” optimised for “known known” situations with plentiful data relevant to specific, understood workflows. To understand how an artificial intelligence tool may function within a specific workflow, a “canvas” can help decompose tasks in order to understand the potential role of a prediction machine, the importance and availability of data to support it, and the desired outcome.
The most important element of the artificial intelligence canvas is the core prediction. Its identification and accurate specification for the task at hand are essential. Otherwise, the entire artificial intelligence strategy can be derailed.
The tools of artificial intelligence rely on available data to generate a prediction. We identify three types of necessary data: training, input, and feedback. The tool is developed using training data and fed input data to generate its prediction. Feedback data from the generated prediction are then used to further improve the process.
More and richer training data generally contribute to better predictions, but collecting data can be resource intensive, constraining the data available for initial training. Feedback data fill the gap, allowing the prediction machine to continue learning. But that feedback data must come from use in the real world.
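The division of labor among training, input, and feedback data can be sketched with a deliberately toy online learner (a running-average predictor, my own illustrative choice, not anything from the strategy): it is fit on training data, queried with input data, and keeps improving as real-world feedback arrives.

```python
class OnlinePredictor:
    """Toy online learner: predicts a value as a running average,
    updated whenever feedback (an observed outcome) arrives."""
    def __init__(self):
        self.total = 0.0
        self.count = 0

    def train(self, outcomes):
        """Fit on historical training data."""
        for y in outcomes:
            self.update(y)

    def predict(self):
        """Answer a query made with input data."""
        return self.total / self.count if self.count else 0.0

    def update(self, observed):
        """Incorporate feedback data from real-world use."""
        self.total += observed
        self.count += 1

model = OnlinePredictor()
model.train([10.0, 12.0, 14.0])  # training data
print(model.predict())            # 12.0
model.update(20.0)                # feedback from the field
print(model.predict())            # 14.0
```

Note that the second, better estimate was only reachable because the tool was fielded and allowed to observe real outcomes, which is precisely why feedback data "must come from use in the real world."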
Consequently, the predictions of artificial intelligence are more likely to be wrong when the tool is first fielded. Determining what constitutes good enough for initial release is a critical decision. What is the acceptable error rate, and who makes that determination?
Even if data are plentiful and the algorithm refined, if data are flawed the predictions will still be incorrect. Additionally, it’s important to remember that all data are vulnerable to manipulation, which would significantly degrade the tools of artificial intelligence.
For example, feeding corrupt input data into a prediction machine could crash an artificial intelligence tool. Alternatively, the input data could be subtly altered such that an artificial intelligence tool will continue to function while generating bad predictions.
Feedback data can be manipulated to alter the performance of an artificial intelligence tool. Training data introduce their own vulnerabilities into artificial intelligence—an adversary can interrogate the algorithm, bombarding it with input data while monitoring the output in order to reverse-engineer the prediction machine. Once the inner workings are understood, the tool becomes susceptible to additional manipulation.
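The "subtly altered input" attack is easiest to see on a toy linear classifier (weights and inputs below are invented for illustration): each feature is nudged by only a small amount, in the direction that works against the model's weights, yet the prediction flips while the tool keeps running normally.

```python
# Assumed toy linear model: score = w . x + b, class 1 if score > 0.
weights = [0.5, -0.3, 0.8]
bias = -0.2

def predict(x):
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if score > 0 else 0

x = [1.0, 0.5, 0.4]
# score = 0.5 - 0.15 + 0.32 - 0.2 = 0.47 -> class 1
print(predict(x))  # 1

# Adversary nudges each feature by a small eps against the weight sign
# (the intuition behind gradient-sign attacks on real models).
eps = 0.4
x_adv = [xi - eps * (1 if w > 0 else -1) for w, xi in zip(weights, x)]
# score = 0.3 - 0.27 + 0.0 - 0.2 = -0.17 -> class 0
print(predict(x_adv))  # 0
```

The tool never crashes and never signals an error; it simply produces a confident, wrong answer, which is what makes this failure mode hard to detect downstream.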
Detecting flawed predictions, either due to inadequate learning or adversarial data manipulation, poses a significant challenge. It’s impossible to open the “black box” of an artificial intelligence and identify “what causes what.”
While DoD is trying to resolve this shortcoming, presently the only way to validate whether the predictions are accurate is to study the generated predictions: hypothesizing potential flawed predictions and hidden biases, and then feeding select input data into the prediction machine to test the hypothesis.
However, since we are most likely to deploy prediction machines in situations where prediction is hard, testing of these complex predictions may prove exceptionally difficult. This challenge may be further exacerbated in military-specific scenarios.
Predictions are but an input into eventual decisions and associated actions. For example, you could estimate the likelihood of your car breaking down in the next six months, or an unexpected relocation, but these predictions may not alter your decision to purchase a new car, because they don’t determine the value you have assigned to the outcome of driving a new car. The process of assigning that value—the associated reward or payoff—is a distinctly human judgement, and one that varies among individuals.
In the past, these prediction and judgement inputs into our decisions were obscured because we often performed both simultaneously in our head. However, the outsourcing of the prediction function to the new tools of artificial intelligence forces us to “to examine the anatomy of a decision” and acknowledge the distinction.
Occasionally, an appropriate payoff based on a prediction generated by artificial intelligence can be predetermined and the resulting decision coded into the machine. In these cases, because the prediction dictates the decision, the task itself is ripe for automation.
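When the payoffs really can be fixed in advance, the decision reduces to comparing expected values, and the human judgement is spent once, up front, on choosing the payoff numbers. A sketch with assumed costs (the figures are illustrative, not from the strategy):

```python
def decide(p_failure: float) -> str:
    """Pick the action with the higher expected payoff, given the
    machine's predicted probability that a part will fail."""
    COST_REPLACE = -100            # always pay for the new part
    COST_FAILURE_IN_FIELD = -5000  # payoff if we skip and the part fails
    expected_skip = p_failure * COST_FAILURE_IN_FIELD
    return "replace" if COST_REPLACE > expected_skip else "skip"

print(decide(0.01))  # expected skip payoff -50  > -100 -> "skip"
print(decide(0.10))  # expected skip payoff -500 < -100 -> "replace"
```

Everything interesting lives in the two payoff constants: once a commander has signed off on those values, the prediction fully determines the action, which is exactly the sense in which such tasks can be automated.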
But more often, situations are complex and prediction is hard. These are the situations where we are most likely to introduce prediction machines, and the residual uncertainty of the prediction can actually necessitate greater human judgment because the prediction, even if generated through artificial intelligence, may not always be correct. So instead of eliminating the human, artificial intelligence often places an even greater imperative on the human remaining within the system.
The human position within the system and relationship to the task will likely change. Once the prediction function is automated and assigned to an artificial intelligence, tasks previously deemed essentially human and their associated skills will likely be superseded by new tasks and new skills.
The key policy question for DoD to solve isn’t about whether AI will bring benefits but about how those benefits will be distributed. As these tools become more prevalent, individuals will have to learn new skills, and in the process income inequality may be temporarily exacerbated.
Our current and near-future artificial intelligence tools are suspect. Give them a problem and data for which they are trained, and they will perform remarkably; give them a problem for which they are ill-equipped, and they will fail. It doesn’t matter if the tool is designed for business or national defense.
Too often, artificial intelligence is portrayed as magic to be applied to all our most challenging problems. Prediction Machines provides a compelling, fresh perspective to help us understand what artificial intelligence is and its potential impact on our world. The text is essential reading for those grappling to make sense of the field.
Artificial intelligence is simply a prediction machine—it uses information we possess to generate information we do not possess. This simple realisation immediately refocuses discussions and guides useful development of artificial intelligence. It underscores the situation-specific nature of its data and tools. It discloses its fallibility.
It also reveals the role of predictions in our decision process, not as determinants but rather as inputs that must be evaluated according to our uniquely human judgement. Therein lies the “most significant implication of prediction machines”: they “increase the value of judgement.”
AI applied to difficult tasks such as imagery analysis can extract useful information from raw data and equip leaders with increased situational awareness. AI can generate and help commanders explore new options so that they can select courses of action that best achieve mission outcomes, minimizing risks to deployed forces.
1. Improving situational awareness and decision-making, increasing the safety of operating personnel/equipment, implementing predictive maintenance and supply, and streamlining business processes.
2. Use AI to predict the failure of critical parts, automate diagnostics, and plan maintenance based on data and equipment condition. Similar technology will be used to guide provisioning of spare parts and optimize inventory levels. These advances will ensure appropriate inventory levels, assist in troubleshooting, and enable more rapidly deployable and adaptable forces at reduced cost.
3. Streamlining business processes. AI will be used with the objective of reducing the time spent on highly manual, repetitive, and frequent tasks by enabling personnel to supervise automated tasks.
4. AI has the potential to reduce the number and costs of mistakes, increase throughput and agility, and promote the allocation of DoD resources to higher-value activities and emerging mission priorities.
5. Directive requires realistic and rigorous testing and clear human-machine interface, as well as appropriate training for commanders and operators, so weapons function as anticipated in realistic operational environments against adaptive adversaries.
6. Ensure AI systems have a lower risk of accidents and are more resilient, including to hacking and adversarial spoofing.
7. Create new approaches to testing, evaluation, verification, and validation, and increase our focus on defensive network system platforms as a precondition for secure uses of AI.
8. Prioritize fielding of AI systems that augment the capabilities of our personnel by offloading tedious cognitive or physical tasks and introducing new ways of working.
9. Put in place key building blocks and platforms to scale access to AI, including a common foundation of shared data, reusable tools, frameworks and standards, and network services.
10. In parallel, take steps to ready existing processes for AI application through digitization and smart automation.