Narrow AI systems are, by design, trained for particular tasks, whether playing games or interpreting images. In warfare, however, the environment shifts rapidly due to the ‘fog and friction of war’.
AI systems have to work in a context that is highly unstructured and unpredictable, and against opponents that deliberately try to disrupt or deceive them. If the setting in which a given AI system is applied changes, the system may be unable to adapt, increasing the risk that it cannot be relied upon.
In this context, militaries need to operate on the basis of reliability and trust. If human operators, whether in a static headquarters or a battlefield command post, do not know what an AI will do in a given situation, planning becomes more complicated, operations more difficult, and accidents more likely.
The expanding capabilities of AI systems will be limited not by what can be done, but by what actors trust their machines to do. The more capable our AI systems are, the greater their ability to conduct local processing and respond to more abstract, higher-level commands.
The more we trust the AI, the lower the level of digital connectivity demanded to maintain system control. Within this context it will be critical to develop appropriate standards, robust assurance and certification regimes, and effective mechanisms to demonstrate meaningful human accountability.
Technology for big data and AI is currently developing at a tremendous pace, and it has major potential impacts for strategic, operational and tactical military decision-making processes. As such, operational benefits may be vast and diverse for both the US and its adversaries. However, the full potential of AI-enhanced technology cannot be fully predicted, and time is required for capabilities to mature.
The purpose of these reports is to provide a basic understanding of AI, its current development, and the realistic progress that can be expected for military applications, with examples related to air power, cyber, C2, training, and human-machine teaming.
The concept of what defines AI has changed over time. In essence, there has always been the view that AI is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans.
In common language, the term AI is applied when a machine mimics cognitive functions attributed to human minds, such as learning and problem-solving. There are many different AI methods used by industry and DoD, with machine learning and neural networks currently at the forefront.
As computers and advanced algorithms become increasingly capable, tasks originally considered as requiring AI are often removed from the list since the involved computer programs are not showing intelligence, but working off a predetermined and limited set of responses to a predetermined and finite set of inputs. They are not ‘learning’.
Capabilities recently classified as AI include successfully understanding human speech, competing at the highest level in strategic game systems, operating autonomous systems, intelligent routing in content delivery networks, and military simulations. Industry generally acknowledges significant advances in image recognition as cutting-edge AI technology.
While such known and ‘applied AI’ systems are often quite powerful, it should be noted they are usually highly specialized and rigid. They use software tools limited to learning, reasoning, and problem-solving within a specific context, and are not able to adapt dynamically to novel situations.
This leads to the term ‘weak AI’ or ‘narrow AI’. Weak AI, in contrast to ‘strong AI’, does not attempt to perform the full range of human cognitive abilities. By contrast, strong AI or ‘general AI’ is the intelligence of a machine that could successfully perform any intellectual task that a human being can.
For strong AI, there would be no essential difference between the software exactly emulating the actions of the human brain and the actions of a human being, including the power of understanding and even consciousness.
Defense air power is capitalizing on such capabilities with Mission Training through Distributed Simulation (MTDS), which has demonstrated reliable connectivity and beneficial training opportunities between multiple types of aircrew simulators and training centres.
In addition, for over a decade the Command & Control – Simulation Interoperation (C2SIM) data exchange standard has offered existing command, control, communication, computers and intelligence (C4I) systems the potential to switch between interacting with real units and systems, including robotics and autonomous systems, and simulated forces and systems.
Unmanned vehicle research has allowed state-of-the-art remote operations to progress significantly during recent decades, for both civil and military applications. The advance of AI, however, is now offering unprecedented opportunities to go beyond remote control and build autonomous systems demanding far less direct control by human operators. Examples of autonomous systems development include self-driving cars, trains, and delivery systems in the civil traffic and transport sector.
In the same way, the military is developing systems to conduct routine vehicle operations autonomously. For example, the US Navy’s X-47B program developed an Unmanned Combat Air Vehicle (UCAV) that completed a significant number of aircraft carrier catapult launches, arrestments, and touch-and-go landings with only human supervision. More recently, the X-47B successfully conducted the world’s first fully autonomous aerial refuelling.
Military commanders need to ask themselves how much trust they want to place in what the AI-enabled autonomous system promises to be able to do. How much better is it with regard to persistence, precision, safety, and reliability, as compared to the remote human operator? When it comes to kinetic targeting on the ground, the ‘human-in-the-loop’ being able to intervene at any time probably should remain a requirement.
Conversely, in the field of air-to-air combat, where millisecond long timeframes for critical decisions inhibit remote UCAV operations, there has been a recent and promising leap forward.
ALPHA, an Air Force AI agent built on high-performing and efficient ‘Genetic Fuzzy Trees’, has consistently beaten an experienced combat pilot in a variety of air-to-air combat scenarios during in-flight simulator tests, something previous AI-supported combat simulators never achieved.
While currently a simulation tool, further development of ALPHA is aimed towards increasing physical autonomous capabilities. For example, this may allow mixed combat teams of manned and unmanned fighter airframes to operate in highly contested scenarios with Human-Machine Teaming.
A variation on the autonomous physical system and military operations with human-controlled vehicles is the manned-unmanned teaming (MUM-T) concept, which leaders deem a critical capability for future military operations in all domains. Diverse configurations are currently being tested and implemented to improve pilots’ safety, situational awareness, decision-making, and mission effectiveness in military aviation.
The Air Force has been working on the ‘Loyal Wingman’ model, where a manned command aircraft pairs with an unmanned off-board aircraft serving as a wingman or scout. In a recent live demonstration, a modified unmanned F-16 was paired with a manned F-16 in formation flight.
The pilotless F-16 broke off from the formation, attacked simulated targets on the ground, modified its flight pattern in response to mock threats and other changing environmental conditions, and re-entered formation with the manned aircraft. USAF planning foresees future applications pairing a manned F-35 Joint Strike Fighter with such an unmanned wingman.
In the above test scenario, however, the unmanned F-16 conducted only semi-autonomous operations based on a set of predetermined parameters, rather than doing much thinking for itself. The next technology waypoint with a more demanding AI requirement would be ‘Flocking’. This is distinct from the ‘Loyal Wingman’ concept in that a discernible number of unmanned aircraft in a flock (typically consisting of a half-dozen to two dozen aircraft) execute more abstract commander’s intent, while the command aircraft no longer exercises direct control over single aircraft in the flock.
‘Swarming’ exceeds the complexity of flocking: an operator cannot know the position or individual actions of any discrete swarm element, and must command the swarm in the aggregate. In turn, the swarm elements will complete the bulk of the combat work.
DoD has demonstrated a swarm of 100 autonomous micro-drones ejected from a fighter aircraft. The swarm successfully showed collective decision-making, adaptive formation flying, and self-healing abilities. While not primarily an offensive tool, such drone swarms have a multitude of uses, including reconnaissance and surveillance, locating and pursuing targets, or conducting electronic warfare measures. Furthermore, the swarm could act as expendable decoys to spoof enemy air defenses by pretending to be much larger targets.
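The adaptive formation flying described above is often explained with simple local rules in the boids tradition: each element steers toward its neighbours (cohesion) while avoiding crowding (separation). The sketch below is purely illustrative, with made-up gain parameters, and is not a description of any fielded DoD control law.

```python
import math

def flock_step(positions, cohesion=0.1, min_dist=1.0, separation=0.2):
    """One update of a minimal boids-style flocking rule.

    Each drone steers toward the flock centroid (cohesion) and away
    from any neighbour closer than min_dist (separation).
    """
    n = len(positions)
    cx = sum(x for x, _ in positions) / n
    cy = sum(y for _, y in positions) / n
    new_positions = []
    for i, (x, y) in enumerate(positions):
        # steer toward the shared centroid
        dx, dy = cohesion * (cx - x), cohesion * (cy - y)
        # push away from crowding neighbours
        for j, (ox, oy) in enumerate(positions):
            if i == j:
                continue
            d = math.hypot(x - ox, y - oy)
            if 0 < d < min_dist:
                dx += separation * (x - ox) / d
                dy += separation * (y - oy) / d
        new_positions.append((x + dx, y + dy))
    return new_positions

# four drones starting at the corners of a square gradually close formation
drones = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
for _ in range(20):
    drones = flock_step(drones)
```

Because each rule uses only local information, no element needs to be commanded individually, which is what lets an operator express intent to the aggregate rather than to single aircraft.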
AI has demonstrated the ability to provide support for C2 and decision-making. Military headquarters have largely moved from paper-based to electronic-based workflows. This, in turn, adds information awareness but also data volume that the staff must manage. Future intelligence, surveillance, target acquisition and reconnaissance systems will generate even larger amounts of near real-time data that will be virtually impossible to process without automated support.
At the same time, increasingly advanced, network-enabled, joint, and multi-domain capabilities will emerge, making these tools available for use in operations. For commanders to effectively orchestrate actions in such environments, they need situational understanding and decision support on possible courses of action (COAs), their effects, and consequences.
Improved data management and interoperability, data fusion, automated analysis support, and visualization technologies will all be essential to achieving manageable cognitive loads and enhanced decision-making. These critical capabilities are not only for commanders and headquarters staffs, but also for platform operators, dismounted combatants and support staff.
Aside from traditional algorithms, simulation and AI are envisaged as tools that can enhance decision-making. To better define the task, some of this work has started by breaking down the Observe-Orient-Decide-Act (OODA) loop – an abstract model generically embracing different types of military decision-making processes – and then assigning future required AI roles and functions to each OODA step.
Situational understanding/orientation can make use of big data analytics and algorithms for data processing, followed by data presentation for timely abstraction and reasoning based on a condensed, unified view digestible by humans, yet rich enough to provide the required level of detail. This should include graphical displays of the situation, resources (timelines, capabilities, and the relations and dependencies of activities), and context (point of action and effects).
Plan generation/decisions should present a timely, condensed view of the situation, with probable adversary COAs and recommended own COAs, including advice on potential consequences to support decision-making. To this end, it must be possible to assess and validate the reliability of the AI to ensure predictable and explainable outcomes, allowing the human to properly trust the system.
As AI gets more advanced and/or time pressure increases, the human may only be requested to approve a pre-programmed action, or systems will take fully autonomous decisions. Requirements for such AI must be stringent, not only because unwanted, erroneous decisions should be prevented, but also because decision-makers will still be responsible for the actions the system takes.
What are the AI challenges? Let’s take a look!
1. Data Collection
To implement real-world scenarios, you need to collect data through web scraping or by coordinating with domain experts, then structure the data and store it in a database. Once the data is collected, you need to validate whether the quantity is sufficient for the use case, select a learning algorithm, and train the model using some of the acquired data.
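The collect-validate-split step can be sketched in a few lines. This is an illustrative helper of my own (`validate_and_split` and the `min_records` threshold are made up for the example); a real project would size the threshold from the use case.

```python
import random

def validate_and_split(records, min_records=100, train_frac=0.8, seed=0):
    """Check that enough data was collected, then carve out a training set.

    Raises if the dataset is too small for the use case; otherwise
    shuffles and returns (training set, held-out set).
    """
    if len(records) < min_records:
        raise ValueError(f"only {len(records)} records; need {min_records}")
    rng = random.Random(seed)
    shuffled = records[:]
    rng.shuffle(shuffled)
    cut = int(train_frac * len(shuffled))
    return shuffled[:cut], shuffled[cut:]

# 500 collected records pass validation and split 80/20
train, held_out = validate_and_split(list(range(500)))
```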
2. Non-representative Training Data
The training data should be representative of the new cases the model will make predictions for, i.e., the data used for training should cover all the cases that have occurred and that are going to occur. With a non-representative training set, the trained model is unlikely to make accurate predictions.
3. Generalized Predictions
Good machine learning models are those developed to make predictions for generalized cases: the model should perform well even on data it has never seen. Data quality is very important for getting accurate results without false predictions, so preprocessing is needed: filter missing values, and extract and rearrange what the model needs.
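The standard way to check generalization is to fit on one part of the data and measure error on a part the model has never seen. A toy sketch with a one-parameter least-squares model (all names here are my own, for illustration only):

```python
def mse(model, data):
    """Mean squared error of a model over (x, y) pairs."""
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

# toy linear data y = 2x, split into training and held-out portions
points = [(x, 2.0 * x) for x in range(20)]
train, held_out = points[:15], points[15:]

# closed-form least-squares slope through the origin: w = Sum(xy) / Sum(x^2)
w = sum(x * y for x, y in train) / sum(x * x for x, _ in train)
model = lambda x: w * x

train_err = mse(model, train)
holdout_err = mse(model, held_out)   # error on data the model never saw
```

If `holdout_err` is much larger than `train_err`, the model has memorized rather than generalized.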
4. Poor Quality of Data
In reality, we don’t start training the model directly; analyzing the data is the most important step. The data we collected might not be ready for training: some samples are abnormal, with outliers or missing values, for instance. Options include filling in the missing features/values using averages, simply removing the attributes/instances with missing values, or training the model both with and without those instances.
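The fill-with-averages option mentioned above can be sketched as follows (the helper name and toy table are invented for the example):

```python
def impute_with_mean(rows):
    """Replace None entries in each column with that column's mean
    over the observed values."""
    n_cols = len(rows[0])
    means = []
    for c in range(n_cols):
        observed = [row[c] for row in rows if row[c] is not None]
        means.append(sum(observed) / len(observed))
    return [
        [means[c] if row[c] is None else row[c] for c in range(n_cols)]
        for row in rows
    ]

# two columns with one missing value each
raw = [[1.0, 10.0], [3.0, None], [None, 30.0]]
clean = impute_with_mean(raw)
```

Dropping the affected rows instead is a one-liner (`[r for r in raw if None not in r]`), at the cost of losing samples.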
5. Irrelevant/Unwanted Features
If the training data contains a large number of irrelevant features and not enough relevant features, the machine learning system will not give the expected results. One of the important aspects required for the success of a machine learning project is selecting good features to train the model on, also known as feature selection.
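A simple filter-style feature selector ranks features by their correlation with the target and keeps the top k. This is one of several possible approaches, sketched with invented names and toy data:

```python
import math

def pearson(xs, ys):
    """Pearson correlation; returns 0.0 for a constant column."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def select_features(columns, target, k):
    """Keep the k feature columns most correlated with the target."""
    ranked = sorted(columns,
                    key=lambda name: abs(pearson(columns[name], target)),
                    reverse=True)
    return ranked[:k]

target = [1.0, 2.0, 3.0, 4.0]
columns = {
    "signal": [1.1, 2.0, 2.9, 4.2],   # tracks the target
    "noise": [5.0, 5.0, 5.0, 5.0],    # constant, carries no information
}
best = select_features(columns, target, k=1)
```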
6. Underfitting the Training data
Underfitting, the opposite of overfitting, generally occurs when the model is too simple to capture the underlying structure of the data. It’s like trying to fit into an undersized jumpsuit. It typically happens when we have too little information to construct an accurate model, or when we try to fit a linear model to non-linear data.
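The linear-model-on-non-linear-data case is easy to demonstrate: a straight line through y = x² data underfits badly, while giving the same fitting routine a squared feature removes the error. A toy sketch (all names invented for the example):

```python
xs = [-2.0, -1.0, 0.0, 1.0, 2.0]
ys = [x * x for x in xs]          # a non-linear relationship

def fit_least_squares(features, targets):
    """One-feature least squares through the origin: w = Sum(f*y) / Sum(f^2)."""
    num = sum(f * y for f, y in zip(features, targets))
    den = sum(f * f for f in features)
    return num / den

def mse(preds, targets):
    return sum((p - y) ** 2 for p, y in zip(preds, targets)) / len(targets)

# straight line on curved data: the model is too simple and underfits
w_lin = fit_least_squares(xs, ys)
lin_err = mse([w_lin * x for x in xs], ys)

# adding the squared feature lets the same routine capture the curve
w_sq = fit_least_squares([x * x for x in xs], ys)
sq_err = mse([w_sq * x * x for x in xs], ys)
```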
7. Memory networks
Memory networks, or memory-augmented neural networks, still require a large working memory to store data: this type of neural network needs to be hooked up to a memory block that the network can both write to and read from. Teaching computers to represent language and simulate reasoning remains a big challenge for deep networks.
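The write/read interface to such an external memory block is usually content-based: a read returns a softmax-weighted mix of stored values, weighted by how well each stored key matches the query. A minimal sketch of that addressing scheme (not any particular published architecture):

```python
import math

def write(memory, key, value):
    """Append a (key, value) slot to the external memory block."""
    memory.append((key, value))

def read(memory, query):
    """Soft read: weight every stored value by softmax similarity
    between the query and its key (content-based addressing)."""
    sims = [sum(q * k for q, k in zip(query, key)) for key, _ in memory]
    m = max(sims)
    exps = [math.exp(s - m) for s in sims]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(memory[0][1])
    return [sum(w * value[d] for w, (_, value) in zip(weights, memory))
            for d in range(dim)]

memory = []
write(memory, key=[1.0, 0.0], value=[10.0])
write(memory, key=[0.0, 1.0], value=[20.0])
answer = read(memory, query=[5.0, 0.0])   # query matches the first key
```

Because the read is a differentiable mixture rather than a hard lookup, gradients can flow through it during training.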
8. Focused Attention
Current networks tend to integrate a large set of features all at once, rather than focusing on small chunks of the input stimuli one at a time and then integrating the results at the end, as human attention does.
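Attention mechanisms address exactly this: score each input chunk against a query, softmax the scores, and combine the chunks with those weights, so the network concentrates on the most relevant region. A minimal scaled dot-product sketch with toy vectors:

```python
import math

def attention(query, keys, values):
    """Scaled dot-product attention over input chunks: score each
    chunk's key against the query, softmax, and return the weighted
    combination of values plus the attention weights."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    combined = [sum(w * v[i] for w, v in zip(weights, values))
                for i in range(len(values[0]))]
    return combined, weights

# three input "chunks"; the query points at the second chunk's key
keys = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
values = [[1.0], [2.0], [3.0]]
out, weights = attention([0.0, 4.0], keys, values)
```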
9. One-shot learning
While applications of neural networks have improved, we still haven’t been able to achieve one-shot learning. So far, traditional gradient-based networks need an enormous amount of data to learn, often through extensive iterative training. Instead, we have to find a way to enable neural networks to learn using just one or two examples.
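Metric-based one-shot learners sidestep iterative training at classification time: store a single embedded example (prototype) per class, then label a new input by its nearest prototype. A sketch with made-up classes and hand-written 2-D "embeddings":

```python
import math

def nearest_class(prototypes, embedding):
    """One-shot classification: with a single stored example per
    class, label a new embedding by its nearest prototype."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(prototypes, key=lambda label: dist(prototypes[label], embedding))

# exactly one labelled embedding per class, as in a one-shot setting
prototypes = {"tank": [0.9, 0.1], "truck": [0.1, 0.9]}
label = nearest_class(prototypes, [0.8, 0.2])
```

The hard part, learning an embedding in which such nearest-neighbour decisions are reliable, is what still requires large amounts of data.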
10. Object detection
Object detection is still hard for algorithms to perform correctly, because both classification and localization in computer vision and machine learning are still lacking.
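The localization half of the problem is usually scored with intersection-over-union (IoU): how much a predicted bounding box overlaps the ground-truth box. A self-contained sketch:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes, the
    standard score for judging localization quality in detection."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # overlap rectangle (empty if the boxes are disjoint)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# two 10x10 boxes shifted by half a width overlap with IoU = 1/3
score = iou((0, 0, 10, 10), (5, 0, 15, 10))
```

A detection is commonly counted as correct only when both the class label matches and the IoU exceeds a threshold such as 0.5, which is why weakness in either classification or localization drags the whole system down.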