The AI agent’s resounding victory demonstrated the ability of advanced algorithms to outperform humans in virtual dogfights. These simulations will culminate in a real-world competition involving full-scale tactical aircraft in 2024.
The algorithm easily defeated the fighter pilot in all five rounds of the final matchup that capped off a yearlong competition hosted by the Defense Advanced Research Projects Agency. The AI system gained a reputation throughout the competition for its aggressiveness and the accuracy of its shots.
But the system wasn’t perfect. The AI pilot often erred in basic fighter maneuvers, turning away from the enemy aircraft toward where it predicted the opponent would go, and it was then unable to recover for the remainder of those fights.
Military officials have long eyed the potential for AI to control aircraft, whether as part of a “loyal wingman” setup where a number of systems are controlled by one pilot, or through taking existing systems and making them optionally manned.
An AI beat a veteran human pilot 5-0 in DARPA’s virtual AlphaDogfight trials. That program, called Air Combat Evolution (ACE), will now advance to testing in actual fighter aircraft. But the current party line is that the US military does not seek to replace human judgment and control in combat operations, only to augment them.
“AI will support human decision-makers, not replace them. We see AI as a tool to free up resources, time, and manpower so our people can focus on higher priority tasks, and arrive at the decision point, whether in a lab or on the battlefield, faster and more precise than the competition.”
“Before AI can find its place, really, in the cockpit in a formation, it kind of has to earn its wings first.”
“It’s got to show that it can actually fly, that it can compete, and that it can contribute. A human pilot goes through pretty much the same thing. You learn to fly, you have to demonstrate you have a certain level of skill, then you can join a formation, be part of the squadron and go out and be part of the team.”
The virtual games will help flesh out how human and machine pilots can share operational control of a fighter jet to maximize the chances of mission success.
“When we put this particular part of the program together, there were some really critical questions that had to be answered, and we’ll see how that comes together during the trials.
We need to understand: can we actually develop autonomy algorithms that can operate in the very demanding environment of air-to-air combat?”
“And we’ll need to understand — later on, once we have those algorithms developed — when you do the human-machine teaming, how do you share the workload in the dynamic situations of aerial combat where aircraft are getting damaged and attritted.
How do you shift responsibilities back and forth so that you can have the air dominance that you want? Those are very complex problems.”
ADT was not so much a contest between human and machine as the herald of an era of “human-machine cooperation.” Applying this concept more generally to air warfare, key examples can be grouped into several types of cooperation: a flight group composed of either all human pilots or all autonomous drones, such as a swarm deployment of attritable payloads or platforms while the human pilot remains a rear-area local controller; offensive or defensive attritables or expendables with signature abilities; deployment of expendable payloads (e.g., from an “arsenal plane”) in salvos of homogeneous or heterogeneous weapon mixes; and deployment of cyber effects into a threat payload and/or platform for immediate or latent impacts.
AI is the critical enabler in each of these manifestations of human-machine cooperation for air warfare. Further advancements in human-machine cooperation will enable the pilot to focus more on larger-area mission planning while delegating the business of flying to the AI. No longer will the pilot selection process need to emphasize the quickest reaction times and the physiology for high g-forces; it should instead prioritize aptitudes for synthesizing information, task prioritization and decision-making.
These skills will be critical for leveraging and coordinating effects from a pilot’s own aircraft as well as other offboard assets. Abstraction layers will mean less onerous and specialized training is required to fly a plane, with more emphasis on how best to use it.
The Air Force has been promoting the concept of teaming unmanned and manned aircraft to undertake a variety of missions in future all-domain fights against peer adversaries — for example, championing the Skyborg effort to mate AI-driven drones with fighter jets to perform traditional ‘wingman’ roles, as well as other operations.
The bottom line for the Air Force’s fighter pilots is the issue of trust; i.e., ceding life-or-death control to an artificial intelligence. “We are very reluctant — the fighter pilot community is very reluctant — toward any sort of change. There’s almost an institutional requirement to not accept change. When it comes to autonomy, this has been avoided almost like the plague.”
The Air Force has said one of the first areas where the service could use AI pilots is training.
“As we look at how we’re going to bring those pilots’ assistants in, and then how we might move toward a more autonomous vehicle, one of our ideas is we’d like to try it in red air first.”
“So, we’d like to take it and see if there are ways to do some unmanned adversaries against our manned aircraft, and then let them learn and train their algorithms and see how good they get at fighting against us.”
The overarching goal of the program is to gradually build up trust in increasingly capable autonomous systems that eventually can take on the full gamut of air combat missions.
DARPA’s Air Combat Evolution (ACE) Program hopes to build trust in AI for increasingly complex air combat missions.
Building trust that AIs can understand human intentions is also the goal of another DARPA program, called Adaptive Distributed Allocation of Probabilistic Tasks (ADAPT).
ADAPT will assist commanders with decision-making by quickly crunching data in fast-changing battlespaces and improving cooperation between human and AI agents.
"ADAPT will take a significant step forward in human-AI collaboration so warfighters and intelligent technology can reason and work together to make better, faster decisions than either could do on their own." By learning from its human counterparts, taking into account their goals, preferences and constraints, these more informed agents can guide AI in forecasting, creating and adapting action plans as missions evolve."
In a search-and-rescue scenario, for example, these advanced AI models and agents will evaluate millions of possible scenarios so commanders can choose the best plan, minimizing casualties and risks.
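A minimal sketch of what that kind of plan evaluation could look like, assuming a Monte Carlo approach: candidate plans are scored by simulating many randomized outcomes and the one with the lowest expected risk is selected. The plan names and casualty-rate ranges below are invented for illustration, not drawn from ADAPT.

```python
import random

# Hypothetical plans with assumed (min, max) simulated casualty-rate ranges.
PLANS = {
    "airlift_north": (0.05, 0.15),
    "ground_convoy": (0.40, 0.60),
}

def expected_risk(rate_range, trials=10_000, seed=0):
    """Monte Carlo estimate of the mean casualty rate for one plan."""
    rng = random.Random(seed)
    lo, hi = rate_range
    return sum(rng.uniform(lo, hi) for _ in range(trials)) / trials

def best_plan(plans):
    """Return the plan whose simulated expected casualty rate is lowest."""
    return min(plans, key=lambda name: expected_risk(plans[name]))

print(best_plan(PLANS))  # the plan with the lower simulated risk
```

A real system would replace the fixed ranges with a learned outcome model and simulate far richer scenario dynamics, but the selection logic — sample, estimate, rank — is the same.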
"Humans excel at learning from one another but can only process so much incoming information. AI, on the other hand, has incredible computational abilities but needs to learn from and communicate with humans in order to be used effectively in dynamic team situations.
"These combined attributes will elevate a commander's expertise and decision-making in fast-changing, information-intensive environments so they can respond, and adapt quickly, while considering future possibilities." ADAPT will collect and analyze huge influxes of data for humans, optimize strategy and reallocate assets on the fly.
“The idea of the AlphaDogfight Trials was essentially to be an accelerator, or an incubator, for a lot of the technologies that are flowing into this larger program. In ACE, we’re really demonstrating trusted, scalable, human-level, AI-based and AI-driven autonomy for air combat.”
The overarching ACE concept is aimed at allowing pilots to shift “from single platform operator to mission commander,” in charge not just of flying their own aircraft but of managing teams of drones slaved to their fighter jets. The goal is not just to tackle the fundamental problems plaguing autonomous system development, but also to speed actual solutions.
“These performers have gone down very different paths, chosen very different approaches in AI, and for us that’s a bonus, because we not only get to see how these algorithms perform, but we get to see how these different areas of AI work.
“This can inform later investments if it turns out that there are particular areas that need further development or if there are things that offer new advantages that we previously hadn’t anticipated.”
AI pilots face simulated ‘adversaries’, each based on a typical scenario involving offensive and defensive maneuvers.
The virtual games have attracted an enormous amount of attention within DoD, the various services and industry. “We’re trying to see if there are some other solutions available to let everybody in. But again, this shows the depth and breadth of interest in this particular area, and we’re very glad to see that kind of interest in this program.”
Developments in AI could lead to “hyperwar” — a type of conflict and competition so automated that it would collapse the decision-action loop, eventually minimizing human control over most decisions.
We need to encourage the organizational transformation necessary to adopt safer, more explainable AI systems to maintain our competitive edge, now that the technical transformation is at our doorstep.
Software, AI, autonomy — these are the ultimate weapons. These technologies are the difference between hundreds of old MiG-19 and MiG-21 fighter jets lying in scrap yards and their transformation into autonomous, maneuverable, so-called “attritable,” or expendable, supersonic drones built from abundant airframes, equipped with swarm coordination and the ability to operate in contested airspace.
Gone are the days when effectiveness and capability can be ascribed to individual systems and platforms. Now, it’s all about the network of assets, how they communicate, how they decide to act, and how efficiently they counter the system that is working in opposition to them. An individual aircraft carrier or a squadron of strategic bombers is no longer as independently meaningful as it once was.
In the emerging environment, network-connected, cognitive systems of war will engage each other. They will be made up principally of software, but also of legacy weapons platforms, humans, sometimes in combat, and newer assets capable of autonomous decision and action.
The picture of the environment in which they operate across time and space will only be made clear by intelligent systems capable of fusing massive amounts of data and automatically interpreting them to identify and simulate forward the complex web of probabilities that result.
Which actions are likely to be successful? With what degree of confidence? What are the adversary’s most likely counter-moves? The large scale, joint application of autonomously coordinated assets by a cognitive system will be unlike anything that has come before. It is this fast-evolving new paradigm, powered by artificial intelligence at every level, from the tactical to the strategic, that demands our attention. We must no longer focus on individual platforms or stand-alone assets, but on the cognitive system that runs an autonomous “Internet of War”.
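The three questions above — which actions are likely to succeed, with what confidence, against which counter-moves — can be sketched as a worst-case (minimax) evaluation over a small payoff table. The actions, counter-moves, and success probabilities here are all invented assumptions for illustration.

```python
# success[a][c] = assumed probability our action a succeeds if the
# adversary responds with counter-move c; all numbers are hypothetical.
success = {
    "strike_west": {"reinforce": 0.70, "withdraw": 0.90},
    "strike_east": {"reinforce": 0.40, "withdraw": 0.95},
}

def worst_case(action):
    """A capable adversary picks the counter-move that minimizes our success."""
    return min(success[action].values())

def choose_action():
    """Pick the action with the best guaranteed (minimax) success probability."""
    return max(success, key=worst_case)

action = choose_action()
print(action, worst_case(action))  # strike_west 0.7
```

Note the logic prefers the action that is robust to the adversary's best response ("strike_west" guarantees 0.70) over the one with the higher best-case payoff ("strike_east" at 0.95), which is exactly the forward-simulation reasoning the paragraph describes, scaled down to a toy table.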
One concern is that artificial intelligence has been broadly misunderstood as a product or a feature. It is not. It is a science, much like physics or mathematics. Its applications will lead not merely to incremental enhancements in weapon systems capability but require a fundamental recalculation of what constitutes deterrence and military strength.
For example, the combination of AI elements — visual recognition, language analysis, the automated extraction of topical hierarchies/ontologies, control of systems with reinforcement learning, simulation-based prediction, and advanced forms of search — with existing technologies and platforms, can rapidly yield entirely new and unforeseen capabilities. The integration of new AI into an existing platform represents a surprise in its own right. But the complex interactions of such platforms with others like them can create exponential, insurmountable surprise.
Integrating the “LEGO bricks” of intelligence and autonomy into conventional platforms results in unconventional upgrades. A fighter jet with autonomy isn’t just a 1950s-era write-off; it becomes a system with new potential, diminished logistics dependencies, and an enhanced efficacy that goes far beyond an engine or radar upgrade. Broadly, the consequences of using AI to revitalize and reinvent conventional platforms will be transformative.
Despite the change occurring globally in value shifting from the physical to the digital, and the tremendous latent potential of AI, our Defense Department has not traditionally shown its best when it comes to understanding, acquiring, or deploying software capabilities. Hardware platforms come far more naturally to our acquisition professionals.
We can hope for a change of heart and perspective, but absent that, in order for AI to be meaningful to them in the near term, we must reinvent, enhance, and reimagine existing platforms just as we build new ones, so we can cost-effectively fulfill needs and create significant capabilities that open the door to even greater future potential. Briefing after briefing on the potential of AI, or distributing primers on machine learning inside the confines of the Pentagon won’t lead to critical adoption; the performance that results when AI is integrated into platforms will be the proverbial proof that lies in the eating of the pudding.
To compete in this new era of exponential technologies, the U.S. military and our intelligence agencies need both digital and physical systems powered by artificial intelligence. Imbued with synthetic cognition, such systems can make a meaningful difference to every branch of our armed services.
A serious effort to fuel the development of such systems will lay the groundwork for true, full-spectrum AI adoption across DoD. But for any of this to become reality, long held views and processes in DoD must change.
An aggressive and fast-track effort to incorporate AI into existing and new platforms will have to be adopted. In the age of hyperwar, our willingness to embrace commercial innovation, our decisiveness in acknowledging that we live in a post-platform era, and most importantly, the speed with which we operationalize new investments, will be the attributes that lead to success.
It's all about augmenting personnel efforts across battalions, regiments and divisions to raise the readiness levels of the entire force. Our troops inside the wire should have the knowledge, technical acumen and agility to support all of the operations and technology our troops outside the wire are running.
1. Invest in both garrison and tactical systems and infrastructure that are capable of running and sustaining the increased compute power that comes with training and deploying AI programs.
2. Conduct these initiatives in parallel with operations to ensure red and blue force efforts are complementary and requirements are aligned, by having an algorithm alert to discrepancies or gaps in the operational plan and the intelligence needed to execute it.
3. Display both a geographic and temporal representation of multi-intelligence data with a natural language generated summary of the data. Include the ability to break data into individual entities as needed and ingest analyst annotations into the automated summary.
4. Automatically map finished intel products to the priority intelligence requirements they help answer, with automated caveat classification of documents tied to user alerts/permissions. Include smart search capabilities for that repository so analysts can find relevant products more efficiently using NLP and machine learning.
5. Capture workflows and product development in a shared space so knowledge gaps are reduced between shifts or rotations. Use automation to track knowledge gaps and alert users to update analysis and finished products when significant knowledge gaps are filled.
6. Measure impact of operational intelligence (and any associated collections or requests that contributed to that intelligence) by automating inputs and processes that would serve as operational measurements.
7. Apply analytic rigor to the intelligence analysis process and automate a measure of objectivity into levels of confidence tied to intelligence assessments.
8. Add cognitive search into the massive data repositories analysts are required to sift through to move beyond keyword search and enable contextual search at an enterprise level.
9. Provide in-depth training on AI systems and set the standards by which the technology should augment the human analytic process but by no means replace the analyst behind the screen. In short, tie the technology into existing workflows and adjust workflows to account for technological innovation.
10. Use natural language processing (NLP) and machine learning to automatically classify and match incoming data to indicators and warnings being monitored. Provide alerts on trending topics, keywords or themes that may indicate emerging tactics, techniques and procedures.
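Item 10 above — matching incoming data to monitored indicators and warnings — can be sketched at its simplest as token-overlap scoring, a deliberately crude stand-in for a trained NLP classifier. The indicator names, keyword sets, and report text below are invented for illustration.

```python
# Hypothetical monitored indicators, each with an assumed keyword set.
INDICATORS = {
    "uav_activity": {"drone", "uav", "quadcopter", "loiter"},
    "jamming": {"gps", "jamming", "spoofing", "interference"},
}

def tokenize(text):
    """Lowercase whitespace tokenization; real systems would use an NLP pipeline."""
    return set(text.lower().split())

def match_indicators(report, threshold=2):
    """Return indicators sharing at least `threshold` tokens with the report."""
    tokens = tokenize(report)
    return sorted(name for name, keywords in INDICATORS.items()
                  if len(keywords & tokens) >= threshold)

print(match_indicators("uav observed in loiter pattern near the base"))
# ['uav_activity']
```

A production system would replace the keyword sets with learned embeddings or a supervised classifier so that paraphrases and emerging phrasings still trigger alerts, but the pipeline shape — tokenize, score against each indicator, alert above a threshold — is the same.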