Autonomy is generally defined as a machine's ability to execute tasks with limited to no human intervention. Advances in autonomy are driven by converging technologies such as AI, robotics, big data, and advanced motion sensors. Autonomous systems can involve a built-in human control mechanism -- human-in-the-loop, a human override mechanism -- human-on-the-loop, or minimal-to-no human involvement -- human-out-of-the-loop.
Autonomous systems can be conceptually divided into two categories: processes and assets. Autonomous processes include those capabilities driven by machine learning, big data, cloud storage, and AI to automate procedures and functions.
Major advances in autonomous processes could support mission planning, training, decision-making, administrative roles, and business functions. Autonomous assets include the physical equipment and resources the Services can use to carry out missions. These assets fall primarily into three categories: unmanned aerial vehicles (UAVs), unmanned underwater vehicles (UUVs), and unmanned surface vehicles (USVs).
USVs receive less attention than their aerial counterparts but hold enormous potential. USVs support cross-domain integration and increase the capabilities of other unmanned systems with their large payloads, power reserves, and endurance. They can also help overcome anti-access/area-denial environments by projecting information operations, military deception campaigns, and electronic warfare capabilities. Current projects aim to have swarms of autonomous vessels conducting both surveillance and security operations.
Autonomous assets can act as a major force multiplier. UAVs, USVs, and UUVs can increase the strength of the force and material readiness while the Navy’s requirements for deployments, readiness, and forward presence remain high.
“Plans to Accelerate Fielding of Unmanned/Autonomous Robotic Systems Face Technical Challenges”
The Pentagon is planning to use robots to carry out the dangerous, and often tedious, elements of combat.
The Services are testing new ways of pairing troops with air and ground robots at the squad level, with their sights focused on enhancing how the squad works on the battlefield with robots and advanced targeting and sensing gear.
Squads are using air and ground vehicles to detect physical and electromagnetic threats, and are able to demonstrate the ability to communicate and collaborate even while operating on the edge of connectivity.
One program will give aviators a robot co-pilot with autonomous capability to take the load off pilots so they can focus on mission tasks other than flying.
There is an ongoing effort to develop new technologies that would “extend squad awareness and engagement capabilities without imposing physical and behavioural burdens.”
Efforts aim to speed the development of new, lightweight, integrated systems that give infantry squads awareness, adaptability, and flexibility in complex environments, enabling dismounted troops to more intuitively understand and control their complex mission environments.
Those efforts fit within wider work being done by the Close Combat Lethality Task Force, a group set up to enhance close combat capabilities for infantry, special operations, scouts and some engineers.
Squad Sensing detects potential threats at a squad-relevant operational pace. Capabilities of interest include multi-source data fusion and autonomous threat detection.
Squad Autonomy increases squad members’ real-time knowledge of their own and team locations in GPS-denied environments using embedded unmanned air and ground systems. Capabilities of interest include robust collaboration between humans and unmanned systems.
“Each run, they learned a bit more about the systems and how they could support the operation. By the end, they were using the unmanned ground and aerial systems to maximise the squad’s combat power and allow a squad to complete a mission that normally would take a platoon to execute.”
Troops have been equipped with a variety of robotic and autonomous systems with the aim of improving areas such as combat mass, soldier lethality and overall information gathering.
In one scenario, soldiers used robotic engineering vehicles to clear an obstacle, while a small quadcopter flew overhead to provide infrared imagery before armored infantry rolled in to take an enemy position.
Robotic systems with varying levels of autonomy were a key part of the exercise, ranging from radar-equipped drones for detecting buried IEDs, to small two-wheeled robots that are thrown into buildings to search for enemy fighters.
A related challenge continues to be lack of experience using unmanned and autonomous systems, with commanders using exercises to better understand capability enhancements as well as the inevitable shortfalls.
“This is a real opportunity to bring stuff into the field to see if military users will use it the way industry thinks they will use it. There’s no one single piece of kit that will solve all our problems; it’s a combination of something in the air such as a surveillance asset, something on the ground, perhaps with a weapon on it or just doing logistics, but then it all links through an information system where you can pass that data and make better decisions to generate tempo.”
One issue is an increasingly crowded radio frequency spectrum, especially as several unmanned systems compete for space to beam back high-resolution data from onboard sensors. “The problem is when they start cutting each other out. We are dealing with physics here; if we want to have great high-definition video passing across the battlefield, we need to trade somewhere else.”
Not only will there be a need to ensure that the control systems do not interfere with each other, but also that leaders “will have to be convinced that new systems are not simply too vulnerable to jamming and other disruptive techniques by an adversary.”
A promising development from trials is the ability to optionally man a standard vehicle using kits that can be fitted within a few hours, including a remote-controlled infantry fighting vehicle and a lightweight tactical vehicle.
Troops in the exercise used the vehicles in unintended ways, utilising the onboard camera as a surveillance tool. Squads also used vehicles to help in entering buildings and to carry supplies or troops.
“What we have found is that when troops are using these vehicles they just want to jump on the vehicle because it goes faster than they can, and you can move groups very quickly on them.” For safety reasons the soldiers were not allowed to hop on board during the exercise. “Optionally manned is good, but we don’t know if it needs to be optionally manned with a steering wheel and a seat.”
Additionally, autonomous assets strategically support principles such as distribution and maneuver by leveraging “additional weapons and sensors over large areas” and optimizing the “strategic depth of the force.” Both airborne and surface-borne drones can support intelligence collection and targeting requirements for multi-domain battlespaces and over-the-horizon amphibious operations. Providing adequate fire support for landing forces remains a challenge, and autonomous drones could overcome it by acting as mobile mini-mortars with increased on-station times.
Autonomy and man-machine teaming can allow leaders to make better decisions faster. Military leaders must “be prepared to make decisions at the speed of relevance.” When the speed of relevance is the speed of electrons, the Navy will depend on autonomy to remain a relevant fighting force.
The military already uses autonomous systems for offensive and defensive missions. Various levels of autonomy support mobility, targeting, intelligence, and interoperability. Autonomy empowers homing missiles, navigation, and autopilot capabilities. Basic targeting systems use automated target recognition to identify objects and support non-kinetic targeting for ISR collections.
Counter-artillery batteries and Phalanx close-in-weapons-systems can engage automatically upon detecting a threat. Recurring and rules-based tasks such as scheduling replenishments at sea, naval weapon-target assignment plans, dynamic frequency allocations, and planning daily aircraft routing are candidates for integration with AI in the near future.
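Weapon-target assignment, one of the rules-based planning tasks named above as a near-term candidate for AI support, is at heart an optimization problem. A minimal sketch of a greedy heuristic follows; all kill probabilities and target values here are hypothetical illustrations, not drawn from any fielded system.

```python
# Greedy weapon-target assignment sketch. Each weapon is assigned to the
# target where it destroys the most remaining expected value. All numbers
# below are invented for illustration only.

def greedy_assignment(pk, values):
    """pk[w][t] = hypothetical probability weapon w kills target t;
    values[t] = hypothetical value of target t."""
    surviving = list(values)  # expected surviving value per target
    plan = []
    for w, row in enumerate(pk):
        # pick the target where this weapon destroys the most expected value
        t = max(range(len(values)), key=lambda j: row[j] * surviving[j])
        plan.append((w, t))
        surviving[t] *= (1 - row[t])  # discount the value already at risk
    return plan, surviving

pk = [[0.9, 0.2], [0.3, 0.8], [0.5, 0.5]]  # hypothetical kill probabilities
values = [10.0, 8.0]                        # hypothetical target values
plan, surviving = greedy_assignment(pk, values)
```

A greedy pass like this is not optimal in general (the full problem is NP-hard), which is exactly why the text flags such tasks as candidates for AI assistance.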
The Navy recently created its first underwater drone squadron. Future uses of USVs are under-explored but hold substantial promise. USVs have significant advantages over UAVs and UUVs with regard to endurance and payload capacity for prolonged operations.
Previous exercises highlighted the ability of USVs to relay instructions from shore to underwater assets, in this case by ordering the launch of a UAV that a UUV was carrying.
Most USVs are directed toward missions such as observation and collection, physical environment mapping, countermeasures, countering small boats, and testing that involves automated payloads and autonomous coordination with multiple ships.
The Navy and Marine Corps are uniquely suited to benefit from autonomous systems, given attributes that welcome autonomy: empowering lower-skilled workers to perform higher-skilled work, replication for large-scale operations, faster-than-human reaction speed, superhuman precision, extended patience, and operations away from reliable communications. Some strides are being made to foster autonomy, but more can be done.
Most AI systems require some level of guidance from humans. Sailors and Marines will require instruction and training on these technical systems, just as officers will require education on how to integrate them into operations and planning. Educating front-line leaders on the capabilities of autonomous systems should be a priority.
As military forces move through human-built environments, they should consider the possibility that remote or autonomous machines, legged as well as winged, could be traversing them in the same way.
The programme is exploring precision engagement of threats to maintain compatibility with infantry weapon systems without imposing weight or operational burdens that would negatively affect mission effectiveness. Capabilities of interest include distributed, non-line-of-sight targeting and guided munitions.
Non-Kinetic Engagement disrupts enemy command and control, communications, and use of drones. Capabilities of interest include disaggregated electronic surveillance and coordinated effects from distributed platforms.
The military is carrying out a number of experiments in communications, EW, loitering munitions, and targeting. The Services are looking for ways to enhance infantry capabilities using manned-unmanned teaming.
Augmented Spectral Situational Awareness and Unaided Localisation for Transformative Squads are being tested using autonomous robots with sensor systems that detect enemy locations, allowing squads to target the enemy with a precision grenade before the enemy can detect their movement.
Small units using the Electronic Attack Module were able to detect, locate, and attack specific threats in the radio frequency domain, part of larger efforts to put more detection and fires at lower echelons in ground force units.
This important work is presently done by humans, who often have to physically place detonation charges on the mines they find. Some day, autonomous robots could perform the same task with less operational risk.
The Swarm Diver is a surface or underwater drone that can release swarms of smaller autonomous underwater robots to scout, identify, and counter threats in littoral waters. Autonomy is key here, as communicating underwater is difficult, and communicating with above-water assets from underwater is especially tricky without an intermediary.
Should the Swarm Diver project work as intended, swarms of autonomous robots could be the long-awaited answer to the enduring threat posed by naval mines.
The Air Force is doubling down on efforts to fuse text, video, and virtually every potential source of information together through artificial intelligence. The goal is to change the way every commander, airman, and even thinking machine makes decisions. The Air Force has begun a programme called Data to Decision, or D2D.
The D2D objective is a complete cycle of understanding: predicting what we expect to encounter, prescribing what can be done to help understand the environment, then finding, fixing, tracking, targeting, engaging, and assessing anything, anytime, anywhere, in any domain.
“We want to improve every decision -- those made by airmen or by machines.”
That includes decisions about where a piece of information needs to go next: “How you aggregate all the data, take the pieces that matter for a mission, move it where you need to move it -- even for different purposes -- then think about where it needs to move and why.”
The programme is grand in its ambitions to use a wide variety of data, extending well beyond traditional aerial surveillance footage to potentially include, well, everything: live-streaming diagnostic data from jets, drones, and other aircraft; data from soldier-worn sensors; and more.
“We will use all data available, including unstructured, open sources. We are shifting thinking to focus on the object in question and looking for any data that may be relevant. Machines will help us determine relevance, especially as we aggregate in ways never before considered. So all the potential data cited could be part of the relevance discussion.”
How do you take all of that data and use it to output a dynamic and credible picture of the future? Here, too, the strategy is diversity of approach. The potential artificial intelligence tool kit the Air Force is using ranges from neural nets and deep learning approaches to less exotic machine learning methods, useful for tasks where the data is structured or the variables are fewer, like chess.
It’s not the first time the military has experimented with fusing a wide variety of data to create a fuller picture of the battlespace by combining and crunching data. One earlier effort used multiple neural nets to populate a larger engine that then used statistics to output predictions and probabilities.
D2D uses some lessons from past efforts. “AI and machine learning can help by creating neural networks through disparate data sources that would then help us better understand the potential to use the nets for Air Force mission execution.”
That’s key, because statistical methods don’t provide the most accurate answer the first time you use them. You run the formula over and over again, stirring in new data or information that you receive, coming closer and closer to a prediction in which you can have confidence.
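The iterate-and-refine idea above can be sketched with a running estimator: each new observation is stirred into the estimate, and the uncertainty shrinks as data accumulates. The "signal" below is synthetic and the rates are invented; nothing here is drawn from the D2D programme itself.

```python
# Sketch of iterative statistical refinement: re-running an estimator as
# new data arrives, with confidence growing (standard error shrinking)
# on each pass. Uses Welford's online mean/variance update.
import math
import random

random.seed(0)
true_value = 5.0          # hypothetical quantity being estimated
estimate, n, m2 = 0.0, 0, 0.0

for _ in range(1000):
    obs = true_value + random.gauss(0, 1)  # noisy new observation
    n += 1
    delta = obs - estimate
    estimate += delta / n                  # stir the new data in
    m2 += delta * (obs - estimate)

# Standard error of the mean shrinks as 1/sqrt(n): the prediction you can
# have confidence in emerges only after many iterations.
std_error = math.sqrt(m2 / (n - 1)) / math.sqrt(n)
```

After 1,000 observations the estimate sits close to the true value and the standard error is a few hundredths, illustrating why the first run of a statistical method is rarely the answer you keep.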
“The components of the D2D Program have transitioned into operational prototypes in a variety of mission areas and include activities in video, image, and text analytics. The D2D technical products and lessons learned are critical elements of the DoD roadmap for future capability in machine learning and artificial intelligence.”
On that roadmap, there are high expectations. “You’ll see some modest investments in that space. If you look inside what we are pushing to aviators, it’s a lot of information to absorb. So it’s about, can you actually push the information to the human. Can you make it make sense to the human, to help the human make decisions at greater speed? That last part is the hard part. The human has to decide faster. So parsing the data and how you decide actually matters. That’s a technology piece that’s really going to require some experimentation.”
“Some would argue that is man-machine teaming, but it can be described differently: you take that same person working in that cockpit, with all that information streaming to them, and you augment it by a factor of ten -- teaming that airplane with offboard sensors that push ten times more data to that same person who is making target engagement decisions, and giving them the weapons that will allow them to prosecute those targets.”
Now you are talking about more than what fighter pilots call beyond-visual-range shots with high confidence. You’re talking about being able to prosecute the enemy at hundreds of kilometers away from your airplane with very high confidence that you’re actually engaging the targets you intend to engage, because we don’t engage targets unless we have high confidence.
1. Targets are not always detected upon first visit; the probability of detection on a visit is independent of overall time spent searching. A memory-less process implies that the searcher will return to the target after detection, i.e., the process itself does not include a termination-upon-detection requirement.
2. The durations of pre-detection visits to a target, during-detection visits -- when detection actually occurs -- and post-detection visits should all be equivalent and should be exponentially distributed.
3. The pre- and post-detection visit durations are exponential and essentially identical. However, the during-detection visit durations tend to be long -- in the case of the test data, nearly twice as long as pre- and post-detection durations -- there were few very short-duration visits, and the distribution lacked a tail of long-duration visits.
4. The data show that during-detection visits are more normally than exponentially distributed; the delay may have been attributable to a motor response and some sort of inhibition in the pilot movement system.
5. A different strategy could have been used for verification leading to a detection than for checking when no detection decision was made.
6. The distribution of first-visit times is actually close to an exponential, but only after a delay. This result is consistent with observations in the scene perception study indicating that observers do not begin searching the scene immediately when it appears.
7. When an observer is confronted by a new scene, the observer first spends a short time glancing around at it to “orient” and extract the spatial layout of the scene. Some visual and possibly cognitive process has to extract scene information sufficient to delineate points of interest before the search process as described by the model can begin.
8. That the distribution of gaps -- the times between visits to the target -- is described by a memory-less statistical process implies that the gaps before and after detection will be distributed in the same way. After detection, however, the gaps are not distributed exponentially, and the search process returns to the target too soon after detection for it not to have learned, i.e., search is not a memory-less process.
9. The detection process, i.e., the assumption that detection is based on time exploring the target and not on overall search time, makes predictions within the framework of the statistical process: the probability of detection is exponential in the time on target.
10. The distribution of the number of targets detected across all trials in the data set is described by two or three exponentials. This result is supported by the data when the search time is shifted to account for the delay in first visit.
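The core quantitative claims of findings 6-9 -- a fixed orientation delay before first visits, exponentially distributed arrivals thereafter, and a probability of detection exponential in time on target -- can be sketched in a few lines. The rate and delay parameters below are illustrative assumptions, not values fitted to the cited data.

```python
# Sketch of the shifted-exponential first-visit model and the exponential
# detection law, P(detect) = 1 - exp(-detect_rate * t_on_target).
# All parameters below are invented for illustration.
import math
import random

random.seed(1)
orient_delay = 0.4   # seconds spent extracting scene layout (assumed)
visit_rate = 2.0     # first visits arrive at this rate once search begins
detect_rate = 3.0    # detection hazard per second of time on target

def p_detect(t_on_target):
    """Probability of detection after t seconds on target (finding 9)."""
    return 1.0 - math.exp(-detect_rate * t_on_target)

# Finding 6: first-visit times are exponential only after a delay, so
# simulate them as delay + Exp(visit_rate) draws.
first_visits = [orient_delay + random.expovariate(visit_rate)
                for _ in range(50000)]
mean_first_visit = sum(first_visits) / len(first_visits)
# The sample mean should sit near orient_delay + 1/visit_rate = 0.9 s.
```

Fitting a plain exponential to `first_visits` without subtracting the delay would misestimate the rate, which is the point of shifting the search time in finding 10.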