These kinds of predicaments, which characterize much of what soldiers train to face, are immeasurably improved by emerging applications of AI; artificial intelligence can already gather, fuse, organize and analyze otherwise disparate pools of combat-sensitive data for individual soldiers.
Target information from night vision sensors, weapons sights, navigational devices and enemy fire detection systems can increasingly be gathered and organized for individual human soldier decision-makers.
However, what comes after this? Where will AI go next in changing modern warfare for Army infantry on the move? Teams are exploring a “next level” of AI. Fundamentally, this means not only using advanced algorithms to ease the cognitive burden on individual soldiers, but also networking and integrating otherwise stovepiped AI systems. In effect, this could be described as performing AI-enabled analytics on groups of AI systems themselves.
“Autonomy is doing things in a snipped way that can be connected. We can benefit from an overarching AI approach, something that looks at the entire mission. Right now our autonomy solves very discrete problems that are getting more complicated.”
What does this mean? In essence, it means combat commanders will not only receive AI-generated input from individual soldiers, but will also be able to compare different AI systems to one another and analyze them as a dynamic group.
For instance, multiple soldier-centric, AI-empowered assessments can be collected and analyzed in relation to one another, with an eye to how they affect the broader, squad-level combat dynamic. In particular, simultaneous analysis of multiple soldier-oriented AI systems can help determine the best course of action for an entire unit in relation to an overall mission objective.
“What is the entire mission and possible courses of action? Do we optimize the logistics flow? Find targets as the dynamic battlefield gets more complex? The Commander can draw upon advanced AI to explore new options.”
So in addition to drawing upon algorithms able to organize data within a given individual system, future AI will use real-time analytics to assess multiple systems simultaneously and how they impact one another, offering an overall integrated view. All of this progress, just as is the case now, will still rely heavily upon human decision-making to optimize its added value for combat. Integrating a collective picture that draws upon a greater range of variables will require soldiers to adopt new tactics and methods of analysis to best leverage the additional available information.
“When we have new and improved autonomy coming in, soldiers need to know how to use that. How do you keep the soldier always at the center and adapt to them as you adapt to the new AI?”
One soldier could receive organized sensor-driven targeting data relevant to a specific swath of terrain, while another AI system organizes variables to determine the supply flow of ammunition, fuel or other logistical factors.
“Data never seen cannot be learned. It is not about AI, but combining AI with a soldier who has the concept of an entire mission. AI provides information and then they get put together. When you are under fire, you are going to need different types of information.”
For example, comparing and analyzing various AI systems to form a collective picture might enable a commander to know: “If you go this way you will use more fuel, but it will be safer.”
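The fuel-versus-safety tradeoff above can be made concrete with a toy sketch: outputs from separate AI subsystems (a logistics model, a threat model, a route planner) are fused into a single course-of-action comparison. All names, weights, and values here are hypothetical illustrations, not any fielded system.

```python
# Hypothetical sketch: fusing outputs of separate AI subsystems into a
# single course-of-action comparison for a commander.
from dataclasses import dataclass

@dataclass
class CourseOfAction:
    name: str
    fuel_liters: float      # estimate from a logistics model
    threat_score: float     # 0 (safe) .. 1 (dangerous), from a threat model
    time_hours: float       # estimate from a route planner

def score(coa: CourseOfAction, w_fuel=0.2, w_threat=0.6, w_time=0.2) -> float:
    """Lower is better; weights reflect the commander's priorities."""
    return (w_fuel * coa.fuel_liters / 100.0
            + w_threat * coa.threat_score
            + w_time * coa.time_hours / 10.0)

routes = [
    CourseOfAction("direct", fuel_liters=40, threat_score=0.8, time_hours=2),
    CourseOfAction("flanking", fuel_liters=70, threat_score=0.3, time_hours=4),
]

best = min(routes, key=score)
print(f"Recommended: {best.name}")  # the longer but safer route wins
```

The point is the structure, not the numbers: each subsystem contributes one variable, and the commander's priorities (the weights) decide how the machine-generated inputs trade off against each other.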
While machine-learning techniques continue to accelerate the pace at which an existing AI database can integrate and perform analytics on new information, AI-infused computing can only make decisions or solve problems in relation to the information it already has stored. These databases are increasingly vast, almost limitless, yet they need to be fed consistently with not-yet-stored information of great relevance to wartime decisions.
Every autonomous system that interacts in a dynamic environment must construct a world model and continually update that model. This means that the world must be sensed through cameras, microphones and/or tactile sensors and then reconstructed in such a way that the computer ‘brain’ has an effective and updated model of the world it is in before it can make decisions. The fidelity of the world model and the timeliness of its updates are the keys to an effective autonomous system.
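The sense-update-decide loop described above can be sketched with a toy occupancy grid standing in for the “world model.” The 0.9 sensor confidence and the grid itself are invented for illustration; real systems fuse far richer sensor streams.

```python
# Minimal sketch of the world-model update loop: sense, fuse into the
# model, then decide. The occupancy grid and confidence values are
# illustrative assumptions, not any real system's design.
class WorldModel:
    def __init__(self, size):
        # probability that each cell is occupied; 0.5 = unknown
        self.occupancy = [[0.5] * size for _ in range(size)]

    def update(self, detections):
        """Fuse new sensor detections given as (x, y, occupied) tuples."""
        for x, y, occupied in detections:
            prior = self.occupancy[y][x]
            likelihood = 0.9 if occupied else 0.1  # assumed sensor confidence
            # simple Bayesian update for a binary cell state
            posterior = (likelihood * prior) / (
                likelihood * prior + (1 - likelihood) * (1 - prior))
            self.occupancy[y][x] = posterior

model = WorldModel(4)
model.update([(1, 2, True), (0, 0, False)])
print(model.occupancy[2][1])  # above 0.5: cell now believed occupied
```

Fidelity and timeliness show up directly here: a noisier sensor means a likelihood closer to 0.5 (slower belief change), and stale updates mean the grid no longer matches the world the robot is actually in.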
“AI & Robots Crush Foes in Army Wargame with Simulated Infantry Platoon, Reinforced with Drones and Ground Robots.” How big a difference does it make when you reinforce foot troops with drones and ground robots? You get about a 10-fold increase in combat power, according to a recent Army wargame.
“Their capabilities were awesome,” said the captain who commanded a robot-reinforced platoon in nearly a dozen computer-simulated battles.
The mission: dislodge a defending company of infantry, about 120 soldiers, with a single platoon of just 40 attackers on foot. That’s a task that would normally be assigned to a battalion of over 600. In other words, instead of the minimum 3:1 superiority in numbers that military tradition requires for a successful attack, the simulated force was outnumbered 1:3.
When they ran the scenario without futuristic technologies, using the infantry platoon as it exists today, “that did not go well for us.”
But that was just the warm-up, familiarizing the captain and his four human subordinates – three lieutenants and a staff sergeant, each commanding a simulated squad – with a complex physics-based model so fine-grained it can assess whether an individual simulated soldier is compromised in any given attack. The amount of information each soldier gets is limited: they only know what their simulated soldiers on the battlefield could know, so the simulation replicates real-world fog of war.
Then the wargame organizers added dozens of unmanned systems to the simulation. The immediate impact was on what the team could see. Instead of being limited to the immediate field of view of their simulated soldiers, they could send the drones ahead to scout. Instead of being able to engage the enemy about 500 meters away, they could spot and attack them from 5,000 meters.
“It was awesome to be able to increase that zone of where we knew exactly what was going on, without being right on top of the enemy. We were able to pretty much control the amount of area that probably a battalion-minus would have been able to control, with just one platoon.”
That doesn’t mean it was easy to adapt to the new tools. “The first time we used them was definitely a learning curve.” Drones can move much faster than ground robots, but they can’t carry as much firepower as a ground vehicle of similar size and cost. So, at first the fliers rushed ahead, found the enemy position, and then had to wait for the ground units to catch up.
Meanwhile the opposing players, controlling the enemy force, noticed the drones and, although they weren’t able to shoot them down, they could use the time to ready their defenses. The manned-unmanned team still won, but not as decisively as they wanted.
“Our UAS [Unmanned Aerial Systems] were able to identify exactly where enemy were, but we were unable to eliminate them without our ground vehicles. You have to figure out how you’re going to mass combat power,” rather than attack piecemeal.
“As we did more and more iterations, we were able to build in more control measures and have more of … a human in the loop.” After about the second or third run with all the advanced systems, the human players were able to coordinate the air and ground robots in a single synchronized assault.
Coordinating these high-tech combined arms – aerial drones, unmanned ground vehicles, and human foot soldiers – was a lot more complex than leading an ordinary infantry platoon. While young troops who grew up on video games know how to use computer control interfaces, they may not have the tactical experience required.
What’s next? The Army wants to build actual prototypes of select technologies for a series of real-world field tests and experiments in 2020. The Army will try out the individual prototypes, then integrate them into a series of increasingly complex experiments, culminating in a full “system of systems” field exercise.
Increasing the range of the platoon’s technologies 10-fold – from 500 meters to 5,000 – increases the area it has to control roughly a hundredfold. The key technology was a platoon artificial intelligence cloud, “the architecture that allowed our soldiers to be able to control robotic systems that were extending their reach within that battlespace.”
Modern drones can fly themselves from point to point. The human just has to set the destination. Even ground robots, which have to deal with rocks, trees, mud, and more, are increasingly capable of detecting obstacles – which requires a lot of AI brainpower to interpret sensor data – and finding their way around them.
So the simulation assumed the robots could find their own way to an objective without a human remotely dictating every twist and turn along the way.
“We were not flying these things, we were not telling them how to drive. We’re saying, ‘this 100 by 500 meter area, you have to go here,’ and they would figure out how to do it.”
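The quote above describes commanding an objective rather than a joystick: the operator assigns an area, and the robot derives its own route. A minimal sketch, assuming a flat grid and no obstacles; the greedy stepper below is a stand-in for a real path planner with obstacle avoidance.

```python
# Hypothetical sketch of area-objective tasking: given a rectangular
# goal area, the robot plans its own grid route to the nearest point
# inside it, with no human dictating each twist and turn.
def plan_route(start, area):
    """area = ((x_min, y_min), (x_max, y_max)); returns waypoints into it."""
    (x0, y0), (x1, y1) = area
    tx = min(max(start[0], x0), x1)   # nearest point inside the area
    ty = min(max(start[1], y0), y1)
    path, (x, y) = [], start
    while (x, y) != (tx, ty):
        x += (tx > x) - (tx < x)      # one grid step toward the target
        y += (ty > y) - (ty < y)
        path.append((x, y))
    return path

# Operator's command: "this 100 by 500 meter area, you have to go here."
route = plan_route(start=(0, 0), area=((40, 200), (140, 700)))
print(route[-1])  # the final waypoint lies inside the assigned box
```

The design point is the interface, not the planner: the human expresses intent as a region, and everything below that line – heading, speed, detours – is the machine's problem.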
“Soldiers are not controlling these systems. They are commanding the AI cloud to control these systems.”
Now, just getting places is not enough. Probably the most complex and critical task of the artificial intelligence – both the individual AI on each unmanned vehicle and the overarching AI in the platoon cloud – is to pull together sensor data, digest it, and condense all the millions of 1s and 0s into a single picture of the tactical situation that a human commander can understand.
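The condensation step described above can be illustrated with a toy example: raw detections from several platforms are deduplicated and summed up as a short contact report a commander can read at a glance. The platforms, target types, and grid squares are invented for illustration.

```python
# Illustrative sketch of sensor-data condensation: many raw detections
# from different platforms become one short, human-readable message.
detections = [  # (platform, target_type, grid_square) -- toy data
    ("uav-1", "tank", "NK4512"),
    ("uav-2", "tank", "NK4512"),     # the same tank seen by two drones
    ("ugv-1", "infantry", "NK4610"),
    ("uav-1", "infantry", "NK4610"),
    ("uav-3", "truck", "NK4418"),
]

# deduplicate: identical (type, grid) reports count as one contact
contacts = sorted({(t, g) for _, t, g in detections})
lines = [f"{t} at {g}" for t, g in contacts]
print("CONTACT REPORT: " + "; ".join(lines))
# CONTACT REPORT: infantry at NK4610; tank at NK4512; truck at NK4418
```

Five raw detections collapse to three contacts; at wargame scale, the same idea collapses millions of sensor readings into one tactical picture.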
“The thing that made this work was that platoon AI cloud that gave you the situational awareness of what was going on with that huge area.” That level of artificial intelligence doesn’t exist – at least not yet. To simulate its effects, the wargame relied heavily on human beings, a neutral “white cell” that took the sensor data, interpreted it, and summed it up in text messages to the team.
In the tech world, it’s called a “mechanical Turk”: human labor pretending to be automation. Developing an AI that can synthesize data this way in the real world is a major effort across the armed services, part of the wider push for what’s called Joint All-Domain Command & Control.
The human-driven process used in the wargame was intended to help the Army think about how to use such an AI without waiting for someone to build it first. “It’s a surrogate. It’s a model of the real world that has met all of the requirements to be accepted for analysis.”
The next step is building the real thing – and testing that it really works.
"Pentagon Proposes Automation as Seamless as a Strategy Game"
With a switch click, unknown tanks and infantry become clear as our tank-commanding avatar holds a tablet with the adversary positions illuminated in red. Finding these adversaries is an array of systems, from satellites to drone swarms to uncrewed reconnaissance vehicles on the ground.
Another click, and the hostile forces on the screen are replaced by scorch marks, the tank commander’s tablet illuminated with the range of strikes called in from air and land forces.
While it exists in simulations and in games, perfect information on a battlefield remains an impossibility. Creating a “red force tracker” – that is, an intelligence collection process that provides real-time information on where enemies are at all times – is a stretch for current technology. But it is one that could come closer to reality with autonomous robots scouting and providing information. This would take a great degree of information integration and distillation at the point of collection to work.
Rather than remote-controlled or teleoperated machines, future machines could be autonomous enough to require little human supervision, employ complex tactics, and allow a high degree of coordination with little need for communication.
“If we want to reduce load on soldiers, we have to get the equivalent of Siri for robots. We have to get the same interaction from a human-computer interface that a tank commander has with their driver, where it can maneuver in that space.”
Consider the example of the tablet-commanded robot scouts and called-in strikes. This is a vision of military command where a human sits at the center of an autonomous body of sensors, perhaps giving them objectives but not specific targets, then letting the machines process information to convert objects recorded by cameras into coordinates for where airplanes and artillery should place explosives. It’s a vision of war almost as seamless as a round of the real-time strategy game Command & Conquer.
1. Challenges facing military decisions clearly involve interdependencies and uncertainties
2. Assessing complex domains necessitates critical judgments to increase outcome probability
3. Quantifying big data alone to make decisions is often inadequate
4. Effective analyses require qualitative information to uncover insights into the behavioral domain
5. Select metrics based on their ability to represent battlefield conditions
6. Establish clear standards of spatial/temporal data reliability
7. Account for the impact of behavioral factors such as discipline
8. Use valid assumptions and establish category testing criteria
9. Consider the probability that the analysis is wrong
10. Identify risks to operations if the analysis is wrong
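Several of the points above – that quantitative metrics alone are inadequate, that behavioral factors must be weighed, and that the analysis may itself be wrong – can be sketched in one toy assessment function. Every metric, factor, and number below is an invented placeholder, not a doctrinal formula.

```python
# Hypothetical sketch of the checklist: blend quantitative metrics with
# a qualitative behavioral adjustment, and carry an explicit estimate
# of the probability that the assessment is wrong.
def assess(metrics, behavior_factor, data_reliability):
    """metrics: dict of normalized 0-1 scores; behavior_factor scales
    the raw score (e.g. unit discipline); data_reliability bounds how
    much confidence the result deserves."""
    raw = sum(metrics.values()) / len(metrics)   # quantitative only
    adjusted = raw * behavior_factor             # qualitative input applied
    p_wrong = 1.0 - data_reliability             # chance the analysis errs
    return adjusted, p_wrong

score, p_wrong = assess(
    metrics={"firepower": 0.7, "mobility": 0.5, "sensors": 0.9},
    behavior_factor=0.8,        # discipline under fire (illustrative)
    data_reliability=0.75,      # from spatial/temporal data standards
)
print(f"assessment {score:.2f}, probability wrong {p_wrong:.2f}")
```

Carrying `p_wrong` alongside the score keeps points 9 and 10 visible: the commander sees not just the answer but how much to trust it and what is at risk if it fails.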