Robotic systems can help reduce warfighters’ presence on the ground in dangerous areas. “Robots going into these areas ... have the dexterity and the capabilities to clear rubble.” These technologies have features that can be directly applied to robotic systems on the battlefield.
Unmanned platforms could be used to complement Marines working at forward arming and refueling points in remote locations. Such technology is already present in industry, like exoskeletons that provide support to manufacturing lines.
“That's the small, lethal footprint that’s required to get after for this logistics concept that we seek.” The Navy is also examining how it can integrate unmanned vessels into its logistics fleet.
Specifically, the Navy and Marine Corps are looking at how they can leverage medium and large-size vessels for logistics operations. “We don't have specific requirements or specific plans as of this moment, but that's going to be a big growth area for the Navy and Marine Corps over the next couple years.”
Additionally, the Navy is working to ensure that its logistics enterprise is able to support all warfighting functions “with increased speed, agility and survivability.” The service is employing multiple lines of effort to do so, which include diversifying distribution, improving sustainment, enabling logistics awareness, optimizing installations and supporting sustained operations.
Future fighting environments will push sea bases further from fleet Marine forces ashore. That means there will be a need for long-range connector capabilities that can operate from sea bases hundreds of miles off the coast. The services will also need to make their equipment lighter and more economical to use.
The Army’s future plans rely heavily not just on AI but also on ever-more-intelligent ground robots. Right now, a single Army operator can control about two ground robots. The Army plans to push that ratio to one human controlling a dozen robots.
That will require those future ground robots to not just collect visual data but actually perceive the world around them, designating objects in their field of perception. Those robots will also have to make decisions with minimal human oversight, since the availability of high-bandwidth networking is hardly certain.
The team ran robotic experiments in which ground robots demonstrated that they could collect intelligence, maneuver autonomously and even decipher what it meant to move “covertly,” with minimal human commands. The robots learn and apply labels to objects in their environment after watching humans.
The Army is determined to field a mid-sized robotic combat vehicle, but the prototypes are outstripping the datalinks that would connect them.
One prototype is a 10-ton, 20-foot electrically powered treaded minitank that can carry a small aerial drone on its back and can pop a smaller ground robot out of a front compartment. Army leaders say that they’ve also been experimenting with battle concepts that combine soldiers, unmanned tanks, and small UAVs.
Beyond its quest for semi-autonomous ground robots, the Army is looking into more and more data-intensive gear, such as the Integrated Visual Augmentation System, or IVAS, a set of augmented-reality goggles intended to give soldiers real-time visual data to help with tasks like targeting during operations, and also with training and simulation during downtime.
That’s also supposed to hook up with data feeds from tanks or other robots. But the rush to develop and field the newest tech concepts, and to integrate heavy amounts of data into all facets of operation, has driven the Army’s data needs skyward.
Open architecture will allow the Army to upgrade its communications and data networking as needed, as well as to incorporate higher levels of autonomy as those capabilities emerge. Teams experimented with integrating ground and aerial robots with the Ripsaw, but not yet in a communications-denied environment, in part because the Army has not yet published its specific needs for future mid-sized robotic combat vehicles.
Robots may help extend solid data connectivity further afield, serving as flying or rolling cellular towers in a moving mesh network. “We’re also looking at unmanned vehicles to expand the network, to expand the line of sight so we can push these robots out as far as possible, so that they get into the riskiest places on earth instead of the soldier.”
The Manoeuvre Roadmap will advance these strategies by listing the types of AI that will be needed year to year to support military strategy, and by maintaining a firm understanding of what AI is and how it will be used to benefit the organisation. This understanding should go beyond buzzwords and definitions.
The trickier and more relevant question for many is: how will it do so, when will it do so, and in which markets and applications will it have the most impact? Certainly, professional computing applications and virtual workspaces are among those most clearly in the crosshairs of machine learning.
“There's a reason for the confusion between artificial intelligence and machine learning.”
Vendors are rushing AI solutions to market before the ultimate decision-makers and buyers are up to speed on what they need. But the plot thickens when military leaders are asked about the specifics of what AI can accomplish—and how AI differs from machine learning and deep learning.
Very quickly, technology leaders recognize that they need to put AI and its various subcategories (e.g., machine learning, deep learning) into practice—and into a common business vocabulary that everyone can understand.
The first step is communicating the definitions of AI, machine learning (ML), and deep learning. There is some argument that AI, ML, and deep learning are each individual technologies; instead, leaders should view them as successive stages of computer automation and analytics built on a common platform.
On the first tier of this platform sits AI, which analyzes data and quickly delivers analytical outcomes to users.
Machine learning is the tier-two application of AI: it not only analyzes raw data but also looks for patterns in the data that can yield further insights.
Deep learning is a third-tier application that analyzes data and data patterns, but it goes even further. The computer also uses advanced algorithms developed by data scientists that ask more questions about the data with the ability to yield even more insights.
The best way to demonstrate these different layers of increasingly complex analytics is by finding a business example that can show the benefits to the decision makers in the business.
Let's take the example of traffic planning.
Tier one: AI
You develop an AI application that tells your traffic engineers and planners where the major traffic congestion points are located in the city. This assists them in planning for road repairs, stop lights, and other infrastructure that, hopefully, can relieve congestion in certain areas.
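A minimal sketch of this first tier, assuming hypothetical intersection names, speed readings, and a 15 mph congestion threshold (all invented for illustration):

```python
from collections import Counter

# Hypothetical sensor records: (intersection, average speed in mph).
# All names, speeds, and the threshold are invented for illustration.
readings = [
    ("5th & Main", 8), ("5th & Main", 11), ("Oak & 3rd", 25),
    ("5th & Main", 9), ("Riverside Dr", 12), ("Oak & 3rd", 30),
]

CONGESTION_THRESHOLD = 15  # mph; below this we call a reading "congested"

# Tier one: analyze the raw data and report where congestion occurs.
congested = Counter(loc for loc, speed in readings if speed < CONGESTION_THRESHOLD)
for location, count in congested.most_common():
    print(f"{location}: {count} congested readings")
```

The output is exactly what the planners asked for: a ranked list of congestion points, with no pattern-finding beyond the raw counts.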
Tier two: Machine learning
You further develop your AI/analytics so that it also looks for patterns in the data. For instance, it notices that traffic at certain intersections is most congested in the morning between 6 am and 8 am, or that traffic queues up in the evening ahead of a sporting event.
Knowledge of the situation gives planners and engineers more insight because now they can plan not only for traffic snarls but also for future events like football and hockey games.
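Continuing the sketch above with the same invented data, tier two asks a pattern question of the same readings, this time across the time dimension:

```python
from collections import defaultdict

# Hypothetical timestamped readings: (hour of day, average speed in mph).
readings = [(6, 9), (7, 8), (7, 11), (12, 28), (17, 10), (18, 12), (22, 35)]

CONGESTION_THRESHOLD = 15  # mph, assumed

# Tier two moves from "where is it congested" to "when is it congested"
# by looking for a pattern across the time dimension of the same data.
congested_by_hour = defaultdict(int)
for hour, speed in readings:
    if speed < CONGESTION_THRESHOLD:
        congested_by_hour[hour] += 1

peak_hours = sorted(congested_by_hour)
print(peak_hours)  # morning and evening rush hours stand out
```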
Tier three: Deep learning
Deep learning is where data analytics moves beyond raw data and data patterns. Deep learning adds specific algorithms that data scientists develop to further expand the querying and insights derived from the data.
Algorithms that could be added to the traffic analysis might include: What areas of the city will see the greatest population growth over the next ten years? Or, which roads will need major repairs in the next five years? Or, do weather projections say that we will have more or less snow over the next five years? By adding these algorithms on top of pattern and data analyses, users get a more complete picture of the situation they are trying to act on and assess.
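As a toy stand-in for that third tier, the sketch below layers a forward projection on top of the pattern analysis. It uses simple linear extrapolation rather than a real deep-learning model, and the population figures are invented for illustration:

```python
# Toy stand-in for the tier-three idea: layer a forward projection on
# top of the existing pattern analysis. This is simple linear
# extrapolation, not a real deep-learning model, and the population
# figures are invented for illustration.
population = {2021: 500_000, 2022: 510_000, 2023: 521_000}

years = sorted(population)
growth_per_year = (population[years[-1]] - population[years[0]]) / (years[-1] - years[0])

def project(year):
    """Estimate population in a future year from the recent trend."""
    return population[years[-1]] + growth_per_year * (year - years[-1])

print(int(project(2033)))  # ten-year population estimate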
Putting it all together into an AI roadmap
Being able to break down the differences between AI, machine learning, and deep learning is important because it shows leaders not only the different tiers and capabilities of AI automation but also the increasing levels of business insights that can be gained from it.
By visualising these different AI tiers in a strategic roadmap, an organisation can measure tangible results against mission objectives.
So a city, for instance, can say that next year it will have a comprehensive understanding of its road system, and where the traffic congestion is located. In year two, the city will be able to predict traffic jams from rush hour and special event traffic and be able to proactively inform travelers to use alternate routes. And in year three, the city will be able to develop plans for the future by assessing population/traffic growth and infrastructure repair shutdowns.
A new contract to apply artificial intelligence to Marine Corps maintenance could streamline logistics and help lessen fighting forces’ dependence on long supply lines. Ultimately, AI could enable the far-ranging manoeuvres envisioned by the multi-domain operations concept.
Most debate about military AI centers on robots, but professionals usually talk logistics. Without fuel, ammunition, spare parts, and maintenance, no weapon, manned or unmanned, is going anywhere.
What’s more, while AI has made great progress in recognising objects/targets and navigating the physical world, autonomous combat robots are far in the future.
The Marines will apply AI-driven “predictive maintenance” to part of their aging fleet of troop carriers, which are equipped with diesel engines, heavy-duty transmissions, and other components, with hundreds of millions of hours of metrics on the diesel engines alone. In the world of AI and machine learning, the more metrics you have, the more accurate your predictions get.
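A crude sketch of the predictive-maintenance idea, flagging vehicles whose telemetry looks anomalous relative to the rest of the fleet. The vehicle IDs and readings are invented, and the one-standard-deviation rule is a stand-in for the statistical models a real system would train on those hundreds of millions of engine-hours:

```python
import statistics

# Hypothetical per-vehicle telemetry: hours since overhaul and average
# oil temperature (deg F). Vehicle IDs and readings are invented.
fleet = {
    "AAV-101": {"engine_hours": 420, "oil_temp": 205},
    "AAV-102": {"engine_hours": 1350, "oil_temp": 238},
    "AAV-103": {"engine_hours": 610, "oil_temp": 211},
}

temps = [v["oil_temp"] for v in fleet.values()]
mean, spread = statistics.mean(temps), statistics.stdev(temps)

# Flag vehicles running hot relative to the rest of the fleet: a crude
# stand-in for a trained predictive-maintenance model.
flagged = [vid for vid, v in fleet.items() if v["oil_temp"] > mean + spread]
print(flagged)
```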
The concept of self-driving cars has been around for years, but only recently have advances in networking, satellites, and laser equipment made this dream a reality.
Vehicle manufacturers realized that they could use camera systems to relay data to an onboard computer that would process images of the road and create responses. Although we do not yet have robotic vehicles filling our roadways, some vehicles already contain numerous autonomous features that make driving easier and safer than ever before. Some models offer assisted parking or braking systems that activate automatically if they sense an issue. Vehicles can sense lane position and make adjustments there as well.
1. Reduce the number of accidents that occur on roadways.
When we are riding along in a driverless car, what happens on the road is no longer subject to the numerous bad behaviors that human drivers exhibit as they attempt to reach their destination. The great majority of automobile crashes result from human error. If computers have more control, there could be fewer behavioral incidents, lower damage-related costs, and reduced overall driving times.
While autonomous trucks aren't yet completely safe and accident-proof, especially in certain weather and road conditions, various reports claim that they will lead to a significant decrease in accidents compared to human-driven trucks.
2. Driverless cars could work with higher speed limits.
As human populations move toward the use of driverless cars, it may become possible to raise the speed limit for vehicles on extended trips. The computers would calculate the operations of the automobile to ensure the occupants remain safe. That means passengers could take care of other needs while the vehicle does the work of transportation without compromising the safety of the people who are on the roadways. And since the computers manage the vehicle safely, faster speeds allowed on the road could reduce driving time.
3. It could reduce the amount of fuel that we consume for transportation needs.
Computers would make it possible for driverless cars to maximize the fuel economy of every trip in multiple ways. Platooning would allow the vehicles to draft with one another, reducing the work the engines must do while on the road. Real-time updates on driving conditions could help automobiles avoid high-traffic areas, places where weather disruptions are possible, and other potential hazards in the road. Because these vehicles would likely communicate with each other while on the roadway, they could ensure that everyone reaps the rewards of this advantage while still providing a higher level of safety.
4. Driverless vehicles could reduce commute times.
Because a driverless car would likely communicate with the other vehicles around it and the roadway, it would know where to maximize speed and movement to ensure the quickest possible commute. Other automobiles would react when a vehicle needed to exit a highway, for example, preventing the need to force oneself into lanes, cut off drivers, or miss an exit. Vehicles could travel in bumper-to-bumper platoons while automatically merging to accommodate oncoming traffic.
5. Decrease in Traffic Jams and Congestion
Another often overlooked advantage that autonomous trucks can provide is that they can greatly reduce traffic jams and congestion, especially on highways. That's because autonomous trucks will be programmed to take the most optimal route and will also not be susceptible to delayed human reactions that often lead to traffic backups.
Because self-driving cars would rarely be involved in accidents, their potential to ease congestion is high. Not only that, because self-driving cars can communicate with each other, they could eliminate the need for traffic signals. By driving at a slower rate but with fewer stops, better-coordinated traffic would lead to less congestion.
7. Cost Savings
Another advantage is that autonomous trucks can result in cost savings for the companies that use them. Of course, there will be a large upfront investment in buying a vehicle that's capable of autonomous driving, but over time this is expected to pay off through a combination of increased efficiency and a reduction in the number of drivers who must be paid.
8. Nearly No Error
The incredibly complicated technology behind self-driving cars lets the onboard computer make hundreds of calculations a second: how far the car is from objects, its current speed, the behaviour of other cars, and its location on the globe. These highly accurate readings have virtually eliminated driving errors for test cars on the road, as the only accidents so far have occurred while human drivers were in control.
A computer might not be 100% perfect, but it is far closer to that standard than a human driver could ever be. By using complex algorithms that keep the vehicle in the correct lane, calculate the appropriate stopping distance, and process other information available on the road, this technology significantly decreases the risk of an automobile accident.
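The stopping-distance piece of that calculation can be sketched with basic kinematics; the friction coefficient and the computer's reaction time below are assumed illustrative values, not figures from any production system:

```python
# Illustrative stopping-distance calculation of the kind a planner
# might run. The friction coefficient and the computer's reaction time
# are assumed values, not figures from any production system.
G = 9.81           # gravitational acceleration, m/s^2
FRICTION = 0.7     # tire-road friction on dry asphalt, assumed
REACTION_S = 0.1   # computer reaction time in seconds, assumed

def stopping_distance(speed_ms):
    """Total distance in metres to stop from speed_ms (metres/second)."""
    reaction = speed_ms * REACTION_S              # distance covered before braking starts
    braking = speed_ms ** 2 / (2 * FRICTION * G)  # kinematics: v^2 / (2 * mu * g)
    return reaction + braking

print(round(stopping_distance(27.8), 1))  # from roughly 100 km/h
```

A human driver's reaction time is typically a second or more, which is the main reason a computer's stopping distance can be so much shorter at the same speed.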
9. Less space in between vehicles
Autonomous cars allow vehicles to travel closer together, putting more cars on the road while producing less traffic. We could also drive faster with driverless vehicles on the road.
Because computers would be handling the driving responsibilities for long-distance trips, the design of our highway system could support a higher speed limit on straight stretches of road. That means we could arrive at our destinations faster without increasing the risk of an accident. Each autonomous vehicle could calculate its distance from surrounding vehicles and determine the highest safe speed.
10. Sensor technology
Sensors could potentially perceive the environment better than humans can: seeing farther ahead, seeing better in poor visibility, and detecting smaller and more subtle obstacles. Plus, several cameras might be used at once, and cameras have no blind spots, so the system could be more aware and vigilant than a human driver ever could be.