Marine Magnet Dispatch Service Centre

Top 10 Goals: Build Enterprise Network for Timely/Persistent Info Exchange in Demanding Conditions

10/20/2020


 
Pentagon planners have been developing a new joint war-fighting concept, a plan meant to guide how the Defense Department fights in the coming decades.

The concept centers on a key idea — one that requires tossing decades of traditional thinking out the window. “As opposed to what many leaders have been doing their entire career, the biggest difference is that in the future there will be no lines on the battlefield.”
 
The current structure is all about dividing areas of operations. “Wherever we go, if we have to fight, we establish the forward edge of the battle area, we’ve established the fire support coordination line, the forward line of troops, and we say: ‘OK, Army can operate here. Air Force can operate here.’”

“Everything is about lines” now.  But to function in modern contested environments, “those lines are eliminated.”
 
What does that mean in practice? The Pentagon has put forth a vision in which every force can both defend itself and have a deep-strike capability to hold an enemy at bay, built around a unified command-and-control system.
 
“A naval force can defend itself or strike deep. An Air Force that can defend itself or strike deep. Marines that can defend themselves or strike deep. Everybody.”
 
That “everybody” includes a coalition framework, so this plan only works if it can integrate others. And for the entire structure to succeed, the Pentagon needs to create the Joint All-Domain Command and Control capability currently under development.

 “We had disparate services with their concepts of fighting. We never really had a manner to pull all the services together to fight as a coherent unit.”

The war-fighting concept will directly “drive some of our investments” in the future and tie together a number of ongoing efforts within the department — including the individual combatant command reviews and the Navy’s shipbuilding plan.

There are some critical components from those reviews — how you command and control the forces, how you do logistics — and there are common themes in there that belong in a joint war-fighting concept.
 
If we had that concept right now, we could use it to influence the ships we are building, the number of ships we need, and what we want the combatant commands to do.

So this war-fighting concept is filling a gap, and leadership wishes it existed now. “It would inform all of the decisions that we make today, because now is about positioning ourselves for success in the future.”
 
There are aspects of this joint war-fighting concept that we need to share, but we have to preserve its classified nature and be careful, to a degree, about what is reported.
 
It’s Time to Fix the Command Post.
 
Headquarters need to optimize mobility, survivability and interoperability for the future fight.

Under the conditions of the modern battlefield, brigade combat teams (BCTs) need to find ways to mitigate adversary threats through improvements in mobility and survivability as well as a reduction in the signatures of BCT command posts and tactical assembly areas.
 
It’s time to ask whether enough has been done to prepare Army units for the challenge.
 
What Has Been Done?
 
In the past, the acquisitions process often yielded technology and systems in compartmentalized warfighting-function packages rather than holistically. Establishing the Army Futures Command signalled a unified development and acquisition effort and represented a substantive first step in identifying and solving interoperability problems across the force.
 
Leaders recognized the shortcomings of legacy systems and openly committed themselves to breaking “stovepipes” of data and technology as the Army shifted away from counterinsurgency and stability operations and prepared increasingly for large-scale combat operations.
 
Emphasizing the use of analog Military Decision Making Process products enables command posts’ continuity of operations through displacements and minimizes the need for PowerPoint slide decks.
 
Army Futures Command and unit-level initiatives to optimize tactical operations center processes continue to permeate unit headquarters across the Army, along with the occasional incorporation of commercial off-the-shelf technology and significant upgrades to communications like Tactical Network Transport On-the-Move.
 
Digital systems certainly have a place in the way forward, but without a solution to the multi-hour system boots and shutdowns needed to establish connectivity, many units opt to fight with a minimalist architecture.
 
The Army is trying to improve capabilities offered by the intelligence system of record, the Distributed Common Ground System – Army (DCGS-A). These are all steps in the right direction to resolve gaps in how tactical data is pushed to commanders in austere, resource-constrained environments.
 
The Army’s efforts to modernize also come with the acknowledgement that at times, commanders are being saturated with “too much data,” which can be alleviated with software improvements that allow for targeted data pulls.
 
What More Can Be Done?
 
Miniaturize Army Footprints
 
As the Army shifts back toward preparing for large-scale combat operations, it needs to look hard at the size of maneuver units’ footprints. Some units have experimented by, for example, downsizing sprawling, counterinsurgency-era mission command centers through the optimization and decentralization of legacy systems.
                                          
The current force is vulnerable to precision fires cued by visual and electromagnetic signatures, which threat forces use to target formations from the top down. The Army should move the small tent cities onto highly mobile platforms that come equipped with built-in electronics and communications suites.
 
The command centers of the future must be capable of rapid displacement with the ability to maintain scalable mission command. In order to reduce signatures, it is also in the interest of developers to hone communications suites with dedicated bandwidth that can reach back to processing, exploitation, and dissemination cells in the consolidation area.
 
No tent can replicate the mobility of mission command on wheels. Even the smallest, most mobile tents still take upwards of twenty to thirty minutes to break down and set up, even with practiced crews. Achieving this kind of mounted mobility may compel the Army to sacrifice vehicle armor for expandable capability.
 
Ultimately, this risk must be weighed against the anticipated character of future kinetic fights. The current inability to achieve rapid system displacement is so pervasive that it will require a refashioning of BCT and battalion command posts across the force.
 
The use of expandable vehicles at several BCTs, repurposed from legacy workshop vehicles, has shown that formations can adjust to the new reality. While effective, the Army should look to field these types of systems from the ground up for the next generation.
 
As legacy intelligence and signals systems are phased out, it is imperative they are not replaced with vehicles at a one-to-one ratio. Acquisitions should look to fashion systems that bring functions under fewer roofs and can displace at a moment’s notice.
 
A smaller physical presence, complete with built-in retractable antennas and dishes, will be key to dispersion and the elimination of antenna farms and other time-consuming displacement items. This will require the elimination of multi-hour system boots and shutdowns or the permanent tasking of these systems to the consolidation area.
 
Footprint reduction must also come with the institutional concession that displacement crew drills cannot fully mitigate the threat of BM-21 multiple launch rocket system salvos. While drills are important and should be trained, commanders should not let proficiency in crew displacement create a false sense of security. Systems modifications to promote mobility will be a huge challenge for the acquisitions community.
 
We need viable frameworks to get intelligence to “move at the speed of decisive action” while performing key tasks to inform commanders’ decision-making processes. The transition to reduce vulnerable forward footprints with condensed, modular systems will take time and vision.
 
At the core of this shift should be a sustained commitment to maneuver supported by precision fires. The Army will not perpetuate these successful institutional norms without more maneuverable BCT and battalion headquarters and complementary fires development to protect units from long-range artillery.
 
The unmitigated effects of adversaries’ intelligence, surveillance, and reconnaissance, electronic warfare, and artillery against the American constellation of ground command posts could lead to overmatch and potential defeat on a future battlefield.
 
Mitigate Bandwidth Issues and Improve Interoperability
 
Establishing and maintaining sustained communications in remote operational environments is a struggle for even the best-trained organizations.
 
While some units master the tenets of “shoot” and “move,” the last building block, “communicate,” often remains elusive. Many units fail due to overly complicated architectures, lack of support, lack of priority, and user error.
 
Tactical bandwidth requirements continue to outpace system capabilities, with different warfighting functions demanding alternate networks to meet classification needs. The “soda straw” analogy is a popular visualization of this issue, describing how voluminous data can only flow at the rate of available “piping” or bandwidth.
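The soda-straw arithmetic is easy to make concrete: transfer time scales inversely with the slice of bandwidth a stream actually gets, so a few large products sharing one narrow pipe quickly blow past a tactical timeline. A minimal sketch, with entirely hypothetical numbers chosen only for illustration:

```python
def transfer_seconds(data_mb: float, link_mbps: float, share: float = 1.0) -> float:
    """Seconds to move `data_mb` megabytes over a `link_mbps` megabit/s link
    when this stream gets `share` (0-1) of the pipe."""
    usable_mbps = link_mbps * share
    return (data_mb * 8) / usable_mbps  # MB -> megabits, divided by rate

# One 50 MB imagery product over a notional 2 Mbps satellite link,
# alone versus split four ways with other data streams
alone = transfer_seconds(50, 2.0)               # 200 s
shared = transfer_seconds(50, 2.0, share=0.25)  # 800 s

print(f"alone: {alone:.0f} s, shared 4 ways: {shared:.0f} s")
```

The point of the sketch is that adding streams does not just slow each one a little — every stream's delivery time grows in proportion to how thin its slice of the straw becomes.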
 
The typical BCT pipes several levels of data networks, tactical radio, and satellite-based tracking systems simultaneously, saturating the signal environment. The lapses that occur when moving forward headquarters leave ample room for shutdown and startup errors.
 
The Army should seek to condense as many of these tactical data streams and functions as possible. One reliable network that is accessible across the operational environment is a reasonable aim, even if it makes networks slightly more vulnerable to electronic attack.
 
Some say the Army has already taken this step with the tactical SIPR network, but there are numerous systems that do not fall under this umbrella. This goes back to the Army’s culture of security, which is rigid by regulation but often loose in practice to reduce friction.
 
The Department of Defense consistently labels systems and documents “SECRET” when it’s not necessary, slowing down the dissemination and collection of battlefield data.
 
In essence, this creates operational barriers at lower echelons. A thorough review of future systems and documents should emphasize lowering classification when possible, reducing the security burden on commanders and freeing up bandwidth across the BCT for priority communications.
 
Achieving this kind of network would come with a tradeoff between communications and operational security in exchange for reliability and access. In this area, innovators have already started to step in, providing alternative solutions like soldier-level ATAK (Android Tactical Assault Kit) modules and encrypted communications that use cellular network backbones.
 
Though these might not work well in an austere environment and are more vulnerable to targeting by electronic warfare, the need to simplify user-level systems and encourage interoperability is paramount.
 
Simplification is an area where the Army has made some strides. There have been efforts to overhaul the legacy DCGS-A software suite to eliminate extraneous functions, condensing essential tools into fewer, more user-friendly applications.
 
With regard to mission command, BCTs regularly fail to achieve doctrinal expectations using currently fielded systems. The Army can learn from “self jamming” failures and build on successful simplification efforts to balance the risks between security and reliable mass communications.
 
Pairing on-the-move mission command with at-the-halt capabilities hampers commanders’ ability to maintain tempo. Mobility enhancement and dispersion enable commanders to reduce the threat to command posts and keep vulnerable elements out of the range rings of the enemy.
 
The Army doesn’t get to choose its next conflict or adversary, but it can prepare for the fight through investments in command nodes, communications, and trust in commanders’ equipment requests.
 
A blind brigade will never be able to mass and link/coordinate effects. On the other hand, a brigade enabled by highly mobile and redundant mission command, bound together by flexibility in tactics and communication, will stand ready to meet and defeat adversaries on a future battlefield.

Each aligned network project includes supporting Plans of Action and Milestones that encompass issues, risks, and mitigation plans to meet the overall strategic vision. Each project contains measurable parameters verified before commencement.

Project progress is measured against the originally set parameters, quantifiably tracking each project’s advancement. Planning, prioritization, and coordination of these project efforts are necessary to establish and maintain tempo and momentum in system development.

Here we provide objectives addressing technical methodology, supplying the engineering details and parameters of interrelated projects across the strategy. In other words, they provide the “how” of the technical parts, processes, and policies required to achieve assured C2.

1. Transform Air Ground Task Force Command and Control

Establish a warfighting network providing interoperable, always-present, and rapid access to information in any location or condition

2. Unification

Standardize the network by eliminating legacy systems, consolidating resources, and optimizing network operations

3. Applications

Establish and maintain application development standards, processes, and infrastructure that allow system capabilities to be delivered rapidly and efficiently

4. Installation/Tactical Processing Nodes

Standardize capabilities of regionally aligned Installation Processing Nodes, increasing effective access to secure information

5. Command and Control Network Survivability

Ensure survivability of the Command and Control network through the most austere, challenged, and contested conditions

6. Network Security

Establish appropriate measures to protect and defend data, users, systems, connections, and missions

7. Equipment

Maintain proven Command and Control capabilities, while fielding capable, relevant, and interoperable systems employed in innovative ways

8. Workforce Training and Resources

Establish and enforce disciplined workforce skill sets and processes for delivering effective and efficient enterprise capabilities across all warfighting-function requirements

9. Enterprise Command/Control Policy

Establish enterprise policies, processes, and standards that govern the implementation, operation, and sustainment of the Command and Control network

10. Information Technology Value

Influence and shape investment opportunities to demonstrate fiscal responsibility while guiding innovation throughout Capabilities Based Assessment
 


Top 50 Network Design Foundations Established at Project Demo to Test New Concepts and Modernisation Efforts

10/20/2020


 
At times, Army officials witnessed the progress of new technologies at Project Convergence. Other times, they saw room for improvement. “This is a major step forward in transforming the Army for the next 40 years. This may be the most important thing the Army is doing today,” outside of ongoing global operations.
 
Army leaders hope Project Convergence shows the service is more willing to abandon the customary hand-wringing over whether technology is completely mature, and instead jump into the fray to try and fail.
 
“Every one of these weapons systems, as they get anywhere close to a determination by Army leadership … we’re putting it in Convergence. It’s a much more effective means for operational tests.”
 
Perfection vs. progress
 
Project Convergence was meant to be hard, to challenge in the harshest of environments the systems and concepts that would be needed against sophisticated adversaries. It was so hot in the desert that aircraft tires blew up on the runway.
 
While the effort met and, in some cases, greatly exceeded expectations, it was wholly imperfect. In a small warehouse full of military leadership, reporters, foreign military counterparts,  and civilian engineers, the Army put its tech to the test.
 
At the front of the room were eight large television screens: The top four showed livestreams of the action occurring on the range — mortar fires, drones flying overhead and the targets set to be destroyed.
 
Yet the real action was taking place on the four bottom screens, where observers could see the Tactical Assault Kit — a software system giving operators a common picture of what was happening on the range — and other advanced software applications. And below those screens were 10 Army operators controlling all of the action from behind their monitors.
 
Leaders focused on those 10 people up front who were simulating a tactical operation, and a little less on the screens. “The only screens that I really cared about were the lower ones that showed how they were moving the pieces from the battlefield.”
 
In other words, Project Convergence was not about the boom. It was not about hitting the targets.
 
Which is good. Because not all of them did. On that day, the Army missed two targets during live-fire demonstrations. In the first miss, a modified Gray Eagle UAV used another drone as a munitions mule. The primary Gray Eagle identified the threat, ordering the other drone to drop a glide munition on the target. The first Gray Eagle was supposed to guide that munition to the target over the Army’s tactical network — however, a lost network connection led to a miss.
 
In the other miss, an XM1113 projectile failed to explode on its target after being fired from the first prototype of the Army’s Extended Range Cannon, which has the capability to fire beyond 70 kilometers. “This was the first time we had the XM1113 firing targets with high explosives inside of it, instead of inert, with a currently fielded precision guidance kit, not the next long-range precision guidance kit. It didn’t work every time. It was accurate, but it didn’t go off every time.”
 
 
Networks aren’t built in a day
 
Over the course of the effort, the Army was able to shorten its sensor-to-shooter timeline from 20 minutes to 20 seconds by using emerging artificial intelligence algorithms, experimental tactical networks and space-based sensors. This introduced a new pipeline that takes sensor data, transforms it into targeting data and delivers it to the best possible shooter at blazing speeds.
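The pipeline described above — sensor data in, targeting data out, best shooter tasked — can be sketched as a few composable steps. All class names, coordinates, and the distance rule below are invented for illustration; the real pipeline fuses far more sources and constraints:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    sensor_id: str
    lat: float
    lon: float
    target_class: str

@dataclass
class Shooter:
    name: str
    lat: float
    lon: float
    range_km: float

def km_between(lat1, lon1, lat2, lon2):
    # Crude flat-earth approximation (~111 km per degree), fine for a sketch
    return (((lat1 - lat2) * 111) ** 2 + ((lon1 - lon2) * 111) ** 2) ** 0.5

def best_shooter(det, shooters):
    """Nearest shooter whose range covers the detection, else None."""
    in_range = [s for s in shooters
                if km_between(s.lat, s.lon, det.lat, det.lon) <= s.range_km]
    if not in_range:
        return None
    return min(in_range,
               key=lambda s: km_between(s.lat, s.lon, det.lat, det.lon))

# A space-based detection routed to the effector that can actually reach it
det = Detection("sat-1", 33.0, -114.0, "SAM launcher")
shooters = [Shooter("ERCA-1", 33.2, -114.3, 70.0),
            Shooter("mortar-1", 33.9, -114.9, 7.0)]
print(best_shooter(det, shooters).name)  # ERCA-1
```

The speed gain the Army reported comes from automating exactly this kind of routing decision, which a staff would otherwise work through by hand over voice and chat.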
 
“Taking information from space-based sensors and passing into ground- and air-based effectors seems really simple and happens super fast, but it was very complex and took us weeks of hard coding in order to get it done.”
 
“It’s all about the ability to pass data, and if you can’t do that, a long-range cannon becomes interesting, but not really relevant.” During the effort, “we had all kinds of problems. That’s why we were rewriting code and fixing stuff constantly. But it worked.”
 
 
The network “was a plus and minus.” It was the backbone of everything the Army was doing at Project Convergence, but “like everything else we brought, what we had, it wasn’t designed for this.”
 
The network used during the event is being designed for a tactical brigade, not to operate in the constrained-bandwidth environment of Project Convergence. “The network worked. It just took a tremendous amount of work to get it to work.”
 
For example, the Army had a hard time getting the network to connect from ground assets to air assets, while ground-to-ground and air-to-air connectivity performed well.
 
The connectivity between ground and air was established — for the first time ever — through a mesh network of unmanned aircraft called Air-Launched Effects that were intended to serve as repeaters to extend connectivity beyond line of sight.
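The repeater idea — a ground node and an air node that cannot talk directly still reach each other through a chain of airborne relays — is essentially shortest-path routing over a connectivity graph. A toy sketch with invented node names and links:

```python
from collections import deque

def relay_path(links, src, dst):
    """Breadth-first search for the shortest hop path from src to dst."""
    graph = {}
    for a, b in links:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no chain of repeaters reaches the destination

# Ground station reaches the helicopter only through two ALE repeaters
links = [("ground", "ALE-1"), ("ALE-1", "ALE-2"), ("ALE-2", "helo")]
print(relay_path(links, "ground", "helo"))  # ['ground', 'ALE-1', 'ALE-2', 'helo']
```

In a real mesh the links appear and vanish as aircraft move, so the routing must be recomputed continuously — which is part of why throughput, not raw range, became the refinement priority.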
 
Some of the problems had to do with the conditions. “There were lots of days we couldn’t fly because of crosswinds. We’re continuing to refine the mesh network, and increasing — really not necessarily the range but the throughput capacity — is probably where we need to go.”
 
There were also tests that surpassed expectations. With some systems, it was expected that “swivel chair” methods would be needed to manually input information from one system into another. “We were able to actually automate machine to machine, which we weren’t expecting to be able to do.”
 
Project Convergence was easily the Army’s — and perhaps the entire military’s — largest test of emerging AI capabilities to date. AI algorithms were used to speed up the sensor-to-shooter chain at every point. But perhaps the biggest star of the event was FIRES Synchronization to Optimize Responses in Multi-Domain Operations, or FIRESTORM.
 
“What is FIRESTORM? Simply put, it’s a computer brain that recommends the best shooter, updates the common operating picture with the current enemy and friendly situation, and assigns the effectors that we want to use to eradicate the enemy on the battlefield.”
 
Observers at Project Convergence were able to see FIRESTORM’s recommendation pop-ups in real time — often in the blink of an eye — allowing operators to rapidly review and approve the algorithm’s recommendations.
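What the passage describes is a human-on-the-loop pattern: the machine proposes target-shooter pairings, and only the ones an operator approves become fire missions. A minimal sketch of that pattern, with a toy scoring rule — the real scoring model, data structures, and approval workflow are not public:

```python
def recommend(targets, shooters, score):
    """Greedy pairing: each target gets the highest-scoring free shooter."""
    free = list(shooters)
    pairs = []
    for t in targets:
        if not free:
            break
        best = max(free, key=lambda s: score(s, t))
        free.remove(best)
        pairs.append((t, best))
    return pairs

def operator_review(pairs, approve):
    """Only pairings a human operator approves become fire missions."""
    return [p for p in pairs if approve(p)]

# Toy preference: the cannon always outscores the mortar
score = lambda s, t: {"cannon": 2, "mortar": 1}[s]
pairs = recommend(["MLRS", "IFV"], ["mortar", "cannon"], score)
missions = operator_review(pairs, approve=lambda p: p[0] != "IFV")
print(missions)  # only the approved MLRS/cannon pairing survives review
```

The separation between `recommend` and `operator_review` is the design point: speed comes from automating the proposal, while authority to fire stays with the human.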
 
But the AI systems at play are still in development, and Army officials acknowledged they remain brittle.
 
While prepping for Project Convergence, the Army used more than 3.5 million images of opposing and friendly forces’ tanks and vehicles to build a database of images in enough different contexts to train the algorithms needed for the exercise. “Every image you get to train an algorithm just makes that algorithm better.”
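One common way such image counts grow is augmentation: each raw image yields several transformed variants, and each (image, transform) pair becomes a training example. The arithmetic below is purely illustrative — the source does not say whether the Army's 3.5 million figure involved augmentation, and the transform names are invented:

```python
# Hypothetical augmentation multipliers; real pipelines use many more
BASE_TRANSFORMS = ["identity", "flip", "rotate", "brightness", "crop"]

def augmented_examples(n_base_images: int, transforms=BASE_TRANSFORMS) -> int:
    """Each (image, transform) pair counts as one training example."""
    return n_base_images * len(transforms)

# e.g., 700k raw images x 5 variants = 3.5M training examples
print(augmented_examples(700_000))  # 3500000
```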
 
“Artificial intelligence is no different than a human being. It needs a number of repetitions to be perfect. The difference is they’ll be faster and better because it’s a machine. But you still have to have hundreds if not thousands of these repetitions to make it perfect. And you might have seen the fourth rep. Come back when there’s 400.”
 
Cultural shift
 
Project Convergence pointed the Army in the direction it must go if it wants to shape a force for future conflict, but the effort also pushed the service toward a major cultural change.
 
“It’s OK to fail. We’ve done some touchpoints with soldiers, and soldiers aren’t used to failing. It actually drives them crazy when you put a piece of immature technology in their hands that doesn’t work exactly like they expect it to work.”
 
“That’s part of the culture change: It’s OK. You’re learning, growing, and making better decisions and investments as you continue to mature this technology. This is absolutely the way forward.”
 
The Army has linked together experimental drones, super guns, ground robots, and satellites in a massive test of its future warfare plans. The service mounted the first demonstration of Project Convergence, bringing in some 34 fresh-out-of-the-lab technologies.
 
The goal: to show that these weapons and tools—linked and led by artificial intelligence—can allow humans to find a target, designate it as such, and strike it — from the air, from kilometers away, using any available weapon and in a fraction of the time it takes to execute that kill today.
 
It was an ambitious test that revealed how far Army leaders have come in their goal of networked warfare across the domains of air, land, space and cyberspace. It also provided a vivid picture of how much further the Army has to go.
 
The scenarios involved different phases of a land invasion. In the first phase, dubbed “Penetrate,” satellites in low Earth orbit detected enemy anti-air missile ground launchers. That info was passed to a ground processing station called the Tactical Intelligence Targeting Access Node, or TITAN, more than a thousand miles away.
 
The TITAN operator sent a target-data message to Yuma where a fire command was processed and sent to the Extended Range Cannon Artillery, or ERCA, the Army’s new 70-km super gun. Next, a scout helicopter — actually a surrogate for the Future Attack Reconnaissance Aircraft, or FARA — located the command-and control-node of the enemy air defenses, a wheeled amphibious armored personnel carrier, using an object-detection AI dubbed Dead Center onboard the drone.
 
An Air Launched Effects drone, or ALE, launched from the helicopter, provided a floating mesh network beyond 50 km. An autonomously flying Grey Eagle drone swooped in at 300 feet — far below its normal operating floor of 10,000 feet or so — and hit the target with a Hellfire missile.
 
With some of the key targets out of the way, the second phase, dubbed “Disintegrate,” sought to dissolve the remainder of the adversary’s anti-aircraft capabilities.
 
Helicopters serving as surrogates for Future Vertical Lift aircraft and an ALE looked for other targets, passing their sensor data back through the mesh network. An artificial intelligence called FIRESTORM — short for FIRES Synchronization to Optimize Responses in Multi-Domain Operations — took in the data, mapped the battlefield, and generated recommendations for which weapon should hit which target.
 
The ERCA gun fired a round, hitting a multiple launch rocket system some 56 kilometers away. One Grey Eagle transmitted targeting data based on visual information — not GPS or laser designation — to another one, which attempted to hit the target with a GBU-69 glide bomb. But it’s unclear if the munition actually released, as the communication link was briefly lost. The target was not destroyed.
 
In the third and final phase, “Exploit”, manned and unmanned ground vehicles began to move into the area. Operators used Aided Target Recognition Software, or AiDTR, to find new targets like armored transport trucks. FIRESTORM tasked Next-Generation Combat Vehicles, or NGCVs (also played by surrogates in the exercise) to hit the targets. Another small drone called a Tarot, also with AiDTR, launched and detected enemy infantry fighting vehicles. FIRESTORM issued orders to suppress the enemy with mortars until they could be hit directly.
 
Operators in the NGCVs ordered unmanned ground vehicles to launch even smaller helicopter recon drones. As new enemies showed up, FIRESTORM sent recommendations about which weapon to employ.
 
The action was at times difficult to follow. Multiple things seemed to happen at once, in part by design. Some of the technologies displayed, such as the Grey Eagle’s low-altitude autonomous flying and the coordination among some of the drones, would have made for impressive demonstrations by themselves.
 
 
The Army may not be able to ever fail cheap. But it can fail cheaper. And it can derive greater reward for the effort. The Convergence demonstration bore out the premise of the project: data can be sensed from a wide variety of sources, fused, passed through machine intelligence, used to designate a target and then hit that target in a literal fraction of the time it takes the Army to execute those steps today, from 15 minutes down to less than one. The ability to do that will determine victory or defeat, life or death, in future conflicts.
 
The electronic “kill chain” that the Army is building to connect other services’ long-range sensors to its new long-range shooters is increasingly agnostic about which sensors and shooters you plug in. “The sensor and the shooter are unimportant. It’s the pathway. Once the pathway was proven, it could be against any target with any shooter.”
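One way to read a sensor- and shooter-agnostic pathway is as a publish/subscribe bus: sensors publish detections without knowing who consumes them, and any shooter or decision aid can subscribe. This is a generic software pattern sketch, not the Army's actual design:

```python
class Pathway:
    """A minimal publish/subscribe bus; endpoints are interchangeable."""

    def __init__(self):
        self.subscribers = []

    def subscribe(self, handler):
        self.subscribers.append(handler)

    def publish(self, detection):
        # The pathway never inspects who produced or consumes the data;
        # every registered consumer sees every detection.
        for handler in self.subscribers:
            handler(detection)

bus = Pathway()
engaged = []
bus.subscribe(lambda det: engaged.append(("cannon", det)))  # one shooter
bus.subscribe(lambda det: engaged.append(("drone", det)))   # another shooter
bus.publish("SAM site at grid 123456")
print(engaged)  # both consumers received the same detection
```

Swapping in a new sensor or shooter means registering another publisher or handler — the pathway itself never changes, which is the property the quote is describing.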
 
 
 
 
1. Transform Air Ground Task Force Command and Control

2. Standardize the network by eliminating legacy systems, consolidating resources, and optimizing network operations

3. Complete transition to one unified warfighting network

4. Permit total force access to mission-relevant information regardless of location or time

5. Reduce costs, increase efficiency, and improve security

6. Ensure adherence to standardized permissions models, structures, and procedures across the range of military operations

7. Move beyond disparate network/segment architectures currently compromising operations and the protection of vital information

8. Enforce a comprehensive network architecture with standards, controls, measurable performance, and consistent C2 services

9. Unify Non-Classified Internet Protocol Router Network domains by eliminating legacy networks, systems, and applications

10. Position force to operate in the Joint and Coalition environment, including future Joint network security capabilities

11. Implement a virtual solution for access and releasability of classified information with Coalition partners

12. Advance integration with host networks aboard ships/platforms

13. Identify critical information exchange requirements, i.e., eliminate the need for multiple network configurations

14. Establish Enterprise Service Desk supporting the full spectrum of warfighting and business missions

15. Enable individual and organizational mobility

16. Deliver accessible and survivable “fighting hole” C2 information

17. Decrease the MAGTF’s footprint, power, and lift requirements

18. Allow enterprise data to be shared and analyzed

19. Support rapid deployment of mission-critical applications/services

20. Reduce network engineering/provisioning lead times to pursue mobility and operational flexibility

21. Move beyond reliance on “build-from-scratch” networks and services that impede speed and tempo

22. Improve ability to rapidly access common applications/services capability

23. Create secure means to share data for analytics

24. Establish network capabilities delivering mobility, operational flexibility, and warm-start resources

25. Expand network capabilities incrementally to all tactical units

26. Establish and maintain application development standards, processes, and infrastructure that allow system capabilities to be delivered rapidly and efficiently

27. Provide application mobility, function, purpose, and sustainment at reduced costs while ensuring interoperability and compatibility

28. Enforce standard applications that are properly hosted, deployed, and interoperable across hardware, platforms, and missions

29. Allow for increased automation and analytics across enterprise to enable faster decision cycles

30. Move past chaotic environment of disparate systems and information sources counterproductive to mission success

31. Establish application standards that improve performance and lower maintenance costs without unsustainable overhead

32. Streamline the application development process to improve the ability to quickly innovate and adapt to mission needs

33. Standardize and automate application build, hosting, and deployment

34. Migrate tactical and business applications to Enterprise Information Technology Services

35. Standardize capabilities of regionally aligned Installation Processing Nodes to increase effective global access to secure information

36. Provision a Comply-To-Connect capability that greatly enhances network performance and security

37. Facilitate single identity via a common user experience across the force with no distinction between garrison and tactical environments

38. Align network capabilities for systems at risk of failing, posing security vulnerabilities across the force

39. Create automated ability to ensure network security compliance, e.g., “see” end-to-end or push security patches across the enterprise

40. Standardize Tactical Processing Nodes and Tactical Entry Points

41. Provide access to applications and information at the tactical edge to support execution

42. Consolidate and onboard data and application requirements from non-core data centers to a common hosting platform

43. Deliver resiliency and redundancy across the force to mitigate adversarial use of technologies disrupting and degrading C2 capability

44. Reduce electronic signatures

45. Alleviate overreliance on highly vulnerable space-based capabilities

46. Increase training for disconnected, intermittent, low-bandwidth environments

47. Enforce resilience and survivability performance parameters on C2 network architecture and systems

48. Incorporate radio waveforms designed for operations in a contested environment

49. Utilize emerging technologies for more survivable systems, e.g., free space optics, network foraging, and signal emission control

50. Train in information-contested/denied operating environments with realistic threats and operational challenges


Top 50 Network Initiatives Establish Results of Demo, Build Foundation for Sensor/Shooter Interoperability

10/20/2020


 
Unity among military branches and a combined, all-domain effort could be the difference in winning the large-scale, multi-domain battles the Army expects to fight in the future. To help achieve that goal, the services are developing Combined Joint All-Domain Command and Control, or CJADC2, which will impact units across the branches.

Services aim to establish CJADC2 at the most “basic levels” by defining mutual standards for data sharing and service interfacing, designed to deliver CJADC2 capabilities to the warfighter more quickly and to promote a “shared” understanding of concepts and capabilities.

In the CJADC2 concept, each of the military branches would connect sensors, shooters and command nodes in a “mesh network” that would allow commanders more options and the ability to act faster. Each branch must learn to interface with each other and successfully access data, reconnaissance and intelligence collected from across joint networks.

“The core challenges of the future fight are speed and scale. The future fight will be much faster, and the joint force will have more sensors and more shooters. It will be more widely distributed than ever before.” The initiative will test the impact of the Army’s Project Convergence on joint force training as well as exercises and demonstrations.

Project Convergence is the Army’s plan to merge its joint force capabilities and keep pace with technological change, testing artificial intelligence capabilities along with its ability to transmit information from sensors in the air, in space, and on the ground to enable the joint force to quickly collect, analyze, and transmit data at machine speed.
 
The exercise is designed to help make informed battlefield decisions faster. “Imagine allowing operators to choose what data feeds are important to them and for others to be able to subscribe to get the information they need. The power of this architecture is unlocked by services working together to connect networks and share information at machine speed. That’s all-domain superiority. And this event took us one step closer to realizing that future.”
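The subscribe-to-feeds idea described in that quote is essentially a publish/subscribe pattern. As a minimal sketch only, with invented topic names and message contents, the mechanics look something like this:

```python
from collections import defaultdict

class FeedBroker:
    """Toy publish/subscribe broker: operators subscribe only to the
    data feeds they care about; publishers need not know who listens."""
    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Deliver the message to every subscriber of this topic only.
        for callback in self.subscribers[topic]:
            callback(message)

broker = FeedBroker()
received = []
broker.subscribe("ground-sensor", received.append)
broker.publish("ground-sensor", {"track_id": 1, "pos": (34.1, -116.2)})
broker.publish("air-sensor", {"track_id": 2})  # no subscriber; not delivered
print(received)  # only the ground-sensor message arrives
```

The design point is decoupling: new sensors (publishers) and new consumers (subscribers) can be added without changing each other, which is what lets separate services connect networks without bespoke point-to-point links.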

“We will have all sensors, the right shooter, and the right command and control coming together to give us the speed and the depth in the battlefield that we’ve never had before.”

Critical to the Army’s contribution to JADC2 will be its new Multi-Domain Task Force because of its ability to provide long-range precision effects, as well as intelligence, information and cyber capabilities.

“We’re going to have the capability for long-range precision fires at ranges that we’ve never even considered before. “And this will give us a cross-domain capability to work with the joint force and coalition partners, and give us capabilities that are really going to make a difference.”

The Army is entering a new era, and after observing Project Convergence it’s clear that the Army is indeed reaching a new level of experimentation and testing. The Army is converting ideas, prototypes, and various modes of operating (i.e., new ways of fighting) into new capabilities.
 
This is a departure from the past, where the lion’s share of the budget and programming narrowly focused on incremental upgrades to existing platforms, adding armor, speed, reach, and lethality at exorbitant cost over long periods of time. This approach is dated, and the Army is aggressively pursuing innovation to attain a decisive edge over rising competitors.

There is a distinction between modernization and innovation. The key to innovation is finding cost-wise solutions to operational problems the nation is facing on a fast-paced and rapidly changing battlefield. At best, modernization has been the approach to make incremental changes, but innovation is now the collective aim.

In an unprecedented level of collaboration, Project Convergence has moved the needle forward and sped the kill chain from minutes to seconds, combining over thirty technologies to alter the speed, complexity, and overall geometry of the battlefield. To be sure, the experimentation was not perfect, but the aggressive use of machine learning and artificial intelligence to sense, detect, and assign shooters to service targets was fast and rather remarkable.
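The “assign shooters to service targets” step can be pictured as an assignment problem. The sketch below uses a deliberately naive greedy nearest-shooter rule with invented names, positions, and ranges; it illustrates the concept, not the system actually demonstrated:

```python
import math

def assign_shooters(targets, shooters):
    """Toy sensor-to-shooter pairing: each target gets the nearest
    still-unassigned shooter within range. A real pairing engine
    would also weigh weapon effects, timing, and deconfliction."""
    assignments = {}
    available = dict(shooters)  # name -> (x, y, max_range)
    for t_name, (tx, ty) in targets.items():
        best, best_dist = None, float("inf")
        for s_name, (sx, sy, rng) in available.items():
            d = math.hypot(tx - sx, ty - sy)
            if d <= rng and d < best_dist:
                best, best_dist = s_name, d
        if best is not None:
            assignments[t_name] = best
            del available[best]  # one engagement per shooter in this toy model
    return assignments

targets = {"T1": (10, 0), "T2": (0, 50)}
shooters = {"artillery": (0, 0, 70), "mortar": (5, 0, 15)}
print(assign_shooters(targets, shooters))  # {'T1': 'mortar', 'T2': 'artillery'}
```

Even this toy version shows why automation speeds the kill chain: the pairing is computed in milliseconds, whereas the manual equivalent involves voice calls and staff coordination measured in minutes.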

We need our partners to have the ability to collect on information requirements to support the close and deep fight, probe enemy gaps, and, if needed, strike hard and fast, destroying as much enemy force as possible with speed, precision, and economy of force. In essence, they need to plug into our battle networks with their sensors and shooters while adhering to interoperability standards and their national caveats.

Our combined headquarters require the ability and capacity to rapidly coalesce an alliance of forces, spanning multiple echelons and across multiple domains, into a highly dense digital environment. Success requires digital alignment across all warfighting functions and a common set of tools in a mission partnered environment.
 
Improving on the exercise performance will require more people writing and analyzing code and data in real-time, closer to the action — and that’s not just for the experiment. One of the biggest changes the Army envisions for the way it fights is bringing a new type of soldier, trained in software development, data science and AI, to work and rework algorithms on, or very near, the front lines.
 
The service needs “code writers at the edge” of battle because “the software, the algorithms change… The enemy is going to change things too. Their systems are going to change, so we have to have code-writers forward to be responsive to commanders, to say, ‘that algorithm needs to change because it’s not moving the data fast enough.’ And if we own the code, like we own software, we can make those adjustments forward.”
 
Today’s process for processing, exploitation, and dissemination of data sounds something like: “Call back to this guy at another base and have it changed; I need it in 12 hours.”
 
That’s not suitable for the accelerating pace of warfare. “They can’t wait three hours to get that target into the targeting cycle to get it approved and go through some laborious process. It’s going to have to change instantaneously so we can stay ahead of the decisions adversaries are making.”
 
The Army is already setting out to train these soldier coders, who will practice writing software and doing data science closer to combat. “We have to get closer to the edge because the speed of finding a target and sending it to something that can process it, the speed of calling a fire mission, a medevac mission, that’s what we’re after.”
 
During the Army’s high-profile test of emerging technology, service leaders packed explosives into a prototype of the Extended Range Cannon as part of a bold experiment.
 
The cannon had the ability to fire beyond 70 kilometers, and the XM1113 projectile had been customized with a precision guidance kit, not the next long-range precision guidance kit that will be integrated later.
 
“It didn’t go off every time. It was kind of disappointing when there was a hole in the ground right next to the target because it didn’t explode.”

“The bullets flying through the air and exploding is interesting, but that’s not what’s compelling about Project Convergence. It’s everything that happens before the lanyard is pulled, the trigger is pulled. We didn’t come out here for a precision fires exercise. What we came here to do is increase the speed of information between sensing the target and passing that information to the effector.”
 
By creating a complex network of systems, the Army was able to dramatically shorten its kill chain. And the service moved the architecture from PowerPoint to a living and breathing capability in less than eight months.
 
Science and technology efforts that were just “two months ago an idea in a lab some place in an air-conditioned building” were worked into the exercise.
 
The exercise exceeded any expectation we could have had. It was an experiment. It was an experiment at scale for a combined arms operations, and the fact that they compressed the timespan was incredible.
 
“We started with a lot more technology than we demonstrated today. It was a very deliberate process of determining things that would be ready and things that would not be ready.” In some cases, the Army overshot its goal, literally. In several instances, the video of the target after the launching of effects showed the target still standing.
 
“Aided target recognition, it’s brittle. We need more work, more sets, to continue to train and solidify that and do it on the move with rough terrain and stability systems. The air-to-air coordination and air-to-ground, that worked extremely well. The mapping worked very well. But we all have our eyes wide open. This is a first step. We can now look ourselves in the eye and say we know exactly where we’re starting.”
 
“There were things that didn’t work perfectly. We missed a couple of targets. But that really doesn’t matter. All the things we wanted to work in terms of the ability to see, decide and act first are going to be the key to winning on the future battlefield. The key elements of that worked perfectly today.”

1. Establish appropriate measures to protect and defend data, users, systems, network connections, and missions

2. Expand automated end-to-end network visibility and assured interoperability through a single security architecture framework

3. Enforce network security standards and policies for systems from inception through fielding, including sustainment

4. Drive organizational compliance with single security architecture framework

5. Reduce external attack surface, standardize security controls, and align with Joint Information Environment concepts

6. Enhance shared analytics, real-time metrics, information sharing, and reporting to improve security and situational awareness

7. Improve cross-organizational training exercises and network security testing efforts

8. Provide means to employ assured C2 with increased tempo and lethality

9. Achieve a high state of institutional readiness and flexibility across the force

10. Take advantage of new waveforms and capabilities supporting “On-The-Move” requirements

11. Find solutions to problems of an inventory comprised largely of legacy, end-of-life systems with no path to modernization due to sustainment costs

12. Innovate rapid capability development, planning and programming, policy, and acquisition methodology to obtain required technologies

13. Provide an approved equipment and services list to rapidly fulfill operational requirements

14. Enforce standard configurations to optimize protection and usability, to include the Joint communities

15. Deliver manpower systems, policies, and processes to attract, develop, retain, and support qualified workforce

16. Structure the Communications MOS to operate and maintain new technology while ensuring flexibility to adjust with advancements and innovation

17. Deliver a modern, dynamic training environment resourced, e.g., personnel, equipment, facilities, time, and funding, to support C2 requirements and technology

18. Train workforce to compete and win in an information-contested/denied operating environment

19. Leverage modern, immersive training and simulation technologies meeting classified/unclassified requirements integrated with training and standards

20. Upgrade brick-and-mortar training facilities with designs that implement modern training techniques and inject resources into a virtual environment

21. Adapt workforce to changing requirements, incorporating emerging technologies to provide integrated solutions across the training and knowledge continuum

22. Provide live and virtual training, hands-on laboratories, and realistic operational exercises

23. Enable a flexible and dynamic process of planning and investment of C2 technologies to meet the demands of the warfighter

24. Ensure a competitive advantage while maximizing investments to enhance current and future operations

25. Establish enterprise policies, processes, and standards to govern the implementation, operation, and sustainment of the Command and Control network

26. Provide governance, policy, and oversight for data strategy, data architecture, and data management efforts

27. Maximize investments while minimizing the amount of unnecessary redundancy resulting from disparate planning and development efforts

28. Establish accountability for data use, data quality, information exchange, accessibility

29. Oversee strategic planning and governance for all information and information management functions

30. Provide governance, policy, and oversight of data strategy; data architecture; enterprise data management; data sharing; and information, spectrum, and circuit management

31. Maximize investment value, inform decision making, and minimize unnecessary redundancy through a unified system

32. Ensure systems acquisition and development efforts continue to observe policies while expediting delivery of capabilities

33. Ensure compliance to program guidance for interoperable, cost-effective, and secure systems

34. Prioritize a list of investments via a value and risk assessment to maximize capability

35. Develop procedures for detecting, reporting, and responding to security incidents

36. Influence and shape investment opportunities to demonstrate fiscal responsibility and guide innovation throughout Capabilities Based Assessment

37. Improve quality of services, cost transparency and allow for innovation

38. Eliminate labor and costs associated with duplicative networks and systems and reduce spending of separate investment paths

39. Leverage an enterprise approach while advocating for the development of Joint capabilities

40. Implement lifecycle best practices of planning, installing, operating, maintaining, and continuously improving systems

41. Eliminate procurement of disparate, non-interoperable systems, and duplicative equipment while leveraging sister-service acquisitions

42. Stop practice of using organizational funds to implement special projects without aligning solutions to the larger enterprise view

43. Improve enterprise visibility of the full sustainment costs to be captured as part of the Planning, Programming, Budgeting, and Execution process

44. Enforce information technology Investment Strategy aligning policies, projects, acquisitions, and funding

45. Provide modern, cutting-edge technology and evaluate means, including nonstandard methods

46. Deliver rapid capabilities not normally associated with technology refreshes and maintenance cycles

47. Establish integrated priorities by linking the governance and Investment Strategy processes to improve resourcing process

48. Ensure the information technology Investment Strategy includes the tactical edge

49. Provide metrics and measurements to assess the value of the systems within the portfolio

50. Overhaul Procurement Request Review Approval System process to ensure approval process provides compliant solutions aligned to strategic vision and goals
 

Top 10 Maintenance Workload Case Studies Rebuild Fleet Readiness, Implement Sustainable Operational Schedules

10/10/2020


 
There have long been concerns about the consolidation of the shipbuilding industrial base. If the U.S. were to go to war, so few yards have experience working with the Navy and building Navy ships that it would be hard for them to ramp up and help in a shipbuilding surge.
 
The idea of bringing smaller yards into the industrial base has been one of the positives to come out of the discussions of Distributed Maritime Operations and its call to have a lot of small and unmanned ships in the fleet: more companies can compete for these types of ships, bringing fresh ideas and a larger industrial base for the Navy to work with.
 
However, if the large yards see a decline in business, it’s unclear what that will mean for those yards and their workers. For example, there is only one yard that builds destroyers, which may be in less demand under the upcoming force structure assessment, still being reviewed by the Pentagon.
 
Concern over the capacity of the handful of public and private US shipyards, and the increasing time it takes them to complete scheduled maintenance work, has been at the top of Navy leadership’s to-do list as it struggles to plan.
 
One answer could be to bring in smaller commercial shipyards that haven’t traditionally worked with the Navy, while bringing a number of large shipbuilding companies into the fold to begin performing repair and overhaul work.
 
“There are shipyards that maybe don’t think they’re into overhaul mode but they probably need to be. They should be available or trying to make themselves available for overhaul work. There’s capacity there that hasn’t been tapped yet.”
 
A number of small, commercial shipyards could build the kinds of smaller manned and unmanned vessels the Navy and Pentagon leadership are looking at as part of the revamped 30-year shipbuilding plan, and much-delayed force structure assessment.
 
Work on destroyers and amphibious ships – while certainly not in jeopardy of going away altogether – could see reduced demand as the Navy and Marine Corps eye smaller combatants like a frigate, and a Light Amphibious Warship (LAW) and small amphib in lieu of the traditional ships Ingalls has built for decades.
 
Many shipbuilders not only work on Navy warships but auxiliaries, foreign military sales ships and commercial ships. These yards would all be set up well to compete for small or unmanned ships for the Navy, but they might be going up against large yards if they find themselves needing the work, too.
  
“Not knowing what the force structure analysis is going to tell us we need, it’s hard to say at this point, because the big yards could flex, absolutely. Are they better aligned at the moment to build the larger ships that they’re building? Yes. We’ll just have to see how the [FSA] plays out.”
 
Navy has indicated that, in a time of war, small repair yards could be called upon to help build ships; and they suggested that large yards could look to repair work to supplement any lost shipbuilding work – highlighting the predicament the entire enterprise finds itself in, with capacity and capability not necessarily matching up to needs and budgets.

“We’re not as effective or efficient — we have so much to be proud of — but we can’t get ships delivered on time with the predictability we need today.”
 
The US will be hard-pressed to repair ships quickly enough during high-tempo operations, underscoring concerns recently put forth by the Marine Corps, which acknowledged that “replacing ships lost in combat will be problematic” because the US “industrial base has shrunk while peer adversaries have expanded their shipbuilding capacity. In an extended conflict, US will be on the losing end of a production race.”
 
The commandant of the Marine Corps talked about the concern he has regarding the ability of the Navy shipbuilding industrial base and ship repair industrial base to restore or rebuild the fleet in the face of losses that might occur in a conflict.
 
The commandant also indicated that this is an element of deterrence: if you don’t have the ability to sustain a protracted fight, then perhaps your adversaries think they can wait you out, or just push through until you reach the point where you can no longer continue the combat.
 
Some of the questions asked during this planning effort were how shipbuilders could rapidly deliver ships nearing the end of their construction and how they could accelerate construction of hulls still in early phases of work.
 
Noting that submarine builders take on submarine overhauls to supplement their construction work, there are shipyards that maybe don’t think they’re in overhaul mode, but they probably need to be. For example, one of the yards is able to do overhauls and new construction at the same time.
 
In the past, yards have been dissuaded from doing repair work because, depending on how the contract is structured, it could be a financially risky venture, especially given how common it is for growth work to appear once an overhaul is started.
 
There is capacity there that hasn’t been tapped yet. It’s been tried in the past, and just because someone said no in 2018 doesn’t mean they’re going to say no in 2021.

It’s no secret that the Navy’s four public shipyards have prioritized attack submarines last, instead of focusing the yards’ limited resources on aircraft carrier maintenance and ballistic missile submarine refuelings.
 
But even though the SSBN refuelings are drawing to an end, which should free up resources for SSN maintenance, the time SSNs will sit idle waiting for maintenance work to begin will actually continue to increase for the next two years. The problem has been in both the number of subs sitting idly and the duration that each sub is waiting for its turn in the repair yard.
 
The backlogs will be increasingly difficult to push through, as Pentagon budgets are likely to remain flat in coming years, pinching shipbuilding and repair budgets as the Navy moves out on its new Columbia-class submarine effort at the same time it undertakes a major overhaul of many Virginia-class subs.

1. Maintenance Delays Result in Challenges to Rebuild Readiness
The Navy continues to face persistent and substantial maintenance delays that affect the majority of its maintenance efforts and hinder its attempts to restore readiness. From fiscal year 2014 to the end of fiscal year 2019, Navy ships have spent over 33,700 more days in maintenance than expected. The Navy was unable to complete scheduled ship maintenance on time for about 75 percent of the maintenance periods conducted during fiscal years 2014 through 2019, with more than half of the delays in fiscal year 2019 exceeding 90 days. When maintenance is not completed on time, fewer ships are available for training or operations, which can hinder readiness.
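As an illustration only, delay statistics like those reported above can be derived from planned versus actual completion dates for each maintenance period. The records below are invented for the sketch, not GAO’s data:

```python
from datetime import date

def delay_stats(availabilities):
    """Summarize maintenance-period delays from (planned_end, actual_end)
    pairs: total days of delay, share of periods late, and share of
    all periods late by more than 90 days."""
    delays = [(actual - planned).days for planned, actual in availabilities]
    late = [d for d in delays if d > 0]
    return {
        "total_days_of_delay": sum(late),
        "pct_late": 100 * len(late) / len(delays),
        "pct_late_over_90": 100 * sum(d > 90 for d in late) / len(delays),
    }

# Hypothetical availabilities: (planned end, actual end)
records = [
    (date(2019, 3, 1), date(2019, 7, 15)),  # 136 days late
    (date(2019, 6, 1), date(2019, 6, 20)),  # 19 days late
    (date(2019, 9, 1), date(2019, 9, 1)),   # on time
    (date(2019, 4, 1), date(2019, 8, 1)),   # 122 days late
]
print(delay_stats(records))
```

On this toy data, 75 percent of periods finish late, mirroring the shape (not the values) of the fleet-wide figures cited in the text.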

 2. Actions to Address Main Factors Causing Maintenance Delays  

The ability of the Navy’s four shipyards to complete aircraft carrier and submarine maintenance on time directly affects military readiness because maintenance delays reduce the amount of time aircraft carriers and submarines are available to perform their missions and protect our national security. The Navy’s four shipyards have continued to face chronic and substantial delays in over half of aircraft carrier and more than three-quarters of submarine maintenance periods, and the Navy has experienced substantial growth in idle time for submarines awaiting the start of maintenance periods.
 
The Navy has recognized the factors associated with maintenance delays and has begun focusing upon the unplanned work and workforce factors that are contributing to most aircraft carrier and submarine maintenance delays. However, even though the Navy has taken steps, such as attempting to more accurately project the duration and resource requirements for planned maintenance on aircraft carriers and submarines, continuing to routinely and consistently use overtime to meet planned maintenance is untenable.
 
The Shipyard Performance to Plan initiative may help NAVSEA and shipyard leadership better understand factors contributing to maintenance delays and inform decisions to address them. However, NAVSEA has not developed over half of its metrics for measuring the impact of the unplanned work and workforce factors, nor implemented related goals, action plans, milestones, and a monitoring process to improve the timely completion of maintenance.

While a set of metrics would help the Navy better address the main causes of maintenance delays, metrics on their own would not resolve those issues. Unless NAVSEA uses the key elements of a results-oriented management approach to address factors contributing to maintenance delays, such as unplanned work and workforce issues at the Navy shipyards, delays in maintenance periods and idle time are likely to persist. Completing these actions as soon as possible could increase the overall availability of aircraft carriers and submarines to perform needed training and operations in support of their various missions and improve readiness.
 
3. Challenges in Measuring Performance/Planning of Maintenance Work
The Army and Marine Corps operate large industrial depots to maintain, overhaul, and upgrade numerous weapon systems and equipment. The depots play a key role in sustaining readiness by completing maintenance on time and returning refurbished equipment to warfighting customers. It is recommended that the Army develop procedures to ensure depot input on metrics, develop guidance for depot customers, and analyze the causes of maintenance changes, and that the Marine Corps develop a complete baseline.
 
Both services are also improving their performance metrics in order to better manage depot maintenance. The Army’s initiative to develop a new performance metrics framework shows promise, but Army depot officials told us that they have significant concerns about how and whether AMC is factoring in their input when developing the new metrics. It is particularly important that AMC develop procedures to ensure that it will incorporate depot stakeholder input into the new metrics framework for the Army’s organic industrial base through iterative and ongoing processes.
 
Doing so will allow the Army to develop maintenance-related metrics that are beneficial for helping officials at all levels—AMC, LCMC, and depot—to assess and improve depot performance. Moreover, the Marine Corps does not yet have a complete baseline to accurately measure the effectiveness of its planning for depot maintenance. Establishing a complete baseline will allow the Marine Corps to better assess how well it has planned its depot maintenance work by comparing this plan against actual performance.
 
Both the Marine Corps and the Army have taken steps to plan and execute depot maintenance more efficiently and effectively. The Marine Corps has undertaken several efforts to revise its depot maintenance planning process, and to analyze and address the reasons for changing customer needs. The Army has taken steps to synchronize its depot planning timelines to better align resources and requirements. However, developing guidance for depot customers to synchronize these timelines would better position Army depots to make decisions based on the most accurate information possible, as early as possible. Additionally, systematically analyzing the causes of changing customer needs would help the Army identify why its depots experience such variability in their workload. This, in turn, would better position the Army to identify specific solutions for reducing such unplanned changes.

4. Depot Maintenance Backup Inventory Reserve Sustains Readiness
The Navy and Marine Corps provide operational units with replacement aircraft or vehicles, known as backup aircraft and DMFA, to maintain readiness levels when a unit’s aircraft or vehicles undergo depot maintenance, modification, or repair. We reviewed the backup aircraft for the F/A‑18 aircraft, T‑45 aircraft, and MH‑60 helicopter and the DMFA for the Assault Amphibious Vehicle, Light Armored Vehicle, and Mine Resistant Ambush Protected vehicle. The on‑hand quantity of backup aircraft and depot maintenance float vehicles can change daily due to fluctuations in depot workload.
 
The insufficient quantity of available backup aircraft occurred because the squadrons and training wings used the backup inventory to transition squadrons to newer models and to replace training aircraft that were damaged to the extent that repair was uneconomical or impractical; the service also extended the service life of the F/A‑18 and T‑45 aircraft. Although pilots were receiving the required amount of training before a deployment, a Navy official stated it was a problem because pilots were barely meeting the minimum requirement. In addition, the Navy and Marine Corps could experience a future shortfall of trained pilots, potentially impacting mission readiness if aircraft shortages continue.
 
5.   Carryover Metric Provides Quality Information on Funded Unfinished Work  

DoD allows carryover from one fiscal year to the next to ensure the smooth flow of maintenance work performed at depots. DoD has reported that approximately 6 months of carryover is optimal. Excess carryover (i.e., more unfinished work than allowed) may reflect an inefficient use of resources and tie up funds that could be used for other priorities. Congress directed DoD to report on its current DoD carryover metric and consider alternatives.
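One simple way to express carryover as “months of work” is to divide the value of funded, unfinished work by average monthly revenue. This formula and the dollar figures below are illustrative assumptions for the sketch, not DoD’s official computation or reported numbers:

```python
def carryover_months(unfinished_work_value, annual_revenue):
    """Express funded, unfinished depot work as months of workload:
    carryover value divided by average monthly revenue. Illustrative
    only; DoD's actual carryover metrics differ in detail."""
    monthly_revenue = annual_revenue / 12
    return unfinished_work_value / monthly_revenue

# A hypothetical depot with $600M of annual revenue carrying $350M of
# funded, unfinished work into the next fiscal year holds 7 months of
# carryover, above the roughly 6 months DoD has reported as optimal.
months = carryover_months(350e6, 600e6)
print(round(months, 1))  # 7.0
```

The policy question in the text maps directly onto this ratio: a metric that mismeasures either the numerator (what counts as unfinished work) or the denominator (how revenue is averaged) will misstate whether a depot is above or below the optimal level.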
 
DoD’s report discussed three carryover metrics: the current DoD carryover metric, an Office of the Secretary of Defense-proposed carryover metric, and an Army-proposed carryover metric. DoD’s historical carryover and the metrics presented by DoD have been reviewed. This report, among other things, (1) describes the total carryover for fiscal years 2007 through 2018, and the reasons for it; (2) evaluates the carryover metrics DoD presented in its report to Congress and whether they would provide quality information; and (3) describes private industry and foreign military policies for determining allowable carryover.
 
DoD allows depots to carry over billions of dollars of funded unfinished work from one fiscal year to the next to facilitate the smooth flow of work. While some carryover of work is appropriate, excessive carryover may reflect an inefficient use of resources that otherwise might be redirected to other priorities. DoD considered three metric options for calculating depot maintenance carryover; however, the metrics do not fully address key attributes of providing quality information that is reliable, complete, consistent, and appropriate and have varied depot management implications.
 
Ensuring that the carryover metric meets key attributes for providing quality information would improve decision-makers' ability to assess whether depots are managed as efficiently and effectively as possible, and determine the amount of carryover sufficient to support smooth operations from year to year. Until DoD adopts a carryover metric that addresses the attributes for providing quality information, decision makers may not know if the billions of dollars invested for work performed at depots are being used efficiently or might be redirected for other purposes.
 
DoD identified three metrics for calculating allowable carryover in its report to Congress. However, the three metrics identified do not fully meet all key attributes—reliability, completeness, consistency, and appropriateness—for providing quality information to decision makers, although the Office of the Secretary of Defense-proposed carryover metric meets the most attributes.

6. Contract Type Program Outcomes Evaluation Informs Schedule Decisions  

The Navy spends billions annually maintaining its fleet. In 2015, the Navy changed how it contracts for such maintenance work, aiming to better control costs and improve quality. The new approach, called Multiple Award Contract-Multi Order (MAC-MO), generally uses firm-fixed-price delivery orders for individual ship availabilities, competed among pre-qualified contractors at Navy regional maintenance centers.
 
Since shifting to the MAC-MO contracting approach for ship maintenance work in 2015, the Navy has increased competition opportunities, gained flexibility to ensure quality of work, and limited cost growth, but schedule delays persist.
 
Although the MAC-MO strategy appears to have stabilized cost and quality, completing maintenance availabilities within allotted schedules continues to elude the Navy. The Navy has taken steps to more readily accommodate growth work as it emerges; however, these steps likely cannot completely eliminate the Navy’s need for upward obligations. The Navy has pointed to the low cost threshold and the upward-obligations approval process, as provided for in statute, as not giving it the agility it needs to fund growth work on a schedule that minimizes disruption to an availability.
 
Recently, Congress enacted legislation which establishes an OPN-funded pilot program and provides the Navy a platform to potentially demonstrate that it can meet its MAC-MO schedule goals when freed from the time intensive process of upward obligations. Nonetheless, every pilot program should be thought out before it starts, including consideration of what data need to be collected and how the data will be analyzed. Otherwise, the pilot could be poorly run or could miss opportunities to gain information and lessons learned. Such planning for the OPN-funded pilot could enhance the quality, credibility, and usefulness of the pilot program.
 
7.  Sustainment Focus Early in the Acquisition Process Improves Budget

The quantity and breadth of issues identified in this report—resulting in billions of dollars in unexpected costs, maintenance delays, and unreliable ships—suggest that existing policies and guidance have not ensured that new ships are reliable and can be sustained as planned. Recently, due to some of these problems, DoD and the Navy have recognized the importance of considering the requirements and costs of sustainment during the acquisition process, and Congress has passed legislation related to sustainment planning. This report, along with other DoD initiatives discussed in this review, demonstrates that the Navy needs to take many steps to infuse its acquisition decision-making with a greater focus on sustainment outcomes.
 
Systemic changes are needed to improve shipbuilding programs’ sustainment outcomes, including: 1) setting clear sustainment requirements that are useful for acquisition decision-making and reporting the results to Congress, 2) improving O&S cost estimates, sustainment planning, and logistics assessments, and 3) involving the product support manager (PSM) early in the acquisition process. However, these changes will only be successful if Navy leadership commits more time, attention, and resources to ensuring that sustainment is thoroughly considered throughout the acquisition process. Until the Navy resolves these issues, its shipbuilding programs will continue to pass costly sustainment risk to the fleet, resulting in ships and submarines that experience major sustainment problems.

8. Actions to Improve Poor Conditions of Facilities and Equipment that Affect Maintenance Timeliness and Efficiency
 
The condition of depot facilities and equipment directly affects the timeliness of maintenance and the readiness of the weapon systems they repair. The condition of facilities at a majority of DoD depots is poor and the age of equipment is generally past its useful life, but the services do not consistently track the effect that these conditions have on depot performance.
 
DoD and the services’ approach for managing investments to improve the efficiency and effectiveness of its depots lacks elements important to addressing key challenges. The services have efforts underway to complete their plans to address their depots’ facility and equipment needs. However, these plans are preliminary and will not include key elements, such as analytically-based goals; results-oriented metrics; a full accounting of the resources, risks, and stakeholders; and a process for reporting on progress.

9.  Progress in Standardizing Designs for Facility Construction

The Centers’ various activities, such as conducting value engineering and life-cycle cost studies to identify possible cost savings and analyze the long-term costs of new facilities, are consistent with key principles and concepts.

The Army does not have performance measures in place to assess the Centers’ progress toward assuring consistent application of standards, or to monitor the Centers’ execution against the overarching objectives and priorities of the Army Facility Standardization Program (AFSP) and the standardization process, including, among other things, reducing design costs and time, construction costs and time, and change orders during construction. This hinders the Centers’ ability to determine how well they are supporting the objectives of both the AFSP and DoD’s annual performance plans, as well as their ability to demonstrate the extent to which they are achieving their objectives.

10.  Assess Effectiveness of Workforce Initiatives to Maintain Critical Skills  
DoD employs over 80,000 civilian personnel at its 17 major maintenance depots to maintain weapon systems such as aircraft, combat vehicles, and ships. The depot workforce has unique skills, and the depots must compete with the private sector for qualified personnel. Increasing numbers of depot workers have been retiring, and the number eligible to retire is expected to increase. Because it takes 5 years or more to become proficient in some occupations, DoD must systematically plan and prepare to hire, train and retain the workforce it needs to support its vital maintenance and repair mission.
 
Depots identified a variety of workforce challenges, such as hiring personnel in a timely manner and providing inexperienced personnel with the training necessary to become proficient in skilled occupations. According to DoD officials, these challenges contributed to delays in the maintenance of some weapon systems.
 
While the services have collected some data regarding these actions, the depots and the services do not know how effective they have been, because they have not assessed the results of these actions in helping the depots to hire, train, and retain skilled personnel—including determining whether the actions are cost effective. By assessing the cost and effectiveness of these actions, the services would be better able to identify the hiring, training, and retention actions that work, and identify those that are ineffective or cost prohibitive. This would allow them to better tailor their actions to ensure that the depots hire, train, and retain personnel for skilled occupations, and help ensure they are positioned to provide effective depot maintenance for DoD’s weapon systems.
 
Without conducting assessments of the actions the depots have taken, and are currently taking, the services will not know how effective those actions are in helping to maintain critical skills in the depot workforce, or if the actions are cost effective to pursue. By assessing the overall effectiveness of these actions, including the associated costs, the services will be able to identify the hiring, training, and retention actions that work well, as well as those actions that are not effective, or are cost prohibitive, and should not be used to help maintain the critical skills of the depot workforce. To ensure that the depots are using their resources most effectively, an assessment of these actions should be conducted.

Top 50 Enterprise Performance Predictive Maintenance Tool Capabilities Track Decision Advantage

10/10/2020


 
Teams are making digital replicas of aircraft to make maintenance faster and more effective, and to forecast potential problems with the planes in the future.

The B-1 “Digital Twin” project will be the first time a complete scan has been performed on a military aircraft. The process began when a B-1 was selected for a complete teardown, which means every nut and bolt and all the skin panels are coming off. “They’re going to do a complete inspection of the aircraft for us, looking for fatigue cracks.”

As a heavy aircraft flies, gravity and air turbulence make it feel its weight. Fatigue cracks form on stress points and grow under the load of the jostled plane.

“We want to find those fatigue cracks, how they’re growing and why they’re growing, and prevent them, and how we can repair them if we need to.”

Making a digital twin is an opportunity to get inside a jet and see all of the internal parts not exposed during regular maintenance, allowing access to areas we’ve never looked at before.

Every component of the B-1 will be painstakingly scanned using laser equipment that triangulates its position on the part, creating a digital copy that can then be turned into a 3D model.

“It creates a digital representation on a computer and then a technician runs a verification against it to make sure that all the points have been scanned and there’s enough data. And then they can take that and turn it into the solid model for maintenance and manufacturing.”
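The verification step can be sketched in miniature: bin the collected points over the part surface and flag regions with too little data to support a solid model. This is an illustrative approximation, not the actual verification software; the grid size, thresholds and simulated scan are all invented.

```python
import numpy as np

def coverage_gaps(points, bins=20, min_pts=25):
    """Bin scan points into a grid over the (normalized) part surface and
    return the cells holding too few points to build a solid model."""
    counts, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                                  bins=bins, range=[[0, 1], [0, 1]])
    return np.argwhere(counts < min_pts)

# Simulated scan: dense coverage everywhere except one missed corner patch.
rng = np.random.default_rng(0)
pts = rng.uniform(0, 1, size=(20_000, 2))
pts = pts[~((pts[:, 0] > 0.9) & (pts[:, 1] > 0.9))]
print(len(coverage_gaps(pts)), "under-scanned cells")
```

A production pipeline would work on full 3-D point clouds and mesh quality, but the pass/fail idea, enough points everywhere before the model is accepted, is the same.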

Starting at the top of the B-1, the technicians will scan and peel away layers of the plane, then scan again. Rinse and repeat.

“It’s a long process. So, they’re scanning as they go along because we want to make sure we understand how everything is stacked up and put together.” Once finished, the Air Force will have a complete digital clone of the B-1 it can use to make predictions about damage the jet could potentially sustain in different scenarios.
 
“It can fly right along with the fleet, so we can actually take real-world data and put it into our digital model and see what’s going on in our fleet as a whole. And so what that does for us is allows us to fly ahead of the fleet in the digital environment. I can simulate more hours on that aircraft without ever breaking one, and I can see what’s going to happen in the future.”
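“Flying ahead of the fleet” digitally amounts to integrating a damage-growth model faster than real time. A minimal sketch using Paris’ law for fatigue-crack growth, with placeholder constants rather than real B-1 material data:

```python
import math

def grow_crack(a0_m, cycles, C=1e-11, m=3.0, d_sigma_mpa=120.0, Y=1.12):
    """Integrate Paris' law, da/dN = C * (dK)^m, one cycle at a time.
    dK is the stress-intensity range in MPa*sqrt(m); C, m, the stress
    range and geometry factor Y are invented, illustrative values."""
    a = a0_m
    for _ in range(cycles):
        dK = Y * d_sigma_mpa * math.sqrt(math.pi * a)
        a += C * dK ** m
    return a

# Simulate 50,000 load cycles "ahead of the fleet" on a 1 mm crack.
a_future = grow_crack(1e-3, 50_000)
print(f"projected crack length: {a_future * 1e3:.2f} mm")
```

A real loads analysis couples flight-recorded stress spectra to the scanned geometry; the point here is only that growth can be projected forward without flying, or breaking, an actual jet.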
 
The modeling will allow the Air Force to begin planning repairs before a plane is damaged.
 
“So we get out of our current reactive state, where an aircraft breaks and then I have to start building a repair at inspection, to a standpoint of: ‘This is the next area we’re pretty sure something’s going to break. Why don’t we plan now for what type of inspection and start planning a type of repair?’ So this will help us with downtime, and we’ll be able to turn jets quicker because we already know what to expect, where to go look, what we should be finding, why it occurred.”
 
Having a digital aircraft also means a manufacturer can be given the 3D renderings to have components built more quickly.
 
“In the past, all we had was the 2D digital drawings. A lot of aircraft technicians don’t know how to read these dated 2D manufacturer drawings anymore. This is an opportunity for us, for once, to have high-fidelity, 3D CAD drawings that can be put right into maintenance and manufacturing.”

"What’s new is that we’re reclaiming an aircraft out of the Boneyard and then tearing it all the way down, having everything scanned and built back up so you get an entire aircraft. Further, we’re taking that to do the loads analysis. We’re going to simulate it actually flying, so that we can understand the future stresses on the aircraft.

Another possible use for a digitally cloned plane is virtual reality. We have only just begun to imagine a scenario where the technology can be used in regular aircraft maintenance and training.
 
“You can highlight it in the digital aircraft, and maintainers could pull it up on the computer or on a set of VR goggles and say, ‘OK, I know exactly where I need to be,’ as they walk out to the aircraft they’re going to be working on.”
 
Then we can do the same thing for virtually prototyping repairs. You can do a much better test fit since the aircraft is fully represented in the digital environment. You can actually build a repair, test it, and take it to get 3D printed, so that our first repair is going to have a much higher level of success.
 
There must be a high state of materiel readiness across the force. In addition to properly funding equipment sustainment, and establishing appropriate stocks in the right areas to enable operational contingencies, the means of sustaining equipment must be as suitable for supporting operations as they are for efficiency in garrison.
 
Failures in materiel readiness are often replicated in major sustainability issues on operations, and necessitate consequential actions such as switching parts between aircraft to achieve desired operational readiness outcomes.
 
The team completed the initial phase of a novel virtual roll-out of its squadron-level maintenance improvements to units at Naval Air Stations. Using a combination of technology, innovation and dedication to mission, the team involved prevented a delay in this important implementation.
 
The roll-out involves the Organizational-level (O-level) Reform efforts of the Naval Sustainment System-Aviation (NSS-A), an initiative that began approximately two years ago as a way to improve readiness across Naval Aviation. At the direction of the then-Secretary of Defense, the F/A-18 and EA-18G served as the initial communities to introduce NSS-A. 
 
The F/A-18E/F NSS-A efforts produced around 90 additional mission capable Super Hornets—the equivalent of returning $5.5 billion in assets back to the flight line. The Super Hornet community reduced the turnaround time of special inspections by more than 40%, meaning more aircraft were available for the flight schedule. After the F/A-18E/F successes—which include the highest mission-capable rates in decades—the Naval Aviation Enterprise (NAE) began to turn attention to rolling out O-level Reform to other squadrons and communities.
 
“We were able to get this implementation done on schedule using virtual methods, but we were also able to introduce and ‘normalize’ new software tools for our folks. We’re vigilant about maintaining the balance between operational readiness and personnel safety. This virtual option allowed us to put in place improvements we were already planning without pushing the timeline. We’re looking forward to seeing benefits in turnaround time and overall maintenance operations.”
 
“The virtual roll-out benefitted us because it let us implement the reforms on our timeline, but it also saved us from having to find a place on our schedule to do this later in the year, possibly impacting our planned operations. The community is used to being flexible, independent and prepared for any scenario, so we’re a natural fit to try this virtual option.”
 
O-level Reform changes involve a variety of adjustments to procedures, including: delegating ownership of aircraft being repaired to individual crew leads, with white boards next to each aircraft showing anyone passing by its current repair status; keeping all tools, parts and other materials necessary to fix a specific aircraft in a dedicated space with that aircraft instead of in other storage areas; and holding daily meetings to keep everyone informed of the status of every aircraft in for repairs.
 
Another major emphasis of the reform is a reinvigoration of Continuous Process Improvement (CPI) across Naval Aviation maintenance. 
 
“This is much more than coming in with some proven changes and adjusting squadron maintenance. In fact, we’re looking to re-moniker this effort as we go forward and call it O-Level Maintenance Management. It’s not just about reform, it’s about a new way of doing business and constantly evaluating our approach so we can constantly find ways to do it better.”
 
The team will have the chance to look for new CPI opportunities. With the initial phase complete, the TYCOM team hands ownership of the implementation process to the type wings, then re-engages directly to evaluate progress and make adjustments. The hope is that, if it’s not possible to visit the squadrons in person next time, the virtual methods they have already proved to work will let them complete their efforts.
 
In future Multi-Domain conflict, maintenance and logistics requirements follow operators, so we’ve got to have the sense, orient and respond tools required to posture aircraft forces in support of the mission.
 
Aircraft programs need to replace old network systems with modern capabilities, including data fusion from multiple sensors — whether those be onboard an aircraft or from a machine doing specific maintenance.

The Air Force plans to expand its “predictive maintenance” using artificial intelligence (AI) and machine learning. “Predictive maintenance is a real game changer for us. There’s a lot of power in moving unscheduled maintenance into scheduled maintenance, and we’re firmly convinced that it will improve our readiness and improve our combat capabilities by doing so.”
 
“We have long been a fly-to-fail force, simply waiting for aircraft to quit working and then trying to fix them by moving parts to wherever the planes were grounded. But today’s unpredictable and relatively slow approach to getting fighters and bombers back in the air simply won’t be possible in future conflicts.”
 
Maintainers think they can now start to put some problems behind them using condition-based maintenance (CBM). Instead of replacing parts after they’ve failed or relying on fixed schedules, crews can predict failures with a high degree of accuracy and get ahead of problems when maintenance makes the most operational sense.
 
“That data really is the key to our awareness of what’s happening in the mission environment, and what’s happening in the broader enterprise, to include at home and the depots and the broader supply system.” Modern tools are enabling units to predict when an aircraft part will fail before it actually happens.

After the aircraft lands, flight data is uploaded to one of two organically developed post-flight analysis tools: the Data Analysis and Redistribution Tool (DART) and the Embedded Diagnostic System Data Analysis Tool (EDAT). Aircraft engineers and maintainers can access the data on the web and use the information to troubleshoot and maintain the aircraft fleet.
 
In explaining the usefulness of DART, “It’s kind of like when the car ‘check engine’ light comes on. With DART, you can see what’s going on with the airplane.”
 
DART can show up to 18 months of data, while EDAT looks at just one flight at a time. Another predictive tool, the Aging Fleet Integrity and Reliability Management application, or AFIRM, is being used as the weapon system integrity program system of record for the Aircraft System Program Office.

“If a car breaks down on the road, a mechanic has to come, you have to haul the vehicle in and parts have to be ordered. It takes hours and hours to get the car back on the road. When you can predict when a part is going to break, you can replace that part and you won’t break down in the desert or somewhere there is a lack of support.”
 
The predictive tools allow for timely, planned maintenance processes. Time and money are saved while safety is greatly enhanced. A CBM+ toolkit will soon be added to DART to further decrease aircraft downtime and maintenance times.
 
By leveraging past years’ flight data stored in DART, CBM+ algorithms can establish a part’s health and accurately predict when that part will fail. Before failure, maintenance crews will be alerted that the part is near the end of its life and can prepare for its replacement. Thus, all parts and personnel needed for maintenance can be acquired and in place at a scheduled maintenance date, not after the part has broken and left the plane stranded.
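One standard way to turn stored flight and removal data into a failure prediction is a Weibull life model: estimate shape and scale from past failures, then compute the conditional probability that a part which has survived this long fails within the next window of flight hours. A hedged sketch; the parameters and threshold below are invented, not actual DART or CBM+ values.

```python
import math

def failure_risk(hours_now, horizon, beta=3.2, eta=1200.0):
    """Probability a part that has survived `hours_now` flight hours
    fails within the next `horizon` hours, under a Weibull life model.
    beta (shape) and eta (scale, hours) are illustrative placeholders,
    as if fitted from historical removal records."""
    survive = lambda t: math.exp(-((t / eta) ** beta))
    return 1.0 - survive(hours_now + horizon) / survive(hours_now)

# Alert when a part's 100-hour failure risk crosses a planning threshold.
for t in (400, 800, 1100):
    risk = failure_risk(t, 100)
    print(f"{t:5d} h  risk={risk:.3f}  {'replace soon' if risk > 0.10 else 'ok'}")
```

Because the risk is conditional on survival so far, the same part position produces a rising alert level as hours accumulate, which is exactly what lets crews stage parts and people before the scheduled date.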

With CBM+ data from DART, the plane can be flown to the chosen maintenance station under its own power before failure, rather than having to scramble needed parts and maintenance personnel to wherever the plane has broken down. The plane can then be repaired and put into service in a vastly shorter timeframe. This saves valuable time, money and resources in an effort to keep the aircraft fleet healthy and supporting its mission.
 
EDAT is used by the engineers to diagnose aircraft issues by graphically displaying EDS data. The data provided by EDAT reduces the diagnostic process from days to minutes.
 
The most current iteration of EDAT includes a signal validation tool that shows the user which signals are exhibiting non-standard behavior for the engine bleed air system. As the CBM+ tools are expanded, EDAT will include modules for thrust reverser, air cycle refrigeration subsystem and cabin pressure control.
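Flagging “non-standard behavior” can be as simple as comparing a signal’s latest reading against its own flight history. A toy sketch; the signal names and readings are invented, and the real EDAT validation logic is surely more sophisticated.

```python
import statistics

def nonstandard_signals(signals, z_max=3.0):
    """Flag signals whose latest reading sits more than z_max standard
    deviations from that signal's own prior history."""
    flagged = []
    for name, history in signals.items():
        mu = statistics.fmean(history[:-1])
        sd = statistics.stdev(history[:-1])
        if abs(history[-1] - mu) / sd > z_max:
            flagged.append(name)
    return flagged

# Invented bleed-air readings; the duct temperature spikes on the last flight.
flight = {
    "bleed_duct_temp_F": [420, 418, 425, 417, 421, 419, 505],
    "bleed_press_psi":   [42.1, 41.8, 42.5, 41.9, 42.2, 42.0, 42.3],
}
print(nonstandard_signals(flight))   # -> ['bleed_duct_temp_F']
```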
 
The tool will enable maintainers to identify the optimal time to replace parts before failure and ensure that both field maintainers and supply are aware of upcoming demands. “You can look into the future a little bit and see how the aircraft does,” he said. “It saves you a lot of headaches.”

The CBM+ variant is still in its infancy, but officials have high hopes for how it might cut maintenance costs and boost aircraft readiness. However, many of the military airframes the service operates are decades old, and aren’t outfitted with the same number and quality of sensors that spit out detailed data about which components are coming due for service.
 
In addition to CBM, another predictive maintenance initiative is enhanced reliability centered maintenance (eRCM), “an initiative that is really laying that artificial intelligence and machine learning on top of the information systems that we have, the maintenance information system data that we have today, and understanding failure rates and understanding mission characteristics of the aircraft and how they fail.”
 
Because of that, Air Mobility Command is planning for much of the CBM program to rely on eRCM. Instead of depending on data feeds from an individual aircraft’s sensors, algorithms will crunch through detailed records the Air Force already has about how a particular part has historically performed across the fleet — and on a particular airplane — and determine the ideal time to replace or repair it.
 
“We’re able to forecast up to two years in advance when parts need to come off of aircraft. What eRCM allows us to do is, component by component, position by position, tail number by tail number, get a specific look at what’s going on, to give us a much more accurate picture and then be able to adjust our removal forecast based on what’s actually happening out there instead of just using a generic average.”
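That component-by-component, tail-by-tail forecast can be sketched as a prediction built from each part position’s own removal history rather than a generic fleet average. The tail numbers, components and dates below are invented for illustration.

```python
from datetime import date, timedelta
from statistics import fmean

def forecast_removals(removal_log):
    """Forecast each (tail, component) position's next removal date from
    its own historical removal intervals instead of a fleet-wide average."""
    forecasts = {}
    for key, dates in removal_log.items():
        dates = sorted(dates)
        gaps = [(b - a).days for a, b in zip(dates, dates[1:])]
        forecasts[key] = dates[-1] + timedelta(days=round(fmean(gaps)))
    return forecasts

# Invented removal records for two tail numbers.
removals = {
    ("86-0097", "fuel boost pump #2"): [date(2017, 3, 1), date(2018, 5, 20),
                                        date(2019, 8, 30)],
    ("86-0123", "fuel boost pump #2"): [date(2016, 1, 10), date(2019, 2, 2)],
}
for key, when in forecast_removals(removals).items():
    print(key, "->", when.isoformat())
```

A real eRCM model would also weight flight hours, mission profile and failure mode, but the structure — per-position history in, per-position removal forecast out — is the same.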
 
The Air Force can also take advantage of the performance data it does have from its newer airframes that are outfitted with more modern sensors. That sensor data will feed into a separate line of effort — the other 20% of the CBM+ approach — called Predictive Algorithm Development (PAD).
 
“We’re focusing on eRCM just because it’s going to get us our biggest return on investment, but eventually we will go back to the PAD side, because that completes the holistic view of CBM. But it’s going to be very challenging for some of the aircraft that do not have those onboard diagnostics. What we can do is look at things like the flight data recorder that’s typically used for safety investigations. We’re looking for some things that we may be able to tie back to the maintenance side.”
 
And even on some of its newer platforms that are outfitted with sensors, the data they collect is often encoded or encrypted by the original equipment manufacturer, because the Air Force wasn’t thinking about CBM at the time it signed the acquisition agreements for those systems.
 
But now that the service is starting to get a better handle on what sorts of data are useful for maintenance purposes, it’s beginning to use those lessons to inform its policies on the front-end of the acquisition process.

“Right now we’re in some significant conversations with new aircraft programs about things that they need to be writing into their contracts as the aircraft is developed, to capture all of these lessons learned. And as we go through some of our other platforms, as contracts come up for renewal with our equipment supplier partners, we’re having those same conversations just to make sure that we’ve got on-ramps and off-ramps going through as CBM+ matures over the next number of years.”
 
The Air Force’s gradual move toward condition-based maintenance has major implications for its supply chain. On one hand, the DoD and industry systems and processes that deliver parts to where they’re needed will have to adjust to a cadence that ensures parts are ready to install well before they fail.
 
But if all goes according to plan, it also means the Air Force and Defense Logistics Agency will be able to reduce the total number of spare parts they keep on hand just in case of unexpected problems.

Costs can be removed from the supply inventory by better predicting failures and reducing that ‘just in case’ inventory, so we expect to see results in the Air Force supply system as we are able to pinpoint where we need parts and when we need them.
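The supply-side payoff can be illustrated with the textbook safety-stock formula, where the buffer held “just in case” scales with demand uncertainty; better failure prediction lowers that uncertainty and the buffer with it. All numbers are illustrative.

```python
import math

def safety_stock(sigma_daily_demand, lead_time_days, z=1.65):
    """Textbook safety stock for ~95% service level: z * sigma_d * sqrt(L)."""
    return z * sigma_daily_demand * math.sqrt(lead_time_days)

# Better failure prediction shrinks demand uncertainty (sigma), so the
# 'just in case' buffer shrinks with it. Values are invented.
reactive   = safety_stock(sigma_daily_demand=6.0, lead_time_days=30)
predictive = safety_stock(sigma_daily_demand=2.5, lead_time_days=30)
print(f"spare parts held in buffer: {reactive:.0f} -> {predictive:.0f}")
```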
 
1. Design fault detection, fault isolation, and fault prediction capabilities

2. Ensure capabilities are sufficient to meet condition monitoring and predictive requirements.

3. Utilize built-in-test and off-equipment tests for prediction to achieve desired functionality

4. Maximize the use of predictive maintenance strategies and implementation of CBM+

5. Enable improvements in failure prediction capabilities.

6. Develop and apply digital system tools for more accurate condition-based monitoring

7. Integrate maintenance and other functional logistics information systems across the enterprise

8. Design prognostic analytics with flexibility to accept many different sources of data for accurate predictions

9. Develop integrated CBM+ architecture early in the implementation process

10. Use Architectural Framework to ensure accurate and timely condition monitoring results
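The fault detection and isolation capability in item 1 above is often built on a fault-signature matrix: compare the symptoms reported by built-in-test against the symptom set each candidate fault would produce. A minimal sketch with invented faults and symptoms.

```python
# Invented fault-signature matrix: symptoms each fault would produce.
FAULT_SIGNATURES = {
    "bleed valve stuck":  {"duct_temp_high", "duct_press_low"},
    "sensor drift":       {"duct_temp_high"},
    "precooler fouling":  {"duct_temp_high", "flow_low"},
}

def isolate(symptoms):
    """Rank candidate faults by Jaccard overlap between the observed
    built-in-test symptoms and each fault's signature."""
    scored = sorted(
        ((len(sig & symptoms) / len(sig | symptoms), fault)
         for fault, sig in FAULT_SIGNATURES.items()),
        reverse=True)
    return [fault for score, fault in scored if score > 0]

print(isolate({"duct_temp_high", "duct_press_low"}))
# -> ['bleed valve stuck', 'sensor drift', 'precooler fouling']
```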

Top 10 Practices Consider Tech Enablers when Developing AI Condition Based Maintenance Functionality Strategy
 
1. Perform business case analysis to determine where applications of CBM+ meet economic goals


2. Reliability centered maintenance approach provides balance of reactive, preventive, and predictive processes

3. Use reliability analysis to determine optimum maintenance task functionality

4. Utilize condition monitoring analysis information when evaluating potential investments in reliability improvements

5. Assess consequences of changes in equipment maintenance approaches

6. Invest in sensor, data collection, and analytic capabilities

7. Minimize condition monitoring errors and determine failure modes

8. Develop metrics driven by condition-based maintenance information to enhance equipment performance

9. Use Life Cycle guidelines when applying CBM+ throughout the equipment acquisition process.

10. Consider using modeling and simulation in CBM+ to determine best design approaches.
 
Top 10 Best Practices to Consider Tech Enablers when Developing CBM+ Design Strategy
 
1. Incorporate open system architecture when designing business processes of digital tools

2. Achieve maximum interoperability, portability, and scalability

3. Apply industry standards to achieve open systems architecture

4. Apply Enterprise Application Integration when designing data exchange and storage strategies

5. Choose applications to facilitate the sharing of data

6. Promote integration of maintenance and logistics information systems

7. Design measurable, consistent, and accurate predictive parameters related to specific failure modes

8. Use automatic entry and retrieval to achieve more accurate data

9. Integrate data from different sources to achieve condition monitoring capability

10. Utilize shared databases to maximize the benefit of condition-based maintenance data.
 
Top 10 Real-time System Condition Management Data to Collect for Enterprise Repositories
 
1. Configuration management data and applications

2. Operating history data collection and storage

3. Digital logbooks, an automated event recording system

4. Event driven network Message managers

5. Data base management system for asset condition management data

6. Diagnostic applications for analyzing causes of failures

7. Predictive maintenance forecasting

8. Single or multiple correlated sensor data trend analysis

9. Model of correlated predictive sensor and measurement data

10. Interactive electronic technical manuals and interactive training
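Several of these repository items, the digital logbook and event-driven messaging in particular, reduce to the same pattern: append timestamped, structured event records that downstream tools can query. A minimal sketch with invented field names and events.

```python
import json
import time

class DigitalLogbook:
    """Minimal automated event recorder: append timestamped, structured
    records that downstream repository tools can query. Field names and
    events are invented for illustration."""
    def __init__(self):
        self.events = []

    def record(self, tail, system, event, **detail):
        self.events.append({"t": time.time(), "tail": tail,
                            "system": system, "event": event, **detail})

    def export(self):
        # One JSON object per line, ready to sync to an enterprise store.
        return "\n".join(json.dumps(e) for e in self.events)

book = DigitalLogbook()
book.record("86-0097", "bleed air", "overtemp", peak_f=505)
book.record("86-0097", "bleed air", "valve cycled")
print(book.export())
```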
 
Top 10 Challenges for Implementation of CBM+ Technologies and Operational Applicability of Machine Assessments
 
1. Development and integration of self-powered or power-harvesting wireless micro sensors capable of operating in high thermal or high mechanical load environments.

2. Improved models and methods to predict condition and expected life based on physical, mechanical measurements.

3. Reliable methods to measure and predict corrosion degradation in unstable environments

4. Predictive tools for advanced materials, materials systems, and structures and design concepts for in-service monitoring

5. Design tools to assist in selecting the most appropriate monitoring approach for a specific mechanical or electrical/electronic system

6. Development of miniature sensors to enable condition monitoring of debris in lubricating oils

7. Sensors to enable detection of the severity of hidden and general corrosion in operating environments

8. Sensors to enable detection of acoustic and vibration measures

9. Life-prediction methodologies and real-time computations

10. Signal processing and multi-sensor data fusion
 


Top 50 Digital Twin Case Studies Technology Application Development Increasing Service Providers

10/10/2020


 
Many skeptics say “Digital Twin” is a marketing term that has been used to re-package technologies that have been available in the market for a long time. To some extent that might be true, but not all digital twins are created equal. The value a digital twin provides depends on how it is constructed and maintained.
 
Decision cycles across DoD and industry are becoming increasingly disrupted by the proliferation of data, new data sources and compute speeds within an increasingly volatile business environment. The digital twin is the key to effective decision-making in this new world.
 
Making better and faster decisions that can be executed perfectly every time is vital for delivering superior and sustainable business value. However, this is easier said than done, because every individual perspective is underpinned by a series of unique cognitive biases that drive swift action in adversity but make accurately weighing evidence, assessing probabilities, and deciding logically a challenge. Look no further than the constant discrepancy between strategic planning ambition and results realization.
 
A single view of the truth, backed by analytics, is therefore key to situational awareness and effective organizational decision-making. Yet many players in the industry are stuck on what type of analytics they need. The answer should be driven by the problem that needs solving, not by how much analytics can be thrown at the data in the hope it will both find the problem and solve it. The desired outcome should influence the type of analytics being sought and the available analytics technology that is fit for purpose.
 
Basic analytics technology can move data around and get Key Performance Indicators (KPIs) displayed to the right people at the right time to enable decision-making, and this works well in hindsight for understanding what happened. However, increasing process plant complexity requires more sophisticated ways of approaching KPIs and targets.
 
In some cases, a rudimentary approach to KPI setting and monitoring can even become ineffective and counterproductive. Here, deeper analytics technology, utilizing a digital twin, is necessary to account for the multi-dimensional factors and non-linear trade-offs that make effective decision-making a challenge.
 
The digital twin allows “What if?” and “What’s best?” scenarios to be run automatically to determine available strategies that maximize profitability. Experts can then review the recommended strategies to assess the impact of each recommended approach.
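As a hedged sketch of those automated “What if?” / “What’s best?” runs: a set of candidate operating strategies is pushed through the twin's model and ranked by predicted profitability. The profit surrogate, throughput values and energy price below are assumptions for illustration, not a real plant model.

```python
# Sketch of automated "What if?" / "What's best?" scenario runs.
# profit() is a toy surrogate model standing in for the digital twin.

def profit(throughput, energy_price):
    """Toy non-linear trade-off: revenue grows linearly with throughput,
    but energy cost grows super-linearly past a soft capacity limit."""
    revenue = 50.0 * throughput
    energy_cost = energy_price * throughput ** 1.6
    return revenue - energy_cost

def whats_best(scenarios, energy_price):
    """'What if?': evaluate every candidate. 'What's best?': rank them."""
    ranked = sorted(scenarios, key=lambda t: profit(t, energy_price), reverse=True)
    return ranked[0], [(t, round(profit(t, energy_price), 1)) for t in ranked]

best, table = whats_best(scenarios=[60, 90, 120, 150], energy_price=2.0)
print("best throughput:", best)
```

The point of returning the full ranked table is that experts review every recommended strategy rather than blindly accepting the top one, mirroring the review step described above.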
 
A digital twin works in the present, mirroring the actual device, system or process in simulated mode, but with full knowledge of its historical performance and accurate understanding of its future potential.
 
Therefore, the digital twin can exist at any level within some traditional architectures and can be defined as a decision support tool that enables improved safety, reliability and profitability in design or operations. It is a virtual/digital copy of a device, system or process that accurately mimics actual performance, in real-time, that is executable and can be manipulated, allowing a better future to be developed.
 
A digital twin is useful across the entire lifecycle of an asset. It is ideally created during the initial study to evaluate the feasibility and process model of the asset. It is then used and further developed during the design, construction and commissioning of the asset; thereby facilitating the optimum design of the asset and the training of the staff who will operate the asset. During the bulk of a plant’s lifecycle, operation and maintenance activities, the digital twin can be employed for optimization and predictive maintenance.
 
The digital twin provides the opportunity to see inside assets and processes and perceive things that are not being directly measured. It is wired up so that insights are instantly available without data and model wrangling by end users and run in a consistent way that can be understood and agreed upon. In this way the digital twin is able to drive agility and convergence in understanding and implementation across the whole business, for example from Engineering to Operations; Operations to Supply Chain; Reservoir to Facilities; Shop floor to Board room; etc.
 

  1. The Digital Twin merges physics-based system modeling and distributed real-time process data to generate an authoritative digital design of the system at the pre-production phase. Several new advanced manufacturing strategies have been introduced to achieve smart manufacturing, resulting in an increasing number of newly designed production lines; under individualized design demands, more realistic virtual models mirroring the real production lines are essential to bridge the gap between design and operation. A digital twin-based analytical decoupling framework is also developed to provide engineering analysis capabilities and support decision-making over system design and solution evaluation. Several key enabling techniques as well as a case study in a production line are addressed to validate the proposed approach.
  2. Factory design offers many promising capabilities regarding productivity and floor utilization. To evaluate a design and help the designer avoid design flaws, the digital twin is proposed to support factory design. Considering the frequent changes of the design phase, we present a modular approach to help build a flexible digital twin and carry out corresponding changes. Using a flexible digital twin, the designer can quickly evaluate different designs and find design flaws in an easy way, with considerable time savings as a consequence. A case study of application to a real factory is presented to illustrate the advantages.
  3. Digital twin provides an effective way for the digital-physical integration of manufacturing. Smart manufacturing services could optimize the entire business processes and operation procedure of manufacturing, to achieve a new higher level of productivity. The combination of smart manufacturing services and digital twin would radically change product design, manufacturing, usage, maintenance and other processes. Combined with the services, the digital twin will generate more reasonable manufacturing planning and precise production control to help achieve smart manufacturing, through the two-way connectivity between the virtual and physical worlds of manufacturing. Here we specify and highlight how manufacturing services and digital twin are converged together and the various components of digital twin are used by manufacturers in the form of services.
  4. Multiple forms of digital transformation are imminent. Digital Twins represent one concept, where we may use tools and technologies to “map” data from objects. It is gaining momentum because the “map” can act as a “compass” to reveal the status of things, devices, components, machines, people and processes with visibility and real-time transparency. Adoption of digital proxies, or digital duplicates, may face hurdles due to lack of semantic interoperability between architectures and standards. The technologies necessary for automated discovery are in short supply. Progress depends on the convergence of information technology, operational technology and protocol-agnostic telecommunications. Making sense of the data, the ability to curate data and perform analytics at the edge, and delivering algorithm engines to the edge are crucial for edge analytics when latency is detrimental. The confluence of these and other factors may chart the future path for Digital Twins. The number of unknown unknowns, and of known unknowns, in this process makes it imperative to create global infrastructures and organize groups to pursue the development of fundamental building blocks. We need new ideas and research to generate creative and innovative solutions.
  5. Digital twin implements concepts that try to solve the problem of handling large amounts of data that are accessed concurrently and have numerous internal semantic dependencies. Here we provide an application of the digital twin through the adoption of simulation techniques. The sample takes inspiration from a real plant which produces anchoring plates for electric motor brake discs. The weak point of the cycle is represented by problems arising from its discontinuity. To simulate and study solutions, software was used to create virtual simulation models. Here we optimize a machining cycle for electric motor brake disc support plates. Today it is possible to adopt modeling and simulation techniques that, thanks to the increasing power of technologies and the Industry network platform, allow understanding the impacts of changes in production and therefore verifying their effectiveness and limits.
  6. The vision of the Digital Twin itself refers to a comprehensive physical and functional description of a component, product or system together with all available operational data. This includes more or less all information which could be useful in all - the current and subsequent - lifecycle phases. One of the main benefits of the Digital Twin for mechatronic and digital-physical systems is to provide the information created during design and engineering also at the operation of the system. The comprehensive networking of all information, shared between partners and connecting design, production and usage, forms the presented paradigm of next generation Digital Twin. This will bridge the gap between physics-based design simulation and its use in operation and service phases. Based on the example of a point machine the benefits of using the Digital Twin are shown.  
  7. Digital Twin models are computerized clones of physical assets that can be used for in-depth analysis. Industrial production lines tend to have multiple sensors generating near real-time status information for production. Industrial object network datasets are difficult to analyze and to mine for valuable insights such as points of failure, estimated overhead, etc. Here we introduce a simple way of formalizing knowledge coming from sensors in industrial production lines as digital twin models. We present a way to extract and infer knowledge from large-scale production line data, and enhance manufacturing process management with reasoning capabilities, by introducing a semantic query mechanism. Our system primarily utilizes a graph-based query language equivalent to conjunctive queries and has been enriched with inference rules.
  8. For fast integration of new vehicles, a current digital image of the real production plant ─ the Digital Twin ─ is groundbreaking. This Digital Twin of a factory consists of a current bill of resources for cost calculation and a current layout planning state. Here we describe a concept for creating a Digital Twin of a body production system for the concept and rough planning projects. In the internal concept planning phase, planners do cost calculations and layouts for ordering factory suppliers. However, for integration planning, the original concept and rough planning project have to be updated. Therefore, a new concept has been developed which uses current information from the digital-physical system and a current 3D scan to update the bill of resources and the layout planning on demand. Increasing competition in the automotive industry makes cost-saving integration of more and more vehicle derivatives and variants, as well as electrical and combustion engine models, into existing production systems necessary. In contrast to the original concept and rough planning state, the automated production plants are continuously optimized during the detail planning phase as well as after the start of production as a result of improved processes and model upgrading.
  9. The concept of Industry network of objects and smart factories has increasingly gained more importance. One of the central aspects of this innovation is the coupling of physical systems with a corresponding virtual representation, the Digital Twin. This technology enables new powerful applications, such as real-time production optimization or advanced network services. To ensure the real-virtual equivalence it is necessary to implement multimodal data acquisition frameworks for each production system using their sensing capabilities, as well as appropriate communication and control architectures. Here we extend the concept of the digital twin of a production system adding a virtual representation of its operational environment. We describe a proof of concept using an industrial robot, where the objects inside its working volume are captured by an optical tracking system. Detected objects are added to the digital twin model of the cell along with the robot, having in this way a synchronized virtual representation of the complete system that is updated in real time. The paper describes this tracking system as well as the integration of the digital twin in a networked 3D virtual environment that can be accessed from any compatible devices.
  10. With the recent advances in industrial object networks, the significance of information technologies to modern industry is upgraded from purely providing surveillance-centric functions to building a comprehensive information framework of the industrial processes. Innovative techniques and concepts emerge under such circumstances, e.g. Digital Twin, which essentially involves data acquisition, human-machine-product interconnection, knowledge discovery and generation, and intelligent control, etc. Signal processing techniques are crucial to the above-mentioned procedures, but face unprecedented challenges when they are applied in the complex industrial environments. Here we survey the promising industrial applications of object network technologies and discuss the challenges and recent advances in this area. We also share our early experience with a real-world industrial system that enables comprehensive surveillance and remote diagnosis for ultra-high-voltage converter stations. Challenges in building such a system are also categorized and discussed to highlight potential future directions
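Case 7's idea of formalizing production-line knowledge as a queryable graph can be sketched in a few lines: machines and their feed relationships form the graph, and a conjunctive query combines structure (“feeds the packer”) with sensor conditions (“vibration above threshold”) to isolate likely points of failure. The line layout, sensor values and threshold are invented for the example.

```python
# Toy knowledge graph of a production line: machines, their downstream
# links, and latest sensor readings. A conjunctive query combines graph
# structure with sensor conditions to find likely points of failure.

edges = {              # machine -> downstream machine it feeds
    "loader": "press",
    "press": "welder",
    "welder": "packer",
}
readings = {"loader": 40.2, "press": 71.0, "welder": 96.5}  # vibration, mm/s

def upstream_of(target):
    """All machines that directly or transitively feed `target`."""
    found = set()
    changed = True
    while changed:
        changed = False
        for src, dst in edges.items():
            if (dst == target or dst in found) and src not in found:
                found.add(src)
                changed = True
    return found

def suspect_machines(target, threshold):
    """Conjunctive query: upstream_of(target) AND reading > threshold."""
    return sorted(m for m in upstream_of(target) if readings.get(m, 0) > threshold)

print(suspect_machines("packer", 90.0))
```

A real system would back this with a graph database and inference rules, but the query shape stays the same: structural conditions joined with live sensor conditions.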
 
Top 10 Digital Twin Objectives Building Effective Communication between Physical/Information Domain with Performance Maps
 
Digital Twins provide accurate representations of a device, system or process over its full range of operation and its full lifecycle. Ideally the digital twin should be able to transition from design to operations with ease.
 
Data quality issues must be identified and mitigated so the digital twin can be trusted to reflect reality and relied on for the quality and accuracy of its predictions. While individual point-solution digital twins exist today, future success stories may involve multi-purpose digital twins. It is unrealistic to achieve a future state in one step; it is more likely to be achieved through the connectivity of valuable, high-performing individual elements. Teams must be agile: think big, start small, scale fast and drive adoption.
 
  1. Digital Twin application frameworks support the conceptual design matching/validation stages; the model-based approach saves workload and time for the manufacturing process and production system
  2. Digital Twin allows designers to forecast product behavior and reduce inconsistencies through verification of virtual products without waiting for a prototype, shortening design cycles
  3. Digital Twin provides an effective method to draw up plans and optimize processes in production execution via pre-designed modules, so only a series of parameters needs to be modified to run simulations under different design scenarios
  4. Digital Twin builds the shop floor with specialized work models that include physical space, rules, behavior and dynamics, so workload and development time can be saved
  5. Digital Twin links a collection of digital artifacts including engineering data, operation data and behaviour descriptions via multiple simulation models
  6. Digital Twin simulation models are specific to their intended use and apply suitable fidelity for problem solutions, growing along with the real system for the whole life cycle and integrating currently available data
  7. Digital Twin allows performance/utilization data created during design and engineering to be available and ready for evaluation during operation of the system
  8. Digital Twin transports data, information and executable models of all system elements from development and product delivery by the supplier through to operation
  9. Digital Twin embedded on or available at the edge device creates a loop between the real world and the digital world as a prerequisite for autonomous systems
  10. Digital Twin controls service system access; models identify and control failure behavior to invoke repair or replace broken-down equipment
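Objective 3's pre-designed modules, where only a series of parameters changes between design scenarios, can be sketched as a base module plus per-scenario overrides. The module parameters and the cycle-time formula below are illustrative assumptions, not a real line model.

```python
# Sketch of parameterized simulation modules: one base configuration,
# and each design scenario only overrides a few parameters.

BASE = {"conveyor_speed": 1.0, "buffer_size": 10, "stations": 4}

def run_scenario(overrides):
    """Merge scenario parameters into the base module and return a toy
    cycle-time estimate in seconds per part."""
    p = {**BASE, **overrides}
    return 60.0 / (p["conveyor_speed"] * p["stations"]) + 2.0 / p["buffer_size"]

scenarios = {
    "baseline": {},
    "fast_line": {"conveyor_speed": 1.5},
    "extra_station": {"stations": 5, "buffer_size": 20},
}
for name, overrides in scenarios.items():
    print(name, round(run_scenario(overrides), 2))
```

Because each scenario is just an override dict, comparing many design variants is a matter of data, not of rebuilding the simulation model.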
 

Top 10 Digital Twin Technology Benefit Multiple Levels of Organization Investment Plan Align Production with Supply Chain Processes
 
Manufacturers have successfully developed an integrated production management system digital twin that operates across the entirety of the process manufacturing supply chain and asset lifecycle, aligning production management and reliability, supply chain optimization, and strategic asset investment planning.
 

  1. Enterprise Insight: A series of business and financial Key Performance Indicators (KPIs) are updated in real-time, plan versus actual, as part of an enterprise-wide balanced scorecard for corporate situational awareness. Advanced logic is applied to real-time operations data to project current and future understanding of the business for executives.
  2. Capability Assurance: Key operator actions are captured, controlled and manipulated in real-time through monitoring and control of work processes. Minimize the learning curve for new operators, support change management and enable vastly improved scenario validation through operator training simulation and modular procedure automation solution.
  3. Advanced Production: Multivariable predictive controls drive the plant continuously to its optimum constraints by reacting to disturbances in a closed loop.
  4. Value Chain Optimization: Drive agile and efficient alignment of supply of premium products as closely as possible to market demand with sufficient resilience or operational flexibility to readily adjust production. Exploit market opportunities through supply chain planning, scheduling and production accounting.
  5. Automation and Control Integrity: Digital representation of the live plant and its automation algorithms through the “twin” function of an Integrated Control and Safety System allows engineers to conduct fundamental process control tests at an engineering workstation, as well as any proposed adjustments, before they are applied on the live plant.
  6.  Instrumentation and Equipment Productivity: Minimize the need for breakdown and preventative maintenance through advanced online monitoring and prediction of field device health, and process interface conditions that reduce unnecessary trips
  7.  Advanced Chemistry: Highly intelligent devices, such as pumps, flowmeters, transmitters, and chemical analyzers provide total insight into their ongoing performance as well as the ability to adapt to changing duty requirements throughout the measurement device lifecycle.
  8. Plant Processes Optimization: Online and offline high-fidelity models for non-linear performance monitoring, simulation, and optimization using first principle kinetics deliver optimized yield performance, flow assurance, energy efficiency improvement, enhanced reliability and operator capability assurance.
  9. Strategy Execution: Ensuring field and line employees have the information they need to understand the bottom-line impact of their day-to-day choices
  10. Facilitating information flows across organizational boundaries to minimize second-guessing of decisions
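Item 1's plan-versus-actual KPI update can be sketched as a small scoring function feeding a balanced scorecard. The KPI names, planned values and traffic-light tolerances below are assumptions for the example.

```python
# Sketch of real-time plan-versus-actual KPI scoring for a balanced
# scorecard. Tolerances give a simple traffic-light status.

def kpi_status(plan, actual, tolerance=0.05):
    """Return fractional variance from plan and a traffic-light status."""
    variance = (actual - plan) / plan
    if abs(variance) <= tolerance:
        status = "green"
    elif abs(variance) <= 2 * tolerance:
        status = "amber"
    else:
        status = "red"
    return variance, status

scorecard = {                          # kpi -> (plan, actual)
    "throughput_tpd": (1200.0, 1185.0),
    "energy_kwh_per_t": (310.0, 355.0),
    "on_time_delivery": (0.98, 0.91),
}
for name, (plan, actual) in scorecard.items():
    variance, status = kpi_status(plan, actual)
    print(f"{name}: {variance:+.1%} {status}")
```

In a live deployment the `actual` values stream in from operations data, so the scorecard re-scores on every update rather than at reporting deadlines.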
 
 
 
Top 10 Process Automation System Factory Acceptance Test Simulated from Simplified Model Signals Allows Operator Training/Case Studies
 
The control system simulator used with the dynamic simulator must run the exact application software from the plant system, without deletions or modifications. To provide a solution that can be cost-effectively kept current with the plant system, it is essential that the control system simulator reflect the control system configuration with no additions or deletions.

  1. Strategy for the Digital Twin is defined and the functional requirements are developed using simulation to review process design/control plan and identify control and operational issues early in plant design.
  2. Automation System Design / Implement / Test - during control system engineering, the dynamic simulation is integrated to the control system simulator. This begins the use of the integrated Digital Twin. The Digital Twin is used to test control system configuration and graphics. Control design can be evaluated early in the project when rework and changes will have the least project impact.
  3. Factory Acceptance Testing - including system integrity and operational tests are done with the Digital Twin. Operating procedures are tested and refined. Problems and issues that could delay unit startup or disrupt production are caught before they impact the project. Loops are initially tuned to support smooth startups.
  4. Operator Training – begins early, at the completion or concurrent with factory acceptance test. Operators begin developing competency well before the startup of the plant or the commissioning of the process automation system using the Digital Twin.
  5. Structured training and open exploration of process dynamics and control system performance are both valuable in preparing the operator for actual plant operations.
  6. Process Control Improvements are developed, tested, and demonstrated to operations management without affecting the operation or production of the actual plant. The control system configuration developed in the Digital Twin is exported directly to the process automation system minimizing operational risk.
  7. Training New Operators on process operations, startup and shutdown procedures, and hazardous or infrequent process occurrences, is accomplished without affecting the running process.
  8. Evaluating New and Experienced Plant Operations is done on pre-developed training scenarios. Plant operations competency requirements are established and reinforced with repeatable, measurable, documented training sessions.
  9. Process Optimization Studies are done on the Digital Twin providing the process engineers with a tool that accurately models the process dynamics not seen in steady-state design models. Process changes with control improvements are thoroughly tested before construction begins, reducing rework and startup times.
  10. Process Safety Design is tested on the Digital Twin without impacting the process. Levels of Protection in Operating procedures are verified before implementation. Capital investment decisions are validated and optimized using the Digital Twin.
 
 
Top 10 Digital Initiatives within Reach for Manufacturing Industries Plant Connectivity Accelerate Decision Cycle Creates Sustainable Business Value
 
 
The digital twin must exist within a strong oversight framework. This includes the challenge of creating well-defined business processes along with clarity around the decision rights and action responsibilities of workers. It must also consider guidance from subject matter experts, and their associated analytical insights, working remotely from the operational location. Currently these insights are obtained only when there is a problem, or as structured inputs on a regular basis, such as a periodic service review schedule.
 

  1. Reduce field qualification requirements and minimize field deviations
  2. Speed up isolation of issues to individual objects for Change Management
  3. Enforce consistent testing of the automation by using automated testing scripts to accelerate and reduce manual testing
  4. Catch configuration and code regression errors early in the project cycle, when remediation is cheapest and operational costs are minimized
  5. Perform a subset of software commissioning/shakedown activities prior to installing the code on the production equipment. This can reduce the time needed to perform testing on the production equipment, reducing the startup time for the plant
  6. Dynamic process models must be first-principles, following fluid dynamics, kinetics, and thermodynamics laws, but must also be dynamic and real-time. Real-time response must support good process dynamics for even fast processes without changes to process control system loop tuning.
  7. Provide a cost-effective, consistent approach that maximizes the return on investment in model development by using a multi-purpose approach
  8. Allow incremental model development, enhancement, and tuning to support the evolving requirements of the lifecycle.
  9. Ability to start with a model with preliminary data and then to tune it with detailed engineering or actual data
  10. Simulation must support a wide range of process model complexity from simple inputs/outputs pushbuttons, switches, and sensors models to complex unit operations models.
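Items 3 to 5 describe automated testing of the control configuration before any code reaches production equipment. A minimal sketch of such a regression script, assuming an invented tank-level interlock and invented baseline results:

```python
# Sketch of an automated regression test script for control logic: replay
# recorded cases against the current logic and flag any drift from the
# baseline captured on the last accepted configuration.

def interlock(level, high_trip=80.0, low_trip=20.0):
    """Toy tank-level interlock returning the commanded valve action."""
    if level >= high_trip:
        return "close_inlet"
    if level <= low_trip:
        return "open_inlet"
    return "hold"

BASELINE = {10.0: "open_inlet", 50.0: "hold", 85.0: "close_inlet"}

def regression_report(logic, baseline):
    """Run every recorded case; return mismatches (empty list means pass)."""
    return [(case, expected, logic(case))
            for case, expected in baseline.items()
            if logic(case) != expected]

failures = regression_report(interlock, BASELINE)
print("PASS" if not failures else failures)
```

Running such scripts on every configuration change catches regressions while remediation is cheapest, before the code is installed on production equipment.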
 
Top 10 Reasons to Exploit Cloud for Hosting Digital Twin Gaining Interest in Industries as Key Enabler of Digital Enterprise
 
The Cloud is already the infrastructure of choice for most business applications. However, it remains unexploited for most operational applications. The reason is that most valuable operational applications rely on a continuous feed of plant data, which means they can never be isolated from the plant in the way that a worker performance management system or capital budgeting system can. From a network point of view, the Cloud offers some compelling savings versus on-premise. But unless the operational risk associated with exposing the plant to the Cloud is offset by value created by the people and applications it serves, its use will remain marginal.
 

  1. Can engage people and technologies from outside corporate boundaries
  2. Build Augmentation with visualization models in resource-constrained environments
  3. Allows remote subject matter experts to join in the day-to-day troubleshooting
  4. Contribute to profit improvement activities of the plant
  5.  Enables the digital twin to subscribe to external data feeds that can enrich its resolution.
  6. Allows analytical capabilities to be offered remotely by experts.
  7.  Supports and nourishes agility with respect to the digital twin.
  8. Allows experimentation and rapid deployment of new solutions.
  9.  Makes solution updates trivial and significantly reduces infrastructure costs.
  10.  Reduces the cost of termination – if a solution does not work out as expected, a cloud solution can often be switched off with little to no on-going cost.
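Item 5's external data feeds can be sketched as a small publish/subscribe layer around the twin's state. The feed names and payload fields are invented for the example; a real cloud deployment would sit behind a message broker rather than use in-process callbacks.

```python
# Sketch of a cloud-hosted twin enriched by external data feeds via a
# minimal in-process publish/subscribe mechanism.

class CloudTwin:
    def __init__(self):
        self.state = {}       # the twin's current enriched state
        self.handlers = {}    # feed name -> list of callbacks

    def subscribe(self, feed, handler):
        self.handlers.setdefault(feed, []).append(handler)

    def publish(self, feed, payload):
        for handler in self.handlers.get(feed, []):
            handler(self.state, payload)

twin = CloudTwin()
twin.subscribe("weather", lambda s, p: s.update(ambient_c=p["temp_c"]))
twin.subscribe("market", lambda s, p: s.update(power_price=p["eur_mwh"]))

twin.publish("weather", {"temp_c": 27.5})
twin.publish("market", {"eur_mwh": 41.0})
print(twin.state)
```

The design point is that each external feed enriches the twin's resolution without the twin needing to know where the data originates, which is what makes cloud hosting attractive.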
 

0 Comments

Top 10 Artificial Intelligence Takes Steps in Alpha Dogfight Trials Display Potential of Air Power Technology

10/1/2020

0 Comments

 
DARPA’s AlphaDogfight Trials (ADT) conducted a simulated aerial duel pitting the best AI F-16 ‘pilot’ against a human pilot, aiming to determine the viability of relying on machines in a fast-paced, unpredictable air-to-air combat environment.
 
The AI agent’s resounding victory demonstrated the ability of advanced algorithms to outperform humans in virtual dogfights. These simulations will culminate in a real-world competition involving full-scale tactical aircraft in 2024.
 
The algorithm easily defeated the fighter pilot in all five rounds that capped off a yearlong competition hosted by the Defense Advanced Research Projects Agency. The AI system gained notoriety throughout the competition for its aggressiveness and the accuracy of its shot.
 
But the system wasn’t perfect. The AI pilot often made an error in basic fighter maneuvers by turning away from enemy aircraft to where the AI thought the other aircraft would go. It was then unable to recover throughout the fights.
 
Military officials have long eyed the potential for AI to control aircraft, whether as part of a “loyal wingman” setup where a number of systems are controlled by one pilot, or through taking existing systems and making them optionally manned.
 
An AI beat a veteran human pilot 5-0 in DARPA’s virtual AlphaDogfight trials. That program, called Air Combat Evolution (ACE), will now advance to testing in actual fighter aircraft. But the current party line is that the US military does not seek to replace human judgment and control in combat operations, only to augment them.
 
“AI will support human decision-makers, not replace them. We see AI as a tool to free up resources, time, and manpower so our people can focus on higher priority tasks, and arrive at the decision point, whether in a lab or on the battlefield, faster and more precise than the competition.”
 
 
 
“Before AI can find its place, really, in the cockpit in a formation, it kind of has to earn its wings first.”
 
“It’s got to show that it can actually fly, that it can compete, and that it can contribute. A human pilot goes through pretty much the same thing. You learn to fly, you have to demonstrate you have a certain level of skill, then you can join a formation, be part of the squadron and go out and be part of the team.”
 
The  virtual games will help flesh out how human and machine pilots can share operational control of a fighter jet to maximize the chances of mission success.
 
“When we put this particular part of the program together, there were some really critical questions that had to be answered, and we’ll see how that comes out during the trials.
 
We need to understand: can we actually develop autonomy algorithms that can operate in the very demanding environment of air-to-air combat?”
 
“And we’ll need to understand —later on once we have those algorithms developed — when you do the human-machine teaming, how do you share the workload in the dynamic situations of aerial combat where aircraft are getting damaged and attritted.
 
How do you shift responsibilities back and forth so that you can have the air dominance that you want? Those are very complex problems.
 
ADT is not so much a contest between human and machine as the herald of an era of “human-machine cooperation.” Applying this concept more generally to air warfare, key examples can be thought of as similar to the five types of cooperation: a Flight Group with either all human pilots or all autonomous drones, such as swarm deployment of attritable payloads or platforms while the human pilot remains a rear-area local controller; offensive or defensive attritables or expendables with signature abilities; deployment of expendable payloads (e.g., from an “arsenal plane”) in salvos of homogenous or heterogenous weapon mixes; and deployment of cyber effects into a threat payload and/or platform, for immediate or latent impacts.
 
AI is the critical enabler in each of these manifestations of human-machine cooperation for air warfare. Further advancements in human-machine cooperation will enable the pilot to focus more on larger-area mission planning while delegating the business of flying to the AI. No longer will the pilot selection process need to emphasize the quickest reaction times and physiology for high g forces; it should instead prioritize aptitudes for synthesizing information, task prioritization and decision-making.
 
These skills will be critical to leverage and coordinate effects from a pilot’s own aircraft, as well as other offboard assets. Abstraction layers will mean less onerous and specialized training is required to fly a plane, and more emphasis on how best to use it.
 
Air Force has been promoting the concept of teaming unmanned and manned aircraft to undertake a variety of missions in future all-domain fights against peer adversaries — for example, championing Skyborg effort to mate AI-driven drones with fighter jets to perform traditional ‘wingman’ roles, as well as other operations.
 
The bottom line for the Air Force’s fighter pilots is the issue of trust; i.e. ceding life-or-death control to an artificial intelligence. “We are very reluctant — the fighter pilot community is very reluctant — towards any sort of change. There’s almost an institutional requirement to not accept change. When it comes to autonomy, this has been avoided almost like the plague.”
 
Air Force has said some of the first areas the service could use AI pilots is in training.
 
 
“As we look at how we’re going to bring those pilots’ assistants in, and then how we might move toward a more autonomous vehicle, one of our ideas is we’d like to try it in red air first.”
 
“So, we’d like to take it and see if there are ways to do some unmanned adversaries against our manned aircraft, and then let them learn and train their algorithms and see how good they get at fighting against us.”
 
The overarching goal of the program is to gradually build up trust in increasingly capable autonomous systems that eventually can take on the full gamut of air combat missions.
 
DARPA’s Air Combat Evolution (ACE) Program hopes to build trust in AI for increasingly complex air combat missions.
 
Building trust that AIs can understand human intentions is also the goal of another DARPA program, called Adaptive Distributed Allocation of Probabilistic Tasks (ADAPT).
 
ADAPT will assist commanders with decision-making by quickly crunching data in fast-changing battlespaces and improving cooperation between human and AI agents.
 
"ADAPT will take a significant step forward in human-AI collaboration so warfighters and intelligent technology can reason and work together to make better, faster decisions than either could do on their own." By learning from its human counterparts, taking into account their goals, preferences and constraints, these more informed agents can guide AI in forecasting, creating and adapting action plans as missions evolve."
 
In a search-and-rescue scenario, for example, these advanced AI models and agents will think through millions of possible scenarios for commanders to choose the best plan, minimizing casualties and risks.
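The scenario-sampling idea described here can be illustrated with a toy Monte Carlo plan evaluator; the plans, risk numbers, and scoring function below are invented for illustration and are not drawn from the ADAPT program:

```python
import random

def evaluate_plan(rescuer_risk, coverage, n_scenarios=10_000, seed=0):
    """Estimate expected casualties for one candidate plan by sampling many
    uncertain scenarios (survivor difficulty, weather) and averaging the cost."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_scenarios):
        difficulty = rng.random()                 # how hard the survivor is to find
        weather = rng.random()                    # severity of conditions
        miss = max(0.0, difficulty - coverage)    # shortfall in search coverage
        total += miss * 10 + rescuer_risk * weather
    return total / n_scenarios

# Hypothetical plans: (risk to rescuers, search coverage)
plans = {"air-heavy": (0.6, 0.9), "ground-heavy": (0.3, 0.6), "mixed": (0.4, 0.8)}
ranked = sorted(plans, key=lambda name: evaluate_plan(*plans[name]))
```

A real planning aid would simulate far richer scenario models, but the principle is the same: sample the uncertainty, score each plan, and surface the ranking to the commander.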
 
"Humans excel at learning from one another but can only process so much incoming information. AI, on the other hand, has incredible computational abilities but needs to learn from and communicate with humans in order to be used effectively in dynamic team situations.
 
 "These combined attributes will elevate a commander's expertise and decision-making in fast-changing, information-intensive environments so they can respond, and adapt quickly, while considering future possibilities." ADAPT will collect and analyze huge influxes of data for humans, optimize strategy and reallocate assets on the fly.
 
“The idea of the Alpha Dogfight Trials was essentially to be an accelerator, or an incubator, for a lot of the technologies that are flowing into this larger program. In ACE, we’re really demonstrating trusted, scalable, human-level, AI-based and AI-driven autonomy for air combat.”
 
The overarching ACE concept is aimed at allowing pilots to shift “from single platform operator to mission commander,” in charge not just of flying their own aircraft but of managing teams of drones slaved to their fighter jets. The goal was not just to tackle the fundamental problems plaguing autonomous system development, but also to speed actual solutions.
 
“These performers have gone down very different paths, chosen very different approaches in AI, and for us that’s a bonus, because we not only get to see how these algorithms perform, but we get to see how these different areas of AI work.


“This can inform later investments if it turns out that there are particular areas that need further development or if there are things that offer new advantages that we previously hadn’t anticipated.”
 
AI pilots face simulated ‘adversaries’, each based on a typical scenario involving offensive and defensive maneuvers.
 
The virtual games have attracted an enormous amount of attention within DoD, the various services and industry. “We’re trying to see if there are some other solutions available to let everybody in. But again, this shows the depth and breadth of interest in this particular area, and we’re very glad to see that kind of interest in this program.”
 
Developments in AI could lead to “hyperwar” — a type of conflict and competition so automated that it would collapse the decision action loop, eventually minimizing human control over most decisions.
 
 We need to encourage the organizational transformation necessary to adopt safer, more explainable AI systems to maintain our competitive edge, now that the technical transformation is at our doorstep.
 
Software, AI, autonomy — these are the ultimate weapons. These technologies are the difference between hundreds of old MiG-19 and MiG-21 fighter jets lying in scrap yards, and their transformation to autonomous, maneuverable, and so-called “attritable,” or expendable, supersonic drones built from abundant air frames, equipped with swarm coordination and the ability to operate in contested airspaces.
 
Gone are the days when effectiveness and capability can be ascribed to individual systems and platforms. Now, it’s all about the network of assets, how they communicate, how they decide to act, and how efficiently they counter the system that is working in opposition to them. An individual aircraft carrier or a squadron of strategic bombers is no longer as independently meaningful as it once was.
 
In the emerging environment, network-connected, cognitive systems of war will engage each other. They will be made up principally of software, but also of legacy weapons platforms, humans, sometimes in combat, and newer assets capable of autonomous decision and action.
 
The picture of the environment in which they operate across time and space will only be made clear by intelligent systems capable of fusing massive amounts of data and automatically interpreting them to identify and simulate forward the complex web of probabilities that result.
 
Which actions are likely to be successful? With what degree of confidence? What are the adversary’s most likely counter-moves? The large scale, joint application of autonomously coordinated assets by a cognitive system will be unlike anything that has come before. It is this fast-evolving new paradigm, powered by artificial intelligence at every level, from the tactical to the strategic, that demands our attention. We must no longer focus on individual platforms or stand-alone assets, but on the cognitive system that runs an autonomous “Internet of War”.
 
One concern is that artificial intelligence has been broadly misunderstood as a product or a feature. It is not. It is a science, much like physics or mathematics. Its applications will lead not merely to incremental enhancements in weapon systems capability but require a fundamental recalculation of what constitutes deterrence and military strength.
 
For example, the combination of AI elements — visual recognition, language analysis, the automated extraction of topical hierarchies/ontologies, control of systems with reinforcement learning, simulation-based prediction, and advanced forms of search — with existing technologies and platforms, can rapidly yield entirely new and unforeseen capabilities. The integration of new AI into an existing platform represents a surprise in its own right. But the complex interactions of such platforms with others like them can create exponential, insurmountable surprise.
 
Integrating the “LEGO bricks” of intelligence and autonomy into conventional platforms results in unconventional upgrades. A fighter jet with autonomy isn’t just a 1950s-era write-off; it becomes a system with new potential, diminished logistics dependencies, and an enhanced efficacy that goes far beyond an engine or radar upgrade. Broadly, the consequences of using AI to revitalize and reinvent conventional platforms will be transformative.
 
Despite the change occurring globally in value shifting from the physical to the digital, and the tremendous latent potential of AI, our Defense Department has not traditionally shown its best when it comes to understanding, acquiring, or deploying software capabilities. Hardware platforms come far more naturally to our acquisition professionals.
 
We can hope for a change of heart and perspective, but absent that, in order for AI to be meaningful to them in the near term, we must reinvent, enhance, and reimagine existing platforms just as we build new ones, so we can cost-effectively fulfill needs and create significant capabilities that open the door to even greater future potential. Briefing after briefing on the potential of AI, or distributing primers on machine learning inside the confines of the Pentagon, won’t lead to critical adoption; the performance that results when AI is integrated into platforms will be the proverbial proof of the pudding.
 
To compete in this new era of exponential technologies, the U.S. military and our intelligence agencies need both digital and physical systems powered by artificial intelligence. Imbued with synthetic cognition, such systems can make a meaningful difference to every branch of our armed services.
 
A serious effort to fuel the development of such systems will lay the groundwork for true, full-spectrum AI adoption across DoD. But for any of this to become reality, long held views and processes in DoD must change.
 
An aggressive and fast-track effort to incorporate AI into existing and new platforms will have to be adopted. In the age of hyperwar, our willingness to embrace commercial innovation, our decisiveness in acknowledging that we live in a post-platform era, and most importantly, the speed with which we operationalize new investments, will be the attributes that lead to success.
 
 
It’s all about augmenting personnel efforts across battalions, regiments and divisions to raise the readiness levels of the entire force. Our troops inside the wire should have the knowledge, technical acumen and agility to support all of the operations and technology our troops outside the wire are running.
 
1. Invest in both garrison and tactical systems and infrastructure that are capable of running and sustaining the increased compute power that comes with training and deploying AI programs.
 
2. Conduct these initiatives in parallel with operations to ensure red and blue force efforts are complementary and requirements are aligned by having an algorithm alert to discrepancies or gaps in the operational plan and the intelligence needed to execute it.
 
3. Display both a geographic and temporal representation of multi-intelligence data with a natural language generated summary of the data. Include the ability to break data into individual entities as needed and ingest analyst annotations into the automated summary.
 
4. Automatically map finished intel products to the priority intelligence requirement they help answer with automated caveat classification of documents tied to user alerts/permissions. Include smart search capabilities in that repository so analysts can find relevant products more efficiently using NLP and machine learning.
 
5.  Capture workflows and product development in a shared space so knowledge gaps are reduced between shifts or rotations. Use automation to track knowledge gaps and alert users to update analysis and finished products when significant knowledge gaps are filled.
 
6. Measure impact of operational intelligence (and any associated collections or requests that contributed to that intelligence) by automating inputs and processes that would serve as operational measurements.
 
7. Apply analytic rigor to the intelligence analysis process and automate a measure of objectivity into levels of confidence tied to intelligence assessments.
 
8. Add cognitive search into the massive data repositories analysts are required to sift through to move beyond keyword search and enable contextual search at an enterprise level.
 
9.  Provide in-depth training on AI systems and set the standards by which the technology should augment the human analytic process but by no means replace the analyst behind the screen. In short, tie the technology into existing workflows and adjust workflows to account for technological innovation.
 
10. Use natural language processing (NLP) and machine learning to automatically classify and match incoming data to indicators and warnings being monitored. Provide alerts on trending topics, keywords or themes that may indicate emerging tactics, techniques and procedures.
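Item 10’s matching of incoming data against monitored indicators can be sketched, in its simplest bag-of-words form, like this; the indicator list and threshold are hypothetical, and a fielded system would use trained NLP models rather than raw word overlap:

```python
import math
import re
from collections import Counter

def vectorize(text):
    """Bag-of-words term counts for a piece of text."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical indicators an analyst cell is monitoring.
indicators = {
    "uav-swarm": "small drone swarm launch coordinated uav activity",
    "ew-jamming": "gps jamming spoofing electronic warfare emitter",
}

def match_report(report, threshold=0.2):
    """Return the best-matching indicator for an incoming report, or None."""
    scores = {k: cosine(vectorize(report), vectorize(v)) for k, v in indicators.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None
```

A report such as "Multiple small UAV launches observed; coordinated swarm behavior" would trip the uav-swarm indicator, while unrelated traffic falls below the threshold and raises no alert.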
 

0 Comments

Top 10 Artificial Intelligence Applications Underline Modern Tech Integration in the Conduct of Future Conflict

10/1/2020

0 Comments

 
The DARPA Air Combat Evolution (ACE) team has pushed the state of the art in agent-based modeling and artificial intelligence applications for air warfare.
 
Much of the initial reporting focused on the unambiguous final result when AI defeated the human pilot in each of their five dogfights. Here, as in the past, when such a decisive result occurs, some herald it as the end of an era and the dawn of a new one, like the shift from cavalry to tanks.
 
Conversely, skeptics highlight the unrealistic conditions of the test: the Alpha Dogfight Trials (ADT) used “perfect” data during the scenario, conditions any experienced pilot or controller would identify as unrealistic. In the ADT, this meant that a kill was adjudicated by reaction time in close quarters, which gives a significant inherent advantage to the AI.
 
The art of dogfighting can be distilled into four steps: Observe your opponent for both obvious and subtle cues, Orient the bandit’s maneuvering in relation to your jet, Decide what to do, and then Act to defeat him. The rapidly changing dynamics of fighting a thinking enemy, particularly in a highly maneuverable fighter, define the OODA Loop: Observe, Orient, Decide and Act.
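The four-step loop can be sketched as a minimal control cycle; the thresholds, tactical picture, and maneuvers below are illustrative stand-ins, not real air-combat logic:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    range_nm: float      # distance to the bandit, nautical miles
    aspect_deg: float    # bandit nose position relative to our jet

def observe(sensor_feed):
    # Observe: pull the latest cues from whatever sensors can see the bandit.
    return Observation(**sensor_feed)

def orient(obs):
    # Orient: interpret raw cues into a tactical picture (toy rule).
    return "defensive" if obs.aspect_deg < 30 and obs.range_nm < 2 else "neutral"

def decide(picture):
    # Decide: pick a maneuver for the assessed situation.
    return {"defensive": "break turn", "neutral": "maintain"}[picture]

def act(maneuver):
    # Act: command the maneuver (here, just report it).
    return maneuver

def ooda_step(sensor_feed):
    """One pass through Observe -> Orient -> Decide -> Act."""
    return act(decide(orient(observe(sensor_feed))))
```

The point of the article’s argument is that the first two functions are the hard part: an AI can run `decide` and `act` readily, but something must supply `observe` and `orient` with accurate cues.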

The software loaded into computers we now refer to as AI can readily handle the last two steps, but what feeds that “intelligent” system with the ever-changing cues for the “observe” and “orient” steps? The only operational fighter with built-in technology that can “see” other aircraft is the F-35, and while it has sensors that can optically capture an opposing platform in any quadrant, it sees with an acuity of only 20/30.

Even then, it cannot discern the type of bandit it is fighting—its configuration, aspect, heading-crossing angle, or the “rate” at which its nose is tracking—unless the bandit is in front of the F-35 where its radar can track the target. Even then, it gets only some of what it needs.    

Lacking the ability to independently observe and orient means AI cannot feed itself the inputs required for the programmed elements to kick in and defeat an adversary in the decide and act steps. So, how did the AI simulation gain those details in this fight with Banger?
 
That information came not from some visual interpretation of the other aircraft on the simulator’s screen, but from “perfect information” supplied by the simulator. The exact range, altitude, airspeed and nose position of the manned fighter is calculated and immediately fed into the AI simulation. That level of clarity can never be gained in a dynamic, neutral fight — but with it, even humans are hard to beat.
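The gap between the simulator’s “perfect information” and realistic sensing can be shown in a few lines; the state fields and the ±10% error band are assumptions for illustration:

```python
import random

# The exact state the simulator tracks for the manned fighter (illustrative values).
truth = {"range_nm": 1.2, "altitude_ft": 14000.0, "airspeed_kt": 420.0, "nose_deg": 35.0}

def perfect_feed(state):
    # ADT-style simulation: the agent receives exact ground truth every tick.
    return dict(state)

def sensor_feed(state, noise=0.1, seed=1):
    # Real-world stand-in: each cue arrives with up to +/-10% estimation error
    # (a real sensor would also add delay, dropouts, and misidentification).
    rng = random.Random(seed)
    return {k: v * (1 + rng.uniform(-noise, noise)) for k, v in state.items()}
```

An agent trained only on `perfect_feed` has never had to solve the observe-and-orient problem that dominates real dogfighting.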

Unless you have super vision, it’s really hard to keep track of two aircraft that are maneuvering to kill you, so if everything else is equal in a two versus one (2-V-1) scenario, the “1” always gets killed. On that particular day, we had problems with one of the radios we would normally use, which forced the adversaries to talk “amongst themselves” on the same radio the pilot was using. To successfully prosecute a 2-V-1 attack, coordination and communication are critical. Every time one fighter would take his nose off of me to get more airspeed and maneuvering space, that pilot would tell the other pilot so he could pitch back in to attack.

Cues received in real-world combat sorties pale in comparison to the perfect information the DARPA simulation fed to Banger’s AI opponent in their virtual fights. There was no need for the machine to “look outside” and find Banger, then try to assess how much airspeed he had, when his afterburner was cooking, when he went to idle with his speed brakes deployed, or how much “G” he was pulling. The simulation fed all that info to the AI fighter in real time.

While those might seem like petty elements, the observe and orient steps are the heart and soul of dogfighting – the two most critical elements in the OODA sequence – and there is no system in the world that can touch a human’s ability to capture and process those cues.
 
These artificialities aside, DARPA appropriately chose a technically challenging but simplified tactical problem for this cutting-edge experimentation in air warfare. What then should we learn from the experiment?
 
High-profile DARPA experiments, like the ADT, are critical catalysts to stimulating technology and industrial ecosystems, while also pushing the boundaries of the state-of-the-art, embracing competition and learning, and inspiring the wider technology community.
 
Services would benefit from more consistently embracing this type of approach to promote innovation and progress, along with more acquisition programmatic “on ramps,” so mature technologies can be included in critical weapon system upgrades.
 
Ideas of individual combat loom large in the military aviation community’s ethos of “aerial knights” dueling in the sky, using quick reaction maneuvers in close proximity to win. However, 1 v. 1 aerial gun-based dogfighting, or even short-range missiles, are increasingly a relic of air-to-air combat from the age before sensors and missiles grew in range, sophistication and lethality.
 
For several decades now, technology advancements have already enabled air warfare to evolve from dogfighting to beyond visual range missile engagements. Future air warfare scenarios are unlikely to include 1 v. 1 dogfighting between aircraft using guns, due to advances in sensor ranges and fusion, along with network-enabled weapons and cooperative teaming, that are already fielded on 5th generation fighters.
 
Weapons and deployable platforms using increasingly sophisticated combinations of these technologies will be able to more easily kill a target or team together to achieve more advantageous positioning for a successful kill.
 
Further efforts to incorporate uncertainty, such as fuzzy logic controllers, will make simulated combat conditions and performance with AI more realistic and enable effective transition to real-world conditions.
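A minimal fuzzy controller of the kind alluded to here can be written with triangular membership functions; the range breakpoints and the throttle rule base below are invented for illustration:

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at 1 when x == b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_throttle(range_nm):
    """Toy rule base: CLOSE -> idle (0.2), MEDIUM -> cruise (0.6), FAR -> burner (1.0).
    Output is the weighted average of rule outputs (centroid-style defuzzification)."""
    close = tri(range_nm, -1, 0, 2)
    medium = tri(range_nm, 1, 3, 5)
    far = tri(range_nm, 4, 8, 12)
    num = close * 0.2 + medium * 0.6 + far * 1.0
    den = close + medium + far
    return num / den if den else 0.6
```

Because membership grades blend smoothly between sets, the controller degrades gracefully under noisy or borderline inputs, which is exactly the uncertainty-handling property the text credits to fuzzy logic.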
 
Although excited speculation continues about how AI will replace a pilot in the cockpit and thus enable an unmanned fighter aircraft to pull many more g’s than a human, this advantage already exists today: a human-deployed missile can pull many more g’s than a fighter aircraft.
 
Such considerations aside, the types of AI approaches that ADT demonstrated are also a valuable way for DoD to define for industry how to further enhance a missile’s ability to dogfight with a target. Using AI’s strength to assess aircraft maneuvers and transitions will allow pilots to have a higher probability of kill from a wider range of conditions, including at the boundaries of a weapons engagement zone. AI's maturation has been, and will likely continue to be, much more evolutionary than revolutionary.
 
As aviation technology advanced, complex mechanical systems were replaced by analog switchboards in the cockpit, requiring an aircraft second-seater. Subsequently, digitized cockpits with sensor readouts, autopilot, and automated navigation followed. Current systems even include automated take-off and landing, integrated displays to aid with mission planning and weapons selection, and coordination across flight groups.
 
Correspondingly, the preponderance of pilot workload for advanced aircraft continues to shift from how best to fly the plane to how best to use the aircraft’s own sensors and weapons, in conjunction with other offboard assets, in support of the Joint Force.
 
This crucially and fundamentally shifts pilots' emphasis from manning the equipment (the fighter) to equipping the pilot to perform a wider range of functions in more lethal and effective ways.
 
The ultimate goal of using AI in warfare is to provide decisive advantage in an engagement to achieve victory. The ADT and broader DARPA ACE program is a crucial catalyst to spawn advancements for “pilot assist” technologies, just like driver-assist technologies continue to lay a crucial foundation for future driverless cars.
 
Today, AI can become the virtual “second seater,” able to navigate and perform complex flying functions, while the human pilot retains more of a Weapon Systems Officer (WSO) focused role. The AI virtual second seater function will enable algorithms to learn from the human operator, further building trust, sophistication, and capability in the near-term. To further build trust with human pilots, ensure algorithm explainability, and enhance AI learning and capability, the Services should establish a version of mission debrief for AI in both virtual and real-world employment.
 
The use of AI-based pilot assist technologies will enable a human pilot’s role to shift to a weapons version of air traffic control: a combination of local air battle manager and platoon leader in the sky. As a local aerial mission commander, the human pilot’s focus can be on directing a pack of other assets – either as platforms or as network-enabled weapons with smart control, such as DARPA’s Gremlins or the Air Force Golden Horde programs.
 
As a local air warfare conductor, the pilot can be close to the action, but from a safe vantage point, allowing for unmanned platforms and weapons to synchronize and improvise. This will necessitate a radical shift in pilot training to focus on cultivating mission command at a higher echelon level, but at a much earlier stage in a human pilot’s career.
 
Organizationally, this will also require a reformulation of the concept of the Air Force “Flights” echelon to one in which a human oversees and directs each Flight but delegates AI control over each of the corresponding supporting "Elements." This approach parallels and can leverage ongoing developments in other sectors, such as industrial robotic manufacturing, autonomous shipping flotillas, and autonomous trucking and taxi fleets.
 
In each of these cases, automation and AI enables lower-level tasking to be performed, while abstraction enables a human to oversee the coordination and conduct of multiple assets in a feasible and constructive way.
 
The Services continue to have revolutionary visions of blended flight groups of manned fighters and loyal wingmen, or fully autonomous multi-role fighters. The challenges to achieving these visions are not insurmountable, but fully realizing them will not happen as quickly as desired.
 
However, a significant evolution of air warfare is already underway. Over the last decade, a rapid convergence of key technologies beyond AI and machine learning has been occurring. These technologies include fusion engines, miniaturized active electronically scanned arrays (AESA), onboard processing, advanced displays and interfaces, swarming coordination, sophisticated multivariate recommendations, palletized munitions, controllers for many vs. many, abstraction layers, and systems to enable conversion of legacy platforms to unmanned aircraft (e.g., the F-16 to a QF-16).
 
The combination of these technologies will soon allow a pilot to lead a force of previously obsolete converted unmanned aircraft into battle, loaded with network-enabled weapons, and reinforced by palletized munitions deployed from rearward air bastions of relative safety.
 
Select unmanned squadrons, or swarms of munitions, will be able to engage in specific deconflicted sectors of operation while operating at appropriate levels of balanced trust and risk. In parallel to these technologies, a more pragmatic conceptualization is required for how AI can be best used.
 
AI is not a pilot replacement but can be a virtual second-seater, enabling rapid and continuous evolution of human-machine cooperation for decisive advantage. Instead of considering options for either human pilots or AI-based replacements, DARPA and the Services, through a symbiosis of commercial and military partnership, should focus on advancing virtual second-seaters for air warfare victories into the next decade and beyond.

1. Collapse Human-Machine decision/action loop operates across time/space

2. Transform Dated Platforms into auto, manoeuvre, attritable

3. Swarm Potential coordinates operation in contested airspace

4. Individual Systems/Platforms capability overmatched by networks

5. Ability to discriminate action decisions to counter adversary systems

6. Fuses/Integrates data to identify/simulate web of probabilities/confidence

7. Extraction of topical hierarchies control systems with reinforcement

8. Enables Simulation-based prediction and search

9. Interaction of intel/autonomous blocks makes upgrades possible
​
10. Must fast-track AI applications into existing and new platforms
 
0 Comments

Top 10 Artificial Intelligence Impacts on Military Operations Present Opportunities/Challenges

10/1/2020

0 Comments

 
​New AI technologies not only have potential benefits, but also shortcomings and risks that need to be assessed and mitigated as necessary. The very nature of AI – a machine that determines the best action to take and then pursues it – could make it hard to predict its behaviour.
 
Specific character traits of narrow AI systems mean they are trained for particular tasks, whether this is playing games or interpreting images. In warfare, however, the environment shifts rapidly due to the ‘fog and friction of war’.
 
AI systems have to work in a context that is highly unstructured and unpredictable, and with opponents that deliberately try to disrupt or deceive them. If the setting for the application of a given AI system changes, the AI system may be unable to adapt, increasing the risk of unreliable behaviour.
 
In this context, militaries need to operate on the basis of reliability and trust. So, if human operators, whether in a static headquarters or battlefield command post, are not aware of what the AI will do in a given situation, it could complicate planning, make operations more difficult, and make accidents more likely.
 
The increasing array of capabilities of AI systems will not be limited by what can be done, but by what actors trust their machines to do. The more capable our AI systems are, the greater their ability to conduct local processing and respond to more abstract, higher level commands.
 
The more we trust the AI, the lower the level of digital connectivity demanded to maintain system control. Within this context it will be critical to develop the appropriate standards, robust assurance, certification regimes, and the effective mechanisms to demonstrate meaningful human accountability.
 
Technology for big data and AI is currently developing at a tremendous pace, and it has major potential impacts for strategic, operational and tactical military decision-making processes. As such, operational benefits may be vast and diverse for both the US and its adversaries. However, the full potential of AI-enhanced technology cannot be fully predicted, and time is required for capabilities to mature.
 
The purpose of these reports is to provide a basic understanding of AI, its current development, and the realistic progress that can be expected for military applications, with examples related to air power, cyber, C2, training, and human-machine teaming.

The concept of what defines AI has changed over time. In essence, there has always been the view that AI is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans.

In common language, the term AI is applied when a machine mimics cognitive functions attributed to human minds, such as learning and problem-solving. There are many different AI methods used by industry and DoD, with machine learning and neural networks currently at the forefront.

As computers and advanced algorithms become increasingly capable, tasks originally considered as requiring AI are often removed from the list since the involved computer programs are not showing intelligence, but working off a predetermined and limited set of responses to a predetermined and finite set of inputs. They are not ‘learning’.
 
Capabilities recently classified as AI include successfully understanding human speech, competing at the highest level in strategic games, autonomous systems, intelligent routing in content delivery networks, and military simulations. Industry generally acknowledges significant advances in image recognition as cutting-edge AI technology.

While such known and ‘applied AI’ systems are often quite powerful, it should be noted they are usually highly specialized and rigid. They use software tools limited to learning, reasoning, and problem-solving within a specific context, and are not able to adapt dynamically to novel situations.
 
This leads to the term ‘weak AI’ or ‘narrow AI’. Weak AI, in contrast to ‘strong AI’, does not attempt to perform the full range of human cognitive abilities. By contrast, strong AI or ‘general AI’ is the intelligence of a machine that could successfully perform any intellectual task that a human being can.
 
For strong AI, there is no essential difference between the software exactly emulating the actions of the human brain and the actions of a human being, including its power of understanding and even its consciousness.
 
Defense air power is capitalizing on such capabilities with Mission Training through Distributed Simulation (MTDS), which has demonstrated reliable connectivity and beneficial training opportunities between multiple types of aircrew simulators and training centres.
 
In addition, for over a decade the Command & Control – Simulation Interoperation (C2SIM) data exchange standard has offered existing command, control, communications, computers and intelligence (C4I) systems the potential to switch between interacting with real units and systems, including robotics and autonomous systems, and simulated forces and systems.
 
Unmanned vehicle research has allowed state-of-the-art remote operations to progress significantly during recent decades, for both civil and military applications. The advance of AI, however, is now offering unprecedented opportunities to go beyond remote control and build autonomous systems demanding far less direct control by human operators. Examples of autonomous systems development include self-driving cars, trains, and delivery systems in the civil traffic and transport sector.

In the same way, the military is developing systems to conduct routine vehicle operations autonomously. For example, the US Navy’s X-47B program developed an Unmanned Combat Air Vehicle (UCAV) that completed a significant number of aircraft carrier catapult launches, arrestments, and touch-and-go landings with only human supervision. Recently, the X-47B successfully conducted the world’s first fully autonomous aerial refuelling.

Military commanders need to ask themselves how much trust they want to place in what the AI-enabled autonomous system promises to be able to do. How much better is it with regard to persistence, precision, safety, and reliability, as compared to the remote human operator? When it comes to kinetic targeting on the ground, the ‘human-in-the-loop’ being able to intervene at any time probably should remain a requirement.
Conversely, in the field of air-to-air combat, where millisecond-long timeframes for critical decisions inhibit remote UCAV operations, there has been a recent and promising leap forward.
 
ALPHA, an Air Force AI agent, is built on high-performing and efficient ‘Genetic Fuzzy Trees’. During in-flight simulator tests it has consistently beaten an experienced combat pilot in a variety of air-to-air combat scenarios, something previous AI-supported combat simulators never achieved.
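While ALPHA’s actual Genetic Fuzzy Tree implementation is far more elaborate, the “genetic” half of the idea, evolving fuzzy-set parameters against a fitness function, can be sketched in miniature; the objective and all GA settings below are toy assumptions:

```python
import random

def evolve(fitness, lo, hi, pop_size=30, generations=40, seed=2):
    """Toy genetic search over a single fuzzy-set parameter: keep the fittest
    third of the population, breed children by averaging two elite parents,
    and perturb with small Gaussian mutation."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 3]                       # selection
        children = [
            (rng.choice(elite) + rng.choice(elite)) / 2    # crossover
            + rng.gauss(0, 0.1)                            # mutation
            for _ in range(pop_size - len(elite))
        ]
        pop = elite + children
    return max(pop, key=fitness)

# Toy objective: pretend the best membership-function peak sits at 3.0.
best = evolve(lambda p: -(p - 3.0) ** 2, 0.0, 10.0)
```

A genuine Genetic Fuzzy Tree evolves entire cascades of fuzzy rule bases rather than one scalar, but the selection/crossover/mutation cycle is the same mechanism in both cases.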
 
While currently a simulation tool, further development of ALPHA is aimed towards increasing physical autonomous capabilities. For example, this may allow mixed combat teams of manned and unmanned fighter airframes to operate in highly contested scenarios with Human-Machine Teaming.
 
A middle ground between fully autonomous physical systems and military operations with human-controlled vehicles is the manned-unmanned teaming (MUM-T) concept, which leaders deem a critical capability for future military operations in all domains. Programs are currently testing and implementing diverse configurations to improve pilots’ safety, situational awareness, decision-making, and mission effectiveness in military aviation.
 
The Air Force has been working on the ‘Loyal Wingman’ model, where a manned command aircraft pairs with an unmanned off-board aircraft serving as a wingman or scout. In a recent live demonstration, a modified unmanned F-16 was paired with a manned F-16 in formation flight.
 
The pilotless F-16 broke off from the formation, attacked simulated targets on the ground, modified its flight pattern in response to mock threats and other changing environmental conditions, and re-entered formation with the manned aircraft.  USAF planning foresees future applications pairing a manned F-35 Joint Strike Fighter with such an unmanned wingman.
 
In the above test scenario, however, the unmanned F-16 conducted only semi-autonomous operations based on a set of predetermined parameters, rather than doing much thinking for itself. The next technology waypoint, with a more demanding AI requirement, is 'Flocking'. This is distinct from the 'Loyal Wingman' concept in that a discernible number of unmanned aircraft in a flock (typically half a dozen to two dozen) execute more abstract commander's intent, while the command aircraft no longer exercises direct control over individual aircraft in the flock.
 
'Swarming' exceeds the complexity of flocking: an operator cannot know the position or individual actions of any discrete swarm element and must command the swarm in the aggregate. In turn, the swarm elements complete the bulk of the combat work.
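
The aggregate-command idea can be illustrated with a toy sketch (all parameters and behaviours here are illustrative assumptions, not any fielded control law): the operator issues only a single goal point, and each swarm element derives its own motion from that goal plus local collision avoidance.

```python
import math
import random

def step_swarm(positions, goal, speed=0.1, separation=0.5):
    """One update step: each element moves toward the commanded goal,
    nudged apart from close neighbours. The operator only sets `goal`;
    no element is steered individually."""
    new_positions = []
    for i, (x, y) in enumerate(positions):
        # attraction toward the aggregate goal
        dx, dy = goal[0] - x, goal[1] - y
        dist = math.hypot(dx, dy) or 1e-9
        vx, vy = speed * dx / dist, speed * dy / dist
        # short-range repulsion from neighbours (collision avoidance)
        for j, (ox, oy) in enumerate(positions):
            if i == j:
                continue
            d = math.hypot(x - ox, y - oy)
            if 0 < d < separation:
                vx += 0.05 * (x - ox) / d
                vy += 0.05 * (y - oy) / d
        new_positions.append((x + vx, y + vy))
    return new_positions

random.seed(0)
swarm = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(20)]
goal = (5.0, 5.0)          # the only command the operator issues
for _ in range(200):
    swarm = step_swarm(swarm, goal)
```

Because the pairwise repulsion terms are equal and opposite, they spread the elements out without dragging the swarm's centroid away from the commanded goal.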
 
DoD has demonstrated a swarm of 100 autonomous micro-drones ejected from a fighter aircraft. The swarm successfully showed collective decision-making, adaptive formation flying, and self-healing abilities. While not primarily an offensive tool, such drone swarms have a multitude of uses, including reconnaissance and surveillance, locating and pursuing targets, and conducting electronic warfare measures. Furthermore, a swarm could act as expendable decoys, spoofing enemy air defenses by pretending to be a much larger target.
 
AI has demonstrated the ability to provide support for C2 and decision-making. Military headquarters have largely moved from paper-based to electronic workflows. This, in turn, improves information awareness but also adds data volume that the staff must manage. Future intelligence, surveillance, target acquisition and reconnaissance systems will generate even larger amounts of near-real-time data that will be virtually impossible to process without automated support.

At the same time, increasingly advanced, network-enabled, joint, and multi-domain capabilities will emerge and make these tools available for use in commanders' own operations. To effectively orchestrate actions in such environments, commanders need situational understanding and decision support on possible courses of action (COAs), their effects, and their consequences.

Improved data management and interoperability, data fusion, automated analysis support, and visualization technologies will all be essential to achieving manageable cognitive loads and enhanced decision-making. These critical capabilities are not only for commanders and headquarters staffs, but also for platform operators, dismounted combatants and support staff.

Aside from traditional algorithms, simulation and AI are envisaged as tools that can enhance decision-making. To better define the task, some of these efforts started by breaking down the Observe-Orient-Decide-Act (OODA) loop, an abstract model that generically embraces different types of military decision-making processes, and then assigned required future AI roles and functions to each of the OODA steps.
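
The decomposition above can be sketched as four stages passing a situation picture forward, with an AI function slotted into each step (the data, rules, and names here are hypothetical placeholders, not any real system):

```python
def observe(sensor_feed):
    """Observe: collect raw data from sensors."""
    return {"contacts": sensor_feed}

def orient(observation, context):
    """Orient: fuse observations with context into a situation picture."""
    hostile = [c for c in observation["contacts"] if c in context["known_hostile"]]
    return {"hostile": hostile}

def decide(situation):
    """Decide: pick a course of action from the situation picture."""
    return "engage" if situation["hostile"] else "continue_patrol"

def act(decision, log):
    """Act: execute the chosen course of action."""
    log.append(decision)
    return log

# one pass through the loop with hypothetical track data
context = {"known_hostile": {"track_7"}}
log = act(decide(orient(observe(["track_3", "track_7"]), context)), [])
```

Each stage is a natural seam for automation: big-data analytics in orient, COA recommendation in decide, and so on.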
 
Situational understanding/orientation can make use of big data analytics and algorithms for data processing, followed by data presentation for timely abstraction and reasoning: a condensed, unified view digestible by humans, yet rich enough to provide the required level of detail. This should include graphical displays of the situation, resources (timelines, capabilities, and the relations and dependencies of activities), and context (point of action and effects).
 
Plan generation/decision support presents a timely, condensed view of the situation, with probable adversary COAs and recommended own COAs, including advice on potential consequences. To this end, it must be possible to assess and validate the reliability of the AI to ensure predictable and explainable outcomes, allowing the human to properly trust the system.
 
As AI gets more advanced and/or time pressure increases, the human may only be requested to approve a pre-programmed action, or systems will take fully autonomous decisions. Requirements for such AI must be stringent, not only because unwanted, erroneous decisions should be prevented, but also because decision-makers will still be responsible for the actions the system takes.
 
 
What are the AI challenges? Let’s take a look!
 
1. Data Collection 

To implement real-world scenarios, you need to collect data through web scraping or by coordinating with domain experts, then structure the data and store it in a database. Once the data is collected, you need to validate that the quantity is sufficient for the use case, select a learning algorithm, and train the model using some of the acquired data.
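
A sufficiency check of the kind mentioned above might look like this sketch (the thresholds are rule-of-thumb assumptions for illustration, not established standards):

```python
def is_sufficient(n_samples, n_features, n_classes,
                  samples_per_feature=10, samples_per_class=30):
    """Rough data-quantity check before committing to training:
    require a minimum number of samples per feature and per class.
    Thresholds are illustrative assumptions."""
    return (n_samples >= samples_per_feature * n_features
            and n_samples >= samples_per_class * n_classes)

# 1,000 samples for 20 features and 5 classes passes; 50 does not
ok = is_sufficient(1000, n_features=20, n_classes=5)
too_small = is_sufficient(50, n_features=20, n_classes=5)
```
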

2. Non-representative Training Data 

The training data should be representative of the new cases the model will make predictions for; that is, the data used for training should cover the cases that have occurred and those that are going to occur. A model trained on a non-representative training set is unlikely to make accurate predictions.
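
One common way to keep a held-out set representative is stratified splitting, so each class appears in the test set in the same proportion as in the full data set. A minimal sketch in plain Python:

```python
import random
from collections import defaultdict

def stratified_split(samples, labels, test_frac=0.2, seed=0):
    """Split data so every class contributes test_frac of its
    members to the test set, preserving class proportions."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for s, y in zip(samples, labels):
        by_class[y].append((s, y))
    train, test = [], []
    for y, items in by_class.items():
        rng.shuffle(items)
        k = int(round(test_frac * len(items)))
        test.extend(items[:k])
        train.extend(items[k:])
    return train, test

# imbalanced toy data: 10 "rare" cases, 90 "common" ones
data = list(range(100))
labels = ["rare"] * 10 + ["common"] * 90
train, test = stratified_split(data, labels)
```

A plain random split could easily leave the rare class out of the test set entirely; the stratified split guarantees it is represented.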

3. Generalized Predictions

Systems that make accurate predictions for generalized cases are good machine learning models: they perform well even on data the model has never seen. Data quality is very important for getting accurate results without false predictions, so preprocessing is needed to filter missing values and to extract and rearrange what the model needs.
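
Generalization is usually measured on held-out data the model never saw during training. A minimal sketch with synthetic data (the data-generating line y = 2x plus noise is an arbitrary illustration):

```python
import random

def fit_line(xs, ys):
    """Closed-form least-squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return lambda x: a * x + b

def mse(model, xs, ys):
    """Mean squared error of a model on a data set."""
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(ys)

random.seed(1)
xs = [random.uniform(0, 1) for _ in range(200)]
ys = [2 * x + random.gauss(0, 0.1) for x in xs]

# hold out the last 50 points; the model never sees them in training
model = fit_line(xs[:150], ys[:150])
baseline_mean = sum(ys[:150]) / 150
baseline = lambda x: baseline_mean          # predict-the-mean baseline
test_err = mse(model, xs[150:], ys[150:])
baseline_err = mse(baseline, xs[150:], ys[150:])
```

Low error on the unseen portion, well below the baseline, is the evidence that the model has generalized rather than memorized.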

4. Poor Quality of Data 

In reality, we don't directly start training the model; analyzing the data is the most important first step. The data we collected might not be ready for training: some samples are abnormal, with outliers or missing values for instance. Options include filling the missing features/values using averages, simply removing the attributes/instances with missing values, or training the model with and without these instances.
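
The averaging option can be sketched as a small imputation helper (a minimal illustration, not a recommendation for any particular dataset):

```python
def impute_with_mean(rows):
    """Replace None entries in each column with that column's mean
    computed over the observed (non-None) values."""
    n_cols = len(rows[0])
    means = []
    for c in range(n_cols):
        observed = [r[c] for r in rows if r[c] is not None]
        means.append(sum(observed) / len(observed))
    return [[means[c] if r[c] is None else r[c] for c in range(n_cols)]
            for r in rows]

# toy table with two missing cells
data = [[1.0, 2.0],
        [3.0, None],
        [None, 6.0]]
clean = impute_with_mean(data)
```

Column means here are 2.0 and 4.0, so both gaps are filled with values consistent with the observed data.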

5. Irrelevant/Unwanted Features 

If the training data contains a large number of irrelevant features and not enough relevant features, the machine learning system will not give the expected results. One of the most important aspects of a successful machine learning project is the selection of good features to train the model, also known as feature selection.
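
One of the simplest forms of feature selection is dropping (near-)constant columns, since a feature that barely varies cannot help discriminate between cases. A minimal variance-threshold sketch:

```python
def select_by_variance(rows, threshold=0.0):
    """Keep only columns whose variance exceeds the threshold;
    a (near-)constant feature carries no information for the model."""
    n = len(rows)
    keep = []
    for c in range(len(rows[0])):
        col = [r[c] for r in rows]
        mean = sum(col) / n
        var = sum((v - mean) ** 2 for v in col) / n
        if var > threshold:
            keep.append(c)
    return [[r[c] for c in keep] for r in rows], keep

# middle column is constant and should be dropped
data = [[1.0, 7.0, 0.0],
        [2.0, 7.0, 1.0],
        [3.0, 7.0, 0.0]]
reduced, kept = select_by_variance(data)
```

Real projects layer more sophisticated criteria (correlation with the target, model-based importance) on top of this kind of filter.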
 
6. Underfitting the Training data 

Underfitting, the opposite of overfitting, generally occurs when the model is too simple to capture the underlying structure of the data. It's like trying to fit into an undersized jumpsuit. It typically happens when we have too little information to construct an exact model, or when we try to fit a linear model to non-linear data.
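
The linear-model-on-non-linear-data case can be demonstrated directly: a straight line fit to quadratic data leaves a large residual error that a matching model family would not (the data set here is a synthetic illustration):

```python
def fit_line(xs, ys):
    """Closed-form least-squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return lambda x: a * x + b

def mse(model, xs, ys):
    """Mean squared error of a model on a data set."""
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(ys)

xs = [x / 10 for x in range(-20, 21)]   # grid from -2.0 to 2.0
ys = [x * x for x in xs]                # clearly non-linear (quadratic) data
line = fit_line(xs, ys)                 # too simple a family: it underfits
true_model = lambda x: x * x            # a family that matches the data
```

On this symmetric data the best line is flat (slope zero), so no amount of extra training data fixes the error; only a richer model family does.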
 
7. Memory networks 

Memory networks, or memory-augmented neural networks, still require large working memory to store data. This type of neural network needs to be hooked up to a memory block that can be both written to and read by the network. Teaching computers to represent language and simulate reasoning over such memories remains a big challenge for deep networks.
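
The core read mechanism in memory-augmented networks is content-based: the network compares a query against stored keys, softmaxes the similarities into attention weights, and returns a weighted sum of the stored values. A toy sketch (dimensions and stored values here are arbitrary illustrations):

```python
import math

def soft_read(query, keys, values):
    """Content-based memory read: softmax over query-key dot products
    gives attention weights; output is the weighted sum of values."""
    sims = [sum(q * k for q, k in zip(query, key)) for key in keys]
    m = max(sims)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in sims]
    z = sum(exps)
    weights = [e / z for e in exps]
    return sum(w * v for w, v in zip(weights, values))

# two memory slots: key -> stored scalar value
keys = [(1.0, 0.0), (0.0, 1.0)]
values = [1.0, 0.0]
recall = soft_read((5.0, 0.0), keys, values)   # query aligned with slot 0
```

Because the read is a smooth weighted sum rather than a hard lookup, it is differentiable and the whole memory access can be trained by gradient descent.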
 
8. Focused Attention
 
Current systems tend to integrate a large set of features all at once. Instead, they should learn to focus on small chunks of the input stimuli, one at a time, and then integrate the results at the end.

9. One-shot learning

While applications of neural networks have improved, we still haven't been able to achieve one-shot learning. So far, traditional gradient-based networks need an enormous amount of data to learn, often in the form of extensive iterative training. Instead, we have to find a way to enable neural networks to learn from just one or two examples.
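
One family of one-shot approaches classifies a query by its distance to a single stored example per class; real one-shot learners (e.g. metric-learning networks) learn the distance function itself, but the classification step can be sketched with a plain Euclidean metric and made-up feature vectors:

```python
import math

def one_shot_classify(query, support):
    """Assign `query` the label of the nearest stored example.
    `support` maps each label to its single example -- the 'one shot'."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(support, key=lambda label: dist(query, support[label]))

# one labelled feature vector per class (hypothetical 2-D features)
support = {"fighter": (0.9, 0.1), "transport": (0.1, 0.9)}
label = one_shot_classify((0.8, 0.2), support)
```

The hard part, left out of this sketch, is learning a feature space in which one example per class is actually enough.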

10. Object Detection

Object detection is still hard for algorithms to get right, because both classification (what the object is) and localization (where it is) in computer vision and machine learning are still lacking.
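
Localization quality is commonly scored with Intersection-over-Union (IoU): a detection counts as correct only if its predicted box overlaps the ground-truth box above some threshold. A minimal sketch:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes given as
    (x1, y1, x2, y2), the standard localization metric in detection."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # intersection rectangle (empty if the boxes do not overlap)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

# two 2x2 boxes overlapping in a 1x1 corner
score = iou((0, 0, 2, 2), (1, 1, 3, 3))
```

A detector must jointly get the class right and push this score above a threshold (0.5 is a common choice), which is why the combined problem remains difficult.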
 