
Top 10 Operational Units Pursuing Virtual Reality Tactical Decision Kit Network Connect Training Simulations

5/20/2020

The Marine Corps is demonstrating how to use the Tactical Decision Kit [TDK], a digital scenario-based tool designed to train and challenge Marines on their tactical decision-making abilities.

The Marine Corps is investing in a suite of virtual and constructive training systems, augmented reality goggles and other emerging technologies to give Marines more repetitions and, in some cases, more authentic experiences during training than the service could provide before.

Among the top priorities as the Marines invest in more technologies, though, is ensuring they can be networked together to allow for cross-community training events – fire support teams talking to artillery units, forward air controllers talking to pilots, ground combat units talking to the logistics teams that support them, and so on.

The Marine Corps had planned to conduct an analysis of alternatives [AoA] for a Live, Virtual, and Constructive Training Environment [LVC-TE] architecture that would network the simulators together, but due to continuing resolutions and other factors the assessment only began last year.

The AoA looks at several back-end options for netting together the technologies that the Marine Corps and the Office of Naval Research [ONR] have been pursuing – including using the joint force’s network or creating a new one. The process defines the training requirements, based around a variety of scenarios at the company and battalion levels up to a Marine Expeditionary Force level, and then defines training effectiveness, cost and risk for each of the LVC-TE backbone options.
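The structure of that trade can be illustrated with a small scoring sketch. This is a hypothetical tabulation in Python: the option names, criteria and weights below are invented assumptions, not the actual AoA inputs.

```python
# Hypothetical sketch of how LVC-TE backbone options might be scored.
# Option names, criteria values, and weights are illustrative only.

OPTIONS = {
    "joint_force_network": {"effectiveness": 7, "cost": 4, "risk": 5},
    "new_usmc_network":    {"effectiveness": 8, "cost": 7, "risk": 6},
}

# Higher effectiveness is better; higher cost and risk count against an option.
WEIGHTS = {"effectiveness": 0.5, "cost": -0.3, "risk": -0.2}

def score(option: dict) -> float:
    return sum(WEIGHTS[c] * option[c] for c in WEIGHTS)

for name, attrs in sorted(OPTIONS.items(), key=lambda kv: -score(kv[1])):
    print(f"{name}: {score(attrs):+.2f}")
```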

“We’ve kind of realized we just can’t train those pockets of Marines. … You really need to be able to connect those different training audiences to work their procedures and do those supporting and supported relationships and do those standardised procedures and get used to working with people, Marines in other communities – as you send your calls for fires, requests for support, and do battle handoffs with them and work all those different things. So that necessitates an integration between a bunch of different training systems that were originally not designed or procured to ever work with each other.”

Other benefits of LVC training are the ability to simulate large formations instead of trying to amass that many people in one place for a live exercise; the ability to practice high-end or sensitive tactics “behind the curtain” where an adversary can’t spy on them; the ability to do things the military couldn’t do in real life outside of a warzone, such as use a jammer on a civilian area; and the ability to practice certain tactics without risking friendly-fire casualties, such as suppressing an enemy with fires and then stopping as soon as friendly forces reach that location.

“We’ll never get away from live training because the natural realism of being out in the real world will never be completely replaced by simulations, because any simulation is something short of reality. So that will always be, at minimum, our graduation exercise. But we found that through the multiple repetitions you can get in simulations and the fact that you can focus those simulations on particular areas that might be your problem points – we look at simulation as a gateway to live.”

One package of LVC systems the Marines are now working with is the Tactical Decision Kit, which grew out of an ONR effort and was adopted by the 2nd Battalion, 6th Marines, who wanted to bring more simulation into their training events.

The kit includes a range of virtual and constructive training systems, as well as supporting systems like drones and GPS trackers that enhance the whole continuum of training.

The suite begins with an Interactive Tactical Decision Game, where a small unit leader could be presented with a situation, see the available resources, and start to map out a plan.

The Augmented Reality [AR] Sandtable then allows the small unit leader and up to three teammates to view three-dimensional terrain with AR goggles and begin to think through the positions of machine guns, for example. In the Virtual Battlespace, which is like a first-person shooter video game, each Marine is represented by an avatar, and the Marines can run through scenarios with as many repetitions as they want. Once the units are ready to move into live training, a force-on-force training system puts a GPS tracker on each Marine and a laser system on their weapons to track who was where during the training scenario, who hit their target, who was shot and more.
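That force-on-force instrumentation amounts to a time-stamped position and engagement log. Here is a minimal sketch, assuming a hypothetical event format; the real system's data format is not described in the source.

```python
# Minimal sketch of force-on-force instrumentation data: each Marine's GPS
# tracker reports position fixes, and each laser "hit" event names a shooter
# and a target. All field names are illustrative.
from collections import defaultdict

position_log = defaultdict(list)   # marine_id -> [(t, lat, lon)]
hit_log = []                       # [(t, shooter_id, target_id)]

def record_fix(t, marine_id, lat, lon):
    position_log[marine_id].append((t, lat, lon))

def record_hit(t, shooter_id, target_id):
    hit_log.append((t, shooter_id, target_id))

def who_was_where(marine_id, t):
    """Latest known position at or before time t, for after-action replay."""
    fixes = [f for f in position_log[marine_id] if f[0] <= t]
    return max(fixes)[1:] if fixes else None
```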

It is clear from the Tactical Decision Kits that simulation can be applied as training increases in unit size, in the complexity of the scenario and in fidelity – from tabletop games to real training in the field. A couple of key technology areas the military and the gaming industry are pursuing are set to make that continuum of training even better.

The Marine Corps is very interested in augmented reality goggles, especially if they can be reduced to the weight and size of the ballistic goggles Marines already wear. Whereas some trainer systems exist in a dome room, where users see terrain around them and can carry out their mission, moving that training outside with AR goggles would provide an even better experience.

Take the Mobile Fire Support Trainer [MFST] for fire support teams, for example: “Without having to deploy artillery units out to the field or have tank hulls to shoot at, they can put on those goggles and they can send a call for fire. It will insert targets, potentially even moving targets – which we typically don’t get, we’re usually just shooting at old rusty tank hulls that are sitting on the ground.

So we can have moving targets. We can integrate those fires with simulated friendly forces that are moving towards an objective – so you can validate that you can turn off your fires at the right time so that you don’t cause friendly casualties – all sorts of interesting things that you can do with this MFST technology.”

Though the MFST uses goggles that are heavier than the Marines’ ballistic goggles, and therefore not quite the technology the service would want to invest in for all Marines’ training, the fire support teams tend to operate from a stationary position, so the technology is good enough to invest in for this one community.

“It’s a nice first step towards providing augmented reality training capability out there, and it’s one of the unique projects that the Marine Corps has that no other service has.” Though the programme is still in the engineering and manufacturing development phase and trying to reduce the weight a bit more, a fielding plan is already in place to bring MFST to schoolhouses first and then to operational units.

Another tech development area the Marines are keeping a close eye on is what the gaming industry is doing to enhance cognitive behaviour representation – or ensuring that all the people and items in the background of the scenario make realistic decisions. As commanders have more tools in the field for situational awareness, that has to be reflected in simulators now too, which means the simulator must be more detailed to reflect this new way commanders can see the battlefield around them.

“Back just 10 or 20 years ago when we were just moving large formations across the battlefield, one icon could maybe represent 100 Marines. But now we’ve got unmanned platforms with video capability on them that could be anywhere on the battlefield, so at any moment the commander wants to be able to say, show me a live video feed of that spot right there. So now you have to be able to simulate not just, here’s an icon representing 100 troops; you have to be able to zoom in and see what those people are doing and whether they are acting realistically, and you’re going to want to be making decisions in real time based on what you see.”

“So that’s a much higher level of troop behavior representation that we need to be able to provide – both the functional capability of having those individual entities make the right decisions within their communities or their units, and also just the distributed processing capability to run all those many decision-making engines without having the whole computer system come crashing down.”
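The scale problem in that quote is easy to see in code: every background entity runs its own small decision routine on every simulation tick. A toy sketch, with invented placeholder behaviors; a real simulation would use far richer models.

```python
# Sketch of the "many decision-making engines" load: each entity decides
# independently each tick. Behaviors here are toy placeholders.
import random

class Entity:
    def __init__(self, name):
        self.name = name
        self.state = "idle"

    def decide(self, threats_nearby: bool):
        # Each entity makes its own local decision each tick.
        if threats_nearby:
            self.state = random.choice(["take_cover", "withdraw", "return_fire"])
        else:
            self.state = "patrol"

def tick(entities, threat_map):
    # Thousands of these calls per tick is the distributed-processing
    # load the quote refers to.
    for e in entities:
        e.decide(threat_map.get(e.name, False))
```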

The Marine Corps Operating Concept highlights the importance of training, and LVC training in particular, but that emphasis doesn’t always translate to sufficient funding. Several communities in the Marine Corps are taking it upon themselves to create integrated LVC training experiences, and their hunger for the capability and success in proving its benefits helps argue for more service support for the technologies and their integration.

At Marine Corps Air Ground Combat Center 29 Palms in California, Marine Aviation Weapons and Tactics Squadron-One [MAWTS-1], the Marine Air Ground Task Force [MAGTF] Training Command and I Marine Expeditionary Force (MEF) decided to pool resources and create a temporary integrated training event. “If it wasn’t important, they wouldn’t be doing it.

The challenge is that there is not an institutional Marine Corps program to support those types of events. So that’s the shortfall the Marines are trying to close by establishing the LVC-TE program, because we want to provide a funded, well-designed standing capability that a unit can easily tap into whenever they want to conduct that type of training.”


Videos show how augmented-reality tablets can allow Marines to “see through” weapons systems blocks and overlay designs and other information onto a real space as technicians move around.

It’s part of a larger plan to make Marine Corps training paperless; the tablets can also contain training videos and other instructions for Marines. In coming years, the concept may expand even further.

“The vision for this whole thing is you can tell it who you are and it hands you an iPad with your work downloaded for the day.” At the end of the day, Marines can return their tablets to the central location, where they will be reset with the next day’s work and information.

“Our goal is to be drawingless with this tablet technology soon. It will be the first drawingless system, and we think the operational gains associated with that are tremendous. Today we are looking at how we go do this, how we do it efficiently, and what the benefits associated with that are.”

Marine leaders have shined a spotlight on live, virtual and constructive [LVC] training. The Marine Corps should use simulators to the greatest extent possible, but the simulators need to cover all the right warfighting areas, the service needs to ensure Marines get enough hours in them, and they need to align with training and readiness goals.

With a new focus on LVC training, the Marine Corps Training and Education Command [TECOM] is in the midst of several efforts to ensure its LVC training capabilities are supporting the right skills and in the right quantities.

Operational planning teams have been ordered to take a comprehensive look at existing capabilities for LVC training, how the force is using them today and where gaps exist between training needs and current capabilities.

The deep dive looks at all the simulators in use, what warfighting functions in the Marine Air-Ground Task Force [MAGTF] they support – manoeuvre, fires, intelligence, logistics and more – and at what levels, from individual to battalion and staff levels. The chart this effort created pays particular attention to the areas highlighted by Marine Corps leaders.

We have discussed how Marines want to see greater incorporation of simulation into training and readiness standards, and greater use of immersive simulation such as the Infantry Immersion Trainers and simulators that would provide realistic scenarios for individual Marines.

We want emphasis on small-unit leader decision-making, incorporating simulation in every way possible so that a simulated scenario is the first one Marines are exposed to before they see something live. Once those priorities are weighed in, TECOM can assess where the capability gaps are – not just whether LVC training is incorporated into all warfighting functions, but whether the training supports the Marines’ standards for full combat readiness.

“For a long while a lot of the simulators out there provided a great capability but weren’t necessarily linked to training and readiness standards, and that’s what a lot of the effort currently underway is looking at.”

To that end, the Marines are launching a massive effort to assess several warfighting areas to ensure the simulators available match the requirements laid out in the training and readiness [T&R] manuals.

“The idea is that we needed to take a comprehensive look at all the T&R manuals – literally every single event in the T&R manuals – against the capabilities of all the simulators that are currently fielded.”

“So we have to match current requirements for the Marines to train against what the simulators currently can do and say, using the subject matter experts, yes or no for each one of those events. And then, if yes, how much time is required in that simulator to train that event to standard. And then we will roll that up at the end of the week, so we are able to give the commander a report out of that training and readiness manual.”

“This is how much simulator time we’re going to require as the numerator, the denominator being how much simulator time we have available. So then we will be able to assess where are we, how good and how much more of that are we going to need.”
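That roll-up is simple arithmetic once the subject matter experts have voted. A minimal sketch with invented event data; the event IDs, hours and availability figure are illustrative assumptions, not real T&R values.

```python
# Required simulator hours (numerator) over available simulator hours
# (denominator), per T&R manual. Event data below is invented.

events = [
    # (event_id, trainable_in_sim, hours_to_train_to_standard)
    ("TANK-2001", True, 4.0),
    ("TANK-2002", False, 0.0),
    ("TANK-2003", True, 6.5),
]

available_hours = 8.0  # hypothetical simulator hours available per period

required = sum(h for _, in_sim, h in events if in_sim)
coverage = available_hours / required if required else float("inf")
print(f"required={required}h available={available_hours}h coverage={coverage:.0%}")
```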

The first simulation assessment working group [SAWG] will tackle tanks. Fires will come next and combine several T&R manuals, including artillery, tactical air control party and air naval gunfire liaison companies [ANGLICO].

“Those will all be combined into one because many of those events in those units are the same. We don’t want them to independently assess and make different assessments of the sims. We want them all to come to consensus so that there’s a Marine standard on employing that sim.”

Notably missing from the lineup is aviation. The aviation community is far more advanced in its simulated training capability than other communities in the MAGTF, so this effort will hopefully help the ground, logistics and other sections of the Marine Corps catch up.

At the end of the trial period, the SAWGs will have made recommendations on each warfighting area regarding whether simulation training is adequate, more is needed or a better simulator is needed. The budget controllers in TECOM will ultimately have to decide how to allocate resources to fill any gaps. In some cases, there may be enough simulators but the Marines will need more contractor support to run the simulators for more hours a day. In other cases, the Marine Corps may need to invest in more simulators.

In TECOM’s capabilities division, an acquisition effort is underway to take the existing simulators – which will be verified as meeting training and readiness requirements – and tie them together to create a more holistic MAGTF training experience. The division is “federating existing simulators … so that we can have a distributed Live Virtual and Constructive Training Environment.”

The simulators themselves already exist, and the standards divisions are ensuring the simulators teach the right lessons; the capabilities division wants to build an architecture to tie them all together.

That enables better collective training – for example, if you have a pilot in his aircraft simulator and a Marine in the virtual battlespace doing close air support, interconnecting the two makes the collective training much better.

“That just adds so many more realistic variables. … If you’re training in a system, that’s not as bad – the aircraft is going to show up on time. But when you’re training with a pilot in another station, it adds some more variables. You’ve got to get your communications up, he’s got to be on time.”
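Under the hood, federating simulators comes down to each one publishing the state of its entities on a shared network so the others can render them. Real federations use standards such as DIS or HLA; the JSON-over-UDP format and endpoint below are purely illustrative.

```python
# Minimal sketch of the federation idea: each simulator broadcasts entity
# states so other simulators can display them. Message format is invented.
import json, socket, time

ADDR = ("255.255.255.255", 30001)  # hypothetical exercise network endpoint

def publish_entity_state(sock, entity_id, lat, lon, alt, heading):
    msg = {"id": entity_id, "lat": lat, "lon": lon,
           "alt": alt, "heading": heading, "t": time.time()}
    sock.sendto(json.dumps(msg).encode(), ADDR)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
publish_entity_state(sock, "pilot-01", 34.23, -116.06, 1500.0, 270.0)
```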

Another key benefit of collective training is that important relationships are formed between Marines training in different locations. Individual training in a simulator only lets a Marine interact with the computer and the officer or contractor running the training event. Collective training puts Marines in contact with other Marines who they might find themselves in a combat zone with down the road.

The next step in creating the LVC Training Environment is the analysis of alternatives, which will generate various packages for leadership to choose from. Not all the simulators will be connected, but those that should be interconnected will be.

“While we’re making systems interoperable and federating systems, another major goal is to make sure we continue to integrate the MAGTF.”

There is real uncertainty whether such things as robotic tanks and high-speed scout helicopters are possible on the required timeline. But if there's one area where a high-speed approach can work, it's training simulations, where Marines can piggyback on the rapid development in commercial gaming.

To train troops for future wars, we want to build the ultimate video game. To get that game ASAP, the Marines are blowing up the usual bureaucracy and borrowing high-speed development techniques from private sector companies.

The service has already held industry days on different aspects of the technology, and combat troops have already tried out some industry offerings. Some systems will be ready when the effort enters service, and a full augmented reality training system – complete with interior maps of buildings around the world and simulated troops – will be ready soon after.

It’s not the typical process. Instead, she said, her Cross Functional Team – so called because it pulls together experts from across the services – is working closely with industry in a tight cycle: “let me see the products you have, let’s give you feedback, let’s continue to develop this thing, over and over.”

The top priority is an augmented reality system to train troops on foot, the Soldier-Squad Immersive Environment. Augmented reality goggles would superimpose virtual obstacles and enemies over the troops’ field of view, so troops can simulate any scenario at their own home base. This is something the Marines never had before, and it could revolutionise infantry training.

Collective trainers are used to train a vehicle crew to operate as a team, with each simulator replicating a single vehicle. But current simulators are mostly products of several decades ago, when each program – Humvees, tanks, helicopters, etc. – contracted for its own custom training systems. The result is a mess of often outdated and incompatible systems. The Marines want a new family of vehicle simulators they can easily network together so tank crews, pilots and more can all practice combined arms tactics in the same digital exercise.

The game engine runs all the specific simulations. Commercial gaming has made dramatic advances from the two-dimensional, cartoon Pac-Man of 40 years ago to cinematic experiences like Call of Duty today, while much of what the Marines operate on is still in the Pac-Man era – so the service wants to take advantage of what that industry has to offer.

All these playing pieces exist on top of the digital game board. Called One World Terrain, it is a global database of real-world terrain for use in training scenarios, replacing dozens of different and often incompatible databases used by current simulators. One World Terrain has been described as “a military-grade Google Earth,” but it’s actually more ambitious than that, because the Marines are not satisfied with top-down satellite views.

Instead, to train for urban combat, the services want to include underground tunnels and the interiors of buildings. Some floor plans will be best-guess approximations generated based on typical building layouts — but there is also work going on to automatically upload the schematics for specific buildings.
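Conceptually, One World Terrain is a single shared store keyed by location, with interiors and tunnels attached to locations where data exists. The sketch below is a hypothetical data model – the actual tiling scheme and fields are not described in the source.

```python
# Hypothetical terrain-store sketch: one global database keyed by location,
# with optional building interiors and tunnels attached to each tile.
from dataclasses import dataclass, field

@dataclass
class TerrainTile:
    elevation_m: float
    floor_plans: dict = field(default_factory=dict)  # building_id -> plan
    tunnels: list = field(default_factory=list)

WORLD = {}  # (lat_idx, lon_idx) -> TerrainTile

def tile_key(lat, lon, res=0.01):
    return (round(lat / res), round(lon / res))

def get_tile(lat, lon):
    # Every simulator queries the same store, instead of keeping its own
    # incompatible terrain database.
    return WORLD.setdefault(tile_key(lat, lon), TerrainTile(elevation_m=0.0))

t = get_tile(34.23, -116.06)
t.floor_plans["bldg-100"] = "best-guess layout"  # approximation, per the text
```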

The goal is to train pilots and other crew members on how to respond to threats by providing “real-time orientation and positioning status to support simulated engagements” during live exercises at the Combat Training Centers – essentially turning this form of combat training into a video game.

“The device will interface through a network, providing its location and orientation on the training battlefield. When the Marine engages the aircraft with it, it will transmit that information to the system, which will also be tracking the aircraft flying in the air.” The device is expected to improve how pilots and associated personnel respond and react to threats, and will provide real-time performance metrics.
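The engagement logic described above can be sketched as simple geometry: score a simulated shot when the tracked aircraft lies close enough to the device’s reported line of sight. The flat 2-D model and the tolerance below are simplifying assumptions for illustration.

```python
# Sketch of a simulated-engagement check: the device reports position and
# pointing direction, the system tracks the aircraft, and an engagement is
# scored if the aircraft is near the device's line of sight.
import math

def bearing_to(dev_xy, acft_xy):
    dx, dy = acft_xy[0] - dev_xy[0], acft_xy[1] - dev_xy[1]
    return math.degrees(math.atan2(dx, dy)) % 360

def engagement_scored(dev_xy, dev_heading_deg, acft_xy, tolerance_deg=5.0):
    # Smallest signed angle between reported heading and true bearing.
    error = abs((bearing_to(dev_xy, acft_xy) - dev_heading_deg + 180) % 360 - 180)
    return error <= tolerance_deg

print(engagement_scored((0, 0), 45.0, (1000, 1000)))  # True: aircraft on bearing 045
```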

Once the prototype device is delivered, Marines will review it and could choose to develop additional units and deploy them across the military branch. “This is a vital project and we are confident that our solution will boost training and improve the ability to combat enemy threats, and we are excited to be able to pioneer next-generation capabilities during this effort.”

1. Built-in Measurement and Grading

One of the great things about developing training tools is that your metrics and grading systems can be coded right into the tools. VR training tools can observe and capture responses and behaviors during the virtual training exercise. This unique feature of VR training frees instructors to provide more subjective feedback and students won’t even feel like they are being tested.
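As a sketch of what “metrics coded right into the tools” can look like, the scenario can log trainee actions as events and grade them against an expected set. The event names and the grading rule here are invented for illustration, not drawn from any fielded system.

```python
# Sketch of built-in measurement: log trainee responses as events and grade
# them against expected actions. All names are illustrative.
import time

class ScenarioGrader:
    def __init__(self, expected_actions):
        self.expected = list(expected_actions)
        self.log = []

    def record(self, action):
        self.log.append((time.time(), action))

    def score(self):
        taken = [a for _, a in self.log]
        correct = sum(1 for a in self.expected if a in taken)
        return correct / len(self.expected)

grader = ScenarioGrader(["identify_threat", "take_cover", "report_contact"])
grader.record("identify_threat")
grader.record("take_cover")
print(f"score: {grader.score():.0%}")  # 67%
```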

2. Collaborative Learning

The increased connectivity provided by the tools is making it easier and easier to collaborate. Your teams probably already work together across the nation and maybe even the world in other aspects of your business. Virtual reality lets you get all of your trainees in the same room, working together, even if they’re located at opposite ends of the world.

3. Implemented Remotely

Maybe it’s not just your trainees that are spread all over the world; maybe your training department is stuck in some remote headquarters building. VR training can allow you to deploy the very best and most current training experiences into remote job sites and offices, from wherever your training team might be.

4. Combined Learning Styles

Virtual reality, because it is a reflection of the real world, can allow you to present trainees with a variety of training approaches. Even though it is a computer simulation, you can introduce textual and visual learning cues into your VR training – satisfying a variety of learning styles.

5. Gamification

Introducing an element of competition and gaming makes education much more attractive. Virtual reality allows trainers to gamify every lesson: from simple actions like “open the door” or “choose an option” to a more complex multiplayer experience. Gaming elements can be logically incorporated into the topic of the lesson.

6. Advanced training and virtual tours

Maintenance professionals have a lot to gain by capitalizing on the benefits of virtual reality. It starts with training. With virtual reality, customer sites come to technicians instead of the other way around. Technicians can immerse themselves in their future work environment to explore the site topology and become familiar with the environment in which they will be working. Technicians can practice the activities they will need to perform, repeat them any number of times, and learn from their mistakes with no risk of customer repercussions or danger. In a virtual world, errors can’t damage equipment. This type of training is particularly useful before technicians are sent to higher-risk or remote sites.

7. Training Benefits 

Virtual reality is transforming many industries, and one of its most practical functions is training. A good example is virtual reality fire training: firefighters can use VR systems to simulate actual scenarios they’re likely to encounter. Virtual reality has benefits for many types of training, firefighting among them.

8. A Safe Training Environment

One of the key advantages of VR for fire training is that it creates simulated emergencies without putting trainees in any real danger. Firefighters can get a sense of being in a very realistic situation and test their abilities without incurring any risk.

9. VR Training Can be Delivered Remotely

Traditionally, firefighter training had to be conducted in a specific location at a specific time. With VR and other computer-based training methods, however, this is no longer the case. As long as trainees have the appropriate equipment, they can undergo training from any location.

10. A Highly Visual Form of Learning

Workers generally learn better in a visual environment as opposed to simply reading a book or listening to a lecture. VR training creates visually stimulating and realistic scenarios that can be more engaging than traditional methods.

Top 10 Air Ground Task Force Battle Simulation Center Provide Virtual Reality Training Tools Action Prep

5/20/2020

The Marine Corps uses a mix of kinetic and virtual training to enhance readiness, and the Battle Simulation Center is one of several virtual training facilities aboard the Combat Center.

The Battle Simulation Center supports the Corps by providing units with various training simulations that assist in individual, small unit and staff level operations. The technology available helps Marines get a realistic feel for their environment as well as communicate with artillery units, aircraft and other Marines.

The Battle Simulation Center trains approximately 15,000 Marines annually from units throughout the Marine Corps, and it will continue to provide Marines the training they need in preparation for their field exercises and ultimately their deployments.

In constructive training the Marines can see what is supposed to be done in certain situations. Once the Marines understand what to do, they move on to virtual training, where they can put their knowledge into action. The simulations allow the Marines to receive live feedback from their instructors; this allows the Marines to make mistakes and be corrected without risk of injury or loss of resources. After the Marines have had a chance to practice and be coached in a safe environment, they can move on to live training.

“There are certain exercises in which we believe the risk of injury is too high, so we practice in the simulations first. When the Marines go on to do the live exercise, the risk is much lower since the Marines know how to react to each situation.”

The BSC provides training for any size unit from individual to regiment, for any warfighting discipline from infantry to logistics, and from all parts of the combat spectrum from full scale war to establishing local governance.

“We break up our training into live, virtual and constructive training. Live training consists of real people using real systems, virtual training is live people using virtual systems, and constructive training is virtual people using virtual systems.”

“The different weapons the Marines train with in the simulation can range from the M9 service pistol to mortars, shotguns and heavy machine guns. The center also has different vehicle simulations where Marines can practice troop movement, dealing with enemy resistance and many other situations where Marines would have to think on their feet.”

A simulation complex includes the large task trainers as well as a small simulation center. All of the vehicle and convoy simulators are housed at Camp Wilson. Camp Wilson offers a wide array of simulation opportunities for visiting units.

The BSC was stood up at the Combat Center in 1996 and originally offered only a couple of training simulations, the MAGTF Tactical Warfare Simulation and the Joint Conflict and Tactical Simulation.

MTWS focused primarily on larger-scale training, meaning the company, battalion and regimental levels, while JCATS was designed to train Marines at the fire team through platoon levels.

The Battle Simulation Center works closely with the MAGTF Integrated Systems Training Center, which focuses on command and control systems training. To date, the BSC offers 10 different training simulators and the MISTC hosts seven training programs.

In addition to the numerous simulators the BSC has to offer, it is also working on integrating simulations with live training exercises. “One of the things we’re looking at is the integration of live forces in the field with virtual and constructive simulation.

If a company is training in the field alone, we can simulate other units on the battlefield that don’t really exist, but are needed for staff planning purposes.” Constructive simulation is currently being used by the BSC and is fully operational.

CPX-2 is a two-part training event that focuses on training battalion staff and is a part of TALONEX 2-18, a pre-deployment training event that coincides with Weapons and Tactics Instructors Course.

Throughout CPX-2, Marines at the Battle Simulation Center utilized multiple simulations in conjunction with other units at Camp Wilson aboard the Combat Center, Marine Corps Base Camp Pendleton, Calif., and Marine Corps Air Station Yuma, Ariz. This is all part of an effort called Marine Air Ground Task Force Tactical Integrated Training Environment.

"The idea behind the MAGTF TITE effort is to create a persistent capability which permits collective training in a distributed and constructive environment in order to enhance integrated training," "During TALONEX 2-18, Marine pilots, Joint Terminal Attack Controllers, the Direct Air Support Center and Fire Support Coordination Center/Fire Direction Center will train in conjunction with battalion staff using distributed simulation."

CPX-2 utilized a constructive simulation called MAGTF Tactical Warfare Simulation, which served as the hub for the training. To run their high-fidelity cockpit trainers and to fly a virtual unmanned aircraft system, the Battle Simulation Center used a virtual simulation called Virtual Battle Space 3.

"Using multiple simulations together does create a lot of challenges and issues, such as making sure that one model that comes up in one simulation will appear the same way in another and making sure that the terrain is the same across all platforms,""We continue to work through these issues to try to refine the simulations and make them more realistic."

Another goal of the MAGTF TITE initiative is to provide more realistic training for Marines. According to the Ground Training Simulation Implementation Plan of June 2017, using simulations allows Marines and units to replicate situations and conditions that are more difficult to enact in certain on-the-ground training environments.

"This training helps to emphasise operational cohesion by providing more realism in an exercise where you're relying on the proficiency of other Marines, as well as the realistic nature of the uncertainty and miscommunication that can occur when it's real individuals participating instead of a role player," "It allows for more development on critical thinking and exposure to non-standard events and increased integration with external factors."

"We are getting the support and flexibility from the Marines who are participating because they understand that there are challenges associated with experimental training exercises. The feedback we get from them helps to shape the way we move forward with setting up future simulation-based exercises. This wouldn't be possible without the support of the Marines and agencies participating."

Virtual Battle Space 1 and 2:

VBS 1/2 are PC-based first-person viewpoints of a fully functional battlefield that focus on smaller-unit operations. VBS2 is currently more advanced and more prevalent than its older counterpart. Depending on the demands of the individual units, VBS can take the form of many different combat scenarios and environments, immersing between one and 100 Marines in a virtual world where small-unit leaders can test their standard operating procedures, as well as conduct rehearsals on the same terrain they will likely be walking in the near future.

MAGTF Tactical Warfare Simulation:

MTWS is a “birds-eye-view” of a battlefield that allows unit commanders to practice command and control functions, and standard operating procedures. The simulation offers real-time engagement and movement, and mission recording for after-action review. Commanders using MTWS can receive orders from their combat operations center for their units and carry out those orders through the simulation.

Forward Observer PC Simulation:

FOPCSim is another PC-based first-person viewpoint, similar to VBS, only focusing on a forward observer calling for artillery fire support. The purpose of the simulator is to hone the individual Marines’ call-for-fire skills on stationary and mobile targets. The program can be used by itself as well as integrated with other simulators, which make up the Combined Arms Network.

Combined Arms Planning Tool:

The CAPT program is designed to test the elements of a commander’s fire support plan. It is able to test a fire support plan and identify potential problems based on warfighting doctrine, which is incorporated into the program.

Virtual Combat Convoy Trainer:

This training simulator consists of four mock High Mobility Multipurpose Wheeled Vehicles with a 360-degree view. VCCT is designed to simulate convoy operations in a combat environment. It can be used alongside other simulations to familiarise Marines with how to use convoys in conjunction with other operations.

Operator Driver Simulator:

ODS is a training program used to teach Marines how to operate HMMWVs, Medium Tactical Vehicle Replacements, known as seven-ton trucks, and, coming soon, Mine Resistant Ambush Protected vehicles. The system can simulate a number of driving conditions in most foreign areas of operation.

HMMWV Egress Assistance Trainer:

The purpose of HEAT is to simulate HMMWVs in rollover conditions. It teaches Marines the proper ways to exit a vehicle that is upside down and assist fellow Marines who were injured in the rollover. Marines are also required to transport injured Marines to safety and secure the simulated rollover site.

MISTC:

MISTC is designed to train MAGTF commanders and battle staffs in the art and science of command and control so they can better organise, deploy, fight and defeat the enemy.

VR and immersive learning simulations can be used to assess whether employees are best suited to a given role or set of roles, as well as to better understand how candidates and managers would behave in real-world scenarios.

In contrast with traditional methods, assessment in VR captures far more comprehensive data that can be analysed. Subjective ratings of confidence, satisfaction and engagement can be obtained so teams can determine if learning has actually occurred and challenged the learner. These can be combined with data from eye gaze, heat maps and more, providing insights into the attentional processes and engagement of the learner. This data can be used to iterate towards an optimal training solution.
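A minimal sketch of that kind of analysis: aggregate gaze samples into a dwell-time heatmap and combine it with subjective ratings. The sample format (timestamp plus a screen-region label) and the engagement rule are assumptions for illustration.

```python
# Sketch of combining subjective ratings with gaze data for assessment.
from collections import Counter

def gaze_heatmap(samples):
    """samples: [(t, region)] -> dwell counts per screen region."""
    return Counter(region for _, region in samples)

def engagement_report(ratings, samples, task_regions):
    heat = gaze_heatmap(samples)
    on_task = sum(heat[r] for r in task_regions)
    focus = on_task / max(sum(heat.values()), 1)
    return {"mean_rating": sum(ratings) / len(ratings), "gaze_on_task": focus}

samples = [(0.0, "gauge"), (0.1, "gauge"), (0.2, "window"), (0.3, "gauge")]
print(engagement_report([4, 5, 4], samples, task_regions={"gauge"}))
```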

Because it improves the efficiency of training activities and reduces costs, virtual reality plays a key role in industries where people spend hundreds of hours every year maintaining equipment.

The construction sector is involved in the digital revolution, using building information modeling technology to generate digital representations of the physical, technical and functional characteristics of buildings based on data.

Technology can be used to collect data such as pressure and temperature readings from sensors to determine the overall state of the equipment. In the construction industry, a maintenance technician can use augmented reality to “see” the interior workings of a building, such as the electrical cables behind a suspended ceiling or the pipes hidden behind a concrete wall.

With augmented reality, information is superimposed on a tablet, a smartphone, glasses, or even directly on equipment using a projector. These systems free technicians from having to consult manuals and, in the case of augmented reality glasses, even free technicians’ hands so they can continue working on tasks while reading the information.

Technicians can use augmented reality to order spare parts and even get help from remote experts by sharing the 3D visualization with them. This is a great way to help improve first-time fix rates.

While using VR training for firefighters is a fairly new concept, it’s already being utilized. The technique can feature a set of VR goggles that let users experience simulated emergencies with a 360-degree view. VR systems can be attached to a computer so vital metrics such as reaction time can be measured.

Construction sites are looking to virtual reality to decrease the number of preventable accidents and bring a new dimension to construction safety training. It has the potential to cover the architectural, construction, and engineering fields and train entire operations in rigorous, real-life, heavy equipment exercises using high-fidelity virtual models of both existing and possible work sites.

The beauty of 3D simulation and virtual reality in construction or architectural projects is the safety of the training process. Using these models introduces an array of benefits you never knew were possible when it comes to worker safety training. Complex situations and processes can be recreated through VR, allowing employees to practice situations in a simulated environment. For example, employees can practice welding without the risk of burn injuries.

This is where a 3D modelling and virtual reality display can come to life and bring the training process into a risk-free, controlled, and safe environment in a realistic, innovative, and productive way. The technical and educational services on offer allow users to get the full experience of real-life projects. 

Using 3D and virtual reality environments as part of your training methodology allows the workforce to experience an entirely new side of training. This type of technology breathes life back into traditional computer based learning and re-awakens the enthusiasm in users who are used to this technology in other circles outside of training. 

By modelling your equipment, possibly down to the last detail, you could distribute a training programme to all your employees that allows them to interact with it, follow best-practice procedures or carry out fault-finding scenarios, all without having to access and possibly damage the real item.

Customer service training, for example, requires teaching employees how to understand, impact and retain customer satisfaction, as well as greetings, body language, appropriate tone of voice and even the best way to deal with customer complaints.

Virtual reality applications can give employees as much mobility and flexibility as they desire by letting them virtually access the work space. Virtual reality technology thereby gives employees autonomy in when, where and how they work.

Virtual reality can also help potential candidates make more informed decisions. In that context, virtual reality technology can be used to show a day in the life of an employee at the employer’s organization and offer a tour of the company offices. Facilitating this can ultimately increase retention rates and decrease employee turnover.

VR experiences are easily repeatable, allowing subjects to be exposed to varying levels of intensity in the experience, so the subject can gradually become accustomed to stronger stimuli. Any stressful situation can be turned into a safe VR experience, such as dealing with an angry customer or putting out a fire.

In addition, the jump from reading a manual and watching others do a task, to actually getting to perform that task well, is bigger and more error prone than the one between a VR experience of practicing that task and doing it for real. VR is now in the phase when enterprises begin to make serious investments and pilot their first mainstream implementations.

All that is to say, if you’re thinking about getting VR into your training program, you’re in good company, and your business case should be relatively easy to assemble. Now, let’s dive into some of the specific benefits of using VR training.

1. Gamification

The best learning results are achieved when you are involved, and when you get a chance to try something, fail and then do it right. Nothing beats personal experience, and virtual reality supports that. Introducing gamification and competition is what can make trainees eager to learn; offering an element of competition and gaming makes training more attractive. VR enables the trainer to gamify even the simplest lessons, from “open the tab” or “close the door” to a more complex multiplayer experience.

2. Improved data processing

In a typical training scenario, we could only use our sight to analyse and interpret data. But what if we could use other senses too? This is exactly what virtual reality offers in data visualization. With data-audio relationships, we could easily determine the location, subject and significance of a specific data point through its direction, volume and type. With feedback gloves gaining popularity, we are not far away from a period where we could actually feel the data.

Data visualization techniques developed over the years have improved our data processing rate, but it is still information from two-dimensional screens. Enter virtual reality, and this is in for a total changeover. By completely immersing you in a stimulating 3D environment, virtual reality engages your brain and helps you make full use of your bandwidth.

3. Reducing Training Budget and Providing Scalability

VR is tied to results. It’s possible to collect metrics from virtual education, showing the improvement in outcomes. Anything learned — whether facts or skills — can be tested, and an organization can easily compare current methods to a virtual learning course. VR modules also provide feedback during the training period, so instructors can iterate.

Savings also take the form of equipment longevity. Heavy equipment doesn’t have to be brought to a special training location, or suffer wear and tear as numerous trainees learn how to operate it. Logistics are reduced too: firefighters don’t have to set buildings on fire to do the repetitious part of training. Instead, after virtual training, they can save the live fire environment for a “final exam” type of situation.

4. Encourage Exploration and Trial & Error

Like making mistakes, learning by trial & error is one of the best ways to retain new skills. Techniques and tricks that we learn by tinkering and then adapt to fit our particular work style are invaluable.  Virtual reality training gives instructors and trainees the flexibility to discover the best ways to make decisions, troubleshoot and solve problems, or simply do their jobs a little bit better. Remember: little improvements in productivity can pay big dividends over time.

VR is already making headway in a range of industries – machine operations, for example. Heavy equipment manufacturers are offering virtual reality simulations to train employees on highly specialized equipment. In addition to enhanced learning, VR training offers savings on machine depreciation and wear and tear.

5.  Removing Time and Travel from the Equation

Learning with virtual reality is an exciting option, and more organisations are making it a reality. Currently, VR field trip opportunities are a main focus for workers: using headsets, workers can explore any potential location or scenario – the options are limitless.

Subject-intensive modules are also gaining interest. Commanders are discovering “time-travel” opportunities as more and more content is produced, like an air-to-air wingman or a battlefield trench, complete with artifacts that would have existed there at the time.

Although their full implications are yet to be explored, VR technologies make training more engaging and productive. They are here to stay, and who knows what benefits they will bring to future trainees. As the technology evolves, so too will the applications of VR. That’s why it’s essential for training pros to keep up with cutting-edge tech and come up with new and innovative uses for VR tools.

6.  Create Scenarios That Otherwise Are Impossible To Create

Augmented and virtual reality technologies have added another dimension to the training field, taking workers to another world and allowing them to gain experience without any risk. This technology also enables organizations to incorporate environments that would be too costly to recreate in the real world. Besides saving costs, training in a virtual environment also increases the level of safety. This method ensures the trainee is clear about what they are being taught and can apply it in the real world.

7. Appropriately Pace Learning

The flexibility and affordability of a simulated virtual reality work environment gives trainees the ability to work through learning objectives at their own pace.  Trainees can try and try again until they feel completely confident with each new skill or show off their depth of knowledge with scenario randomizations that ensure no two training experiences are the same. VR training provides a much more flexible way to train.

8. Focus On A Practical Approach 

For the most part, our existing training system focuses more on books and manuals than on a practical approach. That is why people tend to forget rotely learned concepts so easily. Augmented and virtual reality, on the contrary, make learning a practical experience, and experiences are what stick with trainees and enable them to recall the information for later use. Some concepts that appear dry in theory fail to hold trainees’ attention for extended periods of time.

However, AR and VR can make them more interesting by adding practical application and immersion, helping trainees to appreciate the importance of concepts and ideas instead of merely brushing them off as book knowledge that has no correlation with their work duties or responsibilities.

9. Encourage Trainees To Learn From Their Mistakes

Trainees tend to experience some degree of confusion when they encounter new challenges or unfamiliar situations, usually when those situations contradict what the book teaches. In that case, incorporating alternate reality technologies gives you the power to remove any doubts the trainee might be experiencing. With these technologies, you put your trainees in a situation where they can try out their own ideas and reach their own conclusions. This also ensures that the lesson learned sticks with them and creates an emotional connection.

10. Allow For Self-Guided Exploration

Augmented and virtual reality technologies give you the ability to create a safe environment for trainees to experiment and try things which would otherwise be impossible. Take troops, for example: imagine the pressure they must face when coming across an armed adversary for the first time. A wrong decision at this point can make the situation worse and may even shatter the confidence of the unit. However, by replicating the same situation with the help of virtual reality, troops can be prepared for such dangerous situations beforehand without having to worry about any repercussions.


Top 10 Virtual Reality Simulator Training Feedback Program Support/Enhance Live Real World Exercises

5/20/2020

“New simulator tools are just one way we are bringing training up to modern tech standards, using technology drawn from the world of gaming to support our troops in training.”

Don’t confuse virtual reality with augmented reality. While virtual reality helps technicians prepare for service activities, augmented reality helps technicians during service activities by superimposing relevant information on the real world.

“Augmented” maintenance technicians can access all of the information needed to complete their tasks – plans, data sheets, instruction manuals – by simply scanning a code associated with the equipment they’re working on.

Augmented reality, or AR, differs significantly from virtual reality in that, instead of immersing the user in a technologically imagined world, augmented reality users are viewing their real-world physical environment while objects are superimposed against it. Think “Pokémon GO” on overdrive.

This technology can be used by airborne pilots who can view synthetic images of anything from a moving adversary aircraft to a refueling tanker to surface ships.

Intricate cockpit simulators play a critical role in training today’s pilot, but ground-based simulator machinery can mimic only so many of the stresses produced by actual air-to-air combat.

Flying with AR enables pilots to experience their real operational environments, all while the complex visor tracks not only the aircraft’s maneuvers, but the position and movement of the pilot’s head as well.

Using the latest advances in gaming technology, the new VR training platform aims to improve training for personnel by making it more realistic, intuitive and immersive.

Trainees can use the simulator and use intuitive gesture control designed to match real-world battlefield actions. This is coupled with HD surround sound and highly realistic visuals to bring to life training scenarios in VR.

Trainees will be able to hold a virtual ‘gun’ and crouch and crawl when necessary, just as they would on a real-life exercise. They will be able to practice this virtual exercise as many times as needed before going into the field for real, preparing them more effectively for operational deployments.

Training is changing as the services pursue dynamic live, virtual and mixed-reality training that offers data analysis supported by artificial intelligence and other smart systems. “Being able to take the data from your training to be analyzed for trend analysis and predictive analysis is going to be a game changer.”

All this data is being run through machine learning systems for trend and predictive analysis, producing a readiness score for essential tasks. Imagine soldiers training to fight augmented reality adversaries in virtual battle spaces – showdowns that, like video games, can take place in cities around the world.

The service is collecting data to reconstruct cities, mountainsides, bunkers and more to more accurately represent what soldiers will see in the virtual-reality environment. That poses a challenge, but service members must get an accurate representation of what they may face in combat.

Marines will be exposed to more realistic combat scenarios, “enabling units to enter live training at a much higher level of proficiency.” The goal is to rely less on bulky hardware for simulations and more on software and networks, including virtual reality goggles and iPads for streaming services.

While live training will always remain the standard against which Marine unit readiness is measured, even live training has its limits. It costs a lot of money to ship Marines out to Twentynine Palms or other areas. It costs money to fire munitions, and some of those munitions can’t be fired in most areas.

In the past, live training has been key to preparing personnel for their missions. However, staging a live training event can consume significant physical and fiscal resources, from aircraft, ground equipment and ships to all the personnel involved. Plus, the risk of accidents resulting in damage to equipment, or worse yet, endangering personnel, can increase. 

That’s why the military started utilising virtual training to provide many of the same positive benefits while minimising the negative impacts of live training. These benefits, including personnel safety, readiness improvement and cost reduction, have led the military to take training a step further and utilise live, virtual and constructive training that brings together multiple systems using networking capabilities.
 
Live, virtual and constructive training allows personnel not physically present at a live training event to participate virtually and through constructive simulations that inject battlefield effects and simulated or constructed threats into live systems. 

The Marines want simulators in which commanders can lead virtual troops.

Some of the advanced weapons can’t be demonstrated where just anyone can see them in action, thus revealing our tech to adversaries.

And that is where simulations can help bridge the gap.

But first, there’s a list of things that must come to fruition.

Identify and mitigate risks quickly: To keep up with evolving threats, an intent-based network can serve as both a sensor and enforcer of security policy, leveraging artificial intelligence and machine learning to move at machine speed and counter advanced threats. Networks can also provide the ability to rapidly reconfigure given changes in real-world conditions or across various training scenarios.

Much of that is going to be applications and bandwidth, basically getting better versions of terrains and simulations that are more realistic and can accommodate as much as a division’s worth of players and an equally complex, simulated adversary.

But some items are smaller and more hands-on, like better virtual reality and augmented reality headsets.

Those headsets are key: the Marines want them to work not as they do now, with pounds of cabling in bulky indoor shooting simulators, but light, with long-lasting batteries, so they can be taken into the field and on deployment.

Goggles about twice the weight of existing eye protection, perhaps with the power source somewhere on the body, are likely five to 10 years away based on surveys of the field.

There’s another ongoing need: better drones.

But instead of longer-flying, large-scale drones that can coordinate complex fires and sensors for the operational environment, simulation needs call for smaller drones that can fly lower, giving Marines a street-level, detailed view of the battlespace.

Marines can create their own terrain maps and fight the simulated fight in the areas they’ll really be operating in. 

And those video feeds that are now on every ISR platform in the real world? Simulations need them too, to be realistic. That means game designers have to have human-like activity going on in areas instead of some digital “blob” representing enemies. 

That way, when a commander wants to zoom in on a tactical frame in the game, they’ll be able to do it just like in theater.

Which brings it to one of the more ambitious items beyond terrain and hardware: getting simulations to act more like humans.

As it works now, unit commanders set up their forces, work their mission sets and then the virtual “forces” collide and often a scripted scenario plays out.

Not too realistic.

What’s needed is simulations to act like populations might act in the real world and the same for the enemy, taking advantages, fighting and withdrawing.

But one step further is key: The enemy has to talk back.

When a commander finishes the fight, they should be able to query the virtual enemy and figure out why it did what it did, how it gained a certain advantage.

And it shouldn’t take a programmer to “talk” with the simulation. Units communicate via voice and chat. That’s how simulation users must be able to talk with their simulated civilians, allies and enemies, in plain language.

These pursuits are not happening in a vacuum. They were done at a battalion level with a short prep time, far different than the large-scale Marine Expeditionary Unit or Marine Expeditionary Brigade-sized training that is typical.

That is part of a larger effort to create a “plug-and-play” type of training module that any battalion, and later smaller units, can use at home station or on deployment to conduct complex, coordinated training.

What made that work new was pairing legacy systems with a variety of operating systems between them.

That’s another example of what needs to be fixed.

Marines and other services are, in many cases, using systems that were designed decades apart, creating patchwork methods to get the hardware to work together when it wasn’t built for that type of operation.

The new systems must be open architecture so that new tech, new weapons and new terrain can be added on the fly. But also secure enough to operate across networks and not be spied upon by those who would want a peek at our tactics.

Across the infantry battalions, Marines received new gear last year called Tactical Decision Kits. These allow squad- to company-sized elements to do video game-play for their unit exercises, complete with NFL-style replay of engagements and decisions.

That’s a low-level example of one thing that’s lacking in current training. Right now the main piece of tech for a Marine commander conducting an after action review is a pen and paper pad.

But with ISR drones, body cams and sensors, Marines in the near-term future should be able to monitor an individual Marine’s energy and hydration levels, where they pointed their weapon, when they fired, how many rounds, whether they hit their target, even where their eyes were looking while on patrol.
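A hedged sketch of what an after-action tool fed by those sensors might do with the data; the field names and the hydration threshold are invented for illustration:

```python
# Hedged sketch of rolling wearable-sensor feeds into an AAR summary;
# field names and the hydration threshold are invented for illustration.
from dataclasses import dataclass

@dataclass
class MarineTelemetry:
    name: str
    rounds_fired: int
    hits: int
    hydration_pct: float  # 0-100, from a wearable sensor

def aar_summary(squad: list[MarineTelemetry]) -> None:
    for m in squad:
        accuracy = m.hits / m.rounds_fired if m.rounds_fired else 0.0
        flag = " LOW HYDRATION" if m.hydration_pct < 60 else ""
        print(f"{m.name}: {m.rounds_fired} rds, {accuracy:.0%} hits{flag}")

aar_summary([
    MarineTelemetry("Rifleman 1", rounds_fired=42, hits=18, hydration_pct=55.0),
    MarineTelemetry("Rifleman 2", rounds_fired=30, hits=21, hydration_pct=78.0),
])
```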

And, if on deployment, Marines can’t rely on a cadre of contractors back home to run their hardware. To that end, the Corps began two courses last year, the Simulation Professional Course and the Simulations Specialist Course.

Both give Marines in infantry units experience setting up simulations and running the games for their units. They input training objectives and can understand and put together training for the unit staff or just for their fire team back in the barracks.


The Marine Corps Warfighting Lab just finished a rapid capability assessment of a pair of goggles equipped with augmented reality that allow artillery maintainers to work on three-dimensional digital models of M777 155mm howitzers.
 
"I like it ... you can tell what's missing, what's broken, what's cracked. "It can't do much for me right now, but when I was back at the schoolhouse, this would have helped out a lot to actually see parts in the howitzer. I am a very visual person; looking at a schematic doesn't help me much."

"Within training, it runs the spectrum. It can be maintainer training, it can be infantry training, it can be gun-drill training. I was talking to some snipers earlier. This could be used on a sniper training range, where you have the snipers crawling through the grass trying to get within shot range and not be observed while they are doing so.

"Currently, how are they being observed -- through a telescope. You can augment that telescope, which uses the human eyeball, with the laser range finding that the goggles are capable of, to pick up variances in the terrain in order to better detect those snipers, which will make them better snipers because now they've got to beat technology.

Marine Corps and other services are focused on finding ways to use augmented reality in training.

"It seems unlikely that it's going to go away. We have Marine Corps Systems Command, interested in augmented reality ... and we have all been talking about these systems and what they are capable of."

There’s little question that virtual reality technology is ready for serious use in military training courses. In many cases, VR training has become the best and maybe the only way to accomplish many training objectives. How will you pilot your first VR training experience? 

The benefits of virtual reality and augmented reality for maintenance include letting you train technicians before they get to customer sites while augmented reality helps technicians execute maintenance tasks. In both cases, the initial feedback about the value of these technologies is positive.

The military is starting to pay more attention to VR training. From traditional classroom environments to extreme training situations, VR reduces investment and increases enrichment across a range of industries. When it comes to absorbing, retaining, and applying new skills, VR delivers distinct advantages.

There are many benefits of using VR in training, and it is useful to describe exactly what alternate-reality technologies encompass. The term Virtual Reality means recreating an experience through the use of specialized devices, whereas Augmented Reality is about combining digital information with our own environment. Unlike Virtual Reality, instead of "creating" a new learning experience it uses the existing surroundings.

Trainees won’t miss out on the realistic options, as they can be immersed in the experience as if it’s happening in real time.

VR provides the opportunity to experience situations that you wouldn’t be able to easily construct for training in real life. Ultimately, that can test trainees at a progressively higher level and a more difficult standard, to include navigating tight corners, understanding the use of directional arrows and avoiding hazardous placement.

The multi-player capabilities also mean that trainees won’t miss out on the teamwork aspect of the work, coordinating with other operators. Multi-player support allows simultaneous interaction amongst users, so that trainees can work together as a crew in the virtual plant, preparing them to communicate their issues and needs to other workers or management.

Working at height using construction equipment and tools is particularly risky, and remains one of the biggest causes of disasters in the construction industry. Training in a VR environment removes that element of danger. In the future, we will likely be able to control the entire on-site process from VR.

This form of training also has the benefit of being quicker and easier to track. Rather than finding the time, travel, and resources to locate a free work site, this innovative model brings everything you need in one place. No inconvenience or disruption from the weather; you can do everything from a permanent site. VR training can be run over and over again with no additional incremental cost or trainee risk.

Use of virtual reality training allows an immersive and realistic experience where operators can prove their knowledge under pressure and get a real sense of what to do in high-risk situations. Playing out emergency procedures in real time aids progressive learning for all members of the team, not just new starters. Being told what you have to do in emergencies using traditional techniques is nowhere near as informative as experiencing it through VR.

The realistic nature of the VR technology also means training is not limited; if anything, there are more options and choices through technological advancements, which will only continue to expand. The typical training scenario is surrounded by distractions. The virtual environment shuts out the world, allowing trainees to concentrate on the task at hand. They are more likely to pick up all the relevant information they need without worrying about the people around them.

1. Virtual Reality Training Gives Trainers Better Evaluation Tools

We previously mentioned the challenges of evaluating trainees and even the training itself. These challenges are particularly acute in construction training. In many of the construction safety training programs used today, trainers struggle to evaluate trainees under less-than-ideal circumstances: they are either assessing from a safe but obscured vantage point, or evaluating from the same precarious positions as the trainees, such as extreme heights, narrow spaces and unstable platforms.

In contrast, a training environment constructed with virtual reality tools can put trainers in the best possible position to observe and evaluate their trainees. In addition, the tools can capture data points that help analyze why trainees are succeeding or failing. Another benefit of evaluating training in virtual reality is the simplicity of collecting and analyzing data: no more clipboards and tally sheets.
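A minimal sketch of that clipboard-free capture and analysis, with the event types and fields assumed purely for illustration:

```python
# Sketch of clipboard-free evaluation capture; event types and fields
# are assumptions made up for this illustration.
from collections import Counter

events = [
    {"trainee": "A", "event": "harness_check", "ok": True},
    {"trainee": "A", "event": "edge_approach", "ok": False},
    {"trainee": "B", "event": "edge_approach", "ok": True},
    {"trainee": "B", "event": "harness_check", "ok": False},
]

# Which steps fail most often across the cohort?
failures = Counter(e["event"] for e in events if not e["ok"])
for step, count in failures.most_common():
    print(f"{step}: {count} failure(s)")

# Per-trainee pass rate, the same arithmetic a dashboard would run.
for trainee in sorted({e["trainee"] for e in events}):
    own = [e for e in events if e["trainee"] == trainee]
    rate = sum(e["ok"] for e in own) / len(own)
    print(f"trainee {trainee}: {rate:.0%} pass rate")
```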

2. VR Training Supports Better Retention

Because it’s so realistic, VR training is likely to stay in trainees’ minds and muscle memories for a long time. Whereas people are apt to forget something they heard, read or even watched on a screen, when they have a direct experience, it stays with them longer.

3. Risk Free Training

One of the biggest issues faced by practical training is the large risk introduced by putting trainees in a new and uncontrolled environment, for example around machinery in factory and maintenance training. Virtual reality simulations neutralise this risk while keeping the same training features, by recreating the same environment virtually and putting the trainees inside it without the real-world danger.

4. Realistic Scenarios

The simulations are built around the real-life operations and manipulations needed after training. For example, trainers are building simulations that imitate with high precision the situations maintainers encounter in the field.

5. Can Be Done Remotely

With VR simulations, training no longer needs a trainer on site to guide and instruct trainees. The content can be put on a platform that gathers all the simulations and gives users access to the right content. Platforms also enable trainers to track their trainees’ performance through a dashboard using data analysis and data visualization techniques.

6. Improves retention and recall

The main purpose of training, and what really differentiates it from traditional learning, is building muscle memory. Mere observation of a skill is not sufficient to acquire it, especially when the training involves maintenance skills or high-precision manipulations. VR training solves this problem by enabling trainees to use their hands to manipulate anything inside the simulation, either with controllers or with VR gloves.

7. Simplifies complex problems/situations

Virtual reality simulations are built to simplify the most complex notions and situations that cannot be understood through traditional training. They let trainees explore the fine details of a system without the constraints encountered in real training sessions.

8. Suitable for different learning styles

When training with virtual reality, the trainee has the freedom to test anything in the simulation and build an in-depth understanding according to their own learning style. A platform can provide a machine-learning-based recommendation system that suggests content based on a user’s data and previous simulation results. Training is easier when the experience is innovative and practical, which means a higher level of engagement and understanding.
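Even without a trained model, the core of such a recommender can be sketched as ranking content by each user's weakest prior results; the module names and scores below are invented:

```python
# Minimal score-driven recommender: suggest the modules where a user
# scored lowest. A production system would use a trained model; the
# module names and scores here are invented.
past_scores = {
    "user_1": {"valve_teardown": 0.55, "pump_alignment": 0.90},
    "user_2": {"valve_teardown": 0.85, "pump_alignment": 0.60},
}

def recommend(user: str, k: int = 1) -> list[str]:
    """Return the k modules with the user's lowest prior scores."""
    scores = past_scores.get(user, {})
    return sorted(scores, key=scores.get)[:k]

print(recommend("user_1"))  # ['valve_teardown']
```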

9. Improve Trainee Performance

Because virtual reality simulates the real world, students get hands-on experience and the opportunity for nearly unlimited practice repetitions, allowing for a process that steadily improves performance. Unlike traditional classroom or even video training, VR training ensures that when employees encounter training scenarios in the field, they will have practiced that skill or protocol over and over, to perfection, with any number of situational variables at play.
10. Easy Access
​
Virtual reality gives everyone an equal share of education, so to speak. Some workers face limitations in accessing education for reasons such as distance; VR eliminates those boundaries and enables an easy start with any lesson. Everything is already set up and is easy to move if necessary. VR is more than welcome when it comes to supporting distance education. This way, the worker gets a real and full experience of being in a realistic environment instead of sitting in front of a computer or video chatting.
0 Comments

Top 10 Performance to Plan P2P Initiative Elements Power Digital Hangar to Recover Readiness Levels

5/10/2020

0 Comments

 
​P2P has changed the way everyone approaches their jobs because they know they’re being measured, and their performance is being briefed up the chain of command.

The supported commander is the single person accountable for the readiness of Naval Aviation. P2P aligns all stakeholders, including Naval Supply Systems Command (NAVSUP) supply experts and Naval Air Systems Command (NAVAIR) engineering, logistics and artisan experts, and our Type Wing and squadron Sailors and Marines, so we are all working toward the same goals.

We have set ourselves up for success by including and adopting data analytics to help underpin the decisions we make. Since we expect some efforts to be more fruitful than others, we want to make sure we’re pulling the right levers with the proper focus to get the maximum gain from our investment of time and dollars.

Having a plan, then regularly checking our performance against it is the best way to get us to where we need to be. We have regular drumbeat briefings that look at what we’re doing at our squadrons, in supply and at our Fleet Readiness Centers (FRCs). Leaders and champions of the various enterprise pillars get a chance to brief and say, “Here’s my organization’s plan. Here’s how we’re performing to that plan. Here’s what we’re learning, and here’s where we need your help.”

Due to the inherent complexities of the Navy Enterprise, there is a very large volume of available program execution data. A Readiness and Performance Analysis process centers leadership focus on the most impactful performance drivers to achieve readiness recovery, while highlighting key opportunities to achieve measurable outcomes in the most efficient manner.

Performance measurement and management is one of the most significant developments in the sphere of people management. Within organisations, it has become a key business process. It is viewed as a major lever for achieving the culture change needed to enable organisations to respond to readiness challenges.

Performance measurement and management is a set of processes for developing a shared understanding among employees of what needs to be done to enable an organisation to achieve its strategic goals. These processes include developing appropriate performance measures, and managing and developing people using approaches that are likely to produce continued success.

Performance measurement and management is about the "how" as well as the "what" of performance. It is not about "quick fixes" and "panaceas". It is about developing a culture of confidence and trust among all employees, which reinforces both team and individual achievement. Success stems from demonstrable commitment from the organisation's senior level and from investment - of time and resources - in developing and training employees to deliver good performance.

Most organisations have some sort of process or framework to help measure and manage the performance of their employees. There is a growing awareness of the need to move away from the retrospective top-down annual appraisals to a forward-looking and two-way approach to communicating objectives, and so delivering performance for the business by valuing the contribution of all staff irrespective of status or job title.

The design of any performance measurement system should reflect the basic operating assumptions of the organisation it supports. If the organisation changes and the measurement system doesn't, the latter will be at best ineffective or, more likely, counterproductive. Traditional measurement systems tell an organisation where it stands in its efforts to achieve goals but not how it got there or, even more important, what it should do differently.

The challenge is to raise awareness of, and encourage dialogue about, performance as part of the daily business of an organisation. It is a matter not only of defining, measuring and managing performance, but of planning development activity and developing problem-solving approaches to meet objectives. This approach relies on the ability of all employees to work as a team toward common objectives, with a common sense of ownership and success.

“Parts, People, Planes: Sustainment, Aviation Leaders Visit Readiness Center”

The recurring theme in Naval Aviation has been “We need more people, planes and parts.” In an effort to break that pattern, Naval Aviation implemented the Naval Sustainment System (NSS) to change how it conducts business.

A collaboration between military and industry leaders to remove barriers, accelerate actions and improve processes, NSS encourages the adoption of commercial best practices and empowers commands to make changes. NSS is also a complementary strategy to the Performance to Plan (P2P) initiative, which focuses on training, warfighting demands and aligning priorities of materiel and operational readiness stakeholders.

To evaluate the results of these efforts, Naval Aviation Enterprise (NAE) leaders visit installations and organizations throughout the year. These Boots on the Ground (BoG) events provide leadership with an on-the-ground analysis of P2P and NSS efforts.

They also afford the opportunity to see firsthand how maintenance and supply activities have incorporated better business practices. The goal is to elevate P2P barriers and readiness challenges while showcasing best practices.

Following command overview briefs, the Airborne Command Control and Logistics Wing (ACCLW) team shared results of the E-2D Advanced Hawkeye and C-2 Greyhound type/model/series NSS approach: “We’ve had eight months of month-to-month increases in the number of E-2D MC [mission capable] aircraft—that’s eight straight months of improvement. We’ve had a lot of positive trendlines with NSS, but we are still short by a margin below our MC need number for E-2Ds. Our No. 1 readiness constraint is a lack of key, critical spare parts specific to this aircraft and its weapons system.”

The wing brought this issue to the Reliability Control Board—part of the NSS engineering and maintenance reform pillar—noting that some of the E-2D components are not living up to their predicted life expectancy.

We implemented Broad Unscheduled Rapid Support Training (BURST), which delivers a condensed version of the standardized instruction to Naval Aviation maintenance technicians at their squadrons.

“We just conducted this training last month for the first time. The training is made up of eight hours of classroom instruction followed by 30 hours of practical training, which allows us to teach technical training solutions. They get to perform detailed maintenance actions on a specific platform, such as system components, troubleshooting and operational checks. BURST allows a faster response time because it increases a maintainer’s level of knowledge required to complete their tasks.”

For the E-2D, T-56 and MH-53E T-64 engine lines, leaders observed how FRC reform initiatives were incorporated, including the adoption of proven commercial practices to maximize quality and cost efficiency while minimizing cycle times.

“So far, we really like this [FRC reform] system. It has allowed our detachment to meet this fiscal year’s production goal of 17 [MH-53E T-64] engines, but we are still falling short of the global pool engine requirements. We have the ability to produce more, but we are suffering from key critical component shortages.”

The recurring theme of “lack of parts” shifted to a manpower shortage.

“Currently, there are no impacts to operational readiness or the flight line, but the problem is that we are running in crisis mode because our civilian manning is at 50 percent. It has an effect on the people who are here, and the pace we run around here 24/7 is going to start having impacts, especially in the near future and on the flight line.”

“One of our mitigation strategies is to get that talent, home-grow it and build it up from the bottom. We’re building it from the ground up, so it’s going to take us some time to get that talent skilled up to the level we need them to be, but we know this is going to work.”

Leadership acknowledged the accomplishments and challenges addressed at the BoG.

“This is a continual process, but having the stakeholders and organizations represented here that are critical to the support of the fleet is really important. We picked up on a few new best practices here and were able to visualize the work they are doing. The tone of this BoG was very optimistic despite the action items that we need to address, and that illustrates the Naval Sustainment System at work.”

Air Boss described his first year on the job: “As I look back on my first year, it was a year of discovery and alignment. Now that we are in year two, the actions we have taken are gaining traction and will enable us to rapidly improve and sustain much higher levels of readiness. I look at this year as the year of results.”

Air Boss added, “While I feel good about the state of Naval Aviation and its future, readiness is not where it needs to be for today’s combat environment. Improving readiness remains our main focus across the entire NAE, from leaders, to Sailors and Marines, to our civilian engineers and artisans, to our industry partners. To use a sports analogy, I see myself as the head coach. During the past year, I saw team members doing their jobs well but not necessarily with the understanding of how their work contributes to the overall effort of the team.

We’ve spent a lot of time aligning all of our activities so every person in the NAE understands how what they do on a daily basis contributes toward achieving our goals across every aircraft series we fly. The most pressing focus is building 341 mission-capable, lethal Super Hornets that can fight and win tonight, but that is only one aircraft type across Naval Aviation, and there are goals for every platform. Our metrics are aligned enterprise-wide, and we have clear expectations that we communicate through regular drumbeat briefings, Air Plans, podcasts and Naval Aviation News.

Air Boss added, "I am also listening to the fleet voice-on the flight line and in the aviation depots, as barriers are elevated to leadership so that we can resolve them. This is all part of effective communication. I’ve heard a number of times during Boots-on-the-Ground events that if the Sailors, Marines or artisans just had this one tool or this one piece of gear, their jobs would be easier, and they’d be more effective. 

“The first step to taking action on these challenges is hearing about them and understanding what is required. Clear communication and expectations give us all the same goals and allow us to work as a team. To that end, I want to elaborate on two initiatives underway: Performance to Plan (P2P) and the Naval Sustainment System (NSS).

“In conjunction with P2P, the NSS initiative is leveraging best practices from commercial industry to help us reform aspects of our FRCs, organizational-level maintenance, supply chain, engineering and maintenance organizations, and our governance processes.”

We’ve hired industry leaders to help us with this holistic reform effort that involves people, parts, processes and governance across the NAE. The NSS initiative helps ensure we are aligned and also more transparent and more aware of what every other contributing stakeholder is doing and how each of their roles contributes to readiness.

The NSS is concentrating on getting the Navy Super Hornet fleet healthy again. We are focusing on the Super Hornet fleet first for two reasons: one, they have operated at a higher operational tempo than most other aircraft over the last 17 years; and two, this platform is critical for executing the high-end fight and supporting our troops on the ground.

But it’s not just Super Hornets. Secretary of Defense directed all the services with fighter and strike fighter aircraft—the Air Force, Navy and Marine Corps—to achieve an 80-percent mission-capable rate across their warfighting squadrons. While we had already started on that initiative, this directive acknowledges the importance of every aircraft and the need to apply all learning from this initial work in applying the NSS to every Navy and Marine Corps aircraft.

We have already seen success with NSS. We reformed how select work flows through the depot production lines and have implemented a more visual way to track that flow. These changes mean that at any time, you can walk into the hydraulic servo workshop at FRC Southwest (FRCSW) and see a diagram of their work in progress.

The diagram shows current status of every part and where the shop has encountered an issue and whether it is a supply or engineering issue. This allows managers to easily see and address issues immediately. We swarm that problem, we fix it, and the work continues to flow.
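In data terms, that visual board boils down to something like the following sketch; the statuses and blocker categories are our own illustrative assumptions, not the FRCSW system:

```python
# Sketch of the visual work-in-progress board described above: each part
# carries a status plus a blocker category so a manager can "swarm" the
# right fix. Statuses and categories are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class WorkItem:
    part: str
    status: str                 # e.g. "in_work", "blocked", "done"
    blocker: str | None = None  # "supply" or "engineering" when blocked

board = [
    WorkItem("hyd_servo_031", "in_work"),
    WorkItem("hyd_servo_044", "blocked", blocker="supply"),
    WorkItem("hyd_servo_052", "blocked", blocker="engineering"),
]

# One glance at the board: what is blocked, and on what?
for item in board:
    if item.status == "blocked":
        print(f"{item.part}: blocked on {item.blocker}")
```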

When you visit the landing gear shop at FRCSW, you see the same visual workflow and are able to identify the barrier or impediment there as well. Again, we can swarm, fix and improve.

We’ve already seen a 50-percent reduction in turnaround time in the two shops, and that translates to meeting the needs on the flight lines.

When I visited FRCW, within 15 seconds of entering the production control center I saw a stack of papers in one area of the workflow depiction, and I knew immediately that was where the problem existed. I said, “Okay, we have a problem there. What is it?” That instant awareness helps everyone know where to focus their efforts.

They said, “Here’s our problem. We don’t have enough engineers, and that’s why we have a backlog in engineering.” I said, “Okay, what do you need?” They responded, “Well sir, we need three stress engineers full-time so we can work off this backlog.” NAVAIR quickly responded, and we have three stress engineers in FRCW today making a difference.

It’s exciting to learn that we are currently exceeding our predicted gains. As we learn, we are raising the bar even higher. This gives me great hope as I look at our P2P metrics and reform our practices under the NSS. All of it is contributing to greater readiness across Naval Aviation. We are winning today, and we will win well into the future.

Key elements of P2P are:

1. Creating a shared understanding of organizational metrics, both backward- and forward-looking

2. Understanding the effort needed to achieve readiness success

3. Elevating barriers and matters requiring Echelon I leadership action to resolve

4. Fostering a data-driven decision culture

5. Simplifying and standardizing metrics reporting to spotlight issues and improve problem-solving

6. Clearly articulating performance gaps and identifying barriers to execution

7. Developing potential solutions to achieve an integrated enterprise approach that reduces intuition-based decisions

8. Increasing confidence in data-driven cause-and-effect relationships

9. Improving the cadence of accountability in execution

10. Increasing the velocity of learning across the Navy
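As a small worked example of the plan-versus-actual arithmetic behind elements 1 and 5, with numbers invented for illustration:

```python
# Worked example of performance-to-plan arithmetic: compare actual
# mission-capable (MC) aircraft to the monthly plan. Numbers invented.
plan   = {"Jan": 300, "Feb": 310, "Mar": 320}
actual = {"Jan": 290, "Feb": 312, "Mar": 305}

for month in plan:
    gap = actual[month] - plan[month]
    status = "on plan" if gap >= 0 else f"{-gap} aircraft short"
    print(f"{month}: plan {plan[month]}, actual {actual[month]}: {status}")
```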


0 Comments

Top 50 Performance to Plan P2P Solutions to Digital Hangar Work Orders Execute Time/Cost Effect

5/10/2020

0 Comments

 
​Digital hangar will act as a virtual Job Site containing Digital Twins of aerospace systems that have been gated through rigorous validation and verification processes. A goal of the digital hangar is to research and identify the high-value data that need to be maintained, producing an enduring set of digital objects for the aerospace platforms under investigation.

The Digital Hangar strategy defines digital engineering as an integrated digital approach that uses authoritative sources of system data and models as a continuum across disciplines to support service-life activities.

Digital Hangar continues to be developed and will eventually house high-value design information for digital representations of aerospace systems that will inform decision-making across the services.
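A minimal sketch of what a gated digital-twin record could look like, assuming made-up field names and a simple validation-and-verification gate rather than the actual Digital Hangar schema:

```python
# Sketch of a "gated" digital-twin record: a model only becomes
# authoritative once validation (matches real-world behavior) and
# verification (built to spec) both pass. Fields are assumptions.
from dataclasses import dataclass

@dataclass
class DigitalTwin:
    platform: str
    model_uri: str
    validated: bool = False
    verified: bool = False

    @property
    def authoritative(self) -> bool:
        return self.validated and self.verified

twin = DigitalTwin("F/A-18E", "models/fa18e_v3.step", validated=True)
print(twin.authoritative)  # False until verification also passes
```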

We are developing a competency-based virtual and augmented-reality training capability for the aircraft maintenance and career enlisted aviator communities.

As part of the initiative, Virtual Training Hangars are being built for the classroom and flightline, with 3D Aircraft Mission Design Series environments for every airframe in inventory, robust augmented-reality capabilities and comprehensive instructor tools, with the goal of enabling training anywhere, any time.

“This effort is tied to our priority to transform the way workers learn through the aggressive and cost-effective modernization of education and training. We have to be visionary and agile when it comes to training today and the intent is to apply current and emerging technology to support the warfighter, no matter where they might be, so they can operate within joint, all-domain environments.”

The objective is to work collaboratively across services, to develop and execute a competency-based learning strategy and environment that utilizes current technology such as VR/AR, artificial intelligence, and machine learning, for officer and enlisted career fields.

“We are working with career field managers across multiple communities to steer the development of a viable occupational-competency model that will take us from a time-and-task-based model of the industrial age to a competency-based model for the future, digital age. 

We will work to identify and utilize new technologies to teach to these occupational competencies while also implementing new training methodologies that are learner-centric. These technologies include blended and modularized training, as well as hands-on training.”

“Another major reason for the initiative is we want to eliminate duplicative efforts on the virtual-reality front. We want to streamline the program with a simplified process so all the services can come in with dollars and tap into the expertise and experience of the process that has been built up over time.”

The virtual hangar and flightline include the most common aerospace ground equipment, and the aircraft were scanned in multiple configurations, internal and exterior, for career enlisted aviator training. “These models were used initially as the shell to build instructional, interactive courseware and other training tools.”

Once the framework was established, the idea to partner with other services came through the sharing of the already created virtual hangars and aircraft platform environments, which created an increased demand signal to create other individual and advanced 3D aspects of aircraft to meet service-specific requirements.

“The need to create more individual virtual components for advanced training or just-in-time training was there, so we invited the other major commands to join the initiative. We started with both force development and innovation funding to get the program started.”

The push to include the capability in addition to the virtual environment was driven by a learner-centric, mission-focused, and competency-based approach to force development that is the heart of the force development mission.

“By using the interactive courseware, workers can learn more about individual problems. The augmented-reality environment really adds a dimension to the training that hasn’t existed before.

“We built this program with the career enlisted aviators, first and foremost, in mind, and aircraft maintenance was quickly paired with this effort. The capabilities and processes we are utilizing can be applied to almost any Service Specialty Code. We can apply this process to any career field as long as we know their requirements and have the funding source to create it.”

Demonstrating outstanding “past performance & future potential” at your Work Site is one of your unit’s most valuable assets, contributing to obtaining favourable Reviews. We are looking for innovation, creativity, agility, new efficiency initiatives and cost savings in interactions at your Work Site.

Let’s put it in terms you can really utilise as an equipment digital site administrator. Bottom line: the most capable site operators offer responsible/effective use of Navy resources, becoming poised to eventually win follow-up work task orders. This imperative is key to successfully competing for task orders, so your digital site must be capable, first and foremost, of offering top-notch technical services.

Building up your repair/upgrade site capabilities to Navy standards as quickly as you can in your areas of specialisation needs to be a very high-priority goal for you. Tailoring and targeting your unit’s success rates for future follow-on work is a great idea.

Site Visit Reviews alone cannot guarantee you orders for follow-on work. You must get them yourself! Keys are for you to become proactive & concentrate on building up your unit’s core competence as fast as possible. Beyond the all-important capabilities of your upgrade/repair site, be sure to get top-notch expert guidance on how to read work order solicitations & write effective proposals.

Visit the information sources on Best Practices provided by Navy on a regular basis, and search for work orders in your areas of expertise & products/services. Then it is your responsibility to prepare & submit effective cost quotes for large-dollar-value work orders. Be sure to send in an effective technical, past performance & cost proposal package.

Investigate advantages of support services offered to you in Navy Technical Manuals by contacting office of Visiting Executive. There are many services and training opportunities to place your unit in strong position to secure follow-up work. 

There aren’t enough Digital Twin Agents to go around. If you do happen to find a qualified candidate, they still might not end up being a good fit for your unit. Content creators come in all shapes and sizes and have many different talents they can bring to the table. There isn't a single set of skills that defines a Great Digital Twin Agent.

This position is responsible for independently evaluating, selecting, and applying standard mechanical design techniques, procedures and criteria in solving technical problems pertaining to product design and/or development, manufacturability, product quality and test or systems compatibility.

Site Visit Executive can work for you by highlighting your issues & concerns with Top Brass in charge of making policy your unit will have to follow. Jump-start your operations with this solution checklist.

1. Capable of executing design processes to finalize a specific product character model

2. Knowledge and experience with 3D printing and rigging techniques

3. Shadow and learn the product approval process, with the goal of completing product from concept to final

4. Contribute to the development of character art with an emphasis on digital inking

5. Develop 3D models of machines, tools and other objects that can be found at job site

6. Compose event-driven game scenes and program game functionality and controllers

7. Assist with building the library of 3D modeled characters for consumer products central creative team

8. Interpret and artistically translate 2D concept sketches into 3D concept models and prototypes

9. Meet all technological standards using a combination of hand modeling techniques and 3D modeling software

10. Intuitive understanding of 3D visualization and the ability to sense scale and materials

11. Demonstrate knowledge of hard surface modeling with concern for how these models will function

12. Determine how models will be manufactured, and how they are assembled

13. Knowledge and experience with modeling materials and rapid prototyping

14. Ability to bring 2D to life using 3D tools and produce successful 3D models that ultimately translate into production and a profitable product

15. Support the project prototyping process from tech pack generation through commercialization and production

16. Ensure critical product development dates and on-time delivery of review samples are met

17. Develop technical competencies to understand the use of materials, construction characteristics, mold making and costing 

18. Experience with game design or development and Virtual Reality technology

19. Understanding of game design and gameplay, particularly in training or instructional design

20. Apply knowledge of the technological capabilities and limitations associated with modeling and texturing for gaming

21. Work with design leads to create various levels of detail for each model 

22. Include high-, medium- and low-count versions, as well as possible damage states for each

23. Assist in getting models prepped for game placement by working with technical artists on various tasks

24. Interface directly with customers, users, graphic designers, and web content specialists to ensure that needs are technically feasible and meet customer strategy and goal

25. Assist with designing and developing user-interface features, site animation, and special-effects elements

26. Prepare technical documentation of work in progress and work completed to provide feedback of task progression

27. Create and maintain visual artwork, presentations, logos, websites, interactive media, videos, screenshots, and documents

28. Provide collection management, analysis, processing and dissemination of geospatial model products derived from imagery and tactical intelligence

29. Prepare imagery derived geospatial models using 3D modeling reports and products

30. Utilize highly advanced model techniques to tailor terrain products and/or other target features in digital formats

31. Perform digital data manipulation of imagery and topographic information

32. Query, view, evaluate, and download digital data

33. Demonstrate 3D modeling skills and experience with hard surface modeling

34. Experience with 3D model cleanup and optimization techniques

35. Create models/assets and programming for mobile, multi-player game engines

36. Demonstrate capability to work in a production pipeline

37. Experience creating training or instructional materials

38. Must include a link to work samples or a real-time demo

39. Create high-quality animations while optimizing to meet technical constraints 

40. Make changes to model that are requested based on direction

41. Create animation systems that meet game needs in collaboration with team members from other trades 

42. Work in close collaboration with the product director, game designers and programmers to understand the quality objectives

43. Script game play intentions and engine capacities in order to anticipate the in-game look of the animations

44. Estimate the time required to carry out own tasks and manage time to meet deadlines

45. Quickly prototype animation systems that will serve as a basis for animation- and gameplay-related discussions

46. Create quality animations in collaboration with the designers and programmers to fit the established style and vision

47. Integrate and support animations within the game engine

48. Communicate and working closely with game designers, artist and programmers to find effective solutions or fix issues, and achieve goals in time

49. Review final 3D images and animations prior to delivery

50. Ensure high quality product has been produced and client requests have been implemented properly

0 Comments

Top 50 Performance to Plan P2P  Work Skills Requirements Apply at Digital Hangar Maintenance Job Site

5/10/2020

0 Comments

 
​Navy successfully demonstrated the transmission of a Super Hornet’s system status data from an airborne F/A-18F to the ground-based Automated Maintenance Environment (AME).

The demonstration is the latest example of using network-centric capabilities to increase the effectiveness of an existing weapon system. By using an existing tactical data link, aircrews can transmit data to base operations while in flight.

Upon receipt, the ground station automatically routes the data to sea- or land-based operational maintenance centers. The data enables maintenance personnel to respond with parts and equipment as soon as the aircraft lands, decreasing aircraft turnaround times.

"Maintenance data downlink transforms maintenance. Because maintenance crews receive the data before the aircraft lands, they can be ready and waiting with the right support equipment, the right part, and the right technical data to quickly return the aircraft to flight status. 

No hardware or structural changes are required as the enhanced maintenance capability is provided by software changes to the aircraft and ground equipment. The software-only nature of this change allows this capability to be easily transitioned to the operational fleet.
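As an illustrative sketch, not the actual AME message format, acting on a downlink before landing reduces to mapping received fault codes to parts and support equipment:

```python
# Illustrative mapping of downlinked fault codes to parts and support
# equipment; the code table is made up, not FAME/AME data.
FAULT_TABLE = {
    "HYD-2-LOW": {"part": "hydraulic pump", "equipment": "hydraulic service cart"},
    "GEN-1-FAIL": {"part": "generator", "equipment": "external power unit"},
}

def stage_for_landing(fault_codes: list[str]) -> list[str]:
    """Return the staging list for everything the downlink calls out."""
    actions = []
    for code in fault_codes:
        entry = FAULT_TABLE.get(code)
        if entry:
            actions.append(f"{code}: stage {entry['part']} + {entry['equipment']}")
        else:
            actions.append(f"{code}: unknown code, pull technical data")
    return actions

print(stage_for_landing(["HYD-2-LOW", "ENG-9-XX"]))
```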

Demos were conducted to validate emerging technology for the warfighter. Navy continues to explore and demonstrate technologies that will result in greater connectivity, enhanced situational awareness and fleet readiness.

The operation of Navy vessels is complex, with hundreds of sailors performing thousands of tasks related to the maintenance and logistics associated with keeping systems running properly. 

A ship is a dynamic environment, requiring the ability to access critical data regarding ship operations while moving about the ship, or in harsh environments, such as an engine room.

Historically, sailors would move about the ship recording data manually into notebooks and then return to the chief engineering office to enter the data into the official Ship Engineering Log or, if available, into the Integrated Condition Assessment System (ICAS).

This process engages a large number of staff in manual and routine monitoring. There was also a significant time lapse that would occur between the time data was recorded into the system and when the data was acted upon, should maintenance be required.

Navy designed a secure wireless local area network (WLAN) that allowed for communication in the engineering spaces of the ship while in port or when sailing. The network included the installation of sensors on critical systems to allow for the transmission of data over a wireless network to ICAS in real time.

The chief engineering officer could immediately identify equipment problems and communicate with maintenance personnel to implement corrective action quickly. The solution called for pre-wiring sensors into local data acquisition devices, which communicated wirelessly to a WLAN to send the sensor data to ICAS. 

The local data acquisition boxes have the ability to support a range of sensor types, including vibration, RTD, thermocouple, smart sensors, and legacy analog sensors.

For equipment that could not be sensorized, Navy developed a wireless client for Pocket System that could be carried by a sailor and used to record local gauge readings and send the information back to ICAS over the WLAN while moving about the ship.
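In rough terms, the monitoring flow described above might look like the sketch below; the thresholds are invented and the in-memory log stands in for the WLAN transmission to ICAS:

```python
# Rough sketch of sensor readings flowing to a central condition store
# instead of notebooks; transport and thresholds are assumptions.
import time

THRESHOLDS = {"lube_oil_temp_F": 180.0, "vibration_ips": 0.5}
central_log = []  # stand-in for the ICAS-style data store

def publish(sensor: str, value: float) -> None:
    record = {"t": time.time(), "sensor": sensor, "value": value}
    central_log.append(record)  # in practice, a send over the secure WLAN
    limit = THRESHOLDS.get(sensor)
    if limit is not None and value > limit:
        print(f"ALERT {sensor}={value} exceeds {limit}")  # to the watch officer

publish("lube_oil_temp_F", 172.0)  # normal: logged only
publish("vibration_ips", 0.8)      # out of limits: alert fires immediately
```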

This solution has been deployed on several Navy ships to support the test and evaluation of the concept. This solution is highly flexible and scalable to a wide variety of military fleet components.

Automated Maintenance Environment Products include: Wireless LAN-Enabled Data Acquisition System, Access Point / Bridge with USB for Sensor Networks, Outdoor Dual Radio Wireless Mesh Node, Outdoor Wireless Interface, Security Server and Cryptographic Client Software

A secure WLAN solution has been deployed on several Navy ships. The solution helps streamline ship maintenance activities.

A secure wireless network helps Navy utilize its personnel more effectively, while improving shipboard maintenance and damage control. The solution saves valuable sailor time, results in a lighter workload, requires fewer personnel to perform routine monitoring of equipment for maintenance purposes, and allows for more time to focus on warfighting.

Since the data is transmitted in real time, personnel are able to respond to maintenance situations faster, potentially preventing more intensive equipment repairs and reducing the overall maintenance and logistics burden. 

The solution keeps critical and sensitive information secure, as the equipment meets security standards. The Navy also benefits because no disruptive and costly wired construction is required on board the ship.

Good communications skills are required for interaction with agencies, contractors, and parallel organizational test leads, and for briefing test status to Program Leadership.

1. Experience with Joint Mission Planning System (JMPS) and/or F/A-18 Automated Maintenance Environment (FAME)

2. Researches, identifies and resolves customer supply support system inventory discrepancies. 

3. Investigates total platform vehicle demand management, replenishment and electronic data systems. 

4. Monitors logistics system performance. 

5. Solicits customer feedback and takes action to improve satisfaction with company services. 

6. Training end-users in both OJT and class room environments

7. Monitoring FAME database integrity and configuration management 

8. Monitor database integrity and configuration.  

9. Perform system testing, troubleshooting and fleet training as required. 
 
10. Train personnel on proper use of the F/A-18 Automated Maintenance Environment (FAME). 
 
11. Coordinate activity maintenance reporting requirements 

12. Monitor and provide feedback on the technical aspects of the program including the systems architecture, system interfaces, associated technical performance measures, and logistics elements contained within FAME.
  
13. Provide user training at all FAME activities, including informal, over-the-shoulder, and annual classroom refresher training for all squadrons to include logs and records (AZ) training.  

14. Support the identification, analysis, and tracking of trouble reports through the use of the FAME help desk and/or Virtual Electronic Correspondence Tracking and Online Reporting (VECTOR) application.

15. Knowledge of responsibilities and tasks performed by various Logistics/Engineering departments/disciplines e.g., design, test, software, technology, avionics, LSA, Provisioning, Technical Publications

16. Able to operate with deployed and CONUS based military operations centers and field based commands. 

17. Knowledge and/or experience with F/A-18 and/or Navy Flight Operations / Maintenance is preferred. 

18. Knowledge of the interaction between departments/disciplines and how their products/processes affect one another and impact non-engineering processes e.g., Operations and Business 

19. Strong knowledge of process improvement and installation of integrated systems of people, materials, equipment, and methods.

20. Knowledge of network communication concepts, principles and architectures, associated with network planning, design, integration and maintenance. 

21. Working knowledge of performance monitoring and diagnostic analysis 

22. Knowledge of fleet aviation operations & maintenance processes, policies and standard practices to effectively represent the operator 

23. Operate with deployed and CONUS based military operations centers and field based commands. 

24. Management and sustainment of all aircraft logbooks, aeronautical equipment service records, aircraft maintenance files, records and reports, directives and correspondence in an aircraft maintenance and operations environment

25. Utilize the current management information systems to maintain aircraft forms and records as required by customer.

26. Draft and submit aircraft/engine management and Inventory Reporting System reports in a timely manner.

27. Process readiness documents, administer the aircraft configuration status accounting program and verify aircraft utilization reports.

28. Monitor aircraft configuration status, weight and balance and aircraft inventory data for accuracy.

29. Initiate and distribute applicable maintenance forms in accordance with established procedures.

30. Monitor, verify and log Support Equipment Custody records, Aviation Armament Equipment, aircraft inventory records and Technical Directives as applicable.

31. Comply with all established general and industrial safety rules and regulations as applicable to the contract, facility and job assignment.

32. Ability to meet required scheduling deadlines and maintain necessary work flow.

33. Thorough knowledge of aircraft log books, maintenance records, applicable maintenance / technical manuals, publications and forms.

34. High degree of knowledge in computer operation and keypunch skills. 

35. Maintain aircraft log books, aeronautical equipment service records and associated logs.
 
36. Manages field office resources. 

37. Capturing and documenting new user requirements

38. Provide FAME technical support  

39. Provide PM required reports and attend weekly conference calls.
 
40. Recommend changes to maintenance policies and procedures.

41. Ensure customer satisfaction, keeping the user informed at all times.

42. Knowledge of F/A-18 Automated Maintenance Environment (FAME) 

43. Navy and Marine Corps Intranet (NMCI)

44. Optimized Organizational Maintenance Activity (OOMA)

45. Knowledge of database management tools and /or operating systems protocols.
 
46. As required by work and customer specifications. May require travel in support of detachments of unknown duration.

47. Operate in austere environments and be physically separated from personnel. 

48. Ability to work independently as well as within a team environment and under stressful conditions is essential.  

49. Assist in preparation of the Monthly Maintenance Plan. Provide technical assistance, guidance and instruction as required.
​
50. Ability to work independently as well as within a team environment and under stressful conditions is essential.
0 Comments

Top 10 Artificial Intelligence Implement Autonomous Systems Operator Target Range Decision Speed

5/1/2020

1 Comment

 
​Questions about autonomous warfare address whether artificial intelligence can understand the context of its actions, its predictability, its ability to transfer lessons from one task to another, and its durability.

Autonomy is generally defined as a machine having the ability to execute tasks with limited to no human intervention. Advances in autonomy are driven by converging technologies such as AI, robotics, big data, and advanced motion sensors. Autonomous systems can involve a built-in human control mechanism (human-in-the-loop), a human override mechanism (human-on-the-loop), or minimal-to-no human involvement (human-out-of-the-loop).
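A toy sketch of the three control patterns; the engagement logic is invented purely to show the distinction, not to model any fielded system or doctrine:

```python
# Toy illustration of the three human-control patterns; the engagement
# logic is invented to show the distinction, not any fielded doctrine.
from enum import Enum

class ControlMode(Enum):
    IN_THE_LOOP = "human must approve each action"
    ON_THE_LOOP = "human may veto within a window"
    OUT_OF_LOOP = "system acts without human involvement"

def engage(mode: ControlMode, approved: bool, vetoed: bool) -> bool:
    if mode is ControlMode.IN_THE_LOOP:
        return approved        # nothing happens without approval
    if mode is ControlMode.ON_THE_LOOP:
        return not vetoed      # proceeds unless overridden in time
    return True                # out-of-the-loop: fully autonomous

print(engage(ControlMode.ON_THE_LOOP, approved=False, vetoed=True))  # False
```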

Autonomous systems can be conceptually divided into two categories: processes and assets. Autonomous processes include those capabilities driven by machine learning, big data, cloud storage, and AI to automate procedures and functions. 

Major advances in autonomous processes could support mission planning, training, decision-making, administrative roles, and business functions. Autonomous assets include the physical equipment and resources the Services can use to carry out missions. These assets fall primarily into three categories: unmanned aerial vehicles (UAV), unmanned underwater vehicles (UUV) and unmanned surface vehicles (USV).

USVs receive less attention than their aerial counterparts but hold enormous potential. USVs support cross-domain integration and increase the capabilities of other unmanned systems with their large payloads, power reserves, and endurance. They can also help overcome anti-access/area-denial environments by projecting information operations, military deception campaigns and electronic warfare capabilities. Current projects aim to have swarms of autonomous vessels conducting both surveillance and security operations.

Autonomous assets can act as a major force multiplier. UAVs, USVs, and UUVs can increase the strength of the force and material readiness while the Navy’s requirements for deployments, readiness, and forward presence remain high.

“Plans to Accelerate  Fielding of Unmanned/Autonomous Robotic Systems  Face Technical Challenges”

Pentagon is planning the use of robots to carry out the dangerous, and often tedious, elements of combat.

Services are testing new ways of pairing troops with air and ground robots at the squad level, with sights focused on enhancing how the squad works on the battlefield with robots and advanced targeting and sensing gear.

Squads are using air and ground vehicles to detect physical and electromagnetic threats, and are demonstrating the ability to communicate and collaborate even while operating on the edge of connectivity.

One program will give aviators a robot co-pilot with autonomous capability to take the load off pilots so they can focus on mission tasks other than flying.

There is an ongoing effort to develop new technologies that extend squad awareness and engagement capabilities without imposing physical and behavioural burdens.

Efforts aim to speed the development of new, lightweight, integrated systems that provide infantry squads awareness, adaptability and flexibility in complex environments, enabling dismounted troops to more intuitively understand and control their complex mission environments.

Those efforts fit within wider work being done by the Close Combat Lethality Task Force, a group set up to enhance close combat capabilities for infantry, special operations, scouts and some engineers.

Squad Sensing detects potential threats at a squad-relevant operational pace. Capabilities of interest include multi-source data fusion and autonomous threat detection. 

Squad Autonomy increases squad members’ real-time knowledge of their own and team locations in GPS-denied environments using embedded unmanned air and ground systems. Capabilities of interest include robust collaboration between humans and unmanned systems.
 
“Each run, they learned a bit more on the systems and how they could support the operation. By the end, they were using the unmanned ground and aerial systems to maximise the squad’s combat power and allow a squad to complete a mission that normally would take a platoon to execute.”

Troops have been equipped with a variety of robotic and autonomous systems with the aim of improving areas such as combat mass, soldier lethality and overall information gathering.

In one scenario, soldiers used robotic engineering vehicles to clear an obstacle, while a small quadcopter flew overhead to provide infrared imagery before armored infantry rolled in to take an enemy position.

 Robotic systems with varying levels of autonomy were a key part of the exercise, ranging from radar-equipped drones for detecting buried IEDs, to small two-wheeled robots that are thrown into buildings to search for enemy fighters.

A related challenge continues to be lack of experience using unmanned and autonomous systems, with commanders using exercises to better understand capability enhancements as well as the inevitable shortfalls.

“This is a real opportunity to bring stuff into the field to see if military users will use it the way industry thinks they will use it. There’s no one single piece of kit that will solve all our problems; it’s a combination of something in the air such as a surveillance asset, something on the ground, perhaps with a weapon on it or just doing logistics, but then it all links through an information system where you can pass that data and make better decisions to generate tempo.”

One issue is an increasingly crowded radio frequency spectrum, especially as several unmanned systems compete for space to beam back high-resolution data from onboard sensors. “The problem is when they start cutting each other out. We are dealing with physics here; if we want to have great high-definition video passing across the battlefield, we need to trade somewhere else.”

Not only will there be a need to ensure that the control systems do not interfere with each other, but leaders “will have to be convinced that new systems are not simply too vulnerable to jamming and other disruptive techniques by an adversary.”

A promising development from trials is the ability to optionally man a standard vehicle using kits that can be fitted within a few hours, including a remote-controlled infantry fighting vehicle and a lightweight tactical vehicle.

Troops in the exercise used the vehicles in unintended ways, utilising the onboard camera as a surveillance tool. Squads also used the vehicles to help in entering buildings and to carry supplies or troops.

“What we have found is that when troops are using these vehicles they just want to jump on the vehicle because it goes faster than they can, and you can move groups very quickly on them.” For safety reasons the soldiers were not allowed to hop on board during the exercise. “Optionally manned is good, but we don’t know if it needs to be optionally manned with a steering wheel and a seat.”

Additionally, autonomous assets strategically support principles such as distribution and maneuver by leveraging “additional weapons and sensors over large areas” and optimizing the “strategic depth of the force.” Both airborne and surface-borne drones can support intelligence collection and targeting requirements for multi-domain battlespaces and over-the-horizon amphibious operations. Providing adequate fire support for landing forces remains a challenge, and autonomous drones could help meet it by acting as mobile mini-mortars with increased on-station times.

Autonomy and man-machine teaming can allow leaders to make better decisions faster. Military leaders must “be prepared to make decisions at the speed of relevance. When the speed of relevance is the speed of electrons, the Navy will depend on autonomy to remain a relevant fighting force.”

The military already uses autonomous systems for offensive and defensive missions. Various levels of autonomy support mobility, targeting, intelligence, and interoperability. Autonomy empowers homing missiles, navigation, and autopilot capabilities. Basic targeting systems use automated target recognition to identify objects and support non-kinetic targeting for ISR collections.

Counter-artillery batteries and Phalanx close-in-weapons-systems can engage automatically upon detecting a threat. Recurring and rules-based tasks such as scheduling replenishments at sea, naval weapon-target assignment plans, dynamic frequency allocations, and planning daily aircraft routing are candidates for integration with AI in the near future. 
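As a toy illustration of one such rules-based task, the sketch below greedily assigns each weapon to the remaining target where it has the highest kill probability. The probability table and the greedy rule are illustrative assumptions, not a fielded assignment algorithm.

```python
# A minimal sketch of a rules-based weapon-target assignment:
# greedily pair each weapon with the best remaining target.
# The kill-probability table below is illustrative.
def assign(pk):
    """pk[w][t] = probability that weapon w kills target t."""
    assignments, taken = {}, set()
    for w, row in enumerate(pk):
        candidates = [(p, t) for t, p in enumerate(row) if t not in taken]
        if candidates:
            p, t = max(candidates)   # highest kill probability still available
            assignments[w] = t
            taken.add(t)
    return assignments

pk = [[0.8, 0.3, 0.5],
      [0.4, 0.9, 0.2],
      [0.6, 0.5, 0.7]]
print(assign(pk))  # -> {0: 0, 1: 1, 2: 2}
```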

The Navy recently created its first underwater drone squadron. Future uses of unmanned surface vehicles [USVs] are under-explored but hold substantial promise. USVs have significant advantages over UAVs and UUVs with regard to endurance and payload capacity for prolonged operations.

Previous exercises highlighted the ability of USVs to relay instructions from shore to underwater assets – in this case, by ordering the launch of a UAV that a UUV was carrying.

Most USVs are directed toward missions such as observation and collection, physical-environment mapping, countermeasures, and countering small boats, with testing underway on automated payloads and autonomous coordination with multiple ships.

The Navy and Marine Corps are uniquely suited to benefit from autonomous systems. Attributes that welcome autonomy: empowering lower-skilled workers to perform higher-skilled work, replication for large-scale operations, faster-than-human reaction speed, superhuman precision, extended patience, and operations away from reliable communications. Some strides are being made to foster autonomy, but more can be done. 

Most AI systems require some level of guidance from humans. Sailors and Marines will require instruction and training on these technical systems, just as officers will require education on how to integrate them into operations and planning. Educating front-line leaders on the capabilities of autonomous systems should be a priority. 

As military forces move in human-built environments they should consider the possibility that remote or autonomous machines, legged as well as winged, could also be traversing in the same way. 

The programme is exploring precision engagement of threats while maintaining compatibility with infantry weapon systems, without imposing weight or operational burdens that would negatively affect mission effectiveness. Capabilities of interest include distributed, non-line-of-sight targeting and guided munitions.

Non-kinetic engagement disrupts enemy command and control, communications, and use of drones. Capabilities of interest include disaggregated electronic surveillance and coordinated effects from distributed platforms.

The military is carrying out a number of experiments in communications, EW, loitering munitions and targeting. Services are looking for ways to enhance infantry capabilities using manned-unmanned teaming.

Augmented Spectral Situational Awareness and Unaided Localisation for Transformative Squads are being tested using autonomous robots with sensor systems that detect enemy locations, enabling squads to target the enemy with a precision grenade before the enemy can detect their movement.

Small units using an Electronic Attack Module were able to detect, locate, and attack specific threats in the radio frequency domain, part of larger efforts to put more detection and fires at lower echelons in ground force units.

Mine clearance is presently done by humans, who often have to physically place detonation charges on the mines they find. Some day, autonomous robots could perform the same task with less operational risk.

Swarm Diver enables surface or underwater drones to release swarms of smaller autonomous underwater robots to scout, identify, and counter threats in littoral waters. Autonomy is key here: communicating underwater is difficult, and communicating with above-water assets from underwater is especially tricky without an intermediary.

Should the Swarm Diver project work as intended, swarms of autonomous robots could be the long-awaited answer to the enduring threat posed by mines.

The Air Force is doubling down on efforts to fuse text, video, and virtually every potential source of information together through artificial intelligence. The goal is to change the way every commander, airman, and even thinking machine makes decisions. The Air Force has begun a programme called Data to Decision, or D2D.

The D2D objective is a complete cycle of understanding: predict what we expect to encounter, prescribe what can be done to help understand the environment, then find, fix, track, target, engage, and assess anything, anytime, anywhere, in any domain.

“We want to improve every decision – those made by airmen or by machines.”

That includes decisions about where a piece of information needs to go next. “How you aggregate all the data, take the pieces that matter for a mission, move it where you need to move it – even for different purposes – then think about where it needs to move and why.”

The programme is grand in its ambitions to use a wide variety of data, extending well beyond traditional aerial surveillance footage to potentially include, well, everything: live-streaming diagnostic data off of jets, drones, and other aircraft, data from soldier-worn sensors, and more.

“We will use all data available,” including unstructured, open sources. “We are shifting thinking to focus on the object in question and looking for any data that may be relevant. Machines will help us determine relevance, especially as we aggregate in ways never before considered. So all the potential data cited could be part of the relevance discussion.”

How do you take all of that data and use it to output a dynamic and credible picture of the future? Here, too, the strategy is: diversity in approach. The potential artificial intelligence tool kit the Air Force is using ranges from neural nets and deep learning approaches to less exotic machine learning methods, useful for tasks where the data is structured or the variables are fewer, like chess.

It’s not the first time the military has experimented with fusing a wide variety of data to create a fuller picture of the battlespace for better decision making. One earlier effort used multiple neural nets to populate a larger engine that then used statistics to output predictions and probabilities.

D2D uses some lessons from past efforts. “AI and machine learning can help by creating neural networks across disparate data sources that would then help us better understand the potential to use the nets for Air Force mission execution.”

That’s key, because statistical methods don’t provide the most accurate answer the first time you use them. You run the formula over and over again, stirring in new data or information that you receive, coming closer and closer to a prediction in which you can have confidence.
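A minimal sketch of that iterative refinement, using a simple beta-binomial update as an assumed stand-in for whatever statistics the engine actually ran: each new batch of observations tightens the estimate.

```python
# A minimal sketch of iterative statistical refinement: a beta-binomial
# update narrows a probability estimate as new data is "stirred in".
# The observation counts below are illustrative.
successes, failures = 1, 1                        # uninformative prior
for batch_hits, batch_misses in [(3, 7), (5, 5), (40, 60), (400, 600)]:
    successes += batch_hits
    failures += batch_misses
    n = successes + failures
    estimate = successes / n
    spread = (estimate * (1 - estimate) / n) ** 0.5  # approximate standard error
    print(f"estimate {estimate:.3f} ± {spread:.3f}")  # confidence tightens each pass
```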

“The components of the D2D Program have transitioned into operational prototypes in a variety of mission areas, including activities in video, image, and text analytics. The D2D technical products and lessons learned are critical elements of the DoD roadmap for future capability in machine learning and artificial intelligence.”
 
On that roadmap, there are high expectations. “You’ll see some modest investments in that space. If you look inside what we are pushing to aviators, it’s a lot of information to absorb. So it’s about: can you actually push the information to the human? Can you make it make sense to the human, to help the human make decisions at greater speed? That last part is the hard part. The human has to decide faster. So parsing the data and how you decide actually matters. That’s a technology piece that’s really going to require some experimentation.”

“Some would argue that is man-machine teaming, but it can be described differently: you take that same person working in that cockpit – with all that information streaming to them – and you augment it by a factor of ten, teaming that airplane with offboard sensors that push ten times more data to that same person, and giving them the weapons that will allow them to prosecute those targets.”

Now you are talking about more than what fighter pilots call beyond-visual-range shots with high confidence. You’re talking about being able to prosecute the enemy at hundreds of kilometers away from your airplane with very high confidence that you’re actually engaging the targets you intend to engage, because we don’t engage targets unless we have high confidence.

1. Targets are not always detected upon first visit, and the probability of detection on a visit is not independent of overall time spent searching. A memory-less process also implies that the searcher may return to the target after detection, i.e., the process itself does not include a termination-upon-detection requirement.

2. Under the model, the durations of pre-detection visits to a target, detection visits (those in which detection actually occurs), and post-detection visits should all be equivalent and exponentially distributed.

3. In the test data, the pre- and post-detection visit durations are exponential and essentially identical. However, detection-visit durations tend to be long – nearly twice as long as pre- and post-detection durations – with few very short visits and no tail of long-duration visits.

4. The data show that detection visits are more normally than exponentially distributed; the extra delay may be attributable to a motor response and some sort of inhibition in the pilot movement system.

5. A different strategy could have been used for verification leading to a detection than for checking when no detection decision was made.

6. The distribution of first-visit times is close to exponential, but only after a delay. This result is consistent with observations in scene-perception studies indicating that observers do not begin searching a scene immediately when it appears.

7. When confronted by a new scene, the observer first spends a short time glancing around to “orient” and extract the spatial layout. Some visual and possibly cognitive process has to extract scene information sufficient to delineate points of interest before the search process described by the model can begin.

8. The memory-less assumption implies that the gaps between visits to the target should be distributed the same way before and after detection. In fact, post-detection gaps are not exponentially distributed, and the search process returns to the target too soon after detection for it not to have learned, i.e., search is not a memory-less process.

9. The detection process – the assumption that detection depends on time spent exploring the target rather than on overall search time – makes a prediction within the framework of the statistical process: probability of detection is exponential in the time on target.

10. The distribution of the number of targets detected across all trials in the data set is described by two or three exponentials. This result is supported by the data once search time is shifted to account for the delay in first visit. A minimal simulation of this model appears below.
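As a concrete illustration, here is a minimal Monte Carlo sketch of the search model just described: visits arrive after exponentially distributed gaps, visit durations are exponential, the probability of detection is exponential in time on target (item 9), and search begins only after an orienting delay (item 7). All rates and durations are illustrative assumptions, not values fitted to the test data.

```python
# A minimal sketch of the memory-less search model described above.
# All parameters are illustrative assumptions.
import math
import random

MEAN_VISIT = 1.0      # mean visit duration (s), exponential assumption
MEAN_GAP = 4.0        # mean gap between target visits (s)
DETECT_RATE = 0.5     # per-second detection rate while on target (item 9)
ORIENT_DELAY = 0.3    # initial "orienting" delay before search begins (item 7)

def run_trial(max_time=60.0):
    """Simulate one search trial; return detection time or None."""
    t = ORIENT_DELAY
    while t < max_time:
        t += random.expovariate(1.0 / MEAN_GAP)       # gap before next visit
        visit = random.expovariate(1.0 / MEAN_VISIT)  # exponential visit duration
        # P(detect within this visit) = 1 - exp(-DETECT_RATE * visit)
        if random.random() < 1.0 - math.exp(-DETECT_RATE * visit):
            return t + visit
        t += visit
    return None

times = [run_trial() for _ in range(10_000)]
hits = [x for x in times if x is not None]
print(f"detection rate: {len(hits)/len(times):.2%}, "
      f"mean detection time: {sum(hits)/len(hits):.1f} s")
```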

1 Comment

Top 10 Artificial Intelligence Advantages Robotic Deployment Transforms Ability to Execute Logistics Work

5/1/2020

0 Comments

 
​Today's military forces need more than ammunition to win battles. When evaluating resources, leaders need to leverage the latest technology tools—artificial intelligence and digital information—to make the best decisions first.

Artificial intelligence could soon be helping the military predict when equipment will break, defend against attacks, and prevent ships from colliding with one another.

The Pentagon has made no secret that it wants to pair humans with machines to help them make decisions faster. The military has been looking for ways to automate intelligence processing using a new tech warfare cell. 

Already, these algorithms are being used by commercial firms to forecast failure rates for ship turbines and pumps. And they can predict not only when equipment will fail, but also the type of failure and why it’s failing.

“We built a solution for them that extended their failure forewarning from four hours to five days. That’s the kind of impact the tech brings. We don’t know anything about pumps; we’re not domain experts in pumps.”

“So instead of going up and looking through these manuals and figuring out what to do when something breaks and going through that prognostics process, you want the model to basically anticipate what you need and also fetch information collected from various manuals, based on an intelligent understanding of that content, and provide it back to the person asking the question.”

“The threat surface is so massive that dealing with the types of threats that are emanating …” The technology might also be able to help with the reported pump and turbine problems on the Navy’s Littoral Combat Ships. “That is the exact kind of thing we look into and that we protect on a commercial level with a large number of companies today.”

Then there’s the applicability across all military systems.

An “AI watchman” could prevent ships from colliding with one another since the computers are “constantly looking at sensor data and is making sense of the environment and the situation.”

“There is that safety aspect of using artificial intelligence to augment the level of capability and intelligence available on ships, on tanks, in aircraft, all over, where you almost have an embedded AI technician be part of every military asset.”

“That is a capability and it leads to tremendous benefits. And the possibilities may be endless.” There’s an easy answer to the question of where AI can be applied: “It can be applied literally everywhere.”

Military Applications of Artificial Intelligence Tech: Questions Remain How Best to Utilise for Operation Scenarios

The Pentagon recently ran tactical trucks fitted with sensors, electronics and other applications powered by commercially developed artificial intelligence, taking new steps toward more quickly predicting and identifying mechanical failures of great relevance to combat operations.

An Army-industry assessment incorporated attempts to use AI and real-time data analytics for newer, fast-evolving applications of Condition-Based Maintenance technology.

Advanced computer algorithms, enhanced in some instances through machine learning, enable systems to instantly draw upon vast volumes of historical data as a way to expedite analysis of key mechanical indicators. Real-time analytics, drawing upon documented pools of established data through computer automation, can integrate otherwise disconnected sensors and other on-board vehicle systems.
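As a minimal sketch of that kind of real-time analytic, the code below flags readings in a sensor stream that drift far from the recent baseline using a rolling z-score. The window size, threshold, and injected fault signature are illustrative assumptions, not values from the Army assessment.

```python
# A minimal sketch of real-time anomaly flagging on a vehicle sensor
# stream using a rolling z-score. Parameters are illustrative.
import random
import statistics
from collections import deque

def detect_anomalies(readings, window=50, threshold=3.0):
    """Yield (index, value) for readings far outside the recent baseline."""
    history = deque(maxlen=window)
    for i, value in enumerate(readings):
        if len(history) == window:
            mean = statistics.fmean(history)
            stdev = statistics.stdev(history)
            if stdev > 0 and abs(value - mean) / stdev > threshold:
                yield i, value
        history.append(value)

# Example: a vibration trace with an injected fault signature.
trace = [random.gauss(1.0, 0.05) for _ in range(500)]
trace[400:410] = [2.5] * 10
for i, v in detect_anomalies(trace):
    print(f"anomaly at sample {i}: {v:.2f}")
```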

“We identified some of the challenges in how you integrate sensor data that is delivered from different solutions. You can take unstructured information from maintenance manuals, reports, safety materials, vehicle history information and other vehicle technologies – and use AI to analyze data and draw informed conclusions of great significance to military operators.”

Faster diagnostics, of course, enables vehicle operators to anticipate when various failures, such as engine or transmission problems, may happen in advance of a potentially disruptive battlefield event. Alongside an unmistakable operational benefit, faster Condition-Based Maintenance activity also greatly streamlines the logistics train, optimizes repairs and reduces costs for the Army.

Army wheeled tactical vehicles, which include the Family of Medium Tactical Vehicles and the emerging Joint Light Tactical Vehicle, are moving towards using more automation and AI to gather, organise and analyze sensor data and key technical indicators from on-board systems.

“We identified Army data challenges and delivered new sensors – and each different approach invariably brings different ways that data can be delivered to the Army.”

Faster computer processing brings substantial advantages to Army vehicles which increasingly rely upon networked electronics, sensors and C4ISR systems. 

“We know there are going to be unmanned systems in the future, and we want to look at unmanned systems working with teams of manned systems. This involves AI-enabled machine learning in high priority areas we know are going to be long term as well as near term applications.”

Technical gains in the area of AI and autonomy are arriving at lightning speed, offering faster, more efficient technical functions across a wide range of platforms. Years ago, the Army began experimenting with “leader-follower” algorithms designed to program an unmanned tactical vehicle to follow a manned vehicle, mirroring its movements. 

Autonomous or semi-autonomous navigation brings a range of combat advantages. A truck able to drive itself can, among other things, free up vehicle operators for other high-priority combat tasks. AI-enabled condition-based maintenance can function through a variety of methods: sensor information can be gathered, organised and subsequently downloaded, or wirelessly transmitted using cloud technology.

The military is creating robots that can follow orders, understand what they’re told to do, and execute it with minimal supervision.

Military robots have historically had limitations: practically no onboard intelligence, and piloting by remote control. What the military has long wanted instead are intelligent robot teammates that can follow orders without constant supervision.

The robot can take verbal instructions and interpret gestures. But it can also be controlled via a tablet and return data in the form of maps and images so the operator can see exactly what is behind the building, for example. 
 
The team used a hybrid approach to help robots make sense of the world around them. Deep learning is particularly good at image recognition, so algorithms similar to those Google uses to recognize objects in photos let the robots identify buildings, vehicles, and people. As well as identifying whole objects, a robot can recognize key points like the headlights and wheels of a car, helping them work out the car’s exact position and orientation.

Once it has used deep learning to identify an object, the robot uses a knowledge base to pull out more detailed information that helps it carry out its orders. For example, when it identifies an object as a car, it consults a list of facts relating to cars: a car is a vehicle, it has wheels and an engine, and so on. These facts need to be hand-coded and are time consuming to compile, but teams are looking into ways to streamline this like combining deep learning with a knowledge-base-centered approach so a robot can learn and show judgment.
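A minimal sketch of that lookup step, with a hand-coded knowledge base; the labels and facts below are illustrative, not the actual ontology of the research system.

```python
# A minimal sketch of the hybrid approach described above: a deep-learning
# recognizer labels an object, then a hand-coded knowledge base supplies
# facts the robot can reason over. Facts shown are illustrative.
KNOWLEDGE_BASE = {
    "car": {
        "is_a": "vehicle",
        "parts": ["wheels", "engine", "headlights"],
    },
    "building": {
        "is_a": "structure",
        "parts": ["walls", "doors", "windows"],
    },
}

def describe(label: str) -> str:
    """Pull stored facts for an object labeled by the vision model."""
    facts = KNOWLEDGE_BASE.get(label)
    if facts is None:
        return f"No knowledge available for '{label}'."
    return f"A {label} is a {facts['is_a']} with {', '.join(facts['parts'])}."

# e.g. the vision model returns "car" for a detected object:
print(describe("car"))
```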

Consider the example of the command “Go behind the farthest truck on the left.” As well as recognizing objects and their locations, the robot has to decipher “behind” and “left,” which depend on where the speaker is standing, facing, and pointing. Its hard-coded knowledge of the environment gives it further conceptual clues as to how to carry out its task.

The robot can also ask questions to deal with ambiguity. If it is told to “go behind the building,” it might come back with: “You mean the building on the right?”
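A minimal sketch of that clarification behavior, assuming a toy world model with hypothetical object records:

```python
# A minimal sketch of clarification under ambiguity: if a referring
# expression matches more than one known object, ask which one.
# Object types and positions are illustrative.
def resolve(command_noun, world):
    matches = [obj for obj in world if obj["type"] == command_noun]
    if not matches:
        return f"I don't see a {command_noun}."
    if len(matches) > 1:
        sides = " or ".join(obj["side"] for obj in matches)
        return f"You mean the {command_noun} on the {sides}?"
    return f"Going behind the {command_noun} on the {matches[0]['side']}."

world = [
    {"type": "building", "side": "left"},
    {"type": "building", "side": "right"},
    {"type": "truck", "side": "left"},
]
print(resolve("building", world))  # -> asks for clarification
print(resolve("truck", world))     # -> executes the command
```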

“We have integrated basic forms of all of the pieces needed to enable acting as a teammate. The robot can make maps, label objects in those maps, interpret and execute simple commands with respect to those objects, and ask for clarification when there is ambiguity in the command.”

Artificial Intelligence Data Platform Identifies Utility for Combat Vehicles 

Sensors were installed on Stryker vehicles, and an artificial intelligence platform ingested and analyzed maintenance manuals and work orders to create a comprehensive maintenance picture.

With that information, the system was able to flag anomalies and predict when components in the vehicles were likely to fail. That information helped the Army more easily spot and track problems in the field and limit the number of breakdowns that took vehicles out of operation.

With the massive volume of information analyzed, the Army could set up maintenance for individual vehicles rather than sending them in groups for scheduled maintenance.

Maintenance data processed from AI-connected vehicles could save manufacturers time and money, because they would know what parts to keep in inventory for likely repairs.

“Wherever a soldier is in the world, we will give them information faster and better analyzed. Newer analysis technologies work faster than what humans have time to look at. This is really going to take us to a greater sense of awareness regarding our equipment.”

The primary purpose of the platform is to function as a logistics brain for the Army and improve readiness by tracking, analyzing and disseminating Army readiness information, organising logistics data to speed up commanders’ decision-making.

Overall, the Army system manages large equipment inventories and repairs – such as those needed for combat vehicles, helicopters and trucks – along with vast numbers of smaller items spanning from small arms inventories to data services.

“We enable users to go in and out of a system within a very small technical footprint. We make all of that available 24/7 to Army users with system access.”

Enabling greater interoperability, data-sharing and faster network access can shine a spotlight on security issues in a double-pronged manner. In one sense, streamlining information can make data less dispersed and varied, therefore making it easier to protect.

At the same time, some raise the concern that fewer points of access could avail potential intruders an opportunity to launch a more targeted attack and enable them to reach, and potentially cause damage, deeper into networks.

Predictive Maintenance and Logistics Use Case Examples of AI Applications Improve Performance

Machine learning has real power to detect anomalies. Some existing predictive maintenance systems analyze time-series data from sensors, such as those monitoring temperature or vibration, to detect anomalies or forecast the remaining useful life of components. Inputs include sensor data, such as engine vibration traces, as well as images and video of engine condition.

Deep learning’s capacity to analyze very large amounts of high dimensional data can take this to a new level. By layering in additional data, such as audio and image data, from other sensors—including relatively cheap ones such as microphones and cameras—neural networks can enhance and possibly replace more traditional methods.
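As a minimal sketch of the reconstruction-error idea behind such systems, the code below uses PCA as a lightweight stand-in for an autoencoder-style network: fit a low-rank model of healthy sensor readings, then flag readings the model cannot reconstruct. Dimensions, thresholds, and data are illustrative assumptions.

```python
# A minimal sketch of reconstruction-based anomaly detection on
# high-dimensional sensor data, using PCA in place of a deeper network.
import numpy as np

rng = np.random.default_rng(0)
normal = rng.normal(size=(1000, 32))          # healthy-equipment readings
model_rank = 8

# "Train": fit a low-rank subspace to the normal data.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
basis = vt[:model_rank]

def reconstruction_error(x):
    """Distance between a reading and its projection onto the normal subspace."""
    centered = x - mean
    return np.linalg.norm(centered - (centered @ basis.T) @ basis)

threshold = np.percentile([reconstruction_error(x) for x in normal], 99)
faulty = rng.normal(loc=3.0, size=32)          # a reading far from baseline
print(reconstruction_error(faulty) > threshold)  # -> True: flag for maintenance
```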

AI’s ability to predict failures and allow planned interventions can be used to reduce downtime and operating costs while improving production yield.

In some industry examples, using remote on-board diagnostics to anticipate the need for service generates operational value. In a case involving cargo aircraft, AI can extend the life of the plane beyond what is possible using traditional analytic techniques by combining aircraft model data with maintenance history.

Application of AI techniques such as continuous estimation to logistics can add substantial value across many sectors through real-time forecasts and behavioral coaching.

AI can optimize routing of delivery traffic, improving fuel efficiency and reducing delivery times. By using sensors to monitor both vehicle performance and driver behavior, drivers receive real-time coaching, including when to speed up or slow down, optimising fuel consumption and reducing maintenance costs.
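A minimal sketch of route optimization using a nearest-neighbor heuristic over straight-line distances; real systems would use road networks and richer solvers, so treat this only as the shape of the idea. Coordinates are illustrative.

```python
# A minimal sketch of delivery-route ordering: greedily visit the
# nearest remaining stop, starting from the depot.
import math

def nearest_neighbor_route(depot, stops):
    """Order stops greedily by proximity, starting from the depot."""
    route, current, remaining = [], depot, list(stops)
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    return route

depot = (0.0, 0.0)
stops = [(2, 3), (5, 1), (1, 1), (4, 4)]
print(nearest_neighbor_route(depot, stops))
```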

One autonomous vehicle uses a morphing tire/track for traction and is run with a one-handed remote control, manoeuvring non-line-of-sight using onboard sensors and cameras. Planners are looking to the terrain challenges of dismounted operations, using armed unmanned ground vehicles to provide stand-off force protection.

The concept of self-driving cars has been around for years, but only recently have increasing advances in networking, satellites, and laser equipment made this dream a reality. 

Vehicle manufacturers realized that they could use camera systems to relay data to an onboard computer that would process images of the road and create responses. Although we do not have robotic vehicles filling our roadways as of yet, some vehicles already contain numerous autonomous features that make driving easier and safer than ever before. Some models offer assisted parking or braking systems that activate automatically if they sense an issue. Vehicles can sense lane position and make adjustments there as well.

To be useful, robots must navigate the world much as humans do; one recent trial served as a test of what could be the future of maintenance work.

The robot isn’t winning any races, but it has endurance. Its battery holds power for three hours and the robot can lower itself onto a charging station when it needs to power up. It’s not light, but could be carried into place on a small vehicle or by a couple of troops. Its limbs can push buttons and push open doors, though it would likely take extra modifications to get it to manipulate doorknobs.

For the tunnel exploration, the robot was lowered into place, and then guided by a joystick. Autonomous movement is possible, but using a remote control allowed the human observers to keep a closer eye on what, exactly, the machine was doing underground. 

The robot normally navigates by Light Detection and Ranging [LIDAR], a remote sensing technology used to measure distances and build 3D maps of the surrounding environment. To better comprehend terrain in low-light environments, the team is also exploring sensors at the end of its feet, providing a sense of touch. All of this could prove critical to fights taking place in underground tunnels.

Finding new ways to incorporate robots and autonomous or semi-autonomous vehicles into warfighting has captured the attention of top commanders, but nothing as basic and practical as the gear-mule concept has come so close to reality.

1. Importance and necessity of AI transparency is application-specific.

2. Trust must be met across algorithms, data, and outcomes.

3. Users must understand the mechanisms by which systems can be spoofed.

4. Robust and resilient digital capability requires balancing development, operations, and security.

5. Network risk management and network security ownership throughout and across organisations is critical.

6. Applying AI requires a skilled and educated workforce with domain expertise, technical training, and the appropriate tools.

7. Organisations must develop workforce expertise in digital data models.

8. Success for users in machine learning requires iteration, experimentation, and learning through early sub-optimal performance.

9. Organisations must build the foundational digital capability to successfully apply AI technologies: database management and information integration.

10. Gaining competitive advantage through information and analytics is an enterprise-wide endeavor from headquarters to the deployed warfighter.
0 Comments

Top 10 Artificial Intelligence Tech Limitations Machine Learning Systems Without Ability to Explain Outputs

5/1/2020

0 Comments

 
​We can’t wait for AI technologies to mature before educating and training personnel. Personnel expert both in their particular warfighting domains and in artificial intelligence and machine learning technologies will likely provide the best interdisciplinary approach to harnessing the power of these future systems.

Expansion of wargames and military case studies that introduce challenging dilemmas at the intersection of technology and military art will go a long way towards cultivating the mindsets needed in the 21st century service member.

Troops on the front lines should lead innovation in this space. Recently released planning guidance from the Commandant of the Marine Corps recognizes the importance of individuals with artificial intelligence skill sets.

Developing programs that mimic human behaviors such as reasoning or judgement is a non-trivial matter. Hard-coding general human-like behavior is also unrealistic, because the code required would quickly become unwieldy in accounting for countless combinations of events or changes in contextual factors. Artificial intelligence technologies overcome the shortcomings of hard-coding human-like behaviors through a variety of techniques, but they also trade increased performance for greater opacity.

For instance, under the umbrella of machine learning, techniques such as deep learning (e.g., neural networks and convolutional neural networks) use feature-extraction architectures composed of many neural layers, fed millions of examples to train and adjust the algorithm’s parameters. Thus, natural language processing and self-driving cars are made possible not through hard-coding explicit instructions, but through different types and combinations of deep learning methods.
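A minimal sketch of such a layered network, assuming PyTorch is available: each convolutional layer extracts progressively higher-level features, and the parameters are adjusted from labeled examples rather than hand-coded. Layer sizes and class count are illustrative.

```python
# A minimal sketch of layered feature extraction trained from examples.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # low-level edge/texture features
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), # higher-level part features
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 10),                           # 10 illustrative object classes
)

images = torch.randn(8, 3, 64, 64)               # a batch of example inputs
labels = torch.randint(0, 10, (8,))
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()                                  # gradients adjust the parameters
print(f"training loss: {loss.item():.3f}")
```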

Understanding the results of algorithms that ingest petabytes of data, however, presents a black-box problem for consumers of such system outputs. The march towards greater automation will further blur the lines between human and machine decision-making, as a greater number of outputs become inextricably tied to artificial intelligence and machine learning technologies.

In an AI demo, a robot was used to demonstrate how well the technology allowed robots to understand instructions. Two of the three demonstrations went off perfectly; the robot had to be rebooted during the third when its navigation system locked up.

Trust is the key to getting robots and humans to work together. Soldiers will need to learn the robot’s capabilities and limitations, and at the same time, the machine will learn the unit’s language and procedures.

Big challenges remain. First, many robots are currently too slow for practical use. Second, they need to be far more resilient. All AI systems can go wrong, but military robots have to be reliable in life-and-death situations. These challenges are being addressed.

The big questions for many with the task of designing robots for practical use involve achieving speed and resiliency: how will it be done, when, and in which markets and applications will it have the most impact? Certainly, military field-level computing applications and virtual work spaces are among those most clearly in the crosshairs of machine learning.

Artificial intelligence tech is limited by challenge of accepting outputs from artificial intelligence tools, especially those outputs based on opaque processes. It’s one reason why policy makers were historically slow to embrace artificial intelligence, and particularly neural networks, for decision making.

Neural nets and deep learning approaches have proven effective in chaotic, unstructured field scenarios and problems, such as helping self-driving cars navigate city streets, helping algorithms identify objects in YouTube videos, or helping computers win the high-variability game of Go. But the underlying processes behind them defy easy explanation, and that’s a big drawback.

“The biggest part of the problem of artificial intelligence is: they build these incredibly long algorithms with all of these gates to go through. They push all of this machine learning and data through it. Frankly, we are not entirely sure how all of that works, all the time.”

“If someone sabotages the data that you are feeding your algorithm for learning, we’re not entirely sure what’s going to come out of the other end. It’s going to take a while to figure out how to organize these things to be successful.”

Wouldn’t it be great if we could shoot someone in the face at 200 kilometers? They don’t even know you are there. That’s the kind of man-machine teaming we really want to get after, but Commanders need some way to better explain why they arrived at the decision they did, besides “the machine made me do it.”

“We haven’t cracked the nut on man-machine teaming yet. No one really has. The closest we’ve gotten is the extremely high level of information we’ve pushed to aviators in cockpits.”

Pattern-recognition capabilities of AI have the potential to enable applications ranging from machines that spot and zap bugs to apps that, with the help of augmented-reality displays on smartphones, enable troops to diagnose and solve problems with mission-critical gear.

AI programs are striving to clearly explain the basis of their actions and how they arrive at particular decisions. AI that can explain itself should enable users to trust it, a good thing as increasingly complex AI-driven systems become commonplace. 

Early AI researchers focused on chess because it presents a difficult intellectual challenge for humans, yet the rules are simple enough to describe easily in a computer programming language.

Chess champions use knowledge of the game to ignore most potential moves that would make no sense to execute. The first AI chess programs used heuristics, or rules of thumb, to decide which moves to spend time considering. 
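A minimal sketch of that heuristic pruning, with an illustrative rule of thumb (prefer captures and central squares) standing in for a real chess engine’s evaluation:

```python
# A minimal sketch of heuristic move-pruning: score candidate moves
# with a rule of thumb and only search the most promising few.
def heuristic_score(move):
    """Rule of thumb: prefer captures and central squares."""
    score = 10 if move["captures"] else 0
    score += 3 if move["to"] in {"d4", "d5", "e4", "e5"} else 0
    return score

def prune_moves(moves, keep=3):
    """Keep only the most promising moves for deeper search."""
    return sorted(moves, key=heuristic_score, reverse=True)[:keep]

moves = [
    {"to": "e4", "captures": False},
    {"to": "a3", "captures": False},
    {"to": "d5", "captures": True},
    {"to": "h4", "captures": False},
    {"to": "e5", "captures": False},
]
print([m["to"] for m in prune_moves(moves)])  # -> ['d5', 'e4', 'e5']
```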

Decades ago, computers were automating boring and laborious tasks, like payroll accounting, or solving complex mathematical equations, such as plotting the trajectories of the Apollo missions to the moon. 

Not surprisingly, AI researchers ignored the boring applications of computers and instead conceived of artificial intelligence as computers solving complex mathematical equations, expressed as algorithms. 

Algorithms are sets of simple instructions that computers execute in sequence to produce results, such as calculating the trajectory of a lunar lander, when it should fire its retro rockets, and for how long.

As the centrality of knowledge to intelligence became apparent, AI researchers focused on building so-called expert systems. These programs captured the specialized knowledge of experts in rules that they could then apply to situations of interest to generate useful results. 

If you’ve ever used a program such as TurboTax to prepare your income tax return, you’ve used an expert system. It became apparent that expert systems were difficult to update and maintain, and they would give bizarrely wrong answers when confronted with unusual inputs. 
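A minimal sketch of an expert system’s rule-application loop; the rules here are illustrative stand-ins, loosely in the spirit of tax-preparation logic, not TurboTax’s actual rules:

```python
# A minimal sketch of an expert system: specialized knowledge captured
# as if-then rules applied to a set of facts. Rules are illustrative.
RULES = [
    (lambda f: f["income"] > 0 and f["filing"],
     "taxpayer must file a return"),
    (lambda f: f["dependents"] > 0,
     "taxpayer may claim dependent credits"),
    (lambda f: f["income"] < 12000,
     "taxpayer may owe no tax"),
]

def apply_rules(facts):
    """Fire every rule whose condition matches the facts."""
    return [conclusion for condition, conclusion in RULES if condition(facts)]

facts = {"income": 9500, "filing": True, "dependents": 1}
for conclusion in apply_rules(facts):
    print(conclusion)
```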

The hype around AI gave way to disappointment and the term AI fell out of favor and was superseded by terms such as distributed agents, probabilistic reasoning, and neural networks.

Later, another approach to AI was gaining momentum. Rather than focus on explicitly writing down knowledge, why not try to create machines that learn the way people do? A robot that could learn from people, observations, and experience should be able to get around in the world, stopping to ask for directions or calling for help when necessary. 

So-called machine-learning approaches try to extract useful knowledge directly from data about the world. Rather than structuring this knowledge as rules, machine-learning systems apply statistical and probabilistic methods to create generalizations from many data points. The resulting systems are not always correct, but then again, neither are people. Being right most of the time is sufficient for many real-world tasks.

Artificial neural networks mimic capabilities of the brain in order to recognize patterns. Instead of writing computer code to program these networks, researchers train them. A common method is called supervised learning, in which researchers collect a large set of data, such as photos of objects to be recognized, and label it appropriately.
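A minimal sketch of supervised learning at its simplest: a perceptron trained on labeled two-feature examples. The data and learning rate are illustrative assumptions.

```python
# A minimal sketch of supervised learning: adjust weights from labeled
# examples instead of writing recognition rules by hand.
def train_perceptron(examples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for features, label in examples:          # label is 0 or 1
            pred = 1 if w[0]*features[0] + w[1]*features[1] + b > 0 else 0
            err = label - pred                    # the supervised signal
            w = [wi + lr * err * xi for wi, xi in zip(w, features)]
            b += lr * err
    return w, b

# Labeled training set: two separable clusters.
examples = [((0.1, 0.2), 0), ((0.2, 0.1), 0), ((0.9, 0.8), 1), ((0.8, 0.9), 1)]
w, b = train_perceptron(examples)
print(w, b)
```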

Artificial intelligence has developed in two major waves. The first wave focused on hand-crafted knowledge, in which experts characterized their understanding of a particular area, such as income tax return preparation, as a set of rules. The second wave focused on machine-learning, which creates pattern-recognition systems by training on large sets of data. The resulting systems are surprisingly good at recognizing objects, such as faces.

DARPA believes that the next major wave of progress will combine techniques from the first and second waves to create systems that can explain their outputs and apply commonsense reasoning to act as problem-solving partners. 

Deep neural networks can make use of more data to improve their recognition accuracy well past the point at which other approaches cease improving. This superior performance has made deep networks the mainstay of the current wave of AI applications.

By extracting knowledge directly from data, neural networks avoid the need to write down rules that describe the world. This approach makes them better for capturing knowledge that’s hard to describe in words. 

DARPA’s Autonomous Land Vehicle program developed early self-driving cars. The teams used neural networks to enable their vehicles to recognize the edges of the road. However, the systems were easily confused by leaves or muddy tire tracks, because the hardware available at the time was not powerful enough. Nonetheless, the program established the scientific and engineering foundations of autonomous vehicles.

But deep neural networks are very inefficient learners, requiring millions of images to learn how to detect objects. They are better thought of as statistical pattern recognizers produced by an algorithm that maps the contours of the training data. Give these algorithms enough pictures of objects, and they will find the differences that distinguish the one from the other. For some applications, this inefficiency is not an issue. 

Internet search engines can now find pictures of just about anything. For applications where training data is scarce, neural networks can generate it. An approach called generative adversarial networks takes a training set of pictures and pits Digital Twin networks against each other. 

For application of Digital Twin Tool, one tries to generate new pictures that are similar to the training set, and the other tries to detect the generated pictures. Over multiple rounds, the two networks get better at generation and detection, until the pictures produced are novel, yet usefully close to real ones, so that they can be used to augment a training set. Note that no labels are required for this generation phase, as the objective is to generate new pictures, not classify existing ones.
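A minimal sketch of that adversarial loop, assuming PyTorch, with a one-dimensional “training set” standing in for pictures; network sizes and the target distribution are illustrative.

```python
# A minimal sketch of adversarial training: a generator learns to mimic
# a training distribution while a detector learns real from generated.
import torch
import torch.nn as nn

gen = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
disc = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0        # "training set": N(3, 0.5)
    fake = gen(torch.randn(64, 4))               # generated candidates
    # Detector round: label real samples 1, generated samples 0.
    d_loss = bce(disc(real), torch.ones(64, 1)) + \
             bce(disc(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()
    # Generator round: try to make the detector call fakes real.
    g_loss = bce(disc(gen(torch.randn(64, 4))), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

samples = gen(torch.randn(1000, 4))
print(f"generated mean ~ {samples.mean().item():.2f} (target 3.0)")
```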

DARPA is running a program to develop systems that can produce accurate explanations at the right level for a user. Systems that can explain themselves will enable more effective human/machine partnerships. 

Another approach to machine learning relies on cues from the environment to reinforce good behavior and suppress bad behavior. For example, the program AlphaGo Zero can teach itself to play the board game Go at championship levels without any human input, other than the rules of the game. It starts by playing against itself, making random moves. It uses the rules of the game to score its results, and these scores reinforce winning tactics. 

This so-called reinforcement learning can be highly effective in situations where there are clear rewards for effective behavior. However, determining what behavior created the desired result in many real-world situations can be difficult.
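A minimal sketch of reinforcement learning in a toy corridor world, far simpler than Go but showing the same mechanism: rewards from the environment reinforce the actions that reach the goal. World size, rates, and episode count are illustrative.

```python
# A minimal sketch of reinforcement learning (tabular Q-learning):
# a clear reward at the goal reinforces moving toward it.
import random

N_STATES, GOAL, EPISODES = 5, 4, 500
q = [[0.0, 0.0] for _ in range(N_STATES)]        # value of (left, right) per cell

for _ in range(EPISODES):
    state = 0
    while state != GOAL:
        action = random.choice([0, 1]) if random.random() < 0.1 \
                 else max((0, 1), key=lambda a: q[state][a])
        nxt = max(state - 1, 0) if action == 0 else min(state + 1, N_STATES - 1)
        reward = 1.0 if nxt == GOAL else 0.0     # environment cue for good behavior
        q[state][action] += 0.1 * (reward + 0.9 * max(q[nxt]) - q[state][action])
        state = nxt

print([max((0, 1), key=lambda a: q[s][a]) for s in range(N_STATES)])
# -> mostly 1s: the learned policy moves right toward the goal
```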

As AI increasingly makes its way into industrial settings and consumer products, companies are discovering that its substantial benefits come with costs, in the form of engineering complexity and unique requirements for ongoing maintenance. 

The computational intensity of AI systems requires racks of servers and networking gear, which must be secured against and continuously monitored for intrusions. The unlimited appetite of these systems for data often makes them dependent on many different enterprise databases, which requires ever-increasing coordination of operations across the organization. 

Machine-learning systems must be continually retrained to keep them in sync with the world as it continually changes and evolves. Ultimately, people are still far more effective learners than machines. We can learn from teachers, books, observation, and experience. We can quickly apply what we’ve learned to new situations, and we learn constantly in daily life. We can also explain our actions, which can be quite helpful during the learning process. 

In contrast, deep learning systems do all their learning in a training phase, which must be complete before they can reliably recognize things in the world. Trying to learn while doing can create catastrophic forgetting, as backpropagation makes wholesale changes to the link weights between the nodes of the neural network. 

DARPA Learning Machines program is exploring ways to enable machines to learn while doing without catastrophic forgetting. Such a capability would enable systems to improve on the fly, recover from surprises, and keep them from drifting out of sync with the world.

The real breakthrough for artificial intelligence will come when researchers figure out a way to learn or otherwise acquire common sense. Without common sense, AI systems will be powerful but limited tools that require human inputs to function. With common sense, an AI could become a partner in problem-solving.

The knowledge of a trained neural network is contained in the thousands of weights on its links. This encoding prevents neural networks from explaining their results in any meaningful way. DARPA is currently running a program called Explainable AI to develop new machine-learning architectures that can produce accurate explanations of their decisions in a form that makes sense to humans. As AI algorithms become more widely used, reasonable self-explanation will help users understand how these systems work, and how much to trust them in various situations.

Once trained, current machine-learning systems no longer adapt to their environments. DARPA’s Lifelong Learning Machines program is researching ways to enable systems to learn from surprises and adapt to changes in their environments. The Assured Autonomy program is developing approaches to produce mathematical assurance that such systems will operate safely and predictably under a wide range of operating conditions.

In combination with large data sets and libraries, improvements in computer performance over the last decade have enabled the success of machine learning. More performance at lower electrical power is essential to allow the use of AI both in data-center applications and in tactical deployments.

DARPA has demonstrated analog processing of AI algorithms that operate a thousand times faster using a thousand times less power compared to state-of-the-art digital processors. New research will investigate AI-specific hardware designs and address the inefficiency of machine learning by drastically reducing requirements for labeled training data.

DARPA has taken the lead in pioneering research to develop the next generation of AI algorithms, which will transform computers from tools into problem-solving partners. New research will enable AI systems to acquire and reason with commonsense knowledge. DARPA R&D produced the first AI successes, such as expert systems and search utilities, and more recently has advanced machine-learning tools and hardware. 

Current AI systems seem superhuman because they can do complex reasoning quickly in narrow specialties, but this creates an illusion that they are smarter and more capable than they really are.

The roadmap will build upon these strategies by listing the types of AI that will be needed year-to-year to support military strategy, and by maintaining a firm understanding of what AI is and how it will be used to benefit the organisation. This understanding should go beyond buzzwords and definitions.

Compromise of data through process models could mean the compromise of sources and methods. When AI is introduced, new attack vectors are introduced, such as deep learning spoofing and data spoofing.

If the training data is known or manipulation of data is too predictable, adversaries can easily anticipate and predict actions and outcomes.

Adversaries can spoof sensors and the data collected by those sensors without needing to mess with the underlying model code. Put simply, adversaries do not need to know what is in the box to exploit the box.
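A minimal sketch of that idea, assuming PyTorch: a small gradient-based nudge to the input, in the spirit of the fast gradient sign method, can change a model’s output without touching the model’s code. This sketch uses the model’s own gradients for simplicity; in practice attackers often approximate them with substitute models, which is what lets them exploit the box without seeing inside it. The tiny untrained network is purely illustrative.

```python
# A minimal sketch of input spoofing: nudge the input along the loss
# gradient so the model's answer changes while the model is untouched.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 2))
x = torch.tensor([[1.0, -0.5]], requires_grad=True)
label = model(x).argmax(dim=1)                    # the model's honest answer

loss = nn.CrossEntropyLoss()(model(x), label)
loss.backward()
x_spoofed = x + 1.0 * x.grad.sign()               # a large enough nudge usually
print(model(x).argmax(dim=1),                     # flips the predicted class
      model(x_spoofed).argmax(dim=1))
```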

1. Limitations of data utilisation/availability, and bias of programmers embedded in data sets

2. Shortage of strategic approach to understanding implementation times and integration challenges

3. Not enough usability/interoperability with other systems/platforms, and limited ability to decipher how AI arrives at decisions

4. Trouble transferring learning from one experience to another, and no understanding of causal reasoning

5. Lack of explanation capability, and unable to do complex future planning

6. Difficulty handling unexpected circumstances, trouble dealing with boundary conditions, and lack of context-dependent learning

7. Can’t decide its own learning algorithm based on the situation or carry out self-planning about the best topology structure to use

8. Questions remain about ability to demonstrate multi-domain integrated learning

9. Not enough computing power

10. Vulnerable to adversary attack
0 Comments
