Marine Magnet Dispatch Service Centre

Top 10 Tactical Network Use Cases Highlight Services' Ability to Bring Forward Power/Speed in Contested Scenarios

8/20/2019

The Pentagon is funding and testing new network systems that will help troops maintain critical positioning, navigation and timing capabilities against adversaries that employ area-denial weapons.

Network connections help facilitate information exchanges that benefit weapon systems and their operators in many ways—such as command and control of the weapons, communications, and battlespace awareness. If attackers can access one of those systems, they may be able to reach any of the others through the connecting networks. 

Weapon systems are dependent on external systems, such as positioning and navigation systems and command and control systems in order to carry out their missions—and their missions can be compromised by attacks on those other systems. A successful attack on one of the systems the weapon depends on can potentially limit the weapon’s effectiveness, prevent it from achieving its mission or cause physical damage.

Innovations in sensors, analytics and artificial intelligence promise entirely new warfighting capabilities, serving as force multipliers and enabling new levels of situational awareness. New robotic and autonomous networks promise to reduce the number of warfighters in harm's way, while speeding our ability to project force on the battlefront.

“Soldiers are finding themselves operating in contested environments with degraded or no GPS signal at all. It's our job … to explore new network technologies, to be able to give those soldiers enhanced GPS signals, to be able to operate through those denied and contested environments.”

In a contested environment, “we can't shoot, move or communicate without some sort of new network capability.” The service's multi-domain operations concept “talks about penetration, disintegration and exploitation” — and new networks are needed for all of those.

Troops will be able to see “formations, tanks, weaponry” and have an unprecedented level of situational awareness. “We want to take the flexibility of the enterprise network and put that into the tactical theater.”

Commanders will be able to quickly and reliably access, share and process data at the local level. That’s the inherent advantage of a tactical network. “In the regular network you have a machine that talks to some remote server.” With a tactical network, “you will execute functions on whatever resource is available. They can be mobile and heterogeneous; you can have handheld devices, things mounted on mobile platforms or unattended ground sensors. We should be able to execute computing on any of these resources.”
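
The quoted idea of executing functions on whatever resource happens to be available can be sketched as a simple dispatcher. This is an illustrative toy, not any fielded system: the node names, fields and selection rule below are all assumptions.

```python
# Hypothetical registry of heterogeneous tactical nodes and their state.
nodes = [
    {"name": "handheld-1", "type": "handheld", "cpu_free": 0.2, "reachable": True},
    {"name": "vehicle-3", "type": "mounted", "cpu_free": 0.7, "reachable": True},
    {"name": "ugs-7", "type": "ground-sensor", "cpu_free": 0.9, "reachable": False},
]

def dispatch(task, nodes):
    """Run a task on whichever reachable node has the most free compute."""
    candidates = [n for n in nodes if n["reachable"]]
    if not candidates:
        raise RuntimeError("no reachable node; queue the task for later")
    best = max(candidates, key=lambda n: n["cpu_free"])
    return task(best)

print(dispatch(lambda n: f"ran on {n['name']}", nodes))  # ran on vehicle-3
```

A real tactical scheduler would also weigh link quality, battery and security posture, but the shape is the same: enumerate the reachable resources, then place the task.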

That ability to leverage big-muscle computing on a much smaller network platform better able to move quickly should give commanders the inputs that they need to react more effectively to fast-changing circumstances.

“You can plug a bunch of sensors into it — basically any device that measures something — to give you instant situational awareness. You can have a vibration sensor, so that if there is firing, you can localise the shooter.

“You can measure the electromagnetic spectrum to find where the enemy is. Then you can put those things together over the network to find a rocket launcher at a particular site, and then tie in a targeting device to pinpoint and destroy it.”
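
As a toy illustration of fusing networked acoustic sensors to localise a shooter, the sketch below grid-searches for the point whose predicted time-differences-of-arrival (TDOA) best match the measured ones. The sensor positions, the grid, and the noise-free simulated measurements are all assumptions; a fielded system would use a nonlinear least-squares solver on noisy data.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s

# Hypothetical sensor positions (metres) at the corners of a 100 m square.
sensors = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (100.0, 100.0)]
true_source = (62.0, 41.0)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Simulated, noise-free arrival times of the sound at each sensor.
arrivals = [dist(s, true_source) / SPEED_OF_SOUND for s in sensors]

def locate(sensors, arrivals, step=1.0, size=100):
    """Grid-search the point whose predicted TDOAs best match the measured ones."""
    best, best_err = None, float("inf")
    for xi in range(size + 1):
        for yi in range(size + 1):
            p = (xi * step, yi * step)
            pred = [dist(s, p) / SPEED_OF_SOUND for s in sensors]
            # Sum of squared time-difference errors relative to sensor 0.
            err = sum(((pred[i] - pred[0]) - (arrivals[i] - arrivals[0])) ** 2
                      for i in range(1, len(sensors)))
            if err < best_err:
                best, best_err = p, err
    return best

print(locate(sensors, arrivals))  # recovers (62.0, 41.0)
```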

In this way, network advances support a faster operations tempo, allowing commanders to act on valuable sensor data in real time. Being able to communicate what you see locally enables you to take action locally. Tactical networks will also be a key component supporting emerging artificial intelligence capability.

“The vision for future warfare is moving toward fewer people in the field — taking people out of harm’s way and adding more things instead.” Just as soldiers need to communicate locally, enabled devices will need robust connectivity and data sharing. “Network devices will need the same ability to coordinate locally amongst themselves, to plan collectively and deal with changing situations.”

In the bigger picture, network capability deployed at the edge could substantially reduce the logistics/supply burden, streamlining technology requirements in the command post and simplifying the hardware aspects of a military deployment.

“Rather than being locked into a certain number of servers in your logistics/supply footprint, we could virtualize and share that compute and storage across a range of applications. It’s going to reduce the total amount of hardware in the command post, taking that network scale and deploying it in a much smaller footprint.”

Localized networks could also ease the pressure on satellites and other networking infrastructure, with processing at the edge freeing those assets for higher-end uses.

Even as the service readies for tactical networks, it wants to nail down the best uses for this emerging compute paradigm. “We need to figure out what is the optimal use of new network technology, what are some of these novel and unique uses that could potentially be involved, and what are the most effective uses near term.”

The service also will investigate practical means for scaling out tactical networks for widespread adoption. “It is a non-trivial task to roll this out across something the size of our Force.”

Our adversaries possess increasingly sophisticated communications and information technology allowing them to leverage advanced wireless technology and smartphones while actively moving in ground vehicles. They are also able to execute network and electronic warfare attacks in tactical settings that can cause substantial, intermittent and lasting damage to forces' ability to communicate.

Specific to network modernization, communicating securely with command-and-control and other units within the increasingly communications-reliant battlefront landscape is critical to ensure the success of the mission and the safety of warfighters. However, as future battlefields develop and missions require units to be mobile and support many tactical capabilities, critical communications infrastructures are becoming more difficult to establish and maintain.

Additionally, innovations in network sensors, robotic and autonomous systems, analytics, artificial intelligence and deep learning are driving tactical network developers to consider deploying warfighting systems that are highly reliant on high-performance computing and storage. 

But in the face of potentially degraded communications, those resources may only be available if deployed all the way out to the individual warfighter or small teams conducting operations in austere and hostile environments, such as forward operating bases or combat vehicles, locations known as the tactical network's edge.

Capabilities that enable DoD and warfighters to modernize their tactical networks and maintain overmatch through better communication include command post mobility, secure wireless communications, operational security and edge computing. 

Edge computing enables warfighters to gain access to data previously available only at large data centers, including access to more robust services even when wide-area network access is down.

The Services’ tactical networking and command post programs widely acknowledge the critical need to improve mobility. The current state of the art for tent-based command posts requires hours of setup, including thousands of feet of copper wiring, which delays network availability and results in a dangerous lack of situational awareness for commanders.

Currently, troops who jump from one location to another typically do so in phases, with tent infrastructure, generators, network servers and satellite links going up first, followed by the running of cables to provide the local area network command post support. This process translates into long delays in availability of critical information services, which, in turn, can lead to increased vulnerability of troops and their systems.

Defensive postures of the past applied to a much more stationary battlefield environment. It was simply assumed that communications would be limited as warfighters moved from position to position. But technology advances by adversaries demand that our warfighters have the same secure communications experience while on-the-move as they do at-the-halt. At the same time, communications solutions must be delivered in a smaller form factor, whether to fit on the back of a Soldier or in a High Mobility Multipurpose Wheeled Vehicle.

And the fight does not stop just because you are moving. This is why defense forces need networking on-the-move capabilities. On-the-move means communications components that are smaller, made more rugged to adapt to mobility over any terrain, and reliable in the face of unanticipated conditions such as poor power sources and extreme temperatures.

In other words, situational awareness cannot wait until troops establish an at-the-halt position. Entering a dynamic tactical environment "blind" puts warfighters at a significant disadvantage, which can lead to loss of life and mission failure. There is a need to ensure that transportation vehicle and aircraft operators can communicate directly with a warfighter's headset, and vice versa, while en route to the destination.

True mobility demands innovation and modernization designed to reduce size, weight and power requirements. Not only do dismounted Soldiers need mobility, but so does the network infrastructure to support them.

All else being equal, communications equipment can never be too small, too light or too power-efficient. In contrast to legacy data-center-style, rack-mounted equipment, new generations of equipment designed for tactical and expeditionary use are becoming available with enterprise-grade networking and security technologies.

The DoD shift from wired to wireless battlefield and in-theater communications has been slowed by warfighters' not being able to securely transmit classified information over wireless networks. 

Without the confidence to share classified data securely, warfighters lose situational awareness relative to adversaries who can move faster and accept more security risk. This is particularly problematic when it comes to how defense units have historically operated in tactical environments.

While it was possible in the past for warfighters to use Wi-Fi and radios to transmit classified data, it was prohibitively expensive and required both ends of the connection to be staffed with Soldiers cleared to take possession of classified security hardware.

As a result, wireless networking was primarily limited to site-to-site communications, and warfighters did not have the ability to use mobile devices for classified warfighting operations.

Tactical networks need data center-like compute, networking and storage capabilities at the edge to support applications including situational awareness, mission command and command-and-control applications; signal and image data gathering and analytics workloads; and sensor fusion-based applications.

Tactical networks must unify access to data and applications from the enterprise level to the tactical edge. This means deploying high-tech services at the tactical edge of the network, so that data is available at the edge even when connectivity is unavailable.
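
A minimal sketch of that edge pattern, under assumed names, is a local replica that keeps serving data when the wide-area link drops:

```python
# Illustrative edge-caching pattern: serve data from a local replica when the
# wide-area link is down. Class and field names are hypothetical.
class EdgeNode:
    def __init__(self, wan):
        self.wan = wan          # callable fetching from the enterprise data center
        self.cache = {}         # local replica held at the tactical edge
        self.wan_up = True

    def get(self, key):
        if self.wan_up:
            try:
                value = self.wan(key)
                self.cache[key] = value   # replicate locally on every fetch
                return value
            except ConnectionError:
                self.wan_up = False       # degrade gracefully
        # Disconnected, intermittent, or limited (DIL) mode: serve stale-but-local.
        if key in self.cache:
            return self.cache[key]
        raise KeyError(f"{key!r} unavailable until connectivity is restored")

node = EdgeNode(wan=lambda k: f"value-of-{k}")
node.get("map-tile-17")         # cached while connected
node.wan_up = False             # simulate a jammed or severed uplink
print(node.get("map-tile-17"))  # still served from the edge cache
```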

At the same time, new network tech is raising the table stakes significantly when it comes to the volume and complexity of devices and sensors in tactical environments that can be compromised. From wearables on the troops themselves to connected tanks, helicopters and drones, network interconnectivity is only an advantage as long as it is secure, trusted and available.

A new class of modular, tactical data centers is becoming available for tactical and expeditionary programs, capable of hosting storage, artificial intelligence and analytics applications. Using ultra-small form-factor modules for compute, storage and networking functions that reduce size, weight and power requirements, these systems can be deployed dismounted, at forward operating bases, in command posts, and on ground vehicles and aircraft, supporting a diverse array of use cases in disconnected, intermittent and limited environments.

The next generation of wireless networks is coming.

Sometimes the most valuable military assets are those you can’t see. That may be the case with 5G—the lightning-fast, next-generation wireless network technologies now being brought online.  

5G is more than a better, faster network: its transformative data-sharing and communications capability will radically increase bandwidth and speed, enabling new technologies such as artificial intelligence to deliver their full potential over wireless connections.

The result is enhanced command and control and situational awareness for everything from driverless technology and improved traffic flow to predictive maintenance and high-fidelity simulation and training.  

“Our base areas, we tell them, are pretty large—it’s not just on the flight line, it’s a big, huge circle.” Coverage is focused on where the network will be used, however. “We didn’t want to go from fence line to fence line. We concentrated on where the populations are working.”

The Pentagon is considering how to deal with network traffic and establishing alternative paths for data where and when some part of the spectrum is unavailable. 

So what might the Services achieve if it can securely move data at much increased rates? Imagine high-speed wireless connectivity supporting flight line test equipment, or lightning-fast animated simulation and training, or full-motion, high-definition video to instantaneous artificial intelligence. The same capabilities could also be used in targeting and command, control, and communications for hypersonic weapons.

“Flight line operations and maintenance teams can leverage secure tablets within a secure 5G environment to view real-time inventory and schematics, better utilize spare parts, manage aircraft diagnostics solutions, and more.” A “flight line of the future” project is already underway that would bring secure wireless connectivity to maintainers where they work.

A DoD-wide 5G experiment will look to create a “smart” port or depot by connecting maintenance and test systems together, automating work that airmen do manually today.

“It’s a way of actually knowing where all your assets are going.” Smart systems could automatically track where spare parts are installed, eliminating time-consuming paperwork. “We can link together, dynamically, what parts are existing in a particular component based upon the tags they have already attached to them.”
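
The tag-driven asset picture described in the quote can be sketched as a simple fold over tag reads. The tag IDs and component names below are hypothetical:

```python
from collections import defaultdict

# Each read pairs a part's tag with the reader location (component) that saw it.
tag_reads = [
    ("tag-0451", "engine-bay-3"),
    ("tag-0452", "engine-bay-3"),
    ("tag-0453", "avionics-rack-1"),
    ("tag-0451", "engine-bay-3"),   # duplicate reads are harmless
]

def assets_by_component(reads):
    """Build a live picture of which tagged parts sit in which component."""
    picture = defaultdict(set)
    for tag, component in reads:
        picture[component].add(tag)
    return {c: sorted(tags) for c, tags in picture.items()}

print(assets_by_component(tag_reads))
# engine-bay-3 holds tag-0451 and tag-0452; avionics-rack-1 holds tag-0453
```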

5G also promises to enable multi-domain operations, speeding decision-making through enhanced situational awareness. “The question is, when we have the capacity to move a lot more around, will other things happen? Will we move so much information around that our problem will be sifting through the information and figuring out what it means?”

“Mission planning may be more interesting. We don’t know how they’re going to do that because we haven’t talked about it yet. If we have good 5G connectivity, you may be able to get intelligence, surveillance, and reconnaissance information into your virtual reality headgear and understand it better.”

Training will be a beneficiary of increased speed and decreased latency or delay.    

“The intersection between AI and 5G will be really interesting, and it goes both ways. You’ll need AI to manage 5G systems, given the complexity, and the connectivity that 5G provides will allow increased machine-to-machine communications so data fusion and AI capabilities run more smoothly.”

Someday, fiber and wireline networks will be replaced with a totally wireless solution. Wireless infrastructure could be transformative. “We view it as a platform to explore how we can ultimately benefit from the power of 5G and a network that delivers faster speeds and response times.”

“We can explore the future potential for a massive number of smart connections that can enable new mission capabilities like never before.”  

1. Advances in mobility and secure wireless enable DoD and warfighters to quickly relocate command posts in theater and give commanders more flexibility and options to manoeuvre.

2. Predictive maintenance can benefit from networks at the edge to support efforts to use AI's predictive power to streamline maintenance and improve readiness.

3. AI can help commanders explore multiple possible courses of action, while best positioning the force to retain the initiative for future operations.

4. Leverage a range of GPS, traffic, weather, terrain and local friendly/enemy force information for optimal mission planning. The ability of military hardware to chart its own course may also come into play.

5. Integrating previously uncooperative systems with a common, open-architecture network infrastructure enables the use of common computing and storage assets, which lowers costs.

6. Composable systems and a network design that creates a level of abstraction for hardware component infrastructure allow for rapid adjustment of system components to meet changing needs.

7. Modular and scalable architecture facilitates continuous updates to capabilities and lowers overall total cost of ownership.

8. In the event of degraded long-distance communications caused by electronic warfare attacks, those compute and storage resources may only be available if deployed to the tactical network's edge.

9. Command posts using secure communications modules can function in a deployed environment. Advances in wireless system availability enable mobile tactical networks.
10. Artificial intelligence and machine learning will benefit from advances in computing power. “Both will be critical regarding how soldiers use tactical networks to interpret and use data.”

Top 10 Supply Line Network Delivery Logistics Support Mobile Trainer for Distributed Operations

8/20/2019

The Marine Corps’ logistics community is pursuing unmanned vehicles, additive manufacturing and “sense-and-respond logistics” to eventually support Marines deployed in small, dispersed units that cannot rely on the traditional “steel mountain” logistics network model.

Whereas Marines have previously resupplied by transporting a massive amount of fuel, water, ammunition and more from ship to shore and then putting it on trucks for distribution, the Marines’ future operational plans – distributed operations by small units – as well as increasing adversary capabilities will require network innovation in how troops are sustained in the field.

Following the Marine Corps Installations and Logistics Roadmap released last year, the service is pursuing new network tech to advance lift and distribution missions, supply and maintenance work, and other requirements.

“So we can have moving targets to practice logistics manoeuvres required to resupply Marines in remote theatres. We can integrate those networks with simulated forces that are moving towards an objective, so you can validate that you can turn off your fires at the right time.”

The vision: to embed interconnected, high-power computing capability into a wide array of drones, soldier-worn sensors and vehicle-mounted intelligence, surveillance and reconnaissance tools. Rather than collect data in the field and ship it back for processing — a time-consuming and bandwidth-intensive process — the Services would leverage the integrated networks to process on the edge. 

So, how would tactical networks transform the fighting force? Better data, faster. Edge computing enables warfighters to gain access to data tools previously available only at large data centers, even when wide-area network access is down.

A field commander pulls down drone imagery of a potential danger zone. With local network processing, that imagery could be enhanced by multiple data feeds in real time. “Maybe a squad defused a mine here yesterday and that will be indicated here. Maybe there’s a choke point where you might have an ambush. You could overlay those maps and live UAV feeds with recent intelligence data and recent operational data.”
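
The overlay described above amounts to merging annotation layers onto the cells of a live feed. The grid IDs and annotations here are invented for illustration:

```python
# Hypothetical grid cells covered by the current UAV frame.
uav_frame_cells = ["G-10", "G-11", "G-12"]

# Separate annotation layers keyed by grid cell (illustrative content).
intel_layer = {"G-11": "mine defused yesterday"}
ops_layer = {"G-12": "possible ambush choke point"}

def annotate(cells, *layers):
    """Collect every layer's notes for each cell in the live frame."""
    merged = {}
    for cell in cells:
        merged[cell] = [layer[cell] for layer in layers if cell in layer]
    return merged

print(annotate(uav_frame_cells, intel_layer, ops_layer))
# G-11 carries the intel note, G-12 the operational note, G-10 nothing yet
```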

The ability to manage vast swaths of data via computing at the edge could enhance soldiers’ ability to see one another. “Instead of the division command seeing battalion commanders and battalion commanders seeing company commanders, now you can do it with networks at the small unit level. You can see the individual soldiers.”

“One, all of the operational design statements we’re making — ‘I’m going to have field located repair’ or ‘I’m going to have logistics support’ — all of those have to be tied to a supply deliverable in that network, so we assure ourselves that not only are we thinking about it, we’re buying it and we’ll deliver it.”

We’re taking a look at the requirements our capability officers outline for us in terms of performance characteristics: What are our supply availability measures? What is the mean time to repair? How maintainable, reliable and available should the system be? We’re linking those supply requirements to what we’re calling an operational sustainment review.

Under new Logistics Regime, maintainers would “have historical trends in terms of what parts break, and then you’re only bringing those” from the outset of the deployment, and “you can very accurately calculate what your parts acquisition over the network should look like” for resupply efforts.

Aircraft grounded awaiting replacement parts can be traced to the predictive network modeling that underlies how DoD and original equipment manufacturers calculate target availability for spare parts. These models are fed forecasts that, while trustworthy in some contexts, are ill-equipped for the volatility that affects DoD operations. Contingency operations, continuing resolutions, and disruptions within the global supply chain unduly impact the demand for parts, sending the best current forecasts askew.

Forecasts of aircraft spare parts requirements typically calculate the ratio of future flying hours to past flying hours for the aircraft using a specific part, then apply that ratio to historical demand for the part. We conclude this technique results in bad estimates mostly because 1) the correlation of specific parts to specific aircraft types is not accurate, 2) it does not base projections on the many combinations of aircraft employing a part, and 3) it does not incorporate the influence of lengthy procurement times into future flying-hour estimates.
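
A worked sketch of the ratio method being critiqued, with illustrative numbers:

```python
# Illustrative numbers only: scale a part's historical demand by the ratio of
# planned future flying hours to past flying hours.
past_flying_hours = 10_000
planned_flying_hours = 12_500
historical_demand = 40        # units of the part consumed over the past period

ratio = planned_flying_hours / past_flying_hours   # 1.25
forecast_demand = historical_demand * ratio
print(forecast_demand)  # 50.0

# The critique in the text: this point estimate ignores which mix of aircraft
# actually drives the part's failures, and long procurement lead times mean
# the flying-hour plan can change before the order even arrives.
```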

Unstable demand puts original equipment manufacturers in a position of producing parts only when other, more certain production priorities can accommodate it. This leaves DoD with lead times of months or even years for certain parts. Long lead times make forecasts even more unreliable.

The F-35 program’s shortcomings in this regard have been observed before. In our support to the logistics  network branches over the past decade, we found these models are subject to large errors, both in evaluating the tradeoff between wait time to issue parts against cost and in the recommended mix of spare parts to attain specific aircraft availability targets.

We found a better way forward. In an uncertain world, it’s best to hedge your bets and remove forecasting from the equation. Use a robust network strategy that will likely lead to a good outcome, whatever may happen.

Advanced networked algorithms now make such a strategy possible. Forgoing forecasts and variances as inputs, these hedging algorithms use transactional data to produce control levels that generate a better mix of spare parts regardless of the demand scenario. This means more of the parts requested by maintenance are available when needed and fewer parts sit on the shelf.

Success of new logistics network tech has spurred DoD to pursue hedging models as a means to reduce inventory costs, but the approach has yet to become the norm. Success within the F-35 program could change that. Certainly the potential exists for benefits to be realized across DoD, as well as to support original equipment manufacturers and performance-based logistics outcomes.

Hedging models help decision makers see the tradeoff between key interests (e.g., inventory value, average wait time for parts, and total annual buy and repair spending) and set a path that meets multiple potential demand scenarios, enabling DoD to be better prepared for the unpredictable.
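
A minimal sketch of the hedging idea, with invented costs and scenarios: rather than optimizing a stock level against one forecast, choose the level whose worst-case cost across several demand scenarios is lowest.

```python
# Illustrative costs and demand scenarios, not drawn from any real program.
HOLDING_COST = 1.0      # per unit left on the shelf
SHORTAGE_COST = 10.0    # per unit of unmet maintenance demand

demand_scenarios = [20, 35, 60]   # e.g., steady-state, surge, contingency

def scenario_cost(stock, demand):
    leftover = max(stock - demand, 0)
    shortfall = max(demand - stock, 0)
    return HOLDING_COST * leftover + SHORTAGE_COST * shortfall

def robust_stock_level(scenarios, max_stock=100):
    """Pick the stock level that minimizes the worst-case cost over scenarios."""
    return min(range(max_stock + 1),
               key=lambda s: max(scenario_cost(s, d) for d in scenarios))

print(robust_stock_level(demand_scenarios))  # 57
```

No single scenario is treated as the forecast; the chosen level hedges between overstocking in the quiet case and shortage in the contingency case, which is the flavor of the control levels the passage describes.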

Supply chain hedging may not be a quick fix to the F-35 program, but with the prospect of greater fleet availability and lower supply chain costs, it could be a lasting one.

“It is an exciting time for applying new network tech to field-level logistics operations and we are thrilled to be a part of it. It is exciting to look at things with a fresh perspective and not the one existing orders and policies suggest you should take. You can't have innovation if you are beholden to systems that were created years ago.”

ALIS is a high-tech computer system that informs maintainers of upcoming aircraft maintenance and the parts required to help sustain the aircraft. The Marine Corps is “uncertain how long the F-35 can effectively operate” if the Autonomic Logistics Information System, or ALIS, network becomes “disconnected from the aircraft.”

ALIS looks at “operational data, sorties, it could include weather, the history of the part, was there repair work done on it before? We’re really on a higher level than any of these subsystems, including ALIS.” The hope is to be able to preposition parts and maintainers to make fast repairs or modifications not only in response to what the plane has been through but, perhaps, what it’s about to go through as well.

At an exercise near Twentynine Palms, California, the Corps recorded “issues related to the tents used to house the ALIS,” the “need for maintaining network connectivity,” and the limited reach-back support for ALIS. The exercise also showcased the “F-35 using its sensors to share data with legacy platforms” and its better stealth capability over other aging aircraft. Classified facilities are required “to meet basic cooling and power requirements for housing the ALIS servers.”

DoD plans to continue evaluating ALIS's performance and agrees that future testing is worthwhile. A recommendation that information be made more accessible across the services operating the F-35 has already been accepted by the Joint Program Office and the Pentagon.

Controlling maintenance costs has been a challenge because of problems getting spare parts delivered in time, especially to forward locations. The Autonomic Logistics Information System (ALIS) has been described by the workforce as hard to use and as requiring the use of contractors for longer periods than originally expected.

The contract provides for "air system maintenance; pilot and maintainer training; depot activation; sustaining engineering; Autonomic Logistics Information System [ALIS] support; predictive condition management; and supply chain logistics."

We fixed problems with the electronic equipment logs that were showing false positives, so the maintainers get to focus on things that are actually broken, not things that are merely reported as broken.

They fixed the scheduler, which had mismatches between the flight line system and ALIS, and they are currently working on things that are going to help maintainers do their own workflow on the flight line.

There is a lot more to go for them. They’re putting Wi-Fi out on the flight line so that you can touch ALIS at the flight line, which currently you can’t. Maintainers have to go do their maintenance and then come back and enter data in the subsequent systems, and it doesn’t make sense to create data once and then have to replicate it again.

We want maintainers to be able to have ALIS in a protected, secure Wi-Fi network at the flight lines; that data is instantly uploaded. We’ve got work to go to get the accreditation done so that we could reach all the way back into the standard operating unit.

Where’s the chalk line that we switch to the new methodology? We have to have enough development teams to do it and support the level and scope of the network. With agile tool development, you want to have exposure with the user. Once those apps were deployed, what was the feedback like? Did users want to see additional fixes, or were the apps coming out well already?

When final deployment was done, the digital tools were what the users wanted. The users are involved from the beginning. Step one is the coders leaving their coding shop, going out to the flight line and sitting down, walking through how ALIS works and how the rest of the maintenance planning tools work.

Lessons learned from this hybrid network model, as more and more unmanned systems are brought in, could lead to a future model that relies even less on troops directly moving goods.

On supply and maintenance, 3D printing could help flatten out the steel mountain by reducing the quantity of spare parts units would have to bring with them or order for resupply. With 3D printers at key network nodes in the resupply chain, components could be manufactured closer to where they are needed.

Lessons learned about capabilities and limitations from 3D print experiments will not only inform future logistics decisions but could also get the service out of a tight spot in the near term.

“In our world, if we can get – and we’re really focused on it – unmanned aerial capability, 3D printing, sense-and-respond networked  logistics. If we could just really focus on those and mature those, that would be pretty good.”

The 3D printing process is one way to support sustainment programs. The team has a wide variety of things to maintain, including many parts that would require an expensive bulk buy when they really need only one or two. Examples include mic switch knobs, crew compartment panels, sun visor brackets, and armrests.

The unit already uses 3D printers in various sizes, but they want others with flexible network technologies to support making parts that engineers haven’t necessarily thought of yet. The standby compass cockpit panel dashboard is something the unit is currently prototyping. 

The in-house 3D networked process is much faster than going back and forth with outside suppliers for parts. “Once you get the geometry, you can print it overnight and have it the next day.” With diminishing sources of supply, the benefit for us is the flexibility and the agility to respond to a warfighter need.

“The benefit of the 3D supply network is speed. That’s our bottom line.”

Timely parts supply is a problem Marines know all too well.

Every deployment is  full of horror stories about lengthy wait times over old supply networks for specific items needed to fix a piece of gear. While high-cost items are generally kept in stock, getting a hold of small, less expensive plastic parts often proves problematic.

Going from a sometimes 60-day wait to printing solutions on demand with same-day service “has the potential to be revolutionary,” especially for forward-deployed forces. 3D printing may not completely replace current supply chains, but 3D printing ordered over new networks is a great opportunity to augment existing capabilities.

Wait times for parts can be especially long for obsolete equipment. A Marine squadron recently used a 3D printer to make an aircraft part they needed. That was after the Marines tried to cannibalize the part from a decommissioned aircraft.

Compared to using the existing networks, it didn’t take Marines with the maintenance battalion very long to learn the network scanning and printing process. Once the system was online, Marines were printing solutions within hours.

They've focused on small plastic pieces common to communications gear. For example, the bracket that sits on the front of some radios is known to break with regularity — and it's not something Marines can replace. Tired of cannibalizing other sets, radio operators turned to 3D printers to produce new brackets.

The technology has been used to produce everything from tools and phone jack plates to gas caps and radio brackets. “We can use this technology to quickly manufacture a prototype and match it up against the actual repair we need to do to test fit, form, and function. If that is good to go, we can move to the next step.”

In the near future, 3D printing technology will provide engine and mechanical parts that typically have long wait times over existing supply networks and high costs. The services are planning to eventually use 3D printing to produce fully functioning unmanned vehicles.

1. Higher Efficiency Rate

Incorporate integrated logistics and product innovation strategies to predict demand and act accordingly. This is, without doubt, one of the main supply chain network benefits: when you implement supply networks, you can adjust to emerging markets and shorter product life cycles.

2. Increases Output

One of the main benefits of supply networks is improved communication. Working with transport companies, vendors, and suppliers to improve information flow and streamline distribution makes the network more reliable for users.

3. Boost Cooperation Level

The most successful units right now all have high-functioning network communication in common. When you open your doors and embrace the technology, you can also take advantage of troops not needing to share the same physical location.

4. Lowers Time Delays in Processes

With good communication, you can lower delays in processes and use the network to mitigate late shipments from vendors, logistics errors in distribution channels, and hold-ups on production lines.

5. Enhanced Information Integration

One of the best ways to maintain a supply network is by combining all the information gathered across the different sectors of your business to allow for enhanced operational alliances.

6. Inventory Buffers

Variability in customer spending requires taking charge of networked inventories to minimise holding costs while also providing enough flexibility to meet customer demand. If your inventory falls too low, you may have to work overtime to produce more products.
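A common way to size such a buffer is a reorder point with safety stock. The sketch below is a generic illustration, not drawn from the article; the demand figures and the 95% service-level factor are assumptions.

```python
import math

def reorder_point(avg_daily_demand, demand_std, lead_time_days, service_z=1.65):
    """Reorder-point sketch: expected demand over the resupply lead time,
    plus a safety-stock buffer sized to demand variability.
    service_z=1.65 corresponds to roughly a 95% service level."""
    cycle_stock = avg_daily_demand * lead_time_days
    safety_stock = service_z * demand_std * math.sqrt(lead_time_days)
    return cycle_stock + safety_stock

# Illustrative numbers: 20 units/day average, std dev of 5, 9-day lead time
rop = reorder_point(20, 5, 9)   # 180 cycle stock + 24.75 safety stock
```

Inventory is reordered when on-hand stock drops to this level; note that a shorter lead time, as with on-demand printing, shrinks both terms of the buffer.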

7. Mitigate Your Risks

Identify critical risk factors in the field. If you are concerned about product compliance or quality, put your energy towards a supply network management system to help you mitigate limitations moving forward.

8. Stay On Top of Demand

Integrated networked logistics and product innovation strategies can predict demand, accommodate shorter product life cycles and emerging markets, and ensure an adequate response to changes in demand.

9. Enhanced Application Design

Supply network applications can make remarkable changes in any organization to provide smooth execution of tasks to achieve long-term mission profitability.

10. Track Activity of Suppliers and Distributors
​
Supply networks link up all application systems, even at remote locations, and automatically improve collaboration between partners. This gives the opportunity to share information with stakeholders and accelerate tasks like reporting, forecasts, order statuses, quotations, and transport plans in real time.

Top 10 Network Directorate Risks Trained on Surveillance Data Provide Visibility of Future Performance

8/20/2019


 
​New Pentagon network projects seek to capture more data about real-world battlespace — then train computers to out-think human captains.

Even today’s best sonar technology doesn’t give a sub captain a very good sense of the battlespace. “What the submariners get is a low-dimensional picture. So if you are towing an array, you get information like bearing and sometimes frequency information.

There’s a lot of potentially valuable data that towed sonar doesn’t capture because it’s only collecting one type of data and only at one point. If you could collect and properly analyze sound and wave data from other points in the ocean, you could develop a much better sense of what an adversary is doing. 

We want to explore “analytic techniques linking physical oceanographic variability with acoustic propagation, including field efforts to collect relevant data sets. Analysis of large oceanographic and acoustic data sets, including the development and use of artificial intelligence and machine learning techniques.”

We need to turn all that new undersea-sound-propagation data into 3D pictures that submarine crews could use to immediately see where they were in relation to other objects and figure out how they might move to get a better picture or hide their activities.

A 3D battlespace enables submariners to understand where they are geologically, in terms of ocean topography; where they are geographically; and what their radiated noise levels are, which is something the Navy is very concerned about because they don’t want an adversary to know they can be heard.

“If we know this adversary is located along a certain bearing angle, then where might we go to optimise our ability to localize them? Those are decisions that submariners are making all the time. They’re really playing a game. They want to find the bad guy without the bad guy knowing that we know where they are.”

If you can take that data and use reinforcement learning, essentially showing the software lots of examples of submarine captains executing missions, you could train an AI that would outperform a human crew on some of these decisions.

“We can watch them while they are making these decisions and then the reinforcement algorithm will learn what they’re doing in order to minimise ambiguities and understand what the results are.” 
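One minimal way to “learn what they’re doing” from logged decisions is to imitate the majority choice operators made in each situation. The sketch below is a toy stand-in for the far richer reinforcement-learning approach described; the state and action labels are invented for illustration.

```python
from collections import Counter, defaultdict

def fit_policy(demonstrations):
    """Imitation sketch: tally the action experienced operators chose in
    each observed state, then adopt the majority action per state."""
    tallies = defaultdict(Counter)
    for state, action in demonstrations:
        tallies[state][action] += 1
    return {s: votes.most_common(1)[0][0] for s, votes in tallies.items()}

# Hypothetical (state, action) pairs logged while watching crews work
demos = [("contact_bearing_045", "close_range"),
         ("contact_bearing_045", "close_range"),
         ("contact_bearing_045", "hold_course"),
         ("no_contact", "run_search_pattern")]
policy = fit_policy(demos)   # policy["contact_bearing_045"] == "close_range"
```

A real system would learn from outcomes as well as choices, but even this tally illustrates how recorded decisions become a reusable policy.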

The rule that we're using in the battle management system is we don't start talking platforms until the end. That's probably going to be paramount for us for the first few years of the program. It is so easy to start talking about satellites and airplanes and forget what the battle system is going to have to uniquely champion: the data architecture that will connect them.

The team is doing a great job of starting to write down requirements for our data architecture, leveraging a lot of work that’s already underway in our space-development portfolio. There’s an initiative no one’s heard of called the Unified Data Library.

It’s not a very exciting name, but it’s a great system. Unified Data Library is used currently for space situational awareness. Data pipes in through a variety of sources, it’s able to be addressed in multiple layers of classification and we do microservices on top of it that are used by different users.

Step one, the thing we got to get right first, is that data architecture because people need to build systems that populate according to whatever standards we’ve fixed.

Step two, once we get the data architecture defined, will be the requirements for the population of the data. Maybe one sensor needs to be able to fill a gap that others are creating. We’re going to have to look at requirements at the system level and then tell satellites: “You need to be able to provide this level of data at this refresh rate. UAVs need to be able to meet this rate" and so on.

Once we do that, then we'll be into the traditional part of the acquisition, which is building those satellites, building those UAVs.

We’ve got to do demonstrations along the way, so expect to do yearly demonstrations. The first one we want to get to is ad hoc mesh networking so that we get the same kind of advanced connection effect where things start working together without humans having to control them.

So we don’t have to wait for the full architecture to be fielded. And it really will never be done; the battle system will evolve over time as we put up satellites and UAVs. More things connecting to the network will make the system of systems better.
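The ad hoc mesh idea, nodes relaying for one another with no central controller, can be sketched as a flood over whatever peer links currently exist. The node names and links below are invented for illustration.

```python
from collections import deque

def reachable(links, source):
    """Mesh-relay sketch: breadth-first flood from `source` over the
    current peer links, returning every node the message can reach."""
    adj = {}
    for a, b in links:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, queue = {source}, deque([source])
    while queue:
        node = queue.popleft()
        for peer in adj.get(node, ()):
            if peer not in seen:
                seen.add(peer)
                queue.append(peer)
    return seen

# Hypothetical links: the ship reaches both satellites via UAV relays
links = [("sat1", "uav1"), ("uav1", "uav2"), ("uav2", "ship"), ("sat2", "sat1")]
```

Adding a new platform is just adding links; nothing central has to change, which is the “things start working together without humans having to control them” effect.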

The battle system will soon be well definitized in terms of the lines of effort: the data architecture; a line for artificial intelligence, because we are not going to be able to pass all the data collected across the networks; a networking component; and then, at the end, the platforms that provide the data.

This has got to be about the highway, not the trucks that are on it. Step one is getting the highway paved. Now we’re able to get this kind of data architecture, analysis and demonstration work done, and when there are more funds we’ll ramp up fuller-scale prototyping.

It can’t start by going off to build a drone or satellite. We’ve got to focus on the basic stuff, which is that data architecture. We’re going to have to be able to do tool development at multiple levels of classification and do it securely. All those are things that are hard to get people energized about, but they’re going to be what makes or breaks this program.

The strike group commander knows the strike group best. We know what we did here during deployment, and this was a lot of air-to-ground, but we didn’t do a lot of air-to-air, we didn’t do a lot of war at sea, and we didn’t defend ourselves against missiles.

So we develop a scenario that will get the training required. … They did a live missile exercise, shooting at supersonic targets with jamming and all the high-end types of things, to get that missile shot.

And there was a lot of air-to-air fighting, both live, but we also inserted virtual. So now the sailor on the scope sees five aircraft coming in, but there’s only one real one coming in for our guys to fight against.

And the commander will see missiles coming in, constructive missiles. So we are pushing our live, virtual, constructive training; that’s how we are accelerating along the capability curve. “We put in that investment and that got their training up, and their readiness ratings were all green.”

Network security technologies of yesterday are too large and expensive to deploy, leaving tactical networks unequipped with the mobility and scalability needed in a networked warfighting environment. Without the right technologies in place, Soldiers' views into the threat landscape can be restricted and even at times inaccurate, as real-time situational awareness of network threats is impaired.

Responses to network security threats on the battlefield must come in real time, as the difference between waiting hours and days versus seconds and minutes to respond could have dire consequences. Yet, the shortage of network specialists readily deployed and available in tactical environments makes real-time response difficult if not impossible. And even for tactical operators in the field, maintaining multiple systems can be overwhelming.

More vulnerability at the tactical networks: The electronic footprints of current tactical networks are often easy to discover, and the closeness of adversaries in battlefield environments makes it easier for communications to be intercepted, all the more so given how tactical networks are traditionally dispersed. Internal and external threats at the edge of the network challenge DoD when it comes to rapid detection and response.
Once you have identified your list of targeted resources to be profiled, you are ready to begin creating a profile for each one. The idea is for the resource owner to rate the resource's importance to the organisation from an information-security perspective and relative to all other assets in the organisation.

The profile tracks information at a mission and function level and is not necessarily specific to implementation decisions. For example, if you are designing a new system, you should know what types of data will be processed and what the basic functions being performed will be before you decide on technologies or placement in the network. 

When you are designing your profile questionnaire, it is important to note that not every question needs to be used in the calculation of the risk sensitivity. Some questions are meant to capture other pertinent information about the resource for reporting purposes and do not directly contribute to the risk-sensitivity score. 

For example, you may ask a question about where the system is hosted. The answer to this question doesn't affect the sensitivity of the asset, but you may want to prioritise assessments of third-party hosted systems because of increased risk exposure, and the answer to this item will give you the desired information about which systems are hosted internally versus externally. 

You may also want to ask a couple of high-level questions about whether basic security controls are in place, for example role-based access, encryption, and audit logging. The answers to these questions may help you to focus your efforts on resources that don't meet the most basic security control requirements.

Similarly, you may want to ask if the system uses a common or central infrastructure for authentication and authorisation or logging to eliminate the need for assessing those areas any further. Systems using one-off solutions for these core security services may have more potential for risk exposure. 

These factors do not change the sensitivity of the resource, but they can help with prioritisation. For example, whether or not a vulnerability test has been performed on the resource does not affect its sensitivity, but this knowledge is important for identifying resources that may have undiscovered vulnerabilities that are readily exploitable.

You will often find yourself trying to choose between several high-sensitivity resources to assess, and these other factors can help you decide which ones to focus on first.

The security risk profile questionnaire should include several questions about the resource to help determine the sensitivity and criticality of the application in comparison to others. It is essential to evaluate a resource's sensitivity on a relative scale. 

Start by identifying the resource that is most crucial to the organisation, and use this as a reference point. This is important because the tendency is to rate resources too high. If you end up with all resources being rated as high sensitivity and none as moderate or low, then the scale becomes worthless.
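A scoring pass over such a questionnaire might look like the following sketch. The question names, weights, and 1-5 answer scale are all illustrative assumptions; note that the hosting answer is captured for reporting but deliberately excluded from the score, as described above.

```python
def risk_sensitivity(answers, weights):
    """Score sketch: only questions listed in `weights` contribute; other
    answers are kept for reporting but do not move the score. Answers are
    assumed to be on a 1-5 scale, so the result is a percentage of the
    maximum possible score."""
    score = sum(weights[q] * answers[q] for q in weights if q in answers)
    max_score = 5 * sum(weights.values())
    return round(100 * score / max_score)

# Hypothetical weighted questions and one resource's answers
weights = {"data_classification": 3, "mission_criticality": 3,
           "sensitive_functions": 2, "user_community_size": 1}
answers = {"data_classification": 5, "mission_criticality": 4,
           "sensitive_functions": 4, "user_community_size": 2,
           "hosting": "third_party"}   # informational only, not scored
sensitivity = risk_sensitivity(answers, weights)   # 82
```

Scoring every resource on the same percentage scale supports the relative ranking the text calls for, with the unscored answers available to break ties during prioritisation.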

At the time of this review, the Directorate Site Visit Executive for network operations had not established letters of delegation to provide surveillance over units that were designing and building automated network supply system components. The Commander had begun negotiations for delegations, but these were delayed because the Directorate Site Visit Executive, seeking to maintain positive relations with the project office, granted requests from project teams to delay surveillance of units until after the start of initial production, to allow processes to develop.

As a result, the Directorate did not have access to unit staff and facilities to perform surveillance functions and could not report directly to the team on issues impeding unit progress in assessing risk and in designing and building field-level supply of critical equipment components.

Negotiations for delegations for units were ongoing based on the risk questionnaire. Letters of delegation for unit surveillance are supposed to identify and address operational problems that can cause reductions in technical performance, cost increases, and schedule delays.

Units are responsible for designing and building system components, including an installation unit to integrate with aircraft and pointer-trackers with processor units to scan the air and identify signals. These components make up equipment supply kits to be assembled.

Unit surveillance is a critical part of the overall administration of services the Directorate provides to teams: supporting decision-making events with objective and actionable task, technical, risk, schedule, and performance metrics through a programme support team consisting of multifunctional specialists, including quality/value assurance.

DoD policy mandates that the programme integrator, with support from the programme support teams at unit locations, independently assess task performance and execution in the risk assessment questionnaire to be approved for release to programme stakeholders.

Key stakeholders include the milestone decision authority and the project administrator, who use the questionnaire to monitor mission schedule and performance and highlight problems in unit processes that could increase risk, delay schedule, and reduce performance.

DoD mandates that the Directorate perform specific actions to determine and document programme risk prior to establishing letters of delegation: first, review the mission requirements; second, determine programme support team requirements for the unit and establish a programme support team; third, establish an initial programme risk rating based on technical and schedule risk.

Fourth, the Directorate must develop a programme support plan to outline planned surveillance events at the location of mission execution and establish an agreement, if required, with the programme office.

There was no technical reason behind the decision to delay direct surveillance of units to allow their risk assessment processes to develop. More specifically, the program integrator told investigators that the Directorate wanted to maintain a positive working relationship with the project office and therefore agreed to delay surveillance of units.

But effective team planning and action, such as establishing delegation of the risk questionnaire to oversee system design and development, can minimise the potential risk increases, schedule delays, and technical performance shortfalls that may be associated with deficits in programme maturity.

Directorate surveillance is critical to help programme stakeholders monitor and proactively identify and address programme risks, and surveillance is even more critical when unit processes are not fully developed.

Without delegation or a risk questionnaire, there was no direct access to the units to help provide the project team with timely, value-added surveillance and progress assessments in designing and building system components.

With direct and continuing access to units, the Directorate could have given the project team more timely and insightful advice and evaluation relating to delays, which would have allowed teams to take earlier action.

You may not have defined the specifics of how the functions will be performed, but having mission and functional requirements defined is enough to complete the security risk profile.

The best way to profile a resource is with a simple questionnaire. This questionnaire uses a series of targeted questions to measure the potential impact to the organisation of a generic security violation for the target resource. It should include questions that are designed to identify the following:

1. Resource ownership

2. Function and data processed, stored, and/or transmitted

3. Sensitive functions performed

4. User community

5. Support and hosting model

6. Access administration model

7. Audit trail and maker/checker controls

8. Overall confidentiality, integrity, availability, and accountability 

9. History of previous risk assessments 
​
10. Requirements for future assessments

Top 10 Autonomous Systems Introduce Breach Exercise Mechanisms Including Unmanned Aircraft Drills

8/10/2019


 
​Ambitious plans are in the works to pair stealth fighters with unmanned aerial vehicles that could serve as robotic wingmen in high-end combat. Missions could include a variety of tasks such as reconnaissance, electronic jamming, and marking or destroying targets.

“The idea behind the remote carriers is that you will have a manned mothership, accompanied by … drones that are flying in the swarm, which are having to perform different tasks depending on the mission.

The drones could be a tactic to keep airmen out of harm’s way. In a high-threat environment, it would be preferable to send a remote carrier than a manned platform. “We need to make the remote carrier system as intelligent as possible. 

“What is a big help is the current huge steps in terms of artificial intelligence to make these drones not just dull platforms but really intelligent carriers with also a certain amount of autonomy” to navigate and perform tasks on their own.

An important variable will be the size and weight of the drones. Larger platforms could be launched from a runway. “But there might be also smaller ones that you put, for example, into a transport aircraft, fly them into the scenario and then you open the back door and they are dropping out and then flying next to their manned fighter aircraft. It’s one of those things we are currently investigating.”

Post-mission aircraft recovery is another factor that must be considered. “There could be the possibility that there are some small ones where you say, ‘we can produce them rather cheaply’ and you simply don’t recover them. Then there are more important and more expensive assets where you need to make up your mind on how to recover them,” to include potentially programming them to fly back on their own.

“The envelope of flight and the envelope of maneuvering of unmanned aircraft is much bigger because you don’t have the constraints of having the pilot in terms of Gs, in terms of acceleration, in terms of evading maneuvers. It’s really a good case for introducing AI-based algorithms for all of these new, different capabilities.”

“The sheer volume of code and artificial intelligence that could be potentially added to the systems is really huge, and there is no way to do that without properly automating all the verification steps” to ensure that the algorithms will function properly and the drones will fly safely.

“In our conception of the remote carrier, we will … have a lot of cooperation with our missile world” including the size, shape, weight, propulsion, flight envelope and modular payloads that comprise the systems. “To be ahead of the threats you have to share a lot of characteristics with the missile” weapons.

“In an A2/AD environment, the remote carrier is here with its own sensor … where it’s so dangerous for your manned platform that you won’t go there even if you have stealth. The drones could be used to detect and trigger enemy air defense systems, and be equipped with electronic warfare capabilities.

“That was our first foray into attritable aircraft. It’s expensive enough to be lethal but cheap enough to be used in an aggressive, non-risk averse way. … It’s certainly not a throwaway but it’s cheap enough that you can take a level of risks that we couldn’t take with a manned platform.”

Air-Launched Effects are executed thousands of feet above enemy terrain to perform surveillance and reconnaissance. That capability is in its infancy but is part of the future vertical lift modernization effort underway.

“It’s difficult to figure out where it makes sense, how they interact, how they adapt, how soldiers do certain things now that they have robotic capability.

If adversaries could see the force solely through electronic means, they would see a much bigger signature over here, while we were tucked in behind terrain here waiting to begin the operation. But once the enemy has eyes on the robots conducting the breach, “the last thing we’re worried about is electronic signature.”

The robotic breach also triggers the debate over what the services should procure and how those capabilities should be deployed: Should we send in high-tech, expensive and heavily armored optionally manned vehicles to conduct operations like breaches? Or can the service design robotic vehicles “that are a little bit more skeleton with all the breaching equipment so if it doesn’t make it, it doesn’t make it, and you just buy more for the redundancy factor?

Mobile robots speed material flow to fulfillment workstations and between manufacturing processes. They cut picking errors and boost throughput. They help consolidate storage space and future-proof operations. Robots offset rising labor costs and shortages. They make better use of your skilled workforce. Robotic mobility leads to traceability and predictability, your road to the always-on supply chain.

Traditional autonomous vehicles typically require some type of existing infrastructure or facility modifications, whether magnetic strips or navigational beacons embedded in the floor, to guide the vehicles on a designated path. They work well in predictable environments.

“If you’re always moving materials directly from point A to point B and those points never change, and the path between those points never changes, then a traditional autonomous vehicle is a good platform. But that type of environment applies to a very small percentage of the material transport market.”

Some autonomous vehicles navigate like the GPS in your car, according to maps in their robotic brains; using various sensors, they avoid and steer around unexpected obstacles and autonomously determine the best route or path from point A to point B with real-time intelligence.

“One of the challenges that our clients have expressed with traditional autonomous vehicle technology is that each time a change is required in their facility or system, it’s a painful process. You have to change the track and then you have to change the programs. Usually it takes a long time before the vehicle system is up and running again. With automated technology and the intelligence that’s built into these vehicles, you can literally make a change with the click of a button.”

Autonomous vehicles allow for changes and flexibility in building layouts, new machinery installations, and facility additions. Self-driving vehicles enable new applications to be automated because of the level of flexibility they offer within industrial environments. “By not requiring any infrastructure to navigate, the solution is able to grow and change with your organisation.”

“Imagine a manufacturing process. It doesn’t matter what the manufacturing process is; it’s a black box. You put parts into it, something happens, and parts come out. If you know when it’s necessary to bring materials to that black box and when to pick them up from the back end, you know where all your inventory is. You know the cycle time of the production cell, when it’s up and when it’s down. You know how many units have been produced and the quality rate.

“The difficult part of implementing these kinds of vehicles into a factory is that the environment is always changing. You always have forklifts, boxes and machinery being moved around. The challenge has always been that they can’t successfully navigate on a daily basis without running into obstacles they can’t resolve. The way our technology operates, with deployments of multiple sensory inputs, allows us to make good decisions in real-time dynamic environments.”

“The laser scanners on the robot will build the walls, the aisleways, and the machinery, and create its own map that resides inside the robot. Once a robot has a map, you can teach it pickup and drop-off points, and then it will look at those maps and figure out the best way to get between two points. Now if something impedes the path, it will recognize that and reroute itself to get to that drop-off point in a different manner.”
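The reroute behavior described can be sketched as a shortest-path search over the robot's occupancy-grid map. The grid and the blocked cell below are invented for illustration, not taken from any vendor's system.

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Reroute sketch: BFS over an occupancy grid where 0 = free and
    1 = blocked. Returns the cell path, or None if no route exists."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                 # walk the predecessor chain back
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

aisle = [[0, 0, 0],
         [0, 1, 0],   # a pallet now blocks the centre cell
         [0, 0, 0]]
path = shortest_path(aisle, (0, 0), (2, 2))   # detours around the pallet
```

When a sensor marks a cell as newly blocked, the robot simply re-runs the search from its current cell and follows the detour, which is the rerouting the quote describes.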

With swarms of these robots working together but separately, at any given time the system can determine which robot is in the best position to retrieve the next bin in an order. Downtime is virtually eliminated because there’s always another robot nearby if one gets stuck or needs to recharge its batteries. 

“Think of it as two brains. One big brain is controlling all the robots and each robot has a smaller brain that is communicating back and forth. The autonomy is coming from the bigger brain, which is the master control or warehouse control system, and that is coordinating the moves among each robotic transport vehicle.

Then you have smaller brains on the vehicles that are handling the low-level control, for example driving and navigating. That’s the swarm effect we alluded to earlier. The smaller-brained mobile robots are coordinating with each other and with the bigger brain to accomplish certain tasks.
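A minimal version of the big-brain dispatch decision might look like the following; the fleet data and field names are hypothetical, and a real master control would weigh congestion and battery state as well.

```python
def assign_next_pick(robots, bin_location):
    """'Big brain' sketch: the master control picks whichever idle robot
    is closest (Manhattan distance) to the next bin; each robot's own
    'small brain' then handles the low-level driving and navigation."""
    idle = [r for r in robots if r["status"] == "idle"]
    if not idle:
        return None
    def dist(r):
        return abs(r["pos"][0] - bin_location[0]) + abs(r["pos"][1] - bin_location[1])
    return min(idle, key=dist)["id"]

fleet = [{"id": "r1", "pos": (0, 9), "status": "idle"},
         {"id": "r2", "pos": (4, 4), "status": "charging"},
         {"id": "r3", "pos": (2, 3), "status": "idle"}]
chosen = assign_next_pick(fleet, (3, 3))   # "r3" is the closest idle robot
```

Because any idle robot can take the job, a stuck or charging robot never stalls the system, which is the downtime claim in the paragraph above.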

The process starts with a robotic carrier retrieving a bin and delivering it to the picking station. Each bin has several compartments containing different items. The vision guided robot arm finds the right item, picks it from the bin, and places it in a designated area for further processing by the human picker.

“We divide and conquer the warehouse. We look for warehouses that have a high percentage of inventory that is compatible with the robot. We have people doing what they’re good at, and we have the robots doing what they are good at, which is picking the rigid boxes and other products.”

“If we need to go with a different arm for whatever the reason is, say we need to have a bigger robot or a client has a preference for a certain robot manufacturer, we can swap out arms without going through the development for that specific supplier. It gives us modularity in a sense, so that we can use any type of mechanical, electrical, or electronic system and integrate it for our solution.”

A common operating system for autonomous vehicles that would facilitate "plug and play" applications will be critical. Such an overlaying architecture would prevent vehicles—whether drones, trucks, delivery bots, or anything else—from operating in conflict with each other. 

Common standards would provide the ability to gather information from any kind of sensor, make sense of that data, and respond in a way that is predictable and conforms to established standards and to build in rules about how a vehicle will react or behave in specific situations or conditions.
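A standards body might encode such reaction rules as a shared lookup that every compliant vehicle consults. The conditions and behaviors below are invented examples to illustrate the idea, not an actual standard.

```python
# Hypothetical (condition, detail) -> mandated behavior table
RULES = {
    ("obstacle_ahead", "human"):  "stop_and_wait",
    ("obstacle_ahead", "static"): "reroute",
    ("battery", "low"):           "return_to_charger",
}

def react(condition, detail, default="slow_and_alert"):
    """Rule-lookup sketch: return the standard's mandated reaction for a
    sensed situation, falling back to a conservative default when the
    situation is not covered by the table."""
    return RULES.get((condition, detail), default)

# react("obstacle_ahead", "human") -> "stop_and_wait"
```

Because every vehicle consults the same table, behavior in a given situation is predictable across drones, trucks, and delivery bots, which is the interoperability benefit the paragraph describes.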

1. Reduce operating costs and increase productivity

The implementation of autonomous robots can drive value primarily by reducing direct and indirect operating costs and increasing revenue potential. Supply chain automation helps reduce labor costs, inventory, warehousing, and overhead costs associated with inventory storage. Installed sensors and load protection devices allow robots to carry out precise and secure handling of all loads continuously and around the clock, concentrating human labour on value-added tasks and products while repetitive tasks are handled by autonomous ground vehicles.

2. Product Material Flow

Seamless integration with production enables companies to improve efficiency by implementing accurate just-in-time materials flow. Automation allows for an increase in the volume of products that can be produced. The technology will allow for the incorporation of the skills of trained workers with the accuracy of automated equipment, increasing productivity.

3. Improve safety compliance

Automation can assist organizations with complying with industry standards through standardizing pricing, products and vendors. Employee safety can be improved in highly hazardous environments and injury costs can be reduced significantly. In the distribution center, for instance, robots can seamlessly zip past each other, humans, or other moving objects thanks to advanced collision avoidance capabilities, which are processed as quickly as any human can react to potential accidental run-ins.

4. Human Machine Cooperation

Workers work directly with collaborative robots, easily training them with programmable movements and then handling material and sorting packages side by side with them. Vehicles move in a predictable and controlled manner with sensors for obstacle detection. Improved flexibility tools allow for the easy adjustment of vehicle paths and operations, and systems can also be expanded when necessary during peak times or even moved to a completely new facility.

5. Create major efficiencies.
 
Autonomous robots can test, pick, pack, sort, build, inspect, count, or transport materials of various sizes and weights faster and more efficiently than ever. Drone technology combined with autonomous navigation and artificial intelligence is being used to understand inventory levels and locations within warehouses, enabling organizations to move inventory faster throughout the supply chain. The companies investing in aerial robots are also redirecting their staff to handle high-value activities and empowering their teams with rich information, which can be revisited at any point in time and used to make better decisions.

6. Perform product development tasks

When autonomous logistics drone workers are set up to perform continuous, repetitive tasks, product development and prototyping activities can benefit from around-the-clock testing for fatigue, damage tolerance, and quality. This frees up product development professionals to work on more important tasks. When handled by a logistics drone, these tasks produce more accurate supply-demand reconciliation and replenishment needs, ultimately reducing on-hand inventory.

7. Improve accuracy on routine tasks

Robotic process automation in standard sourcing processes can reduce effort and time requirements and improve the accuracy of mundane tasks. Autonomous robots are able to perform inspections on inbound goods and provide real-time data to suppliers. Autonomous drones can be used in various warehouse operations: handling inbound logistics in time-critical situations, carrying materials from storage to factory, transporting directly from receiving to shipping, or efficiently scanning inventory, significantly reducing labor costs.

8. Automated Gate System

When gates can be controlled automatically through an automated gate system, throughput is increased at access points. An automated gate system will typically include the ability to centrally control gates from other facilities, calling for less management to monitor gate functions, and cutting down on staff crowding at exit and entrance points. This would also increase visibility, and improve the capacity to predict and plan for driver traffic and patterns.
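The central-control idea above can be sketched in a few lines. The gate names, the single-controller design, and the per-gate counter below are illustrative assumptions, not a description of any fielded product:

```python
# Minimal sketch of a centrally controlled automated gate system.
# Gate names, the controller API, and the per-gate counter are assumptions.

class Gate:
    """One access point; counts vehicles it has cycled through."""
    def __init__(self, name):
        self.name = name
        self.vehicles_processed = 0

    def cycle(self):
        """Open for one vehicle, then close; log the event for reporting."""
        self.vehicles_processed += 1

class CentralGateController:
    """Single point of control for gates across multiple facilities."""
    def __init__(self):
        self.gates = {}

    def register(self, gate):
        self.gates[gate.name] = gate

    def admit(self, gate_name):
        self.gates[gate_name].cycle()

    def traffic_report(self):
        """Visibility into driver traffic patterns, per gate."""
        return {name: g.vehicles_processed for name, g in self.gates.items()}

controller = CentralGateController()
controller.register(Gate("warehouse-east"))
controller.register(Gate("warehouse-west"))
for _ in range(3):
    controller.admit("warehouse-east")
controller.admit("warehouse-west")
print(controller.traffic_report())
```

The single controller is what reduces staffing at entry and exit points; the per-gate counts are the raw material for predicting driver traffic patterns.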

9. Business Continuity

Strategies to prevent accidents in automated warehouses include robotic functions that reduce the risk of physical strain on human workers, and automated routes that decrease the chance of warehouse collisions. Controlled robot movements improve warehouse safety and reduce the risk of work-related physical problems, and automating these processes also increases productivity and output.
​
10. Employee tracking

It’s now possible to gauge the moments when employees are genuinely productive. It’s an efficient approach that can accurately detect what’s working at a company and what’s not. Automated systems can track employee time and location, automatically track the amount of time worked, and allow for more efficient accountability.
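As a sketch of the time-tracking idea, assuming a hypothetical badge-event feed of ISO-timestamped clock-in/clock-out pairs:

```python
# Sketch: automated time tracking from badge events. The event format
# (ISO timestamp, "in"/"out") is an assumption for illustration.
from datetime import datetime

def hours_worked(events):
    """Sum hours between paired clock-in/clock-out events."""
    total = 0.0
    clock_in = None
    for ts, kind in events:
        t = datetime.fromisoformat(ts)
        if kind == "in":
            clock_in = t
        elif kind == "out" and clock_in is not None:
            total += (t - clock_in).total_seconds() / 3600.0
            clock_in = None
    return total

events = [("2019-08-20T08:00:00", "in"),
          ("2019-08-20T12:00:00", "out"),
          ("2019-08-20T13:00:00", "in"),
          ("2019-08-20T17:30:00", "out")]
print(hours_worked(events))  # 8.5
```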

Top 10 Advanced Autonomous System Capabilities Replace Current Tech as Machines Execute Tasks

8/10/2019


 

Questions about autonomous warfare address whether artificial intelligence can understand the context of its actions, along with its predictability, its ability to transfer lessons from one task to another, and its durability.

Autonomy is generally defined as a machine having the ability to execute tasks with limited to no human intervention. Advances in autonomy are driven by converging technologies such as AI, robotics, big data, and advanced motion sensors. Autonomous systems can involve a built-in human control mechanism (human-in-the-loop), a human override mechanism (human-on-the-loop), or minimal-to-no human involvement (human-out-of-the-loop).

Autonomous systems can be conceptually divided into two categories: processes and assets. Autonomous processes include those capabilities driven by machine learning, big data and AI to automate procedures and functions. 

Major advances in autonomous processes could support mission planning, training, decision-making, administrative roles, and business functions. Autonomous assets include the physical equipment and resources the Services can use to carry out missions. These assets are divided primarily into three categories: unmanned aerial vehicles (UAVs), unmanned underwater vehicles (UUVs), and unmanned surface vehicles (USVs).
​
USVs receive less attention than their aerial counterparts but hold enormous potential. USVs support cross-domain integration and increase the capabilities of other unmanned systems with their large payloads, power reserves, and endurance, and they can overcome anti-access/area-denial environments by projecting information operations, military deception campaigns, and electronic warfare capabilities. Current projects aim to have swarms of autonomous vessels conducting both surveillance and security operations.

Autonomous assets can act as a major force multiplier. UAVs, USVs, and UUVs can increase the strength of the force and material readiness while the Navy’s requirements for deployments, readiness, and forward presence remain high.

Additionally, autonomous assets strategically support principles such as distribution and maneuver by leveraging “additional weapons and sensors over large areas” and optimizing the “strategic depth of the force.”

Both airborne and surface-borne drones can support intelligence collection and targeting requirements for multi-domain battlespaces. Over-the-horizon amphibious operations need adequate fire support for landing forces, and autonomous drones could help meet this challenge by acting as mobile mini-mortars with increased on-station times.

The greatest benefit of implementing autonomy in the Navy is the speed of decision-making for command and control. Autonomy and man-machine teaming can allow leaders to make better decisions faster. Military leaders must “be prepared to make decisions at the speed of relevance.” When the speed of relevance is the speed of electrons, the Navy will depend on autonomy to remain a relevant fighting force.

The military already uses autonomous systems for offensive and defensive missions. Various levels of autonomy support mobility, targeting, intelligence, and interoperability. Autonomy empowers homing missiles, navigation, and autopilot capabilities. Basic targeting systems use automated target recognition to identify objects and support non-kinetic targeting for ISR collections.

Counter-artillery batteries and Phalanx close-in-weapons-systems can engage automatically upon detecting a threat. Recurring and rules-based tasks such as scheduling replenishments at sea, naval weapon-target assignment plans, dynamic frequency allocations, and planning daily aircraft routing are candidates for integration with AI in the near future. 

The Navy recently created its first underwater drone squadron. Future uses of USVs are under-explored but hold substantial promise. USVs have significant advantages over UAVs and UUVs with regard to endurance and payload capacity for prolonged operations.

Previous exercises highlighted the ability of USVs to relay instructions from shore to underwater assets, in this case by ordering the launch of a UAV which a UUV was carrying.

Most USVs are directed toward missions such as observation and collection, physical environment mapping, countermeasures, countering small boats, and testing to involve automated payloads and autonomous coordination with multiple ships.

The Navy and Marine Corps are uniquely suited to benefit from autonomous systems. Attributes that welcome autonomy: empowering lower-skilled workers to perform higher-skilled work, replication for large-scale operations, faster-than-human reaction speed, superhuman precision, extended patience, and operations away from reliable communications. Some strides are being made to foster autonomy, but more can be done. 

Most AI systems require some level of guidance from humans. Sailors and Marines will require instruction and training on these technical systems, just as officers will require education on how to integrate them into operations and planning. Educating front-line leaders on the capabilities of autonomous systems should be a priority. 

Clearly define the goals and tasks for which an autonomous system is being considered. What will it take for autonomous vehicles to become an integral part of our mobility landscape? They must be applied in the right business context to solve the right problems. In other words, a critical success factor will be a clear value proposition for the company and its customers, and without a clear and viable value proposition, autonomy projects will lack support and are at risk of failure.

Identify/define what constitutes the successful completion of that task. Any application of autonomy must also provide some  benefit to the provider of products and services. Consider that customer-facing autonomous solutions must integrate, both operationally and in terms of information technology, with supply chain systems further upstream. To accommodate autonomous deliveries, companies will have to make process and infrastructure changes. The question is whether that can be done in a way that reduces costs and/or improves profitability.

Identify all possible off-nominal conditions, contingencies, and challenging edge-case problems that occur at extreme operating parameters that the autonomous system must address. Bad weather, for instance, could push a drone off-course, causing it to land outside of its targeted landing zone. Malfunctions, whether mechanical or caused by glitches, or unexpected conditions such as temporary obstructions could cause a collision or crash, damaging the payload.

Test the autonomous system in a simulation environment with many nominal and off-nominal conditions. To ensure that autonomous vehicles and their supporting technologies perform safely and correctly, it will be critical to test them in both normal, expected conditions and abnormal, unexpected situations. The safest way to test is to use a simulation program similar to a flight simulator to re-create real-world scenarios and use that information to make improvements. In other words, virtual testing of autonomous vehicles should augment and "stress-test" the physical testing.
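A minimal sketch of that simulation approach, assuming a toy drift model in which lateral landing error grows linearly with wind speed (the drift constant, landing-zone radius, and wind ranges are made up for illustration):

```python
# Sketch: stress-testing a drone landing model in simulation across
# nominal and off-nominal wind conditions. All parameters are assumptions.
import random

def landing_offset(wind_speed_mps, drift_per_mps=0.8):
    """Simulated lateral drift (m) at touchdown for a given wind speed."""
    return wind_speed_mps * drift_per_mps

def run_campaign(wind_speeds, landing_zone_radius_m=10.0):
    """Return the share of scenarios where the drone lands inside the zone."""
    inside = sum(landing_offset(w) <= landing_zone_radius_m for w in wind_speeds)
    return inside / len(wind_speeds)

random.seed(7)
nominal = [random.uniform(0, 5) for _ in range(1000)]        # calm weather
off_nominal = [random.uniform(10, 25) for _ in range(1000)]  # storm gusts
print(run_campaign(nominal), run_campaign(off_nominal))
```

The point of the virtual campaign is exactly what the paragraph above describes: the off-nominal sweep exposes failure rates that nominal-only physical testing would never surface.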

Prioritize projects. Create a methodology for prioritizing which sort of autonomous vehicle projects to focus on first. One way to structure it might be a "crawl-walk-run" approach: start by adopting autonomy for simple tasks where the risk is lowest and there are fewer variables to control (for example, a single vehicle traveling a repetitive route in a sparsely populated area), and then move to more complex tasks with more variables (multiple vehicles traveling variable routes in congested areas).

Another way to prioritize autonomy projects is to think about what problem a project would solve and rank it in order of its importance to the company and/or its customers. For example, a company's most important reasons for pursuing an autonomous vehicle project might be to improve safety and access to critical information. Improvements in speed, productivity, and cost, while important, could be lower priorities.
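That ranking approach might be sketched as a weighted score. The criteria weights (safety and information access weighted highest, as in the example above) and the two sample projects are illustrative assumptions:

```python
# Sketch: rank candidate autonomy projects by weighted importance.
# Weights and sample projects are made-up illustrations.
WEIGHTS = {"safety": 0.4, "information_access": 0.3, "speed": 0.1,
           "productivity": 0.1, "cost": 0.1}

def score(project):
    """Weighted sum of the project's criterion scores (0-10 each)."""
    return sum(WEIGHTS[k] * project["scores"][k] for k in WEIGHTS)

projects = [
    {"name": "perimeter patrol UGV",
     "scores": {"safety": 9, "information_access": 7, "speed": 4,
                "productivity": 5, "cost": 3}},
    {"name": "warehouse tug",
     "scores": {"safety": 4, "information_access": 3, "speed": 8,
                "productivity": 9, "cost": 8}},
]
ranked = sorted(projects, key=score, reverse=True)
print([p["name"] for p in ranked])
```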

Ensure that you have a plan for managing data collection, storage, and analysis. Every autonomous vehicle constantly produces enormous amounts of data. This creates three issues. 

The first is how to collect all that information and use it, without latency, to make real-time decisions and take real-time or near real-time actions. The second is how to determine which data should be stored for later use. Out of that huge stream of data, which signals are important to keep, and which can be discarded? The third is how to distribute the storage of so much data. There will be too much data to keep in one location—and from a risk management perspective, it would be unwise to do so anyway. But when forensic research becomes necessary—perhaps to recreate an accident and identify its causes—it will be critical to know where all the data is sitting and how to access it.
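The three-way split described above can be sketched as a simple triage function. The anomaly-score field and the thresholds are assumptions for illustration, standing in for whatever signal-importance measure a real system would compute:

```python
# Sketch of three-way data triage: act on a reading in real time,
# keep it for later forensic use, or discard it. Thresholds are assumptions.
def triage(reading, act_threshold=0.9, keep_threshold=0.5):
    """Route one sensor reading by its anomaly score (0..1)."""
    if reading["anomaly_score"] >= act_threshold:
        return "act"        # real-time decision path, no storage latency
    if reading["anomaly_score"] >= keep_threshold:
        return "store"      # worth keeping for later reconstruction
    return "discard"        # routine signal, not worth the storage cost

stream = [{"anomaly_score": s} for s in (0.95, 0.7, 0.2, 0.55, 0.1)]
routes = [triage(r) for r in stream]
print(routes)  # ['act', 'store', 'discard', 'store', 'discard']
```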

We are just beginning to scratch the surface of how autonomous vehicles will impact logistics and supply chain operations. For example, in the future, autonomous vehicles will play an important role in data gathering, processing, and sharing. 

Processing advances in sensors, machine learning, and artificial intelligence are already pushing computing farther out in a supply chain. Autonomous vehicles could not only function as a mobility system, but they also could serve as both a local storage device for the relevant sensor data and as a "mesh network" for processing it—an example of "edge computing." 

Newly available data-rich information infrastructure will help enterprises to enable fluid alignment/realignment of resources and workflows through a major shift in their network and supply chain architectures.

1. Integrating with large suppliers

The ability to integrate systems with those of large suppliers is one of the major advantages of automating a supply chain, as it enables a strong foundation that permits more visibility between partners. Improving the order cycle is critical, since speed of delivery is important for customers today.

2. Data Availability

Having accurate, timely data available will not only help you pinpoint improvement opportunities for processes and collaborate better with your partners, but also ensure that your customers receive their orders in the shortest time possible.

3. Real-time product tracking

It’s important to provide your customers with real-time visibility over their orders, so they can always know the status of their deliveries and when they can expect them. 

4. Focus on delivery plan monitor

Access to data on delivery in real-time can help you to monitor and manage your routes better, communicate with your agents, and if necessary, adjust your delivery plan. 

5. Improve operational efficiency with data sharing and partnerships

Data can help you to optimize customer service – knowing the exact status of your customers’ shipments by having all data in a single place will make serving them far easier. As you continue enhancing the quality of your partnerships and providing excellent customer experience, you will also increase customer retention. 

6. Predictive Data

Having data on everything that can affect your operations will help you better predict future events and act before they happen. By putting such data points into predictive models, you are able to improve the accuracy of your forecasts and remove some of the uncertainty involved in the process.

7. Boost productivity.

By making use of AI in supply chain management, it is possible to analyze performance and identify the factors that impact it. To find the factors and issues that affect the performance of the supply chain, AI combines the capabilities of different technologies such as reinforcement learning, unsupervised learning and supervised learning.

8. Demand forecasting by analyzing large volumes of data.

Measure and track all the factors that contribute to accuracy in demand forecasting. Based on weather, real-time sales and other factors, the system provides continuous forecasts in a loop. This kind of information can help with automated sorting, improving warehouse management, self-managing inventory systems and self-driving forklifts.
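One way to sketch a continuous forecast loop is simple exponential smoothing, folding each new real-time sales figure into the running estimate. The smoothing factor and the sales series are illustrative assumptions, not a production forecasting model:

```python
# Sketch: continuous demand forecasting via exponential smoothing.
# Alpha and the sales feed are made-up illustrations.
def update_forecast(forecast, actual, alpha=0.3):
    """Fold the newest observation into the running forecast."""
    return alpha * actual + (1 - alpha) * forecast

forecast = 100.0                     # initial estimate of daily demand
for actual in [110, 120, 115, 130]:  # real-time sales feed
    forecast = update_forecast(forecast, actual)
print(round(forecast, 1))
```

In a real loop, weather and other exogenous signals would enter as additional model inputs rather than this single sales stream.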

9. Improving the selection of the supplier and its effectiveness.

AI can analyze data related to a supplier, such as audits, in-full delivery performance and evaluations, and deliver information that can be used to make future decisions. This helps the company make better supplier decisions and work towards improving customer service.
10. Improves factory scheduling and production planning.

Companies can now work on enhancing factory scheduling and production planning, analyzing the different constraints and then optimizing against them. By leveraging the power to balance constraints automatically, this works well for build-to-order situations.


Top 10 Benefits of Autonomous System Experimentation Program Demos with Force Partners

8/10/2019


 
​Marines who have spent their careers shooting at static bull's-eyes on paper are being forced to adapt to a new kind of target -- one that can charge at them, move in unexpected directions, and respond when engaged.

The division is the first operational unit in the military to employ autonomous Marathon targets: figures on four-wheeled platforms that can be programmed to operate in concert, and are fast and unpredictable enough to rock the most experienced marksmen back on their heels.

We are now finding range configuration limited only by our own imagination. And the same is true for employment of the targets, which not only test marksmanship skills through evasive, human-like maneuvering, but also make a convincing enemy force that can leave troops dry-mouthed when they charge a shooter at 10 miles per hour.

The targets could increase individual marksmanship effectiveness on a moving target dramatically over the course of just one day. "We're trying to show that these are a valuable target to us."

"When we first used them, it was sort of little-known that we had these targets. When we got Marines in front of them, that's when the usage rates went up, Marines are asking why are these not on every range."

Among the Marines’ future plans is a live-fire range in which the targets work their way inside an infantry unit's perimeter, putting human troops on the defensive.

"In recent conflicts, what that was, we always outnumbered our enemy," Now, as the we are starting to look at peer-versus-peer, we try to give that to the Marines now. A rifle squad is fighting a rifle squad. A fire team is fighting a fire team."

It's not just the ground-pounders who can benefit from the targets. Amphibious assault vehicle crewmen have fired on the targets using their guns, and tanks have been able to train their 7.62mm coaxial machine guns on them.

"Some of the stuff we would like to see in the future is the ability to incorporate some of our higher-caliber munitions. "Right now, we're up to 7.62 and some of the smaller fragmentation rounds. We would love to incorporate mortars. ... Those are some of the things we have been looking at."

Marines are also planning on rigging the targets to "shoot back" at human troops, possibly by incorporating the Instrumented Tactical Engagement Simulation System, or ITESS, essentially a high-tech version of laser tag.

In a recent field test, the Squad X program worked with Marines at the Air Ground Combat Center in Twentynine Palms, California, to track progress on two complementary systems that allow infantry squads to collaborate with AI and autonomous systems to make better decisions in complex, time-critical combat situations.

The program highlights manned-unmanned teaming to enhance capabilities for ground units, giving squads battalion-level insights and intelligence.

“We are in a race with potential adversaries to operationalize autonomy, and we have the opportunity to demonstrate autonomy in a way that we don’t believe any nation in the world has demonstrated to date. Developing hardware and tactics that allow us to operate seamlessly within a close combat ground environment is extremely challenging, but provides incredible value.”

The exercises at Twentynine Palms featured the Electronic Attack Module [BEAM] Squad System [BSS] and the Augmented Spectral Situational Awareness and Unaided Localization for Transformative Squads [ASSAULTS] system. The systems focus on manned-unmanned teaming to enhance capabilities for ground units, giving small squads battalion-level insights and intelligence.

In the most recent experiment, squads testing the system, wearing vests fitted with sensors and a distributed common world model, moved through scenarios transiting between natural desert and mock city blocks. Autonomous ground and aerial systems equipped with combinations of live and simulated electronic surveillance tools, ground radar, and camera-based sensing provided reconnaissance of areas ahead of the unit as well as flank security.

The systems surveyed the perimeter and reported to squad members’ handheld Android Tactical Assault Kits [ATAKs]. Within a few screen taps, squad members accessed options to act on the systems’ findings or adjust the search areas.

The program-performer team identified a “steady evolution of tactics” made possible with the addition of an autonomous squad member. They also are focused on ensuring the ground, air, and network assets are always exploring and making the most of the current situation, exhibiting the same bias toward action required of the troops they are supporting in the field.

The BEAM-based BSS comprises a network of warfighter and unmanned nodes. In an experiment, the Super Node, a sensor-laden, optionally manned, lightweight tactical all-terrain vehicle known as the powerhouse of the BEAM system, communicated with backpack nodes distributed around the experiment battlespace – mimicking the placement of dismounted squad members – along with an airborne BEAM on a Puma unmanned aerial system (UAS). The BSS provides situational awareness, detects electronic emissions, and collaborates to geolocate signals of interest. AI synthesizes the information, eliminating noise before providing the optimized information to squad members via the handheld ATAK.
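The collaborative geolocation step might work along the lines of classic trilateration: given range estimates from three distributed nodes, the emitter position falls out of a pair of linear equations. The node positions and ranges below are made up for illustration; a fielded system would fuse noisier measurements with more sophisticated estimators.

```python
# Sketch: geolocating an emitter from range estimates at three nodes
# (e.g. backpack nodes and the Super Node). Positions are illustrative.
import math

def trilaterate(p1, p2, p3, r1, r2, r3):
    """Solve for (x, y) given three sensor positions and measured ranges."""
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3
    # Subtract the first circle equation from the other two to linearize.
    A = 2 * (x2 - x1); B = 2 * (y2 - y1)
    C = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    D = 2 * (x3 - x2); E = 2 * (y3 - y2)
    F = r2**2 - r3**2 - x2**2 + x3**2 - y2**2 + y3**2
    # Solve the 2x2 linear system A x + B y = C, D x + E y = F.
    x = (C * E - F * B) / (A * E - B * D)
    y = (A * F - C * D) / (A * E - B * D)
    return x, y

# Emitter actually at (3, 4); ranges computed from known node positions.
nodes = [(0, 0), (10, 0), (0, 10)]
ranges = [math.dist(n, (3, 4)) for n in nodes]
print(trilaterate(*nodes, *ranges))
```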

“We’re establishing superior situational awareness through sufficient input and AI, and then the ability to do something about it at fast time scales.”

The Squad X program has moved quickly through development and is already well along the transition path, due in large part to the program’s focus on partnering with the services to ensure real-world efficacy including an opportunity to test the technology downrange to get real-world information, not simulation. At the most recent experiment with the BSS, service representatives used the system to locate and identify objectives in real time.

For both systems, feedback has included a desire for a user interface so intuitive that training takes an hour or less and any available action is accessible in two screen taps. The ability to provide early input will guard against developing a product that either isn’t used or is used improperly.

“The feedback process, in conjunction with the actual experimentation, gives the Marines the ability to use the technology and start seeing what it can do and, more specifically, what it can’t do.

Marine Corps has been overhauling its infantry squads, and a handful of Marines recently performed field testing alongside autonomous robots developed by DARPA. The robots are part of the agency's Squad X experimentation program, which was started to give infantry Marines the same resources that mounted forces have.

Hot on the heels of the Marine Corps's head-to-toe overhaul of infantry rifle squads, a handful of grunts at the Air Ground Combat Center at Twentynine Palms, California recently conducted field testing alongside a handful of autonomous robots engineered by the Squad X Experimentation program.

The Squad X program was launched to give dismounted infantry squads the same "highly effective multi-domain defensive and offensive capabilities that vehicle-assigned forces currently enjoy" but that infantry Marines simply can't support with current combat loads.

But that doesn't just mean robotic mules to move gear: as autonomous platforms become more integrated into current combined-arms squads, Marines will also face a "steady evolution of tactics." "Developing hardware and tactics that allow us to operate seamlessly within a close combat ground environment is extremely challenging, but provides incredible value."

During the early 2019 test, a gang of autonomous ground and aerial systems provided intelligence and recon support for Marines outfitted with sensor-laden vests as they moved between natural desert and mock city blocks at Twentynine Palms, while ground-based units provided armed security for the primary force.

The autonomous systems "provided reconnaissance of areas ahead of the unit as well as flank security, surveying the perimeter and reporting to squad members' handheld Android Tactical Assault Kits [ATAKs]. Within a few screen taps, squad members accessed options to act on the systems' findings or adjust the search areas."

The additional recon support and added firepower on squad flanks could prove a major boost to Marine squads as they continue to evolve in pursuit of that ever-precious lethality. "We're establishing superior situational awareness through sufficient input and AI, and then the ability to do something about it at fast time scales."

We made plans to have troops fire at targets using remote-controlled robotic vehicles as part of a phased effort to learn how autonomous combat vehicles can make small units more effective on the battlefield.

"We're asking them to utilize the vehicles in a way that's never been done before.“ One goal for the autonomous vehicles is to learn how to penetrate an adversary's anti-access/aerial denial capabilities without putting troops in danger.

"You're exposing forces to enemy fire, whether that be artillery, direct fire.  "So, we have to find ways to penetrate that bubble, knock down their systems and allow for freedom of air and ground maneuver. These platforms buy us some of that, by giving us standoff."

"The intent of this is to see how vehicles integrate into a light infantry formation and perform reconnaissance and security tasks, as well as supports dismounted infantry operations

"This is not how we're used to fighting.  "We're asking a lot. We're putting a lot of sensors, putting a lot of data in the hands of troops. We want to see how that impacts them. We want to see how it degrades or increases their performance."

Autonomous vehicles reduce risk by expanding the geometry of the battlefield so that, before the threat makes contact with the first human element, it has to make contact with the robots. That, in turn, gives commanders additional space and time to make decisions.

Services have been working for several years to develop armored robotic vehicles, but the vehicles being used in these tests aren’t the actual robotic vehicles that will eventually end up in combat — they’re surrogate vehicles designed to simulate a future platform. These tests are aimed not at the vehicle capabilities, but at how their operators utilize them and to learn how to best use future robotic vehicles to attack an enemy without putting soldiers directly in the line of fire.

Robotic systems are able to do that: they’re mobile platforms equipped with cameras and guns, which could be directed into the line of fire by troops who are well out of range. The vehicles are expected to be smaller and faster than the crewed vehicles that we currently field. Because they won’t actually carry people, they wouldn’t need to be as heavily armored, and could dedicate more space to weapons or fuel.

The goal is the ability to move troops farther away from the field than remote-operated robots would allow, through the insertion of AI and the reduction of manpower necessary to accomplish the mission task without having the soldier there. "We need to move farther forward on that."

We’re starting to really develop knowledge across the Services in some of the areas we are working in. "If a commander expresses a need for it, we’ll be happy to provide the capability."
​
1. Reduced operating costs

Robots enable you to reduce direct and overhead costs, making a dramatic difference to your competitiveness. 

2. Improved product quality

The inherent accuracy and repeatability of robots means you can achieve a consistently high quality finish for every product produced. Robots eliminate the problems associated with tiredness, distraction and the effects of repetitive and tedious tasks.

3. Improved quality of work for employees 

With robots you can improve working conditions for your staff. They’ll no longer have to work in dusty, hot or hazardous environments. In addition, by teaching them how to use robots they can learn valuable programming skills and do work that is more stimulating and challenging.

4. Increased production output 

Robots can be left running overnight and during weekends with little supervision, so you can increase your output levels and meet customer order deadlines. Robots can now also be programmed offline, ensuring new production processes can be quickly introduced for faster production.

5. Increased product manufacturing flexibility 

Robots can add flexibility to your production line. Once programmed, they can easily switch between processes, helping you to meet changes in product design or customer demand with the minimum of effort.

6. Reduced waste and increased yield 

By using robots, you can vastly increase the quality of your products. You will have more products finished on the first run to the standard required by your customers, and reduce the amount of breakages and waste produced as a result of poor quality or inconsistent finishing. With products being produced to such a high level each time, you will gain greater yields.

7. Improved safety 

Robots can readily take over arduous tasks that may be currently undertaken by manual workers. By using robots, you can decrease the likelihood of accidents caused by contact with machine tools or other potentially hazardous production machinery or processes. 

8. Reduced labour turnover and recruitment difficulty 

The high precision demanded by today’s industrial processes requires the highest levels of skill and training. Once programmed for your process, robots are ready to begin work with none of the costs associated with recruitment or ongoing training. Robots can also offer greater flexibility, both in terms of work patterns and the ability to handle different production tasks.

9. Reduced capital costs

Using robots to achieve faster, more efficient production lines can help reduce capital costs associated with inventory and work in progress. By moving products faster in production, businesses can better predict the production rate and ensure a fast and efficient service is delivered.

10. Save space 

Robots can be mounted in multiple configurations to help you save highly valuable space in manufacturing areas. They can also be programmed to work in confined spaces so you don’t lose valuable floor space.

Top 10 Digital Systems Engineering Models Develop Speed Product Design Specs Mission Test Space

8/1/2019


 
The Pentagon has come to the conclusion that advantage in future battles depends on our ability to fuse vast amounts of information to accelerate our decision cycle and guarantee the success of any mission. Victory in combat will depend on us becoming a Digital Organisation.

The Digital initiative encompasses comprehensive changes to how the force gathers, uses and shares data. “We must control and manipulate massive volumes of information to out-think and out-maneuver our opponents. The Digital initiative will ensure all Troops have uninterrupted access to the data they need, where and when they need it.”

“Our goal is to eliminate the time Troops spend building PowerPoint slides to display information needed to make a decision. We should create or have tools to hook into comprehensive data streams to provide real-time information for rapid, data-driven decisions rather than solely relying on personal experience, intuition and interpretation.”

The Digital initiative calls for several connected elements of reform. The Services must develop a modernised information technology infrastructure to serve as a common backbone for data and information flow. It must institute data standards that allow the diverse elements of the force to share data and use artificial intelligence platforms.  It is also key to adopt agile business practices to improve effectiveness and efficiency.

“Besides our laser-focus on driving the Network services with greater agility and scale, we’re also ensuring security practices are ‘baked in’ and that Troops are empowered to leverage a global enterprise of data and services.” The Digital Force must also design and use standardised policies and network protocols to ensure the free exchange of data between platforms. 

“We are charged with harnessing the power of data for timely decision-making and mission success. With new data management practices, we will improve readiness, increase mission effectiveness, reduce the total cost of operations, improve network security and make rapid, accurate, data-driven decisions.”

A Digital Force requires improved day-to-day business operations via application rationalisation, system consolidation, enterprise optimisation and continuous process improvement. These efforts will allow us to continuously identify cost-savings opportunities to directly fund digital transformation efforts.

Ultimately, the Digital initiative aims to harness the power of the digital era. “We must move beyond antiquated processes, systems and mindsets. We will pursue new ways to leverage technology and instil a culture of innovation and informed risk-taking.”

The Digital initiative focuses on eliminating “antiquated processes” and overhauling how the service maintains, uses and shares data and information, amid difficulties coordinating systems across air, land, sea and network domains to react to the demands of modern combat. 

Becoming a Digital Force requires data that is gathered, stored and transmitted in commonly read and processed formats, to minimise the delay between receiving, processing and using information derived from multiple systems to meet our tactical and strategic ends.
​
“The future DoD digital infrastructure will provide seamless, agile, resilient, transparent and secure infrastructure and services that advance DoD information superiority and simplify information sharing with mission partners.”

The Digital Blueprint looks to improve how data collected by airborne intelligence, surveillance, and reconnaissance platforms is shared, drive the Pentagon toward a greater command of electronic warfare, and turn to quick contracting methods to bring new technology on board faster.

Successful implementation of digital model-based systems engineering solves integration and training problems, and reduces the time and burden required to design, vet, redesign, test and evaluate new planes, ships, weapons and more.

First, program offices need to sit down with operators and understand the requirement on a tactical level: what mission needs to be accomplished, what capability is needed, what threat is being countered, how will the system be used, who will use it, and more. If that information is all included in a computer model, NAVAIR can insert a notional placeholder aircraft or weapon into the model and pass it along to industry to actually engineer.

“Operators want to know our capability has been fully characterised, so not only do they know what it does, but they know what it doesn’t do – equally important to them when they take it into combat.” 

“When we deliver that capability to them on Day 1, it should be fully integrated with the environment they’re expecting to utilise it in, which today is done poorly, and we should give them a system on Day 1 that they can fully train with. Fully train with, 100 percent of the capability we’re giving them.

Currently we’re telling them what it can do, but typically we also go, ‘don’t worry, your training system, your digital simulator software, it’s an iteration or two behind, but it’ll catch up.’ Well, it usually doesn’t catch up, because we keep rolling the operational tools forward while the training tools lag behind.”

“We’ve got a digital model of our threat; we’ve got a digital model of our blue forces, we got environmental models, whether I’m operating in an electromagnetic warfare spectrum or in the acoustic spectrum under the water; it’s all done with digital models.” 

It’s important for industry to engineer the new product within that digital model, instead of today’s practice of, “we write a 500-page specification with 20,000 shall-statements, and we give it to industry and go, here, design this. We don’t give them the threat models, we don’t give them the blue force models, we don’t give them that system of systems family model we just built. 

If industry can work within that digital environment, then little changes can be made along the way – swapping one sensor for another, for example – without wondering how that change may affect the aircraft’s aerodynamics or its low-observability or other features that today are designed separately on paper. 

With a digital drawing in a threat-representative environment, the sensor could be swapped out and thousands of possible engineering solutions generated until the best one is chosen, all in a matter of hours or days rather than the months it would take with today’s processes to make those changes.

Benefits to this type of design effort continue throughout the test and evaluation, fielding and sustainment phases.

For example, in lieu of a paper-based design review, as industry meets various milestones in maturing its design “let’s take those digital models and let’s put them back in the tactical scenarios we developed with the operator back in step 1 and just see how it goes. 

What better evaluation or assessment of how the program is maturing than to actually run the current level of maturity of performance that we see in our models through the tactical situations we’ve built with the operators? Because in the end that’s what matters, in the end capabilities-based test and evaluation is about testing the capabilities – it’s not about ensuring industry met every one of those 20,000 specs. 

That’s where we spend all our time today during T&E, validating that industry met the specs. The fleet couldn’t care less, the fleet wants to know that the attributes and the capabilities that they’re counting on will be met.”

That sort of constructive testing – all conducted within the simulated environment – could pave the way to eventual virtual testing once the first hardware is delivered, which would then pave the way to eventual live testing in the field with real operators. That flow would make the best use of everyone’s time and allow any problems to be addressed as early on as possible.

Developing and maturing a “digital representation of that system and how it interacts with its environment” would also go a long way in delivering relevant trainers from the outset, rather than today’s simulators that lack the latest updates or present inaccurate threat environments. 

The digital representation would also help with sustainment efforts throughout the life of the program, as models could help show how the system would hold up over time and in different environments.

“We’re in the process of building that model-based specification, that digital spec for industry right now. When we get that part right and build that spec, it’s in industry’s hands to continue to make best use of that model environment.”

We want to learn a lot about model-based engineering with new systems without bogging the program down with new processes and design tools. To supplement that learning effort, Navy and industry could conduct a “Surrogate System Experiment” to help identify potential kinks in the new process: bring this group – representation from the organisations in the room today – into a collaborative environment where we can actually build a surrogate program and execute that model, the capabilities-based acquisition model.

“Find out where the hard spots are, find out where we have to go soft, find out what is that deliverable, what kind of contract will work, where do we have to hand off between DoD and industry, how do we truly make that integrated digital environment work in secure network environment?”

NAVAIR intends to begin implementing the digital model-based engineering concept into any new design, capability upgrade or sustainment program it can, seeking opportunities to learn as quickly as possible, “as opposed to waiting for the big bang on a brand new program.”

Digital model optimisation and Generative design programs were developed to let design engineers seek and evaluate a range of possible design options, based on a mix of time-tested finite element analysis. In some ways, they are the opposite of CAD. Some of them include a limited set of geometry construction tools, but they function primarily as form-generating engines, sometimes offering atypical geometry that users may not have imagined. 

Recently, the introduction of topology optimisation and generative design elements in CAD has started to blur the line between parametric modeling and Generative design. 

Topology optimisation is specific to the exploration of digital shapes, structures and solid geometry. It’s usually associated with aerospace lightweighting projects, where engineers seek ways to reduce material without risking the design’s safety requirements. For example, aircraft engineers may redesign a landing gear to weigh somewhat less, with the same load-bearing capacity. 
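The lightweighting loop described above can be caricatured in a few lines: keep removing the least-loaded material while a stress limit still holds. This is a deliberately toy, uniform-stress idealisation with assumed numbers, not a real topology-optimisation solver or landing-gear data.

```python
# Toy lightweighting sketch: greedily remove material cells from a
# cross-section while keeping peak stress under an allowable limit.
# All numbers are illustrative assumptions.

LOAD_N = 120_000.0          # total load the part must carry (assumed)
STRESS_LIMIT = 250.0        # allowable stress in MPa (assumed)
CELL_AREA_MM2 = 20.0        # area of one material cell (assumed)

def peak_stress(n_cells: int) -> float:
    """Uniform-stress idealisation: load spread over remaining area."""
    return LOAD_N / (n_cells * CELL_AREA_MM2)

def lightweight(n_cells: int) -> int:
    """Remove cells one at a time until the stress limit would be exceeded."""
    while n_cells > 1 and peak_stress(n_cells - 1) <= STRESS_LIMIT:
        n_cells -= 1
    return n_cells

start = 60
final = lightweight(start)
print(f"cells kept: {final}/{start}")
print(f"mass saved: {100 * (start - final) / start:.0f}%")
print(f"peak stress: {peak_stress(final):.1f} MPa (limit {STRESS_LIMIT})")
```

A real solver would redistribute load through the remaining geometry with finite element analysis rather than assuming uniform stress, but the shape of the loop, remove material until a constraint binds, is the same.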

“Engineers were not trained to formulate the problem. They were trained to find solutions.” Formulating the problem, as it turns out, is the key to generating good geometry. 

Engineers getting started in Generative design can certainly pick it up faster if they understand how to apply realistic engineering constraints. Moreover, we see a future that is more multidisciplinary, making engineering concurrent. This brings team knowledge to the product ideation phase, which can help relieve the burden on the mechanical engineer to define the problem and create the solution with such a new methodology. 

Functional Generative design tools enable you to move beyond the boundaries of traditional CAD modeling and harness the power of 3D printing and digitised models. 

Generative design inquiries are usually not yes/no questions (will it break or will it hold?); they’re formulated as “what” questions: under these conditions, what are the best bracket design options to secure the fuel tank? As you add constraints or parameters, such as an acceptable weight range for the bracket, preferred manufacturing materials and more, the design options change. 
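A “what” question of that kind can be sketched as a constrained search over design parameters: enumerate candidate brackets, keep those that satisfy the constraints, and rank the survivors by weight. The materials, dimensions and limits below are illustrative assumptions, not real bracket data.

```python
# Sketch of a generative "what" query: enumerate bracket options,
# filter by stress and weight constraints, rank by weight.
from itertools import product

MATERIALS = {"aluminium": (2.70, 90.0), "steel": (7.85, 250.0)}  # g/cm3, MPa
LOAD_N = 5_000.0           # load the bracket must carry (assumed)
MAX_WEIGHT_G = 400.0       # acceptable weight limit (assumed)

candidates = []
for (name, (density, strength)), thickness_mm in product(
        MATERIALS.items(), [3, 4, 5, 6, 8]):
    area_mm2 = thickness_mm * 40.0            # assumed 40 mm wide strap
    stress = LOAD_N / area_mm2                # working stress, MPa
    volume_cm3 = area_mm2 * 120.0 / 1000.0    # assumed 120 mm length
    weight_g = volume_cm3 * density
    if stress <= strength and weight_g <= MAX_WEIGHT_G:
        candidates.append((weight_g, name, thickness_mm))

for weight, name, t in sorted(candidates):
    print(f"{name:9s} t={t} mm  {weight:6.1f} g")
```

Adding another constraint, a tighter weight range or a preferred material, simply shrinks or reorders the candidate list, which is the behaviour the paragraph above describes.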

“The design process has always been Generative. It’s just that computers have never been part of that Generative process.” Comparing CAD programs to Generative design programs may be as counterproductive as comparing apples and oranges, because the two are doing different things. “You need enough CAD interoperability to get you going with Generative design.” 

“We’ve always had optimisation in our tools, but it’s driven by dimension. For the first time we introduced topology optimisation. This is where, based on loading conditions you supply, the tool takes away material from the base geometry you supplied, and provides you with an organic, optimal shape.” 

The new feature gives you a choice to define whether to produce the part with additive or subtractive manufacturing. “Based on that specification, the tool removes material very differently for each approach.” Because the optimised geometry is digitally modeled, you can use the familiar CAD tools to refine the geometry in the final phase. 

There is a distinction between Generative design and computational design. “In Generative design, you use the computer to assist you in finding the right design. With computational design, you may design a framework that lets you combine a heavy data set, concepts and ‘build-ability’ factors to guide your design.”
 
A good computation model should include rules to minimise failures. “Let’s say you have an unusual building envelope shape, and you want to explore ways to build it in titanium or glass-reinforced fiber cement. You can build those material behaviours into the computational model, so the tool knows what the maximum or minimum curvatures allowable are.” 

Design engineers with less computational skills may prefer a Generative design tool that has built-in rules that automatically exclude forms that cannot be manufactured or constructed. On the other hand, some prefer an open system that engineers or architects can use to build a framework of constraints, to limit the available options to fit the manufacturing or building criteria. 

Quite often, what is digitally optimal—geometry with sufficient material reinforcement to counter the anticipated stress in different regions—is impractical to manufacture or produce, either due to cost concerns or the limitations of the production methods available. 

Additive manufacturing now gives the option to 3D print certain complex geometric forms that cannot be machined; however, even with AM, there are certain rules about printable geometry. 

“Many topology optimised structures have thin walls and complex geometries with lots of overhanging features, both of which may be challenging even for AM. Some of the new Generative design tools take these into account, but most do not—and engineers are not used to making these trade-offs during the design process. Before AM came along, we didn’t even know these trade-offs existed, let alone how to account for them in our design tools.”

In some Generative design programs, the manufacturing feasibility rules are part of the Generative formulas; in others, they’re not. If they’re not, it’s up to the design engineers to work with the manufacturing engineers and construction engineers to account for them in the Generative setup. Otherwise, the range of solutions proposed by the tool will most likely include problematic geometric features. 

Math deals with the absolute; reality is full of uncertainty and unexpected changes. This means the Generative design solution you get is perfect for the problem you’ve defined, but it might not be perfect for the real problem. You may have defined the rocket fuel tank pressure vessel with all the load cases you can think of, but in reality there may be effects of sloshing, which you didn’t consider or know how to define. So the optimisation tool didn’t take that into account. 

Before digital optimisation made lightweighting possible, engineers usually over-engineered parts to account for unanticipated load cases. They built parts to be much stronger than necessary, just in case. As lightweighting becomes the norm, over-engineering is now seen by many as a wasteful practice. But is engineering parts with barely enough strength a good idea? 
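One way to make that trade-off explicit is to size the part with a stated safety factor instead of blanket over-engineering. The worked example below is illustrative only; the load, yield strength and safety factors are assumed numbers, not real design data.

```python
# Sizing a tension member with an explicit safety factor:
# required area = load * safety factor / yield strength.
# All values are illustrative assumptions.

LOAD_N = 50_000.0      # applied load (assumed)
YIELD_MPA = 300.0      # material yield strength (assumed)

def required_area_mm2(safety_factor: float) -> float:
    """Minimum cross-sectional area for the given safety factor."""
    return LOAD_N * safety_factor / YIELD_MPA

conservative = required_area_mm2(3.0)   # "just in case" sizing
optimised = required_area_mm2(1.5)      # lightweighted sizing
print(f"{conservative:.0f} mm2 vs {optimised:.0f} mm2 "
      f"({100 * (1 - optimised / conservative):.0f}% less material)")
```

The arithmetic shows why lightweighting is attractive: halving the safety factor halves the material, but it also halves the margin against the load cases nobody thought to define.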

Accessibility to various users’ experience levels is key. 

“We have worked very hard to make it as easy as possible to start using Digital Generate. It’s difficult to predict what the user may or may not understand up front. One thing is clear: after some hands-on demos, a light goes on and they’re excited to rapidly start iterating with Generative design.” 

The primary objective of understanding and documenting user constraints is identifying the system/capability requirements of parts critical to the mission. A requirement can be defined as 1) a characteristic that identifies the performance levels needed to satisfy specific objectives within a given set of conditions, and 2) a binding statement in a supplier contract.

There are three basic types of requirements: functional, performance, and constraint. Functional requirements identify the necessary task, action, or activity that must be accomplished -- what the system/capability must provide. Performance requirements characterise how well the system/capability must perform a function when subjected to expected conditions. Constraint requirements are subject to the restrictions placed on a system/capability through policy, procedural, technology or interface conditions. 
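The three requirement types can be captured as data so they can be filtered and traced; a minimal sketch follows, with invented example requirements for illustration.

```python
# Minimal sketch of functional, performance and constraint requirements
# as data. Identifiers and requirement texts are illustrative.
from dataclasses import dataclass
from enum import Enum

class Kind(Enum):
    FUNCTIONAL = "what the system must provide"
    PERFORMANCE = "how well it must perform"
    CONSTRAINT = "restrictions placed on it"

@dataclass
class Requirement:
    rid: str
    kind: Kind
    text: str

reqs = [
    Requirement("R1", Kind.FUNCTIONAL, "Detect targets within the sector"),
    Requirement("R2", Kind.PERFORMANCE, "Detect at >= 10 km in clear air"),
    Requirement("R3", Kind.CONSTRAINT, "Use the approved tactical data link"),
]

# Filtering by type is then a one-liner.
performance = [r.rid for r in reqs if r.kind is Kind.PERFORMANCE]
print(performance)  # ['R2']
```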

The source of requirements is the customer, i.e., the purchaser or users of the system/capability, including the acquirer, user, manufacturer, installer, tester and maintainer. User requirements are often not adequate for design purposes, since they are usually stated in nontechnical terms, i.e., mission-critical functions and expectations.

User requirements become clear, unambiguous and measurable as they are derived into technical requirements achievable through the application of technology. Requirements development activities include mandating Logistics programs that respond to capabilities or requirements processes; the desired requirements balance what is achievable with what is acceptable to the stakeholders.

Regardless of acquisition category, DoD must apply a robust systems engineering approach that balances total system performance and total ownership costs within the family-of-systems, systems-of-systems context. Industry Partners must execute Systems Engineering Plans, approved by the Milestone Decision Authority, integrated within the Acquisition Strategy.
 
The plan must describe the overall technical approach of Industry, including processes, resources, metrics, and applicable performance incentives. It must also detail the timing, conduct, and success criteria of technical reviews. Systems engineering can be defined as an iterative process of top-down synthesis, development, and operation of a real-world system that satisfies the full range of system requirements. 

Industry must provide systems engineering teams working together on a set of inputs to achieve the desired output, where the output is a system/capability that meets the user’s needs and requirements in a near-optimal manner. 

Systems engineering must account for the entire range of the system/capability acquisition, including development, manufacturing, construction, deployment/fielding, operation, support, training, and verification. Systems engineering ensures that the correct technical tasks are accomplished during the acquisition process through planning, tracking, and coordination of activities. Lead Systems Engineers are responsible for: 

1. Elicit requirements from customers and potential product/service users

2. Validate and prioritise customer/user requirements

3. Define executable and verifiable requirements solutions

4. Isolate/verify balanced and robust solutions that best meet requirements

5. Execute total system design solution to balance cost, schedule, performance, and risk

6. Track decision-making events to ensure technical information meets customer requirements

7. Create cost-effective and supportable system throughout asset service life
 
8. Adopt open systems approach to monitor internal/external interface compatibility for systems and subsystems

9. Establish baselines and configuration control for each phase in engineering process

10. Provide for focus/structure of interdisciplinary teams for system and major subsystem level design.
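Responsibility 6 above, tracking decisions against customer requirements, amounts to keeping a traceability table from each requirement to the events that verify it and flagging the gaps. The sketch below uses hypothetical requirement and event identifiers.

```python
# Sketch of requirement-to-verification traceability: record which
# verification events cover each requirement and report uncovered ones.
# All identifiers are illustrative.

trace = {
    "R1": ["SIL-test-04"],
    "R2": ["flight-test-11", "model-run-02"],
    "R3": [],                      # no verification planned yet
}

uncovered = sorted(rid for rid, events in trace.items() if not events)
print("uncovered requirements:", uncovered)
```

In practice the same table is what lets a review board answer “is every requirement verified somewhere?” without rereading the specification.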
1 Comment

Top 10 Digital Twin Switch Transform Tools Create Products During Process Test Simulation Operations

8/1/2019

0 Comments

 
​Digital models are used in a variety of applications and development workflows. They can vary in degree in how they match a physical device. Models can be used as a Digital Twin or in simulations in addition to models used to design a system.

“The Digital Twin will mean different things to different people. If you do operate a power station, it doesn’t matter what the initial design is. But, if you have a certified family of aircraft then it really does matter what the original design records were and how to integrate them.”

We connect this information with Blockchain. We see the Digital Twin as a level of intelligence to predict real world performance and the Blockchain is the connectivity and context for operational decisions. It connects the design, operation and simulation information together.”

Ultimately, a Digital Twin will unify all the data an organisation needs. “The digital model matures through the product design, manufacturing and operation. This digital connectivity through the life cycle can be described as a Blockchain, with data from all stages being fed back into the product ideation and creation stages.”

A manufacturer/operator can link the Digital Twin to its service, manufacturing, design history, real time data and simulation models specific to its configuration and expected failure modes. Comparing these simulation outputs with actual results provides valuable insights into the condition of the asset.”

“A Digital Twin is a dynamic digital representation of a live physical object and needs to represent specific aspects of physical objects like shape, working state and structural behaviour. Digital Twins will dynamically change in near real-time as the state of the physical object changes.”
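That dynamic behaviour can be sketched as an object whose state is updated by every incoming sensor reading, so the twin tracks the physical asset in near real-time. The `PumpTwin` class, its field names and its temperature threshold are all illustrative assumptions, not any particular vendor's API.

```python
# Minimal sketch of a Digital Twin mirroring a live asset: each sensor
# reading updates the twin's state; history is kept for later analysis.
from dataclasses import dataclass, field

@dataclass
class PumpTwin:
    asset_id: str
    state: dict = field(default_factory=dict)
    history: list = field(default_factory=list)

    def ingest(self, reading: dict) -> None:
        """Apply a sensor reading so the twin tracks the physical pump."""
        self.history.append(reading)
        self.state.update(reading)

    def is_overheating(self, limit_c: float = 80.0) -> bool:
        return self.state.get("temp_c", 0.0) > limit_c

twin = PumpTwin("pump-07")
twin.ingest({"temp_c": 62.0, "rpm": 1450})
twin.ingest({"temp_c": 84.5})          # rpm carried over from last reading
print(twin.state, twin.is_overheating())
```

The key property is the last line of `ingest`: the twin's state always reflects the most recent readings, which is what distinguishes it from a static CAD model.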

Digital Twin is simply a virtual representation of all the information users need to supplement their work—no more, no less. It’s a question of scope. Sure, an organisation can gather more data than that one user might need. But that would simply mean there are more Digital Twins for each asset, user or relationship or one Digital Twin that filters data accessible by a user’s role.

Most process control systems that deliver sensor data to a control program provide at least a limited digital model of a component within the system. However, these days the model can be more robust and combined with other tools like VR. For example, a heat sensor might show which part of a device is hotter by rendering that portion of a model with false coloring, with red indicating higher temperatures.
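The false-coloring idea reduces to a mapping from a sensor value to a display channel. The sketch below maps temperature linearly onto a red intensity; the temperature range is an assumed scale, not a standard.

```python
# Sketch of sensor-driven false coloring: map a heat reading onto a
# 0-255 red channel, clamped at the ends. The 20-100 C range is assumed.

def temp_to_red(temp_c: float, lo: float = 20.0, hi: float = 100.0) -> int:
    """Linear map of temperature to a red intensity, clamped to [0, 255]."""
    frac = (temp_c - lo) / (hi - lo)
    return round(255 * min(1.0, max(0.0, frac)))

print([temp_to_red(t) for t in (10, 60, 120)])
```

A rendering layer would then apply that intensity to the matching region of the 3D model, so hotter parts of the physical device show up redder on the twin.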

Use as a tool kit: VR is “Twinned” with designs after they have been developed in the standard design space where models are created and subsequently imported into the digital space for virtual assessment and evaluation.

VR enables users to create designs in a virtual space by pulling, pushing, or stretching, rather than generating them on a screen. Users can create digital 3D objects in front of their eyes, in real time, thus saving time where they would have had to master complex tools to do the process.

As manufacturers look to grow and improve their design processes and customer satisfaction levels, using VR will undoubtedly become more conventional. VR tools could enable engineers to carry out important analysis on structures, and consumers will be able to enjoy an even greater level of interactivity, going even further to bring object design into the real world.

The developing relationship between digital design and VR is undeniably exciting, both for those in the industry and for consumers. Currently, VR remains relatively niche. It may take many more years for it to become mainstream, but there is no doubt that the ability to design, manufacture, sample, and customise in the VR space will – at some point in the not so distant future – become the norm.
 
Many VR tools can be a powerful package for product development, allowing the import, manipulation or creation of 3D geometry. Some feature a Generative design tool that iteratively optimises designs given certain constraints. 

A Digital Twin has the latest sensor data associated with a matching physical device. A digital twin is often used in process control and product lifecycle management to help monitor or control a remote system. The model doesn’t necessarily need to exactly replicate the physical device. It may even be a 2D representation, but it’s typically combined with other models to provide a context for the information that can be presented or examined.

A computer-aided design (CAD) model is a representation of a physical entity, and it’s typically used to describe what a physical entity will look like. This can be a 2D or 3D architectural model of a building or a device such as a car. The CAD model provides dimensions and possibly descriptions of materials that would be used in construction.

Tools using CAD designs, digital twins, and simulation models may share characteristics depending on their function, although often a specific tool will create and manipulate a model. For example, a CAD drawing package may be used to create a digital model, and then a process control system would use that model as the basis for a digital twin. That software may provide the linkage between the digital twin’s sensors and controls with those in the real world.

Likewise, a model used in simulation may have characteristics added so that physical simulation is possible. This might include details about the virtual materials used in the model, which in turn would enable the simulation software to replicate how the model will react during the simulation.

When the scope of the Digital Twin is limited, there is a tendency to swing the pendulum too far the other way and conclude that the Digital Twin is just a CAD model. The problem here lies in confusing the twin with a model. As stated above, for a Digital Twin to count as such, it needs a physical counterpart with which it can interact.

“While the term Digital Twin is often confused with a 3D CAD model, in reality the Digital Twin is significantly more complex. The Digital Twin refers to a specific real-world asset in service in the field and represents the exact configuration of the product at a point in time. By combining data with an exact product configuration, service and manufacturing processes can be optimised and design improvements identified.”

Digital Twins can be used before a product is live. “The Digital Twin is a means to design and optimise end-user experiences. It is used before a real product or service is produced, and during the lifetime of the real product the Digital Twin is used to monitor and adapt the real twin’s functions and performance. 

For that purpose, the Digital Twin has to be able to behave like its real twin, being equipped with all its knowledge, capabilities and characteristics.” Once you hook some sensors up to a prototype and link the data back to a digital model you now have a Digital Twin that is live before the product is.

In the end, what sets a CAD model apart from the Digital Twin? The CAD model does not automatically change in response to changes in a physical asset.

Digital models used in simulations often have the same type of sensor information and controls of a digital twin, but the information is generated and manipulated as part of the simulation. The simulation may replicate what could happen in the real world, but not what’s currently happening.

A digital twin could be used as a starting point for a simulation model that perhaps extrapolates how a system would operate in the future. The degree and accuracy of these simulations can vary depending on the implementation of the simulation and what type of results are desired. 

For example, a digital twin of a gas engine could simply track material consumption, power output, and heat output, but not the actual movement of components within the engine. This level of simulation may be sufficient for checking out how a vehicle would operate when using such an engine.

On the other hand, if the desired results involve how durable a particular part would be within the engine, then the level of detail with respect to the engine would have to be greater. Likewise, simulation of a self-driving car may need to know the output and control characteristics of the engine, but not the details within the engine.
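The gas-engine example above, tracking fuel consumption, power and heat without modelling internal moving parts, might look like the low-fidelity model below. All coefficients are assumptions chosen for illustration, not measured engine data.

```python
# Low-fidelity engine model of the kind described above: per time step it
# tracks fuel consumed, shaft energy and waste heat, nothing internal.
# Coefficients are illustrative assumptions.

def step(throttle: float, dt_s: float) -> dict:
    """One simulation step at a given throttle setting (0..1)."""
    fuel_rate = 0.002 + 0.018 * throttle      # kg/s (assumed)
    power_kw = 110.0 * throttle               # shaft power (assumed)
    heat_kw = 0.6 * power_kw                  # waste-heat fraction (assumed)
    return {
        "fuel_kg": fuel_rate * dt_s,
        "energy_kj": power_kw * dt_s,
        "heat_kj": heat_kw * dt_s,
    }

totals = {"fuel_kg": 0.0, "energy_kj": 0.0, "heat_kj": 0.0}
for throttle in [0.3, 0.8, 0.8, 0.5]:         # a short mission profile
    out = step(throttle, dt_s=60.0)
    for key in totals:
        totals[key] += out[key]
print(totals)
```

This level of fidelity is enough to check how a vehicle performs with the engine; predicting part durability would require replacing `step` with a model of the internal components.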

Any system component may incur different models that vary in the degree they replicate the actual component, as well as how they react and what kind of information can be associated with them. The models may have different purposes, but they may also share common descriptions such as details about dimensions, material attributes, etc. Many models will be used by multiple applications for different purposes, from showing the status of a current system to simulating a device that has yet to be constructed.

“Using the Digital Twin in simulation can both improve operational procedures and, in contingency planning, even be embedded in the control system loop. In the future, Digital Twins in manufacturing will help detect potential quality issues earlier, or even improve the quality of the product being manufactured through delivery of new design tech.”

Engineers will be able to use simulations linked to the Digital Twin to predict how the physical twin will perform in a real-world environment instead of the ideal and perceived worst-case conditions outlined in the design process.

“The Digital Twin by itself doesn’t do any good unless it has interactions with its environment. So, you also need to model its environment. Depending on the nature of the interactions with the environment, and how well it is instrumented, that may well be the bigger challenge. For the system itself, you have the option to put in whatever sensors you need, but for its environment you don’t get to do that.”

Even so, this is why it will be important to link up various simulation technologies to Digital Twins to meet the application at hand. Sure, you might need some slower 3D simulations for a twin in the design or prototyping stage. But for operations, 1D simulations and 1D characterizations of 3D simulations are often sufficient.

“Traditional simulation performed during the design phase of a product’s lifecycle can be perceived as a slow process, because many different use cases must be investigated based upon best estimates of the conditions the product will be subjected to in the real world. But, with the benefit of large amounts of data, actual operating conditions can be simulated with confidence, quickly yielding actionable intelligence.

The Digital Twin is a tool that can potentially account for the whole system of a product or service. It keeps track of all the information about a system you need and from that information assists in the decision-making process. 

Digital Twins are “a digital model that accurately represents a product, production process or the performance of a product or production system in operation.” A Digital Twin is a representation of a real thing. This precludes simulations from counting as twins all by themselves. “A Digital Twin isn’t a twin until it has a twin. A physical product must exist.”

Digital Twin is often related to model-based thinking because it links real world data to a systems engineering model of the whole lifecycle of a physical product or service. Though a twin can focus on one aspect of a product, its full potential is unleashed only when its usefulness spans multiple silos in an organisation.

“If we take the life cycle we have design, manufacturing, services and operations, and then end of life. The benefits of the Digital Twin for each step is different. For design, the twin’s main purpose is to set the performance of the product for the lifecycle. For manufacturing, it’s to optimise the process and reduce costs. For services, it is to reduce the operational cost and to use predictive methods. The idea is to bring simulation into this lifecycle information: you have a physical asset where you link it to a parallel Digital Twin.”

A sufficiently robust Digital Twin could lead us to do away with the concept of job roles altogether. “To unify and understand the enormous and diverse information about the Digital Twin, innovators have to overcome traditional, siloed-expert thinking. Of course you need the capabilities to scientifically and physically simulate all the pieces working together as intended. But engineers also need methods and tools to foster a behavioural dimension to their structured, physical and procedural information. 

So, what does a system-level twin look like? To the individual user it will look like any other Digital Twin, since it will only give them the information they find interesting. After all, why would sales or marketing need to know everything the engineering team needs to know?

So, engineers will be able to make a system-wide simulation of a product or service for the Digital Twin. A Digital Twin can incorporate 3D data/simulations, characterisations of the 3D data/simulations using response surface models, 1D simulations and 0D simulations. The 1D and 0D simulations, as well as the response surface models, are used to speed up the system models so they are no longer waiting for slower 3D simulations.
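The response-surface idea amounts to fitting a cheap function to a handful of samples from an expensive simulation, then evaluating the cheap function inside the system model. In the sketch below a simple quadratic stands in for the real 3D solver, so the fit is exact; the point is the workflow, not the function.

```python
# Sketch of a response-surface surrogate: sample an "expensive" solver a
# few times, build a quadratic through the samples, then evaluate the
# quadratic cheaply. expensive_sim is a stand-in for a real 3D solver.

def expensive_sim(x: float) -> float:
    return 3.0 + 2.0 * x + 0.5 * x * x        # pretend this takes hours

def quad_through(p0, p1, p2):
    """Quadratic through three points, via Lagrange interpolation."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    def f(x: float) -> float:
        l0 = (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
        l1 = (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
        l2 = (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1))
        return y0 * l0 + y1 * l1 + y2 * l2
    return f

surrogate = quad_through((0.0, expensive_sim(0.0)),
                         (2.0, expensive_sim(2.0)),
                         (4.0, expensive_sim(4.0)))
print(surrogate(3.0), expensive_sim(3.0))   # surrogate matches the model
```

With a real solver the surrogate would only approximate the response, and more sample points and a least-squares fit would be used, but the speed-up mechanism is the same: the system model calls the surrogate, not the solver.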

Some argue they can sketch in 2D quicker. But this only refers to one aspect of a design i.e. how it might look from chosen angles. Even then, a sketch is at best an artistic prompt for what something might be. It’s not spatially exact. It has its place, but it is limited. 

This leads us onto the question: ‘is it possible to design only using digital tools?’ That question is probably best answered with another question: ‘is it possible to design only using physical tools?’ The extraordinary capability that digital tools have given us means the latter is no longer commercially viable at scale.

But at the same time, despite being an advocate for a digital-first approach to design, as long as we are designing physical products we will never completely remove the need for physical tools. The trick is knowing when it’s best to use them and understanding why you would use them over quicker, more efficient digital alternatives.

Designers only working with sketches would have to get a sketch signed off, move onto the next part of the project, and even the part after that, before they realise that what they designed earlier on isn’t actually possible. The team will then have to start deviating from the original design, which has already been signed off on, and you end up a million miles from the concept and the finish line of the project. 

Some designers instinctively want to use pencils to sketch, which is a fantastic skill to have. But we teach them how they can sketch faster and more intuitively in 3D. Once they get past the initial block of learning how to express themselves in a new medium they get on much better because it offers far greater opportunities and benefits in other parts of the project. 

For example, as technology develops, we’re finding that it’s often easier to show clients videos and animations of our concept work to help them understand the design, rather than a static image or sketch. That stage can be reached very quickly if you’ve worked in digital from the outset.

At the point of launch, new products often have things that, in hindsight, could have been done better. Even so, there is no reason to invite that outcome by ignoring tools that are already available.

For example, you might be constructing a spatial model in a digital workspace to work on product function, and then notice that a part looks a bit thin and wonder whether it will be strong enough. Because the design already exists in digital form, you can take the data you already have, the same sketch, and with an hour or two of extra manipulation use your computer to test whether it is in fact strong enough.

Of course, further testing and development will be needed, but it’s that ability to troubleshoot and test concepts from the outset which makes designing in digital such a no-brainer where efficiency and productivity are concerned.

If you’ve designed a product in digital and shown a video of it to a customer so they get a good sense of how it looks and behaves from all angles, you could then machine quite a primitive, inexpensive model to show them how big it looks sat in a room with them. In this scenario a physical model doesn’t need to be any more elaborate or expensive than that because its only purpose is to demonstrate physical size and scale. 

The digital work does the rest and is far more powerful and flexible. After all, it’s much easier to change the material of a product on a video than it is to re-spray and re-finish a physical model and while you’re at it you could even find out if the change in material has an impact on weight and function of the product.

In a recent consumer electronics project, we used our digital approach for the vast majority of the design work. To accompany it, we created a basic 3D-printed model to test how it all fit together and how it felt to use. We then moved onto a sampling-up stage before manufacturing. 

We didn’t have to do physical mock-ups and models of how it looked, or spend years developing the design, because of the efficiency of our digital sketching and prototyping approach. It simply gets us to a manufacturing stage far quicker than more traditional approaches.

Designers must make decisions at every stage of a project about how to balance digital and physical approaches to achieve the best results for them and their client. This means sketching, concept work and prototyping belong largely in the digital realm, because it gives us the greatest flexibility and a head start over physical sketches and models that simply aren’t as useful, flexible or powerful as their digital counterparts.

Switching to a digital system can help you win more high-value business. Work directly on customer or design data offline, and keep the machines cutting more often. Here is a list of benefits realised by upgrading to Digital Twin programming.


1. Reduce time spent programming

Typically, the machine is not running while someone programs a part on the controller, or while they perform a dry run prior to making parts. In a digital environment, you can program the part offline while the mill is running something else. You can also simulate the program to show exactly how it will run on the mill, saving the time needed for a dry run. If you program manually, you have to learn to program every different controller in the shop, including future purchases.

2. Better visualising tooling and work holding

When programming by hand, you may have to take into consideration the location of fixtures and clamps to avoid collisions. These can be imported into a digital model and visualised and avoided automatically.

3. Fast part changes

Any time a part changes, one must either sort through the code to make changes or start over. Digital systems maintain associativity between the part and the program, so that if the part is edited, the cutterpaths will automatically update.
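The idea of associativity can be sketched in a few lines of Python. This is not any real CAM system’s API, just an illustration of the principle: because the program holds a live link to the part rather than a copy of its geometry, regenerating the cutterpaths after a part edit picks up the change automatically.

```python
# Toy illustration of part/program associativity (not a real CAM API):
# the toolpath is derived from part parameters, so editing the part
# automatically yields updated cutterpaths on regeneration.

class Part:
    def __init__(self, length, width):
        self.length = length
        self.width = width

class Program:
    def __init__(self, part, stepover=5.0):
        self.part = part          # a link to the part, not a copy
        self.stepover = stepover

    def cutterpaths(self):
        # Regenerated from the current part geometry on every call:
        # simple parallel passes across the part width.
        paths, y = [], 0.0
        while y <= self.part.width:
            paths.append(((0.0, y), (self.part.length, y)))
            y += self.stepover
        return paths

part = Part(length=100.0, width=20.0)
program = Program(part)
print(len(program.cutterpaths()))   # passes for the original part

part.width = 40.0                   # edit the part...
print(len(program.cutterpaths()))   # ...and the paths follow suit
```

The contrast with manual programming is that a hand-written G-code file is a frozen copy: widen the part and every pass has to be found and edited by hand.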
4. Quality of program, maximising output

A digital system can create cutterpaths where the chip removal rate is constant at all times. This will allow you to run at a maximum feed for the chip at all times. This is virtually impossible to calculate manually, and as a result, machine operators often run at reduced speeds most of the time.
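The underlying arithmetic is simple even though tracking it by hand along an entire cutterpath is not. As a hedged sketch with invented numbers: material removal rate is feed times axial depth times radial width, so holding the removal rate constant means raising the feed wherever the radial engagement drops.

```python
# Hypothetical figures; a CAM system effectively performs this
# calculation continuously along the cutterpath.
ap = 5.0            # axial depth of cut, mm
target_mrr = 20000  # target material removal rate, mm^3/min

# MRR = feed * axial depth * radial width, so for a constant MRR the
# feed rises as the radial engagement falls.
def feed_for_constant_mrr(radial_width):
    return target_mrr / (ap * radial_width)

for ae in (10.0, 5.0, 2.0):   # radial engagement, mm
    print(f"ae={ae} mm -> feed={feed_for_constant_mrr(ae):.0f} mm/min")
```

A manual programmer has no practical way to vary the feed point by point like this, which is why conservative, fixed feeds are the norm when programming by hand.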

5. Stop recreating parts

If you are manually programming to a drawing, odds are someone already created a 3D model of the part. Don’t waste time recreating what’s already been designed, work on the original 3D file from the start.

6. Take on more difficult work

Expand your capabilities by taking on work that would be unrealistic to program manually. Not only do many of these more difficult parts pay better, but you now have a larger pool of potential work to draw from.

7. Quote more accurately and quickly

Utilise a digital system during the quoting process to ensure your run times are as accurate as possible. This allows you to more accurately quote parts, and more regularly have the correct profit built in.
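As a toy example of how a simulated run time flows straight into a quote (all figures are invented):

```python
# Hypothetical quoting calculation: the per-part cycle time reported
# by the digital simulation drives the price directly.
cycle_time_min = 12.5     # per-part run time from simulation
quantity = 200
machine_rate = 75.0       # cost per machine hour
setup_cost = 300.0
material_per_part = 4.20
margin = 0.25             # desired profit margin

machining_cost = cycle_time_min / 60.0 * machine_rate * quantity
total_cost = machining_cost + setup_cost + material_per_part * quantity
quote = total_cost * (1.0 + margin)
print(f"quote: {quote:.2f}")
```

Because the cycle time comes from simulation rather than a guess, the margin term is protected: an under-estimated run time erodes profit invisibly, while an over-estimate loses the bid.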

8. Work Offline

Design and program parts while the machine is running another job. Take your work on the road, make updates and changes anytime, anywhere, and upload or save the data as needed.

9. Collaborate better

Have a question on a part, and need a quick answer? Collaboration tools allow you to share your question, on the digital 3D model, with others, and get immediate feedback.

10. Maximise machine utilisation

Besides helping to keep the machines running while you program offline, utilising digital simulation can provide the confidence needed to run unattended or overnight. This can greatly improve machine utilisation and increase profits while simultaneously reducing downtime.

Top 10 Digital Twin Product Development Representations During Simulation Project Phases

8/1/2019

“Simulation-driven design is taking simulation technology and moving it from the middle and late cycles of the design process to the very front of it. This drastically lowers the time it takes to develop products, because instead of going back and forth between detailed design and validation, we put validation, or simulation, at the front of that process.

We use simulation to design the product using things like topology optimisation, or we integrate control systems at the earliest stages, and then when we get to validation, it’s a simple check box instead of an iterative process. So, simulation-driven design is putting simulation at the front of the design process and using simulation technology to create a design instead of using simulation to analyze it later.”

We used to have to wait for a design to come downstream before testing, building or simulating a design. Then, if the design wasn’t up to scratch, we would literally have to go back to the drawing board and try again. Now, armed with design and simulation tools we can effectively simulate early, and decide what strategy to use before getting too deep into the product lifecycle. We are now entering a world of pre-validated design.

“Generative design and topology optimisation are a different concept entirely; this is where simulation-driven design really started to move beyond design validation, with simulation changing from something used to validate a design into the main driver that informs the design.”

 We can basically do a motion simulation, and bring those loads and motions into an optimisation. We’re optimising full assembly-level designs, which is something that no one else is really doing either. We can have multiple components all being optimised at one time.

“We created a whole suite of digital modelling tools that allow you to take those generated concepts and quickly create smooth, organic surface designs. It’s ‘3D tracing for engineers’: you’re basically tracing over the top of an optimised shape and you end up with a final printable design, or a design that’s ready for machining.”

Virtual development makes use of a whole suite of tools for layout, geometry generation, calculation and analysis, and test and evaluation, including the management of digital data. This suite enables teams to concurrently anticipate far more product and process design considerations, far sooner, than was possible in earlier days. Although mostly relying on and working with approximated data, calculations and simulations today come close to verification and qualification test results.

There wasn’t really a clearly defined method of building a Digital Twin in the past. “What people would do in the past would be to just build a virtual prototype. There was no real notion of connecting it to operational data. And there was no real notion of actually deploying the digital twin at scale in an operational setting.
So, simulation is typically used at the design phase. You would build one virtual prototype for your design, validate it, and that’s it. But now what we want to do is to take that design and replicate it as you would with other virtual assets. That concept of scaling out is what’s really new here.”

The concurrent assessment provides timely information on the status of the Digital Twin. Once evident, troubles and issues need to be tracked and monitored to ensure that they are resolved. Otherwise errors and quality nonconformities could perpetuate, creating a “snowball effect” of troubles, with many people using flawed data. The view ahead is an attempt to anticipate future developments and risks based on a good degree of engineering and management judgement.

The Digital Twin enables us to imagine many more cases where small and seemingly negligible issues are placed within a larger geometrical context so that the real problem becomes apparent. That is then a starting point for proactive risk management and for corrective actions.
 
The Digital Twin is a core element in virtual product development, as it is the culmination of the design intent that gives the product an “early face”. If Virtual Product Development is the unifying concept and approach, Virtual Prototyping is the process of digitally testing and evaluating the virtual representations of the product in all aspects of developmental and operational life. The Digital Twin is embedded in a spectrum of prototyping activities which aim to mature the product as quickly, cheaply and reliably as possible.

Digital Twin is the major development instrument to master complexity. As complexity is described among others by its “variety”- the number and difference of system states that elements involved may assume – the Digital Twin gives order to the thousands or millions of parts that in the end will represent the final product. 

It is virtually impossible to anticipate all events e.g. disturbances that will challenge the project’s progress.  The Digital Twin reflects the results of the many design considerations in a dynamically changing environment. It shows impacts of one’s own design on the “neighbours”, thus creating the basis for spatial and functional arrangement to provide and represent a unique reference to anybody involved, wherever located.

The level of detail of digital models changes throughout product development and depends on the requirements and objectives of each phase. The overall aim of the Digital Twin is to allow full visualisation of the product in three dimensions at any time, and to be able to simulate and analyse geometric and functional behaviour.

The Digital Twin is actually the pairing between Configuration Management and 3D design. The aim is to provide a complete digital product in any variants for any customers, regardless of the design phase and for designers who work on them concurrently. In former times usually one standard aircraft was developed at a time.

The major points for the Digital Twin are providing the right data for the right configuration, and having only one representation switched on to avoid geometrical overlapping. This handling of multiple variants and iterations places particular emphasis on effective management, ensuring that the effective, confirmed technical application is embodied in the Digital Twin. This is done using a set of configuration attributes.

The fact is that real systems continuously change because of preconditions and environment alterations e.g. fluidity of boundaries so that certain states are more probable than others. This implies that out of all potential configurations the system might adopt there will be a selection of the favourable ones, hence also an avoidance of unfavourable ones. This strongly influences and determines the “shape” or appearance of actual structural complexity.

The solution is to extract the product structure with parts and assemblies related in different ways. A view is therefore a filtered extraction of a product structure from the digital database according to pre-defined criteria. The challenge is to provide views that correctly reflect the status of the product in near real time.
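A minimal sketch of such a view, with purely illustrative attribute names: each node of the product structure carries configuration attributes, and a view is simply a filter over them.

```python
# Illustrative product structure: each part carries configuration
# attributes (variant, zone, status) used for view extraction.
parts = [
    {"id": "P-001", "variant": "A", "zone": "wing", "status": "confirmed"},
    {"id": "P-002", "variant": "B", "zone": "wing", "status": "confirmed"},
    {"id": "P-003", "variant": "A", "zone": "wing", "status": "draft"},
    {"id": "P-004", "variant": "A", "zone": "fuselage", "status": "confirmed"},
]

def view(structure, **criteria):
    """Filtered extraction of the product structure by pre-defined criteria."""
    return [p for p in structure
            if all(p.get(k) == v for k, v in criteria.items())]

# Only confirmed wing parts of variant A are "switched on", so variant
# B geometry never overlaps it in the same scene.
wing_a = view(parts, variant="A", zone="wing", status="confirmed")
print([p["id"] for p in wing_a])
```

In a real system the filter runs against a design database and the criteria are managed configuration attributes, but the principle is the same: the view is computed, never manually assembled.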

“As-planned” refers to manufacturing planning: how the product shall be decomposed so that all manufacturing operations and constraints are accounted for. One may also call that view ‘as-manufactured’ or ‘as-produced’. It shows how the manufacturing engineer thinks everything will be put together. A closely related view is the “as-built”, which reflects the real sequence of operations in the assembly work space. That accounts for unforeseen events such as supply delays and assembly errors. If everything goes according to plan, the ‘as-built’ and the ‘as-planned’ will match.
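Checking whether the two views match can be as simple as walking both sequences and flagging deviations; the step names below are purely illustrative.

```python
# Illustrative assembly sequences: the planned order versus what
# actually happened on the shop floor.
as_planned = ["rib-install", "spar-join", "skin-fit", "system-routing"]
as_built   = ["rib-install", "skin-fit", "spar-join", "system-routing"]

# Collect every position where the built sequence deviates from plan.
deviations = [(i, p, b) for i, (p, b)
              in enumerate(zip(as_planned, as_built)) if p != b]

if not deviations:
    print("as-built matches as-planned")
else:
    for step, planned, built in deviations:
        print(f"step {step}: planned {planned!r}, built {built!r}")
```

Here the skin was fitted before the spar was joined, perhaps due to a supply delay; the comparison surfaces exactly which steps swapped, which is the information the configuration record needs.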

With discrete checking, 3D design activities are virtually stopped at a certain point in time. After the checks, problem areas are resolved and design goes on until the next check. Continuous checking doesn’t stop the design for checking’s sake but continuously takes “snapshots” of the Digital Twin. Design could have solved the problem in the meantime, but experience on previous programmes indicates that troubles resolved before being documented and communicated are the minority of cases.

The review process is closely related to the check process and is a primary quality improvement and sustainment measure. Whether Preliminary and Critical Design Reviews, everyday technical reviews or dedicated Digital Twin reviews, all have the same supporting activities.

The reviews have to be prepared with the right data, reports, statistics and background information; they have to be run, e.g. doing network flythroughs, showing critical areas, and they need to be followed up. The follow-up is particularly important to track resolutions, feed statistics and report on progress or nonprogress, e.g. for troubles existing for a longer time and to escalate them to higher hierarchical levels should there be disagreements on resolution.

Standard physical techniques have been successful industry practice for a long time but  have a number of shortfalls that are better addressed with their digital substitutes. In fact, they are only an assurance that everything will fit together, in former times based on drawings. 

Updates with modifications or duplications are expensive, as a physical component always has to be produced. Responsiveness to change is relatively low, except for minor changes like drilling holes, and there is an inability to reflect real-time configurations.

So the physical mock-up is usually representative of only one aircraft, and is of very little use afterwards, e.g. after certification. Last but not least, and maybe the major issue, it is quite costly. There are design, labour and maintenance hours to be paid for, not to forget tools and all the materials. In addition, it consumes precious factory floor space.

Digital Twin tools have matured insofar as they cover most requirements for geometrical and functional assessment and, to a lesser extent, operational constraints. Shortfalls have to be compensated with specific tools, e.g. for tolerancing. Fully covering the whole spectrum requires all tools to communicate efficiently with each other, so that results obtained individually can be compressed into an overall view of the Virtual Prototype.

The majority of Digital Mock-up applications, as the classical substitute of Hardware Mockups, cover the geometrical and functional areas. The closer one tries to assess the behaviour and interactions of the product in its environment the more will efforts shift to the right end of the spectrum.  

Another issue focuses on decision making. Deciding means selecting what information is deemed important and which actions will result from it. The process relies on so-called schemes. These are a set of rules that allow ordering and interpreting new experiences and findings. Schemes are grouped in several categories: 

First, there are constraints, which have a restricting impact on possible representations of a system. Among them are rules, norms, standards, guidelines and principles. Constraints also define the degrees of freedom one has for decisions and actions. Then there are patterns, which bring together different dependencies, thereby reducing complexity and presenting a simplified view of a seemingly chaotic system. They support general understanding of proceedings as well as the complete and integrated perception of systems. Last but not least there are models which, as mentioned above, show the strong interactions between the process of perception and that of decision-making.

‘Order’ is defined as a state of many elements of a different kind, where one can make correct assumptions for the rest of elements based on knowledge of their time and/or space parameters, which have a good chance of turning out right. Terms often used for that are “structure”, “pattern”, “configuration” or simply “system”. Order, on one hand, can be made and deliberately planned. 

Such an “organisation” requires a rather clear goal, and people performing concrete tasks that serve a common purpose. As such it is imposed as a kind of force from outside. On the other hand, order can be grown, spontaneously developed. Created through regularity in the behaviour of its elements, it comes from inside and is self-regulating and self-organising. No clear common goals are pursued; individuals follow their own goals. Such an order is more abstract and may cover an uncountable number of circumstances.

In conclusion, Digital Twin tools deliver better overall performance than their physical mock-up predecessors. From a design point of view, managing the sheer size and complexity within the tight development schedule has made it virtually impossible not to use a Digital Twin. The spotlight investigation further supports the view that Digital Twins “produce” considerably fewer quality deficiencies in wing equipping and final assembly than before.

1. Type of parts in mock-up 

These are the different technological types of parts that are actually designed to be in the Digital Twin. The classification scheme goes from simple to very complex. It is important to note that not each and every part has been counted.
 
2. Integration density 

 The more components are to be installed in a certain space volume, the more challenging and complex the integration task becomes. Therefore this indicator approximates the real integration density situation in the mock-up with the reference part count and a simplified wing volume calculation.
 
3. Drawing tool 

This indicator considers the question of how the design or drawing is actually created, e.g. with or without 3D tools, accounting for more or less complexity. It takes into account human perception and correct or flawed realisation of the overall geometrical situation. The plain and discrete assessment of which drawing tool was/is used makes it an object-level indicator.
 
4. Mock-up class 

Mock-up class indicators tell the level of detail as well as the scope required for representation of the geometrical situation in the mock-up. These classifications have the character of requirements and overall concepts, defining what has to be done.
 
5. Mock-up policy 

The mock-up policy describes the generic and operational rules to be followed for the mock-up campaign including quality assurance rules and roles and responsibilities, internally as well as across the supply chain. 

 
6. Mock-up tasks to be fulfilled

This covers the range of tasks and checks that have to be done in the course of the mock-up campaign. It describes the “what” of mock-up activities. This can be rough space allocation studies, detailed assembly-integration checks, maintainability validations or particular risk analyses. Each task can be separately identified and therefore accounted for as a single item.

7. Tool environment and maturity

This indicator addresses the actual tool set situation across the extended enterprise that prevailed for most of the time of the investigational time frame. The two major distinctions are the questions of a homogenous versus a heterogeneous tool environment situation and their maturities. This situation determines operational playing rules with respective efficiency gains or performance shortfalls. 

 8. Number of components ratio

This is the number of parts and assembly components in the wing compared to those being created for the mock-up as a whole. Also based on the “reference part count”, it denotes the complexity of the campaign effort in terms of part/assembly creation: the more the mock-up resembles the real wing, the higher the complexity, because more parts and their interactions have to be handled.
 
9. Moving components ratio 

Based on the components created for the mock-ups, this factor sheds light on the difficulty of integrating fixed and movable components. The more movable components there are with respect to the overall reference part count, the higher the complexity. Here it is not the different types of defined motion paths that are of interest, but the overall share of movable components that have to be integrated.
 
10. Configuration similarity

This indicator describes whether or not teams had experience with wings with similar structure and systems arrangements before the respective campaign. If so, that would imply a certain familiarity with the integration task, which in turn would contribute to reducing complexity.
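Several of these indicators reduce to straightforward ratios. With invented figures, indicators 2, 8 and 9 might be computed like this:

```python
# Hypothetical figures; the formulas follow the indicator definitions:
# integration density relates part count to available volume, and the
# component ratios relate mock-up content to the reference part count.
reference_part_count = 12000
wing_volume_m3 = 45.0          # simplified wing volume
mockup_components = 9000       # parts/assemblies created for the mock-up
movable_components = 750       # of which movable

integration_density = reference_part_count / wing_volume_m3   # parts/m^3
components_ratio = mockup_components / reference_part_count
moving_ratio = movable_components / reference_part_count

print(f"integration density: {integration_density:.1f} parts/m^3")
print(f"components ratio:    {components_ratio:.2f}")
print(f"moving ratio:        {moving_ratio:.3f}")
```

Higher values on each indicator signal a more complex mock-up campaign: denser packaging, a more faithful (and thus larger) mock-up, and a greater share of movable parts to integrate.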