Marine  Magnet Dispatch Service Centre

Top 10 Digital Twin Demo Self-Organising Troop Manoeuvre Behaviour Process Tech Field Applications

6/21/2019

Digital Twin is a virtual representation of a real-life physical product, building, or Troop Group behaviour. Any item that exists in the physical world can be replicated with a digital twin to offer unique insights into how products or processes are operating in real time, even from a remote location.

As an example, adding enough sensors to create a digital twin of a car will require not only digitally replicating the shape of the vehicle, but also the tires, seats, engine, and even mirrors. 

But things get far more complicated under the hood, where the inner workings of the engine will require a real-time simulation including every spark, explosion, and movement inside the cylinder block, pistons, crankshaft, valves, and plugs, capturing every distortion, glide pattern, and even the slightest bit of friction happening in real time.

By advancing from a single asset view to a larger asset population, organisations can unlock new opportunities to enhance operations. For example, an engine, transmission, and braking system may all have discrete digital twins, but will need to interact with each other every bit as much as the real engine, transmission, and braking system do in order to achieve even deeper insight into overall system behaviour.

But Digital Twin progress doesn’t happen instantly. As we step from detail and accuracy to micro-detail and micro-accuracy, these super high-tech 3D models will enable us to visualise how our physical products are performing and changing in the moment. 

If something breaks down, we can instantly tell what went wrong.

Experiments like this will not only require huge amounts of computing power, but also a massive range of computational approaches to simulate unique techniques for organising and interacting with functional responsibilities of work.

To put things in perspective, the drones we fly today have been in development for decades. It has taken that long to get the vehicles this good. With emerging technology, we still have to work our way through the primitive stages before we get to the good stuff.

At the same time, we are building a digital infrastructure that is layered over everything physical in the world. This is another form of digital twin advances and eventually the two will align.

As digital representations of physical objects, digital twins allow directors to create a crystal-ball-like view into the future. They enable simulation, analysis and control to test and explore scenarios in a practice setting before initiating changes in the real world.

While digital twins have historically been associated with more complex technology environments, their impressive ability to both eliminate problems and deliver next-level operational performance is making these models a must-have technology in every unit toolkit.

Some of the first digital twin cases involved complex—and usually expensive—capital assets such as diesel engines, turbines, and heavy-duty construction equipment. Their digital representations are equally complex, comprising finite-state machines with unprecedented numbers of discrete states. However, digital twins offer even the simplest constructs a vast number of benefits.

The single biggest problem with digital twins is that one size does not fit all. In other words, a new digital twin is needed for every single product that is produced and the process that creates them. That’s because every product, no matter how precisely it’s made, operates differently. This is especially true if there are workers involved in manoeuvres.

Adaptive Digital Twin offers an adaptive smart model user interface to the physical and digital twins. The adaptive user interface is sensitive to the preferences and priorities of the user/operator. A key capability at this level is the ability to learn the preferences and priorities of human operators in different contexts. These characteristics can be captured using network-based supervised machine learning tech.

The models employed within this digital twin are continually updated based on the data being “pulled” from the physical twin in real time. The twin can also accept information in batches after system use. This digital twin can support real-time planning and decision-making during operations, maintenance, and support.
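The pull/batch update cycle described above can be sketched as a minimal twin object. The class, field names and readings here are illustrative assumptions, not a real implementation:

```python
from dataclasses import dataclass, field

# Minimal sketch of a digital twin whose state is refreshed from readings
# "pulled" from the physical twin; all names are illustrative.
@dataclass
class DigitalTwin:
    asset_id: str
    state: dict = field(default_factory=dict)
    history: list = field(default_factory=list)

    def pull_update(self, reading: dict) -> None:
        """Apply a single real-time reading to the twin's state."""
        self.history.append(dict(self.state))  # keep prior state for analysis
        self.state.update(reading)

    def batch_update(self, readings: list) -> None:
        """Accept readings in batches after system use."""
        for reading in readings:
            self.pull_update(reading)

twin = DigitalTwin("engine-01")
twin.pull_update({"rpm": 2100, "oil_temp_c": 92.5})
twin.batch_update([{"rpm": 2300}, {"rpm": 1800, "oil_temp_c": 95.0}])
```

The retained history is what lets the twin support after-the-fact analysis as well as real-time decision-making.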

Advanced digital twins have unsupervised machine learning capability to discern objects and patterns encountered in the operational environment, and reinforcement learning of system and environment states in uncertain, partially observable environments. A digital twin at this level has a high degree of autonomy and can analyze more granular performance, maintenance, and condition data from the real-world counterpart.

Digital twin system simulations can be run to explore failure modes, leading to progressive design improvements over time. For example, a manufacturer can link a digital twin to its service history, manufacturing process, design history, real-time network data, configuration-specific simulation models, and expected failure modes.

The ability to compare simulation outputs with actual results can provide valuable insights about the physical twin. Users can generate event-driven or agent-based simulations to explore the behaviour and interactions of the digital twin. The digital twin can incorporate 3D data and simulations, along with their characterisations using methods such as response surface models. To gauge customer experience and the impact of innovation on that experience, the digital twin can be employed to simulate a plant, product or service. For example, barcoding line replaceable units for analysis in logistics support. 
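As a rough illustration of the response-surface idea mentioned above, a cheap polynomial surrogate can be fitted to simulation outputs and then queried in place of the full simulation. The data and polynomial degree here are invented for the sketch:

```python
import numpy as np

# Sketch: fit a quadratic response surface to simulation outputs so the
# twin can interpolate behaviour cheaply; the data below is synthetic.
load = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # input factor (illustrative)
wear = np.array([0.0, 1.1, 4.2, 8.9, 16.1])  # simulated response

coeffs = np.polyfit(load, wear, deg=2)  # quadratic surrogate model
surrogate = np.poly1d(coeffs)

# Query the surrogate instead of re-running the full simulation.
predicted = surrogate(2.5)
```

In practice the surrogate would be characterised against many more simulation runs before being trusted for what-if exploration.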

Engineers can potentially use simulations linked to the digital twin to predict how the physical twin can be expected to perform in the real-world. Contrast this with having to rely on the ideal and perceived worst-case conditions typically employed in the design process. 

Actual system performance data can be compared to data from the digital twin, prompting adjustment decisions that can contribute to successful mission outcomes.  Also, by incorporating data from the physical twin into the digital twin, engineers can improve system models, and subsequently use the results of the analysis with the digital twin to improve the operation of the physical system in the real-world.

The value added by a digital twin to its physical twin stems from its ability to optimise both physical twin operation and maintenance schedule. Simulating digital twin behaviour enables the determination and adjustment of real-world system behaviour. Specifically, insights gained through simulation can guide changes needed in system design and manufacturing with the digital thread providing the necessary connectivity across the system life cycle.

The fidelity of the simulation will typically vary with the purpose of the simulation and the stage in the system life cycle.  A potential benefit would be to rehearse missions in specific operational conditions, terrain, weather, etc. using the capabilities of the digital twin. For example, in the system design phase, a relatively slow, non-real-time simulation may be enough as long as it enables exploration and investigation of multiple different use cases under real world conditions. With access to the right data, actual operational conditions can be simulated with high confidence to yield insights into expected outcomes.

An important use of simulation is in the assessment of the expected operational life of the system, i.e., how long the system can be expected to be operational, so the digital twin can keep track of the wear and tear experienced by the physical twin. By employing simulation, the digital twin can estimate the remaining working life of the physical twin and proactively schedule maintenance. In other words, predictive maintenance can be used to estimate how long the physical system can be expected to operate normally, and that knowledge can be used to proactively schedule a system shutdown rather than wait for the breakdown of the physical twin, which can be both expensive and potentially catastrophic.
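A minimal sketch of this predictive-maintenance idea, assuming a simple linear wear model; the function, threshold and readings are illustrative, not a real prognostics algorithm:

```python
# Sketch: estimate remaining useful life (RUL) from accumulated wear
# reported by the physical twin. Linear extrapolation and the failure
# threshold are illustrative assumptions.
def remaining_useful_life(wear_history, failure_threshold):
    """Extrapolate intervals until wear reaches the failure threshold."""
    if len(wear_history) < 2:
        return None
    # Average wear accrued per reporting interval.
    rate = (wear_history[-1] - wear_history[0]) / (len(wear_history) - 1)
    if rate <= 0:
        return float("inf")  # no measurable degradation yet
    return (failure_threshold - wear_history[-1]) / rate

# Hourly wear readings pulled from the physical twin (synthetic).
wear = [0.0, 0.5, 1.1, 1.6, 2.0]
rul_hours = remaining_useful_life(wear, failure_threshold=10.0)
```

A real prognostics model would replace the linear fit with physics-based or learned degradation curves, but the scheduling logic built on top is the same.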

For simple applications, digital twin technology offers value without having to employ machine learning. Simple applications are characterised by a limited number of variables and an easily discoverable linear relationship between inputs and outputs. However, most real-world systems that contend with multiple data streams stand to benefit from machine learning and analytics to make sense of the data. 

Machine learning is applied to a data stream to uncover/discover patterns that can be subsequently exploited in a variety of ways. For example, machine learning can automate complex analytical tasks. It can evaluate data in real-time, adjust behaviour with minimal need for supervision, and increase the likelihood of desired outcomes.
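One hedged illustration of evaluating a data stream in real time is a rolling z-score check that flags readings drifting outside the recent norm. The window size, threshold and readings are arbitrary choices for the sketch:

```python
from collections import deque
from statistics import mean, stdev

# Sketch: flag stream values far outside the recent norm; window and
# threshold are illustrative tuning choices.
def detect_anomalies(stream, window=5, z_threshold=3.0):
    recent = deque(maxlen=window)
    flagged = []
    for i, value in enumerate(stream):
        if len(recent) == window:
            mu, sigma = mean(recent), stdev(recent)
            if sigma > 0 and abs(value - mu) / sigma > z_threshold:
                flagged.append(i)  # record the index of the outlier
        recent.append(value)
    return flagged

readings = [10.0, 10.2, 9.9, 10.1, 10.0, 10.1, 25.0, 10.2]
anomalies = detect_anomalies(readings)
```

The same pattern generalises: the twin adjusts behaviour with minimal supervision by acting only on the flagged indices.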
 

Machine learning uses within a digital twin include: supervised learning of operator/user preferences and priorities in a simulation-based, controlled experimentation testbed; unsupervised learning of objects and patterns using, for example, clustering techniques in virtual and real-world environments; and reinforcement learning of system and condition states in uncertain, partially observable operational environments.

Linking digital twins to networks brings the data needed to understand how the physical twin e.g., a manufacturing assembly line or an autonomous vehicle network, behaves and performs in the operational environment, enhancing preventive maintenance and AI optimisation of the physical system and operational processes.

Acting as a bridge between the physical and virtual worlds, the network can deliver performance, maintenance and condition data from the physical twin to the digital twin. Combining insights from the real-world data with predictive modeling can enhance the ability to make informed decisions that can potentially lead to the creation of effective systems, optimised production operations, and new manoeuvre models. 

Multi-source/multi-sensor information e.g., temperature, moisture content, production status of current batch can be delivered to the digital twin along with information from traditional sensors to facilitate predictive modeling, providing much needed flexibility when it comes to system mobility/location options such as selling a capability i.e., product as a service versus selling the product itself. 

Importantly, the combination of digital twin and the network allows an organisation to gain insights into how a system/product is being used by customers. Such insights can enable customers to optimise maintenance schedule and resource utilisation, proactively predict potential product failures, and avoid/reduce system downtimes. 

Perhaps the greatest potential benefit of the network is in the service area. For example, a service operation that is continually informed about the operational state and condition status of the system can be effective in ensuring cost savings and the high availability rates needed to conduct manoeuvres. For instance, predictive analytics can be employed to pre-fetch and rapidly deliver a required part to a maintenance crew.

In the future, digital twin technology can be expected to become a central capability in system engineering models because it can span the full system life cycle while at the same time helping to penetrate new markets such as manufacturing, construction, etc. Specifically, digital twins can be exploited in upfront engineering e.g., system concepts and model verification, testing e.g., model-based system validation, system maintenance e.g., condition-based maintenance, and smart manufacturing.

In the near-term, digital twin technology can be expected to be integrated into manufacturing and maintenance activities to enhance predictive maintenance and design. At the same time, digital twin technology will continue to gain ground in each sector because of real-time access to system data.

Maintenance will be a major contribution area for digital twins. For example, digital twins will help organisations transition from schedule-based to condition-based maintenance, thereby substantially reducing system maintenance costs while also enhancing system availability. 

Digital twins can be exploited in aircraft engine maintenance. Currently, aircraft engines are routinely taken apart and rebuilt based on the number of hours flown, regardless of whether most of those hours are simply cruising at altitude or performing high-G maneuvers. Digital twin technology can help us better understand maintenance needs and schedule maintenance accordingly. 
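One way to sketch the condition-based alternative to raw flight hours is to weight flown hours by manoeuvre severity, so maintenance tracks actual engine stress. The phases and weights below are invented for illustration, not real maintenance factors:

```python
# Sketch: weight flown hours by manoeuvre severity rather than counting
# raw hours. Weights are illustrative assumptions only.
SEVERITY_WEIGHTS = {"cruise": 1.0, "high_g": 4.0, "takeoff": 2.5}

def effective_hours(flight_log):
    """flight_log: list of (phase, hours) tuples reported by the twin."""
    return sum(SEVERITY_WEIGHTS[phase] * hours for phase, hours in flight_log)

# A sortie spent mostly cruising accrues far fewer "stress hours"
# than the same clock time spent in high-G manoeuvres.
log = [("takeoff", 0.2), ("cruise", 5.0), ("high_g", 0.5)]
stress_hours = effective_hours(log)
```

Maintenance would then be scheduled against accumulated stress hours rather than the engine's raw hour meter.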

The need for digital twins originates from the need for increasingly advanced dashboards for users to view everything they need to know about their projects and properties in a convenient form and format. However, as many sectors continue to digitise and data becomes increasingly complex and abundant, there has been an exponential growth in the number of tools, data formats and data services.

As a result, generating actionable information from diverse data supplied by different tools is becoming increasingly difficult. A next-generation dashboard tied to one or more digital twins can dramatically improve timely decision-making and plan execution on many projects.

For architects, engineers and planners, digital twin technology can become a source of sustainable competitive advantage by linking projects to real-time data with user-customisable smart dashboards. Manoeuvre, zoning, and traffic data can be linked to the digital twin and accounted for in making siting and structural component placement decisions. For example, traffic information can be used to determine where to place an entrance to a building.

Digital twin technology can provide a window into system performance. For example, a digital twin can potentially help identify equipment faults and troubleshoot equipment remotely to alleviate key customer concerns. In addition to improving virtual system models, a digital twin can potentially improve physical system operations and sustainment.

A digital twin can also help with product differentiation, product quality and add-on services. Knowing how customers are using the product post-purchase, can provide useful insights including identifying and eliminating unwanted product functionality and features, as well as unwanted components, saving both time and money in the process. 

A digital twin can enable visualisation of remote physical systems, e.g., to troubleshoot a landing gear problem on an airplane parked at another airstrip. Multimodality sensors e.g., sight, sound, vibration, altitude can serve to deliver data from physical systems to digital twins anywhere in the world. These flexible capabilities can potentially lead to a clear understanding of the state of remote systems through multi-perspective visualisation.

Digital Twin planners are working overtime to answer the questions standing in the way of wide-scale digital twin deployment. In the meantime, digital twin technology is continuing to make impressive progress in aerospace, defense, manufacturing, site construction and other applications.

Once we’re able to produce a virtual pairing with the physical/behaviour world, we suddenly have the ability to analyze data streams and monitor systems so we can head off glitches before they occur, prevent interruptions, uncover new opportunities, and even test new strategies with quickly contrived digital models.

The combination of new manoeuvre formations assignments, information and data analysis will quickly turn most skeptical leaders into strong advocates of digital twin tech.

1. Smart Command Centres

With advanced networks entering our battlespace, having a smart command center becomes a logical extension of our need to monitor and manage operations movements. Every ship, airplane, ground vehicle, or turbine in a power plant has the potential to be digitally replicated.

2.  Directing Effective Field Operations

Troops will soon have their own fleets of drones, with scanning capabilities, to create digital models of the battlespace. As scanners, sensors, and resolutions improve, planners will be creating increasingly functional digital twins of their roadmaps and activity centers.

3. Platooning 

The first phase of remote robotics for trucking will involve platooning, where human drivers control the lead vehicle, followed by driverless vehicles. Since the driver is still in control, additional support won’t be needed until the convoy arrives at the delivery location, where either additional human operators can take the controls or remote drivers can manage the vehicles for the final positioning of the truck.

4. Remote Operation 

The actual operators may be working in another location, but having a person at the controls is critical for certain situations. Drivers, pilots, and captains do far more than just drive their vehicles. They provide a contact person to talk to, along with security, situational awareness, and the type of oversight and responsibility that only a human can provide. Since there is no such thing as an infallible machine, things will go wrong. When this happens, we will need a live person to manage the problem. The solution may be as simple as a system reboot, but in extreme cases, emergency rescue personnel will need to be involved, and having a contact person to coordinate the response is critical.

5. Search Engines for Formation Detection

Online search technology has framed much of our planning around our ability to find things. In general, if it’s not digital and online, it’s not discovered. In the future, drones and sensors will replace much of the work of today’s web crawlers when it comes to defining our searchable universe. Search technology will become far more oriented to specific attributes in the future. Over time, search engines will have the capability of finding virtually anything in either the digital or physical world.

6. Monitor and Enhance Performance

We already have several tools that can create a digital map of troop performance, both external and internal, such as 3D laser scanners. We also have a growing number of contact and embedded sensors that can track what is going on. We are developing a complete digital image of troop behaviour that can be rotated and zoomed in for close-ups, letting us watch physiological episodes and operational resources. When we finally develop holographic displays, our ability to gain relational perspectives, as well as cause-and-effect relationships, will only increase.

7. Predict the condition of an asset 

Down the road, it will be necessary to transcend a slick dashboard view of a device’s current anatomy and truly understand its behaviour. Digital twins provide a closer look at what conditions and events influence it to change, regress, or thrive, from one environmental state to another.

8. Increase accuracy

Understanding behaviour patterns and leveraging advanced machine learning tools enables meaningful digital twins to be played forward or backward in time. This modeling allows operators to better understand how a device might perform in a certain scenario, for example, to alleviate a potential mechanical failure before it happens.

9. Avoid Equipment failure

The military places high value on being able to avoid costly breaks or errors. Digital twins enable teams to explore innumerable possibilities so they can deliver, with a higher level of confidence, a recommendation around the longevity or reliability of an asset. Units want to improve uptime and increase production, and can leverage digital twins to do so more quickly.

10. Create composite digital twins

Digital twins make it easier for  applications to interact with remote devices, whether to query them for conditions or instruct them to perform certain actions. However, digital twins containing only real-time physical state information are limited in their utility. The next step in digital twin development is to assess behaviour schematics. 


Top 10 Digital Twin Virtual Prototype Combination of Connected Product/Network Platform Questions

6/21/2019

A digital model can be employed both in simulations and as a digital twin, but those two uses differ in how the model is applied, especially when it comes to the level of detail required.

Digital models are used in a variety of applications and development workflows. They can vary in the degree to which they match a physical device. A model can be used as a digital twin or in a simulation, in addition to being used to design the system in the first place.

A design model is a representation of a physical entity, typically used to describe what that entity will look like. This can be a 2D or 3D architectural model of a building or a device such as a car. The model provides dimensions and possibly descriptions of materials that would be used in construction.

A digital twin is a model that has the latest sensor data associated with a matching physical device. A digital twin is often used in process control and product operations to help monitor or control a remote system. The model doesn’t necessarily need to exactly replicate the physical device. It may even be a 2D representation, but it’s typically combined with other models to provide a context for the information that can be presented or examined.

Most process-control systems that deliver sensor data to a control program provide at least a limited digital model of a component within the system. However, with new developments the model can now be more robust and combined with other tools like Virtual Reality. For example, a heat sensor might show what part of a device is hotter by showing that portion of a model using false coloring with red indicating higher temperatures.

Digital models used in simulations often have the same type of sensor information and controls of a digital twin, but the information is generated and manipulated as part of the simulation. The simulation may replicate what could happen in the real world, but not what’s currently happening.

A digital twin can be used as a starting point for a simulation model that extrapolates how a system would operate in the future. The degree and accuracy of these simulations can vary depending on the implementation of the simulation and what type of results are desired. 

For example, a digital twin of a gas engine could simply track material consumption, power output, and heat output, but not the actual movement of components within the engine. This level of simulation may be sufficient for checking out how a vehicle would operate when using such an engine.
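A coarse-grained engine twin of the kind described, tracking only fuel consumption, power output, and heat output without modelling internal components, might be sketched as follows. The energy-density and efficiency figures are assumed for illustration, not real engine data:

```python
# Coarse-grained engine twin: tracks fuel, power and heat totals only,
# with no model of pistons, valves or other internals.
class EngineTwin:
    FUEL_ENERGY_KJ_PER_L = 34_000  # assumed fuel energy density
    EFFICIENCY = 0.30              # assumed thermal efficiency

    def __init__(self):
        self.fuel_used_l = 0.0
        self.power_out_kj = 0.0
        self.heat_out_kj = 0.0

    def burn(self, litres: float) -> None:
        """Account for one interval's fuel burn."""
        energy = litres * self.FUEL_ENERGY_KJ_PER_L
        self.fuel_used_l += litres
        self.power_out_kj += energy * self.EFFICIENCY
        self.heat_out_kj += energy * (1 - self.EFFICIENCY)

engine = EngineTwin()
engine.burn(2.0)
```

This level of abstraction is enough for vehicle-level questions; durability questions about a specific part would require a far more detailed model.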

On the other hand, if the desired results involve how durable a particular part would be within the engine, then the level of detail with respect to the engine would have to be greater. Likewise, simulation of an autonomous vehicle needs to know the output and control characteristics of the engine, but not the details within the engine.

Digital twins and simulation models may share all of these aspects, depending on their function, although often a specific tool will create and manipulate a model. For example, a drawing package may be used to create a digital model, and then a process control system would use that model as the basis for a digital twin, providing the linkage between the digital twin’s sensors and controls and those in the real world.

Likewise, a model used in simulation may have characteristics added so that physical simulation is possible. This might include details about the virtual materials used in the model, which in turn would enable the simulation tools to replicate how the model will react during the simulation.

Any system component may have different models that vary in the degree to which they replicate the actual component, as well as in how they react and what kind of information can be associated with them.

Models may have different purposes, but they may also share common descriptions such as details about dimensions, material attributes, etc. Many models will be used by multiple applications for different purposes, from showing the status of a current system to simulating a device that has yet to be constructed.

The concept of a Digital Twin is mostly applied to the case where conclusions are drawn from a physical operating product and network platform. But in the case of virtual prototyping, the Digital Twin concept is applied to an old practice in mechanical hardware.

One approach to developing a smart connected product, one that hooks up with a network platform, is to just build it. Just put it together piecemeal. Throw some sensors on a product. Wire that to some kind of embedded system. Wire that to your antenna. Start sending data to the network platform. You and your organisation can actually learn a lot from going through that exercise.

While that needs to be done, you will quickly run into limitations on the experiments you can conduct with physical prototypes. Swapping out a sensor isn’t easy when it’s soldered in place. There might not be room, physically, for the sensor you really need for accurate measurement. You might run into too much electromagnetic interference for the antenna you planned to use.

Working through these issues is new to some organisations as they transition traditionally mechanical products to smart connected ones. However, the problems associated with resolving issues through physical prototyping aren’t new. 

In fact, physical prototyping is an old concept when it comes to hardware. Long ago, mechanical and electrical engineers figured out that modeling and simulating a design virtually means you are more likely to get it right the first time when you get to prototyping and testing. 

The benefits of an approach utilising virtual prototyping with digital twins are many. You have fewer rounds of prototyping, saving money and time. You stay on schedule. You stay on budget.

So while virtual prototyping is new to some organisations, this approach has advantages when applied to the development of linked smart, connected products and network platforms. In fact, Digital Twins are a key enabler.

You first need to set up the digital model component of a Digital Twin.

Numerical models use machine learning and artificial intelligence. These applications or agents either extrapolate sensor data and/or correlate it to existing events. Both are an effort to predict future behaviour.

1D Simulation models are a combination of flow diagrams with equations or formulas behind the blocks that simulate the performance of embedded multi-disciplinary engineering systems. These models can provide deeper insights into ongoing operation.

3D simulation models, often in the form of multi-body dynamics, are commonly used to predict the dynamics and structural performance of products, helping to characterise the requirements of ongoing operations.

For scenarios based on engineering physics or asset operation, no prototype or operating product exists. As such, there is no sensor data to feed this digital model. However, the model can be fed historical sensor data from prior products or even from past physical tests or operational data.

In the worst case, a set of inputs can be modeled using statistical analyses or even a higher-level simulation, such as a multi-body dynamics model. This creates a set of input data that can be fed to the digital model.
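The statistical-input approach can be sketched by drawing synthetic readings from an assumed distribution and feeding them to the digital model. The distribution parameters here are illustrative assumptions:

```python
import random

# Sketch: with no physical prototype, model a sensor input statistically
# and generate a synthetic stream for the digital model. The normal
# distribution parameters are invented for illustration.
random.seed(42)  # reproducible synthetic data

def synthetic_inputs(mean_c=90.0, stdev_c=3.0, n=100):
    """Generate n synthetic temperature readings (degrees C)."""
    return [random.gauss(mean_c, stdev_c) for _ in range(n)]

inputs = synthetic_inputs()
```

A more faithful version would fit the distribution to historical data from prior products rather than assuming its parameters.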

The combination of the digital model and the input is enough to get started. You run the model as a simulation, generating data from virtual sensors, which are points of measurement within the simulation.

In this application, that virtually generated data is used instead of physically recorded data from sensors and fed to the network platform as if it were receiving streaming data from a running product. Only, in this case, there is no physical product. There is only a virtual product that is running in a simulation.
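The flow of virtually generated sensor data into a platform might be sketched as follows; `platform_ingest` is a hypothetical stand-in for a real streaming endpoint, and the vibration signal is synthetic:

```python
import math

# Collected payloads stand in for what a real platform would receive.
received = []

def platform_ingest(payload: dict) -> None:
    """Illustrative stand-in for the platform's streaming endpoint."""
    received.append(payload)

def run_simulation(steps: int) -> None:
    for t in range(steps):
        # Virtual sensor: a measurement point inside the simulation.
        vibration = math.sin(0.5 * t)
        platform_ingest({"t": t, "sensor": "vibration", "value": vibration})

run_simulation(4)
```

From the platform's perspective, this stream is indistinguishable from data reported by a running physical product.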

In this scenario, you overcome many issues that you might experience when trying to physically prototype a smart, connected product. You can change anything related to the sensor configuration, including placement or type.

You are not limited by network bandwidth other than the limitation between the compute resource running the Digital Twin and the one running platform. You can change the product’s design in terms of mechanical or electrical hardware, embedded systems and more. There is a tremendous amount of flexibility with this approach.

Engineering organisations have been using the “Get It Right the First Time” principle to avoid all sorts of difficult development disruptions for decades. At this point, applying it to mechanical and electrical hardware is a no-brainer.

Digital twin is becoming increasingly relevant to systems engineering and, more specifically, to model-based system engineering. A digital twin, like a virtual prototype, is a dynamic digital representation of a physical system. However, unlike a virtual prototype, a digital twin is a virtual instance of a physical system “Twin” that is continually updated with the latter’s performance, maintenance, and condition status data throughout the physical system’s life cycle. 

Here we present an overall vision and rationale for incorporating digital twin technology into systems engineering, examine the benefits of integrating digital twins with system simulation and networks, and provide specific examples of the use and benefits of digital twin technology in different industries. We strongly recommend making digital twin technology an integral part of system engineering model experimentation testbeds.

While virtual models tend to be generic representations of a system, part, or a family of parts, the digital twin represents an instance i.e., a particular system or process. Digital twin technology has the potential to reduce the cost of system verification and testing while providing early insights into system behaviour. 

The digital twin is ultra-realistic and may consider one or more important and interdependent vehicle systems such as airframe, propulsion, avionics and thermal protection. The extreme requirements of the digital twin necessitate the integration of design of materials and innovative material processing approaches. 

The context of operation of the digital twin involves an instrumented testbed in which model-based systems engineering tools e.g., system modeling and verification tools and operational scenario simulations e.g., discrete event simulations, agent-based simulations are used to explore the behaviour of virtual prototypes in a what-if simulation mode under the control of the experiment.

Insights from the operational environment are used to modify the system models used in the virtual prototype. Data supplied by the physical system is used by the virtual prototype to instantiate a digital twin. Subsequently, the digital twin is updated on an ongoing basis, so it  mirrors the characteristics and history of the physical twin with high fidelity. 

The virtual system model can range from lightweight models to full-up models. The lightweight models reflect simplified structure e.g., simplified geometry and simplified physics e.g., reduced order models to reduce computational load especially in upfront engineering activities. These lightweight models allow simulations of complex systems and system-of-systems with fidelity in the appropriate dimensions to answer questions with minimal computation costs. 

A digital twin is first created when the physical system can start providing data to the virtual system model to create a model instance that reflects the structural, performance, maintenance, and operational condition characteristics of the physical system. The physical system can be said to be “fit for purpose” if the digital twin’s behaviour is analyzed and can be appropriately adjusted for a variety of contingency situations.

For example, computer simulations of the braking system of a car can be run to understand how the vehicle model would perform in different real-world scenarios. This approach is faster and cheaper than building multiple physical vehicles to test. However, computer simulations tend to be confined to current events and operational environments. In other words, they do not have the ability to predict vehicle responses to future/envisioned scenarios. Also, braking systems are networked systems, not merely a combination of mechanical and electronic subsystems. 
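
A what-if braking study of the kind described above can be sketched in a few lines; the purely kinematic model and the friction values are illustrative assumptions.

```python
# A simple what-if sketch for a braking model: stopping distance from an
# initial speed under different road-friction coefficients, using the
# friction-limited relation v^2 = 2 * mu * g * d. Values are illustrative.

G = 9.81  # gravitational acceleration, m/s^2

def stopping_distance(speed_ms, mu):
    """Idealised stopping distance for friction-limited braking."""
    return speed_ms ** 2 / (2 * mu * G)

scenarios = {"dry": 0.9, "wet": 0.5, "ice": 0.1}
results = {road: round(stopping_distance(27.8, mu), 1)  # ~100 km/h
           for road, mu in scenarios.items()}
```

Each scenario is one cheap simulation run; building and instrumenting three physical test vehicles to answer the same question would cost orders of magnitude more.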

Being a virtual representation, a digital twin is easier to manipulate and study in a controlled testbed environment than its physical counterpart in the operational environment. This flexibility enables cost-effective exploration of system behaviours and sensitivities to various types of system malfunctions and external disruptions. 

Today any digital version of a system, component, or asset is called a digital twin. With this larger interpretation, there are several questions that arise including: 1) Does a physical system have to exist before a digital twin is created? 2) Does the physical system with onboard sensors and processor need to report performance, condition, and maintenance data to the virtual system model before the latter can be called a digital twin? 3) Does the definition of a digital twin have to change because any physical asset today can be made smart with the advanced network technology?

These questions and many more need to be answered about the digital twin and its various uses. Here we present several levels of a virtual representation. Each level has a specific purpose and scope and helps with decision making and answering questions throughout the system’s lifecycle.

The Pre-Digital Twin Virtual Prototype is created during upfront engineering. It supports decision-making at the concept design and preliminary design stages. The virtual prototype is a virtual, generic, executable system model of the envisioned system that is typically created before the physical prototype is built. Its primary purpose is to mitigate technical risks and uncover issues in upfront engineering.

Like most model-driven approaches, virtual prototyping involves a model of the system early in the design process. However, a virtual prototype is not usually used to derive the final system. This is because a virtual prototype can be a “throwaway” prototype or a “reusable” prototype. The latter can be used to derive the final system. A virtual prototype is mostly used to validate certain key decisions about the system and mitigate specific technical risks early in the design process. 

For example, a model could consist of ideal wheels with a dry-friction contact patch rolling on a surface. It employs a simple (i.e., low-fidelity) model of a differential gear to distribute torque equally to the wheels, and reflects properties such as inertia, mass, fixed translation, and torque to realise a basic vehicle structure, with mass properties defining gravity and a global coordinate system.

The sensors measure absolute position, velocity, and acceleration. The trajectory control module provides torque values to the steering and differential gear mechanism. Such low-fidelity models can be used in testing, planning, and decision-making algorithms related to, for example, trajectory control in autonomous vehicles performing lane changes.
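
A low-fidelity lane-change model of the kind described above might be sketched as follows; the point-mass dynamics, controller gains, and dimensions are all illustrative assumptions.

```python
# A low-fidelity sketch: a point-mass vehicle whose heading is driven by a
# damped proportional controller toward a target lateral offset (one lane
# over). Gains, speed, and lane width are illustrative.
import math

def simulate_lane_change(target_y=3.5, speed=20.0, dt=0.05, steps=200):
    """Kinematic point-mass model; returns final (x, y) position."""
    x, y, heading = 0.0, 0.0, 0.0
    for _ in range(steps):
        error = target_y - y                          # lateral error, m
        heading += (0.3 * error - 1.5 * heading) * dt # P term plus damping
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
    return x, y

x_end, y_end = simulate_lane_change()
```

Even a model this crude is enough to exercise a trajectory-control algorithm in simulation before committing to higher-fidelity vehicle dynamics.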

The Digital Twin virtual system model is capable of incorporating performance, condition, and maintenance data from the physical twin. The virtual representation, an instantiation of the generic system model, receives batch updates from the physical system that it uses to support high-level decision making in conceptual design, technology specification, preliminary design, and development. Data collected from the physical sensors and computational elements in the physical twin includes both condition status data (e.g., battery level) and mission performance data (e.g., flight hours).

The data is reported back to the digital twin, which updates its model, including the maintenance schedule for the physical system. Since interaction with the physical system is bidirectional, there is sufficient opportunity for the physical twin to use knowledge acquired from one or more digital twins to improve its performance during real-time operation.
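
The batch-update loop described above can be sketched as a model instance that ingests telemetry records; the field names, service interval, and asset id are illustrative assumptions.

```python
# A minimal sketch of the batch-update loop: a generic model is
# instantiated per asset, and condition/performance records reported by
# the physical twin update its state and maintenance estimate.

class DigitalTwin:
    def __init__(self, asset_id, service_interval_hours=300.0):
        self.asset_id = asset_id
        self.service_interval = service_interval_hours
        self.flight_hours = 0.0      # mission performance data
        self.battery_level = 1.0     # condition status data

    def ingest_batch(self, records):
        """Apply a batch of telemetry records from the physical twin."""
        for rec in records:
            self.flight_hours += rec.get("flight_hours", 0.0)
            self.battery_level = rec.get("battery_level", self.battery_level)

    def hours_to_next_service(self):
        """Maintenance schedule derived from accumulated usage."""
        return self.service_interval - (self.flight_hours % self.service_interval)

twin = DigitalTwin("uav-007")
twin.ingest_batch([{"flight_hours": 120.0, "battery_level": 0.84},
                   {"flight_hours": 95.5}])
```

The same instance can then be run through what-if scenarios in the testbed, with any deficiencies fed back to the physical twin.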

At this level, the digital twin is used to explore the behaviour of the physical twin under various what-if scenarios. Being an executable digital representation, it is easy to manipulate when exploring the behaviour of the system in the controlled simulation environment of the testbed. Any deficiencies discovered are used to modify the physical twin with the changes reflected in the digital twin. 

For example, a model could include a passenger car with a power-split hybrid powertrain. The chassis model has a single degree of freedom with mass- and speed-dependent drag properties. The braking subsystem uses the brake pedal position resulting from driver action to calculate a brake torque, which acts on the driveline. The driveline model consists of four wheels with front-wheel drive and an ideal differential. The power-split device consists of an ideal epicyclic gear without losses. An ideal battery with a constant voltage source powers a DC motor model with inductor, resistor, and emf components connected to a shaft hub. The engine model with flywheel uses a drive-by-wire accelerator, where accelerator inputs are converted to output torque. Road models define the inertial frame, gravity, air temperature, wind velocity, the gas constant for air, and air pressure.
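
A taste of the driveline arithmetic in such a model, reduced to a few lines; the torque limits, wheel radius, and drive-by-wire mapping are illustrative assumptions.

```python
# A highly simplified sketch of the driveline arithmetic described above:
# drive-by-wire pedal positions map to engine/motor/brake torques, which
# combine into a net force at the wheels. All constants are illustrative.

def net_wheel_force(accel_pedal, brake_pedal,
                    engine_max_nm=140.0, motor_max_nm=200.0,
                    brake_max_nm=1200.0, wheel_radius_m=0.3):
    """Pedals in [0, 1] scale linearly to torques; force = torque / radius."""
    drive_torque = accel_pedal * (engine_max_nm + motor_max_nm)
    brake_torque = brake_pedal * brake_max_nm
    return (drive_torque - brake_torque) / wheel_radius_m

force = net_wheel_force(accel_pedal=0.5, brake_pedal=0.0)
# 0.5 * (140 + 200) / 0.3 = 566.7 N of tractive force
```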

With respect to developing smart, connected products linked to network platforms, we’re in the early stages. But soon, some organisations are going to want to sidestep all those difficult development problems they’re experiencing. 

Leaders are going to realise that building and testing multiple rounds of prototypes is unacceptable. They’re going to realise that many delays completely undermine their competitive market position.

As a result, organisations are going to want to adopt more proven and standardised practices. Virtual prototyping with Digital Twins is not there yet. But given the rush toward new networks, demand for this practice is expected to only increase.

So now we have this concept of using a Digital Twin to virtually prototype a smart, connected product linked to network platforms. What does that get you? Interestingly, it allows organisations to answer a set of serious questions.

The powerful aspect of this use of a Digital Twin is that you can answer these questions without any physical prototype or testing. Everything is digital. So you are getting smarter about the operation of this product without spending any money to build anything physical.

1. Is the right data being captured from the right physical/virtual sensors in the product?

2. Do we need to use a physical sensor to capture this data, or can it be a virtual sensor?

3. Should sensor data be processed at the edge on the product, or on the network platform?

4. Is the mechanical design right for this product in the context of electrical systems configuration?

5. Are there changes that should be considered to improve placement of sensors?

6. Are there changes that should be made to avoid electromagnetic interference?

7. How will the connected product and network platform work to fulfill requirements?

8. What conclusions can be drawn from the data once it is in the network platform?

9. Which data trends must be crunched manually, and where should machine learning be applied?

10. What data trends are precursors to events critical to the smart, connected product?
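
To make question 2 concrete, here is a minimal sketch of a virtual sensor: a quantity estimated from other measurements instead of a dedicated physical sensor. The I²R-style heating model and its coefficient are illustrative assumptions.

```python
# A virtual-sensor sketch: motor winding temperature estimated from
# measured current and ambient temperature via a fitted model, instead of
# installing a dedicated thermocouple. The coefficient is illustrative.

def virtual_winding_temp(current_a, ambient_c, k_heating=2.4):
    """Virtual sensor: temperature rise modelled as k * I^2 (I^2 R heating)."""
    return ambient_c + k_heating * current_a ** 2

reading = virtual_winding_temp(current_a=5.0, ambient_c=25.0)
# 25 + 2.4 * 25 = 85.0 C estimated, with no physical temperature sensor
```

Answering "physical or virtual sensor?" this way can remove a component, a wiring run, and a failure mode from the bill of materials.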



Top 10 Digital Twin Sensor Process/Product Service Bridge Operations Simulate Scenarios Predict Performance

6/21/2019


 
Creating a digital twin involves building a comprehensive digital representation of the many components of a physical object, from outer features to the networks inside. Companies develop digital twins by attaching sensors to their products, assets, or equipment.  Building digital twins will give you digital versions of materials, 2D drawings, and 3D models. More importantly, you’ll have an accurate view of how your devices are operating in real time. 

Engineers leverage simulation tools to develop innovative, high-performing products in a virtual design space. Imagine what you could do as a designer or engineer if you could take your simulation a step further and study a digital working copy of a product under actual working conditions. It could help you rapidly optimize the design, life, and maintenance of a product.

Companies are collecting real-time operating data from product-mounted sensors. They use the data to create an exact replica of a working product, process, or service. This exact replica, called a digital twin, is a simulated model in a virtual space that performs under real-world conditions to help companies find performance issues, schedule predictive maintenance, reduce downtime, and minimise  expenses. 

An example is visibility into real-world bearing operating temperatures and the downstream effects on tolerances right within a digital model. By studying the digital twin under actual working conditions, companies can see the product in action. Engineers can make more informed choices during the design process and use digital twins to make their simulations more accurate. 
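
The bearing example can be made concrete with a back-of-envelope thermal-expansion calculation; the expansion coefficients, dimensions, and temperatures are illustrative assumptions.

```python
# A back-of-envelope sketch of how an observed bearing operating
# temperature changes a shaft/bore clearance through differential thermal
# expansion. Coefficients and dimensions are illustrative.

ALPHA_STEEL = 12e-6  # linear expansion coefficient of steel, 1/K

def clearance_at_temp(cold_clearance_mm, shaft_dia_mm, delta_t_k,
                      alpha_shaft=ALPHA_STEEL,
                      alpha_housing=ALPHA_STEEL * 1.5):
    """Clearance change when housing and shaft expand at different rates."""
    growth_shaft = shaft_dia_mm * alpha_shaft * delta_t_k
    growth_housing = shaft_dia_mm * alpha_housing * delta_t_k
    return cold_clearance_mm + (growth_housing - growth_shaft)

hot = clearance_at_temp(cold_clearance_mm=0.05, shaft_dia_mm=40.0,
                        delta_t_k=60.0)
# clearance grows at temperature because the housing expands faster
```

Feeding measured temperatures from the field into relations like this is what lets the twin report downstream effects on tolerances.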

For companies already using engineering simulation to design products, connecting the simulation to the physical product in the field is necessary to deploy the digital twin solution. For companies new to simulation, the engineering team must first build the 3D product model, optimise its performance, and replicate the real world in which the product system would operate. Re-using this data in similar product simulation scenarios for future product development saves time and money. 

A digital twin is a complete 360° digital representation of a physical asset, i.e., a pump, motor, turbine, or an entire plant. By creating digital twins of physical assets, data generated by the asset during its design and operational life is collected, visualized and analyzed, enabling unified life-cycle simulation. 

During the design phase, the digital twin allows for the analysis of processes, equipment, and operations through multiple simulations for optimal safety, reliability, and profitability. At the concept phase, design alternatives are rapidly evaluated and continuously iterated through variable specifications, allowing integrated asset modeling of interacting but separate systems. Each iteration provides a more complete data set, aiding agile development.

The basic notion is that, for every physical product, there is a virtual counterpart that can perfectly mimic the physical attributes and dynamic performance of its physical twin. The virtual twin exists in a simulated environment that can be controlled in very exact ways that cannot be easily duplicated in the real world, such as speeding up time so that years of use can be simulated in a fraction of the time.
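
The "speeding up time" idea can be sketched as a wear simulation that steps through years of duty cycles in a fraction of a second; the linear wear model and every constant in it are illustrative assumptions.

```python
# Years of bearing wear stepped through in a loop that runs in well under
# a second of wall-clock time. The linear wear model and all constants
# are illustrative assumptions.

def simulate_wear_years(years, cycles_per_day=5000.0,
                        wear_per_cycle=1.2e-9, failure_depth=0.005):
    """Return (final_wear_m, year_of_failure_or_None) for linear wear."""
    wear = 0.0
    for day in range(int(years * 365)):
        wear += cycles_per_day * wear_per_cycle
        if wear >= failure_depth:
            return wear, day / 365.0
    return wear, None

wear, failed_at = simulate_wear_years(5)
# the simulated part reaches its failure depth partway through year 3
```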

Hyper-accurate models and simulations offer engineers and product designers unmatched insights across the entire product development cycle. Still, digital twins are more than just a progression of digital models, although their goal is similar: higher-quality products and better product support at less cost and effort.

For decades, engineers and designers have heavily relied on design application tools to digitally capture their ideas for physical objects as parametric models. Even today, more complex tools allow for the simulation of certain characteristics such as thermal properties or stresses and strains. 

While the use of simulations in product design is nothing new, they have historically relied on relatively small data sets or engineering assumptions when making predictions. Digital twins, however, have access to huge data sets thanks to sensors that monitor and measure virtually every facet of a product's lifecycle, data that is fed back into an iterative design-manufacture-observe-improve loop.

Once a product leaves the factory and is acquired by an end user, the digital twin can begin to feed off real-world data collected by the onboard sensors. This is where the concept of the digital twin reaches its full potential. Sensors in the end item itself will track key performance characteristics of the device as it operates in real-world conditions. Comparing actual telemetry against the predictions of the various aspects of the digital twin model yields insights only dreamed of until now.
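
Comparing telemetry against the twin's predictions can be as simple as a residual check; the tolerance band and the sample data are illustrative assumptions.

```python
# A minimal sketch of comparing field telemetry against the twin's
# prediction: flag samples whose relative residual exceeds a tolerance
# band. The tolerance and data are illustrative.

def flag_anomalies(predicted, observed, tolerance=0.05):
    """Return indices where |observed - predicted| / predicted > tolerance."""
    flags = []
    for i, (p, o) in enumerate(zip(predicted, observed)):
        if abs(o - p) / p > tolerance:
            flags.append(i)
    return flags

model_temps = [70.0, 71.0, 72.0, 73.0]    # twin's predicted values
sensor_temps = [70.4, 71.2, 79.5, 73.1]   # actual telemetry
anomalies = flag_anomalies(model_temps, sensor_temps)
# the third sample deviates by roughly 10% and is flagged
```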

A useful loop results from this level of integration of the physical and virtual. Not only can the digital twin be improved based on real-world data, but future iterations can also be improved based on better understanding of actual data from end users. In some cases, where changes can be made through an update, products that have already been deployed can also benefit from lessons learned through using a digital twin. 

Being able to get that feedback to a company is invaluable. Not having to rely on a customer to call a help desk and to have that data fed into a digital twin to influence future design iteration is even more incredible. In addition, it would be almost magical if a customer received an email from the company proactively describing steps they could perform to minimise the failures that they might be experiencing without having the customer even place a call or email in the first place. 

Beyond the obvious use of  rich datasets in maintenance prognostics, digital twins could have profound impacts on the design and engineering of subsequent product iterations. Understanding how a product is actually being used in an objective and data-driven manner will lead to faster development cycles and greatly reduce the time to detect product defects or identify useful tweaks, thus reducing waste by allowing manufacturers to make real-time improvements to the products still coming off the assembly line. This can translate to huge savings by avoiding costly rework. 

Digital twins are not limited to assessing tweaks to physical properties of a design. Digital twins can also make it easier to study the impacts that tool revisions have on performance. Various configurations and settings can be rapidly tested and assessed to determine which ones will deliver optimum performance. Updates could then be pushed out seamlessly to all the devices, leveraging the same network connection that initially sent the data used to identify improvements. 

Technology can continuously monitor, collect data, and conduct analysis. Meanwhile, humans can keep their attention on higher-level work such as exploring implications of various complex courses of actions and making informed decisions. 

Embedded platforms, with their computational horsepower, efficient sensors, and reliable communications hardware, are critical to the collection and dissemination of telemetry data. This data is necessary to make digital twins smart enough so that their function is worthwhile. Then, all that data can be pumped into databases that are rapidly analyzed. Throw in the possibilities from Artificial Intelligence systems to analyze and make improvement recommendations, and it’s possible that products could improve over time without any human intervention. 

The concepts of the digital thread and digital twin have been spearheaded by the military aircraft industry in their desire to improve the performance of future programs. They apply lessons learned through these digital technologies to current and upcoming programs. 

The data in the digital twin of an aircraft includes things like specific geometry extracted from aircraft 3D models, aerodynamic models, engineering changes cut in during the production cycle, material properties, inspection, operation, and maintenance data, and any deviations from the original design specifications approved due to issues and work-arounds on the specific product unit.

The benefits expected from having a digital twin for each product unit include: more effective assessment of a system’s current and future capabilities during its lifecycle; early discovery of system performance deficiencies by simulating results well before physical processes and products are developed; optimisation of operability, manufacturability, inspectability, and sustainability, leveraging models and simulations applied during the entire lifecycle of each tail number; and continuous refinement of designs and models through data captured and easily cross-referenced to design details.

Product Engineers working together with Manufacturing Engineers create a 3D model linked to visuals for production process instructions. Product characteristics are linked to 3D models and extracted directly out of designs into conformance requirements.

Conformance requirements are linked to manufacturing process and inspection instructions. As-built data is delivered by Production along with the product unit to the customer and is made available for sustainment services to continue evolving the unit’s data during operation and maintenance services. Product design changes follow the same data flow and automatically update downstream models, references, and instructions.

The digital twin is a virtual copy of the real-world thing, or a complex system of connected things. It’s not just a 3D model—it’s a living model in 3D that sees the vehicle as part of a complex technology ecosystem of electronics, navigation, communication, collision avoidance, and so on. Engineers can analyze how a vehicle performs not just in its physical environment, under every condition imaginable, but over its entire lifecycle, from an early-stage digital prototype on screen to its last day on the road. 

Aviation engineers today can use the digital twin to pinpoint when, under what conditions, or after how many hours of flying a critical part or sensor will fail. Similarly, rocket engineers are putting digital twins to work to predict and verify the performance of the lightest possible materials and payloads, long before they ever perform costly tests on the launch pad. Digital twins are being used not just to model how such physical assets perform, but—longer-term—how ever-more complex systems of assets and humans will behave together as a whole.

Emerging technologies enabling the digital twin include simulation tools—the heart of the digital twin—which have come a long way since the early days of 3D design. Simulation tools today can codify, replicate, and virtualise the performance of physical products and systems, all based on the hard-wired laws of physics. Digital twins are essentially complex simulations of any number of components in action, like aircraft jet engines during takeoff, but based on true operational data generated, over long periods of time, by sensors on every critical part.

How might digital twin value emerge in a few years with an autonomous vehicle? Engineers will construct a digital twin before they design or build the real thing—enabling dramatic cost savings and accelerating time to market. Designers will collaborate from the outset with operational teams and data analysts to begin gathering different data types to start modeling and verifying how future products will perform under every condition; how different types of drivers will interact with it; what its vulnerabilities are from a maintenance and breakdown standpoint. 

Building the digital twin might start with the physical components. Engineers can pool data on the type of motor, suspension, chassis, and aerodynamic body they want to tap into, and the materials they’re built from. Then they would start adding new data layers—such as operational data logs of similar models, or traffic data—to model performance in different operational scenarios. Engineers will pile up all that data and start tapping into machine intelligence tools to design and model out their ideal product—long before anything hits the assembly line.

Then comes the promise of the digital twin over the lifecycle of the vehicle. With digital twins, engineering and operations teams can see not just what is happening at any given time, but why. They can speed up simulations of operation and productivity; pinpoint when, why, and how breakdowns will occur; and reduce the costs and risks of unplanned downtime.

Engineers are building digital twins of physical structures—dynamic, simulated models of the real thing, powered by the massive amounts of data that a single structure generates around the clock, such as physical specs, energy consumption and cost data, equipment parameters and live occupancy data pulled from elevators. The promise of all this is to use digital twins for everything from predictive maintenance and optimised facility management to streamlining workspace design based on the data flows showing how real people truly use the physical space. 

Imagine a manufacturing system that, on first glance, might function like any other modern system. While seemingly ordinary, the automated machines in this system experience significantly less downtime than you’d expect. This is because each machine is being operated alongside a virtual, real-time model that corresponds to all of the dynamics and components inside the real machine. Whenever the machine might be close to malfunctioning, the virtual model is two steps ahead, either making slight adjustments to its controllers, or notifying a maintenance staff before the issue causes downtime. 
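
The "two steps ahead" behaviour can be sketched as a short-horizon extrapolation that raises an alert before a limit is reached; the linear extrapolation, vibration readings, and limit are illustrative assumptions.

```python
# A sketch of "two steps ahead" monitoring: the twin extrapolates a trend
# a few steps into the future and alerts maintenance staff before the
# physical limit is actually reached. Values are illustrative.

def predict_ahead(history, steps_ahead=2):
    """Linear extrapolation from the last two samples."""
    slope = history[-1] - history[-2]
    return history[-1] + slope * steps_ahead

def needs_intervention(vibration_history, limit=8.0):
    """True when the extrapolated value would cross the machine's limit."""
    return predict_ahead(vibration_history) >= limit

readings = [5.0, 5.6, 6.3, 7.1]  # vibration trending upward
alert = needs_intervention(readings)
# the extrapolation crosses the limit, so maintenance is alerted early
```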

This approach to modern, efficient machine design is quickly becoming a key technology for reducing costs and ensuring smarter products. The technology behind this process is the digital twin, and with modern system-level modeling tools, digital twins are becoming an important part of the virtual commissioning process.

For those who are just getting started, a digital twin is a dynamic, virtual representation of a corresponding physical product. These models can range widely in their purpose and fidelity, but they serve as a powerful connection to the product for diagnostics, design changes, and an important new process called virtual commissioning. Companies are using digital twins across industries, allowing them to optimise their products in ways that were previously impossible. 

Initially, digital twins were typically created once the physical product was in operation. Using vast amounts of data from sensors on the product, predictive models could be created to assist with diagnostics and improvements. By using system-level modeling tools, the creation of a model-driven digital twin can begin alongside the design process. Meaningful digital twins can be created before a physical product is finalised, allowing for a powerful test platform to validate product performance earlier than ever.

During virtual commissioning, engineers can import their designs directly into a simulation tool to create a high-fidelity model of their system dynamics. This model gives the engineer new abilities to gain quick insight into system integration issues, long before the expensive commissioning stage. They can import their motor parameters, define their product’s motion profiles, and have easy access to a full set of results after simulation.

Systems are tested against a high-fidelity, functional model of the physical product, giving engineers a much clearer expectation of how their automated designs will perform after physical deployment. Every issue caught at the virtual commissioning stage then becomes one less unexpected issue that might crop up during physical commissioning. Catching even a handful of issues early can often save companies huge amounts of time and money, since fixing issues on a virtual platform is far easier than revising physical products. 
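
A virtual-commissioning check of the sort described above can be sketched as a motion-profile screen against a motor's torque limit; the motor parameters and the profile are illustrative assumptions.

```python
# A virtual-commissioning sketch: run a motion profile against a motor's
# torque limit before any hardware exists, so sizing problems surface on
# the virtual platform. Motor parameters and the profile are illustrative.

def check_motion_profile(accels, inertia=0.02, friction_nm=0.3,
                         torque_limit_nm=1.5):
    """Return (step, torque) pairs where the required torque exceeds the limit."""
    violations = []
    for step, accel in enumerate(accels):
        torque = inertia * accel + friction_nm   # T = J*a + friction
        if abs(torque) > torque_limit_nm:
            violations.append((step, round(torque, 3)))
    return violations

profile = [40.0, 40.0, 0.0, -100.0]  # commanded accelerations, rad/s^2
issues = check_motion_profile(profile)
# the hard deceleration at the last step exceeds the motor's limit
```

Each violation caught here is one fewer surprise during physical commissioning, where the same fix would mean reworking hardware.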

The power of model-driven digital twins doesn’t end with virtual commissioning. By having a high-fidelity model paired with the physical product, there are benefits along the entire design process—even after the commissioning process itself. Companies that have adopted model-driven practices are using these models throughout the operation of their machines for real-time diagnostics, as the inline digital twins can provide information about torques, inertias, and more. This helps ensure less downtime, as issues can be spotted early and fixed faster. Many companies also experience cost savings from using a digital twin, as the high-fidelity model can give them much of the information that they had previously needed to obtain from expensive physical sensors.

As the cycle of innovation quickens, digital twins are becoming increasingly popular across industries to reduce risks and get new products to market faster. With advanced modeling tools, model-driven digital twins are securing their place as an essential component of the modern design process.

Part of the reason for creating a digital twin is to keep the list of features straight for each individual product. In high-volume production, that includes tracking the multiple features that change from product instance to product instance. The digital twin keeps a specific record for each product. Large auto manufacturers produce so much they want to keep the information on each vehicle as precise as possible. You don’t have all of the engineering information—that would be too overwhelming—but you have digital threads that lead to that information in case there are problems with the vehicle. Then you can follow the digital thread to the product lifecycle data.
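
The per-unit record and its digital threads can be sketched as a small data structure whose links point back at lifecycle data instead of copying it; the VIN, URL schemes, and field names are illustrative placeholders.

```python
# A sketch of per-unit record keeping: each product instance gets its own
# twin record of as-built features, with "digital thread" links pointing
# back at the engineering data rather than duplicating it. The VIN, the
# plm:// and qms:// schemes, and all field names are hypothetical.

as_built_twin = {
    "vin": "1HGCM82633A004352",        # one record per product instance
    "features": {"engine": "2.0L-hybrid", "trim": "EX", "brakes": "regen"},
    "thread_links": {                  # pointers into lifecycle data
        "design_model": "plm://models/2.0L-hybrid/rev-C",
        "inspection": "qms://records/1HGCM82633A004352",
    },
}

def trace_issue(twin, subsystem):
    """Follow the digital thread from a field issue back to lifecycle data."""
    return (twin["thread_links"].get("design_model"),
            twin["features"].get(subsystem))

link, variant = trace_issue(as_built_twin, "engine")
```

Keeping links rather than full engineering data is what keeps the per-vehicle record precise without becoming overwhelming.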

In addition to the value of a digital twin on the manufacturing line, there is also value in capturing data when the product is in the field. The digital twin is also designed to send info back to the automaker while the vehicle is being used. You continue to communicate with the vehicle through some connectivity. You can look at the fleet or individual vehicles, so you can determine maintenance or upgrading needs and alert the customers.
 
So, who is actually utilising digital twin technology? Is it being used now? “We see examples of digital twin use in aerospace and defense. We have a customer who provides training and they actually use a pretty good digital twin in their training.” 

1. Formalise the planning, development, integration, and use of models to inform enterprise and programme decision making and support engineering activities to digitally represent the system of interest 

2. Ensure models are accurate, complete, and usable across disciplines to support communication, collaboration, performance, and decision making across lifecycle activities 

3. Provide an enduring, authoritative source of truth with secure authentication and access controls to establish the technical baseline and product digital artifacts, and to support reviews for accurate decision making 

4. Incorporate technological innovation to establish an end-to-end digital engineering enterprise and foster conditions for productive step advances towards goals 

5. Enable end-to-end decision making using advanced human-machine interactions

6. Establish mature supporting infrastructure to perform digital engineering activities across connected information networks 

7. Develop, mature, and implement technology tools to realise digital engineering goals and share best practices using models to collaborate with stakeholders 

8. Improve digital engineering knowledge base, policy, guidance, specifications, and standards and streamline contracting, procurement and business operations 

9. Lead and support digital engineering transformation efforts, vision, strategy, and implementation to establish accountability to measure and demonstrate results across programmes 

10. Build and prepare workforce to develop knowledge, competence, and skills with active participation and engagement in planning and implementing transformation efforts



Top 10 Engineering Report Examples Improve Maintenance/Modernisation Work Orders On Job Site

6/11/2019


 

We’ve been grinding away at this Maintenance/Modernisation business, making incremental progress. We face many challenges. We commissioned a deep and comprehensive report on Ship Readiness & how surface ship Maintenance/Modernisation efforts contributed to that.
 
We were tasked to go in and understand, start codifying, and address the gaps & issues. We have made progress in our ability to understand the business from end to end. 
 
The one modification is that we started out with several main Blocks, but as we started to apply whatever heat and light we could to an organisation that includes thousands of civilians and sailors, and many more private-sector wrench-turners, we found a missing piece that we need to get at – Alignment/Oversight. 
 
Alignment/Oversight is probably the single biggest challenge before us when we consider how many different organisations have some effect on whether we plan, implement, and stabilise things right: getting the requirements right, flowing money into the process at the right time to do proper planning, and executing to a schedule/plan that we all agree to at the beginning. We are challenged in that area. 
 
We have mentioned that there is a lot of interest in that area, and NAVSEA is going to take the lead in getting that right.  
 
A quick update on the main Rocks. First, Assessment, Planning and Policy: we have a programme called “Total Ship Readiness Assessment.” This is a term pulled right out of the Submarine Playbook, and it is funded to the mandate of conducting assessments for every ship in the programme. We plan for it not to be under-resourced by one minute or one dime for execution. 
 
The challenge is that we’re working with the commanders [RMC side] on this—to make sure it is aligned with the ship schedules and the “Ship Readiness Manual [SRM]” – some early learning ground for us. 
 
The programme was challenged several years back when we slowed down funding on the Operations & Maintenance side, putting some perturbations into the full implementation, but, again, we have seen an increase in the levels of resourced programmes.
 
We have a lot of resources and processes to go out and do assessments, and as we know, assessments done right and at the right time inform a work package, which is very important. 
 
We have not made as much progress on the sustainment programme as we would like, but maybe it just took that much time to become operational and move this one a little further down the road. 
 
This one is about grouping and aligning all the resources for life-cycle assessments of systems/equipment and how they perform on common platforms. It looks across all surface ships and all systems to ensure we’re doing the right life-cycle assessments and activities to fight our way through obsolescence and readiness, how systems are designed, and how we go and approve that. 
 
Our intentions are to single it all up so when we talk diesels, for instance, it’s not just amphibious diesels, but diesels across all surface ships. So that’s the sustainment programme. 
 
We have made great strides in engineering requirements for maintenance, to be able to deliver on an expected service life at the right time on the platform that we envisioned at the time we engineered it at the beginning, so it gets 35, 40, or 45 years. 
 
Even if we change a platform, we now have the ability to go look hard with the right engineering rigor at exactly what it’s going to take to get 5 more years out of the ship—these are bedrock requirements.  
 
Nearly every ship in the Surface Navy is in the programme, but there are some exceptions—a lack of investment early on in LCS left us with problems coming in on the left-hand side, but it became an easier path to get every ship that can be in the programme into the programme. 
 
Many ships have completed their first engineering maintenance period or are planning for that availability. This is one where we just need to stay the course; the leadership is solid.
 
Again, much of the process is lifted right out of the Submarine Playbook, and these requirements are flowing into the budget process. Every ship’s requirement is treated equally, and for most engineering maintenance requirements we have asked for, we have gotten the budget to go and execute.
 
Another debate is what a year of execution looks like in the flow-down. We are succeeding in this area, meeting requirements & the budgets are moving right along with that so we can go and execute.  
 
The challenge for us, and we are working very aggressively on this, is taking what we know about the requirements and aligning it with the modernisation requirement coming together for the same availability, then telling the fleet early on exactly how many days and weeks it is going to take to accomplish all that work.
The Navy is projecting a significant increase in ship maintenance and modernisation work, and we’re concerned private industry does not have the capacity to keep up.

We’re concerned that in the out-years we will not have the private yard capacity we need to handle all the force maintenance requirements. Private yards have done a good job absorbing work that has come their way during the force reset, but the prospect of a still greater maintenance workload in the out-years, with additional LCS coming online and a potentially larger force overall, causes concern.

This concern extends beyond dry dock capacity shortfalls, which still loom large. The Forward Deployed Naval Force is at capacity, for example. There is simply more work than the current workforce there can accomplish. We’re pushing to expand the set of yards capable of and familiar with performing Navy work.

At the same time, the Navy is trying to boost its sailors’ ability to perform more maintenance work on the ship without outside assistance. While this wouldn’t make much of a dent in the looming surge in workload, it could cut down on contractor maintenance costs, and it would lead to a more self-sufficient fleet capable of operating in complex environments.

In the case of LCS, the ship concept was designed around a minimal crew, with the bulk of maintenance activities conducted by contractors. Though crew size has increased over time, the LCSs have come back to port about every 25 days for a contractor-led maintenance availability, which includes basic tasks such as monthly system checks.

LCS crews will begin taking on more maintenance responsibilities themselves.
“It’s clear what the operational fleet’s demand is for us, which is to make sure the ship can be maintained, and that the sailors, where possible, can do that. So we are focused on training for the sailors, and making sure they have the equipment, the spare parts, the technical documentation, and more to help the ships’ crews and the sailors at the Regional Maintenance Centers conduct more LCS work themselves.

“The key is to increase sailor ownership and decrease the reliance on contractors and original equipment manufacturers. The Littoral Combat Ship manning does not support shifting the entire maintenance workload to the crew, as their capacity is limited. But we’re still committed to maximising the amount of planned maintenance that we perform by the LCS crew.

One of the frustrating things for the crew was, it was just kind of the way we set it up, but contractors would come aboard and do the work but the sailors would have to hang all the tags … so it was kind of like, rather than having the sailors do the work, we would just have the sailors do all the setup and teardown, and then the contractors would step in and do the work and the sailors would watch them do the work. It’s crazy. There’s some maintenance items that are appropriately done by the depot – reset and safety … but the day-to-day weighing of CO2 bottles or, we just had to get after that.”

There’s still an ongoing conversation on the division of labor between sailors on the ships’ crews and the Regional Maintenance Centers, along with the role of contractors.
Beyond LCS, though, we have concerns about the declining ability of ships’ crews to take care of their own ships. Sailors today have fewer opportunities to become proficient at ship maintenance during shore duties, meaning sailors going to sea bring with them less knowledge about how the ship and its systems work.

In a contested operating environment with denied communications, he noted, “you are not going to have the ability to phone home.”

Navy operational and maintenance communities have to find a way to get sailors more proficient at fixing downed systems and repairing their ships while underway. While this is a complex challenge to address, part of the solution would be ensuring that crews can begin pre-deployment training on time, without delays from ship maintenance availabilities running long, and ensuring that the training time includes an emphasis on maintaining and repairing the ship.

“That’s years of constrained funding that have taken risk on things like tech manuals, Engineering Operational Sequencing System, Planned Maintenance System, and resourcing maintenance, to where what has been the risk-taker has been compressing that training timeline.

“We’re in an environment now where that risk is unacceptable.”

Data is everywhere. Used effectively, data can help you spot trends, track performance, and drive evidence-based decision-making. Unfortunately, not everyone can access the data they need. Here are sample reports you can use to understand your product development processes and use this information to make improvements, reduce overheads, and manage the Navy fleet like never before.

1. Leverage Data Forecasts

Well-led teams can leverage data to understand what customers buy and why. Marketing teams can use automation tools to track website activity and email click-through rates, improve prospects, and measure campaign effectiveness. This type of data is plentiful and can help businesses gain a deeper understanding of their target markets. They can then use these insights to forecast sales more accurately and tailor their branding and future product offerings with greater success.

While this data enhances the customer-facing side of your business, there is very little data available to improve your core functions, namely product development. Sure, you can get 3D models and 2D drawings, assembly instructions and other collateral which tell you how good your products are and how clever your engineering teams are, but you have no data, statistics or evidence that can be used to make process improvements or cost reductions.

2. Design History

Design teams, project managers, and executives only really connect at design reviews, where each stakeholder can review the current status of a project, discuss problems, and brainstorm ideas. If a project appears to be off track or behind schedule, more resources can be thrown at it. This may solve the immediate problem, but it doesn’t address the issue at its source. Without hard data, your only evidence is anecdotal. “Why is this maintenance availability taking so long?” may elicit a number of unwanted responses.

Enterprise Tool is designed to address this shortfall. In addition to the complete design history captured in each Shape Update, detailing who made what changes and when, Shape Update Enterprise Tool records project details, duration, release status, team activity, supplier access, and more. In short, any activity that touches your data is logged and presented in easy-to-read graphs, tables, and charts. This gives you complete visibility into who did what and when, how engineering efforts are trending over time, and who is contributing in what ways.

3. Activity Overview

Every report in Enterprise Tool contains links to more detailed reports, so a good place to start is at the top level with the Activity Overview. Although it’s called an “overview,” there is lots of very useful information that can be drawn from here. You can see the total number of hours spent on each project, project activity on a daily basis and which Documents have been worked on the most. You can also see which engineers are actively working on which projects and use this information, along with more detailed reports, to see if you need to allocate more resources to get a project finished on time.
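As an illustration of the kind of roll-up an Activity Overview performs, here is a minimal sketch in Python; the event fields, project names, and helper function are illustrative assumptions, not the tool’s actual data model:

```python
from collections import defaultdict

def activity_overview(events):
    """Roll raw activity events up into per-project hours and the
    most-edited document, as an overview report might (hypothetical schema)."""
    hours_by_project = defaultdict(float)
    edits_by_document = defaultdict(int)
    for e in events:  # each event records a project, a document, and hours spent
        hours_by_project[e["project"]] += e["hours"]
        edits_by_document[e["document"]] += 1
    most_edited = max(edits_by_document, key=edits_by_document.get)
    return dict(hours_by_project), most_edited

events = [
    {"project": "Hull", "document": "frame.prt", "hours": 2.5},
    {"project": "Hull", "document": "frame.prt", "hours": 1.0},
    {"project": "Engine", "document": "piston.prt", "hours": 4.0},
]
hours, top_doc = activity_overview(events)
```

From a summary like this, a manager can see at a glance which project is consuming the most hours and which Documents are being touched most often.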

Activity is detailed for every design project in your organisation. For auditing and security purposes, “Exports” is a report of great interest. Unless you assign export permissions to a specific role, team or individual, no data ever leaves Shape Update; nobody is able to share designs externally and everyone must sign in to your domain to access your data. This adds an extra level of security and a full audit trail.

4. Access the Latest Revision of a Part

Another huge benefit is ensuring that the correct part is being made: a supplier must first sign in to your domain before they can export the data you have given them access to. Every export, just like everything else in Shape Update Enterprise Tool, is logged and presented as a detailed list of who did what and when. This enables you to check that you’ve set the right permissions and data access for the right people and that everything is above board.

Additional security is provided by the Login Locations map that logs the network address of every computer that signs in to your domain. This map shows you each user’s approximate location so you can keep tabs on where your data is being accessed from. Clicking on one of the dots tells you the number of sessions run from that location and you can get a detailed list of each user’s name and on what date they signed in. This is very useful if you are working with many outside suppliers. Any outliers can be checked, just to make sure that your data is being used as intended.
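The session counting behind such a map can be sketched as a simple aggregation over sign-in events; the addresses and record layout below are hypothetical, used only to illustrate the idea:

```python
from collections import Counter

def sessions_by_address(logins):
    """Count sign-in sessions per source network address, the way a
    Login Locations map might before geolocating each address.
    Each login is a (network_address, user) pair (illustrative layout)."""
    return Counter(addr for addr, _user in logins)

logins = [
    ("203.0.113.7", "alice"),
    ("203.0.113.7", "bob"),
    ("198.51.100.2", "carol"),
]
counts = sessions_by_address(logins)
```

An address with an unexpectedly high session count, or one in an unfamiliar location, is exactly the kind of outlier worth checking.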

For managers, the number of pending releases could indicate either a flurry of activity in release submissions or a bottleneck in the approval process. Any delays here could have serious consequences, so this chart is invaluable in helping to keep everything running smoothly. To find out exactly what is going on, simply click on the Release Status pie chart and drill down.

5. Release Dashboard

The Release Dashboard lists all the Parts, Assemblies, Drawings and other project-related data that have been submitted for approval. Since each release must follow an approval workflow, the pending status indicates that a release is waiting for somebody to review and sign-off. The details of each release candidate, including the actual design data itself, can be interrogated from here to see which project it belongs to, who submitted it and how long it has been in a pending state.
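A minimal sketch of the kind of drill-down filter this enables, assuming an illustrative release record layout (not the tool’s real schema):

```python
from datetime import datetime, timedelta

def stale_releases(releases, now, max_age=timedelta(days=3)):
    """Return names of pending release candidates that have waited
    longer than max_age for sign-off (field names are assumptions)."""
    return [
        r["name"] for r in releases
        if r["status"] == "pending" and now - r["submitted"] > max_age
    ]

now = datetime(2019, 6, 21)
releases = [
    {"name": "bracket-v2", "status": "pending",  "submitted": datetime(2019, 6, 10)},
    {"name": "mount-v1",   "status": "pending",  "submitted": datetime(2019, 6, 20)},
    {"name": "panel-v3",   "status": "released", "submitted": datetime(2019, 6, 1)},
]
overdue = stale_releases(releases, now)
```

A filter like this surfaces only the submissions that genuinely need a follow-up, rather than everything in a pending state.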

A submission may be pending for any number of reasons: an approver may be on the flight line waiting for more information, or it may have simply been overlooked. You can then follow up with each worker, or override the approval process if you have the right administrative permissions.

This dashboard enables you to follow the progress of each project and ensure  the approval process runs as smoothly as possible. Monitoring the number of releases gives you a good indication of where a project is in the design cycle.

6. Project Dashboard

Shape Update is unique in how it stores design data. Rather than using files for each part, assembly and drawing, a single Shape Update Document can contain any and all project-related data, and an entire team of engineers can work on the same Shape Update Document at the same time. However, in practice, most engineers or teams of engineers will work in their own Documents and link them together in a master assembly. These Documents are then organized into Projects, which are unique to Shape Update Enterprise Tool. Projects make Document management and data access much simpler using Role-Based Access, and easier to monitor using the Project Dashboard.
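Role-Based Access of this kind can be sketched as a mapping from roles to permitted actions; the role names, actions, and grants below are assumptions for illustration, not the product’s actual permission model:

```python
# Hypothetical role definitions: each role maps to the actions it permits.
ROLES = {
    "viewer": {"view"},
    "editor": {"view", "edit"},
    "admin":  {"view", "edit", "export", "approve"},
}

def can(user_roles, project, action):
    """Check whether any of the user's roles on a project permits an action.
    user_roles maps project name -> list of role names granted there."""
    return any(action in ROLES[role] for role in user_roles.get(project, ()))

# Example grants: an editor on "Hull", a viewer on "Engine".
grants = {"Hull": ["editor"], "Engine": ["viewer"]}
```

Because every permission check flows through one small function, an audit of who can do what reduces to inspecting the role table and the grants.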

The Project Dashboard gives you details such as modeling time for the entire project, the most active users, and the most edited Documents. You can also check permissions and role assignments, which show you at a glance which users are able to access your project data and what they can do with it.

7. Feature Activity Metrics

One metric that is often overlooked is the Feature Activity chart. This chart breaks down user activity into actual design activities within all the Documents in a project. As a project progresses, it is reasonable to presume that the number of drawing activities should increase and the number of design changes should fall. If this does not appear to be the case, it could indicate a serious problem, especially if a project deadline is looming.

There is enough information on this page, and in the subsequent detailed drill-down reports, to find out exactly which areas of a project are causing delays and get a good idea of where the problems lie before further investigation. This enables you to be proactive in your project management tasks, identify problems early, and resolve them before they escalate.
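One way such a check could flag trouble, sketched under the assumption that weekly counts of design changes and drawing activities are available (the field names are illustrative):

```python
def late_stage_warning(weekly):
    """Flag a project whose design-change count is still rising late in
    the schedule, when drawing activity should be overtaking edits.
    weekly is a chronological list of per-week counts (assumed layout)."""
    last, prev = weekly[-1], weekly[-2]
    changes_rising = last["design_changes"] > prev["design_changes"]
    drawings_rising = last["drawing_activities"] > prev["drawing_activities"]
    # Trouble: churn is growing while release-ready drawing work is not.
    return changes_rising and not drawings_rising

weekly = [
    {"design_changes": 40, "drawing_activities": 5},
    {"design_changes": 55, "drawing_activities": 4},
]
```

A rule this simple would only be a starting point, but it captures the pattern the chart is meant to reveal: churn that should be tapering off is instead accelerating.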

8. User Dashboard

The User Dashboard details all activity associated with a particular user. This report can be used to review which projects and Documents a user is working on and which design activities are taking up most of their time.
Resource allocation becomes very important when there are many projects on the go at the same time. If some projects are more important than others or are in danger of overrunning, you can use the User Dashboard to see which projects each user is spending most of their time on. This makes it easier to understand everybody’s workloads and ask them to lend a hand where needed.

If it appears that some Documents are being worked on more than others, there may be several possible reasons: a particularly difficult design problem could be one, and too much to do with not enough help could be another. Training may also figure into this equation. New users naturally take longer to achieve the same results as seasoned users, so perhaps more training is required.

Finally, since everybody must sign in to your domain to work on your projects, activity by external contractors and suppliers is also recorded. This enables you to negotiate contracts based on hourly rates rather than flat rates if that suits your business better.

9. Document Access

You can look at the Document Access report. For security reviews and overall Document activity information, this report presents everything concisely. The graph displays all the information from the table at the bottom of the report, with a color-coded node for each Project, Document, and User. Each node is then connected with either a dotted line, signifying assigned access permissions, or a solid line for Document activity.

You can click on a node to filter by Project, Document or User, search for a specific item, and hover over a line to see modeling times and access permissions. Each node can be clicked to jump straight to any of the Project, Document or User Dashboards, show a detailed audit trail, or in the case of a Document, open it for viewing or editing. This makes it an ideal way to quickly navigate your engineering data.

10. Improving Your Business With Shape Update Enterprise Tool

Shape Update Enterprise Tool is not an add-on or a separate system; it’s all part of the same product development platform. Each report is driven by a unique URL, so you can easily bookmark them in your web browser for quick access. Shape Update Enterprise Tool reports and analytics are also available via mobile app, so you can access this data from anywhere at any time.

Having this level of detail about your design processes is totally new and may seem difficult at first. But you will soon start to notice patterns and trends in the data, and recognising them will become easier, helping you to be more proactive and manage your teams, your projects, and your business better. Ultimately, Shape Update Enterprise Tool will help you make informed business decisions backed by real data, enabling you to reduce costs and use resources more wisely.

Companies that are able to unearth these nuggets of information buried deep within their design processes have a significant competitive advantage over their rivals, as they can gain important insights and react quickly to expand their business in a way that is simply not possible without detailed engineering analytics.

Top 10 System Engineering Questions Fix Requirements Process Problems Create Quality Product

6/11/2019


 
The first step in the systems engineering process is requirements analysis. In this step, the systems engineer attempts to further derive, refine, and define the customer’s requirements. This includes determining quantifiable characteristics that the system must possess to be successful. The relationship among requirements, the priority of the requirements, and the flexibility of the requirements are determined.

The missions, threats, environments, constraints, and measures of effectiveness are continually reviewed. Some constraints that are considered are technological availability, physical and staff resources, cost/budgetary impacts, and the risk of not meeting the stated schedule. The output of this process answers the question “what?” for the functionality and “how well?” for the performance requirements. 

Congress’ recent interest in reforming the Pentagon’s acquisition system is all well and good, but its efforts will be wasted unless it addresses the root-cause of the system’s dysfunction: the requirements definition process. Requirements definition is the aspect of acquisition where the technological capabilities a new weapon will have are determined. Problems in the requirements phase of the acquisition process are therefore carried forward and contribute to a host of far-ranging issues down the line.

If you find yourself in a requirements role on a project, it’s likely that at some point you’ll need to create a functional specification. But what is a functional specification? Why do you create one? And, perhaps more importantly, what goes into a brief like this?

The purpose of a functional specification is to define the requirements to be implemented by the engineering solution. Now, as business analysts, not all aspects of our solutions are software-based. A perfectly legitimate solution to an engineering problem could involve a process change, organisational change, or even a configuration adjustment.

But since so much of business today is supported directly by engineering systems, many times solving a problem means upgrading or building new products, and that means specifying functional requirements. Depending on your methodology and analysis practices, a functional specification can come in a variety of different formats. Let’s look at a few of the most common.

Whatever template is in place at your organisation, the purpose of the functional specification is to capture what the product needs to do to support a  user. Often it is reviewed and approved by both business and technical stakeholders. The business users confirm that yes, this is what they really want the system to do. The technical users confirm that, yes, these requirements are feasible, implementable, and testable.

It’s the functional spec that sits in the middle and holds everything together.

Teams typically create long requirements specifications, which include information about the project, the project team, open issues, environment, assumptions, dependencies, constraints, key dates, the business model, data requirements, and, finally, the functional requirements. The functional requirements typically take up most of the proposal. These documents are thorough, but they lack spunk.

Teams should move towards a shorter scope document that consolidates many of those overview sections, along with a set of use cases to drill into the functional details. On agile projects, user stories are often the preferred format.

Whatever the format, the focus should be creating alignment between what the  users wanted and needed the system to do and what product was prepared to build for them. And that’s really the essence of the functional spec.

So you need to know what actually goes into your functional specification. In addition to contextual information about the project, a functional specification includes a list of features and functions to be supported by the software. How these features and functions are represented depends largely on the template in use.

Many functional requirements are typically represented as “system should” statements. You’ll typically have a list of system shoulds, often organised in sections by feature with a priority identified. For example, “The system should enable course participants to submit a question” or “The system should enable the course instructor to view all course participant questions.”
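A minimal structured form for such a list might look like the following; the schema, IDs, and priorities are illustrative, not a prescribed format:

```python
# Hypothetical record layout: each "system should" statement carries its
# feature section and priority, so the list can be sorted and tracked.
requirements = [
    {"id": "REQ-1", "feature": "Questions", "priority": "high",
     "text": "The system should enable course participants to submit a question."},
    {"id": "REQ-2", "feature": "Questions", "priority": "medium",
     "text": "The system should enable the course instructor to view "
             "all course participant questions."},
]

def by_priority(reqs, level):
    """Return the IDs of requirements at a given priority level."""
    return [r["id"] for r in reqs if r["priority"] == level]
```

Keeping each statement as a record rather than prose is what makes the list easy to track in a requirements management system.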

In a Use Case, functional requirements are typically represented as a series of steps. The use case puts a collection of functional requirements into the context of user action, which typically eliminates a lot of the ambiguity that makes its way into an out-of-context list of system shoulds.

For example, “Course participant selects to submit a question. Course participant provides their name, selects a question category, and provides a textual question. System sends an email to the course instructor containing the information provided by the course participant.” 

In a User Story, functional requirements are typically captured in the following syntax: “As a user, I can do something so that some benefit is received.” When used appropriately, the user story syntax is brilliant at capturing user goals, functional requirements, and business benefits together in one concise statement. For example, “As a course participant, I can submit a question so that I get my concerns about the course materials addressed” and “As a course instructor, I can view all course participant questions so I can respond in a timely manner.”
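The user story syntax is regular enough that it can even be checked mechanically. The sketch below assumes the exact “As a …, I can … so that …” phrasing and is only illustrative:

```python
import re

# Parse the "As a <role>, I can <action> so that <benefit>" pattern
# (the regex is an assumption about the phrasing, not a standard).
STORY = re.compile(
    r"As an? (?P<role>.+?), I can (?P<action>.+?) so (?:that )?(?P<benefit>.+)"
)

def parse_story(text):
    """Return the role/action/benefit parts of a user story, or None."""
    m = STORY.match(text)
    return m.groupdict() if m else None

story = parse_story(
    "As a course participant, I can submit a question "
    "so that I get my concerns about the course materials addressed"
)
```

A check like this can catch stories that drop the benefit clause, which is often where the business value of the requirement gets lost.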

Each of these ways of capturing functional requirements has its pros and cons. System should statements are easy to track in requirements management systems but difficult to implement and test as they are often presented without context.

Use cases provide great context, which helps get the right functional requirements approved and implemented, but it’s also easy for the scope inside a use case to expand while meeting user goals, or for individual requirements to get lost in larger use case documents.

User stories link together benefits, functionality, and user goals, and are often at the right level of detail to facilitate easy planning, but they often lack the detail needed to implement. It is also easy to lose the big picture in the midst of working through individual user stories.

The approach you choose will often be dictated by organisational standards. In the absence of standards, you get to define your own. It’s a good idea to start by asking the user and technical stakeholders what they’d like to see in a spec, as this can help you avoid a lot of issues down the line. 

Our favorite approach to specifying functional requirements is use cases with accompanying end-user modular interfaces, and a short scope statement to get aligned around the big picture. There are many templates and work samples covering this approach.

“Aircraft Design Breaks System into Self-Contained Module Components Connect with Standardised Set of Interfaces”

“The Kiowa Warrior was a great aircraft at the time, but there was no open architecture… If you wanted to change code for the multi-functional display, for example, you’d have to rewrite the entire code for the whole aircraft and flight test it, because … you might impact the flight controls.”

“It was a big deal just to change a screen, let alone make a major upgrade like a new weapon, sensor, or defensive jammer. That was tolerable when upgrades were infrequent and each subsystem on an aircraft was effectively stand-alone. 

It’s unacceptable in the modern world, where technology advances at such a quick pace and each aircraft has multiple interdependent systems all networked together. What you want, is something like the iPhone, where any vendor can offer a new app as long as it meets the manufacturing technical standards and the manufacturer itself updates the underlying operating system every few weeks.

So why does it take multiple years just to upgrade an existing aircraft? “We’ve got to build our equipment faster. We’ve got to deliver improvements to the warfighter more quickly, we need to look … introducing upgrades over the network, much like you get upgrades to your iPhone.

Software upgrades in particular should be swift because they don’t require any physical changes to install. But in a weapons system like Kiowa Warrior, a change in one section of code like for displays could inadvertently affect another like flight controls. So any upgrade requires not only meticulous debugging of the software, but time-consuming flight tests to prove and certify that the aircraft is still safe.

What’s more, because software and hardware were not standardised, it was often impossible to change them without knowing the original manufacturer’s trade secrets — their proprietary intellectual property. That often led to so-called “vendor lock,” when only that original manufacturer could do upgrades, with no competition on either quality or price.

The solution, much easier said than done, is to break the system into self-contained components — modules — that connect to each other like Legos using a standardised, well-defined set of interfaces — the architecture — and allow all potential competitors to know how those interfaces work .

The “open” part allows you to take a single module of software and/or hardware made by one vendor, e.g. the cockpit displays, and replace it with a new and improved module from another vendor. Because each module is basically a black box that plugs-and-plays into the wider architecture, interacting with other modules only in strictly limited ways that everybody knows about, replacing it shouldn’t have unexpected impacts on the rest of the system and shouldn’t require learning anyone’s trade secrets to make things work together.
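The plug-and-play idea can be sketched in code: modules meet only at a small published interface, so one vendor’s implementation can replace another’s without touching the rest of the system. The interface and vendor classes below are hypothetical, purely to illustrate the pattern:

```python
from typing import Protocol

class Display(Protocol):
    """The published interface: everything the rest of the system is
    allowed to know about a display module."""
    def render(self, data: dict) -> str: ...

class VendorADisplay:
    def render(self, data: dict) -> str:
        return f"ALT {data['altitude']} ft"

class VendorBDisplay:
    def render(self, data: dict) -> str:
        return f"Altitude: {data['altitude']} feet"

def cockpit_update(display: Display, data: dict) -> str:
    # The airframe code depends only on the interface, not on any vendor's
    # internals, so swapping modules requires no changes here.
    return display.render(data)
```

Swapping `VendorADisplay` for `VendorBDisplay` changes nothing outside the module itself, which is exactly the property that lets a new vendor compete on a single component.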

“We probably don’t need to understand how that box works, we just need to understand what the input is and what the output is.  “No longer is it appropriate to go out and simply ask for the patent for the entire piece of equipment” — a chronic sticking point with contractors, especially small businesses whose intellectual property is often their main asset.

The modular approach applies most obviously to software, but has implications for hardware as well. “We don’t need a whole helicopter in order to prove out a rotor head, a landing gear assembly, or a control feature.  Instead, you can just put the new component, whatever it is, on a current aircraft and see how well it works: “We can fly it, we can test it, and we can get feedback.”

Now, actually developing interface standards to allow easy plug-and-play of new components is not easy. There’s at least a foundation out there, the Future Airborne Capability Environment, which is developed and updated by a DoD-industry coalition, but FACE may need more refinement and detail.

“Our first open systems architecture cockpit is in the UH-60 Victor,” which will soon go into operational testing. “That’s really the forerunner to open source architecture. It’s being informative as we speak.”

“We turn changes in that cockpit right now sometimes overnight, software changes.” Instead of having to test everything every time you change anything, the modules are “fully partitioned, such that this part of the aircraft doesn’t have to touch that part of the aircraft.”

You may think that’s relatively simple. “That’s not simple.”

"So the question is: How will DoD adapt requirements to maintain the resiliency of our warfighters? What steps must we take to protect our installations? How can installations sustain warfighters when networks are damaged or degraded? “Business as usual” is not an option in today’s budget environment."

Acquisition programs often see major increases in development costs during the production phase of the process. Since this phase occurs long after the capability requirements of a program have been set, development costs should stay relatively flat. Usually, cost increases stem from the addition of unplanned capabilities, but some are due to deficiencies discovered in testing.

Upfront cost increases for capability upgrades are significant, but the new requirements add complexity to the production process that will balloon costs later. There are three possible explanations for these capability revisions. First, changing operational requirements could mean that the originally planned capabilities no longer meet current operational needs. Second, new or emerging technologies could have become available following the initial requirements definition phase. Third, administrative paperwork by the military services and their subcomponents allowed for unrealistic requirements at the outset of the process that had to be revised later.

While the first two explanations do not stem entirely from the requirements definition process, they are affected by it. The fact that capabilities determined during the initial planning process might need to be changed to account for changing operational needs suggests that both the requirements and production processes are moving too slowly. The same goes for the introduction of new technology late in the process.

The third explanation ties into the first two. Process roadblocks allow for overstated capability requirements that are really technological “wish lists,” because no service has an incentive to veto another service’s plans, no matter how costly or unfeasible they may be. The military services want weapons that push the technological envelope, and attempt to predict what will eventually become state-of-the-art technology five to ten years out with little actual knowledge of whether their dream can become reality. 

Those interested in fixing the Pentagon’s acquisition system need to actually address the requirements process. Requirements reform serves the goals of the current acquisition reform effort, and ignoring it will inhibit the effectiveness of current proposals. It will not be easy, but it should take greater priority than addressing the other phases of the acquisition process. The requirements process feeds into and ultimately drives the other phases of acquisition. Addressing it will target a root cause of acquisition dysfunction rather than just treating a symptom.

“Processes such as developing requirements, contracting, making investments, or obligating money are often driven not by a sound business case, but by arbitrary deadlines and outside pressures.”

“DoD could get more from its funds if it functioned in a flexible system that allowed more effective resource allocation. An opportunity cost arises each time DoD makes a spending choice that could have been invested in developing other capabilities, delivering more units, or funding other critical requirements.”

“Many regulations can remove or dilute authority and accountability. Regulations that dictate contract type can deprive acquisition personnel of the discretion needed to get the best deal for DoD. Additionally, the management structure and decision-making process within DoD are too bureaucratic and encumbered by numerous layers of review. 

Successive reviews do not necessarily add substantive value, but they do add time to the process and add to the number of people who can say no or influence a program, including people who have no stake in the outcome of the acquisition. Because nobody holds actual authority to manage a program, there is no one to hold accountable. The fundamental reason for the continued underperformance in acquisition activities is the fragmentation of authority and accountability for performance."

“Defense acquisition is a human activity dependent on the judgments, considerations, interests, and decisions of people operating in the real world regardless of how impressive policy initiatives look on paper, or how effective the acquisition system is.

The ultimate effectiveness and efficiency of defense acquisition depends on and is determined by the people who are responsible for all phases of acquisition.”

“The dynamic defense marketplace is vastly different from the defense-centric marketplace of the past in which DoD could set the rules of acquisition. To effectively benefit from and compete in the dynamic defense marketplace, DoD must understand where it fits into the current business environment and adapt to this new reality. 

DoD needs to be a better informed buyer, one that understands market dynamics, interests of companies—including cash flow, profit motive, and opportunity costs, as well as the broader economy."

“Systems and capabilities must be developed, deployed, and integrated into operations within the arc of the threat, not after the threat has passed or after DoD has spent billions of dollars on technologies or capabilities that already are obsolete or will be obsolete by the time they are deployed. The private sector now drives much of the technological innovation, which makes it difficult for DoD to keep pace.”
​
1.  What are the reasons behind the system development?  

2. What are the customer’s expectations?

3. How will the performance of the system be measured?  

4. Who are the users and how do they intend to use the product?  

5. What environmental characteristics (e.g., corrosion from salt aboard a carrier, or strong electromagnetic fields) does the system have to take into account?  

6. What are the existing and planned interfaces?  

7. What is the customer’s level of expertise? 

8. What functions will the system perform, expressed in the customer’s terms?  

9. What are the constraints with which the system must comply?  

10. What will be the final form of the product—model, prototype, or mass production?

Top 50 Digital Twin Component Sequence Build Scenarios Highlight Product Design Model Operations

6/11/2019


 
What makes the Digital Twin model feasible is the ability to work in virtual space. The classic sequential models of Systems Engineering were necessitated by the need to work with physical objects. Designs had to be translated into expensive physical prototypes in order to do the downstream work of manufacturing. Only a subset of designs could be considered, because the cost of getting it wrong and having to go back and redesign was high in both money and time.

The Digital Twin changes that with its ability to model and simulate digitally. Downstream functional areas can influence design because working with digital models in the create phase is much cheaper and faster and will continue to move in that direction.

The building blocks required for Digital Twin manoeuvre are quite similar to those required to implement Blockchain with trusted status updates of connected instances. To deliver value, connections must span a wide mission space. Their implementation must not be tied to fixed steps or locations, but must be time-sensitive, maximising transmission and minimising sense/response latency between the edge and the core mission space.

The convergence of Digital Twins and Blockchain is evident. Enterprises dissociated by modular structures and associated by function in operational sequences present a series of steps subdivided into blocks -- not only things/objects but also multi-agent models, units of work, processes, verification decisions, outliers, feedback, metrics, etc.

Component Sequence Builds make it easy to represent objects, processes, and decision outcomes. Connected blocks can support networks of simulation agents joined by common Digital Twins. For example, the alignment concepts described in previous reports, applied to the appropriate blocks, can lead to useful platforms.
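The idea of blocks carrying trusted status updates can be sketched with a minimal hash-chained log. This is an illustrative assumption, not a production Blockchain: the field names, the `engine-twin` payload, and the `verify_chain` helper are all hypothetical.

```python
import hashlib
import json

def make_block(prev_hash, payload):
    """Build one block: a status update chained to its predecessor by hash."""
    body = {"prev": prev_hash, "seq": payload.get("seq", 0), "payload": payload}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def verify_chain(chain):
    """A tampered update breaks every downstream hash link."""
    for prev, blk in zip(chain, chain[1:]):
        body = {k: blk[k] for k in ("prev", "seq", "payload")}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if blk["prev"] != prev["hash"] or recomputed != blk["hash"]:
            return False
    return True

genesis = make_block("0" * 64, {"unit": "engine-twin", "status": "commissioned"})
chain = [genesis,
         make_block(genesis["hash"], {"unit": "engine-twin", "status": "in-service"})]
print(verify_chain(chain))  # True
```

Because each block's hash covers the previous block's hash, altering any historical status update invalidates the whole downstream chain, which is what makes the update log "trusted".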

Valid operational results based on capacity prediction have been used to develop new mechanistic route modules. During this process, the transition between any two equipment upgrade/repair Digital Twin states can be represented by conditional probabilities in a sequential series.
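Those conditional state transitions can be sketched as a simple Markov-style chain. The equipment states (`operational`, `degraded`, `failed`, `repair`) and all of the probabilities below are invented for illustration:

```python
import random

# Hypothetical digital-twin states for a piece of equipment and the
# conditional probabilities P(next | current) for one inspection cycle.
TRANSITIONS = {
    "operational": {"operational": 0.90, "degraded": 0.08, "failed": 0.02},
    "degraded":    {"operational": 0.30, "degraded": 0.55, "failed": 0.15},
    "failed":      {"repair": 1.0},
    "repair":      {"operational": 0.85, "degraded": 0.15},
}

def step(state, rng):
    """Sample the next state from the conditional distribution P(next | current)."""
    r, cum = rng.random(), 0.0
    for nxt, p in TRANSITIONS[state].items():
        cum += p
        if r < cum:
            return nxt
    return nxt  # guard against floating-point round-off

rng = random.Random(42)
state, history = "operational", ["operational"]
for _ in range(10):
    state = step(state, rng)
    history.append(state)
print(history)
```

Chaining such transitions in sequence is what lets a twin forecast the likely condition of the physical asset several upgrade/repair cycles ahead.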

Previous Systems Engineering models include the Waterfall Model and the Spiral Model. What these models have in common is a sequential perspective.

The Waterfall model is clearly sequential, with the flow running from design to operation. Waterfall is very simple to understand and use: each phase must be completed before the next phase can start. Development activity is divided into phases, and each phase consists of a series of tasks with its own objectives. A waterfall model could be implemented efficiently to design a traffic signal system, for example.

The Spiral model reflects the same perspective, although it adds an iterative aspect. Spiral is a combination of the sequential and prototype models, and is best used for large projects that involve continuous enhancement. Specific activities are done in each iteration, or spiral, whose output is a small prototype of the larger system. The same activities are then repeated for all the spirals until the build is complete.
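The contrast between the two models can be sketched in a few lines. The phase names and the `waterfall`/`spiral` helpers are illustrative assumptions, not any standard's terminology:

```python
PHASES = ["requirements", "design", "implementation", "verification", "operation"]

def waterfall(phases):
    """Each phase must finish before the next begins: one pass, no iteration."""
    return [f"{p}: complete" for p in phases]

def spiral(phases, iterations):
    """Repeat the same phases each spiral, yielding a fuller prototype each time."""
    log = []
    for i in range(1, iterations + 1):
        for p in phases:
            log.append(f"spiral {i} / {p}")
        log.append(f"spiral {i}: prototype delivered")
    return log

print(waterfall(PHASES)[-1])   # operation: complete
print(spiral(PHASES, 2)[-1])   # spiral 2: prototype delivered
```

The point of the sketch: waterfall produces one pass with a single end state, while spiral produces a prototype at the end of every loop, which is where the Digital Twin's iterative, simultaneous character departs from both.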

Other models involve deconstruction and push-down of requirements to the component level, and a building back up from components to the complete system.

While these are conceptual models, the messy reality of systems development is that the target forward flow from inception to system is simply an ideal. What actually happens is a great deal of discovering that the system as designed does not really deliver the desired behavior, cannot be manufactured, or is not supportable or sustainable at the desired cost levels. Even when the design goes according to plan using deconstruction models, the claim is that it leads to highly fragile systems.

The Digital Twin implementation model attempts to convey a sense of being iterative and simultaneous in the development process. Unlike the Waterfall or even the Spiral model, the downstream functional areas are brought upstream into the create phase, where they meet considerations of system design. In fact, these areas influence the design.

At first glance, Digital Twin may seem like just a buzzword, not something that should be associated with engineering or architectural work. However, this is not the case: there is a great deal that can be done, and as the technology improves every day, the advancements only make the case for using it stronger.

One benefit is that it helps to save both time and money on designs. It prevents going from the design phase to the prototyping phase only to realise that the result didn’t come out as intended, wasting the money spent on the design. Virtual reality helps to bridge the gap between these two phases by allowing the user to see the design at 1:1 scale, see how components sit in the assembly, and ensure that the end design is ready to be manufactured.

The manufacturing of integrated circuits on silicon wafers is a good example of a production process of huge complexity. Many process steps for different types of equipment are required to produce a chip of average complexity. A mix of different process types, i.e. batch processes and single wafer processes, sequence-dependent setup times, very expensive equipment and reentrant flows are typical for this type of manufacturing.
Modern semiconductor manufacturing processes are additionally characterised by a wide and changing range of product types and the demand to achieve good delivery performance. Semiconductor wafer fabrication facilities are examples of complex job shops.

Product moves through the wafer fab in jobs/lots, each lot consisting of several wafers. The circuits are made up of layers, and the reentrant nature of the process is one of the main sources of difficulty in planning, scheduling, and controlling wafer fabs.
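The reentrant flow described above can be sketched as lots repeatedly re-entering the same tool groups, once per layer. The tool names, layer count, and simple FIFO discipline here are simplifying assumptions:

```python
from collections import deque

# Hypothetical route: each layer revisits the same tool groups (reentrant flow).
TOOLS = ["litho", "etch", "deposition"]
LAYERS = 3

def route(layers, tools):
    """A lot's full route: the same tool sequence repeated once per layer."""
    return [(layer, tool) for layer in range(1, layers + 1) for tool in tools]

def run(lots, layers=LAYERS, tools=TOOLS):
    """Serve lots FIFO through the reentrant route, counting tool visits."""
    visits = {t: 0 for t in tools}
    queue = deque((lot, route(layers, tools)) for lot in lots)
    while queue:
        lot, remaining = queue.popleft()
        layer, tool = remaining[0]
        visits[tool] += 1
        if remaining[1:]:
            queue.append((lot, remaining[1:]))  # lot re-enters the queue: reentrant flow
    return visits

print(run(["lot-A", "lot-B"]))  # each tool visited layers * lots = 6 times
```

Even this toy version shows why scheduling is hard: lots at different layers compete for the same tools, so a new lot's first layer queues behind an old lot's last layer.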

With those time savings in mind, Digital Twins enable the communication of clear design solutions to projects. Looking at a model on a screen or a drawing, engineers might see different things or not even understand what they are looking at. Virtual reality gives everyone the same exact view. In some instances, models can be marked up in a way that aids adjustments further down the design process.

The entire production process runs itself, without operating personnel, from an autonomous control station. Fundamental to the system is the way all the machines used are networked. The order data are transmitted to the control station, which then prioritises the various build requests and allocates them to a system, for example a 3D print system. During the build process, the manufacturing status can also be retrieved on a mobile device, independent of location.

Once the full production chain has been completed, the quality reports are sent back centrally to the control station. All the data necessary for the production of a “Digital Twin” can be accessed here, so allowing complete traceability, amongst other things.
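A control station that prioritises build requests and allocates them to print systems might be sketched as a simple priority queue. The order names, priority numbers, and round-robin allocation below are hypothetical:

```python
import heapq

def dispatch(orders, printers):
    """Assign orders to print systems in priority order, round-robin over cells."""
    # Lower number = higher priority; heapq always pops the smallest first.
    heap = [(prio, name) for name, prio in orders.items()]
    heapq.heapify(heap)
    allocation, i = {}, 0
    while heap:
        _, name = heapq.heappop(heap)
        allocation[name] = printers[i % len(printers)]
        i += 1
    return allocation

orders = {"bracket": 2, "base-plate": 1, "housing": 3}
printers = ["print-cell-1", "print-cell-2"]
print(dispatch(orders, printers))
# {'base-plate': 'print-cell-1', 'bracket': 'print-cell-2', 'housing': 'print-cell-1'}
```

A real control station would also track machine status and feed quality reports back into the allocation loop, but the prioritise-then-allocate core is the same.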

The 3D printing process has become more and more interesting as a complementary or alternative process to conventional manufacturing techniques. The technology is described as ‘additive’ because of the way in which the process involves the application of layer after layer of thin material, which is then hardened by an energy source. Along with plastics and ceramics, it is also possible to produce metal components in a 3D printing process.

The aim of the pilot project was to develop a next-generation “Digital Twin” manufacturing line which would be able to produce aluminium components for the automotive and aerospace sectors significantly more cost-effectively than is currently possible. The successful outcome of the project means that in terms of the overall production process, manufacturing costs could be reduced by up to 50 percent compared with existing 3D printing systems.

"As far as the aircraft industry is concerned, the aim now is to build further on this expertise and to bring it to bear in other sectors as well.”

The secret lies in a scalable additive production chain, which is fully automated right through to the point where the printed parts are mechanically sawn off the build platform. This means that no manual work is now required at any stage of the process, from the data preparation and central powder supply through to the AM build process itself and including heat treatment, quality assurance and separation of the components from the build platform.

The technical heart of the system is the four-laser system for industrial 3D printing using metal materials. A driverless transport system and robots ensure the smooth movement of the parts through every stage of the production line.

A continuous 3D data string with integrated quality management makes this production system one of the first benchmark examples of future networked manufacturing. The manufacturing process is completely scalable: the production lines can simply be duplicated to extend the capacity of the plant. This brings the promise of further substantial savings in the future as the numbers rise. Today, the pilot facility is already capable of the automated manufacturing of components to series-production quality standards.

Parts are already being produced on the new technology line: the truck unit, for example, is already using the first replacement part –  a bracket for a diesel truck engine.

The 3D printing process is particularly useful in the replacement part sector since, in the event of a tool problem, infrequently required parts can often be reproduced more cost-effectively than with conventional sand or pressure casting processes. The first requests for 3D-printed replacement bus parts in aluminium are currently being examined. The analysis team in the passenger car area is also currently considering suitable potential applications.

Components for new vehicles can be optimised for 3D printing during the design phase, bringing the promise of further advantages in terms of cost. 3D printing also delivers weight benefits, which are of particular interest for electric vehicles.

When it comes to replacement parts, 3D printing brings the advantage, going forward, of saving warehousing costs – because parts can instead be produced ‘on demand’ by the “Digital Twin”, in other words the centralised availability of digital manufacturing data to allow the decentralised production of replacement parts using 3D printing.

“Additive Manufacturing is also suitable for smallest-series production of new limited edition vehicles. Systematic development of the parts specifically for 3D printing means that the production costs can be further reduced and the quality optimised. 

3D printing also makes particular sense during the advance development of vehicles. The low numbers required can often be produced more cost-effectively, and faster, with Additive Manufacturing than with conventional production processes.”

This applies just as much for vehicles with a combustion engine as for electric cars. 3D printing is also eminently suitable, for instance, for the production of the integrated base plates that carry the cooling lines for the batteries in electric vehicles.

High product quality comes as standard in the pilot facility. There is a provision for the use of a high-strength aluminium/magnesium/scandium alloy for parts used in the aviation and aerospace sectors.

For the automotive sector, a classic aluminium alloy is used, the material properties of which have been continually improved over the course of the pilot project. The material strength and finish quality, amongst other factors, have been significantly improved compared with when the cooperation started.

Now that all the quality checks so far have been passed with such promising results, preparations are under way for an audit according to the requirements of stringent industry standards. This is one of the prerequisites for the supply of series-production components by contract printing suppliers.

The automation of the entire 3D Print production chain will in future make it possible to manufacture larger batches in series production – with the same reliability, functionality, durability and economic efficiency as conventionally manufactured components. 

1. Modeling states of other agents

2. Fusing uncertain sensor data

3. Inter-component robotic communication

4. Deliberative behaviours for pursuit 

5. Mixed reactive behaviour 

6. Local/global intelligence

7. Pay attention to reward 

8. Interdependent resource actions

9. Iterative game play

10. Collective competitive change

11. Deduce intentions through observation

12. Deduce abilities through observation

13. Model as a team with individual roles

14. Depend on others for goal 

15. Multi-agent adaptive load balancing

16. Focal points/emergent conventions

17. Agents filling different roles

18. Distributed/active sensing

19. Generalised partial global planning

20. Query propagation

21. Distributed traffic mapping

22. Planning State 

23. Goal communication

24. Negotiation between agents

26. Resource schedule coordination

27. Internal collective commitment

28. Internal collective decommitment

29. Changing shape/size

30. Grounding meaning via shared experience

31. Legacy systems integration

32. Reasoning about accuracy

33. Training other agents to track driving 

34. Minimise need for training

35. Market-based techniques

36. Distributed constraints

37. Generalised partial global planning

38. Learning to choose coordination techniques

39. Query response in information networks

40. Potential commitment states

41. Pre-action/Actual commitment states

42. Collaborative localisation

43. Communication utility/accuracy

44. Grounding meaning via shared experience

45. Legacy systems integration

46. Training other agents to track driving

47. Minimise need for training

48. Market-based methods for distributed constraints

49. Query response in information networks

50. Division of independent tasks
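Several of the techniques listed above (e.g. 15, multi-agent adaptive load balancing; 24, negotiation between agents; 35, market-based techniques) can be illustrated with a single-round sealed-bid auction. The agents, tasks, and cost figures are invented for illustration:

```python
def auction(tasks, agents):
    """Single-round sealed-bid auction: each task goes to the lowest-cost bidder."""
    assignment = {}
    load = {a: 0 for a in agents}
    for task, costs in tasks.items():
        # Bid = advertised cost plus current load, a simple load-balancing term.
        winner = min(agents, key=lambda a: costs[a] + load[a])
        assignment[task] = winner
        load[winner] += costs[winner]
    return assignment

tasks = {
    "scout-north": {"uav-1": 2, "uav-2": 5},
    "scout-south": {"uav-1": 3, "uav-2": 3},
    "relay":       {"uav-1": 4, "uav-2": 1},
}
print(auction(tasks, ["uav-1", "uav-2"]))
# {'scout-north': 'uav-1', 'scout-south': 'uav-2', 'relay': 'uav-2'}
```

Adding each winner's cost to its future bids is the load-balancing term: without it, one low-cost agent would win every task and become the bottleneck.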


Top 10 Readiness Analytics Tools Initiatives Establish Actions/Outcomes Increase Mission Capable Rates

6/2/2019


 
The Marine Corps is focused on aviation logistics as a means of regaining readiness, and an extensive “Marine Aviation Logistics Plan” outlines ideas to boost professional development opportunities and modernise sustainment to keep up with how the fleet employs its aviation squadrons.

The Aviation Logistics Plan seeks to “modernise existing, time-tested support strategies, as well as capitalise on emerging capabilities and technologies offered by the industrial base. Collectively, these efforts will enhance the air combat element by improving the readiness (e.g. effectiveness, reliability and availability) of Marine Corps aircraft.”

The Marine Corps Aviation plan states that “maintenance concept changes, repair capability standup, and contract strategy changes” must be enabled to keep up with readiness challenges.

Almost every piece of modernised equipment is network-driven. The complexity of modernised equipment forces maintainers to take an active role in the setup, configuration, operation, and maintenance of this equipment.

Maintenance officers will be crucial to transitioning to new equipment and training by providing the subject matter expertise allowing operators to successfully employ their weapons system. The enlisted maintainer of the future will have to be agile enough to adapt to the potential for rapid changes in capabilities and system implementation.

Plans require maintainers to be as competent in basic status update link implementation as operators. Linked/aligned schedules between the roles of operators, maintainers, and tactical users will continue to be essential for success in all future missions.

Marine Corps Tracking Systems are used to program funds for intended use, but are not clearly linked to readiness goals. When the Marine Corps programs funds for intended use, it uses tracking systems to identify the funds; however, when it executes those funds, it uses a different set of fiscal codes to identify them. 

As a result, the Marine Corps cannot link the programmed intent of the funds to readiness, making it difficult to track funds through the budget cycle. 

Tracking costs associated with specific exercises was difficult because officials could not attribute several large one-time training expenses to the specific training exercises that contribute to readiness. There is currently no systematic way to ensure that codes are used accurately to associate executed funds with training exercises, which means there are no complete or consistent data on the costs associated with individual training exercises. 

As a result, commanders may lack accurate data for making resource decisions about training exercises needed to complete Mission Essential Tasks and improve units’ training readiness. 

Several factors have made it challenging for the Marine Corps to provide Congress the information it needs. 

First, the Marine Corps cannot fully track training funds through the budget cycle, making it difficult for the Marine Corps to, among other things, show that training funds were spent on readiness.

Second, the Marine Corps has not prioritised tackling the longstanding problem of how to link training resources to readiness. Although the Marine Corps has a standing order to develop an enterprise-wide performance management framework that links resources to readiness via a robust analytical framework, no single entity has been assigned the authority to enforce this order. 

In the absence of that leadership, certain components of the Marine Corps have developed their own, independent initiatives that were designed to achieve the same objective of linking funding to readiness, but had their own specific approaches and intended uses.

Navy will examine the readiness reporting relationships of aviation, undersea warfare, surface warfare, expeditionary warfare, information warfare and the shore enterprise.

A related effort calls for the digitization of readiness. “When we in the Navy, or the rest of the military for that matter, talk about digitization, talk about analytics, machine learning, artificial intelligence, we usually see this through the fleet lens”

“We talk about how to achieve decision superiority in the fleet fight. That’s all good. But fleet readiness will benefit significantly by bringing all of the same tools to bear on all of our readiness processes. In fact, we have more unused data available to us here than we currently do in the fight.”

“We will use descriptive analytics to mine historic data and to derive the readiness drivers behind past successes or failures, moving our understanding of readiness beyond a set of closely held beliefs and toward rigorously derived facts.

“Once we understand the true drivers behind our readiness outcomes, we must apply predictive analytics to forecast how much outcomes will change with our investments in those readiness drivers. With predictive analytics in play, we will find ourselves in a place where we’re making data-informed decisions, maximizing the return on investment.

Once the service gets the predictive analytics capability, prescriptive analytics will be the next step. “If we achieve this, the final frontier of analytical capabilities will take us to prescriptive analytics, where our automated analytical tools will accelerate our decision making by suggesting decision options we may not have been aware of if we relied on human action alone."

To get there, the fleet analytics office will “develop the dashboards and reporting tools to see in real time what is going on with readiness capacity. They will also develop a risk matrix that helps us assess risk against the mission and drive accountability."
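The descriptive-to-predictive step described in the quotes above can be sketched as a regression over historic readiness drivers. The drivers, dollar figures, and mission-capable rates below are entirely hypothetical:

```python
import numpy as np

# Hypothetical historic data: rows = [maintenance hours, parts funding ($M)],
# target = observed mission-capable rate (%).
drivers = np.array([[120, 4.0], [150, 5.5], [90, 3.0], [200, 7.0], [170, 6.0]])
mc_rate = np.array([68.0, 74.0, 61.0, 85.0, 79.0])

# Descriptive step: fit past outcomes to the readiness drivers (least squares).
X = np.column_stack([drivers, np.ones(len(drivers))])  # add intercept column
coef, *_ = np.linalg.lstsq(X, mc_rate, rcond=None)

# Predictive step: forecast the outcome of a proposed investment level.
proposed = np.array([180, 6.5, 1.0])  # hours, funding, intercept
forecast = proposed @ coef
print(round(float(forecast), 1))
```

The prescriptive step would then invert this: search over candidate investments for the one that maximises the forecast outcome per dollar.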

Department of Defense Barriers Limit Efforts Aimed at Establishing “Fleet Readiness Analytics Office” Support to Senior Leaders Making Force Structure Decisions 

 DoD advised that we replace the term “force structure” with “force planning” to ensure that different audiences understand that we are referring to force sizing, shaping, capability, and concept development. DoD correctly stated that we were using the term “force structure” in a broad sense.

However, the term force planning is not interchangeable with force structure, because force planning is the act of analyzing and determining force structure needs. To provide further clarification, we added a note to the report stating that when we refer to force structure analysis, it includes the force planning elements identified by DoD, i.e., force sizing, shaping, capability, and concept development. 

The current DoD approach for providing readiness analytic support has not provided the timely and comprehensive analyses that senior leaders need to make informed decisions about the joint force structure needed to implement the National Defense Strategy. Senior leaders have documented in relevant DoD guidance that there are cracks in the foundations of readiness analytics, many of which originate with Support for Strategic Analysis.

This is due in part to highly detailed and complex products that are difficult to produce and lack the flexibility to analyze; insufficient guidance to overcome the interests of the services in protecting their force structure equities; and the lack of a joint readiness analytic capability.

DoD guidance also states that Support for Strategic Analysis products should retain consistency with DoD strategy and current intelligence and should incorporate operational approaches effective at regaining readiness. Credible independent analysis of an issue requires a detailed, well-understood, up-to-date common basis for that analysis. 

A key stated goal of Support for Strategic Analysis was to create a common analytic foundation so that the services’ force structures could be evaluated as a joint force—as it would fight. However, Support for Strategic Analysis  has not resulted in this type of joint analysis.

Specifically, DoD guidance states that Support for Strategic Analysis is intended to facilitate the comparison and evaluation of competing force structure options and cross-service tradeoffs. DoD guidance also states that assessments of the aggregate readiness capacity of the joint force can provide an analytic foundation to identify risk and understand tradeoffs across competing demands for the force.

According to the services, Support for Strategic Analysis products provide a valuable resource and are critical to informing programmatic decisions. However, DoD guidance noted that there was too little joint analysis at the operational and strategic levels; the department lacks a body or process to conduct or review joint force analysis; and the department’s Support for Strategic Analysis efforts were focused on developing, rather than analyzing, the common starting points. Accordingly, it reiterated the need for Support for Strategic Analysis to free up time and resources to conduct joint analysis and review competing analyses. 

Officials said DoD currently compares and makes decisions on force structure options primarily through the budget process; however, such budget reviews are typically limited to specific areas of interest. The officials added that program and budget review is not the best place to evaluate joint force structure tradeoffs because the kinds of issues examined in the budget process are more limited in scope and generally do not include comprehensive cross-service comparisons.

Support for Strategic Analysis has not yielded the analytic support it was intended to provide, owing to three interrelated and persistent challenges: 1) cumbersome and inflexible products; 2) limited analysis that tends not to deviate from the services’ programmed force structures and has not tested key assumptions; and 3) an absence of joint analysis evaluating competing force structure options and cross-service tradeoffs.

One of the key reasons DoD did not keep the products complete and up to date was that developing and approving highly detailed and complex Support for Strategic Analysis products was cumbersome, taking a significant level of effort and time. Officials told us that developing the Concepts of Operation and Readiness Level Views, in particular, was difficult because there was a desire to gain consensus among all of the stakeholders and because the services wanted these products to have high-fidelity detail in order to run their readiness models. 

For example, Cost Assessment Office and Joint Staff officials told us that it usually took a couple of years to build and approve the Detailed View for one readiness scenario. The officials added that the level of detail included made the product inflexible and difficult to vary. Cost Assessment Office and Joint Staff officials agreed that this product became far too detailed and time-consuming and used a substantial amount of the department’s analytic capacity. 

As a result, the officials told us that the Cost Assessment Office abandoned building additional Detailed Views. The lack of agreed-upon drivers of readiness has had other effects. For example, OSD Policy and Joint Staff officials told us that the services still wanted the comprehensive information that the Detailed View was supposed to provide for use in their readiness campaign models. Without the Cost Assessment Office producing Detailed Views, the officials noted that some of the detailed information migrated into the higher-level Concepts of Operation, making that product more difficult and time-consuming to develop and analyze as well.

Service officials told us the services have been reluctant to conduct or share boundary-pushing readiness analyses through Support for Strategic Analysis  for fear that they will jeopardize their forces or limit their options. Officials also told us that the services have leveraged their participation in developing Support for Strategic Analysis products to ensure their favored major force structure elements are included in the common starting point. 

Joint Staff officials noted that they were able to do this because Support for Strategic Analysis  did not constrain what force structure the services could use for their analysis. That is, if the force structure was programmed, they could use it because the readiness goal was to overwhelm the adversary. However, by not significantly deviating from the starting points, the services were able to ensure that their analytic outcomes support the need for the already programmed force. 

Additionally, several questionable assumptions underpin the analysis. Sensitivity analysis examines the effects that changes to key assumptions have on the analytic outcome and is helpful for understanding risk. It can therefore give decision makers insight into how readiness levels would change if conditions did not match the assumptions.

However, officials told us that the services, using Support for Strategic Analysis products as a starting point, generally have not conducted sensitivity analyses on key operational assumptions or on factors that may not be static or at least have some uncertainty and, if varied, may raise or lower the risk of completing assigned tasks or missions. 

According to these officials, certain questionable assumptions have not been analyzed through sensitivity analysis as part of Support for Strategic Analysis. For example, all four services tend to assume that their readiness for a conflict will be high, consistent with the level directed in guidance. But at the individual service level, the military services continue to report readiness challenges, and readiness rebuilding is anticipated to take several years. Specific details of service-specific assumptions that are problematic were omitted because the information is classified. 
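What such a sensitivity analysis looks like can be sketched with a toy risk model: hold the model fixed and sweep the questionable assumption. The `mission_risk` function and every number below are invented for illustration:

```python
def mission_risk(readiness, assumed_attrition):
    """Toy model: risk of failing assigned tasks rises as attrition outpaces readiness."""
    return max(0.0, min(1.0, assumed_attrition / readiness))

# Sensitivity analysis: vary the attrition assumption AND the guidance-directed
# assumption that readiness will be high (0.80), and watch how risk moves.
for attrition in (0.10, 0.25, 0.40):
    for readiness in (0.80, 0.65, 0.50):
        risk = mission_risk(readiness, attrition)
        print(f"attrition={attrition:.2f} readiness={readiness:.2f} risk={risk:.2f}")
```

Even with a toy model, the sweep makes the point in the text concrete: if actual readiness comes in below the directed level, the risk of completing assigned missions rises sharply, which is exactly the insight a decision maker loses when the assumption is never varied.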

The services have been reluctant to independently examine a broad range of innovative force structure readiness options and conduct sensitivity analysis on key operational assumptions through Support for Strategic Analysis because, according to service officials, competing priorities mean they believe they can generally only effect marginal changes in their budgets from year to year, and they have limited analytic capacity.

Service officials noted how the majority of their service’s budget each year is constrained by must pay bills, including personnel costs, supporting existing force structure, established contracts, sustaining the industrial base, and statutory mandates. 

As such, unless directed to by senior leaders, service officials told us that they typically do not use their limited analytic resources to conduct sensitivity analysis or explore alternative approaches to regaining readiness. The sensitivity analyses they have been directed to conduct have generally been focused on smaller force structure changes, but have provided useful insights. 

The DoD report partially addressed the element to include a description of the readiness capacity required to support the force structure. Specifically, the report did not provide a complete picture of readiness requirement levels. For example, readiness at large aircraft installations was described in square yards of apron space, but did not include other requirements such as aircraft hangars and maintenance facilities.

According to DoD officials, no department-wide guidance exists concerning the methods DoD uses to select installations for its assessments. Without such guidance, readiness assessments may not be based on consistent methods across the department, resulting in inaccurate estimates, and neither DoD nor Congress currently has the information needed to make decisions concerning readiness capacity across the services.

DoD's method for estimating readiness capacity is not sufficient because its reported estimates cannot be generalised to describe readiness capacity across the department. Furthermore, the sampling method is not always implemented effectively, because some of the military departments adjusted the sampling approach. 

The department has demonstrated a desire to fix Support for Strategic Analysis deficiencies but has thus far been unable to overcome these challenges. Without determining the analytic products needed and updating them, issuing specific guidance requiring alternatives and key assumptions to be fully analyzed, and developing an approach for conducting readiness analysis, DoD may not be providing its leaders with the analytic support they need to prioritise force structure investments that would best manage risk and address full-spectrum readiness challenges.

1. Measurable strategic goals must be established to enable assessment of readiness conditions and capabilities.  

2. A unifying framework is needed to provide a clear line of sight across the capabilities, readiness, and financial communities.  

3. Existing processes are stove-piped, with limited visibility, and often require integration at the senior leadership level to develop a comprehensive view of the effect of dollars on readiness.  

4. Multiple organisational constructs make it difficult for analysts to develop a comprehensive view.  

5. Sustainable reporting requires data from an integrated and automated process. 

6. The connection of operational training resources to readiness is limited by current systems and processes.

7. Manpower, force structure, equipment supply and maintenance, and training systems are unable to connect with each other or with acquisition and requirements generation processes.

8. Fundamental business processes such as data management have major shortfalls in consistency, negatively affecting the ability to defend funding requests.  

9. The department lacks a fully developed and comprehensive model to connect the output of institutional processes to readiness measures.  

10. Collection, storage, and transfer of data must adhere to consistent rules across major data systems in order to support decision making without stove-piped efforts. 

0 Comments

Top 10 Readiness Logistics Questions Plan Advocates Increased Focus on Spares/Training

6/2/2019

0 Comments

 
Contractors are working closely with the military services to keep new and old airplane types alike in good material condition, but a broken budgeting process and difficulty in fully funding spare parts have made that a challenge. The Joint Strike Fighter Joint Program Office is working to keep from “getting behind the power curve” on spare parts acquisition and avoid falling into the same readiness deficit that many other platforms have today.

F-35 stability is a top priority today, but “the readiness business is hard. Sustainment readiness is not rocket science, but it takes an incredible effort, and you can’t ignore any aspects of it. And once you start to slope down, it’s a fast slope down and a slow pull out of the problem area, because parts take several years to purchase, depending on the complexity of the part. 

So if you didn’t buy them or you didn’t spare for them, you’re way behind. So what we’re doing in a way is raising the visibility of what we need, and now we have to work with the services, work with the JPO and work with Congress to make sure we adequately fund and provide for the spare parts that we need to sustain it.”

Some F-35 suppliers are delivering late and non-conforming parts, resulting in production line inefficiencies and workarounds. Quality issues with insulation on the coolant tubes in the fuel tanks resulted in the contractor delivering fewer aircraft. According to the contractor, late deliveries of parts are largely due to late contract awards and supply base capacity. 
While supplier performance is generally improving, it is important for suppliers to be prepared for both production and sustainment support going forward. Inefficiencies, such as conducting production line work out of sequence, could be exacerbated if late delivery of parts continues as production is set to more than double in the near future.
Trying to catch up on parts procurement is a challenge for the older planes, where in some cases original manufacturers are no longer in business, and the Marine Corps is eager to avoid a spare parts deficit with its new F-35Bs as it builds the fleet out.

“We need to keep our procurement of spares on track. If we don’t do that we could be in jeopardy. So that’s something we fight for all the time. The logistics part of this is about making sure we have a platform we can sustain and generate the combat power we need.”

Contractor is working with the JPO and the services to understand the proper level of sparing for each of the three variants, develop partnerships with aviation depots that will conduct repairs and modifications throughout the life of the airplane, and mature diagnostics and prognostics to ensure the health of the F-35 fleet is sustained.

But there’s only so much the contractor – or even the services – can do in this budget environment. DoD is operating under a “broken budgeting process” today, so service aviation heads are forced to rely on Overseas Contingency Operations funding instead of the base budget to pay for spares.

We’re being forced into a position to buy parts in the OCO budget because it’s not planned for, it’s not budgeted for, in sufficient quantity to make up for what the wartime commitment is. 

“There’s not enough money overall being put towards buying spare parts. We have to adequately fund spare parts, and spare parts are typically not funded at 100 percent. And if you’re not funded at 100 percent, you’re playing a shell game and a chess game with the parts that you have available to optimise it.” 

“Could we have increased investments in that area? The answer is yes. However, we must recognise the fact that there’s going to be more requirements than dollars available, and so you’re going to have to make those decisions. So could there be more? Yes.”

F-35 is showing high readiness levels in its early operations, which could help the services avoid falling into a readiness deficit with the planes. 

The biggest challenge squadrons of all type/model/series aircraft face is “not mission capable-supply,” where spare parts are not available and therefore the plane cannot be fixed and put back on the flight line. “If you don’t have the parts you need on the shelf, what does a good industrious sailor or Marine do? They go get it off another airplane.

 That airplane’s a little more broken than that one over there, so we’re going to take it off that airplane and put it on that one. That’s several maintenance efforts, that’s very negative maintenance because we’re going to have to … go over, take a part off an airplane, install it on that airplane over there, and then eventually go back and put another part on the airplane I just took it off of. It’s crazy, but they’re doing it because they have to do it.
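The maintenance penalty described above can be made concrete with a toy calculation. This is a hypothetical sketch, not any official model: it simply assumes that a cannibalised fix costs three remove/install actions (take the part off a donor, install it on the broken jet, later restore the donor) versus one for a part issued directly from supply.

```python
# Hypothetical illustration: maintenance actions consumed when a part is
# cannibalised from a donor aircraft versus issued directly from supply.

def maintenance_actions(cannibalised: bool) -> int:
    """Number of remove/install actions needed to fix one aircraft."""
    if cannibalised:
        # 1) remove part from donor, 2) install on broken aircraft,
        # 3) later install a replacement part back on the donor
        return 3
    # direct issue from supply: a single install
    return 1

def fleet_workload(repairs: int, cannibalisation_rate: float) -> float:
    """Expected maintenance actions across a batch of repairs."""
    cann = repairs * cannibalisation_rate
    direct = repairs - cann
    return cann * maintenance_actions(True) + direct * maintenance_actions(False)

# 100 repairs with 30% cannibalisation: 30*3 + 70*1 = 160 actions
print(fleet_workload(100, 0.3))
```

Even at these assumed rates, a 30 percent cannibalisation rate inflates the maintenance workload by roughly 60 percent, which is the “very negative maintenance” the quote describes.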

The Navy and Marine Corps have sought to address this problem by asking lawmakers for more money for spare parts and logistics. Navy and Marine aviation leaders this year stressed the need for the less-flashy logistics funds, and the Unfunded Priorities List, which may help shape an expected supplemental budget, highlights the need for spare parts.

The plan’s first line of effort focuses on the maintainers themselves. The aviation logistics community will look for opportunities for Marines to step outside the organisation and learn new skills and new approaches to aircraft maintenance. For example, the plan recommends industry exchanges to let Marines see how other businesses conduct aviation maintenance and bring back best practices.

The plan also calls for non-traditional training opportunities, such as a 3D scanning and printing training event focused on how to develop long-term maintenance, manpower, and materiel planning to improve aviation readiness, with a focus on managing maintenance, manpower, training, and advanced skills.

New advanced wire repair training, plus increased participation in joint wiring training and working group events, will seek to directly affect flight line readiness rates. “Declining material condition across every type/model/series is impacting all wiring types and severely affecting current readiness. There is inadequate training and familiarity within the maintenance community. Lack of familiarity regarding the importance of wiring systems and apparent lack of focus on funding priorities for wiring at the system level are all contributing factors needing resolution to affect current readiness.”

After conducting independent readiness reviews for each type/model/series aircraft in the Marine Corps, the Marines found they did not have “the right density of maintainers with the right qualifications” in all of their squadrons. The plan deals with “increasing the depth, capacity and reach of our operations sustainment capability” to keep up with the demands and changes in how Marines fight. 

Modern combat scenarios demand flexible and scalable capabilities. Increased operational tempo, split and disaggregated operations, and constrained resources mandate the modernisation of written doctrine and its associated enablers.

The number of Marine Corps aircraft ready to fly on any given day has plummeted for almost a decade, leading to serious questions. Mission-capable rates for almost all of the Marine Corps' fixed-wing, rotary, and tiltrotor airframes have fallen. While officials stress that the number of flyable aircraft fluctuates daily, the downward trends have alarmed Marine leaders and members of Congress.

"In the typical squadron, too many aircraft are not able to fly tonight due to a shortage of parts, long-term fixes, or a need for some kind of attention that the squadron doesn't have the ability to provide. We can meet today's mission, but we don't have the depth to meet tomorrow's."

The less they fly, the fewer training missions they get, the less training the aviation maintenance personnel get, the less money we have for spare parts, the less money we have for training exercises, and the higher the mishap rate will be if everything else is held constant. We need to ensure we have the parts we need, the aircraft we need and, most importantly, the training we need for our people so that we can remain at high readiness.

But getting spare parts is a challenge: when a fleet of Marines is competing for the same parts, the winner will be the unit closest to deploying. We do not have enough ready basic aircraft. That means we are not getting enough flight hours, and we aren't up on our maintenance requirements for those specific aircraft.

While Marines are known for doing more with less, it poses challenges when they identify needed repairs only to find they're competing with everyone else in need of the same parts. When a model type is out of production, there can be delays in getting needed parts, so certain parts have to be made in-house, which can take a while. 

"We can realistically only look Marines in the eyes so many times knowing that unless something changes, we will just be asking them again. Ingenuity only gets us so far; however, we are equipped to fight, but our units at home who are on the bench are hurting the most."

The situation became dire when across-the-board sequestration hit. The readiness center was not able to hire replacements for service experts who retired or took jobs elsewhere, and it could not order badly needed maintenance materials or spend money to fix equipment at the center. This meant not enough maintenance on plant equipment, which led to many machines being down for extended periods of time. This inhibited our ability to produce parts, further slowing our turnaround time. Both issues continue to impact our throughput and cost.

Fewer available aircraft means less flight time for pilots, and if Marines aren't training to fly, they could become a liability. Congress has forced the military to cut training and operations budgets because the services were not allowed to take money from elsewhere. This is a problem the Marine Corps itself cannot address; there's no way for the Marines to reallocate funding that just isn't there.

The Marine Corps is getting back to basics to solve its aviation readiness problems, with a focus on contracting for spare parts and enhanced training for aircraft maintainers at the heart of several type/model/series readiness recovery plans.

Each type of plane in the inventory has its own reasons for a declined state of readiness. The Marine Corps set up independent readiness reviews for each aircraft type and is in various stages of implementing the resulting plans to boost flight line readiness and increase the number of flight hours pilots can fly each month.

In the course of crafting these readiness recovery plans, two cross-platform ideas emerged that should set the Marine Corps up for long-term success: a new focus on contracts for spare parts, and a new Maintenance Training Instructor program. The Marines had a serious problem with “not mission capable-supply” status in several platforms, meaning aircraft were grounded because maintenance couldn’t be performed due to a lack of available spare parts.

The Marine Corps now has contracting officers accountable for maintaining a proper flow of spares. “Instead of just kind of looking at something in the aggregate, you’re now responsible for that airplane, getting the parts for that airplane.” 

The service is also looking for inefficiencies in existing contracts that can be ironed out, as well as improvements to the flow of parts, to avoid scenarios where spare parts stack up at loading docks but don’t make it to aircraft maintainers in a timely fashion. Marine maintainers also weren’t spending enough time each month actually touching their airplanes.

In some cases, Marines had to reset the fleet – giving the aircraft new fuel lines, hydraulic lines, wiring and more. The reset program will take off any parts that are in bad condition or too old and replace them with brand new parts, restoring the helos to a high-reliability flight ready standard.

The Marine Corps’ challenge with the Hornet fleet is unique. It is the biggest user of the legacy Hornet – the Navy is using its newer Super Hornets while the older planes are stuck in a depot logjam, whereas the Marine Corps is just not meeting its flight hour goals. 

The Navy and contractors are working hard on the depot side as the legacy planes go through a Service Life Extension Program (SLEP), and with depot productivity up, “now it’s about just kind of burning through the backlog. We’re just short of what we need to be producing out of the depots to do that, but throwing any more money at the depot probably is not the answer; we’ve just got to encourage them to do good work and basically make their gate.” 

Another counterproductive but prevalent action has been transferring aircraft between units, typically in an attempt to get the most modern and highest-capability airplanes to the squadrons about to deploy. This can take about a thousand hours of maintenance per transfer. These hundreds of thousands of maintenance hours produced no additional readiness but just shifted planes around within the service.

We intend to assign monetary value to the man hours and require a cost-benefit analysis that looks at readiness lost due to the maintenance requirements associated with moving aircraft, compared to the warfighting capability gained. 
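The cost-benefit check described above can be sketched numerically. The thousand-hour figure comes from the text; the labour rate and the dollar value placed on capability gained are hypothetical placeholders, and a real analysis would value both sides far more carefully.

```python
# Illustrative cost-benefit check for transferring aircraft between units.
# HOURS_PER_TRANSFER comes from the text; the labour rate is an assumed
# placeholder figure, not an official cost.

HOURS_PER_TRANSFER = 1000
LABOUR_RATE = 85.0  # assumed cost per maintenance man-hour, USD

def transfer_cost(n_transfers: int) -> float:
    """Monetary value assigned to the man-hours spent moving aircraft."""
    return n_transfers * HOURS_PER_TRANSFER * LABOUR_RATE

def transfer_is_justified(n_transfers: int, capability_gain_value: float) -> bool:
    """Approve the move only if the valued warfighting gain exceeds the
    maintenance man-hours spent shuffling airframes."""
    return capability_gain_value > transfer_cost(n_transfers)

print(transfer_cost(1))                     # cost of a single transfer
print(transfer_is_justified(1, 100000.0))   # gain valued above the cost
```

The point of the sketch is the decision rule, not the numbers: once man-hours carry a price, a transfer that produces no readiness gain fails the test automatically.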

The initiative would take the oldest planes and, while they’re in the depot for a Maintenance Inspection Interval, bring them up to a common configuration to match the newest planes coming off the production line. As the aircraft were developed, there were changes needed to go forward in combat, so they were modified as they came off the line, sometimes several times in any given year, and these airplanes went directly forward to either Marine Expeditionary Units or combat zones.

The biggest challenge would be spare parts and maintenance, and paying close attention to that supply logistics chain to avoid the problems plaguing the rest of Marine aviation. As for the maintainers, there’s a lot of excitement today about the F-35 transition, and “right now we have just an exceptionally well trained F-35 fleet of mechanics.”

We are attacking our current unacceptable Not Mission Capable-Supply rate and the root causes for it. The supply chain that supports Marine aviation is fragmented, antiquated, and not optimised to enable the required state of readiness in our current fleet. This fact is clearly evidenced by the low rate of Ready Basic Aircraft (RBA) and the unsatisfactorily high Non Mission Capable Supply (NMCS) rates across nearly every T/M/S the Marine Corps currently operates. 

RBA is important because, without enough ready airplanes, pilots at home cannot fly the required number of hours per month to stay certified, let alone proficient. The Marines hadn’t met their flight hour goal in several years, but the service was meeting its goals for getting aircraft out of the depots and back to the flight lines, so it should close the gap and reach the RBA goal, and therefore, hopefully, the flight hour goal.

Each of the Independent Readiness Reviews conducted to date identified systematic shortfalls in the sustainment organisations, processes, and resources of the supply chain that supports Marine aviation. Accordingly, the focus of effort will be on continuing to aggressively attack these daunting challenges. The strategy to reduce the capability challenge will focus on the areas of consumables, repairables, and manpower.

Marines will “monitor fleet demand for consumables on long-term contracts and ensure vendors receive accurate demand forecasts,” and will work to improve depot component repair performance. Consumable forecasting is an issue: a lack of consumable material accounts for greater than 80 percent of non-mission capable supply status.
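To show what a demand forecast for a consumable part might look like in its simplest form, here is a hypothetical sketch using a plain moving average over recent monthly issue counts. The part history is invented, and real forecasting for long-term contracts would use far richer models (seasonality, lead-time demand, fleet flying-hour projections).

```python
# Minimal demand forecast for a consumable part, assuming a simple
# moving average over recent monthly demand. Illustrative only.

def forecast_demand(monthly_demand: list[float], window: int = 3) -> float:
    """Forecast next month's demand as the mean of the last `window` months."""
    recent = monthly_demand[-window:]
    return sum(recent) / len(recent)

history = [40, 42, 55, 48, 60]   # hypothetical issue counts per month
print(forecast_demand(history))  # mean of the last three months (55, 48, 60)
```

Even a crude forecast like this, shared with vendors, is better than none: the text's point is that vendors on long-term contracts can only stock correctly if they see an accurate demand signal.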

And as the Corps works to claw back readiness and increase pilot flight hours, it's the spare parts issue that has the service's top aviator most concerned. Aircraft maintainers are still sometimes resorting to cannibalisation, or borrowing parts from working aircraft to make other planes operational.

“The one thing that is holding the man down on every platform is not-mission-capable supply.” For every type/model/series, it’s a contributor to why that airplane might not be available for flying. “We couldn’t sustain them. The requirement was still there, but we couldn’t sustain it. If the Marine Corps was a business, they are underwater right now, because we don’t have enough power tools to make flight hour goal.”

Marines have stood up an Advanced Aviation Management Training Course to teach maintenance staff “how to develop long-term maintenance, manpower, and materiel planning to improve aviation readiness,” with a focus on maintenance management, manpower management, training management, and advanced skills management.

“For a lot of the other type/model/series, we didn’t have enough of the qualifications; we weren’t measuring the military occupational specialties of those young Marines. We are doing that now, and that’s going to have an outsized impact on our ability to retain the right qualifications.”

Another line of effort in the aviation logistics plan deals with “increasing the depth, capacity and reach of our operations sustainment capability” to keep up with combat element demands, and a third line of effort deals with keeping up with changes in how Marines fight.

“Today’s dynamic global environment demands flexible and scalable capabilities. Increased operational tempo, split and disaggregated operations and constrained resources mandate the modernisation of written doctrine and its associated enablers.”

By ridding the force of its reliance on fixed infrastructure and creating forward operating bases of varying sizes and levels of advancement, Marine aviation would gain increased operational reach, increased capacity by supplementing sea-based sorties, more options during major manoeuvres, flexibility and surprise, and reduced risk to the force.

The Distributed Operations concept supplements traditional sea and land basing options with “mobile forward arming and refueling points” for resupply mid-mission. A separate mobile distribution site would serve as the location where Marines on surface connectors stage fuel and weapons to be brought to the mobile forward arming and refueling points. 

Importantly, all these sites are considered “mobile” and are intended to maintain elements of “deception and decoy” – in keeping with the idea that the aircraft are supposed to be distributed and difficult to find and target.

1. Where are the interfaces--technical, organisational; are they given or deliberately set?

2. Are there any disturbances--in the process, workflow, or information flow?  

3. Are there changes over time, e.g. in terms of goals or missions, requirements or technology “creep”? Address similar aspects to the “what is new” question.

4. Do we control/master it, even if conditions considerably change?  

5. Is it mature, e.g. in readiness level? Are there hidden potentials?  

6. Is it a source of conflict, uncertainty, and ambiguity?  

7. Are previous developments considered, and to what extent?  

8. Does it increase or decrease the number of elements and relations?  

9. Are there differences, similarities, or overlap among elements and relations?  

10. If everything went smoothly, would the issue really be worth an accurate, complex data indicator? 



0 Comments

Top 10 Digital Procurement Solutions Assist Dispatchers Purchase Order Generation Requisite Stages

6/2/2019

0 Comments

 

A digital procurement solution is a critical tool for weapons systems delivery, directly impacting force readiness. With a digital procurement solution, all purchase orders are sent over an interface in the system so all parties are notified, ensuring delivery is completed promptly. At each stage your purchasing processes are streamlined, reducing logistics headaches and eliminating unnecessary paperwork.

Consider an aircraft manufacturing scenario in which company X produces tires but subcontracts production of lug-nuts to company Y. In order to build a single system to automate certain parts of the production/delivery process, the internal behaviours of both companies X and Y must be modeled. 

But both companies are likely to be unwilling to relinquish information and/or control to a system designer representing the other company. With just two companies involved, an agreement could possibly be reached, but with several companies involved, multi-agent systems are necessary. The only feasible solution is to allow the group of companies to create their own agents that accurately represent their goals and interests. These agents are then combined into a multi-agent system focused on digital procurement.

Using agent models is not a new concept; however, digital procurement will address long-standing challenges associated with complexity, uncertainty, and rapid change in deploying components contributing to readiness levels of defense systems. By providing for more agile and responsive development, digital procurement supports organisational excellence and provides a foundation for mission success in the future. 

DoD must assess digital approaches to acquisition and systems engineering, including its methods for the use of agent models, simulation results and tools that support acquisition/sustainment process so it is possible to stay ahead of readiness demands for new and upgraded weapons systems. 

A market represents a process that maps potential buyers and potential sellers of a good to one another, and optionally to a price at which a sale can take place. The goods traded in such a market are characteristics or options for characteristics. Each distinct good requires a separate market, and markets for different goods may have different protocols.
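One possible protocol for such a per-good market can be sketched as follows. The matching rule here is an assumption for illustration: trade when the highest bid meets the lowest ask, clearing at the midpoint. Real market protocols vary, which is exactly the text's point about different goods having different protocols.

```python
# Sketch of a single-good market that maps buyers (bids) and sellers
# (asks) to one another and, optionally, to a clearing price. One such
# market instance would exist per distinct good.

def match_market(bids: list[float], asks: list[float]):
    """Return (trade_happens, price) for one good's market."""
    if not bids or not asks:
        return False, None            # no counterparty: no mapping possible
    best_bid, best_ask = max(bids), min(asks)
    if best_bid >= best_ask:
        # assumed protocol: clear at the midpoint of the crossing quotes
        return True, (best_bid + best_ask) / 2
    return False, None

print(match_market([9.0, 11.0], [10.0, 12.0]))  # best bid 11 crosses best ask 10
```

Running a separate `match_market` per good, each possibly with its own rule, is one way to realise "markets for different goods may have different protocols."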

Applying the multi-agent technique in a multi-section flexible manufacturing system requires setting up dispatch rules. A distributed agent-based system is implemented to assist agents in choosing suitable dispatch rules for their dispatch region, with overall dispatch across the manufacturing system resolved through agent cooperation. 

Multi-agent systems can be used for simulation and modeling of a production process or supply chain, where they easily simulate the independence of the involved parts. These tools can help answer non-trivial questions, such as how changes in a single component will affect the production process or supply chain as a whole.
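A toy version of that "single component" question can be posed with a serial supply chain model. The stage names and lead times below are hypothetical, and the model assumes a strictly serial chain whose total lead time is the sum of its stages; real chains with parallel paths would need a proper network model.

```python
# Toy simulation of a serial supply chain, asking how a change in one
# component ripples through the whole chain. Stages and lead times are
# invented for illustration.

def chain_lead_time(stages: dict[str, int]) -> int:
    """Total lead time of a serial chain: the sum of its stage lead times."""
    return sum(stages.values())

baseline = {"lug-nuts": 5, "tires": 7, "assembly": 10}  # days per stage
changed = {**baseline, "lug-nuts": 12}                  # one supplier slips

print(chain_lead_time(baseline), chain_lead_time(changed))
```

Here a seven-day slip at a single minor supplier delays the whole chain by seven days, which is the kind of whole-system effect the simulation tools are meant to expose before it happens.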

For the uninitiated readiness officer, the procurement process can appear to be a simple, basic procedure: locate needed goods, pay, receive the purchased goods, document the transaction.

But like all important business functions, procurement is a layered, fluid business process with multiple moving parts, and there are numerous stages in the procurement process.
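The four simple stages named above can be sketched as a minimal purchase-order state machine. This is an illustrative skeleton only: a real procurement flow would add approvals, exception paths, and many more states.

```python
# The basic procurement stages (locate, pay, receive, document) as a
# minimal ordered state machine. Stage names are taken from the text.

STAGES = ["located", "paid", "received", "documented"]

class PurchaseOrder:
    def __init__(self, item: str):
        self.item = item
        self.stage = None   # not yet started

    def advance(self) -> str:
        """Move to the next stage, enforcing the order of the process."""
        nxt = 0 if self.stage is None else STAGES.index(self.stage) + 1
        if nxt >= len(STAGES):
            raise ValueError("order already complete")
        self.stage = STAGES[nxt]
        return self.stage

po = PurchaseOrder("hydraulic line")
for _ in STAGES:
    po.advance()
print(po.stage)   # documented
```

Even this skeleton makes the text's point: the stages are ordered and each must actually complete, which is where real processes accumulate their layers and moving parts.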

In many organisations, purchasing, more than any other business function, is tied to routine. There are usually only periodic negotiations with established networks of suppliers or sources. 

But many purchasing team skills and outlooks were formed years ago in an era of relative stability, and they haven’t changed. Now no company can allow purchasing to lag behind other departments in acknowledging and adjusting to accelerated tech changes. Such an attitude is not only out-of-date but also costly. 

Few companies today can allow purchasing to be managed in isolation from the other elements of their overall business systems. Greater integration, stronger cross-functional relations, and more top-management involvement are all necessary. 

The company will realise these benefits only if it uses the systems effectively. It must foster consistent, cross-functional information flows and induce line managers to supply the required metrics for the purchasing information system. One way to reduce instinctive resistance is to show line managers that most of the “new” metrics already exist and need only be recast in an appropriate format. Finally, management must make certain that any major new systems are user-friendly.

Progress toward effective supply management can only be gradual, and the company will have to surmount many obstacles to implementation along the way. But the rewards are well worth the effort. An attitude of “purchasing as usual” will make the company vulnerable to competitive pressure; but enhanced strategic awareness, greater flexibility, and stronger focus in the supply area can improve the supply security and lower the input costs of any industrial company.

Too often the purchasing department receives information on the company's business plans and objectives that is incomplete or improperly geared to the tasks and time horizons of strategic supply management. Purchasing executives are usually informed of major expansion and investment projects but often lack adequate operating information under some timelines. 

Every facet of the purchasing organisation, from systems support to top-management style, will ultimately need to adapt to these requirements. Concrete changes in the organisation will be required to establish effective organisational relations, provide adequate systems support, and meet the new staff and skills requirements.

Some companies have already responded to the growing pressures.  To ensure long-term availability of critical materials and components at competitive cost, many manufacturers must come to grips with the risks and complexities of virtually unlimited sourcing. Others that already source from many entities must learn to cope with uncertainties and supply or price disruptions on an unprecedented scale. 

Instead of simply monitoring current developments, organisations must learn to make things happen to their own advantage. This calls for nothing less than a total change of perspective: from purchasing as an operating function to supply management as a strategic goal.

To reduce the long-term risk of dependence on a single source, the company could search for alternative suppliers or materials or even consider backward integration to permit in-house production. On the other hand, if the company is stronger than the suppliers, it can spread volume over several suppliers, exploit price advantages, increase spot purchases, and reduce inventory levels.
 
Whenever a manufacturer must procure a volume of critical items competitively under complex conditions, supply management is relevant. The greater the uncertainty of supplier relationships, technological developments, and/or physical availability of those items, the more important supply management becomes.

On items where the company's role in the supply market is secondary and suppliers are strong, the company must go on the defensive and start looking for material substitutes or new suppliers, i.e., diversify. It may have to increase spending on market research or supplier relations, or even consider backward integration through major investments in R&D or production capacities. In short, the company needs to keep its supply options open. 

For supply items with neither major visible risks nor major benefits, a defensive posture would be over-conservative and costly. On the other hand, undue aggressiveness could damage supplier relations and lead to retaliation. In this case, a company should pursue a well-balanced intermediate strategy.

Usually, a company will find itself in different roles with respect to different items and suppliers. When it can bargain from a position of strength, it should press for preferential treatment. Bargaining from weakness, the company may have to offer inducements—longer-term contract obligations, for example, or higher prices—in order to ensure an adequate supply.

To what extent might cooperation with suppliers or even competitors strengthen long-term supply relationships or capitalise on shared resources? For example, some industry sectors reach out to others for production of certain critical components that they could not produce cost-effectively on their own. If suppliers can be involved early in the design process, better quality, lower cost, and “just in time” production are likely.

Potential risks are evident in the event of non-delivery or inadequate quality. The higher such costs and the greater the risk of incurring them, the less latitude the company has for rapidly shifting supply sources or delaying negotiations or contracts. These costs influence required inventory levels but they mainly affect production. 

Changing a source of supply might, for example, make it necessary to modify the production process. In the case of materials for highly automated production processes, the costs of such modification could be prohibitive.

The relative importance of different criteria may vary with technological change or with shifts in industry sector competition. Careful definition of the criteria of both supplier and company strength is a prerequisite to accurate market assessments.

Supply and demand patterns can blow up on you virtually overnight. How can an organisation guard against disastrous supply interruptions and cope with changing fiscal outlooks and new opportunities brought on by new technologies? What capabilities will organisations need to sustain themselves in the face of strong protectionist pressures? Almost every kind of unit will have to answer these questions.

New approaches to devising strategy give you a simple but effective framework for collecting marketing metrics, forecasting future supply scenarios, and identifying available purchasing options, as well as for developing individual supply strategies for critical items and materials.
 
Following this approach, all purchased materials or components are first classified in terms of impact on the bottom line and supply risk. Next, the supply market for these materials is characterised. Then the overall strategic supply position is determined. Finally, materials strategies and action plans are developed.

Organisations must explore a range of supply scenarios in which they lay out their options for securing long-term supply and for exploiting short-term opportunities; clearly define the respective risks, costs, returns, and strategic implications; and develop a preferred option with objectives, steps, responsibilities, and contingency measures laid out in detail for top management approval and implementation. The end product will be a set of systematically documented strategies for critical purchased materials that specify the timing of and criteria for future action.

Supply risk is assessed in terms of availability, number of suppliers, competitive demand, make-or-buy opportunities, storage risks and substitution possibilities. Using these criteria, the company sorts all its purchased items into categories such as strategic, bottleneck, leverage and non-critical.
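The two-axis sort described above can be sketched in a few lines of Python. The 0–10 scoring scales and the midpoint cut-off are illustrative assumptions, not fixed standards:

```python
# A minimal sketch of the two-axis item classification; the 0-10 scales
# and the cut-off of 5 are assumptions chosen for illustration.

def classify_item(profit_impact: float, supply_risk: float) -> str:
    """Place a purchased item into one of the four classic categories.

    Both inputs are scored on an assumed 0-10 scale.
    """
    high_impact = profit_impact > 5
    high_risk = supply_risk > 5
    if high_impact and high_risk:
        return "strategic"      # secure long-term supply, partner closely
    if high_impact:
        return "leverage"       # exploit buying power
    if high_risk:
        return "bottleneck"     # ensure continuity, hold safety stock
    return "non-critical"       # standardise and streamline

print(classify_item(8, 9))  # strategic
print(classify_item(8, 2))  # leverage
print(classify_item(2, 9))  # bottleneck
```

In practice the two scores would themselves be composites of the criteria listed above (availability, number of suppliers, substitution possibilities, and so on).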

How much risk is acceptable? Vendor mix, extent of contractual coverage, regional spread of supply sources, and availability of scarce materials all contribute to the supply risk profile. An organisation can often take action to lessen unacceptable risk; for example, meeting periodic annual materials requirements through long-term scheduled agreements can achieve substantial savings while securing supply.
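One way to make such a risk profile concrete is a weighted index over the factors just named. The factor names mirror the text, but the weights and 0–1 scoring are assumptions made for this sketch:

```python
# Illustrative supply-risk profile; weights are assumptions, not standards.
RISK_WEIGHTS = {
    "single_source": 0.35,        # few or only one qualified vendor
    "low_contract_cover": 0.25,   # little volume secured by long-term contracts
    "regional_concentration": 0.20,
    "material_scarcity": 0.20,
}

def risk_profile(scores: dict) -> float:
    """Combine per-factor scores (each 0-1) into one weighted index."""
    return sum(RISK_WEIGHTS[k] * scores.get(k, 0.0) for k in RISK_WEIGHTS)

item = {"single_source": 1.0, "low_contract_cover": 0.5,
        "regional_concentration": 0.2, "material_scarcity": 0.4}
print(round(risk_profile(item), 3))  # 0.595
```

An index like this lets items be ranked consistently, so mitigation effort (new vendors, longer contracts, regional diversification) goes to the worst scores first.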

Tasks, resources and schedule constraints drive the timeline of a project. The project schedule is initially created by listing the tasks needed to deliver an output and then calculating the duration needed to complete each task, assuming that 100% effort is spent on that task and that all its inputs are available.
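That construction amounts to a forward pass over the task graph: each task starts at the latest finish of its inputs, and its duration assumes full effort. A minimal sketch, with made-up task names and durations:

```python
# Forward-pass scheduler: each task starts when all its inputs finish.
# Task names and durations below are invented for illustration.

def earliest_finish(tasks: dict) -> dict:
    """tasks maps name -> (duration, [prerequisite names]).
    Returns name -> (start, finish). Assumes the graph is acyclic."""
    done = {}
    def finish(name):
        if name not in done:
            dur, preds = tasks[name]
            start = max((finish(p)[1] for p in preds), default=0)
            done[name] = (start, start + dur)
        return done[name]
    for t in tasks:
        finish(t)
    return done

plan = {
    "spec":   (2, []),          # define the requirement
    "source": (3, ["spec"]),    # examine supplier options
    "order":  (1, ["source"]),  # issue the purchase order
}
print(earliest_finish(plan)["order"])  # (5, 6)
```

Resource and schedule constraints then push tasks later than these earliest dates, which is where the decomposition methods discussed next come in.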

Supply chain design decomposition distinguishes the objectives and means of supply line administration. Because it systematically relates means to objectives at different levels of abstraction, higher-level objectives can be made actionable step by step. Using a structured procedure, decomposition techniques can be utilised to develop a supply chain strategy that is in alignment with the strategic goals of the organisation.

We develop a decomposition method for the Time-Constrained Procurement Scheduling Problem with Proximity Resources. For proximity resources, the resource units are ordered, and the units assigned to a job have to be in close proximity.

On top of that, resource proximity is not required by single jobs but by job groups. As soon as a job of such a group starts, the resource units in close proximity are occupied, and they are not released before all jobs of that group are completed.

The developed decomposition methods separate the resource proximity assignment from the rest of the scheduling problem. Test results demonstrate the applicability of the decomposition method. Decomposition forms a first promising approach to procurement scheduling with resource proximity constraints and may form a good basis for developing more advanced techniques.
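The two constraints just described can be stated as small checks: assigned units must be contiguous in the resource ordering, and a group holds its unit block from its first job's start to its last job's completion. The data model here is an assumption made for illustration:

```python
# Sketch of the proximity-resource constraints described above.
# Units are identified by their position in the resource ordering.

def contiguous(units: list) -> bool:
    """True if the assigned units form one unbroken block."""
    u = sorted(units)
    return u == list(range(u[0], u[0] + len(u)))

def group_occupancy(jobs: list) -> tuple:
    """jobs: list of (start, finish) times for one job group.
    The whole unit block is held over this single interval."""
    return (min(s for s, _ in jobs), max(f for _, f in jobs))

units = [4, 5, 6]               # an assigned block of adjacent units
jobs = [(0, 3), (1, 5), (4, 7)] # the group's jobs
print(contiguous(units))        # True
print(group_occupancy(jobs))    # (0, 7) -- held even while no job runs
```

Decomposition then means solving the unit-block assignment (the `contiguous` part) separately from the job timing, rather than as one monolithic problem.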

Procurement is so dynamic that it requires constant monitoring by the Site Visit Executive in an organisation. There is always more to know so that forecasts can be produced, which is a big responsibility.

So…what are the stages in the procurement process? What are those constantly developing pieces in procurement?

As the Site Visit Executive reads more about the actions of procurement teams, it becomes obvious that each personnel team involved in procurement has a slightly different take on the stages of the process.
Here we present a solid list of the critical stages in the procurement process:

1.  Need Recognition 

This is a seemingly obvious step, but one that needs to be mentioned. Procurement departments must recognise a product is needed in order to purchase it. That product can be either a brand new item, or one that is being re-ordered.

2.  Specific Need 

Does your industry have specific requirements for various products? If that is the case in your field of service be sure you are up-to-date on those requirements and order accordingly.

3. Examine Supplier Options 

Every business needs to determine where to get its goods. Some companies have an approved vendor list, while others are still trying to determine who the best suppliers are.

4.  Price and Terms 

Once a supplier is chosen, companies should stick with that relationship and try to establish preferred pricing and specific terms, e.g., delivery.

5.  Purchase Order

The purchase order is the formal contract used to buy the product. It outlines the price, specifications, and terms and conditions of the product or service.

6. Delivery 

The purchase order is transferred to the supplier via the contact network interface.

7.  Expediting 

This stage addresses the timeliness of the service or materials delivered. Delays constrain successful missions, so the purchase order must include expected delivery date information.

8.  Receipt and inspection 

Once delivered, the receiving field unit must inspect the product and subsequently accept or reject it.

9.  Approval and Payment 

At this stage, three documents must match before the seller is paid: the invoice, the receiving document attached to the product, and the original purchase order. If a discrepancy is found, it must be resolved before payment is made.
​
10.  Record Keeping 

The receiving customer must keep good records. This means saving all relevant documents for every completed purchase.
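The three-way match at the heart of step 9 can be sketched as a simple check. The field names (item, qty, price) are illustrative assumptions; receiving documents often omit price, so that field is only compared when present:

```python
# Sketch of the step-9 three-way match: invoice, receiving document, and
# purchase order must agree before payment. Field names are illustrative.

def three_way_match(po: dict, receipt: dict, invoice: dict) -> list:
    """Return a list of mismatched fields; empty means payment can proceed."""
    issues = []
    for field in ("item", "qty", "price"):
        # A field absent from the receipt (e.g. price) defaults to the PO value.
        if not (po[field] == receipt.get(field, po[field]) == invoice[field]):
            issues.append(field)
    return issues

po      = {"item": "bracket", "qty": 100, "price": 2.50}
receipt = {"item": "bracket", "qty": 100}
invoice = {"item": "bracket", "qty": 100, "price": 2.75}
print(three_way_match(po, receipt, invoice))  # ['price'] -- hold payment
```

The records kept in step 10 are exactly the inputs this check needs, which is one practical reason the documentation discipline matters.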