With respect to developing products linked to networks, we’re in the early stages. But soon, some companies are going to want to sidestep development problems they’re experiencing. They’re going to realise that building and testing five rounds of prototypes is unacceptable. They’re going to realise months of delay completely undermines their competitive position in the marketplace.
As a result, those companies are going to want to adopt more proven and standardised practices. Virtually prototyping with Digital Twins is not there yet. But given the rush of companies toward network solutions, you should expect the demand for this practice to only increase.
The concept of a Digital Twin is mostly applied to the case where more insight is obtained from a physical operating product and a network platform. But in the case of virtual prototyping, the Digital Twin concept is applied to an old practice in mechanical hardware.
One approach to developing a smart, connected product, one that hooks up with a network platform, is to just build it. Just piece it together. Throw some sensors on a product. Wire that to some kind of embedded system. Wire that to your antenna. Start sending data to a network platform. You and your organisation can actually learn a lot from going through that exercise.
While that needs to be done, you will quickly run into limitations on the experiments you can conduct with physical prototypes. Swapping out a sensor isn’t easy when it’s soldered in place. There might not be room, physically, for the sensor you really need for accurate measurement. You might run into too much electromagnetic interference for the antenna you planned to use.
Working through these issues is new to some organisations as they transition traditionally mechanical products to smart, connected ones. However, the problem of resolving design issues through physical prototyping isn't new. In fact, it is an old one when it comes to hardware. Long ago, mechanical and electrical engineers figured out that modeling and simulating a design virtually makes you far more likely to get it right the first time when you reach prototyping and testing.
The benefits of an approach utilising virtual prototyping with Digital Twins are many. You have fewer rounds of prototyping, saving money and time. You have fewer change orders. You stay on schedule. You stay on budget.
So while virtual prototyping is new to some organisations, this approach has advantages when applied to the development of linked smart, connected products and networks platforms. Digital Twins are a key enabler.
How exactly can Digital Twins be used to virtually prototype smart, connected products and network platforms? You first need to set up the digital model component of a Digital Twin with one of the following:
Numerical Models: These models use machine learning and artificial intelligence tools. These applications or agents extrapolate from sensor data and/or correlate it to known events. Both are efforts to predict future behaviour.
1D Simulation: These models combine flow diagrams with equations or formulas behind the blocks to simulate the performance of embedded systems or multi-disciplinary engineering systems. These models can provide deeper insights into ongoing operations.
3D Simulation: These models, often in the form of multi-body dynamics, are commonly used to predict the dynamics and structural performance of products. These models can provide deeper insights into ongoing operations.
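To make the 1D category concrete, here is a minimal sketch of a lumped-parameter model: a first-order thermal model of a motor housing, integrated with Euler steps. The model form (heat in from losses, heat out to ambient) is standard, but every parameter value and name here is an illustrative assumption, not taken from any particular product.

```python
# Minimal 1D (lumped-parameter) simulation sketch: a first-order thermal
# model of a motor housing, the kind of block-with-equations model the
# text describes. All parameter values are illustrative assumptions.

def simulate_motor_temperature(power_w, ambient_c=25.0, thermal_resistance=0.5,
                               thermal_capacitance=120.0, dt=1.0, steps=600):
    """Euler-integrate dT/dt = (P - (T - T_amb)/R) / C."""
    temps = []
    t = ambient_c
    for _ in range(steps):
        # Heat in from electrical losses, heat out to ambient
        dT = (power_w - (t - ambient_c) / thermal_resistance) / thermal_capacitance
        t += dT * dt
        temps.append(t)
    return temps

history = simulate_motor_temperature(power_w=200.0)
steady_state = history[-1]   # approaches ambient + P*R = 125 C
```

Even a model this small can answer a real question, such as whether a housing temperature ever exceeds a sensor's rated limit, before any hardware exists.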
In a virtual prototyping scenario, whether based on engineering physics or asset operation, no prototype or operating product exists yet. As such, there is no live sensor data to feed this digital model. However, the model can be fed historical sensor data from prior products, or from past physical tests or operations. Failing that, a set of inputs can be modeled statistically, or generated by a higher-level simulation such as a multi-body dynamics model. Either way, this creates a set of input data that can be fed to the digital model.
The combination of that digital model and the input is enough to get started. You run the model as a simulation, generating data from virtual sensors, which are points of measurement within the simulation.
In this application, that virtually generated data is used instead of physically recorded data from sensors. That output can then be fed to a network platform as if it were receiving streaming data from a running product. Only, in this case, there is no physical product. There is only a virtual product that is running in a simulation.
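A sketch of that hand-off: simulation output at one measurement point is wrapped into telemetry messages exactly as physically recorded sensor readings would be. The payload schema, field names, and sensor id below are assumptions for illustration, not any real platform's API.

```python
# Sketch: turning simulation output into "virtual sensor" messages, fed to
# a network platform as if they were physically recorded telemetry.
# The message schema and sensor id are illustrative assumptions.
import json

def virtual_sensor_stream(sim_values, sensor_id="temp-01", period_s=1.0):
    """Wrap each simulated measurement as one telemetry message."""
    for step, value in enumerate(sim_values):
        yield json.dumps({
            "sensor_id": sensor_id,
            "timestamp_s": step * period_s,   # simulated clock, not wall time
            "value": round(value, 3),
        })

# A stand-in for simulation output at one virtual measurement point
sim_values = [25.0 + 0.8 * i for i in range(5)]
messages = list(virtual_sensor_stream(sim_values))
```

The platform side need not know, or care, that the "product" on the other end is a simulation.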
In this scenario, you overcome many issues that you might experience when trying to physically prototype a smart, connected product.
You can change anything related to the sensor configuration, including placement or type.
You are not limited by network bandwidth other than the limitation between the compute resource running the Digital Twin and the one running the network platform.
You can change the product design in terms of mechanical or electrical hardware, embedded systems and more.
There is a tremendous amount of flexibility with this approach.
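One way to see that flexibility: in a virtual prototype, the sensor configuration is just data, so "swapping out a sensor" is editing a record rather than reaching for a soldering iron. The fields and values below are illustrative assumptions.

```python
# Sketch: sensor configuration as data in a virtual prototype.
# All field names and values are illustrative assumptions.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class VirtualSensor:
    sensor_id: str
    kind: str            # e.g. "thermocouple", "accelerometer"
    location_mm: tuple   # placement on the 3D model
    sample_rate_hz: int

config = [
    VirtualSensor("s1", "thermocouple", (10.0, 0.0, 5.0), 1),
    VirtualSensor("s2", "accelerometer", (42.0, 3.0, 0.0), 1000),
]

# Change a sensor's type and placement: no rework of physical hardware
config[0] = replace(config[0], kind="rtd", location_mm=(12.5, 0.0, 5.0))
```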
So now you have this concept of using a Digital Twin to virtually prototype a smart, connected product that is linked to a network platform. What does that get you? Interestingly, it allows organisations to answer a set of serious questions.
1. Is the systems configuration right for this product?
2. Do we need to use a physical sensor to capture this data, or can it be a virtual sensor?
3. Is the data we want to flow to the network platform limited by bandwidth?
4. Is edge processing required for the sensor data?
5. What data should be processed on the product versus being fed to the network platform?
6. Are there changes that should be considered to improve placement of sensors?
7. Are there changes that should be made to avoid interference?
8. How will the connected product and network platform work together to fulfill requirements?
9. What conclusions can be drawn from the data once it is in the network platform?
10. What data trends are precursors to events critical to the connected product?
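Questions 3 to 5 in particular lend themselves to simple back-of-the-envelope checks within the virtual prototype. The sketch below compares the bandwidth of streaming raw samples against sending edge-computed window features; the message sizes and rates are assumptions for illustration.

```python
# Sketch: using the virtual prototype to answer questions 3-5, comparing
# raw streaming against edge processing. Sizes/rates are assumptions.

RAW_SAMPLE_BYTES = 8          # one float64 reading
FEATURE_MSG_BYTES = 32        # e.g. min/max/mean/rms per window

def raw_bandwidth_bps(sample_rate_hz, n_sensors):
    """Uplink needed to stream every raw sample."""
    return sample_rate_hz * n_sensors * RAW_SAMPLE_BYTES * 8

def edge_bandwidth_bps(window_s, n_sensors):
    """Uplink needed when the edge sends one feature message per window."""
    return (FEATURE_MSG_BYTES * 8 * n_sensors) / window_s

def needs_edge_processing(sample_rate_hz, n_sensors, uplink_bps):
    """True when raw streaming would exceed the available uplink."""
    return raw_bandwidth_bps(sample_rate_hz, n_sensors) > uplink_bps

# Two 1 kHz accelerometers on a 64 kbit/s uplink: raw streaming won't fit,
# so some processing must happen on the product before data leaves it
decision = needs_edge_processing(sample_rate_hz=1000, n_sensors=2,
                                 uplink_bps=64_000)
```

Running checks like this against the Digital Twin answers the edge-versus-platform question long before antenna hardware is chosen.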
Top 10 Operating Mechanisms of “Digital Twin” Architecture Framework Represent Flow Between Each Builder Module and Sub-System
To ensure digital networking of production systems and the optimisation of material-specific requirements, we need to measure, assess and replicate the changes in material properties in a process where "Digital Twins" of materials are created.
The materials data space has laid the groundwork for this process. When a finished part rolls off the production line, one of the first questions asked is: "Does this component have the properties we want?"
Often, even the tiniest of variations in the production environment are enough to alter a part’s material properties – and throw its functionality into question.
Manufacturers guard against this through close inspection of samples throughout the production process. But breaking the samples down into their constituent parts and measuring them separately is an extremely time-consuming process.
"The outcome of the sample testing process branches out into an array of different subsets, each with their own specific measurement results. While experts may be able to keep an overview of the complex interrelationships in their heads, until now there has been no way to take the diversity of resulting data and portray it in a coherent digital format."
Now, for the first time, a proof of concept has been developed demonstrating that it is possible to digitally represent many such material processing cycles with a materials data space for test specimens produced using additive manufacturing.
"The data space concept allows us to integrate any type of material information into a digital network – a really valuable tool. We want to use the materials data space to automatically generate a digital twin of each material that will mirror the current state of the physical object under examination."
Data spaces can be used to integrate all types of materials information into digital networks. The advantage of the materials data space is that it provides an overview of all relevant parameters at a glance, whereas formerly data on different material parameters was scattered among numerous data repositories in many different formats.
But the real promise lies in the future. "In the years to come, the materials data space has the potential to become the production command center. Whenever component quality isn’t up to the expected standard, you can compare it with information on previous components stored in the materials data space to determine whether the present component can in fact be used or whether it must be rejected.
In the future, these results could be automatically integrated into industrial decision-making processes: whenever component quality dips below the required standard, production automatically comes to a halt.
Creating the data space – and managing the diversity of materials data – calls for a corresponding information model. "In this case, the model reflects the natural material world, in which material states and properties are assigned to defined categories.
The best way of thinking about it is in terms of a social network where each user is a node in the network. And in turn, these nodes have their own subject matter associations. What we do is to create semantic relationships between the individual material objects and their associated processing steps.
Then there are also interrelationships among these communities. What would be a “follow” on social media is represented in the materials data space by details on the chronological sequence of production or work steps, for instance "leaving the additive manufacturing process" or "this laser is part of the 3D printing process".
The new demonstrator for additively manufactured metal components has the capacity to generate samples, characterize the materials they contain, conduct subsequent data analysis and determine material properties.
Thanks to the logic underpinning the model, users can make extremely complex queries of the data space that simply wouldn’t be possible with the same degree of flexibility in the case of a conventional database.
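The social-network analogy above maps naturally onto a graph of typed relationships. The toy sketch below stores material objects and process steps as subject-relation-object triples and runs the kind of pattern query the text describes; every node name and relation type is an illustrative assumption.

```python
# Sketch of the semantic model described above: material objects and
# process steps as nodes, with typed relationships between them.
# All node names and relation types are illustrative assumptions.
triples = [
    ("sample_A", "produced_by", "3d_printing_run_7"),
    ("sample_B", "produced_by", "3d_printing_run_7"),
    ("laser_2",  "part_of",     "3d_printing_run_7"),
    ("sample_A", "next_step",   "heat_treatment_1"),
    ("tensile_test_9", "performed_on", "sample_A"),
]

def query(triples, relation=None, obj=None):
    """Return subjects matching a relation/object pattern."""
    return [s for s, r, o in triples
            if (relation is None or r == relation)
            and (obj is None or o == obj)]

# "Which samples came out of printing run 7?"
run7_samples = query(triples, relation="produced_by", obj="3d_printing_run_7")
# "Which equipment is part of that run?"
run7_equipment = query(triples, relation="part_of", obj="3d_printing_run_7")
```

Because relationships are first-class data rather than fixed table columns, new kinds of links (a new process step, a new test) can be added without reshaping a schema, which is what gives such queries their flexibility relative to a conventional database.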
1. Creation of quality prediction models: Collected sensor/process network data is analysed with AI techniques to build quality prediction models
2. Storage of quality prediction/response models: Response plans/times are entered into the model repository according to operator experience
3. Creation of production schedules: After production simulation, schedules including machine/production target outputs specified against a timetable are sent to the coordinator to feed into the quality/productivity detector
4. Request for reference performance indicators: The coordinator delivers the production schedules to the performance indicator simulation sub-system to be used as production-monitoring standards
5. Transmission of reference performance indicators: Reference models created from simulation results serve as the manufacturing execution monitoring criteria, including target quality and production per unit time for each process
6. Quality analysis and prediction: The quality/productivity detector predicts quality and productivity in real time using the quality prediction model and reference performance indicators
7. Request for analysis of response plans: The coordinator requests analyses of response plans for irregularities from the model repository and receives the corresponding information
8. Request for future performance indicators: Responses to irregularities and the current schedule are sent to the performance indicator simulation sub-system to predict future performance indicators
9. Request for new schedules: The coordinator requests a new schedule when the change between initial and future performance indicators due to irregularities is significant
10. Transmission of visualisation information: Progress of the entire system is sent to the dashboard as information visible to the user through a graphical user interface
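The heart of this loop is steps 5 and 6: real-time output compared against the reference performance indicator from the simulation sub-system, with irregularities flagged for the coordinator. The sketch below shows that comparison only; the reference rate, observed counts, and tolerance are illustrative assumptions.

```python
# Sketch of steps 5-6 above: the quality/productivity detector compares
# real-time output against the reference performance indicator supplied
# by the simulation sub-system. All numbers are illustrative assumptions.

reference_units_per_hour = 120          # reference indicator (step 5)

def detect_irregularity(observed_rate, reference_rate, tolerance=0.10):
    """Flag when observed output falls more than `tolerance` below reference."""
    return observed_rate < reference_rate * (1.0 - tolerance)

observed = [121, 118, 119, 104, 99]     # streaming production counts (step 6)
alarms = [detect_irregularity(x, reference_units_per_hour) for x in observed]
```

In the full framework, each `True` here would trigger steps 7 to 9: the coordinator requesting response-plan analyses and, if the predicted impact is significant, a new schedule.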
Top 10 “Digital Twin” Engineering Team Strategies Define Agent-Machine Interaction Focus Configuration Programme Goals
1. Formalise the planning, development, integration, and use of models to inform enterprise and programme decision making and support engineering activities that digitally represent the system of interest
2. Ensure models are accurate, complete, and usable across disciplines to support communication, collaboration, performance, and decision making across lifecycle activities
3. Provide an enduring, authoritative source with secure authentication and access controls to establish the technical baseline and product digital artifacts, and to support reviews for accurate decision making
4. Incorporate technological innovation to establish end-to-end digital engineering enterprise and foster conditions for productive step advances towards goals
5. Enable end-to-end decision making using advanced human-machine interactions
6. Establish mature supporting digital engineering activity infrastructure to perform activities with connected information networks
7. Develop, mature, and implement technology tools to realise digital engineering goals and share best practices using models to collaborate with stakeholders
8. Improve digital engineering knowledge base, policy, guidance, specifications, and standards and streamline contracting, procurement and business operations
9. Lead and support digital engineering transformation efforts, vision, strategy, and implementation to establish accountability to measure and demonstrate results across programmes
10. Build and prepare workforce to develop knowledge, competence, and skills with active participation and engagement in planning and implementing
Top 10 Observations of Day-to-Day Work/Field Experiences Identify Trends/Challenges of “Digital Twin” Simulation Utilisation
Dedicated sessions were held with engineers, technical/simulation managers, R&D, and quality managers. Surveys were conducted using a typical inductive approach to assess key practices, processes, tools, and data associated with simulation and product/process development.
1. Domains/application: depth and completeness of engineering simulation areas
2. Methods: engineering simulation tool utilisation
3. Level of integration within driver/follower processes
4. Process gates and decision criteria: definition, completeness, visibility
5. Documentation of simulation process and decision-making criteria/milestones
6. Level of adoption/dissemination of engineering simulation in extended organisation
7. Depth/completeness of specific skills of engineering simulation
8. Organisation: relationship and integration between engineering simulation teams and the rest of product development teams
9. Data lifecycle/workflow: modelling, capture, revision, access/control
10. Infrastructure: central/distributed computational capacity, support/availability, post-process remote/local capabilities
Top 10 “Digital Twin” Implementations
1. 3D model of a generic part/product
2. 3D model of a specific part/product
3. 3D model shows live information
4. Live set of physical database assets
5. Process simulation to validate manufacturing potential
6. Human Machine Interface process plant
7. Live icon representation e.g. display indicator
8. Live numerical value graphic
9. Simulation of a machine e.g. rotating
10. Simulation responding to real time data
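Implementation 10, a simulation responding to real-time data, can be sketched as a state update that blends prediction with live measurement. The blending gain below is a deliberately simplified assumption; production twins typically use proper state estimators such as Kalman filters.

```python
# Sketch of implementation 10 above: a running simulation corrected by
# real-time data. The simple blend (fixed gain) is an illustrative
# assumption standing in for a proper state estimator.

def step_twin(state, measurement=None, drift=1.0, gain=0.5):
    """Advance the simulated state, pulling it toward live data when present."""
    predicted = state + drift            # the simulation's own prediction
    if measurement is None:
        return predicted                 # no live data: pure simulation
    return predicted + gain * (measurement - predicted)

state = 10.0
state = step_twin(state)                    # simulate one step
state = step_twin(state, measurement=14.0)  # a live reading pulls the twin
```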