“We use simulation to design the product, using things like topology optimisation, or we integrate control systems at the earliest stages, and then when we get to validation, it’s a simple check box instead of an iterative process. So, simulation-driven design is putting simulation at the front of the design process and using simulation technology to create a design instead of using simulation to analyze it later.”
We used to have to wait for a design to come downstream before testing, building or simulating a design. Then, if the design wasn’t up to scratch, we would literally have to go back to the drawing board and try again. Now, armed with design and simulation tools we can effectively simulate early, and decide what strategy to use before getting too deep into the product lifecycle. We are now entering a world of pre-validated design.
“Generative design and topology optimisation are a completely different concept: the point where simulation-driven design really started to move beyond design validation, with simulation shifting from being used to validate a finished design to becoming the actual main driver that informs the design.”
We can basically do a motion simulation, and bring those loads and motions into an optimisation. We’re optimising full assembly-level designs, which is something that no one else is really doing either. We can have multiple components all being optimised at one time.
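As a hedged sketch of the workflow described above – extracting worst-case loads from a motion simulation and assessing every component of an assembly in one pass – the following Python fragment uses invented component names, masses and allowables; it illustrates the data flow only, not any vendor’s API.

```python
# Illustrative only: worst-case inertial load per component from a simple
# motion profile (F = m * a), fed into a single assembly-level assessment.

def motion_loads(mass_kg, accel_profile):
    """Peak inertial load for one component over a motion profile (F = m*a)."""
    return mass_kg * max(abs(a) for a in accel_profile)

def assess_assembly(components, accel_profile):
    """Assess all components of an assembly at once against their worst-case
    loads. `components` maps a name to (mass_kg, allowable_stress_Pa, area_m2);
    the result holds a stress-utilisation ratio per component (< 1 is safe).
    All names and values here are hypothetical."""
    result = {}
    for name, (mass, allowable, area) in components.items():
        load = motion_loads(mass, accel_profile)
        result[name] = load / (allowable * area)
    return result

assembly = {
    "bracket": (2.0, 200e6, 1e-4),   # illustrative values, not real data
    "linkage": (0.5, 200e6, 5e-5),
}
utilisation = assess_assembly(assembly, accel_profile=[0.0, 9.81, -4.0])
```

In a real optimisation loop this utilisation figure would drive material removal or sizing per component; here it only shows how motion-derived loads can enter a multi-component pass.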
“We created a whole suite of digital modeling tools that allow you to take those concepts that are generated and quickly generate smooth, organic surface design. It’s ‘3D tracing for engineers,’ so you’re basically tracing over the top of an optimised shape and you end up with a final printable design or a design that’s ready for machining.
Virtual development makes use of the whole suite of tools for layout, geometry generation, calculation and analysis, test and evaluation, including the management of digital data. This suite enables teams to anticipate far more considerations for product and process design, concurrently and sooner than in earlier days. Although mostly relying on and working with approximated data, calculations and simulations today come close to verification and qualification test results.
There wasn’t really a clearly defined method of building a digital twin in the past: “What people would do in the past would be to just build a virtual prototype. There was no real notion of connecting it to operational data. And there was no real notion of actually deploying the digital twin at scale in an operational setting.
So, simulation is typically used at the design phase. You would build one virtual prototype for your design, validate it, and that’s it. But now what we want to do is to take that design and replicate it as you would with other virtual assets. That concept of scaling out is what’s really new here.”
The concurrent assessment provides timely information on the status of the Digital Twin. Once evident, troubles and issues need to be tracked and monitored to ensure that they are resolved. Otherwise, errors and quality nonconformities could perpetuate, creating a “snowball effect” of troubles with many people using flawed data. The view ahead is an attempt to anticipate future developments and risks based on a good degree of engineering and management judgement.
The Digital Twin makes it possible to examine many more cases in which often small and seemingly negligible issues are placed within a larger geometrical context so that the real problem becomes apparent. That is then a starting point for pro-active risk management and for corrective actions.
The Digital Twin is a core element in virtual product development as it is the culmination of the design intent that gives the product an “early face”. If Virtual Product Development is the unifying concept and approach, Virtual Prototyping is the process of digitally testing and evaluating the virtual representations of the product in all aspects of developmental and operational life. The Digital Twin is embedded in a spectrum of prototyping activities, which aim to mature the product as quickly, as cheaply and as reliably as possible.
The Digital Twin is the major development instrument to master complexity. As complexity is characterised, among other things, by its “variety” – the number and difference of system states that the elements involved may assume – the Digital Twin gives order to the thousands or millions of parts that in the end will represent the final product.
It is virtually impossible to anticipate all events, e.g. disturbances, that will challenge the project’s progress. The Digital Twin reflects the results of the many design considerations in a dynamically changing environment. It shows the impacts of one’s own design on the “neighbours”, thus creating the basis for spatial and functional arrangement and providing a unique reference for anybody involved, wherever located.
The level of detail of digital models changes throughout product development and depends on particular phase requirements and objectives. The overall aim of the Digital Twin is to allow full visualisation of the product in three dimensions at any time, and to be able to simulate and analyse geometric and functional behaviour.
The Digital Twin is actually the pairing of Configuration Management and 3D design. The aim is to provide a complete digital product in all variants for all customers, regardless of the design phase, to designers who work on them concurrently. In former times, usually one standard aircraft was developed at a time.
The major points for the Digital Twin are providing the right data for the right configuration, and having only one representation switched on to avoid geometrical overlapping. This handling of multiple variants and iterations places particular emphasis on effective management, ensuring that the effective/confirmed technical application is embodied in the Digital Twin. This relies on a set of configuration attributes.
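A minimal sketch of how such configuration attributes might switch exactly one representation on per install position; the attribute names (`customer`, `msn_range`) and variant IDs below are invented for illustration, not a real attribute schema.

```python
# Hedged sketch: pick the single variant whose configuration attributes
# match the requested configuration, so no two representations overlap
# geometrically. All attribute names are hypothetical.

def effective_representation(variants, config):
    """Return the id of the one variant matching `config`.

    `variants` is a list of dicts with an 'id' plus configuration
    attributes; raises if zero or several variants are switched on."""
    matches = [v for v in variants
               if all(v.get(k) == val for k, val in config.items())]
    if len(matches) != 1:
        raise ValueError(f"expected exactly 1 active variant, got {len(matches)}")
    return matches[0]["id"]

pump_variants = [
    {"id": "pump-A", "customer": "X", "msn_range": "001-050"},
    {"id": "pump-B", "customer": "Y", "msn_range": "001-050"},
]
active = effective_representation(pump_variants,
                                  {"customer": "X", "msn_range": "001-050"})
```

The hard failure on zero or multiple matches mirrors the rule stated above: exactly one representation may be switched on for a given configuration.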
The fact is that real systems continuously change because of preconditions and environment alterations e.g. fluidity of boundaries so that certain states are more probable than others. This implies that out of all potential configurations the system might adopt there will be a selection of the favourable ones, hence also an avoidance of unfavourable ones. This strongly influences and determines the “shape” or appearance of actual structural complexity.
The solution is to extract the product structure with parts and assemblies related in different ways. A view is therefore a filtered extraction of a product structure from the digital database according to pre-defined criteria. The challenge is to provide views that correctly reflect the status of the product in near real time.
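The idea of a view as a filtered extraction of the product structure can be sketched in a few lines; the tree shape and the `zone` attribute below are assumptions for illustration, not a real product-structure schema.

```python
# Illustrative sketch: a "view" is the product-structure tree filtered by
# a pre-defined criterion; a parent survives if it matches or any child does.

def extract_view(node, keep):
    """Return a filtered copy of the product-structure tree, or None if
    neither this node nor any descendant satisfies keep(node)."""
    children = [c for c in (extract_view(ch, keep)
                            for ch in node.get("children", [])) if c]
    if keep(node) or children:
        return {"name": node["name"], "children": children}
    return None

structure = {"name": "wing", "children": [
    {"name": "fuel-pipe", "children": [], "zone": "inboard"},
    {"name": "harness", "children": [], "zone": "outboard"},
]}
inboard = extract_view(structure, lambda n: n.get("zone") == "inboard")
```

Keeping the filter as a plain predicate is what makes the same database yield different views (by zone, by maturity, by configuration) from pre-defined criteria.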
One may also call that view ‘as-manufactured’ or ‘as-produced’. ‘As-planned’ refers to manufacturing planning: how the product shall be decomposed so that all manufacturing operations and constraints are accounted for. It shows how the manufacturing engineer thinks everything will be put together. A closely related view is the ‘as-built’, which reflects the real sequence of operations in the assembly work space. That accounts for unforeseen events such as supply delays and assembly errors. If everything goes according to plan, the ‘as-built’ and the ‘as-planned’ will match.
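A small illustrative check of whether the ‘as-built’ record matches the ‘as-planned’ sequence; the operation names below are invented.

```python
# Sketch: compare an 'as-planned' operation sequence with the 'as-built'
# record and report skipped, added, and re-ordered operations.

def sequence_deviations(as_planned, as_built):
    """Return the deviations of as_built from as_planned."""
    skipped = [op for op in as_planned if op not in as_built]
    added = [op for op in as_built if op not in as_planned]
    # Re-ordering: the shared operations appear in a different order.
    common_plan = [op for op in as_planned if op in as_built]
    common_built = [op for op in as_built if op in as_planned]
    return {"skipped": skipped, "added": added,
            "reordered": common_plan != common_built}

plan = ["rib-install", "pipe-route", "harness-route", "seal"]
built = ["rib-install", "harness-route", "pipe-route", "seal"]
dev = sequence_deviations(plan, built)
```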
With discrete checking, 3D design activities are virtually stopped at a certain point in time. After the checks, problem areas are resolved and design goes on till the next check. Continuous checking doesn’t stop the design for checking’s sake but continuously takes “snapshots” of the Digital Twin. Design could have solved the problem in the meantime, but experience on previous programmes indicates, however, that troubles resolved before being documented and communicated are the minority of cases.
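Continuous checking can be sketched as tracking clash identifiers across successive snapshots and flagging those that persist; the snapshot history and clash IDs below are invented.

```python
# Sketch: each snapshot yields a set of open clash IDs. An issue's "age"
# counts consecutive snapshots it has survived; issues absent from a
# snapshot (i.e. resolved by design in the meantime) drop out.

def persisting_issues(snapshots):
    """Given an ordered list of clash-ID sets, return the issues still
    open in the latest snapshot with their consecutive-snapshot age."""
    age = {}
    for snap in snapshots:
        age = {issue: age.get(issue, 0) + 1 for issue in snap}
    return age

history = [{"clash-17", "clash-22"}, {"clash-17"}, {"clash-17", "clash-30"}]
open_issues = persisting_issues(history)
```

Long-lived entries in this map are exactly the “troubles existing for a longer time” that the review follow-up would escalate.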
The review process is closely related to the check process and is a primary quality improvement and sustainment measure. Whether Preliminary and Critical Design Reviews, everyday technical reviews or dedicated Digital Twin reviews, all have the same supporting activities.
The reviews have to be prepared with the right data, reports, statistics and background information; they have to be run, e.g. doing network flythroughs, showing critical areas, and they need to be followed up. The follow-up is particularly important to track resolutions, feed statistics and report on progress or nonprogress, e.g. for troubles existing for a longer time and to escalate them to higher hierarchical levels should there be disagreements on resolution.
Standard physical techniques have been successful industry practice for a long time but have a number of shortfalls that are better addressed with their digital substitutes. In fact, physical mock-ups only provide assurance that everything will fit together – in former times based on drawings.
Updates with modifications or duplications are expensive, as a physical component always has to be produced. The response to changes is slow, except for minor ones like drilling holes, and there is an inability to reflect real-time configurations.
So the physical mock-up is usually representative of only one aircraft, and is of very little use afterwards, e.g. after certification. Last but not least, and maybe the major issue, it is quite costly. There are design, labour and maintenance hours to be paid for, not to forget tools and all the materials. In addition, it consumes precious factory floor space.
Digital Twin tools have matured insofar as they cover most requirements for geometrical and functional assessment and, to a lesser extent, operational constraints. Shortfalls have to be compensated with specific tools, e.g. for tolerancing. To fully cover the whole spectrum requires all tools to communicate efficiently with each other so that results obtained individually can be condensed into an overall view of the Virtual Prototype.
The majority of Digital Mock-up applications, as the classical substitute for Hardware Mock-ups, cover the geometrical and functional areas. The closer one tries to assess the behaviour and interactions of the product in its environment, the more efforts will shift to the right end of the spectrum.
Another issue focuses on decision making. Deciding means selecting what information is deemed important and which actions will result from it. The process relies on so-called schemes. These are a set of rules that allow ordering and interpreting new experiences and findings. Schemes are grouped in several categories:
First, there are constraints, which have a restricting impact on possible representations of a system. Among them are rules, norms, standards, guidelines and principles. Constraints also define the degrees of freedom one has for decisions and actions. Then, there are patterns, which put together different dependencies, thereby reducing complexity and presenting a simplified view of a seemingly chaotic system. They support general understanding of proceedings as well as the complete and integrated perception of systems. Last but not least, there are models, which, as mentioned above, show the strong interactions between the process of perception and that of decision-making.
‘Order’ is defined as a state of many elements of different kinds in which, based on knowledge of the time and/or space parameters of some elements, one can make assumptions about the rest that have a good chance of turning out right. Terms often used for that are “structure”, “pattern”, “configuration” or simply “system”. Order, on one hand, can be made and deliberately planned.
Such an “organisation” requires a rather clear goal, and people performing concrete tasks that serve a common purpose. As such, it is imposed as a kind of force from outside. On the other hand, order can be grown, spontaneously developed. Created through regularity in the behaviour of its elements, it comes from inside and is self-regulating and self-organising. No clear goals are pursued; individuals follow their own goals in addition. Such an order is more abstract and may cover an uncountable number of circumstances.
In conclusion, Digital Twin tools deliver better overall performance than their physical Mock-up predecessors. From a design point of view, managing the sheer size and complexity within the tight development schedule has made it virtually impossible not to use a Digital Twin. The spotlight investigation further supports the view that Digital Twins “produce” considerably fewer quality deficiencies in wing equipping and final assembly than before.
1. Type of parts in mock-up
These are the different technological types of parts that are actually designed to be in the Digital Twin. The classification scheme goes from simple to very complex. It is important to note that not each and every part has been counted.
2. Integration density
The more components are to be installed in a certain space volume, the more challenging and complex the integration task becomes. Therefore this indicator approximates the real integration density situation in the mock-up with the reference part count and a simplified wing volume calculation.
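The indicator reduces to a simple quotient; the part count and simplified wing-volume figure below are placeholders for illustration, not programme data.

```python
# Sketch of the integration-density indicator: parts to be installed per
# cubic metre of available wing volume. Input values are hypothetical.

def integration_density(reference_part_count, wing_volume_m3):
    """Approximate integration density as parts per cubic metre."""
    return reference_part_count / wing_volume_m3

density = integration_density(reference_part_count=12000, wing_volume_m3=60.0)
```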
3. Drawing tool
This indicator considers the question of how the design or the drawing is actually created – e.g. with or without 3D tools – accounting for more or less complexity. It takes into account human perception and the correct or flawed realisation of the overall geometrical situation. The plain and discrete assessment of which drawing tool was/is used makes it an object-level indicator.
4. Mock-up class
Mock-up class indicators tell the level of detail as well as the scope required for representation of the geometrical situation in the mock-up. These classifications have the character of requirements and overall concepts, defining what has to be done.
5. Mock-up policy
The mock-up policy describes the generic and operational rules to be followed for the mock-up campaign including quality assurance rules and roles and responsibilities, internally as well as across the supply chain.
6. Mock-up tasks to be fulfilled
This covers the range of tasks and checks that have to be done in the course of the mock-up campaign. It describes the “what” of mock-up activities to be done. These can be rough space-allocation studies, detailed assembly-integration checks, maintainability validations or particular risk analyses. Each task can be separately identified and therefore accounted for as a single item.
7. Tool environment and maturity
This indicator addresses the actual tool-set situation across the extended enterprise that prevailed for most of the investigational time frame. The two major distinctions are the questions of a homogeneous versus a heterogeneous tool environment and their respective maturities. This situation determines the operational playing rules, with respective efficiency gains or performance shortfalls.
8. Number of components ratio
This is the number of parts, assemblies and components in the wing compared to those created for the mock-up as a whole. Also based on the “reference part count”, it denotes the complexity of the campaign effort in terms of part/assembly creation: the more the mock-up resembles the real wing, the higher the complexity, because more parts and their interactions have to be handled.
9. Moving components ratio
Based on the components created for the mock-ups, this factor sheds light on the difficulty of integrating fixed and movable components. The more movable components there are with respect to the overall reference part count, the higher the complexity. Here, it is not the different types of defined motion paths that are of interest, but the overall level of movable components that have to be integrated.
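Both ratio indicators reduce to simple quotients against the reference part count; the counts used below are invented for illustration.

```python
# Sketch of the two ratio indicators. Inputs are hypothetical counts,
# not programme data.

def components_ratio(mockup_parts, reference_part_count):
    """Indicator 8: how closely the mock-up resembles the real wing
    (higher ratio = more parts and interactions to handle)."""
    return mockup_parts / reference_part_count

def moving_components_ratio(movable_parts, reference_part_count):
    """Indicator 9: share of movable components to be integrated
    (higher ratio = harder fixed/movable integration)."""
    return movable_parts / reference_part_count

resemblance = components_ratio(9000, 12000)
movability = moving_components_ratio(600, 12000)
```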
10. Configuration similarity
This indicator describes whether or not teams had experience with wings of similar structure and systems arrangements before the respective campaign. If so, that would imply a certain familiarity with the integration task, which in turn would contribute to reducing complexity.