Most “Digital Twin” simulation experts note that the terms component, parameter, and assembly suggest configuration design applies mainly to the configuration of physical objects, but the components can also represent other things, e.g. activities.
Simulation planning tasks in which the possible actions that constitute the plan are known, but the time order and dependencies between actions are unknown, can also be viewed as an example of a configuration design task.
Each solution agent determines the optimal configuration for the product concerned based on its local point of view. The configurations are then evaluated considering all the points of view taken in the previous steps.
The consensus set contains one or more solutions that are quite similar with regard to the constraints imposed by the different domain experts and the customer. These solutions allow the customer to choose among several product configurations which best fit the imposed requirements.
Simulation constraints differ from requirements in that requirements must be satisfied, while constraints must not be violated. For example, a requirement could be that a field-level unit should have an outlet for smoke in case of fire. Such a requirement could be satisfied by some contraption that fulfills this function.
An example of a constraint would be that the field-level position should not have an outlet higher than an upper boundary. This constraint does not say that there should be an outlet, but that if there is to be one, it should not exceed a certain upper boundary.
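As a minimal sketch of the distinction, assuming a hypothetical field-level unit with an optional outlet and an illustrative height limit:

```python
# Minimal sketch of the requirement/constraint distinction.
# The FieldUnit structure, the optional outlet, and the 2.5 m limit are
# illustrative assumptions, not values from any real specification.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FieldUnit:
    outlet_height_m: Optional[float] = None  # None means no outlet fitted

MAX_OUTLET_HEIGHT_M = 2.5  # hypothetical upper boundary

def requirement_satisfied(unit: FieldUnit) -> bool:
    """Requirement: the unit must have an outlet for smoke."""
    return unit.outlet_height_m is not None

def constraint_respected(unit: FieldUnit) -> bool:
    """Constraint: if an outlet exists, it must not exceed the upper boundary."""
    return unit.outlet_height_m is None or unit.outlet_height_m <= MAX_OUTLET_HEIGHT_M
```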
From that perspective, scheduling is in many respects similar to layout design: the components, i.e. the activities, are fully specified, and the problem is to find an arrangement of the activities in time that satisfies a number of constraints.
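A small illustrative sketch of that view of scheduling, using made-up activities and precedence constraints, enumerates arrangements in time and keeps those that satisfy the constraints:

```python
# Sketch: scheduling as configuration in time. The activities, durations,
# and precedence constraints below are illustrative placeholders.
from itertools import permutations

durations = {"prep": 2, "assemble": 4, "inspect": 1}          # hours (assumed)
precedence = [("prep", "assemble"), ("assemble", "inspect")]  # (before, after)

def satisfies(order):
    # An arrangement is feasible if every "before" activity precedes its "after".
    return all(order.index(a) < order.index(b) for a, b in precedence)

feasible = [order for order in permutations(durations) if satisfies(order)]
print(feasible)  # [('prep', 'assemble', 'inspect')]
```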
Multi-Agent Set-based design constructs can replace point-based design construction with design discovery; this allows more of the Reliability Simulation effort to proceed concurrently and defers detailed specifications until mission space tradeoffs are more fully understood.
The process involves facilitated negotiation between agents created to help design teams take into account product characteristics across different functions and all stages of the product life cycle.
The multi-agent process discovers and corrects errors while keeping the sets wide enough that the process can move forward and reach a converged solution without major rework. In a point-based design approach, the team would need to start the design over.
A hybrid system of design agents and intermediate digital agents shows promise as a means of achieving effective conceptual set-based design of blocks by a cross-functional design team.
Negotiation across the network provides an effective way to balance the interests of the design team members. A converged marketplace can assess the interaction and design value of different parameters.
The previous interface allowed only one-on-one negotiation, even though many agents had critical interests in some parameters.
The hybrid agent approach can provide a means to address the potentially limiting design communication and negotiation process in advanced cross-functional team design, even when the team is virtually linked across the network.
The Mission Reliability Digital Twin Simulation must be constructed to depict the intended utilisation of elements to achieve mission success. Elements of the item intended for redundant or alternate modes of operation must be modeled in a parallel configuration or similar Lego Block Construct appropriate to the mission phase and mission application.
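A minimal sketch of such a parallel Lego Block calculation, using assumed element reliabilities rather than values from any real system:

```python
# Sketch of a mission reliability "Lego Block" calculation: elements in series
# must all work; redundant/alternate elements in parallel are modeled so the
# block works if any one path works. All reliability values are assumptions.
from math import prod

def series(reliabilities):
    return prod(reliabilities)

def parallel(reliabilities):
    return 1.0 - prod(1.0 - r for r in reliabilities)

# Hypothetical mission phase: a sensor feeding two redundant radios, then a processor.
mission_reliability = series([0.99, parallel([0.95, 0.95]), 0.98])
print(round(mission_reliability, 4))  # ~0.9678
```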
The rapid diffusion of Digital Twins calls for open-source, entry-level models of subcomponent units. A system of systems with unit parts then serves as the Lego Block Construct, or baseline unit, to be created and built.
Previous build efforts stopped at the physical unit representation, but Digital Twins call on the source of the physical unit to contribute to a status update deposit complete with the engineering characteristics of operational reliability functions.
The Lego Blocks required for the Digital Twin movement are quite similar to the building blocks required to implement blockchain with trusted status updates of connected instances.
To deliver value, connections must span a broad macro system. The implementation of connections must not be tied to fixed steps or locations, but must be time-sensitive to maximise transmission and minimise sense/response delay between the edge and the core.
Have we encountered Block and Connection concepts elsewhere?
Well, we have. Multi-Agent Systems are present in Lego Block Constructs, with each agent representing one block component viewed as a baseline unit contribution to the Digital Twin model.
The convergence of Digital Twins and blockchain is evident. Enterprises dissociated by modular structures and associated by function in operational sequences present a series of steps subdivided into blocks: not only things/objects but also multi-agent models, units of work, processes, verification decisions, outliers, feedback, metrics, etc.
The components, their shapes, and their dimensions are known, and the configuration problem consists of finding the optimal layout of the components on the mold. The example domain of Lego Block Construct configuration also fully specifies the components.
Component Sequence Builds make it easy to represent objects, processes, and decision outcomes. Connected blocks may form simulation agent networks joined by common Digital Twins. For example, alignment concepts described in previous reports specific to appropriate blocks can lead to useful platforms.
A set of basic blocks could be specified as a general block with length, width, height, and color parameters. In that case the specification of the component space would contain one type, i.e. the rectangular block, with its parameters left open.
Block units and subunits can be configured to create a Digital Twin of a machine. Condition instances and units of status update represent the blocks that constitute the Digital Twin model.
Partially restricted configuration cases arise where a skeleton arrangement is given, but the specific arrangement still has to be determined. The other extreme case is where the space of possible arrangements is not limited in any way by the specification of the problem.
The Lego Block Construct problem is an example of a configuration problem where the skeleton arrangement is given, but the constituent blocks and their relations still have to be determined.
Some variants of configuration assume the requirements to be functional, in which case some relation between the arrangement of components and the required function must be determined.
In the Lego Block Construct problem, the requirements are not directly related to the components themselves, but to geometric properties of the final assembly. Testing the requirements against a possible solution will involve some aggregation over the properties of the individual components.
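A small sketch of both ideas, assuming a single parametric rectangular block type, a simple vertical stack, and an illustrative height requirement tested by aggregating over the assembled components:

```python
# Sketch: one parametric block type, with a requirement tested by aggregating
# over the assembled components. The block dimensions, the vertical-stack
# assumption, and the height requirement are all illustrative.
from dataclasses import dataclass

@dataclass
class Block:
    length: float
    width: float
    height: float
    color: str

def assembly_height(stack):
    # Aggregate a geometric property of the final assembly from its parts.
    return sum(block.height for block in stack)

stack = [Block(4, 2, 1, "red"), Block(4, 2, 1, "blue"), Block(2, 2, 3, "red")]
REQUIRED_HEIGHT = 5.0  # hypothetical requirement on the final assembly
print(assembly_height(stack) >= REQUIRED_HEIGHT)  # True (1 + 1 + 3 = 5)
```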
As an example, consider scheduling a meeting. Using a point-based process for scheduling, the scheduler might send out an announcement that a meeting is necessary and propose to schedule it at a time suitable to him/her, e.g., 10 am on Monday morning.
As the meeting attendees read the announcement, schedule conflicts will likely be revealed, causing back-and-forth communication and shifting the meeting time many times until a time is reached that is agreeable to many if not all.
In contrast, using a set-based process for scheduling, the scheduler could announce that a meeting is necessary and send out all of his/her available times, along with times currently booked that might be rescheduled.
Then, each of the meeting attendees could compare their own availability with that of others and negotiate which meetings are best rescheduled until a meeting time manifests itself out of the attendees’ constraints. The set-based process eliminates unnecessary back-and-forth communication and expedites the meeting time selection.
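A minimal sketch of the set-based step, with hypothetical attendees and time slots, simply intersects the availability sets:

```python
# Sketch of the set-based scheduling step: intersect every attendee's set of
# workable times instead of iterating on a single proposed time.
# Attendee names and time slots are illustrative.
availability = {
    "scheduler": {"Mon 10:00", "Mon 14:00", "Tue 09:00"},
    "attendee_a": {"Mon 14:00", "Tue 09:00", "Tue 15:00"},
    "attendee_b": {"Mon 14:00", "Wed 11:00"},
}

common = set.intersection(*availability.values())
print(common)  # {'Mon 14:00'} -- the time that survives everyone's constraints
```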
Operational stability increasingly dominates general officer debate. Tech innovations have changed the way weapons systems operate. Specifically, the traditional models that mission costing agencies construct fail to capture crucial processes.
Current controls by command centres have limited influence in global, highly interconnected and tech mediated networked systems. For example, the behaviour of high frequency computational agents on one exchange can have rapid knock-on consequences globally.
New kinds of decentralised self-organising collective action such as disagreements on mission requirements are possible using multi-agent platforms. At the same time centralised institutions find it increasingly difficult to control information and events leading to mission accountability crises.
The intent of the process is to couple the benefits of Multi-Agent Tradespace exploration in conceptual design with the benefits offered by survivability design principles and survivability metrics.
In particular, multi-agent tradespace exploration for survivability is a value-driven process in which the designs under consideration are directly traced to the value proposition, and the measures of effectiveness reflect the preferences of the decision-maker during nominal and disrupted scenario states.
By following a parametric modeling approach, broad exploration of the tradespace is enabled in which the decision-maker gains an understanding of how the value proposition maps onto a large number of alternative system concepts.
By emphasising breadth rather than depth, promising areas of the tradespace may be selected with confidence for future assessments, and sensitivities between survivability design variables and disturbance outcomes may be explored.
One implication of value thresholds changing as a function of scenarios is that the definition and scale of the utility axis will vary across nominal and disrupted scenario states. A general response to this implication is to elicit applicable multi-attribute utility functions across all potential scenarios from the decision-maker.
However, depending on the particular system under the purview of the decision-maker, it may be possible to assume that the attributes comprising the utility functions are constant, with variation only in the acceptability ranges and scaling of the single-attribute utility functions.
One must therefore inquire whether the lower bounds of attribute acceptability may be temporarily broadened in the presence of finite-duration disturbances and, if so, what magnitudes are associated with that extension.
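One way to picture this, as a hedged sketch with assumed numbers, is a single-attribute utility whose acceptability floor is temporarily broadened in the disrupted state while the attribute itself stays fixed:

```python
# Sketch: the attribute stays the same; only the acceptability range and
# scaling of its single-attribute utility change between nominal and
# disrupted states. All numeric ranges are assumed for illustration.
def single_attribute_utility(value, lo, hi):
    """Linear utility: 0 at the acceptability floor, 1 at the target."""
    if value <= lo:
        return 0.0
    if value >= hi:
        return 1.0
    return (value - lo) / (hi - lo)

# Example attribute: sensor coverage in notional units.
nominal_range = (50.0, 100.0)    # acceptability floor, target
disrupted_range = (30.0, 100.0)  # floor temporarily broadened during a disturbance

value = 45.0
print(single_attribute_utility(value, *nominal_range))    # 0.0 (unacceptable)
print(single_attribute_utility(value, *disrupted_range))  # ~0.21 (tolerated)
```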
The concept generation phase of tradespace exploration is concerned with the mapping of form to function. Working through solutions for how the attributes might be achieved, the designer inspects the attributes and proposes various design variables with associated ranges and enumerations.
Design variables are designer-controlled quantitative parameters that reflect an aspect of a concept, which taken together as a set uniquely define a system architecture. Each combination of design variables constitutes a unique design vector, and the set of all possible design vectors constitutes the design space.
In the process of proposing design variables, tension exists between including more variables to assess larger tradespaces and limits on evaluating a larger set of designs.
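A short sketch, with hypothetical design variables and ranges, shows how design vectors are enumerated and why the design space grows multiplicatively with each added variable, which is the source of that tension:

```python
# Sketch: design variables -> design vectors -> design space.
# The variables and their enumerated ranges are hypothetical placeholders.
from itertools import product

design_variables = {
    "antenna_diameter_m": [0.5, 1.0, 2.0],
    "orbit_altitude_km": [400, 600, 800],
    "redundancy_level": [1, 2],
}

design_space = [
    dict(zip(design_variables, combo))
    for combo in product(*design_variables.values())
]
print(len(design_space))  # 3 * 3 * 2 = 18 design vectors to evaluate
```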
Some models are not accredited for operational use because they contain deficiencies, such as optimistic representations of performance and simplistic representations of scenarios. In these cases, while data was initially supplied, model performance failed to meet the criteria for accreditation.
Subsequently, supporting rationale was not provided to explain these failures or how the modeling issues skewed the overall performance results. For example, modeled sensor tracking data used in recent tests was compared to real-world sensor tracking data, and the models representing some systems were found to perform better than the real-world systems.
These modeling deficiencies can affect other elements that rely on sensor data and can artificially inflate performance. In one case, launch-on-remote capabilities were over-estimated. As a result, the models were not accredited, so there was no verification that test results supporting tested capabilities are credible and reliable.
Additionally, some models used in operational assessments are overly simplistic. For example, modeled representations of the battle scene in the moments after intercept do not display the resulting complex scene caused by the large quantity of interceptor debris. This deficiency limits insight into how well the system will perform during realistic attacks.
Efforts to develop digital models can help in this area by providing more processing power and greater scalability for engagement complexity; however, the capability is not expected to be mature for some time.
Threat Models Cannot Be Traced Back to Underlying Threat Assessments: The value of ground test-generated data is dependent on the quality of the threat model that stimulates the test. However, the threat models have never been accredited before operational testing and, in some cases, not even after testing.
As is the case with other models, in some cases data needed to accredit the models was not received in a timely manner. Additionally, the threat model used in testing was not traced to the threat model developed based on the intelligence threat assessment.
For example, during a past test event, a model representing an important element rejected the intended threat model and instead ran its own internal threat model. As a result, the test did not reflect real world conditions where the entire system would be exposed to the same threat stimulus.
The test architecture is not designed to generate the data needed to confirm that all elements are reacting to the same model during testing, meaning that testers were not aware that other elements could also reject the approved threat model during testing.
These deficiencies introduce ambiguity into the test results, including the extent to which the system operated as an integrated system of systems against a common threat set. Officials are currently working on a pathfinder activity to help understand and rectify the traceability issue.
Although the warfighter and other decision makers rely on models to provide information about system effectiveness, capability delivery documentation does not include information about the quality of modeling data.
Specifically, memos and change packages describing technical capabilities delivered to the warfighter and their limitations do not discuss the extent to which the models used to assess the new capability are verified, validated, and accredited for assessment, or how test results were affected by model limitations. As a result, decision makers do not have complete information about the validity of the capability assertions in these documents and how much confidence should be placed in reported performance.
Decision makers need access to reliable and timely information to make operational decisions. Additionally, in cases where models and simulations cannot be validated and accredited, any modeling results should be balanced with a clear explanation of which areas of performance assessment could be affected by the lack of accreditation. Lack of such information could lead to miscalculations about how best to employ the system or uninformed decisions about where to focus future capability development and investment.
While officials have recently begun to brief some combatant commands on how modeling limitations impact the warfighter's understanding of delivered capabilities, these briefings are not readily available to other stakeholders and decision makers.
“Digital Twin” Simulations are at last matching reality—and producing surprising insights into Real-World Mission Space. The Site Visit Executive likes to challenge General Officers. Simulating the formation of real-world mission space, some briefings start by projecting images of creations made by a team of scientists next to photos of real mission results and defy the audience to tell them apart. "We can even trick General Officers. Of course, it's not a guarantee that the models are accurate, but it's sort of a gut check that you're on the right track."
For decades, scientists have tried to simulate how the thousands of pieces of equipment in the observable mission space interact. But in the past few years, thanks to faster computers, the simulations have begun to produce results that accurately capture both the details of individual fleet components and the overall distribution of part shapes. "The whole thing has reached this little golden age where progress is coming faster and faster."
As the simulations improve, their role also is changing. For decades, information flowed one way: from the scientists studying real missions to the modelers trying to simulate them. Now, insight is flowing the other way, too, with the models helping guide DoD Leaders. "In the past the simulations were always trying to keep up with the observations. Now we can predict things that we haven't directly observed."
The simulations also sound a cautionary note. Some scientists hope mission space formation will ultimately turn out to be a relatively simple process, governed by a few basic rules. However, modelers say their test results suggest mission space is unpredictable. "It's clear from everything that we've done that the physics of the mission space formation is incredibly messy."
Some simulations focus on individual components. Before you can cook up mission space simulations, you need to know the ingredients. Scientists also know the recipe's basic steps. Computer simulations helped develop that theory. As the theory grew more refined, so did the simulations. Recently one simulation produced a rendering of the mission space web whose structure closely matched how mission space elements are strewn through space in digital clusters, threads, and sheets.
Similar simulations suffered from a fundamental shortcoming, however. They modeled the interactions of some components alone, which are easy to simulate. Only once formed did the programs insert components of various sizes and shapes, following certain rules. In such simulations, "The fundamental assumption is that components don't do anything to each other. The interaction is all one way."
Now, modelers include the interactions of ordinary parts with themselves and with other components—processes that are far harder to capture. The life stages of fleet components have developed hand in hand with the large-scale structure of mission space.
In general, modelers attack the problem by breaking it into millions of bits, either by dividing space into a 3D grid of subvolumes or by parceling the mass of types of components into swarms of particles. The simulation then tracks the interactions among those elements while ticking through time in, say, multi-mission steps. The computations strain even the most powerful supercomputers.
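A toy sketch of that time-stepping approach, reduced to a handful of particles in one dimension with an arbitrary attraction law, illustrates the structure of the computation without pretending to be production simulation code:

```python
# Toy sketch of the time-stepping approach described above: mass is parceled
# into a small swarm of particles and pairwise interactions are accumulated
# at every tick. A 1-D illustration with an arbitrary softened attraction
# law, not any real simulation code.
positions = [0.0, 1.0, 3.0]
velocities = [0.0, 0.0, 0.0]
DT, G, SOFTENING = 0.01, 1.0, 0.1  # step size, coupling, softening (all arbitrary)

for _ in range(50):                              # tick through time
    forces = [0.0] * len(positions)
    for i in range(len(positions)):              # accumulate pairwise interactions
        for j in range(len(positions)):
            if i != j:
                dx = positions[j] - positions[i]
                forces[i] += G * dx / (abs(dx) ** 3 + SOFTENING)
    velocities = [v + f * DT for v, f in zip(velocities, forces)]
    positions = [p + v * DT for p, v in zip(positions, velocities)]

print(positions)  # the swarm has contracted slightly after 50 ticks
```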
The simulations are far from perfect. They cannot come close to modeling individual components—even though the simulations point to the importance of feedback effects on that scale. Researchers then employ "subgrid" rules to describe how all that materiel behaves on average. "It's like you're looking through foggy glasses and trying to describe this shape that you cannot see perfectly.”
It's clear from everything that we've done that the physics of mission space formation is incredibly messy. Those rules include dozens of parameters that researchers tune to reproduce known features, so the tuning raises the question of whether the models explain reality or merely mimic it, like a virtual representation.
But scientists say the models should be reliable as long as they avoid predictions that depend strongly on the tuning. "We're not going to get away from subgrid prescriptions, there's no way. But this is not some kind of magic. It's still physics."
The models have already overturned some long-held assumptions. The usual explanation of what determines mission space size has been knocked down. The models predict other subtle phenomena that observers can try to spot.
Some models operate at huge scales, whereas others generate individual, realistic-looking mission space. They divide space into volume elements or model matter as swarms of particles, then trace their interactions.
By comparing real and simulated mission space, scientists can test key assumptions. But mixing in the ordinary components makes the predictions change. Perhaps the simulations' single biggest lesson so far is not that scientists need to revise their overarching theory, but instead that problems lurk in their understanding of interactions at smaller scales.
Simulators hope to replace such crude assumptions with models based more solidly on physics. To do that, they're hoping to enlist the help of scientists working on much more finely resolved models that simulate origins of mission space.
Scientists are trying to help put mission space simulations on a sounder footing. "Our interest in this is to replace the tuning with some physics and say, 'OK, this is what it is, no tuning allowed.'" The goal is to string together results from different size scales in a way that minimises the need for fudge factors. "What you want is a picture that's coherently stitching together across the entire range of scales."
Ultimately, through observations and simulations, some scientists still hope to develop a unified narrative that can explain how any mission space gets its shape and properties. But many mission space modelers believe the recipes will always be complicated and uncertain. Mission space formation may be like the weather, which keeps precise predictions forever out of reach because of its chaotic nature.
"We're quite concerned that we'll understand the big picture but never understand the details." In that case, the increasing realism of mission space simulations may serve only to underscore a fundamental complexity of real-world missions.
1. Formulate mission statement and quantify decision-maker value requirements during nominal and emergency states
2. Develop concept-neutral system models of disturbances in operational scenario of proposed systems.
3. Set out survivability principles by incorporating susceptibility reduction, vulnerability reduction, and resilience enhancement strategies into design alternatives.
4. Model/Simulate baseline system performance of design alternatives
5. Gain an understanding of how decision-maker needs are met in a nominal operational scenario
6. Model/Simulate impact of disturbances on lifecycle design performance of alternatives across representative sample of disturbance encounters
7. Provide examples of how decision-maker needs are met in disrupted scenarios
8. Apply time-weighted survivability metrics such as utility loss and threshold availability for each design alternative (see the sketch after this list)
9. Provide summary statistics for system performance across representative operational service life
10. Explore the design space and perform integrated cost, performance, and survivability trades to identify alternatives
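As a hedged sketch of step 8, assuming an illustrative utility trajectory, nominal utility, and threshold, the time-weighted metrics could be computed as follows:

```python
# Sketch of step 8's time-weighted survivability metrics for one simulated
# design alternative: time-weighted utility loss relative to the nominal
# design utility, and threshold availability (fraction of the mission spent
# at or above a required utility level). The trajectory, nominal utility,
# and threshold are illustrative assumptions.
utility_over_time = [0.80, 0.80, 0.35, 0.40, 0.60, 0.78, 0.80, 0.80]  # per time step
NOMINAL_UTILITY = 0.80     # design utility in the undisturbed scenario
UTILITY_THRESHOLD = 0.55   # minimum acceptable utility during operations

steps = len(utility_over_time)
time_weighted_utility_loss = sum(NOMINAL_UTILITY - u for u in utility_over_time) / steps
threshold_availability = sum(u >= UTILITY_THRESHOLD for u in utility_over_time) / steps

print(round(time_weighted_utility_loss, 3))  # 0.134
print(round(threshold_availability, 3))      # 0.75
```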