A lot has changed. Simulation and modeling software allows organizations to create realistic, verifiable digital twins of their supply chains. Data mining techniques, along with sensor input, allow them to feed real-time data into these models. They can monitor what's happening in the real world and plan the appropriate corrective action.
A digital supply chain twin is a digital representation of an organization's actual supply chain. At its core is a digital supply chain model that's essentially the same as a prescriptive analytics model used for advanced supply chain modeling.
While prescriptive models work with real data, a digital supply chain twin takes this capability further in that inputs are fed into the model in real time. As a customer's order is processed, the order and associated transactions are automatically fed into the supply chain digital twin.
To truly reflect the real world, it is essential that inputs which affect the supply chain are taken into account. For example, if production machinery goes down or a supply ship is late, this information should be sent to the supply chain digital twin.
To avoid reliance on manual input, network devices can be used to detect that information. Provided the model has intelligence such as the duration of the outage or when the ship will dock, the supply chain digital twin can determine the right corrective action and support supply chain optimization.
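To make the idea concrete, here is a minimal sketch (not from the source; all names, events and thresholds are hypothetical) of a twin that ingests real-world events and derives a corrective action from its current picture of the world:

```python
from dataclasses import dataclass, field

@dataclass
class SupplyChainTwin:
    """Minimal digital-twin state: expected delay per resource, in hours."""
    delays: dict = field(default_factory=dict)

    def ingest(self, event: dict) -> None:
        """Feed a real-world event (e.g. from a network device) into the model."""
        self.delays[event["resource"]] = event["delay_hours"]

    def corrective_action(self, resource: str, threshold: float = 24.0) -> str:
        """Decide an action from the twin's current state (toy decision rule)."""
        delay = self.delays.get(resource, 0.0)
        if delay == 0.0:
            return "no action"
        return "reroute to backup supplier" if delay > threshold else "adjust schedule"

twin = SupplyChainTwin()
twin.ingest({"resource": "supply_ship_7", "delay_hours": 36.0})
print(twin.corrective_action("supply_ship_7"))   # -> reroute to backup supplier
```

In a real deployment the `ingest` calls would come from sensor or EDI feeds rather than hand-written dictionaries, and the decision rule would be a prescriptive model rather than a single threshold.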
Although the concept of a digital twin for the supply chain is relatively new, a number of major organizations have adopted it as part of their business model. It's being used in warehousing, electricity supply and supply chain networks.
In one example, a supply chain digital twin of a warehouse receives real-time data from the physical warehouse and continually tracks performance to identify optimal storage solutions.
In another example, energy companies can use a digital twin model to optimize pipeline nomination and allocate capacity. The energy market is complex and highly changeable. The model combines the organization's commercial arm with its logistics function, allowing better identification of short-term logistics optimization opportunities.
Digital twins can also be used to overcome problems balancing turbine load under varying weather conditions. One turbine supplier developed a digital twin so operators can monitor critical temperatures: virtual sensors on the digital twin compute the motor temperatures, helping maximize performance and prevent overheating.
A parts supply company had difficulty producing sufficient high-margin products to meet demand. Using a digital model of the business, they found that the logistics constraints believed to prevent economic manufacture were incorrect. As a result, they were able to meet demand, use the model to monitor performance, and identify ongoing improvements.
The concept of a digital twin isn't new; the approach has been in use for many years in product engineering, flight simulators and Formula 1 racetrack simulators. What is different is the concept of integrating real-time data and inputs from the real world.
It's possible thanks to the use of network devices, the digitization of business activities and the introduction of powerful prescriptive simulation and modeling capabilities.
Supply chain digital twins allow companies to understand their supply chain dynamics. They reveal previously hidden information and help dispel preconceived notions. The fact that delivery companies have identified numerous logistics and supply chain digital twin opportunities illustrates that the supply chain digital twin is more than just hype.
Some benefits associated with a digital supply chain twin include:
Dynamic optimization: Models aren't static, offering the ability to continuously optimize supply chain performance.
Greater insight: Digital twins offer greater insight into supply chain performance and its response to different inputs.
Supply chain transformation: The ability to understand the how and why of the supply chain opens up previously hidden value.
Deploying Digital Twin Supply Chains
When considering a digital twin supply chain solution, it's essential to evaluate how applications offered can meet your supply chain needs. What's absolutely critical is to be able to model your supply chain accurately.
For example, a digital twin should replicate constraints and be able to accommodate trade-offs. Digital twins need intelligence and the ability to analyze structured and unstructured data. They should be able to analyze multiple scenarios to provide the right actionable information.
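As a toy illustration of analyzing multiple scenarios against constraints, the sketch below (all product names, capacities and margins are hypothetical, not from the source) enumerates candidate production plans and keeps the best one that respects a capacity constraint:

```python
from itertools import product

CAPACITY_HOURS = 40
PRODUCTS = {                 # name: (hours per unit, margin per unit)
    "high_margin": (4, 300),
    "standard":    (2, 100),
}

def evaluate(plan: dict) -> tuple:
    """Return (feasible, total margin) for a plan mapping product -> units."""
    hours = sum(PRODUCTS[p][0] * n for p, n in plan.items())
    margin = sum(PRODUCTS[p][1] * n for p, n in plan.items())
    return hours <= CAPACITY_HOURS, margin

def best_plan(max_units: int = 10) -> tuple:
    """Brute-force scenario analysis: keep the best feasible plan."""
    best, best_margin = None, -1
    for units in product(range(max_units + 1), repeat=len(PRODUCTS)):
        plan = dict(zip(PRODUCTS, units))
        feasible, margin = evaluate(plan)
        if feasible and margin > best_margin:
            best, best_margin = plan, margin
    return best, best_margin

plan, margin = best_plan()
print(plan, margin)
```

A production-grade twin would use a proper optimization solver rather than enumeration, but the structure is the same: a constraint model, a set of scenarios, and an objective for ranking them.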
Modeling tools should allow you to revise and update digital twins easily. The programming language should offer excellent visibility into the digital supply chain twin, its structure and formulae.
It's for these reasons that optimization modeling and prescriptive analytics solutions are a core component of a supply chain digital twin.
Lessons learned through experience with Digital Twin efforts can provide insights for other efforts. Having many different Digital Twin approaches is appropriate, not conflicting, as there is no “one architecture to rule them all”.
A Digital Twin provides the bones onto which specifications will be developed. It drives the standards development process, but must also be matured based on inputs from the community and the standards body itself. Do not assume the digital twin is ever "done".
The work to form agreements on Digital Twin system requirements, architecture, design, acceptance criteria, etc., is often more about people and organizations than technology. Do not let a single vendor or organization define specifications in closed settings and then "pull them out of the hat."
Success in developing Digital Twin solutions relies strongly on creating an environment of trust, open exchange of ideas, and consensus in the community, and then doing the work to document and manage change in a design.
Our effort defines a network-based Digital Twin architecture and interface specifications for integrating electronic systems in military ground vehicles. It is being implemented in products and by vehicle programs, and the artifacts are being made available.
Digital Twin Configuration: Recording Product Supply Data Updates on a Blockchain to Speed DevOps Approvals
Digital Twin configuration is determined by the type of input data, the number of data sources and the defined metrics. The configuration determines the value an organization can extract from the Digital Twin: a twin with a richer configuration can yield better predictions than one with a sparser configuration.
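The three configuration dimensions named above could be captured in a simple structure; the `richness()` score below is a hypothetical stand-in for "higher configuration", not a metric from the source:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TwinConfiguration:
    """The three dimensions the text names: input types, sources, metrics."""
    input_types: frozenset   # kinds of input data fed to the twin
    data_sources: int        # number of distinct data sources
    metrics: frozenset       # metrics the twin is asked to track

    def richness(self) -> int:
        """Crude ordering: a 'higher' configuration scores higher."""
        return len(self.input_types) + self.data_sources + len(self.metrics)

low = TwinConfiguration(frozenset({"orders"}), 2, frozenset({"lead_time"}))
high = TwinConfiguration(frozenset({"orders", "sensor", "weather"}), 8,
                         frozenset({"lead_time", "fill_rate", "oee"}))
assert high.richness() > low.richness()
```

In practice the value of extra inputs depends on their quality and relevance, not just their count, so any real scoring would weight the dimensions.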
What if the product configuration data delivered via the DevOps method could be recorded on a blockchain? This would make the process simpler and more transparent for both sides, and would ultimately improve quality.
Blockchain and DevOps are growing and gaining wide acceptance. Each already offers many advantages alone, but together they can provide both tactical and strategic advantages.
The main idea centers on making product delivery chains (DevOps) more transparent and reliable by applying blockchain technology.
Configuration management is important in DevOps because it helps you automate otherwise tedious tasks and allows an organization to increase agility.
Moreover, configuration management supports the DevOps big picture, and it’s widely agreed upon that configuration management is essential to DevOps, as opposed to being just another component of the process.
Configuration management takes on the primary responsibility for three broad categories required for DevOps transformation: identification, control, and audit processes.
Seeing configuration management as separate and aside from DevOps creates problems.
Comprehensive configuration management is essential to a properly functioning DevOps organization. It lays the groundwork for far more automation than it impacts directly, so enterprise businesses communicate better and function as a more agile development unit focused on continuous integration and continuous delivery.
By manipulating simple configuration files, a DevOps team can apply application development best practices such as version control, testing, small deployments, and design patterns. In short, this means code can be written to provision and manage an infrastructure as well as to automate processes.
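One way to picture "configuration data recorded on a blockchain" is an append-only, hash-linked log. The toy sketch below (a hypothetical illustration, not any real blockchain platform or the source's implementation) shows why tampering with an earlier configuration change is detectable:

```python
import hashlib
import json

def _hash(block: dict) -> str:
    """Deterministic SHA-256 digest of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class ConfigLedger:
    """Append-only, hash-linked log of configuration changes (toy blockchain)."""
    def __init__(self):
        self.chain = [{"index": 0, "prev": "0" * 64, "change": "genesis"}]

    def record(self, change: str) -> dict:
        """Append a change, linking it to the hash of the previous block."""
        block = {"index": len(self.chain),
                 "prev": _hash(self.chain[-1]),
                 "change": change}
        self.chain.append(block)
        return block

    def verify(self) -> bool:
        """Any edit to an earlier block breaks the hash links that follow it."""
        return all(self.chain[i]["prev"] == _hash(self.chain[i - 1])
                   for i in range(1, len(self.chain)))

ledger = ConfigLedger()
ledger.record("db_pool_size: 10 -> 20")
ledger.record("feature_flag.fast_ship: enabled")
print(ledger.verify())   # -> True
ledger.chain[1]["change"] = "db_pool_size: 10 -> 99"   # tamper with history
print(ledger.verify())   # -> False
```

A real deployment would add timestamps, signatures and distributed consensus; the hash linking alone only guarantees that tampering is visible, not who is allowed to write.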
What makes the Digital Twin concept important is that computers have greater predictive skill than humans working with physical prototypes. "The digital twin is a good definition of what we do. It's a virtual representation of a physical product or operation.
The point is the precision we can reach. A lot of our technology acquisitions have been in simulation, test, and electronics. If we can improve the precision of the models we’re creating, we can predict performance.”
Performance Digital Twin. The Digital Twin for performance is based on tools that enable insight discovery, analysis and monitoring of in-service products and production systems. Performance analytics quickly identifies product issues disrupting the supply chain, manufacturing process or customer experience.
The performance Digital Twin may also include data analysis to discover hidden product issues before they occur; graphical displays to clearly identify potentially problematic configurations; and automated data monitoring to fine-tune operations and provide insight for improving your products.
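A minimal sketch of the kind of automated data monitoring described above: flag readings that deviate sharply from a trailing window. The data, window size and threshold are hypothetical, chosen only for illustration:

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=5, k=3.0):
    """Return indices of readings more than k standard deviations
    away from the mean of the preceding `window` readings."""
    flags = []
    for i in range(window, len(readings)):
        base = readings[i - window:i]
        mu, sigma = mean(base), stdev(base)
        if sigma and abs(readings[i] - mu) > k * sigma:
            flags.append(i)
    return flags

# Hypothetical motor temperatures streamed from an in-service product.
temps = [70.1, 70.4, 69.8, 70.2, 70.0, 70.3, 70.1, 88.5, 70.2]
print(flag_anomalies(temps))   # flags the index of the 88.5 spike
```

Production performance twins typically use richer models (seasonality, multivariate sensors), but the pattern of comparing live data against a model-derived baseline is the same.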
Lessons Learned: Architecture
1. Define "architecture" clearly
2. Identify elements and interfaces, and vet with the stakeholders
3. Common language: non-ambiguous, agreed-upon terms
4. Domain-specific, not generic, is best
5. Select the modular elements and level of abstraction wisely
6. The line between architecture and design will vary depending upon the domain
7. Which elements are modular should be based on the goals, e.g. defining "on-the-wire" messaging
8. Define scope and stick to it; consider problems driving cost
9. Provide a legacy system strategy
10. Show how existing equipment adapts
11. Provide a pathway from the current practice to the objective architecture
Lessons Learned: Specifications
1. Agreements, not technology
2. Resist the urge to find a new technology to solve the problem
3. Develop trust and agreements between stakeholders.
4. Engineers and scientists must show patience in development
5. Design specification must come first.
6. Good enough trumps optimized
7. Explicitly define and make acceptance tests central, early and often
8. Formalize interface specifications
9. Information exchanges: data content and functions first, format and encoding later
10. Get the logical aspects down first, then move to details.
Lessons Learned: Organization
1. DoD holds the vision, works with a “core team” to define architecture and set up organization, and promotes to the community of interest
2. Steering body: leaders from the “customer base”, adopters who will eventually write contracts for and accept delivery of elements, help guide
3. Architecture lead: pin the rose on someone with experience and respect from community
4. Independent core support team:
5. Paid & contractually held to deliverables
6. No perceived “stake” in the decisions
7. Include domain experts
8. Specifications require input from domain experts for each type of interface
9. Specify as little internal component behavior as possible to allow for innovation
10. Data models are important, and must be non-ambiguous
Lessons Learned: Trust
1. Environment of trust
2. Ensure that all have access to the same information
3. Common portal or document base with as wide access as is feasible
4. Hold “open” meetings with documented minutes and actions
5. Ask opinions in open environment
6. Consensus-based process
7. Consensus can only be achieved if all parties feel heard, and you are willing to be flexible without sacrificing vision
8. No “formula” for consensus, but you know it when you get it
9. Ensure all decisions are supported and documented, and shared with all
10. Ensure that the consensus opinion results in feasibility/validation
Lessons Learned: Process
1. Plan for Validation: quality control on specifications
2. “Validation” step in the process before “publishing” specifications
3. Different individuals developing and evaluating/validating specifications
4. Open, documented validation plan and results
5. Develop acceptance criteria and a test plan to identify defects/holes during validation
6. Compliance/Conformance Verification Plan: a plan for determining whether implementations comply/conform to specifications
7. Provide detailed compliance test procedures and documentation expectations as part of the process
8. Plan for if and how compliance will be certified
9. Provide reference implementation and any tools developed during validation to the community
10. Listen to domain experts, and put together “task teams” guided by core team to solve problems
Lessons Learned: Take-Aways
1. Architecture: Choose modular elements wisely, manage scope
2. Define legacy system strategy
3. Specifications: Concentrate on agreements, not technology
4. Tests/acceptance criteria are crucial, and eventually embody the specifications
5. Organization: Government and industry buy-in
6. Independent core technical support team
7. Environment of Trust: Openness to all, and consensus process
8. Process: Mature specifications through validation
9. Plan for verification of compliance /conformance with tests
10. Provide reference implementation and tools
Lessons Learned: Pitfalls
1. Getting bogged down in developing new tools: content and common understanding is the goal
2. Trying to solve problems not driving cost in the domain of interest
3. Being too general: attempting to teach domain to think in software terms
4. Being a ”first adopter” of a new, unproven technology, tool, or approach
5. Underestimating the need and cost of defining acceptance criteria and tests up-front
6. Allowing a single vendor to provide a specification.
7. Even the best specifications defined by vendors will be suspect
8. Unrealistic cost/benefit and schedule plan
9. A DoD office lacking deep technical knowledge must still engage with and understand technical details as much as possible
10. Failing to find the balance between under-specification and over-specification