There are similarities between using agent models to predict manoeuvres and using Digital Twins to prevent machine breakdowns, failures, under-performance and unplanned downtime. Agent-based models comprise sub-systems that simulate the simultaneous operations and interactions of multiple agents in an attempt to re-create and predict the performance of complex actions.
Digital Twins provide a valuable opportunity to simplify and improve things. It is not just a question of gathering more data, but rather of turning that data into useful insights. To take one example, countless sensors installed throughout an average plant measure values such as pressure, temperature or flow rate. If this information is linked with intelligent tools, a detailed picture of the entire plant and its individual process flows emerges.
The agent-based model approach starts from lower-level (micro) sub-systems that are connected to create a more complex (macro) entity. A combination of simple behavioural rules can be used to predict the behaviour of complex systems.
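A minimal sketch can make this micro-to-macro idea concrete. The toy model below is hypothetical (not drawn from the text): each agent on a ring applies one simple local rule, adopting the majority state of itself and its two neighbours, yet a stable macro-level pattern emerges from purely local decisions.

```python
import random

def step(states):
    """Each agent applies one simple rule: adopt the majority state of
    itself and its two neighbours on a ring."""
    n = len(states)
    return [1 if states[i - 1] + states[i] + states[(i + 1) % n] >= 2 else 0
            for i in range(n)]

def run(states, max_steps=100):
    """Iterate the local rule until the macro-level pattern stops changing."""
    for _ in range(max_steps):
        nxt = step(states)
        if nxt == states:
            return nxt
        states = nxt
    return states

random.seed(1)
population = [random.randint(0, 1) for _ in range(20)]
settled = run(population)   # blocks of agreement emerge from purely local rules
```

The point of the sketch is that no agent knows, or needs to know, the global pattern; prediction of the macro behaviour follows from simulating the micro rules.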
In the future, condition-based monitoring will allow agents to identify incidents before they occur. Intelligent forecasting will also ensure that spare parts can be ordered in good time. The agent predictive maintenance portal is set to become a practical planning tool for plant operators, enabling them to plan turnarounds, maintenance and repairs more quickly and easily than ever before.
Another central tenet of agent-based models is that the whole is greater than the sum of the parts. Individual agents are typically characterised as boundedly rational, presumed to act in what they perceive as their own interests, and use simple decision-making rules through which they learn and adapt.
Given the current state of an asset, the Digital Twin model uses predictive learning technology to proactively identify potential asset failures before they occur. Using artificial intelligence with advanced process control, control strategy design and process optimisation, the necessary variations from process and asset design are fed back to the engineering stage of the lifecycle enabling a complete and efficient digital value loop.
To enable the Digital Twin architecture, a spatial graph comprising the distances/similarities between assets is formed and stored in the multi-agent platform. As the system operates, inter-asset similarities are recalculated at regular intervals, and the partner zone is updated accordingly.
Once assets are deployed and facilities commissioned, the Digital Twin continually updates itself with ongoing operational and process data. During the operational stages, similarities and variations from the optimal process and asset design are captured at run-time, and the Digital Twin is updated with this information.
Similarity may be calculated from a variety of indicators such as feature data, machine type and field data. Since the multi-agent platform is the common channel for this data, it is best placed to calculate similarity metrics, typically through enterprise-level clustering tools.
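One simple way to realise the spatial graph of inter-asset similarities is cosine similarity over per-asset feature vectors, keeping only edges above a threshold. The sketch below is a minimal illustration under assumed features (pressure, temperature, flow); asset names, feature choices and the threshold are all hypothetical, and in practice features would be normalised before comparison.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def similarity_graph(features, threshold=0.99):
    """Edges connect asset pairs whose feature vectors are sufficiently
    similar; the edge weight is the similarity itself."""
    edges = {}
    names = list(features)
    for i, u in enumerate(names):
        for v in names[i + 1:]:
            s = cosine(features[u], features[v])
            if s >= threshold:
                edges[(u, v)] = round(s, 3)
    return edges

# Hypothetical raw features: pressure (bar), temperature (C), flow (m3/h)
assets = {
    "pump_a": [0.9, 70.0, 1.2],
    "pump_b": [0.95, 72.0, 1.1],
    "fan_c":  [0.1, 35.0, 9.0],
}
graph = similarity_graph(assets)
```

Recalculating this graph at regular intervals, as the section describes, amounts to re-running `similarity_graph` over the latest field data and replacing the stored edge set.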
Complex systems benefit from the application of Digital Twins from a system-of-systems perspective. Having multiple instances of a single product, each with its own Digital Twin communicating with all the others, means that products can begin to learn from each other.
The aggregate knowledge that a Digital Twin represents can augment the capabilities of trained operators, allowing them to be more efficient and effective without having to manually collect and crunch the data before making major decisions. Digital Twins therefore allow technology and operators to work together while letting each focus on what it does best.
The digital thread refers to the communication framework that allows a connected data flow and an integrated view of the asset’s data throughout its lifecycle, across otherwise constrained functional perspectives. The digital thread concept raises the bar for delivering “the right information to the right place at the right time.”
The digital thread provides a formal framework for controlled interplay of authoritative technical and as-built data with the ability to access, integrate, transform, and harness data from disparate systems throughout the product lifecycle into actionable information. Together, the digital thread and Digital Twin include as-designed requirements, validation and calibration records, as-built data, as-flown data, and as-maintained data.
The manufacturing system proposed here represents a real mission space: it produces anchoring plates for electric-motor brake discs. Final products are produced by three machines, which are not fully automated; manual work is required to move parts from one machine to another.
This production cycle can be defined as an intermittent one: there is no direct communication between the machines, so continuous operator support is needed.
The first two machines, which perform the milling and grinding work, do not require continuous communication because there are no sequential dependencies between them, so two different products can be processed at the same time.
By contrast, machines 2 and 3 need to be linked, since they have to process all products. Products can therefore be worked flexibly upstream, but the operations performed by the second and third machines are mandatory for every product.
What’s more, the manual work required between machines 2 and 3 has to be considered, and it means operators must control two machines at the same time. This becomes particularly relevant for products that need manual loading in grinding and the constant presence of an operator near the machine to resolve any issues.
As a result, machines may sit idle until an operator moves parts between them, and operators perform work that could be carried out automatically by the machines themselves. To improve this situation, an automatic transport system has been introduced to connect the machines without operator intervention.
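A toy flow-line model can illustrate what the transport system buys. The sketch below is a minimal, hypothetical two-stage model (milling then grinding) with an unbounded buffer in between; the processing and transfer times are illustrative only. Setting the transfer time to zero models the automatic transport system.

```python
def makespan(n_parts, mill=4.0, grind=3.0, transfer=2.0):
    """Two-stage flow line: milling then grinding. 'transfer' is the time
    an operator needs to move each part between the machines (0 with an
    automatic transport system). Parts queue in front of the grinder."""
    mill_free = 0.0    # time at which the mill next becomes available
    grind_free = 0.0   # time at which the grinder next becomes available
    finish = 0.0
    for _ in range(n_parts):
        mill_done = mill_free + mill
        mill_free = mill_done
        arrive = mill_done + transfer          # part in transit to grinder
        start = max(arrive, grind_free)        # grinder may still be busy
        finish = start + grind
        grind_free = finish
    return finish

manual = makespan(10, transfer=2.0)     # operator moves each part
automatic = makespan(10, transfer=0.0)  # conveyor removes the transit delay
```

Even in this crude model the conveyor shortens the makespan; a fuller discrete-event simulation would also capture the operator's unavailability while attending two machines at once.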
The reliability model addresses the issues of accurate sensing, parallel actions, action conflicts and the efficient distribution of the resulting shared simulation state. The core of the concurrent logistics processes is assessed, including the rollback problem, virtual time local to each agent, load balancing and the implementation of interest administration.
Distributed problem solving is the name applied to a subfield of distributed AI in which the emphasis is on getting agents to work together well to solve problems that require collective effort.
Due to the inherent distribution of resources such as knowledge, capability, information and expertise among the agents, an agent in a distributed problem-solving system is unable to accomplish its tasks alone, or at least can accomplish them better (i.e., more quickly, completely, precisely or certainly) when working with others.
Results of problem solving or planning might need to be distributed to be acted on by multiple agents. For example, in a task involving the delivery of objects between locations, distributed delivery agents can act in parallel. The formation of the plans that they execute could itself involve distributed problem-solving among them.
Moreover, during the execution of their plans, features of the environment that were not known at planning time, or that unexpectedly change, can trigger changes in what the agents should do.
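A minimal allocation-and-repair loop can sketch this idea. The code below is hypothetical (names such as `depot_a` are illustrative, not from the source): deliveries are assigned round-robin at planning time, and each agent independently revises its own route when it discovers at execution time that a stop is closed, something unknown when the plan was formed.

```python
def plan(deliveries, n_agents):
    """Naive round-robin allocation of delivery tasks to agents."""
    plans = [[] for _ in range(n_agents)]
    for i, d in enumerate(deliveries):
        plans[i % n_agents].append(d)
    return plans

def execute(route_plan, closed):
    """Run-time plan repair by a single agent: stops found closed are
    deferred to the end of the route instead of aborting the whole plan."""
    route, deferred = [], []
    for stop in route_plan:
        (deferred if stop in closed else route).append(stop)
    return route + deferred

plans = plan(["depot_c", "depot_a", "depot_b", "depot_d"], n_agents=2)
# "depot_c" turns out to be closed -- a fact unknown at planning time.
routes = [execute(p, closed={"depot_c"}) for p in plans]
```

Each agent repairs its plan unilaterally here; richer schemes would let agents hand deferred stops to one another with limited communication.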
External operational conditions can be tracked by onboard sensors, and this information on operational factors is invaluable to agents because it provides operational context that would otherwise be unavailable.
For example, if two products are otherwise used and maintained in similar fashion but one keeps failing regularly, it might be of interest to the agents that the consistently failing product is being used, for example, on aircraft at very high elevations.
All such decisions could be routed through a central coordinator, but for a variety of reasons (e.g., exploiting parallelism, sporadic coordinator availability, slow communication channels) it could be preferable for the agents to modify their plans unilaterally or with limited communication among them.
A potential advantage is that the technology can be exploited in a wide range of application areas. So far, only in the areas of process control and distributed databases have some of the promises of distributed processing been realised.
Applications in these areas are characterised by task decompositions in which the data can be partitioned so that each subtask is performed completely by a single node, without needing to see the intermediate states of processing at other nodes.
Reliable and flexible redundant communication paths, together with an incremental design that permits nodes to be added one at a time, enable enhanced responsiveness through parallel sensing devices.
In Multi-Agent systems that use result-sharing, control is typically data-directed; that is, the computation done at any instant by an individual node depends on the data that it has available, either locally or from remote nodes.
An explicit hierarchy of task–subtask relationships does not exist between individual nodes. A simple example of the use of result-sharing is the development of consistent labels for a “blocks world” line drawing showing the edges of a collection of simple objects (e.g., cubes, wedges and pyramids) in a scene.
Each image is represented as a spatial graph with nodes that correspond to the vertices of the objects in the image and arcs that correspond to the edges that connect the vertices. The goal is to establish a correspondence between nodes and arcs in the graph and actual objects.
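The consistent-labelling step can be sketched as constraint propagation over the scene graph. Below is a minimal AC-3-style pruning loop under toy assumptions (three junctions in a chain, two candidate edge labels "+" and "-", adjacent junctions must agree); the labels and compatibility relation are illustrative, not the full Waltz junction catalogue.

```python
def revise(domains, allowed, x, y):
    """Drop labels of x that have no compatible label in y's domain."""
    removed = False
    for lx in list(domains[x]):
        ok = any((lx, ly) in allowed or (ly, lx) in allowed
                 for ly in domains[y])
        if not ok:
            domains[x].discard(lx)
            removed = True
    return removed

def propagate(domains, edges, allowed):
    """AC-3-style propagation over undirected edges until no domain shrinks."""
    queue = [(x, y) for x, y in edges] + [(y, x) for x, y in edges]
    while queue:
        x, y = queue.pop()
        if revise(domains, allowed, x, y):
            # x's domain shrank: recheck x's other neighbours against x
            for a, b in edges:
                if a == x and b != y:
                    queue.append((b, x))
                elif b == x and a != y:
                    queue.append((a, x))
    return domains

# Toy scene graph: junction A is observed to be convex ("+"); adjacent
# junctions must carry matching labels ("+" convex, "-" concave).
domains = {"A": {"+"}, "B": {"+", "-"}, "C": {"+", "-"}}
result = propagate(domains, [("A", "B"), ("B", "C")],
                   {("+", "+"), ("-", "-")})
```

In a result-sharing system, each node would run `revise` locally for its own junction and broadcast its shrunken domain to its neighbours; the sketch runs the same pruning in one process.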
The ability of agents to deal with complex, changing structures means that computers can now be applied to direct systems, such as networks of trading partners, that formerly required extensive manual attention. The increased complexity that agents can handle also extends the scope of operational problems to which the agent approach can be applied.
Both performance data and external factors can be communicated in real time back to the agents to improve the Digital Twin model and its simulation factors. The Digital Twin can then crunch the operational data and predict failures when it sees data points outside prescribed tolerances.
For example, a circuit board might be seeing higher-than-expected operating temperatures, or motors might be experiencing an unusually high number of stop–start cycles. The Digital Twin could determine with some level of confidence that the part will fail shortly and direct agents to take a series of approved actions, such as placing an order.
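A minimal sketch of this tolerance check: the fraction of recent readings outside the prescribed band stands in for the twin's failure confidence, and a threshold maps it to a pre-approved action. The temperature values, band and threshold are all hypothetical placeholders.

```python
def failure_risk(readings, lower, upper):
    """Fraction of recent readings outside the prescribed tolerance band;
    a crude stand-in for the twin's confidence that the part will fail."""
    out = sum(1 for r in readings if not lower <= r <= upper)
    return out / len(readings)

def approved_action(risk, threshold=0.3):
    """Map the risk estimate to one of the pre-approved agent actions."""
    if risk >= threshold:
        return "order_replacement_part"
    return "continue_monitoring"

# Hypothetical circuit-board temperatures (degrees C); band is 60-80.
temps = [71, 72, 70, 86, 88, 90, 87, 73]
risk = failure_risk(temps, lower=60, upper=80)
action = approved_action(risk)
```

A production twin would of course use a trained model rather than a raw out-of-band fraction, but the shape of the loop, from sensed data to risk estimate to approved action, is the same.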
The information each agent requires for the unique-identifier method is more local than what is needed for uncoupled backtracking. In coupled backtracking, agents must act in sequential order, and that order cannot be obtained simply by giving each agent a unique identifier.
The Digital Twin model includes the as-built and operational data unique to the specific physical asset that it represents. For example, for an aircraft, the Digital Twin would be tied to the physical product's unit identifier, which is referred to as the tail number.
Each agent must know the previous and next agent, so all other agents must be polled to find the closest identifiers above and below its own. Conversely, in the unique-identifier method for uncoupled backtracking, each agent has to know only the identifiers of the agents it must establish a constraint with in order to direct that constraint.
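The neighbour-discovery step described above can be sketched in a few lines: given its own identifier and the identifiers gathered by polling the other agents, an agent picks the closest identifier below it (previous) and above it (above). The identifier values here are arbitrary illustrations.

```python
def neighbours(my_id, all_ids):
    """From polled identifiers, find the closest identifier below
    (previous agent) and above (next agent) this agent's own."""
    below = [i for i in all_ids if i < my_id]
    above = [i for i in all_ids if i > my_id]
    prev_agent = max(below) if below else None
    next_agent = min(above) if above else None
    return prev_agent, next_agent

ids = [17, 4, 42, 8, 23]          # identifiers reported by all agents
prev_a, next_a = neighbours(17, ids)
```

The endpoints of the ordering receive `None` on one side, which is how an agent learns it is first or last in the sequential order.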
Not only do Digital Twins improve future innovation and product development efforts, they also build a stronger relationship between agent teams. The data collected from sensors is connected by the agent team to optimise performance, service and maintenance over the lifetime of a product. The Digital Twin can help organisations avoid costly downtime, repairs and replacements, and stay ahead of other performance issues.
Some features in the field of Machine Learning are well suited to characterising both centralised and decentralised learning approaches. Others are particularly, or even exclusively, useful for characterising decentralised learning, where the degree of distribution concerns how the learning process is distributed and parallelised.
At one extreme, a single agent carries out all learning activities sequentially; at the other, the learning activities are distributed over, and run in parallel across, all agents in a multi-agent system.
Interaction-specific features for classifying the interactions required by a decentralised learning process include:
--Level of interaction, ranging from pure observation, through simple signal passing and information exchange, to complex dialogues and negotiations;
--Persistence of interaction, ranging from short-term to long-term;
--Frequency of interaction, ranging from low to high;
--Pattern of interaction, ranging from completely unstructured to strictly hierarchical;
--Variability of interaction, ranging from fixed to changeable.
Digital Twins are now a practical reality: platforms bridge the gap between the digital and physical worlds. How does it work? Smart connected products and smart connected operations interact with an agent-based system that receives and processes all the data monitored by sensors. Using the data captured by the sensors, the simulation model, or Digital Twin, is continuously updated and gives agents the insight they need to improve future product development efforts.
There may be situations where learning requires only minimal interaction (e.g., observation of another agent for a short time interval), while other learning situations require maximal interaction (e.g., iterated negotiation over a long time period). There are also involvement-specific features.
Examples of features that can be used to characterise the involvement of an agent in a learning process include the relevance of its involvement and the role it plays during involvement. With respect to relevance, two extremes can be distinguished: the involvement of an agent is not a condition for goal attainment, because its learning activities could equally be executed by another available agent; or, to the contrary, the learning goal could not be achieved without the involvement of exactly this agent.
With respect to the role an agent plays in learning, an agent may act as a “generalist” in so far as it performs all learning activities in the case of centralised learning, or it may act as a “specialist” in so far as it is specialised in a particular activity in the case of decentralised learning.
Goal-specific features characterising learning in multi-agent systems with respect to the learning goals are the type of improvement achieved by learning, and the compatibility of the learning goals pursued by the agents.
The first feature leads to the important distinction between learning that aims at an improvement with respect to a single agent (e.g., its motor skills or inference abilities) and learning that aims at an improvement with respect to several agents acting as a group (e.g., their communication and negotiation abilities or their degree of coordination and coherence). The second feature leads to the important distinction between conflicting and complementary learning goals.
Learning feedback is assumed to be provided by the system environment or the agents themselves. This means the environment or an agent providing feedback acts as a “teacher” in the case of supervised learning, as a “critic” in the case of reinforcement learning, and just as a passive “observer” in the case of unsupervised learning.
These features characterise learning in multi-agent systems from different points of view and at different levels. In particular, they have a significant impact on the requirements placed on the abilities of the agents involved in learning, and many combinations of different values for these features are possible.
Case studies provide concrete learning scenarios (e.g., examples known from everyday life), their characterising features, and an indication of how easy or difficult they would be to implement. The following learning methods or strategies used by an agent are usually distinguished:
1. Rote learning, i.e., the direct implantation of knowledge and skills, without requiring further inference or transformation by the learner
2. Learning from instruction and by advice taking, i.e., the operationalisation and transformation of instructions into an internal representation, and their integration with prior knowledge and skills
3. Learning of new information, i.e., assimilating an instruction or a piece of advice that is not directly executable by the learner
4. Learning from examples and by practice, i.e., the extraction and refinement of knowledge and skills (such as a general concept or a standardised pattern of motion) from positive and negative examples or from practical experience
5. Learning by analogy, i.e., the solution-preserving transformation of knowledge and skills from a solved problem to a similar but unsolved one
6. Learning by discovery i.e., gathering new knowledge and skills by making observations, conducting experiments, and generating and testing predictions on the basis of the observational and experimental results
A further feature is the learning feedback, which indicates the performance level achieved so far and leads to the following distinctions:
7. Supervised learning, i.e., the feedback specifies the desired activity of the learner, and the objective of learning is to match this desired action as closely as possible
8. Reinforcement learning, i.e., the feedback only specifies the utility of the actual activity of the learner, and the objective is to maximise this utility
9. Unsupervised learning, i.e., no explicit feedback is provided, and the objective is to find useful and desired activities on the basis of trial-and-error and self-organisation processes
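The reinforcement case above can be sketched with a standard epsilon-greedy bandit learner, where the critic reports only the utility (reward) of the chosen action, never the correct action itself. The payoff probabilities below are hypothetical.

```python
import random

def bandit(payoffs, steps=2000, eps=0.1, seed=0):
    """Epsilon-greedy reinforcement learning on a multi-armed bandit:
    feedback is only the reward of the action actually taken."""
    rng = random.Random(seed)
    counts = [0] * len(payoffs)
    values = [0.0] * len(payoffs)   # estimated utility of each action
    for _ in range(steps):
        if rng.random() < eps:
            a = rng.randrange(len(payoffs))                       # explore
        else:
            a = max(range(len(payoffs)), key=values.__getitem__)  # exploit
        reward = 1.0 if rng.random() < payoffs[a] else 0.0        # critic
        counts[a] += 1
        values[a] += (reward - values[a]) / counts[a]   # incremental mean
    return values

estimates = bandit([0.2, 0.8, 0.5])   # learner never sees these payoffs
```

Under supervised learning the teacher would reveal the best arm directly; under unsupervised learning there would be no reward signal at all, only the agent's own organisation of its observations.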