Marine  Magnet Dispatch Service Centre

Top 50 Systems "Digital Twin" Engineering Work Includes Readiness/Reliability Model Simulation Activity

5/20/2018


 
Digital Twin Simulation in the form of virtual reality has now allowed some reliability/maintainability characteristics of a design to be "evaluated" before any model or prototype is even constructed.

Although not a total substitute for testing, virtual reality simulation can reduce the amount of time required by allowing some reliability aspects of the design to be verified with reduced, or even no, maintainability demonstration and testing.

Digital Twin simulations have definite applications for designing reliable equipment. For example, based on Digital Twin design status updates, a virtual copy of the product can be "produced."

Maintainability engineers can then enter a virtual design space where maintenance can be "performed" on the product.

Accessibility of components, whether an item fits in an allocated space, and the approximate time required to perform specific maintenance actions all can be evaluated using Digital Twin Simulations.
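
As a rough illustration of how such virtual maintainability reviews can feed time estimates, the sketch below runs a Monte Carlo estimate over hypothetical task-step times. All step names and triangular (min, most likely, max) durations are invented for the example, not drawn from any real programme:

```python
import random

# Hypothetical removal/replace task evaluated in a virtual design walkthrough.
# Step times in minutes as (min, most likely, max) -- illustrative only.
TASK_STEPS = {
    "gain_access":  (2.0, 5.0, 12.0),
    "remove_item":  (3.0, 6.0, 10.0),
    "replace_item": (3.0, 7.0, 15.0),
    "close_access": (2.0, 4.0, 8.0),
}

def simulate_task_time(steps, trials=10_000, seed=1):
    """Monte Carlo estimate of mean and 90th-percentile total task time."""
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        # random.triangular takes (low, high, mode)
        total = sum(rng.triangular(lo, hi, mode) for lo, mode, hi in steps.values())
        totals.append(total)
    totals.sort()
    mean = sum(totals) / len(totals)
    p90 = totals[int(0.9 * len(totals))]
    return mean, p90

mean, p90 = simulate_task_time(TASK_STEPS)
print(f"mean task time: {mean:.1f} min, 90th percentile: {p90:.1f} min")
```

A review team could compare such estimates across candidate layouts before any physical prototype exists.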

Virtual copies of support equipment can be evaluated by "performing" maintenance activities with them. Digital Twin Simulation reliability updates could allow technicians to view virtual information panels "superimposed" using augmented reality techniques on the actual equipment.

In general, Digital Twin Simulations can be used by reliability engineers to assess reach, access, field of view, posture, and activity timing. In addition to designing for maintainability, Digital Twin Simulations have many potential training applications. Maintenance and manufacturing procedures, especially procedures that are seldom performed or are difficult to teach using conventional approaches, can be taught using Digital Twin Simulations.

Digital Twin Simulations could also be used to train individuals in performing difficult-to-anticipate procedures. For example, Infantry Troops can now "perform" operations without actually using any physical tools or conducting live operations.

As has been the case with previous new technologies, the possible uses of Digital Twin Simulations cannot be fully appreciated or anticipated. As virtual reality interest moves forward, the applications related to design for reliability will certainly increase in number and in fidelity.
 
Equipment is out in the field, it’s out with the troops, they’re giving the feedback. The engineers developing the systems are there right with them, so they are using it and giving very good feedback.

They get it down, they give special operators a chance to work with the equipment. They touch it, they use it, and then we can quickly turn those things around.

Getting the input of users is extremely important, or you won’t have a reliable system that they’re going to use when it gets fielded.

It’s similar to a special operations model, which has been an inspiration in the effort to streamline acquisition of system reliability as a whole.

The Site Visit Executive’s job will be to focus on defining requirements as precisely and realistically as possible from the start, and then to experiment to see what works before committing resources to reliability efforts.

There are going to be trade-offs, and the reporting lines may change, to make sure we get accurate optimisation of Digital Twin Simulations.

Everybody recognises we have reached a point of action for reconstituting readiness: we’ve got to just move forward with smart reliability models. It’s time to move from this industrial-age system to a modernised system utilising Digital Twin Simulations.

1. The programme must use a digital model to develop depictions of the system to support all programme uses, including requirements assessments, architecture, design and cost trades; design evaluations; optimisations; system, subsystem, component, and subcomponent definition and integration; cost estimations; training aids and devices development; developmental and operational tests and sustainment.

2. Models and simulations must be used, to the greatest extent possible, in systems engineering and program/project risk management; cost and schedule planning; and providing critical capabilities to effectively address issues in areas including but not limited to interoperability, joint operations, and systems of systems across the entire acquisition life cycle.

3. The responsibility for planning and coordinating programme use of models, simulations, tools, metrics, and the engineering job sites belongs to the programme administration; the performance of the actual tasks may be delegated to the programme systems engineer and other programme staff as appropriate.

4. Programmes should identify and maintain model-centric technology, methodology/approach, and usage, preferably in a digital format (e.g., digital system models) that integrates the authoritative technical metrics and artifacts generated by all stakeholders throughout the system life cycle. Unless impractical, the programme should develop the digital system models using standard model representations, methods, and underlying information structures.

5. The digital system models are a collaborative product of systems engineering and design engineering efforts. The programme should construct the digital system models by integrating metrics consumed and produced by the activities across and related to the programme.

6. The digital system models must include, but should not be limited to, the technical baseline, parametric descriptions, behaviour definitions, internal and external interfaces, form, structure, and cost. This information must be traced at a minimum from operational capabilities through requirements, design constructs, test, training, and sustainment. The programme should validate the digital system models baseline at appropriate technical milestones.

7. Systems engineers should use models to define, understand, evaluate, communicate, and indicate the project scope, and to maintain an authoritative source about the system. When captured digitally, the system model may be used to produce technical documentation and other artifacts to support programme decisions. It is expected that a properly managed, digitally based system model will be more accurate, consistent, and sharable.

8. Models, simulations, tools, methodology, and data employed in acquisition activities must have an established level of trust, and the programme must use them with an acknowledged level of risk appropriate to the application. The development of models, the construction of simulations, and the use of these assets to perform programme definition and development activities supporting the Materiel Development Decision requires collaboration among all project stakeholders and is led by the systems engineer.

9. The programme directorate must ensure sufficient training in the appropriate use of models, simulations, tools, data, and the engineering job site. The programme must identify metrics that show the link between training and the appropriate use of activities that result in benefits to the programme, especially in the areas of early identification of defects, cost avoidance, and risk reduction.

10. The programme should update the digital system models throughout the programme life cycle and maintain configuration management, i.e., version controls. These updates will provide continuity among all programme stakeholders, including the programme model developers, simulation users, and other engineering and programme administration activities.
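
A minimal sketch of the configuration-managed digital system model baseline described in items 6 and 10 might look like the following. Class names, fields, and figures are illustrative assumptions, not a programme standard:

```python
from dataclasses import dataclass, field

@dataclass
class ModelBaseline:
    """One configuration-managed version of the technical baseline metrics."""
    version: int
    metrics: dict  # e.g. requirements counts, interfaces, cost figures

@dataclass
class DigitalSystemModel:
    name: str
    history: list = field(default_factory=list)

    def commit(self, metrics):
        """Record a new baseline version (simple version control)."""
        version = len(self.history) + 1
        self.history.append(ModelBaseline(version, dict(metrics)))
        return version

    def baseline(self, version=None):
        """Return the latest, or a specific, baseline."""
        return self.history[-1] if version is None else self.history[version - 1]

dsm = DigitalSystemModel("Drone Swarm C2")
dsm.commit({"requirements": 120, "interfaces": 14})
dsm.commit({"requirements": 124, "interfaces": 15})
print(dsm.baseline().version)
```

Keeping every baseline in the history is what lets stakeholders trace an updated model back to the version validated at an earlier technical milestone.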

Top 10 Evaluation Summary of Reliability Detection Models Predictions Included with Element Rationale

Reliability critical elements of the item extracted from the reliability modeling and reliability prediction effort must be listed and included in the summary, including high failure rate elements, overstressed parts (i.e., those exceeding established parts rating criteria), and mission reliability single failure points. The detailed tasks and methods for preparing reliability models and perfecting reliability predictions follow.

The Basic Reliability model must consist of a reliability block diagram and an associated mathematical model. By definition, the Basic Reliability model is an all-series model which includes even elements of the item intended solely for redundancy and alternate modes of operation.

The Mission Reliability model must consist of a reliability block diagram and an associated mathematical model. The Mission Reliability model must be constructed to depict the intended utilisation of elements to achieve mission success. Elements of the item intended for redundancy or alternate modes of operation must be modeled in a parallel configuration or similar construct appropriate to the mission phase and mission application.
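
The distinction between the two models can be made concrete with a small calculation. Assuming constant, purely illustrative per-element reliabilities, the Basic Reliability model multiplies every element in series, including the redundant pair, while the Mission Reliability model credits the redundancy by placing the pair in parallel:

```python
def series(rels):
    """Series reliability: product of element reliabilities."""
    out = 1.0
    for r in rels:
        out *= r
    return out

def parallel(rels):
    """Active-redundancy parallel reliability: 1 - product of unreliabilities."""
    out = 1.0
    for r in rels:
        out *= (1.0 - r)
    return 1.0 - out

r_core, r_a, r_b = 0.98, 0.90, 0.90   # r_a, r_b are a redundant pair

basic = series([r_core, r_a, r_b])                # all-series: 0.98*0.9*0.9
mission = series([r_core, parallel([r_a, r_b])])  # redundancy credited
print(f"basic={basic:.4f} mission={mission:.4f}")
```

The Basic model deliberately penalises the extra hardware (more parts in series means more failures to log and support), while the Mission model rewards it with a higher probability of mission success.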
 
1. Reliability Block Diagram

Reliability block diagrams must be prepared to show interdependencies among all elements (subsystems, equipments, etc.) or functional groups of the item for each service use event. The purpose of the reliability block diagram is to show in visual shorthand the various series-parallel block combinations that result in item success. An understanding of the mission definition and of the item and service use profile is required to produce the reliability diagram.

2. Block Diagram Title

Each reliability block diagram must have a title including identification of the item, the mission identification or portion of the service use profile addressed, and a description of the mode of operation for which the prediction is to be performed.

3. Statement of Conditions

Each reliability block diagram must include a statement of conditions listing all constraints influencing the choice of block presentation, the reliability parameter or reliability variables utilised in the assessment, and the assumptions or simplifications utilised to develop the diagram.

4. Statement of Success

A statement of success must be defined in specific terms stating exactly what the calculated reliability represents for the items as diagramed and performing under the criteria presented in the statement of conditions.

5. Order of Diagram

The blocks in the diagram must follow a logical order which relates the sequence of events during the prescribed operation of the item.

6. Block Representation

The reliability block diagram must be drawn so each element or function employed in the item can be identified. Each block of the reliability block diagram must represent one element or function contained in the item. All blocks of the reliability block diagram shall be configured in series, parallel, standby, or combinations thereof as appropriate.

7. Identification of Blocks.

Each block of the reliability block diagram shall be identified. Diagrams containing few blocks may have the full identification written in the block. Diagrams containing many blocks shall use a consistent and logical code identification written for each block. The coding system shall be based upon the work breakdown structure work unit code numbering system or other similar uniform identification system that will permit clear traceability of the reliability block to its hardware or functional equivalent as defined in programme documentation. The code must be identified in a separate listing.

8. Non-modeled Elements

Hardware or functional elements of the item which are not included in the reliability model must be identified in a separate listing utilising the coding system. Rationale for each element exclusion from the reliability model shall be provided.

9. Reliability Variable

Reliability variables shall be determined for each block and presented so association between the block and its variable is apparent. The reliability variable is a number-- time, cycles, events, etc. used to describe the duration of operation required by each block to perform its stated function. This variable must be used in calculating the reliability of the block.

10. Block Diagram Assumptions

Two types of assumptions shall be used in preparing reliability block diagrams: technical and general. Technical assumptions may be different for each item and for each mode of operation. The technical assumptions shall be set forth under the statement of conditions. The general assumptions are those applicable to all reliability block diagrams.
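
The block diagram rules above can be exercised with a small evaluator that walks a nested series/parallel structure whose leaves are coded blocks. The work unit codes and reliability figures below are illustrative only:

```python
def evaluate(node):
    """Evaluate a reliability block diagram given as nested tuples:
    ("block", code, reliability) leaves inside ("series", [...]) or
    ("parallel", [...]) groups."""
    kind = node[0]
    if kind == "block":
        _, _code, r = node
        return r
    _, children = node
    rels = [evaluate(c) for c in children]
    if kind == "series":
        out = 1.0
        for r in rels:
            out *= r
        return out
    # "parallel": active redundancy
    out = 1.0
    for r in rels:
        out *= (1.0 - r)
    return 1.0 - out

diagram = ("series", [
    ("block", "WUC-11A", 0.99),
    ("parallel", [
        ("block", "WUC-22B", 0.95),
        ("block", "WUC-22C", 0.95),
    ]),
])
print(f"{evaluate(diagram):.4f}")
```

Each leaf carries its work-unit-style code (item 7 above), and non-modeled elements would simply be absent from the structure and listed separately with rationale (item 8).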


Top 10 Equipment Readiness Information for Product Support Item Decisions Assigned Reliability/Capability Models

While heavy military transport aircraft have very strict mission profiles, agile fighter, trainer, or attack type aircraft are well known to experience substantial variability in their missions.

Once the critical factors contributing to fatigue test performance are identified in the aircraft design stage, individual aircraft reliability tracking programmes are used to accumulate usage data, because fatigue cannot be tracked on mission hours alone. A tracking programme is therefore necessary for agile combat type aircraft, following the fatigue life status of each aircraft throughout its service life based on its own operational load spectrum.

The amount of fatigue life consumed and the remaining life for each aircraft in the fleet are assessed. One of the greatest benefits of an individual aircraft tracking programme is that each aircraft’s status is calculated independently of other aircraft in the fleet. Reliability models based on individual design characteristics show that loads monitoring can take place without prior knowledge of the exact critical location.

Ideally, provided a sufficient number of primary load carrying structures are routinely monitored, stresses at all critical locations can be inferred despite the wide spread in the rate of fatigue usage across many aircraft. The fatigue accumulation rate is derived from the individual aircraft fatigue damage values.

Critical location stresses are calculated using transfer functions relating the monitored loads to standard locations, so a change in the critical location can be accommodated through the design of a transfer function to the new critical location. Some of the benefits gained from the individual aircraft tracking programme include:

1. Modeling of operations to stabilise the rate of fatigue life consumption; to assess the life of an aircraft structure, knowledge of the actual load experienced by that structure is essential.

2. Drawing reliability comparisons between design and usage spectra for each aircraft, with estimation of the fatigue life or damage status of major components on each aircraft based on loads monitoring in the primary structure of that aircraft and related to fatigue test results

3. Planning of maintenance action according to the rate of fatigue damage accumulation for aircraft fleet reliability estimates, and modification of operations to stabilise the rate of fatigue life consumption

4. Building an operational load reliability model in conjunction with flight trials for application to a fatigue test, to compare with early fatigue test metrics; where a safe-life is stipulated, some aircraft are retired at a different number of flight hours due to their individual usage

5. Identifying the variability in response between aircraft operating in the fleet under the same flight conditions through assessment of mission severity, with the calculated rate of fatigue damage accumulation being higher or lower than the reliability target rate; prime factors driving individual aircraft tracking are the unique combination of point-in-the-sky effects

6. Gaining better understanding of the loading scenario experienced by different aircraft in the fleet, aided by the availability of good on-board reliability monitoring in conjunction with flight trials metrics

7. Observing difficulties introduced by the assumption that, if the fleet average load factor exceedance curves matched those of the design spectrum, the aircraft could be operated until the design life; operators of modern aircraft are likely to have different usage, and structural redundancy at vertical tails further complicates the picture

8. Designing future aircraft, and being smart buyers in the acquisition of new aircraft for the same role, by comparing the usage spectrum to the design spectrum; the root bending moment of the component is the primary factor to assess.

9. Defining flight trials metrics parameters to be measured on new aircraft, or new monitoring and fleet-wide average load systems for the same aircraft, to allow more accurate calculation of critical component reliability

10. Seeking to maintain fleet structural integrity based on its reliability and to identify operational overloads, making an individual aircraft tracking programme necessary. Test life extension must be substantiated by further fatigue tests to determine the next critical location and required repairs.
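
The per-aircraft fatigue bookkeeping described above can be sketched with Miner's linear damage rule, where each tail number accumulates damage from its own recorded load spectrum. All spectra and allowable-cycle figures below are invented for illustration, not taken from any fatigue test:

```python
# Retirement when the accumulated damage index reaches 1.0 under Miner's rule.
DESIGN_LIFE_DAMAGE = 1.0

# Notional allowable cycles to failure at each load-severity bin.
N_ALLOWABLE = {"low": 1_000_000, "medium": 120_000, "high": 8_000}

def miner_damage(cycle_counts):
    """Damage index for one aircraft: sum of n_i / N_i over spectrum bins."""
    return sum(n / N_ALLOWABLE[level] for level, n in cycle_counts.items())

fleet = {
    "tail-101": {"low": 400_000, "medium": 30_000, "high": 1_200},
    "tail-102": {"low": 250_000, "medium": 60_000, "high": 4_000},
}

for tail, spectrum in fleet.items():
    d = miner_damage(spectrum)
    remaining = max(0.0, DESIGN_LIFE_DAMAGE - d)
    print(f"{tail}: damage={d:.3f}, remaining life fraction={remaining:.3f}")
```

Because each tail number carries its own spectrum, two aircraft with identical flight hours can show very different remaining-life fractions, which is exactly why mission hours alone are an inadequate tracking basis.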
 
 
Top 10 Mission Reliability Model Categories Standards Include Definition of Equipment Item Related to Reliability

For Basic Reliability modeling, the item definition is simple: all equipments comprising the item are modeled in series. "All" equipments includes any equipments provided solely for redundancy or for alternate modes of operation.

However, for Mission Reliability modeling, the item reliability model and mission success definition can become elusive problems, especially for complex multimodal systems incorporating redundancies and alternate modes of operation.

In item definition, emphasis is placed on properly specifying reliability within the context of all other pressing requirements and restraints that comprise a functioning item. A proper definition is important in order to establish meaningful requirements and goals.

An adequate item definition aids in determining when the item is being used as intended, when it sees its anticipated environment, when its configuration has been changed beyond its original concept, as well as when it is performing its specified function.

Item reliability is defined as the probability of performing a specified function or mission under specified conditions for a specified time.
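
For the common constant-failure-rate case, this definition reduces to the familiar exponential reliability function R(t) = exp(-λt), with λ the failure rate and t the mission time. The failure rate and mission length below are illustrative numbers only:

```python
import math

def reliability(failure_rate_per_hour, mission_hours):
    """Probability of performing the specified function for the whole
    mission, assuming a constant failure rate (exponential model)."""
    return math.exp(-failure_rate_per_hour * mission_hours)

# e.g. an MTBF of 500 h (lambda = 1/500) over a 10 h mission
r = reliability(1 / 500, 10)
print(f"{r:.4f}")
```

The "specified conditions" part of the definition matters because λ itself is only valid for the environment and usage under which it was estimated.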

1. New item processed through a defense technical review activity for tasking and authorised for procurement that cannot be replaced with an existing item.

2. New item authorised for tasking that is contained in a new or revised superseding specification or standard that replaces prior items. This item will not be assigned a stock number or an item standardisation code until a requirement for the item is generated.

3. Item authorised for tasking that has been included in an item reduction standardisation study, but an intelligent decision could not be made due to lack of technical information.

4. New item processed through a defense technical review activity for tasking and authorised for standardisation but an intelligent decision could not be made due to lack of technical metrics.

5. Item no longer authorised for tasking which has been replaced by a new item as the result of new or revised superseding specifications or standards

6. Item authorised for tasking which was initially identified for standardisation as the result of a formal item reduction study and which was accepted as a replacement for one or more items not authorised.

7. Item authorised for tasking which has been included in an item reduction study for standards and which initially does not replace an item not authorised.

8. Item which, as a result of a formal item reduction study, is accepted as not authorised for tasking or procurement

9. Item authorised for tasking that has not yet been subject to item standardisation for procurement.

10. Item authorised for tasking that is in a specific procurement supply classification class or item name grouping consisting primarily of items which are one of a kind and, therefore, little or no potential exists for elimination of items through formal item reduction studies.
 
 
Top 10 Condition State Models Calculate Materiel Reliability Function When Service Life Estimates Not Directly Observable

At each observation moment used to build the reliability model, an indicator of the underlying unobservable condition state is assessed and the monitoring information is collected. The observations come from a condition monitoring system where the obtained information is not perfect, so the observation process does not directly reveal the exact condition state.

To match the value of indications to the unobservable degradation state, a relationship between them is given by an observation reliability model. A time-dependent proportional condition state model is used for the equipment failure rate. Reliability model limitations include the problem of imperfect observations and the problem of taking into account the history of condition state observations.
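
One way to sketch the imperfect-observation problem is as a discrete-state Bayes filter: the condition state evolves by a Markov transition matrix, and the monitoring indicator observes it only through a noisy emission matrix. All states, readings, and probabilities below are illustrative assumptions:

```python
STATES = ["good", "degraded", "failed"]

# P(next state | current state), per inspection interval (illustrative)
TRANSITION = [
    [0.90, 0.09, 0.01],
    [0.00, 0.85, 0.15],
    [0.00, 0.00, 1.00],
]
# P(indicator reading | true state); readings are "ok", "warning", "alarm"
EMISSION = {
    "ok":      [0.80, 0.25, 0.05],
    "warning": [0.15, 0.60, 0.25],
    "alarm":   [0.05, 0.15, 0.70],
}

def update_belief(belief, reading):
    """One predict + correct step of the condition-state filter."""
    # predict: propagate the belief through the transition matrix
    predicted = [sum(belief[i] * TRANSITION[i][j] for i in range(3))
                 for j in range(3)]
    # correct: weight by the likelihood of the observed indicator
    weighted = [predicted[j] * EMISSION[reading][j] for j in range(3)]
    total = sum(weighted)
    return [w / total for w in weighted]

belief = [1.0, 0.0, 0.0]  # assume new equipment starts in "good"
for reading in ["ok", "warning", "warning"]:
    belief = update_belief(belief, reading)
print({s: round(p, 3) for s, p in zip(STATES, belief)})
```

The belief vector, rather than any single reading, is what a maintenance planner would act on, which is precisely how the filter accounts for the history of observations.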

1. New, repaired or reconditioned materiel serviceable for issue to all customers without limitation or restriction

2. Serviceable materiel issued for its intended purpose but restricted to training use only

3. Serviceable materiel requiring or designated for test, alteration, modification, conversion or disassembly, not including items requiring inspection or test immediately prior to use

4. Materiel involving only limited expense or effort to restore to serviceable condition, accomplished in the storage activity where the materiel is located

5. Economically repairable materiel requiring rework, repair, overhaul or reconditioning

6. Materiel requiring additional parts or components to complete end item prior to issue

7. Materiel determined to be unserviceable and uneconomical to repair

8. Materiel in stock suspended from issue pending condition classification where true condition is not known

9. Materiel returned from customers/users and awaiting condition classification

10. Materiel identified on item condition control record but turned over to service site
 


Top 10 Factors to Consider for Setting Up Logistics System with Centralised Control Decentralised Task Execution

5/13/2018


 
New Logistics Simulations Tools could resolve some issues Marines face in supply missions by combining the planning skills of expert Troops with artificial intelligence, robotics and metrics tracking. How do we dominate and operate effectively with decentralised execution in a contested information scenario?

Maybe if we just change a little bit of how we train and organise, we can dramatically impact the way we execute. We may have to spread out to greater distances with smaller units. Then you have to move supplies to the point of field requirement in a contested scenario with contested networks. Now you are facing a whole new set of problems.

Taskers are directives or action items at the executive level that require responses. Logistics Simulation gives users a common place to store and execute taskers so that at any given time, anybody involved in the task knows what’s going on with it.

Our Tool has led to an increase in Troop productivity and streamlined business processes, showing decreases in the average number of days a tasker is late, the average time it takes to complete a tasker, and the number of missed due dates.

This process provides a framework that MEU commanders can use to develop mission plans and understand where equipment shortfalls are likely. The process consists of simple steps that translate mission requirements into tasks, subtasks, and military activities, each of which is linked directly to the types of equipment needed for completion. It also highlights key parameters that may affect the types of equipment needed or the execution of key tasks.

Those numbers exceed expectations. Presently, we’re above the average metrics that we were looking for, so it’s a positive outcome so far. Another benefit from the tool has been better accountability since it can assign tasks to individuals or teams and allows review and approval before the work is released.

The tool holds ownership from leadership all the way down to accountability of taskers, making sure that taskers are completed in the right time frame. A lot of times, taskers get pushed down. They get lost in the shuffle as we do business, but with the Tool the tasker can’t get lost. We have accountability of where the tasker goes, and we know when it should be accomplished. That helps use engage in how to do business the right way.
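
A hypothetical minimal sketch of this tasker accountability idea, tracking due dates and the late-completion metrics mentioned above. All taskers, teams, and dates are invented for the example:

```python
from datetime import date

class Tasker:
    """A directive with an assignee and a due date, tracked to completion."""
    def __init__(self, title, assignee, due):
        self.title, self.assignee, self.due = title, assignee, due
        self.completed = None

    def complete(self, on):
        self.completed = on

    def days_late(self):
        if self.completed is None:
            return None  # still open
        return max(0, (self.completed - self.due).days)

taskers = [
    Tasker("Update readiness report", "Team A", date(2018, 5, 1)),
    Tasker("Inventory audit", "Team B", date(2018, 5, 3)),
]
taskers[0].complete(date(2018, 5, 4))  # three days late
taskers[1].complete(date(2018, 5, 2))  # on time

done = [t for t in taskers if t.completed is not None]
avg_late = sum(t.days_late() for t in done) / len(done)
missed = sum(1 for t in done if t.days_late() > 0)
print(f"avg days late: {avg_late:.1f}, missed due dates: {missed}")
```

Because every tasker carries its assignee and due date, nothing "gets lost in the shuffle": open items are simply those with no completion date.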

For this report, the equipment list was provided by one of the MEUs we visited. This has the advantage of providing a realistic equipment set. However, as we explain later, the application we have developed can accommodate equipment lists that are significantly different, larger, and smaller than what we use as a baseline.

The approach used in this report is for the user to utilise the Tool to facilitate the development of planning factors and get better at assigning equipment to tasks.

The equipment available to perform the tasks associated with the missions consists of the equipment onboard with the unit and possibly additional, remotely located equipment.

We don’t really anticipate any major adjustments to the Tool because right now it’s doing what it’s designed to do: making sure taskers get accomplished in a timely fashion and pushing Marines away from the dinosaur age to … this new age of how we do business.

What equipment is available to Marines to accomplish mission tasks and subtasks? A diverse set of factors affect the types of equipment carried into theatre, including not only space available but also risk trade-offs made by commanders and expectations about the deployment risks.

Inputs to the system consist of an equipment list assigned to unit, the tasks identified through the mission deconstruction process, the measures and metrics used to define equipment capabilities, and the set of linkages between tasks and equipment.

What measures and metrics should be used to assess the capability of selected equipment? The loading list provided the set of available equipment. We then used equipment updates and sponsor input to define the capabilities of each piece of equipment in performing designated tasks. This information is displayed to the user when a piece of equipment is selected.

We identified the measures and metrics, or "planning factors," needed to assess the capability of each piece of equipment in the loading list. In our initial construction of the Tool, we identified equipment substitutions that might accomplish a task.
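
The task-to-equipment linkage with substitution might be sketched as follows. The tasks, equipment names, and capability scores are illustrative placeholders, not real planning factors:

```python
# Each task lists acceptable equipment with a capability score (1.0 = fully
# capable), so the planner can surface substitutes when the preferred item
# is unavailable. All entries are invented for the example.
TASK_EQUIPMENT = {
    "move_supplies": {"MTVR truck": 1.0, "LAV": 0.6},
    "lift_cargo":    {"forklift": 1.0, "crane": 0.8},
}

def best_available(task, on_hand):
    """Return the most capable on-hand equipment for a task, or None."""
    options = {e: s for e, s in TASK_EQUIPMENT[task].items() if e in on_hand}
    if not options:
        return None
    return max(options, key=options.get)

on_hand = {"LAV", "forklift"}
print(best_available("move_supplies", on_hand))  # substitute, truck unavailable
print(best_available("lift_cargo", on_hand))
```

Returning None when no linked equipment is on hand is what flags a likely equipment shortfall for the mission plan.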


Centralised control and decentralised execution of tasks are ideals sought in logistics support operations. If achieved, support will be responsive, economical, and flexible. The Site Visit Executive has determined that a good balance between centralisation and decentralisation of logistics operation functions is usually difficult to achieve.

Control may suffer because it is fragmented, or support may fall short because services and equipment are too concentrated. Consequently, the Site Visit Executive must use judgment and experience to achieve an optimal mix of centralised control and decentralised execution based on the specific circumstances popping up in fluid mission tasks.

Centralised control is most effective at the strategic levels, drawing on existing support infrastructure, procedures established by the Site Visit Executive, and the stability of missions in theatre.

The degree of centralisation varies at the operational level since forces can be fragmented, sometimes over great distances, and operations often take place under problematic expeditionary conditions. At the tactical level, the degree of centralisation is determined by mission/concept of operations-- factors that often override purely logistical considerations.

Site Visit Executive has submitted lots of written Principles of Supply Line Logistics to include detailed universal constants applicable to all aspects of logistics including responsiveness, simplicity, flexibility, economy, attainability, sustainability & survivability.

In addition to these principles, many other logistics considerations exist to keep the Site Visit Executive in business, so installations of any size can use the supply line routing application to apply smart techniques to the disparate situational connections realised when addressing supplier group contacts.

These considerations will not always dictate a specific course of action, but will assist Site Visit Executive in maximising effectiveness & efficiency of logistics operations if used smartly.

Site Visit Executive can first look at broad readiness, but can also look at readiness levels of subordinate units to provide the ability to control, distribute, and replenish equipment and supplies in assigned areas of operation, to receive supply support from and provide supply support to other services.

Readiness Terms are used in different contexts/processes. Operational gaps in systems used by Marine Units must be closed so exchange is seamless.

Capability to link information as it is processed by Units must be built.

Aggregated information provided to Commanders must be traced/linked to the operational systems used to roll up information. But no Marine Site Executive has yet stood up to identify functions spanning across processes and write the terms required to support them.

If the Site Visit Executive has a better overview of equipment status, resources will be allocated/pooled more efficiently so the greatest potential for operational readiness is realised.

Information from readiness systems is required to determine number of pieces of equipment available for deployment. No Site Executive has created an easy way to link equipment information available from readiness and Services systems.

Current readiness systems only include the commander’s best estimate of equipment status. Estimates have traditionally been made for the overall equipment assigned to the unit, not for individual pieces of equipment.

Military Services use systems to maintain records of equipment under service, but the records do not include any information about the units to which the equipment is assigned.
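
The missing linkage could be sketched as a simple join between service maintenance records (keyed by serial number) and a unit-assignment table, producing per-unit deployable counts. All serials, units, and statuses below are invented:

```python
service_records = {        # serial number -> status from the Services system
    "SN-001": "serviceable",
    "SN-002": "in_repair",
    "SN-003": "serviceable",
}
unit_assignments = {       # serial number -> owning unit (the missing link)
    "SN-001": "1st Bn",
    "SN-002": "1st Bn",
    "SN-003": "2nd Bn",
}

# Join the two tables on serial number and count serviceable items per unit.
deployable = {}
for serial, status in service_records.items():
    unit = unit_assignments.get(serial, "unassigned")
    if status == "serviceable":
        deployable[unit] = deployable.get(unit, 0) + 1

print(deployable)
```

With such a join in place, readiness reporting could move from a commander's overall estimate to counts of individual pieces of equipment.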

Marine Corps leadership utilise unreliable information when making decisions about major equipment end items and spare parts, because information contained in the Marine Corps legacy logistics and supply information systems is significantly inconsistent and inaccurate.

We believe this occurs because Marine Corps guidance does not sufficiently define the respective roles and responsibilities for Marine Corps logistics information directorates.

So this results in confusion among personnel in various Marine Corps commands about who is responsible for the accuracy of different elements contained in the systems.

Inconsistencies and inaccuracies also occurred, in part, because Marine Corps guidance lacks adequate operating procedures about logistics information systems. So field-level Troops are not aware of, or are not in compliance with, current operation procedures and logistics rules employed by the directorate.

We found the Marine Corps has insufficient assurance that the most important information is transitioning to Global Combat Support Systems.
 
When first standing up Logistics Directorate, Site Visit Executive usually creates a centralised organisational structure. The authority in such a structure is top-down, which makes it easy for operational headquarters to make important decisions and keep tight control over all aspects of the organisation.

As operations grow, it’s not uncommon for Site Visit Executive to gradually transition into a more decentralised structure. Here, decision-making authority trickles down to Troops who are no longer dependent on “the boss” to make all the decisions and dictate direction for every area of the organisation.

In a decentralised system, more layers of teams are added to the hierarchy, which gets more Troops involved in helping set strategic direction, but can also add to the complexity of decision making.

So the stage of mission development is a big factor in determining which type of organisational structure is best. Other factors include the makeup of your Troop base, the strengths and weaknesses of your administrative team, and the best practices followed by other Units in each of the Services.

The type of organisational structure utilised — whether centralised or decentralised — is important because it forms the basic framework to achieve mission success and how decisions will be made. Following are a few common characteristics of centralised and decentralised organisations, as well as some of the advantages and drawbacks of each:

1. Top-down authority in a centralised structure establishes a specific chain of command, with Site Visit Executive making all the key decisions with little or no input from rank-and-file Troops.

2. Centralisation consolidates decision-making authority, so goals and objectives are uniform across the organisation, and standard sets of policies and procedures followed by all Troops can reduce costs and ensure Service-wide consistency.

3. In decentralised operations, multiple individuals, including the rank-and-file, can make important decisions and set the policies and procedures that guide how the mission is run.

4. The team structure created throughout the organisation in a decentralised system results in a broader range of input that can generate more solutions and ideas.

5. In a decentralised system, Troops at all levels of the organisation are empowered to make decisions, often with little or no input from higher-ups.

6. Decision-making may be slow in centralised operations and disrupt the momentum of work order tasks.

7. Troops may feel less motivated to do their best work in a centralised system if they feel they don’t have a voice for sharing their ideas on how to improve outcomes.

8. Executives in a decentralised system will have less control and need to rely more on the personnel beneath them to carry out the mission and goals of the organisation. This process of “letting go” may be difficult for Site Visit Executive.

9. Decentralisation may promote unnecessary competition among those who have previously worked together under a centralised structure, but are now working in different departments or divisions.

10. A decentralised structure sometimes doesn’t work well during the early stages of mission development, before a solid infrastructure has been built.

1 Comment

Top 10 Multi-Agent Model Simulations of Field Deployed Troops Decision Behaviour in Mobile Situations

5/7/2018

3 Comments

 

The Multi-agent model includes a “Digital Twin” simulation of every foxhole in the affected area as well as every road, communication network, etc. And the scenario is populated with more than one million agents—a synthetic population statistically identical to the real population of the affected area in factors such as Troop equipment.

Each agent is an autonomous subroutine that responds in reasonably human ways to other agents and the evolving disaster by switching among multiple modes of behaviour—for example, flight, and efforts to find other Troops.

The point of such models is to avoid describing troop missions from the top down with fixed equations, as is traditionally done in such fields as economics. Instead, outcomes such as vehicle crashes emerge through the interactions of many individuals, producing the kind of real-world spontaneous behaviour that is otherwise hard to simulate.
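The agent idea described above can be sketched in a few lines: each agent is a small autonomous routine that switches among a handful of behaviour modes in response to its surroundings. The class, mode names, and switching rules below are illustrative assumptions, not the actual model.

```python
import random

# Minimal sketch of an agent that switches among a few behaviour modes
# (mode names and thresholds are invented for illustration).
MODES = ("shelter", "evacuate", "regroup")

class Agent:
    def __init__(self, position):
        self.position = position
        self.mode = "shelter"

    def step(self, threat_level, nearby_agents):
        # Simple mode-switching rules: flee high threat, otherwise
        # try to link up with other Troops before sheltering in place.
        if threat_level > 0.7:
            self.mode = "evacuate"
        elif nearby_agents < 2:
            self.mode = "regroup"
        else:
            self.mode = "shelter"
        return self.mode

random.seed(1)
agents = [Agent(i) for i in range(5)]
modes = [a.step(threat_level=random.random(),
                nearby_agents=random.randint(0, 4))
         for a in agents]
print(modes)
```

Emergent group behaviour comes not from any one agent but from many such routines interacting on a shared map.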

That kind of detail is exactly what commanders need. The model can warn that a system fail point at X might well lead to a surprise traffic jam at point Y. If commanders decide to deploy a mobile dispatch centre in the early hours of the crisis to restore communications, the model can tell them whether more vehicles will take to the roads, or fewer.
Agent-based models are how you get all these pieces sorted out and look at the interactions.

The downside is that the models tend to be big—each of the model's initial runs kept a large microprocessor computing cluster busy for a couple of days—forcing the agents to be relatively simple-minded.

"There's a fundamental trade-off between the complexity of individual agents and the size of the simulation."

But computers keep getting bigger and more powerful, as do the data sets used to populate and calibrate the models. More and more decision-makers are taking agent-based models seriously. "They're the most flexible and detailed models out there, which makes them by far the most effective in understanding and directing policy."

One early success came from simulating troop interactions on ordinary desktop computers: modelers pared agent-based modeling down to its essence, a set of simple agents that moved around a grid in search of a resource that was abundant in some places and scarce in others. Though simple, the model gave rise to surprisingly complex group behaviours such as combat and troop segregation.
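A stripped-down version of that grid model fits in a short script: simple agents move toward the richest nearby cell and harvest it. The grid size, resource distribution, and movement rule here are all illustrative assumptions.

```python
import random

# Sketch of a grid world where a resource is abundant in some cells and
# scarce in others; agents greedily move toward the best adjacent cell.
random.seed(0)
SIZE = 10
resource = [[random.choice([0, 0, 1, 3]) for _ in range(SIZE)]
            for _ in range(SIZE)]

def best_neighbour(x, y):
    """Pick the adjacent cell (or current cell) with the most resource."""
    candidates = [(x, y)] + [((x + dx) % SIZE, (y + dy) % SIZE)
                             for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    return max(candidates, key=lambda c: resource[c[0]][c[1]])

agents = [(random.randrange(SIZE), random.randrange(SIZE)) for _ in range(20)]
for _ in range(5):                      # a few time steps
    moved = []
    for x, y in agents:
        nx, ny = best_neighbour(x, y)
        if resource[nx][ny] > 0:
            resource[nx][ny] -= 1       # harvest one unit
        moved.append((nx, ny))
    agents = moved

print(len(agents), "agents after 5 steps")
```

Even with rules this crude, agents cluster around resource-rich regions and compete as those regions deplete, which is the kind of emergent group behaviour the text describes.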

Another milestone came when the Operational Simulation System, an agent-based traffic model, was developed. Unlike traditional traffic models, which used equations to describe moving vehicles en masse as a fluid, it modeled each vehicle and driver as an agent moving through a contingency theatre road network.

The simulation included a realistic mix of combat vehicles, driven by troops with a realistic mix of abilities and destinations. When applied to the road networks in an actual combat theatre, the simulations did better than traditional models at predicting traffic jams, one reason agent-based models are now a standard tool in the planning of mobile operations.
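The vehicle-as-agent idea can be illustrated with the classic Nagel-Schreckenberg cellular-automaton traffic rule, used here as a textbook stand-in rather than the Operational Simulation System itself. Each car accelerates, avoids collision, and occasionally slows at random, and jams emerge without any top-level equation describing them.

```python
import random

# Textbook Nagel-Schreckenberg traffic rule on a ring road; parameters
# (road length, max speed, slowdown probability) are illustrative.
random.seed(2)
ROAD, VMAX, P_SLOW = 50, 5, 0.3
cars = {i * 5: 0 for i in range(10)}          # position -> speed

def step(cars):
    ordered = sorted(cars)
    new = {}
    for i, pos in enumerate(ordered):
        gap = (ordered[(i + 1) % len(ordered)] - pos - 1) % ROAD
        v = min(cars[pos] + 1, VMAX, gap)      # accelerate, but don't collide
        if v > 0 and random.random() < P_SLOW:
            v -= 1                             # random slowdown: jams emerge
        new[(pos + v) % ROAD] = v
    return new

for _ in range(20):
    cars = step(cars)
print("mean speed:", sum(cars.values()) / len(cars))
```

Because each vehicle is its own agent, the model reproduces spontaneous jams that fluid-style equations smooth away, which is exactly the advantage the passage attributes to agent-based traffic models.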

The model also incorporates data on how troops typically move between stations. The result is a model that not only predicts how bad a situation could get—something commanders could determine from equations—but also suggests where the worst hot spots might be.

Given how much is at stake in those simulations, users always want to know why they should trust the results. How can they be sure the model output has anything to do with real-world scenarios—especially in cases such as mission disasters, which have no empirical data to go on?

That question has several answers. First, users shouldn't expect the models to make specific predictions about, say, a mission crash in the future. Instead, most modelers accommodate the inevitable uncertainties by averaging over many runs of each scenario and displaying a likely range of outcomes, allowing planners to use the model as a test bed to game out the consequences of taking action A, B, or X.
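The averaging practice described above can be sketched directly: run the same stochastic scenario many times under each candidate action and report a likely range of outcomes rather than a single prediction. The `evacuation_time` function and its numbers are hypothetical stand-ins for a full model run.

```python
import random
import statistics

# Sketch of averaging over many runs of one scenario; evacuation_time()
# is an invented stand-in for a complete agent-based model run.
random.seed(3)

def evacuation_time(intervention=False):
    base = random.gauss(60, 10)               # minutes, illustrative figures
    return base * (0.8 if intervention else 1.0)

runs_a = [evacuation_time(False) for _ in range(1000)]
runs_b = [evacuation_time(True) for _ in range(1000)]

for label, runs in (("no action", runs_a), ("action A", runs_b)):
    ordered = sorted(runs)
    lo, hi = ordered[50], ordered[950]        # ~90% interval
    print(f"{label}: mean {statistics.mean(runs):.1f} min, "
          f"range {lo:.1f}-{hi:.1f}")
```

Planners then compare the distributions for actions A, B, or X rather than trusting any single trajectory.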

Modelers should not just slap the model together and see whether the final results make sense. Instead, they should validate the model as they build it, looking at each piece as they slot it in—how troops get to and from the point of action, for example—and matching it to real-world data sources. "At every step, there is data that you're calibrating to."

Modelers should also try to calibrate agent behaviours by using studies of troop behaviour. Doing so can be tricky—troops are complicated—but in crisis situations, modeling behaviour becomes easier because it tends to be instinctive. The model, for example, gets by with built-in rules that cause the agents to shift back and forth among just a few behaviours, such as shelter-seeking, evacuation, or attack.

Even so, field studies point to crucial finer points. "When a mission falls apart, we find that troops are more likely to think twice, weighing the possibility of being without support from others even against the priority of the crisis itself."

Likewise when under fire, commanders tend to assume that when the alarm sounds, troops will immediately file toward the exits in an orderly way. But just watch the next time your unit has a fire drill: troops don't evacuate without first talking to others—and, if need be, collecting friendly forces.

The evidence also suggests that blind, unthinking panic is rare. In an agent-based model, model developers tried to reproduce what happened in a past incident when a unit came under fire. Some in the group jammed together so tightly that no one could move, and casualties were taken. From participants' accounts, the team had good data on the behaviour of those involved and their relationships to others.

And when we incorporated those relationships into the model, the runs most consistent with the actual incident under fire involved almost no panic at all. "We found that Troops were trying to get out with those in the unit who shared a similar assignment and proximity to the fight."

Results from the model can sometimes be counterintuitive. For example, the model suggests that right after the strike, commanders should expect to see some troops rushing toward ground zero, jamming the roads in a frantic effort to pick up their comrades. The model also points to a good way to reduce the chaos: quickly restore communications service, so commanders can verify the safety of troops.

If agent-based modelers have a top priority, it's to make the simulations easier to build, run, and use—not least because that would make them more accessible to real-world decision-makers.

Decision makers access a playbook: network status updates containing digital versions of every large operational theatre, with pre-computed models of just about every potential hazard. Then, if something actually happens, like an explosion, commanders could pick the model that is the closest match and do a near-real-time calculation for things like the optimal mix of engagement and evacuation.

It is "agent-based modeling as a service." Teams are developing and testing a networked tool that lets commanders build disaster simulations and do what-if assessments on their own, without having to hire programmers. With just a few clicks, users can specify key variables such as the region of interest and the type of attack. Then, using the tool's built-in maps and graphs, commanders can watch the simulation unfold and see the effect of their proposed response protocols.

Despite being specialised for troop movement in and out of theatre, the tool's underlying geographic models and populations are general, and they can be applied to other kinds of disasters. Ultimately, the goals are to build such models into services that are individualised to each unit and situation.

So it won't just be decision-makers far away from the fight using those systems. It will be you. "It will be as routine as Google Maps."

Advances of the last decades in information and communication technologies have triggered fast developments in a large number of application areas involving cooperation among networked agent entities.

Network tech advances have, on the one hand, enabled implementation of a number of predicted models that were just waiting for the proper conditions and, on the other hand, motivated the development of new concepts.

One of the main underlying trends in Training development is the focus on agent models, protocols, and mechanisms to support the collaboration of pre-existing entities in distributed real-world scenarios, be it among organisations, among Troops, or among Troops and organisations.

Strong reliance on standards will help facilitate the interfacing of existing applications with the virtual infrastructure, but unfortunately not all classes of information that need to be exchanged among virtual agents are covered by existing standards.

Initiatives of application agent groups help facilitate this process. In general, it is necessary to develop some interface/mapping layer at each enterprise to have that enterprise interact adequately with the virtual infrastructure.

It is also notable that most legacy applications were designed for enterprise-centred local operation and to be operated by agents. In order to have these applications supply information to, or consume information from, the virtual network, it is clearly necessary to extend their functionality.

Interoperability among enterprise applications represents a major challenge for supporting the rapid formation of virtual scenarios, in response to new operational opportunities. On the other hand it is important to have in mind that each enterprise has its own way of doing business.

Furthermore, the level of information sharing among virtual agents is likely to change with either the trust level among partners or adjustments to configuration over time. Therefore, flexible virtual coordination and information visibility rights definition mechanisms are needed in order to support both the autonomy and change-order properties in the behaviour of virtual agents.
 
The first step is to understand and document user requirements and constraints for system capability so the process can meet those requirements. Availability/reliability parameters must be explained and must guide trade-off studies of mission capability and operational support, defining the baseline against which the new system will be measured.
Performance factors then need to be matched with user needs into clearly defined system parameters, which are allocated and integrated across the disciplines needed to realise success.
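The availability/reliability parameters that guide such trade-off studies can be made concrete with the standard inherent-availability formula, Ai = MTBF / (MTBF + MTTR). The figures below are illustrative, not from any actual programme baseline.

```python
# Worked example of the availability parameter used in trade-off studies.
# MTBF = mean time between failures; MTTR = mean time to repair.
def inherent_availability(mtbf_hours, mttr_hours):
    """Ai = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Illustrative trade-off: halving repair time raises availability even
# when reliability (MTBF) stays fixed.
baseline = inherent_availability(mtbf_hours=400, mttr_hours=8)
improved = inherent_availability(mtbf_hours=400, mttr_hours=4)
print(f"baseline Ai = {baseline:.3f}, improved Ai = {improved:.3f}")
```

A trade-off study compares such figures against the cost of the design change that produces them, against the baseline the text describes.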

Systems engineering attempts to optimise effectiveness and affordability as the capability is created. The systems approach makes sure the question "What are the user needs and constraints?" is answered before designing the answer.

The top-level programme plan for achieving required availability/reliability is executed in a manner that ensures requirements are achievable. Through understanding user needs and constraints, new capabilities begin to be defined.

The case must be established for a materiel approach to resolve gaps in capability. The primary focus is to acquire quality products, balancing the process of satisfying user needs while improving mission capability and operational support, while also adhering to scheduling constraints and justifiable costs.

During capability assessments, time and resources need to be set aside to measure and characterise current operational experience, organise metrics and supply line performance to reach conclusions about the causes of shortfalls.

It is also imperative to understand subsystem design complexity and influence on availability/reliability. Capabilities-based approach leverages the expertise of all service directorate activities defining new capabilities.

Primary focus is to ensure that joint force is properly equipped and supported to perform across disciplines to identify improvements to existing capabilities and create new warfighting capabilities.

Process defines needed capabilities through characterisation of doctrine, organisation, training, materiel, leadership, and Troop units. Availability/reliability levels are defined within this framework, principally in the category of materiel.

So the goal is to inform and share information among decision makers tasked with system use and support. Information to be shared includes user requirements and how the system will be used or could potentially miss targets.

Key to any assessment is a description of the use/support location, constraints on what support is available for the system, what information will be available to decision makers, and how that information will be verified.

Operation commanders typically have relatively clear ideas about the planning factors for specific pieces of equipment. However, Troops usually do not have an updated set of planning factors to use in developing mission plans or guiding what they bring aboard their ships.

This lack of training on the character of planning factors led us to rely more heavily on the users of the application. We consulted equipment training tasks that provided details on the capabilities of pieces of equipment, such as payload, maximum speed, or lift capacity, and the application makes this information available to the user tasked with selecting equipment to accomplish required tasks.

Site Visit Executive Found Nothing but Problems During Simulation Training Exercises. The “problem” part of the Fleet Deployment Problem is inspired by several sources.

In some cases, we identified “problems” during our war games, played at the operational level against an [OpFor] team. Most of the time, each game identifies several key operational enablers employed by our forces.

With the help of our team, we capture that key moment—friendly force disposition, enemy situation, mission, and even the losses suffered on both sides—and use it as a point of departure both for our input to the Chief Integrated Agenda and for our Fleet Problems.

In other cases, as we considered how our ConOps would play out in reality, it became apparent there were warfighting tasks that were critical to success that we could not execute with confidence.

This gap was not because the unit did not practice these tasks individually—the efficiency of the plan ensured they could—but because we as a force never practiced them together, in combination with multiple tasks, against a free-playing, informed, and representative Red Team.

In one case, during an exercise planning session, we discussed a critical operational tactic that is used routinely in exercises and assumed to be executable by the fleet.

In the course of the brief, someone piped up and said, “Sir, you know we can’t actually do that.” In the silence that followed we could see some in the room thought we could, while others were nodding in agreement that we could not. With that, we had the scenario for a Fleet Problem with the mandate to “go find out.”

We learned that the doctrine and assumptions in one area had removed the capability to implement key tactics in another. Having never performed the task together in theatre, the disconnect never was identified clearly. It was not until we tried to execute under realistic, true free-play conditions that we discovered the problem’s causal factors, leading to the ability to apply solutions.

In discussion with the Troops we frame the problem that is given to a unit or formation. For example, it is critical that we be able to operate carrier strike groups [CSGs] in areas of significant submarine threat.

Our traditional approach to this challenge would be to create an antisubmarine warfare [ASW] exercise, tasking submarines to act as targets within a set geographic area.

In a Fleet Problem, we instead would task the CSG to conduct a combat mission-- conduct strikes on, giving it maximum flexibility in timing and mechanism. We then would create scenarios rich in submarine threats.

The CSG’s mission would not be ASW, but rather conducting a core combat strike mission in support of the joint fight in a robust submarine threat scenario.

Countering the submarine threat is the means to the end—strike. If you destroyed all enemy submarines and lost no friendly units but were unable to execute the mission assigned—strike—then Blue loses and Red wins. How the CSG commander manages that threat to accomplish the mission is not prescribed.

Speed and manoeuvre? Go for it. Aggressive surface ASW? Great. Will the escorts sweep ahead or stay near the CSG? Air assets? Of course. How is that coordination going?

“Task success equals mission success” may hold true in training scenarios as long as Red is constrained. But in our experience, rarely in the real world does the sum of the tasks successfully completed equal mission success. It is worth noting that there is no experimentation in the usual sense in Fleet Problems.

We are not notionally employing systems and weapons that are not already deployed in the fleet. Each unit attacks the problem using what it has physically on hand today. So the commander’s development and execution of a plan must be based on our warfighting intent rather than on ticking off a discrete schedule of individual training objectives.

The biggest challenge stems from the combinatorial size of the problem. Several dozen organisations team up to administer the requirements and provision of tactical training: schoolhouses, resource sponsors, a group of different Training status updates that specify policy for agent modeling and simulation, and an unknown number of simulators.

Beyond the basic phase of training, training requirements for ships are only minimally articulated. Each ship must complete a set of training exercises tailored to counter deficiencies of the ship in question.

These exercises are selected and planned by a number of different teams. In the intermediate and advanced phases of training, there is little consistency in the choice of exercises or the requirements that must be met, and, in all phases of the operation several different teams evaluate the success of training.

Training requirements and standards are often too general or inconsistent for assessing readiness, further complicating the problem of determining how agent-based simulation can best be used.

Early directives have authorised DoD to use simulators to make training more effective and to help maintain military readiness. Agent-based training simulators permit:

1. Training impractical if done with actual systems or equipment

2. Concentrated practice in selected normal and emergency actions

3. Training of operators to address possible equipment faults

4. Training of maintainers to diagnose equipment failure

5. Increases in proficiency despite shortages of equipment

6. Improvements in abilities even with reduced amount of operational space

7. Authorised availability of training time for operations

8. More access to testing/training ranges

9. Control of training costs over all phases of service life

10. Reduction of systems required in maintenance training.

3 Comments

Top 50 Readiness Tool Questions for Product Support Decision Authorise Workforce Skill Types Assign

5/1/2018

2 Comments

 

The Readiness Initiative that’s being prototyped as we speak is trying to take disparate maintenance efforts – field events and depot events – which make no logical sense right now; all they do is guarantee that the weapons systems are not available to the operating squadrons, because this week an aircraft is out for one inspection, it returns to the flight line, and the next day it’s out for another inspection or modification.

“So right now the Navy is working on prototyping two logical, consolidated depot events to try to guarantee more aircraft availability on the flight line once an aircraft comes out of that depot event. So that’s looking very promising.”

Application prototype officials are still “fine-tuning the details of it,” but said that more workforce labour could be conducted while the planes are at the depot – including inspections and maintenance items that would normally be done by the squadron – to consolidate the required work into as few days as possible.

Officials stated depot maintenance workforce estimates are not currently being adjusted or updated over the service life of an asset class. Periodically updating depot maintenance cost estimates—in accordance with cost estimating best practices—for each asset class could provide decision makers with much needed information with which to determine future budgets.

We received several excuses:

First, officials stated that cost estimating best practices are most applicable to new acquisitions.

But our cost estimating guide is intended to be applicable to programmes and assets in all stages of service life, including maintenance and support. Updating standard support levels periodically would lower budgetary risk by using actual metrics to better inform future depot maintenance estimates.

Second, officials described how sustainment and maintenance costs can be uncertain and challenging to estimate, which they mitigate by keeping close tabs on the depot-level maintenance workforce for all assets.

But best practices can help ensure that cost estimates are comprehensive and accurate, which can help ensure that funds will be available when needed.

Third, officials explained that given the current fiscal issues, it would be best to focus on improvements that do not require additional workforce resources.

But well-documented cost estimating process and the use of accurate historical metrics should enable more efficient operations.

By not updating the standard support levels with information on actual expenditures, there is no way to know what the actual depot-level maintenance needs are of its workforce and assets.

Best practices state that programmes should be monitored continuously for their cost effectiveness by comparing planned and actual performance against the approved baseline.

Building effective workforce programmes and cost control requires ongoing revisions to the cost estimate, budget, and projected estimates at completion. Further, a competent cost estimate is the key foundation of a sound budget.
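The bookkeeping implied above can be shown with standard earned-value arithmetic: compare planned and actual performance, then revise the estimate at completion. The dollar figures are invented for illustration.

```python
# Worked earned-value sketch: compare planned vs actual performance and
# project the estimate at completion (EAC). Figures are illustrative.
bac = 1_000_000          # budget at completion (the approved baseline)
earned_value = 400_000   # value of work actually completed so far
actual_cost = 500_000    # what that completed work actually cost

cpi = earned_value / actual_cost   # cost performance index (<1 = overrun)
eac = bac / cpi                    # revised estimate at completion
print(f"CPI = {cpi:.2f}, EAC = {eac:,.0f}")
```

A CPI below 1.0 signals that, unless performance changes, the programme will finish over budget, which is exactly why best practices call for continuously updating estimates with actuals.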

Not updating the estimated costs with actual expenditures could lead to ineffective planning by the workforce responsible for conducting depot-level maintenance.

Finally, officials stated that they do not update their depot maintenance estimates with actual expenditures because doing so would cause individual budget line items to constantly change.

But by not reviewing and updating the standard support levels, there is no way to accurately know what the actual depot maintenance workforce requirements are for each asset class. This can limit the workforce as it seeks to succeed in challenging fiscal times and to allocate resources in support of more modern and capable assets.

Controlling maintenance costs has been a challenge because of problems getting spare parts delivered in time, especially to forward locations. The Autonomic Logistics Information System (ALIS) has been described by the workforce as hard to use and as requiring the use of contractors for longer periods than originally expected.

Some suppliers are delivering late and non-conforming parts, resulting in workforce production line inefficiencies and workarounds. For example, quality issues with insulation on the coolant tubes in the fuel tanks resulted in the contractor delivering fewer weapons systems. According to contractors, late deliveries of parts are largely due to late contract awards and supply base capacity.

The depot workforce is taking a two-pronged approach to cutting down on defects. “Quality starts at the very lowest supplier, and what we are doing is ensuring that the quality coming up the supply chain is as good as it can get.” That means blocking faulty parts from ever getting to the production line, to stop the quality issues from coming up in the first place.
 
To cut down the number of low-observable-capability-related quality escapes, the workforce is also taking steps to make it easier for the Navy to assess the state of workforce skills and tools, ensuring trainers have the administrative authorities and infrastructure to build and sustain the aircraft, whether through increased training or improved practices.

For the programme office, reducing the maintenance rework on the aircraft will help it close in on the “true cost” of the aircraft, allowing DoD to push the price per aircraft as low as possible. It’s also seen as critical for keeping delivery schedule as planned, with no future delays as workforce production ramps up.

While supplier performance is generally improving, it is important for suppliers to be prepared for both production and sustainment support going forward. Inefficiencies, such as the workforce conducting production line work out of sequence, could be exacerbated if late delivery of parts continues as production ramps up.

There aren’t many concerns that the workforce will be able to keep aircraft coming down the line, putting them together and delivering them. They will be able to do that. “But there are concerns the workforce might not be able to do it at the Readiness Rate that our war fighter has asked us to do it--provide for sustainment operations.”

We have assessed the capability of Mission Readiness Tools to predict future requirements and enable system performance assessment. The tool helps the workforce spot main system faults, such as a gearbox fault. By isolating the fault, the workforce can recommend removing components proactively, such as a failing quill gear.

So instead of sending the aircraft gearbox to depot repair, the workforce is able to replace the subcomponent on the flight line, reducing fleet maintenance burden, reducing costs and improving mission performance.

Workforce crews are using condition-based maintenance (CBM) to perform maintenance based on need, rather than on a set schedule. Engineering and logistics assessments are generated by smart aircraft equipped with sensors—similar to tire pressure sensors in cars.

The workforce is at the point of having the capability to process massive amounts of metrics, take action and save millions of dollars on repairs as a result of CBM practices.

To date, the CBM diagnostic strategy has not only saved millions in main gearbox repair costs, but also avoided more than a dozen mission aborts and reduced the number of drain-and-flush cycles associated with the main gearbox.

Our Readiness Toolset is enabling proactive maintenance and supply, aggregating years of historical readiness metrics from disparate systems into a single source to provide cost, inventory, maintenance, supply and operational flight-hour metrics in a standard format.

The tool automates what used to take the workforce months to do and provides information in minutes. Instead of only focusing on the top parts problems that are challenging the fleet today, the workforce has the ability to prevent the next many challenges from actually occurring. It is essential to want to get out ahead of problems, but the workforce must do both—fight the current readiness battle and enable the fleet to avoid the next one.

The Tool produces more than 100 top-level metrics to identify components that perform outside their established parameters. With this information, the workforce can spot indicators of potential readiness issues and address them proactively before they impact the fleet.

For example, the Tool could have helped the workforce prevent a recent fleet maintenance issue involving a potential shortage of brake replacement parts on F/A-18E/F Super Hornets two years before it happened.

Through real-time monitoring, the Tool indicates when the demand or status of a given part is outside normal ranges, producing charts that show when a metric is outside its normal performance bounds. The Tool would have shown an increase in demand for the brake part, which would have prompted the workforce logistics team to question the change.
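The out-of-bounds check described above amounts to a simple control-limit test: derive bounds from historical demand and flag any month that falls outside them. The demand numbers below are invented for illustration.

```python
import statistics

# Sketch of a control-limit check on part demand; history is illustrative.
history = [4, 5, 3, 6, 5, 4, 5, 6, 4, 5]       # monthly demand for a part
mean = statistics.mean(history)
sd = statistics.stdev(history)
upper = mean + 3 * sd                           # 3-sigma control limits
lower = max(mean - 3 * sd, 0)

def out_of_bounds(demand):
    """Flag a month whose demand falls outside the control limits."""
    return demand > upper or demand < lower

print(out_of_bounds(5))    # a typical month
print(out_of_bounds(14))   # a spike worth investigating
```

A flagged spike is the cue for the logistics team to question the change, order more parts, or queue a manufacturing request, as the passage describes.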

Today, when the Tool indicates a change in part usage, the workforce can investigate the cause and determine whether they need to order more parts, if the part is still available, or get the Fleet Readiness Center to manufacture the part.

The Logistics and Industrial Operations competency, in collaboration with NAVAIR Engineering Modeling Division, is also developing forecast Tools designed to predict which components or parts may need to be replaced based on maintenance schedules.

Readiness Forecast Tool uses existing Naval Aviation metrics—such as the current status of parts and aircraft and historical scheduled and unscheduled maintenance workforce rates—to forecast future behaviour and assess the near-term impact of specific actions, such as stocking up on a specific part or upgrading a component that requires frequent maintenance.

Readiness Forecast Tool provides a one-year forecast of ready basic aircraft, non-mission capable aircraft and out-of-reporting aircraft, enabling a quick look on near-term readiness posture, allowing the workforce to understand near-term impacts of top-level actions; essentially ‘what if’ scenarios of major changes/adjustments at system level.

The Tool runs discrete event simulations using probabilistic decisions and business rules to model flight operations and resulting maintenance and supply demand and effects, employing several discrete factors, processes and resources in areas of supply chain, maintenance, flight hour changes, component reliability, life limit increases, depot workforce capacity and performance improvements. Model output includes metrics on the numbers of aircraft in various states of mission capability readiness.
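A discrete-event simulation of this kind can be sketched in miniature: aircraft fail at random intervals, queue for a limited number of repair bays, and the model accumulates non-mission-capable time. All rates, the bay count, and the event structure are illustrative assumptions, not the Tool's actual model.

```python
import heapq
import random

# Minimal discrete-event sketch: a small fleet fails and queues for
# limited repair bays. Rates and fleet size are invented for illustration.
random.seed(4)
HORIZON, BAYS, MTBF, MTTR, FLEET = 1000.0, 2, 120.0, 30.0, 6

events = [(random.expovariate(1 / MTBF), "fail", i) for i in range(FLEET)]
heapq.heapify(events)
free_bays, waiting, down_time, down_since = BAYS, [], 0.0, {}

while events and events[0][0] < HORIZON:
    t, kind, ac = heapq.heappop(events)
    if kind == "fail":
        down_since[ac] = t
        if free_bays:                      # start repair immediately
            free_bays -= 1
            heapq.heappush(events, (t + random.expovariate(1 / MTTR),
                                    "fixed", ac))
        else:                              # all bays busy: queue the aircraft
            waiting.append(ac)
    else:                                  # repair complete
        down_time += t - down_since.pop(ac)
        heapq.heappush(events, (t + random.expovariate(1 / MTBF),
                                "fail", ac))
        if waiting:                        # hand the bay to the next in queue
            nxt = waiting.pop(0)
            heapq.heappush(events, (t + random.expovariate(1 / MTTR),
                                    "fixed", nxt))
        else:
            free_bays += 1

print(f"total non-mission-capable hours: {down_time:.0f}")
```

Re-running this with, say, a shorter MTTR or an extra bay is the "what if" comparison the text describes: the change in accumulated downtime is the predicted readiness impact.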

The Tool allows the workforce to run a simulation on a part, component or system to see how it would impact the number of mission capable aircraft. For example, what would the result be in X number of years if the workforce upgraded the fuel control on a particular aircraft? The Tool will predict the fleet readiness impact of an improvement by accounting for component reliability, impact to supply demand and impact to required maintenance.

So the workforce can anticipate readiness advantages or disadvantages before deciding to invest in making a change. Bottom line: the workforce now employs a series of maintenance planning tools and initiatives that provide personnel at all levels with the right information to make informed decisions.

DoD has asked us to assess the Navy depot maintenance industrial base. Specifically, we are going to identify and evaluate workforce and workload practices that could reduce costs while still meeting fleet requirements.

To do this, we need to estimate current and future workload, and the different types of skills and workforce used to execute this workload. We also need to understand the depot workforce staffing decision process and the policies that constrain the ability to change workforce levels to meet expected demands on the fleet.

There are dozens of shops at each depot and the number of workforce trade-skills mirrors the number of production or support shops. The production shops typically produce a product; this set of shops includes the Welding, Painting and Blasting, Sheet Metal, and Electrical Shops.

The workforce trade-skills associated with these shops are the welder, painter, electrician, and electronics trades. The support shops provide support services; this set of shops includes the Quality Assurance Shop and the Lifting and Handling Shop. The workforce trade-skills associated with the Quality Assurance Shop include the engineer trade; the crane-operator trade is the primary trade-skill in the Lifting and Handling Shop.

This questionnaire asks about the shipyard workforce, and the policies and processes used to run it. We will visit your Job Site after we have received your completed form to review the questionnaire to ensure we understand your responses and to give you an opportunity to submit additional comments.

Many questions ask for direct workforce estimates. If these are not available, or not available in the format we request, we would appreciate any information that will help us understand the issues addressed in the questions.

If you are limited by insufficient space/time to fill out the response, please continue your response in the space provided in the status update application and attach it. Please assign workforce parameters and workload to the designated Trade Skills as is done for reporting workload and workforce.

Labour Type refers to the type of workforce skills used to execute availabilities at all DoD Job Sites. We have identified a number of workforce labour types, including permanent, temporary, contractor, apprentice and military personnel. We also include an 'other' category, which we invite you to use to identify any workforce labour types currently in use that we have not specified.

1. Please provide a list of your shop names and proportion of each workforce trade skill by shop

2. Please provide the average number of workforce assigned to each Shop and Labour Type.

3. Please provide the current distribution of your workforce by years of experience in the field

4. Does the workforce experience distribution differ significantly by Labour Type?

5. Please provide the average percentage of total new hires within each experience level and workforce Skill Category.

6. Please describe your workforce recruitment pool e.g. certain training centres, other shipyards, grown within the organisation, etc.

7. How does the workforce recruitment pool differ by Labour Type and Shop?

8. Are there particular workforce skills or disciplines that are in high demand or for which recruiting is difficult?

9. What causes the high demand for workforce skills, or why is it difficult to recruit for them?

10. What constrains your hiring rate for each workforce Labour Type and Shop, e.g. available recruitment pool, ratio of instructors to new technicians as a function of shop space and labour type?

11. What is the maximum annual growth rate you have sustained as a percentage of the total permanent workforce, by shop?

12. What is the maximum annual growth rate you could sustain as a percentage of the total workforce, by Labour Type?

13. What has been the maximum percentage of the total workforce represented by temporary, apprentice, uniformed and contractor labour?

14. For how long did the yard operate at this workforce percentage level of, for example, contractor/uniformed employment?

15. Please provide the average number of annual recruits for new hires in the apprentice workforce programme.

16. Please provide attrition rates as a percent of the total number of workers in the Skill Category.

17. What constrains your ability to reduce the workforce?

18. How do workforce constraints vary by Labour Type and Shop?

19. If attrition is not sufficient to reduce the workforce, what types of options are there eg, Reduction In Force, etc and what are the implications of executing these options?

20. When do you expect your permanent workforce to typically no longer work for your organisation? Do you expect this to be lower, higher or the same in the future?

21. Please indicate the average (over experience levels) productivity of the various Labour Types relative to that of the permanent workforce.

22. Does relative workforce productivity between the different types of labour depend upon Shop? If so, please enumerate the relative productivity differences by skill.

23. On average, how many years does it take for a new workforce hire to become fully productive within each Shop?

24. What is the workforce productivity, relative to that of a production tradesman, by year of the apprentice programme?

25. What is the maximum annual average percentage of workforce overtime that has been worked within each Shop?

26. What is the maximum peak Overtime worked at any point in time and for how long did the workforce operate at this peak level of Overtime?

27. What limits or constrains your ability to employ overtime for the workforce?

28. Do you cap workforce overtime by dollars, hours, or percentage of straight-time hours? If so, how do these caps operate?

29. How does overtime affect the productivity of the direct workforce? For example, by what proportion does productivity decrease as overtime increases?

30. How does productivity during overtime vary by shop and workforce Labour Type?

31. Do different workforce Labour Types typically perform more, less, or the same amount of overtime?

32. What kind of production-related throughput expectations are set for the workforce? For example, do you use worktime quotas per product?

33. How do you currently measure productivity? How does this vary by shop and workforce Labour Type?

34. For intermediate level and or other non-depot level jobs, provide total direct workforce labour man-days for non-depot level workload by Skill Category

35. Please provide the average annual standard workforce man-days per year expected to be worked by Labour Types and years?

36. Please provide average costs over all workforce experience levels unless otherwise specified

37. Please provide the direct workforce compensation package during straight-time for each type and in each shop
38. Please provide the workforce labour cost factors that get applied to the direct labour rate as a percentage of compensation rate

39. Please describe how the factors are used to estimate total cost of workforce over the time it takes to complete jobs.

40. Please provide the average rate of workforce compensation per work period by Skill Categories.

41. Please provide the average cost of hiring workforce not including the cost of new hire time to become productive and recruiting factors

42. What are the average training costs for workforce including the cost incurred for providing the trainer materials?

43. Please specify how training costs vary by workforce Trade Skill and Labour Type. For example, do apprentices have different training costs from other new hires?

44. Are there specific tasks that temporary workforce labour perform?

45. Under what conditions do you typically subcontract the workforce workload? For example, are there common conditions or patterns?

46. What types of work or tasks do you typically assign to subcontract workforce?

47. Please describe commonly-used sources of subcontract workforce labour and specify whether use of subcontract labour varies by Skill Category.

48. What constrains your ability to hire and use subcontract workforce labour and to use outsourcing?

49. What types of workforce tasks do you typically outsource and does it vary by Skill Category?

50. How would you characterise the quality of the outsourcing market, e.g. number of providers, and how easy is it to hire workforce for special trade skills?
