Task 1:  Initiate Estimate

Group equipment types into categories and prepare estimates for requests, including specialised “what if” scenarios. Execute cost drills and obtain design requirements by visiting Job Sites. As with all estimates, expectations must be established with field-level operators as well as programme leadership. Some equipment categories may not require a full estimate, focusing only on R&D, build requirements, or O&S costs. Be aware that O&S costs can vary greatly: for example, radars operate at some level all the time, while a single torpedo may only be fired during an occasional exercise.

Task 2:  Define Programme Description

Defining the equipment programme acquisition strategy, technical definition, characteristics, design features, and technologies to be included in the design is key to preparing good estimates. You must work with design experts, logisticians, and test/evaluation experts to create the programmatic and technical baselines required to produce the cost estimate. O&S records must be reviewed, along with “opened and inspected” processes, to determine equipment condition and requirements for overhaul. Programme baseline information, such as the plan of action and milestones and procurement profiles, must also be presented. Supplier teaming arrangements for integrating different equipment sections prior to deployment can make programmes cost more depending on the scenario; for example, additional costs may be incurred for transportation, testing, inefficient sequencing, and Job Site labour. You must always document the key pieces of the programme's technical basis of estimate.

Task 3:  Obtain Work Breakdown Structure

The Budget Exhibit is usually a standard rollup structure for end cost: the Basic Build Cost is the primary category, with others including Plans & Change Orders, as well as Escalation for base-dated estimates. Consult the equipment template for more details, noting that Job Sites usually structure their costs by contract line item numbers, which must be mapped onto the work breakdown structure. You must examine this cost allocation, since mapping can be a major source of cost differences between Job Sites. Like the programme description, the work breakdown structure is often not available when the estimate is requested. Note that structures may not address O&S cost elements.

Task 4:  Establish Estimate Assumptions

Estimates are derived from Job Site reports, rates, and workload forecasts. You must answer questions from oversight groups and recognise the possibility that the estimate may need to be reconstructed at a future date, perhaps years in the future when a new estimating team is in place. You may discover unique challenges: the technical aspects of different equipment types create different labour rates at Job Sites. Many systems utilising plug & play components have the potential to be subject to spiral development over the course of design. Spacing between builds does not allow for rapid or consistent learning, making determinations more difficult. Many assumptions for equipment systems are common, including the basing of the estimate, decisions as to what and where costs are accounted for, and budgeting policies. Acquisition strategy may differ, since equipment is often procured under separate contracts for each type at different times, and Technology Refresh may be required to update systems prior to initial capability determinations to replace dated parts. You must consider the cost of testing for integration of mobile Job Site costs. Continuous system upgrades require sizable recurring engineering effort for subsequent equipment type builds. You must factor in the assumed procurement profile, including potential differences in Job Site participation that result in lead and follow-on service costs for the time frames and quantities of equipment builds.

Task 5:  Select Cost Estimating Tools

You must derive labour and materiel estimates forward priced to the phasing of work, then broken down into base-dated and escalation components. Make-or-buy differences, accounting differences, and workload scenarios are taken into account. Job Site labour rates must be considered for differences between specific equipment types. Throughput costs are inputs for equipment items determined off-line from models, e.g., known costs for components. Obtaining labour and overhead rates from Job Sites requires inputs you are familiar with, and time to incorporate the rates before results can be obtained to complete the estimate. In some cases, special estimating relationships are required for estimating the contribution from equipment types.
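As a rough illustration, the split between base-dated and escalation components can be computed by phasing the base-year cost across execution years and applying cumulative escalation indices. The following is a minimal sketch; the phasing shares and index values are illustrative assumptions, not official rates.

```python
# Hedged sketch: forward pricing a base-dated estimate. Phasing shares and
# escalation indices below are illustrative assumptions, not official rates.
base_year_cost = 1_000_000.0                      # estimate in base-year dollars
phasing = {2025: 0.30, 2026: 0.50, 2027: 0.20}    # share of work by execution year
index = {2025: 1.000, 2026: 1.024, 2027: 1.049}   # cumulative escalation from base year

then_year = {yr: base_year_cost * share * index[yr] for yr, share in phasing.items()}
total = sum(then_year.values())
escalation_component = total - base_year_cost     # escalation portion of end cost

print(f"then-year total: {total:,.0f}, escalation: {escalation_component:,.0f}")
```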

Task 6:  Collect Information based on Type

Information from several equipment types may be used to determine estimates. In particular, when estimating the cost of specific components, several platforms may be similar enough to draw appropriate cost conclusions from. You must collect information relating to the differences between the current and baseline equipment from the Job Site that is the source of such information. When establishing estimating ranges, you must utilise information specific to the Job Site, or normalise information from other Job Sites to make comparisons accurate. If the possibility exists for equipment builds at multiple Job Sites, you must be careful when applying Job Site-specific information as the basis of estimate.
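A minimal sketch of such a normalisation follows, assuming simple multiplicative adjustments for economic base and work content; the index and factor values are hypothetical.

```python
# Hedged sketch: normalising another Job Site's cost before using it as an
# analogy. Index and content-factor values are hypothetical illustrations.
def normalise_cost(source_cost: float, source_index: float,
                   target_index: float, content_factor: float = 1.0) -> float:
    """Bring a source-site cost to the target site's economic base, then
    adjust for differences in work content between the two builds."""
    return source_cost * (target_index / source_index) * content_factor

# e.g., a 2.4M cost from another site, older economic base, ~10% more work content
print(normalise_cost(2_400_000.0, source_index=1.00, target_index=1.18,
                     content_factor=1.10))
```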

Task 7:  Run Model to Generate Point Estimate

Up-front engineering and design estimates are crosschecked against Job Site estimates and a parametric model based on the complexity of the equipment type. Model execution and estimate determination must address acquisition according to type standards and also address O&S. One unique aspect of estimating equipment systems comes into play if the equipment is a subsystem of another platform: in this case, the estimate may be part of a larger estimate, which may affect how a model is selected and the point estimate is generated. Specific models have been created to address acquisition, O&S, and service life costs of equipment. Other specialised models are useful when conducting assessments of alternative plans and tradeoffs between different equipment types.
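For the parametric crosscheck, a cost-estimating relationship (CER) of the general form cost = a · weight^b · complexity is one common shape. The sketch below assumes hypothetical coefficient values purely for illustration; a real model would be calibrated to Job Site history.

```python
# Hedged sketch of a parametric crosscheck. The CER form and coefficients
# (a, b) are hypothetical; calibrate against actual Job Site cost history.
def parametric_cost(weight_kg: float, complexity: float,
                    a: float = 12_500.0, b: float = 0.82) -> float:
    """Illustrative CER: cost grows sublinearly with weight, scaled by a
    complexity factor derived from the equipment type."""
    return a * (weight_kg ** b) * complexity

point_estimate = parametric_cost(weight_kg=3_200.0, complexity=1.3)
print(f"parametric point estimate: {point_estimate:,.0f}")
```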

Task 8:  Determine Risk Factors to Incorporate into Estimate

Risk determination for equipment builds is subject to sensitivity analysis conducted over a range of possible outcomes, including schedule, design maturity and testing, historical costs, technology insertion off-ramps, and Job Site scenario. Other risk areas include labour and overhead rate fluctuations caused by Job Site workload projections, which can produce large effects on costs. You must enhance preparedness to answer questions from oversight groups and provide descriptions of useful estimation techniques, including lessons learned. In many cases, specific risk assessments are required prior to equipment contract award. Options for risk minimisation include creation of technology off-ramps and Engineering Models. You must research technical risks in the programme and the efforts arising from reliance on off-the-shelf products. Unique risk areas for equipment arise because acquisitions change; typically, new capabilities are fielded in different equipment types.
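One simple way to turn these sensitivity factors into a range of possible outcomes is a Monte Carlo draw around the point estimate. The sketch below assumes a triangular distribution with hypothetical low/high bounds; a real analysis would derive the bounds from the risk areas listed above.

```python
import random

# Hedged sketch: a triangular-distribution Monte Carlo around the point
# estimate. The 0.90/1.35 bounds are hypothetical, not derived risk factors.
def risk_range(point: float, low: float = 0.90, high: float = 1.35,
               trials: int = 10_000) -> tuple[float, float, float]:
    """Return the 20th/50th/80th percentile outcomes for the estimate."""
    draws = sorted(random.triangular(point * low, point * high, point)
                   for _ in range(trials))
    return draws[trials // 5], draws[trials // 2], draws[4 * trials // 5]

p20, p50, p80 = risk_range(1_000_000.0)
print(f"20/50/80 percentiles: {p20:,.0f} / {p50:,.0f} / {p80:,.0f}")
```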

Task 9:  Conduct Preliminary Estimate Review 

Reviews help ensure successful completion of the estimate. Estimate presentations may vary in structure and depth depending on the target audience. Depending on the purpose of the estimate and its audience, the techniques utilised may be reviewed during routine drills. Documentation must be thorough, accurate, and complete. Proper documentation ensures requirements are met, provides a cost estimate history trail for future reference, and presents a convincing picture to skeptical parties. Preparedness must be enhanced to answer tough questions and provide useful information that can be applied in the execution of other programmes.

Task 10:  Produce Final Estimate

Production of the final estimate is done in the same manner as other equipment systems estimates, with the exception that your Boss must review the product before it is delivered, and meetings with additional decision makers are often required. The final format of the equipment system estimate will vary depending on who furnishes the system, whether it is for inclusion in an equipment cost estimate or not, and whether it is for forward fit or back fit. As equipment platforms become candidates for use by other services, there will probably be changes to programme structure and presentation so that additional requirements are met.

 
 
Weapons system operations & maintenance [O&M] makes up the balance of weapon system service life costs, typically making acquisition costs a smaller driver. O&M budgets on average account for more than 70% of expenditures. Formulation of O&M budgets must include all costs of operating, maintaining, and supporting fielded systems, including the costs of manpower, equipment, supplies, and services associated with modifying, supplying, and training for the system.

Pay close attention to which budget you are working on and who knows what about budget “execution,” which may be considered “pre-decisional” and therefore not releasable. Always pay attention to which budget numbers you are discussing! President's Budget numbers are different from actual executable dollars. Even veteran Program Executives can get this confused!

Though Program Executives typically spend most of their time dealing with acquisition-related costs, service life costs often play a more prominent role in shaping requirements and acquisition strategy. To use a simplified example, consider a Navy acquisition program to replace the anti-submarine destroyer fleet. A more conventional design based on existing technology and concepts might cost substantially less to design and procure but significantly more to operate.

Conversely, a newer design that used modern materials and advanced information technologies to reduce fuel consumption and crew size might cost the Navy substantially more to design and procure but cost significantly less to operate and maintain compared to the conventional design.

Service life cost considerations would typically steer the Navy toward the newer, modern design, but constraints on Research/Testing/Evaluation and procurement funds might force the Navy to make a decision that would ultimately appear “penny-wise but pound-foolish.”

When the Program Executive for this case study is confronted with the question “how much does it cost,” exactly which cost to cite should be carefully considered. For DoD acquisition Program Executives, “how much does it cost” is never a straightforward question.

Completeness and accuracy are critical to budget exhibits because congressional budget decisions will primarily be based on the content of these documents, regardless of the accuracy of information they contain. Failure to request or to document required funding for spare parts, test support, program overhead costs, long lead items, or any other less obvious aspect of a program’s total cost will not be well received by Congress when funding is under consideration.

 Similarly, while increased budget requests year to year will likely lead to some back and forth, failure to document changes attributable to new acquisition strategies, program delays, or increased costs may go unnoticed in the short term, but eventually will catch up to the program and ultimately will damage service acquisition credibility.

You must consider the distinction between understanding versus fixing. It's not enough that we do our best; sometimes we have to do what's required. Although an effective execution review process is essential to understanding whether a program is performing to standards, the review process itself cannot fix a program that is not obligating or expending to Office of the Secretary of Defense [OSD] standards. Execution review is like a map: it simply tells the Program Executive where they are. Once you recognise the present situation, you must take corrective action or someone else will.

If a program is not keeping up with obligation or expenditure guidelines, the Program Executive essentially has two choices: reallocate funds or relinquish funds. By reallocation, we mean moving funds from one program or project to another. Most programs typically include smaller programs or budgets funded from a common source. The spend plan depicts how funds are distributed among these projects and how each project will execute over the course of the year.

If it becomes evident during an execution review that programs or projects are not meeting their execution forecasts, the first thing a Program Executive should do is look for opportunities to reallocate those funds to other programs or projects that can use them more effectively. The best reallocation processes are the simplest, and if all key decision makers participate in the execution review, reallocation can occur in real time.

When it becomes evident a program is lagging in terms of obligations, all the Program Executive needs to do is turn to the subordinate staff and ask who can use the money and what the “payback” mechanism and timeline will be. If the two affected program or project managers can agree on the swap, and the other key participants agree that the reallocation plan is realistic, executable, and will improve performance, all that is left to do is document the agreement and reallocate the funds through the funds-tracking system.
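A minimal sketch of documenting such a swap follows; the record layout and approval fields are hypothetical stand-ins, not a real DoD funds-tracking schema.

```python
from dataclasses import dataclass
from datetime import date

# Hedged sketch: documenting a real-time funds swap. Field names are
# hypothetical, not a real funds-tracking system schema.
@dataclass
class Reallocation:
    from_project: str             # lagging program/project giving up funds
    to_project: str               # program/project that can execute the funds now
    amount: float
    payback_by: date              # agreed "payback" mechanism/timeline
    approved_by: tuple[str, ...]  # the key participants who agreed

swap = Reallocation("Project A", "Project B", 250_000.0,
                    date(2026, 3, 31), ("PM-A", "PM-B", "PEO"))
print(swap)
```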

An alternative approach is for the Program Executive to “sweep” funds determined to be either excess or under-executing and to maintain a list of unfunded or underfunded programs or projects. Periodically, the Program Executive, using judgement or with advisors at hand, can review available funds and reallocate them against “unfunded requirements.”

Though this is a common approach, there are a couple of reasons we recommend the “real time” strategy. The first reason has to do with the underlying problem: poor execution. If a program is lagging on obligations or expenditures, time is of the essence in correcting the problem. Especially given the contracting lead times that often precede obligation and expenditure events, a good solution today is generally better than a great solution a month from now.

Beyond this, our experience has been that a highly deliberative unfunded-requirements process often tends to break down in the complex situations pervading many program offices. Requirements change over time, cost estimates become out of date, and opportunities that seemed like great ideas at the time no longer seem so appealing.

Keeping accurate lists of projects that can quickly absorb and execute funds can be difficult, time consuming, and ultimately not worth the effort. For all these reasons, combining the funds execution review and reallocation process into a single, real-time event is often the most effective and efficient way to identify and quickly remedy execution shortfalls.

Create a working Program Executive Smartbook for money issues, defined by quality information and attention to detail; if you don't have it at your fingertips, you can't do your job. Unless you have a photographic memory, a best practice is to create a “Smartbook” that contains the critical information you will need to do a good job (a minimal data-structure sketch follows the list):

1.  Budget Formulation: Securing resources leading to better decision-making about future program activities and budget strategies

2.  Budget Approval: All presented expenditures are approved, pending funds availability.

3.  Budget Oversight: Align resources with strategic goals to ensure resources and expenditures are occurring in general conformity

4.  Budget Execution: Future Year funding profile by appropriation and constrained/unconstrained budget line  

5.  Apportionments of funds: Money available to be used for any combination of specified time periods, programs activities, projects

6.  Tracking of funding cuts: Authority to incur obligations and to make payments.

7.  Current appropriation: Commitments, obligations, and expenditures as compared to Service and OSD goals

8.  Baseline history: Original baseline and each subsequent approved baseline

9.  Expenditures: Issuance/disbursement of funds to liquidate obligations

10. Contract closeout: Cancellation or downward adjustment of a previously recorded obligation
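As a minimal sketch of how the Smartbook items above could be kept in one place, consider one record per appropriation line; the field names below simply mirror several of the ten items and are illustrative, not a prescribed format.

```python
from dataclasses import dataclass, field

# Hedged sketch: one Smartbook record per appropriation line. Fields mirror
# items in the list above; names and types are illustrative only.
@dataclass
class SmartbookEntry:
    appropriation: str
    baseline_history: list[str] = field(default_factory=list)       # item 8
    funding_profile: dict[int, float] = field(default_factory=dict) # item 4
    commitments: float = 0.0                                        # item 7
    obligations: float = 0.0
    expenditures: float = 0.0                                       # item 9
    osd_obligation_goal: float = 0.0  # for comparison against Service/OSD goals

entry = SmartbookEntry("APN-1", funding_profile={2026: 4.2e6, 2027: 3.8e6})
```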

 
 
Equipment Upgrade/Repair Simulations based on Work Order Sampling Status Assessment enable the Site Visit Executive to monitor dispatcher learning, behaviour, and physical proficiency on jobs. They establish standards for evaluating dispatchers based on observations of Job Site performance and progress, designed to provide feedback to administrators so Advanced Logistics requirements can be authorised.

Work Order Sampling Observational Assessment helps the Site Visit Executive make instructional decisions based on information about individual dispatch teams, establishing a better understanding of what content should be included in training materials. Specific expectations must be communicated about what dispatchers have to master and what concepts dispatchers are having difficulty learning, so adjustments can be made to the curriculum.

Appropriate status updates of Job Site performance mean establishing the types of work order status expected of dispatch teams in a logical, suitable way to determine the behaviour of the activity being observed. For example, collecting work order samples over a period of time for evaluation against standards established by the Site Visit Executive, based on operational requirements, is an appropriate technique for establishing performance indicators that convey the context of Logistics actions.

Job Site status update programme standards can be used to collect work order observations on instructions with multiple execution steps. This includes being aware of transitions occurring on the job site and writing dispatch team behaviour summaries across the top of the status update application, such as “fills out requirements impacting work order quality quickly and moves on to next task.”

Establishing correct requirements means dispatchers must possess both technical and communication skills. The job can be difficult because implementation often gets mixed with the requirements. Often, administrators have the attitude of "I'll know it when I see it," making it difficult to establish requirements early.

We initially focused on the creation of prototyping/execution tools and traceable specification/requirements tools. Possible next steps could entail moving toward executable requirements/simulation tools, requiring considerable ability for the dispatcher to build field-level demand forecast results into a narrative structure.

Include all dispatcher individual skill sets and the period of time you are planning to observe, and provide for appropriate assessments of key Logistics indicators to uncover whether requirement levels are being met during evaluation of dispatcher skills in communicating work order efficacy using established narrative language. There is no sense being exact about something if you don't even know what you're talking about: there is no point trying to establish exact specifications if the designers and field-level operators cannot even agree on what the requirements are.

It is very important to verify requirements are correctly implemented in the design. This is accomplished by defining traceable requirements: the ability to follow the life of a requirement from its origins, through its specification progress, to its subsequent deployment and field-level use. Observational periods of ongoing refinement and iteration in any of these phases capture the relationships between requirements, specifications, and design.
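A minimal sketch of one way to hold such traceability links follows; the phase names and record layout are assumptions for illustration.

```python
# Hedged sketch: a requirements traceability matrix. Phase names and the
# record layout are illustrative assumptions.
trace = {
    "REQ-001": {"origin": "field-level demand narrative",
                "specification": "SPEC-014",
                "design": "DWG-203",
                "deployment": "Job Site 3, build 2"},
    "REQ-002": {"origin": "dispatcher observation",
                "specification": "SPEC-021",
                "design": None,
                "deployment": None},
}

def untraced(matrix: dict, phase: str) -> list[str]:
    """Requirements with no link in the given phase (candidates for rework)."""
    return [rid for rid, links in matrix.items() if not links.get(phase)]

print(untraced(trace, "design"))   # -> ['REQ-002']
```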

Requirements also exist in equipment upgrade/repair simulation application tools. Forward planning allows designers to capture system requirements and designs in metrics-based executable system models. Models can be evaluated and simulated to ensure requirements correctness. Application Workbenches are established as system-level simulation tools focusing on behavioural aspects of observation assessments and evaluation of dispatcher performance.

It would be a good idea to establish a technique for assisting in a structured process for determining correct, accurate requirements. In particular, our application provides for structured equipment product planning that enables dispatch teams to clearly specify field-level requirements in narratives before evaluating each proposed product/service capability systematically in terms of its “Impact Proximity” for meeting field-level requests. Metrics display relationships between, for example, field-level requirements and narrative system features.

The Site Visit Executive must pay attention to key words and phrases used in narratives when assessing the quality of work order performance indicators, e.g., “Participates in initial Prototype design ...”, “Uses integrated status updates...”, “Understands requirements for unit/function testing...”, “Begins to create field-level acceptance ..”, “Describes standards of quality assurance...”, “Performs training at installations..”, “Demonstrates ability to rework requirements design..” These words and phrases, in conjunction with work order examples, can assist with composition of accurate status updates.

Traceable assessments allow for allocation of product requirements early in the process, reducing the problems of correcting defects caused by untraceable components in the integration and system test phases. Providing traceable assessments also allows for accountability in making sure work order project milestones are approved, deliverables are verified, and field-level units are satisfied. Traceable assessments also keep information organised during changes in the composition of dispatcher team skill sets and levels.

The Site Visit Executive must establish narrative limitations on observational plans so that fewer, more descriptive metrics of work order content are collected. Your goal is to classify observations across multiple performance indicators in order to obtain a variety of skill type expectations of quality job execution, while establishing collection standards. Standards must include agreed-upon entry of metrics limited to practical amounts, so a clear picture of dispatcher performance levels is provided for subsequent use in establishing standards.

One of the most important things in getting started with narrative structure is to ask questions. Context-free questions are high-level questions posed early in a project to obtain information about the broad scope of the design problem and potential solutions. Examples of context-free questions include “What installations will be affected?” “What is the reason for solving this problem?” “What field-level scenarios is the product likely to encounter?” “What are the trade-offs between time and value?” These questions force both sides, designers and field-level operators, to look at higher-tier issues. Also, since these questions are appropriate for any project, they can be prepared in advance.

Let's look at observations collected for the performance level indicator “Shows signs of meeting standards of beginning communication levels for work order concepts.” At the end of each dispatcher team performance rating period, the Site Visit Executive reviews skills checklist ratings so a narrative summary report can be established for each dispatcher skill set level. Unclear requirements can be caused by poor choice of words or unstructured introductions of key elements. It is important to eliminate, or at least reduce, ambiguities as early as possible because cost/benefit ratio metrics can be compromised as operations progress.

You must summarise dispatcher knowledge and write narrative commentary describing the strengths and weaknesses demonstrated in execution of work orders in each Logistics domain. To determine dispatcher progress, look at the number of “Jobs Not Yet in Process” and proficiency ratings in each domain. If assessments during observation periods yield more process proficiency rating results, then dispatch teams are demonstrating expected progress.

When writing the narrative summary, you must convey how dispatch teams are progressing. Although two dispatcher teams may have identical ratings for communication of work order issues, it is unlikely both teams will have exactly the same skill set and approach to work order execution. Moreover, even if evaluations of two dispatcher teams indicate similar skills, each team probably does not demonstrate those skill levels in the same manner.

Object orientation is a common means of describing complex systems and representing specification narratives, but no details are given on how and when features are linked together during progress in systems creation. The main concern about using object-oriented narratives for real-time embedded systems is the speed and size characteristics of the Application to be utilised.

Equipment Upgrade/Repair Simulation restrictions in structural properties have been cited as one shortfall. For example, there are no predefined application stereotypes to help improve standardisation, no capability to capture information in depth, and the potential for complete description of operational system properties may be limited. Another shortfall deals with deficiencies in simulated schedule concurrency. However, our application concepts are still in the initial creative stages, and only time will tell if they will be effective in system specifications.

Some points in support of object-oriented narratives for embedded systems include requirements for objects to be efficient, so the Site Visit Executive can write about larger systems with fewer defects. Results are obtained in less time using simulation techniques instead of structured methods, and advances can be implemented in assembly narratives, among others.

The Site Visit Executive has highlighted that, in defining system requirements, there is usually a tendency to cover areas related to performance as opposed to areas related to support. This means emphasis is placed on only part of the system and not the whole system. It is essential to take the entire system into consideration when defining system requirements. Summaries detailing results from evaluation of established work order proficiency standards must consist of individualised narrative samples for each dispatcher team assessing the following skill types/levels:

1.  Requirements/Prototyping: Specs established in a systematic way to ensure accuracy & completeness

2.  Initial/Detail design:  Precise, verified functions of design behaviour, or system characteristics/procedures determining satisfaction of provisions

3.  Formal product integration: Specifically established functions, attributes, constraints, preferences & expectations of equipment products.

4.  User status updates: When requirements are complete, it is just as important to specify what the system is not supposed to do as what it is supposed to do

5.  Unit/Function testing: Constructed primarily to test the completeness, accuracy, clarity, & conciseness of requirements

6.  Integration/System testing: Creation of dependable, embedded systems to interact with outside field-level operations

7.  Field/Acceptance testing:  Used on regular basis to determine if field levels will be satisfied with equipment product

8. Quality assurance:  Quality preferences on desired condition are placed on attributes to be defined in addition to constraints

9. Installation/training: Operational deployment/distribution to include quantity of equipment product & location

10.  Requirements/Design Rework: Revisit requirements & specifications to fix encountered problems

 
 
A key component of any performance logistics programme implementation is establishing a sound set of metrics. Since the main purpose of the programme is ‘buying performance,’ what constitutes performance must be defined in a manner where achievement of performance can be tracked, measured, and assessed.

Identification of top-level metrics achieves this objective. An effective programme implementation depends on metrics accurately reflecting User Goals established as effective measures of support provider performance.

Logistics Response Time is the period from when a logistics demand signal is sent to the satisfaction of that demand. ‘Logistics demand’ refers to systems, components, or upgrade/repair resources, such as labour, required for system logistics support.

Performance Logistics Metrics must support the desired outcomes we have highlighted. Performance measures must be tailored by DoD to reflect specific Service definitions and the unique circumstances of support logistics arrangements.

One of the most critical elements of Performance Logistics strategies is to tailor metrics so the operational success of system approaches is realised and the reliability of metrics aligns well with the support provider's scope of responsibilities. Support providers must be fully responsible for meeting metrics defined in established contracts resulting from programme actions.

So there must be consistency between the scope of product support responsibilities and the identified metrics. If Task initiatives do not perform all functions contributing to operational availability, consideration must be given to identifying appropriate metrics to hold support providers accountable.

While objective metrics should form bulk of performance evaluation, some elements of product support requirements might be more appropriately evaluated subjectively by DoD. This approach allows some flexibility for adjusting to potential support contingencies. For example, there may be different field-level unit priorities to be balanced with overall objective measures of performance.

Equipment Upgrade/Repair Simulation conclusions and recommendations must address quantitative and qualitative logistics costs/benefits, as well as risks associated with expected unknown areas that have not been quantified, such as changes in the purchasing and provision of spare parts supply lines at installations.

Affected supply line processes must be identified, along with the specific metrics to be used for assessments. Utilising existing logistics and past site visit operational information to define sustainment baselines is required as well. Affected purchasing process parameters must likewise be identified, with specific metrics for assessment.

Next steps and challenges for Equipment Upgrade/Repair Simulations include the capacity for the applications described in this report to derive relevant metrics, most of which currently consider only a subset of missions and focus on equipment-specific planning factors.

Future work will expand the application to include other missions and will include additions or process advances to existing features; for example, the addition of a consistency metric for relative support task importance selection would be a great idea. In accounting for multiple missions, we face two significant challenges.

The first challenge is how to deal with common support tasks when considering multiple missions. It may be the case that establishing just a single command centre is all that is required to accommodate multiple missions, but the equipment needed to support each mission may differ based on metric construction. In other words, although the support task is “common,” there may be unique, mission-specific requirements for accomplishing it.

The second challenge concerns sequencing tasks and assigning metrics to reflect relative importance at the support task level versus the mission level. A typical example is transport of equipment to a new staging area subsequent to support.

If mission A is designated more important than mission B, does that mean that all support tasks associated with mission A have absolute priority? If not, how do we provide User with the ability to designate metrics exceptions at the task level?

Equipment availability is a system performance parameter providing insight into the probability that an item or system will be available to be committed for a specified requirement. Depending on the application utilised by the User, availability can be defined to include reliability, maintainability, and logistic support information. For administrative purposes, the ability to quantify availability metrics in terms of all contributing elements is essential.

This report discusses steady-state operational availability metrics to be used by DoD in determining aircraft fleet requirements. The availability model embodies scheduled and unscheduled maintenance and allows for assessment of “Impact Proximity” using in-service equipment upgrade/repair Job Site information.

Metrics are sensitive to fleet size, aircraft flying rate, frequency of grounding events, aircraft maintainability, scheduled inspection frequency, and scheduled inspection duration. The predictive capability of availability metrics provides for smart upgrade/repair support capability decision making.
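A minimal sketch of a steady-state availability calculation that keeps the scheduled and unscheduled contributions separate follows; the parameter values are illustrative, not fleet data.

```python
# Hedged sketch: steady-state operational availability with scheduled and
# unscheduled downtime kept separate. All input values are illustrative.
def operational_availability(mtbme_hr: float, unsched_dt_hr: float,
                             insp_interval_hr: float,
                             insp_duration_hr: float) -> float:
    """Ao = uptime / (uptime + downtime), expressed per operating hour:
    unscheduled downtime accrues every mtbme_hr of operation; scheduled
    downtime accrues once per inspection interval."""
    unsched_per_hr = unsched_dt_hr / mtbme_hr
    sched_per_hr = insp_duration_hr / insp_interval_hr
    return 1.0 / (1.0 + unsched_per_hr + sched_per_hr)

print(operational_availability(mtbme_hr=150.0, unsched_dt_hr=12.0,
                               insp_interval_hr=600.0, insp_duration_hr=48.0))
```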

We have illustrated an application of using in-service upgrade/repair assessments to construct a supply line risk availability model based on limited sets of metrics. This capability can be of great value in predicting availability and performing impact assessments on the parameters that affect availability.

We created Logistics Scorecards with an application allowing users to drill down from higher, aggregated metrics to lower, detailed metrics, enabling quick diagnosis of low-performing supply line elements. For example, users could start by first implementing strategic, executive scorecard metrics in a fashion aligned with existing capabilities.

Implementation of automation processes could follow, through use of new, innovative application tools and the addition of more tactical, operational scorecard metrics. As application users take more capabilities on board in the area of supply line performance metrics, future automation of the process can be implemented over time.

The discussion provided here details the existence of multiple design approaches for applying availability metrics in use today. The techniques used here, however, allow for separation of scheduled and unscheduled upgrade/repair Simulation parameters within a single availability metrics model.

There is no reason why the approach presented here cannot be used by all DoD Services to promote the utility of in-service equipment availability applications. Or DoD can keep chugging along, reporting short-term operational success while simultaneously shortchanging the interests of Field-level Troops by not applying adequate resources toward product quality, innovation, and sustainment requirements.

DoD must identify specific metrics critical for product support meeting the schedule expectations of installations. Ensure definitions are provided within assumptions and appropriate measures are identified following site visit investigation. Recent site visits have uncovered instances of using too many evaluation metrics.

Growth in the total number of metrics must be minimised so that a reasonable amount of effort is required to obtain and assess information, ensuring reasonable conclusions and recommendations arrive quickly at Job Sites.

Impacts must be assessed using primary upgrade/repair metrics, including operational availability, materiel readiness, total cost/benefit of in-house supply line provisions, and mission downtime. Identify and define any potential automated decision processes, planned integration, and installation expectations in key Simulation assumptions.

Linking metrics to existing DoD measures of performance and reporting systems is key consideration. Many existing logistics metrics can be related to top field-level performance outcomes.

Although actual performance logistics strategies, as implemented, sometimes delineate metrics at levels lower than top field-level measures, e.g., system availability, it is important that the initial identification of performance outcomes be consistent with the metric areas outlined below (a short computational sketch follows the list):

1.  Operational Availability: Percent of time system is available for mission or ability to sustain operations tempo

2.  Operational Reliability: Measure of system in meeting mission success objectives such as sortie, tour, launch, destination reached, or other metric specific to service/system

3.  Operational Maintainability: Time required to return failed repairable system to service—usually sum of model sets describing diagnosis, repair, inspection, reporting & evacuation

4.  Cost/Benefit per Unit Usage: Total operating cost divided by the appropriate unit of measurement for assigned system such as flight hour, launch, transit distance, or other metric specific to service/system

5. Logistics Footprint: Job Site size or ‘presence’ of deployed logistics support required to deploy, sustain/move system such as large equipment caches,  labour, & transit assets

6. Logistics Response Time: Period between work order submission & completion, varying with complexity and Job Site size/generation standards

7.  Work order plans: Percentage of work orders scheduled in the period that are completed/closed, plus work order backlog tracks/trends.

8.  Automated Supply Line Support: Frequent, recurring job ticket tagging/label schemes indicating instances where field levels derive support value from the product

9.  Supplier structure quality: New product introduction, defined as the percentage of new products introduced to field levels that hit time, volume & quality targets

10.  Quick transport functions: Ratio of on-time pick-ups to total deployments in the period, indicating performance & product support service levels
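As the promised sketch of a few of these metric areas, the functions below compute work order completion rate, mean Logistics Response Time, and the on-time pick-up ratio; the record fields are hypothetical stand-ins for whatever the reporting system actually holds.

```python
# Hedged sketch of three metric areas above. Record fields ("closed_in_period",
# "demand_hr", "satisfied_hr") are hypothetical reporting-system stand-ins.
def work_order_completion_rate(orders: list[dict]) -> float:
    """Item 7: share of work orders scheduled in the period that closed."""
    return sum(1 for o in orders if o["closed_in_period"]) / len(orders)

def mean_logistics_response_time(orders: list[dict]) -> float:
    """Item 6: mean hours from demand signal to demand satisfaction."""
    return sum(o["satisfied_hr"] - o["demand_hr"] for o in orders) / len(orders)

def on_time_pickup_ratio(on_time_pickups: int, total_deployments: int) -> float:
    """Item 10: on-time pick-ups divided by total deployments in the period."""
    return on_time_pickups / total_deployments

orders = [{"closed_in_period": True, "demand_hr": 0, "satisfied_hr": 36},
          {"closed_in_period": False, "demand_hr": 10, "satisfied_hr": 82}]
print(work_order_completion_rate(orders), mean_logistics_response_time(orders))
```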

 
 
The Site Visit Executive has recognised the importance of strategic sourcing established in supply line connections for DoD as the Services continue to execute work orders aimed at improving the efficacy of the equipment sustainment enterprise at critical Logistics Centres. Here, our goal is to create a set of criteria for product support sourcing decisions so weapon system programme offices can apply smart measures early in the acquisition process.

Supply line connection criteria will help determine whether the DoD enterprise has the capability to sustain systems and, if not, decide whether it would be beneficial to establish work order capability at Logistics Centres. If new systems can be routinely sustained within DoD mission sets, they may fit into existing enterprise structures and available capabilities, with the potential to be expanded if necessary to meet expanding field-level demands.

How does the Logistics Centre systems approach work? It develops criteria for prioritising spares item component procurement established in supply line connections using sequence assignment techniques. Candidate buys derived from supply line connections are ranked in order of cost/benefit ratio, i.e., the improvement in availability from adding spares to the mission cache divided by the unit cost, then incrementally added to the sparing solution in diminishing order until target availability and acceptable fiscal goals are reached.
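A minimal sketch of that marginal-analysis loop follows, under the assumption that each candidate buy carries a unit cost and an availability-gain estimate; the fixed gains here are placeholders for whatever the sparing model actually computes.

```python
# Hedged sketch of the marginal-analysis sparing loop: repeatedly buy the
# candidate with the best availability gain per unit cost until the target
# availability or the budget is reached. Gains are fixed placeholders; a real
# model would recompute diminishing gains after each add.
def build_spares_mix(candidates, base_ao, target_ao, budget):
    mix, ao, spent = [], base_ao, 0.0
    pool = sorted(candidates, key=lambda c: c["gain"] / c["cost"], reverse=True)
    for c in pool:
        if ao >= target_ao:
            break
        if spent + c["cost"] > budget:
            continue               # skip unaffordable items, try cheaper ones
        mix.append(c["item"])
        ao += c["gain"]
        spent += c["cost"]
    return mix, ao, spent

candidates = [{"item": "pump", "gain": 0.020, "cost": 40_000.0},
              {"item": "valve", "gain": 0.008, "cost": 6_000.0},
              {"item": "card", "gain": 0.015, "cost": 25_000.0}]
print(build_spares_mix(candidates, base_ao=0.82, target_ao=0.86, budget=60_000.0))
```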

The framework presented in this report addresses these sustainment planning challenges in several ways. First, the approach provides a repeatable, model-based decision tool that does not require a large amount of detailed requirements. Instead, we propose using readily available upgrade/repair work order information in Logistics Centre supply line connection catalogues of field-level component demand.

Second, this approach applied at Logistics Centre considers upgrade/repair source decision making in context of broader DoD enterprise. That is, we examine large, complex “systems of systems,” such as fighter aircraft, from the perspective of technologies and subsystems, some of which are common across different aircraft.

Finally, the concepts promoted in this report are potentially applicable to other aspects of sustainment planning, such as directing upgrade/repair sourcing mixes and informing other DoD sustainment responsibility structures. We have detailed critical factors including Logistics Centre activation, life-extension work order engineering, supply line connection establishment, and product support integration.

DoD guidance states Logistics Centre work orders must be assigned on the basis of a best-value determination for meeting field-level demands. But this guidance does not specify how to determine “best value.” To help fill this gap, this report presents an approach to determining best value when assigning Logistics Centre workloads.

Spare item component models are flexible and adaptive, allowing for extensive variation in user-defined systems and field-level mission scenarios. Although some Logistics Centre users focus the model on estimating field-level demands for spare components for only the most critical items under constant, i.e., steady-state, operating conditions, other users consider every reparable item over wider ranges of operating conditions.

In either case, once system and field-level demand scenarios are defined, the basic question for users at Logistics Centre remains the same: What mix of spare parts is required to keep system at some desired level of operational availability so specific work orders can be executed within acceptable time frames?

In the longer term, well-designed system-oriented models can help shape the future sustainment enterprise by giving DoD the opportunity to examine subsystems across weapon system types. In doing so, Logistics Centres are required to evaluate the effects of new or emerging technology on their subsystem strategies. The model determines component requirements based on the explicit contribution of each item to the overall performance of the system.

The probability-based component model utilised at the Logistics Centre employs a systems approach to produce an optimal spare part solution for critical equipment. This means solutions are realised where no other mix of items can provide a greater system-wide effect on work orders. In fact, the systems approaches we have presented do not produce only one solution, but an entire range of smart solutions meeting field-level demand signals.

The model can deal with many different operating scenarios and changes during the course of a field-level mission scenario. For example, operating tempo may be steady state, dynamic, or both. Also, equipment upgrade/repair and resupply may be suspended for some time periods, maintenance training techniques may change as result of common system implementation progress, etc.

When Logistics Centres perform work order reviews across weapon systems and across technology types, DoD will be better positioned to identify sustainment system it would like to have in the future. This framework can also inform decisions about other product support activities, such as establishing quality supply line connections or improvements in sustainment engineering process.

For example, information derived from framework can support decision making when there are field-level demand options for division of Logistics Centre work orders between product support providers. Moreover, we have established potential of framework to inform discussions between weapon system designers and DoD concerning engineering projections of reliability and equipment upgrade/repair parameters by providing basis for comparison with reviews of legacy aircraft.

The Logistics Centre provides functional parts to meet DoD requirements for equipment subject to normal field-level demands, driven by equipment wear and tear associated with real-world, mobile missions and planned upgrade/repair intervals. The ability to execute work order tasks on subsystems, such as overhaul of structural elements, engines, electronics, and hydraulics, ensures operational success in real-world, mobile scenarios.

The Logistics Centre Repair Engineering Team is dedicated to establishing new enhancements in upgrade/repair reliability to increase the mission readiness of weapon systems. Spare parts are ordered by DoD through a variety of supply line connection contractual agreements to support field-level flight line demands and Logistics Centre equipment upgrade/repair requirements in work orders.

The requirements capability of the component model is well established: Logistics Centre users enter availability specifications, and an optimal spares mix is determined so the field-level target is reached. The spares requirements can support new systems, e.g., initial provisioning, or existing systems, e.g., periodic replenishment.

The work order valuation capability approaches the spare component mix problem from the opposite direction. The evaluation mode of the model enables Logistics Centre users to select a required item mix before determining the system availability provided by the specified mix under many potential field-level conditions.

The model also considers several item-specific factors, such as changes in field-level demand signals per flight sortie operating hour, Logistics Centre upgrade/repair times, transit times, supply line connection quality/quantity per application, and procurement lead time.

Catalogues of previous supply line connection runs are established for comparative purposes. Retrieval capabilities provide users with an easy way to save and restore spares assessments, i.e., model output, along with related item-level information, i.e., model input, in user-specified locations.

The model allows users to compare a variety of input/output results at both item and system levels. For instance, users can compare any two model-generated spare component requirements for identical weapon systems. By simply selecting two previous model results, side-by-side comparisons of the spares requirements are presented and sorted so items are displayed in order of increasing or decreasing differences.
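A minimal sketch of that side-by-side diff follows, assuming each saved run is a simple item-to-quantity mapping; the item names and quantities are illustrative.

```python
# Hedged sketch: diff two saved spares runs and sort items by the size of
# the quantity difference. Run contents are illustrative.
def compare_runs(run_a: dict, run_b: dict) -> list[tuple]:
    items = sorted(sorted(set(run_a) | set(run_b)),
                   key=lambda i: abs(run_a.get(i, 0) - run_b.get(i, 0)),
                   reverse=True)
    return [(i, run_a.get(i, 0), run_b.get(i, 0)) for i in items]

print(compare_runs({"pump": 4, "valve": 2}, {"pump": 6, "seal": 1}))
# -> [('pump', 4, 6), ('valve', 2, 0), ('seal', 0, 1)]
```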

To ensure availability, Logistics Centre maintains large supplier base with expertise in supply line connection participation to provide semi-finished items for final processing at Logistics Centre. All supply line connection inputs to Logistics Centre must satisfy stringent, quality work order standards established by DoD for uncompromised mission readiness.

The Logistics Centre Upgrade/Repair Clinic process is typical of aggressive approaches to determining field-level demand levels so that work order cost/benefit determinations are brought to bear on equipment ownership status. With the Upgrade/Repair Clinic, high-volume/high-value items in the DoD scrap bin are evaluated and upgrades/repairs are performed so parts with reliable, quality characteristics are utilised.

The Logistics Centre has established large work order portfolios with constant growth in upgrade/repair capabilities, thanks to the establishment of Repair Clinics and engineer teams: innovators who are constantly looking for supply line connection solutions to emerging materiel stress conditions defined by field-level demand scenarios.

The Logistics Centre provides site planning services so DoD can establish its own equipment upgrade/repair capabilities at the field level, as well as test cells for field-level operations or work order expansion programmes. These services can range from simple Logistics Centre summary assessments to on-the-spot detailed status reviews of field-level demand signals.

We have promoted the design of models focused on doing one thing very well: assessing availability-based component work order recommendations for critical systems at established Logistics Centres. The model is not a fully featured, integrated component solution for absolute assessments of field-level demands. More realistically, it is a robust, highly adaptable assessment tool expressly designed for answering the complex fiscal and availability challenges pervasive at DoD. In this context, the component model is a perfect complement to the tactical and execution capabilities of many mission packages.

As a result of these reviews, the Logistics Centre can provide DoD with Job Site layouts showing team locations, work order flow, and equipment placement. The Logistics Centre can also recommend required site equipment and estimate the cost/benefit of supply line connection results to the bottom line of DoD operations.

Training in work order design and utilisation is offered to DoD for field-level and Logistics Centre equipment upgrade/repair. An advantage of providing field-level training at other locations is that DoD can avoid extended time away and expenses for its maintainers.

Training at Logistics Centre offers world class work sites with modern training tools for critical equipment support required by DoD. Subject areas covered in maintenance training include subsystem exposure, test cells for field-level operations, Upgrade/repair testing of system controls and work order procedures.

In addition to providing site work order planning services, the Logistics Centre provides consulting services so supply line connections are established that improve the potential of realising the ambitious goals of productivity and efficiency, leading to the best satisfaction of DoD readiness requirements based on field-level demand signals.

Logistics Centre utilises equipment upgrade/repair manuals updated to include product support status, service bulletins, special instructions, illustrated parts catalogues, other technical documents and training materials in executing work orders at Job Sites. All supply line connection reference materials are continuously updated to incorporate latest status configurations, upgrade/repair processes & operating recommendations.

Site Visit Executive has created state-of-the-art administrative structures for materiel product support to meet field-level demands and offer complete work order capability advantages. By using results from supply line connections, DoD can take advantage of Logistics Centre unparalleled strength in supplier status updates to minimise delays in real-world, mobile part delivery and improve cost/benefit equation of missions executed by the Force.

Logistics Centre takes care of your aircraft so you can focus on flying. Logistics Centre services include:


1.      Parts Requirements Forecast for Work Orders

2.      Secured Materiel to Support Schedules

3.      Upgrade/repair Sourcing Administration

4.      New or Serviceable Materiel as Scrap Substitute

5.      Dependable Work Order Turn-around Times

6.      Focus on Item Type Source Availability

7.      Common Component Scale Economies

8.      Specific Item Location Model Input/Output

9.      Side-by-Side Item Requirement Comparisons

10.    Equipment Commonality & Item Quantity