With our Repair Clinic, we look at the high volume/high value items in the customer’s scrap bin and develop repairs that produce reliable, quality parts. Our large portfolio of repair capabilities is constantly growing, thanks to the Repair Clinics and a large team of development engineers—innovators who are constantly looking for solutions to emerging material stress conditions.
Developing and maturing a “digital representation of that system and how it interacts with its environment” would go a long way toward delivering relevant trainers from the outset, rather than today’s simulators, which often lack the latest updates or model the threat environment inaccurately.
The digital representation would also help with sustainment efforts throughout the life of the program, as models could help show how the system would hold up over time and in different environments.
“We’re in the process of building that model-based specification, that model spec for industry right now. When we get that part right, build that spec, now it’s in industry’s hands” to continue to make best use of that model environment.
We want to learn a lot about model-based engineering with new systems without bogging the program down with new processes and design tools. To supplement that learning effort, the Navy and industry could conduct a “Surrogate System Experiment” to help identify potential kinks in the new process: “to bring this group – representation from the organisations represented in this group today – into a collaborative environment where we can actually build a surrogate program and execute that model, the capabilities-based acquisition model.”
“Find out where the hard spots are, find out where we have to go soft, find out what is that deliverable, what kind of contract will work, where do we have to hand off between military and industry, how do we truly make that integrated data environment work in secure network environment?”
NAVAIR intends to begin implementing the model-based engineering concept into any new design, capability upgrade or sustainment program it can, seeking opportunities to learn as quickly as possible, “as opposed to waiting for the big bang on a brand new program.”
It’s important for industry to engineer the new product within the threat model, instead of today’s practice of, “we write a 500-page specification with 20,000 shall-statements, and we give it to industry and go, here, design this. We don’t give them the threat models, we don’t give them the blue force models, we don’t give them that system of systems family model we just built.”
If industry can work within that model environment, then little changes can be made along the way – swapping one sensor for another, for example – without wondering how that change may affect the aircraft’s aerodynamics or its low-observability or other features that today are designed separately on paper.
With a digital drawing in a threat-representative environment, the sensor could be swapped out and thousands of possible engineering solutions generated until the best one is chosen, all in a matter of hours or days rather than the months it would take to make those changes with today’s processes.
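As a very rough illustration of that kind of automated trade study (the class, the scoring function and every number below are hypothetical stand-ins rather than any actual NAVAIR tool), the loop generates thousands of candidate sensor integrations, scores each one against a simplified stand-in for the model environment and keeps the best:

import random
from dataclasses import dataclass

@dataclass
class CandidateConfig:
    sensor: str              # which sensor option is installed (hypothetical names)
    fairing_depth_cm: float  # notional airframe change driven by the swap

def score_in_threat_model(cfg: CandidateConfig) -> float:
    # Toy stand-in for running one candidate through the shared model environment:
    # penalise notional drag and signature impacts, reward the more capable sensor.
    # A real evaluation would exercise the full digital system model instead.
    drag_penalty = cfg.fairing_depth_cm * 0.8
    signature_penalty = cfg.fairing_depth_cm * 1.2
    sensor_value = {"legacy_eo": 50.0, "new_ir": 80.0}[cfg.sensor]
    return sensor_value - drag_penalty - signature_penalty

# Generate thousands of candidate integrations and keep the best-scoring one.
candidates = [
    CandidateConfig(sensor=random.choice(["legacy_eo", "new_ir"]),
                    fairing_depth_cm=random.uniform(1.0, 15.0))
    for _ in range(5000)
]
best = max(candidates, key=score_in_threat_model)
print(best, score_in_threat_model(best))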
Benefits to this type of design effort continue throughout the test and evaluation, fielding and sustainment phases.
For example, in lieu of a paper-based design review, as industry meets various milestones in maturing its design “let’s take those models and let’s put them back in the tactical scenarios we developed with the operator back in step 1 and just see how it goes.
“What better evaluation or assessment of how the program is maturing than to actually run the current level of maturity of performance that we see in our models through the tactical situations we’ve built with the operators? Because in the end that’s what matters, in the end capabilities-based test and evaluation is about testing the capabilities – it’s not about ensuring industry met every one of those 20,000 specs.
“That’s where we spend all our time today during T&E, validating that industry met the specs. The fleet couldn’t care less, the fleet wants to know that the attributes and the capabilities that they’re counting on will be met.”
That sort of constructive testing – all conducted within the simulated environment – could pave the way to eventual virtual testing once the first hardware is delivered, which would then pave the way to eventual live testing in the field with real operators. That flow would make the best use of everyone’s time and allow any problems to be addressed as early on as possible.
We provide facility-planning services to assist a customer in establishing its own engine maintenance facility, test cell or expansion program. These services can range from a simple summary review to an on-the-spot detailed analysis. As a result of these reviews and analyses, we can provide our customers with a building layout showing department locations, work flow and equipment placement. We can also recommend required facility equipment and estimate costs.
We provide all the technical documentation to fly, operate, and maintain our aircraft engine products. We publish the engine and maintenance manuals, service bulletins, special instructions, illustrated parts catalogs, other technical documents and training materials. All publications are continuously updated to incorporate the latest configurations, repair processes, and operating recommendations.
Most jet engines consist of a compressor, a combustor, and a turbine that together accelerate the flow of air through the engine, thereby producing a thrust force on the aircraft. Spare parts are ordered by our customers through a variety of contractual agreements to support flight line, intermediate and depot maintenance needs.
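For context, and as a standard textbook relation rather than anything stated here, the thrust that results from accelerating that airflow can be approximated by the momentum and pressure change across the engine:

F = \dot{m}\,(v_e - v_0) + (p_e - p_0)\,A_e

where \dot{m} is the mass flow of air through the engine, v_e and v_0 are the exhaust and flight velocities, p_e and p_0 are the nozzle-exit and ambient static pressures, and A_e is the nozzle exit area.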
We take care of your engines so you can focus on flying. Our Engine Centers have developed state-of-the-art materiel management and supply chain management systems, which offer complete Contractor Furnished Materiel supply capability. By using Contractor Furnished Materiel, our customers take advantage of our unparalleled strength in vendor and distributor relationships to minimise delays in part delivery and to reduce overall cost.
Are you getting complaints about job site and equipment maintenance? Are work orders taking too long to complete? If this sounds familiar, maintenance metrics can help you identify causes and solutions. Maintenance planners can use these metrics to pinpoint the real issues and how to fix them. These metrics are not difficult to collect. Use them even when everything seems fine to get a quick measure of your maintenance performance.
Improvement is one reason. For improvement, we need to know where we have been and, hopefully, where we want to go. Another is that people like to get a score or feedback on how the organisation sees their work. That said, while metrics reflect individual performance, the focus of metrics should be to identify issues with the business processes associated with the work, not with the individuals themselves. From the metrics, we can identify trends and patterns. Adding to that, consider: "What gets measured gets done. What gets celebrated gets done well."
Recognise there are two types of metrics, leading and lagging. Both are useful. To better understand the difference, ask yourself whether you are operating within the organisation as Mr. Fix It or just cleaning up the mess. Are you taking the pulse of the organisation or performing an after-action review on last month's performance?
In most organisations, the reality is that everyone is focused on the after-action reviews. Often, the reason for this focus is that those numbers are the most readily available from current reports. They are easy to identify, and producing them is the quickest way to satisfy the demand for metrics. Lagging metrics are like looking in the rearview mirror; they only tell you where you have been, not where you are headed.
As a rule of thumb, you should have multiple leading metrics for every lagging metric. Leading metrics are performance drivers. Utilising them gives you the opportunity to take preemptive action to improve your chances of meeting the desired outcomes, i.e., the lagging metrics. Leading metrics often measure activities or even processes.
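As a minimal sketch of the distinction (the field names, the data and the choice of metrics are illustrative assumptions, not a standard), the first function below is lagging, reporting on work already completed, while the second is leading, reporting on how ready next week's work is before anyone turns a wrench:

from typing import List, Dict

def rework_rate(completed_orders: List[Dict]) -> float:
    """Lagging: % of completed work orders that later required follow-up or rework."""
    if not completed_orders:
        return 0.0
    reworked = [wo for wo in completed_orders if wo["required_rework"]]
    return 100.0 * len(reworked) / len(completed_orders)

def kit_readiness(next_week_orders: List[Dict]) -> float:
    """Leading: % of next week's scheduled jobs with a complete plan and kitted parts."""
    if not next_week_orders:
        return 0.0
    ready = [wo for wo in next_week_orders
             if wo["job_plan_complete"] and wo["parts_kitted"]]
    return 100.0 * len(ready) / len(next_week_orders)

last_month = [{"required_rework": False}, {"required_rework": True}, {"required_rework": False}]
next_week = [{"job_plan_complete": True, "parts_kitted": True},
             {"job_plan_complete": True, "parts_kitted": False}]
print(f"Rework rate (lagging): {rework_rate(last_month):.0f}%")     # 33%
print(f"Kit readiness (leading): {kit_readiness(next_week):.0f}%")  # 50%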
Understand that the metrics an organisation chooses to employ, much like its processes, will drive employee behaviour. As an example, one organisation chose to measure the number of work orders requiring re-approval when the labour or materials cost significantly exceeded the original estimate. This measure is a lagging metric because it was captured after the work had been completed. The re-approval process was designed as a heads-up information-sharing activity to show that more dollars were spent than anticipated. What behaviours did it drive? Planners would significantly overestimate labour hours and contractor/materials costs to avoid the re-approval process.
Look at how the domino effect takes hold from there. Those padded labour-hour estimates were used to create the following week's schedule. Now we aren't assigning enough work to the technicians because the hours were padded. Wrench time suffered. Work orders that were left open after the work was completed became easy targets for technicians or contractors to charge time to while working on other jobs or sitting idle.
Materials for other jobs were charged to those work orders as well. How can we use metrics to drive the right behaviours? Introducing or revising organisational metrics requires training for all stakeholders, not just maintenance personnel. Don't assume that the standard metrics you might take for granted, such as "schedule compliance," are understood by all. Even for this one metric, questions like "What counts toward the metric?", "When is the cutoff point after which added items no longer count?" and "What is a scheduled job?" should be addressed from a learning perspective, even a machine-based one.
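To make those questions concrete, here is a minimal sketch of a schedule compliance calculation in which the answers are written down explicitly; the field names, the cutoff date and the counting rule are illustrative choices, not an industry-standard definition:

from datetime import date
from typing import List, Dict

SCHEDULE_CUTOFF = date(2024, 6, 6)  # jobs added to the schedule after this date do not count

def is_scheduled_job(wo: Dict) -> bool:
    # Counting rule: a "scheduled job" is one placed on the weekly schedule
    # on or before the cutoff, not emergency work slotted in afterwards.
    return wo["added_to_schedule_on"] <= SCHEDULE_CUTOFF

def schedule_compliance(week_orders: List[Dict]) -> float:
    scheduled = [wo for wo in week_orders if is_scheduled_job(wo)]
    if not scheduled:
        return 0.0
    completed = [wo for wo in scheduled if wo["completed_in_week"]]
    return 100.0 * len(completed) / len(scheduled)

week = [
    {"added_to_schedule_on": date(2024, 6, 3), "completed_in_week": True},
    {"added_to_schedule_on": date(2024, 6, 3), "completed_in_week": False},
    {"added_to_schedule_on": date(2024, 6, 10), "completed_in_week": True},  # late add: excluded
]
print(f"Schedule compliance: {schedule_compliance(week):.0f}%")  # 50%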
Before reviewing specific metrics, it should be noted that variations to the following metrics could be defined or utilised based on your requirements. The listing is not intended to be comprehensive, but to provide insight on specific behaviours related to maintenance planning and scheduling. Let's begin with those metrics directly influenced by the maintenance planning and scheduling function.
The following metrics are more general in nature to the maintenance organisation. However, the planning or scheduling role can and often does influence these metrics. Consider the simple metric of "schedule compliance" as an example. If the planner has not correctly identified the materials and parts, or incorrectly estimated the hours required for the job, it may be very difficult to complete the number of jobs that are scheduled. If the scheduler has not coordinated the various crafts and the work cannot be completed in the scheduled window, schedule compliance may be impacted.
Are your metrics heading south or stagnating rather than improving? Wondering how to identify the problems or root causes? Do you know what behaviours the metrics are driving? There is a saying that the "product always follows the process."
"If your system is not working, don't blame the people, blame the system." To that end, where is your mission outcome program to evaluate if the processes are working? Ideally, you should be pulling several completed work orders off the pile randomly every operational time period as a minimum. Gather the planner scheduler, supervisor, technicians, storeroom person, and maybe even the plant manager, as examples and walk the jobs.
When you get to where the work occurred, you should step through metric-type items to determine process effectiveness. Did the planner/scheduler estimate the job duration correctly? Did they have the right parts? Were the parts staged and kitted? What about multicraft coordination? Did operations have the equipment ready based on the schedule? Did the job get completed before the due date? Was any follow-up work required? Was the work order completed and closed in a timely fashion? The primary goal is to determine whether the operational processes worked, but you can also identify performance issues or the need for training, as examples.
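One way to keep those job walks consistent is to score the same checklist every period. The questions below simply mirror the ones in this article; the pass-rate scoring is an illustrative assumption, not a prescribed method:

JOB_WALK_QUESTIONS = [
    "Was the job duration estimated correctly?",
    "Were the right parts identified?",
    "Were the parts staged and kitted?",
    "Was multicraft work coordinated?",
    "Did operations have the equipment ready per the schedule?",
    "Was the job completed before the due date?",
    "Was follow-up work avoided?",
    "Was the work order completed and closed in a timely fashion?",
]

def walk_score(answers: dict) -> float:
    """Return the fraction of checklist items answered 'yes' for one walked job."""
    yes = sum(1 for q in JOB_WALK_QUESTIONS if answers.get(q, False))
    return yes / len(JOB_WALK_QUESTIONS)

# Example: one walked work order with two misses.
example = {q: True for q in JOB_WALK_QUESTIONS}
example["Were the parts staged and kitted?"] = False
example["Was follow-up work avoided?"] = False
print(f"Process effectiveness for this job: {walk_score(example):.0%}")  # 75%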
1. System Test and Evaluation. The use of pilot, prototype, production, or specifically designed networks to obtain or validate engineering data on the performance of the system during the developmental phase, normally funded from the program's RDT&E.
2. Effort associated with the development of any specialised tools or data in support of the system level test program.
3. Test articles that are functionally configured to represent and test the complete defense materiel end item being developed or produced
4. Detailed planning, conduct, support, data reduction and reports (excluding the contract data requirements list data)
5. Test articles functionally configured to represent and test subsystems/components of the defense materiel end item being developed or produced
6. Conduct testing up through the subsystem level, which can be associated with network element acceptance testing
7. Demonstrate that the engineering design and development process is complete, risks have been minimised and the system will meet specifications, and estimate the system's military utility when introduced
8. Determine whether the engineering design is supportable, practical, maintainable, safe, etc. for operational use
9. Provide test data with which to examine and evaluate trade-offs against specification requirements, life cycle cost, and schedule
10. Perform the logistics testing efforts to evaluate the achievement of supportability goals
11. Evaluate adequacy of the support package for the system, e.g., deliverable maintenance tools, test equipment, technical publications, maintenance instructions, and personnel skills and training requirements, etc.
12. Cost, schedule, performance measurement management, warranty administration, contract management, data management, vendor liaison, subcontract management, etc.
13. Support element management, defined as the logistics tasks management effort and technical control, and the business management of the support elements.
14. Logistics management function encompasses the support evaluation and supportability assurance required to produce an affordable and supportable defense materiel system.
15. Planning and management of all the functions of logistics, including: maintenance support planning and support facilities planning;
16. Support equipment requirements determination; supply transportation; provisioning requirements determination and planning; training system requirements determination;
17. Network resource determination; organisational, intermediate, and depot maintenance determination management; and data management
18. Program models, tests and associated simulations, e.g., wind tunnel, static, drop, and fatigue integration ground tests; test bed aircraft and associated support;
19. Qualification test and evaluation, development flight test, test instrumentation, ballistics, range and accuracy demonstrations, test facility operations, test equipment including its support equipment, chase and calibrated pacer aircraft support and logistics testing
20. Avionics integration test bench/laboratory, including design, acquisition, and installation of basic network and test equipment
21. Ability to simulate in the laboratory the operational environment of the avionics system/subsystem
22. Air vehicle equipment consisting of the avionics and/or other air vehicle subsystem
23. Modules required by the bench/lab or flying test bed in order to provide a compatible airframe avionics system/subsystem for evaluation purposes
24. Flying test bed, including requirements analysis, design of modifications, lease or purchase of test bed aircraft,
25. Modification of aircraft, installation of avionics equipment and instrumentation, and checkout of an existing aircraft used essentially as a flying avionics laboratory
26. Avionics test program, consisting of the effort required to develop test plans/procedures, conduct tests, and analyze hardware and software test results
27. Verify the avionics equipment's operational capability and compatibility as an integrated air vehicle subsystem
28. Effort required to design, code, de-bug, and document network programs necessary to direct the avionics integration test
29. Engine military qualification tests and engine preliminary flight rating tests
30. For sea systems: model basin, hydrostatic, fatigue, shock, special sea tests/trials agenda preparation, data collection and analysis, dock and sea trials and hull vibration survey elements
31. Test articles such as Inert Measurement Vehicles, Launch Separation Vehicles, Separation and Control Test Vehicles, Boost Test Vehicles
32. Ordnance: test articles such as Ballistic Test rounds, Inert Measurement rounds, Dummy rounds, Launch Separation rounds, etc.
33. Operational test and evaluation conducted by agencies other than the developing command to assess the prospective system's military utility, operational effectiveness, operational suitability, logistics supportability
34. Include compatibility, interoperability, reliability, maintainability, logistic requirements, etc., cost of ownership, and need for any modifications.
35. Initial operational test and evaluation conducted during the development of a system, such as system demonstrations, flight tests, sea trials, mobility demonstrations, on-orbit tests, spin demonstrations, stability tests and qualification operational test and evaluation
36. Support required to prove the operational capability of the deliverable system e.g., technical assistance, maintenance, labour, material, etc. consumed during this phase of testing
37. Logistics testing efforts to evaluate the achievement of supportability goals and the adequacy of the support for the system
38. Deliverable maintenance tools, test equipment, technical publications, maintenance instructions, personnel skills and training requirements, network support elements
39. Design engineering and production of system or subsystem mock-ups that have special contractual or engineering significance or that are not required solely for the conduct of one of the above elements of testing.
40. Risk reduction facilities where software and hardware can be developed, integrated, tested and evaluated for stand-alone functionality and/or interoperability prior to being fielded.
41. Support elements necessary to operate and maintain, during test and evaluation, systems and subsystems, which are not consumed during the testing phase and are not allocated to a specific phase of testing.
42. Repairable spares, repair of reparables, repair parts, consumables, warehousing and distribution of spares and repair parts
43. Test and support equipment, test bed vehicles, drones, surveillance aircraft, tracking vessels, contractor technical support
44. Operational and maintenance personnel, special fixtures, special instrumentation, etc., that are utilised and/or consumed in a single element of testing and that should be included under that element of testing
45. Deliverable training services, devices, accessories, aids, equipment, and parts used to facilitate instruction through which personnel will learn to operate and maintain the system with maximum efficiency.
46. All effort associated with the design, development, and production of deliverable training equipment and its associated software as well as the execution of training services
47. Overall planning, management, and task analysis function inherent in the work breakdown structure element systems engineering/program management
48. Distinctive deliverable end items of training equipment, assigned by either a contractor or military service, that are required to meet specific training objectives.
49. Operational trainers, maintenance trainers, and other items such as cutaways, mock-ups, and models e.g. Operational Instructional Equipment, Maintainer Instructional Equipment
50. Training course services; contractor-conducted training (in-plant and service training); and the materials and curriculum required to design, execute, and produce a contractor-developed training program, e.g., operator/maintainer instructional tools