Marine Magnet Dispatch Service Centre

Top 10 Questions for Shipyard Engineers Conducting Digital Twin Model/Simulation of New Job Site Configurations

4/21/2019


 
As part of its effort to modernise and optimise shipyards, the Navy is in the early stages of work site simulation plans, meant to show how potential new layouts of the yards’ welding shops, pump shops, material storage, office spaces and more could contribute to better workflow and reduced man-hours.

The Shipyard Infrastructure Optimisation Plan effort will also include repairing and enhancing dry docks and replacing aging capital equipment, but the Navy hopes to hash out the layout issues early on to avoid installing large capital equipment and then having to move it a few years later.

“The Shipyard Infrastructure Optimisation Plan articulated a vision that shipyard infrastructure has three interdependent components: the dry docks, the facilities and the capital equipment; and that these configurations are fundamentally linked to the shipyards’ ability to execute the mission they are tasked to do.

“We are utilising modeling and simulation as a tool to integrate these components to better inform the desired infrastructure layout. Through this, the Navy will be in a better position to make meaningful, long-lasting investments that not only address the condition of the facilities and equipment but also change the way the work is conducted. Once we’re finished, the Navy will recover more than several hundred thousand work days per year, every year.”

A first step is the Digital Twin effort, where the yards as they exist today will be put into a modeling and simulation system that can then generate numerous options to optimise work flow.

We have drafted a notional new layout for the yard and are running simulations of transit times for various types of personnel to understand which shops, offices and other facilities need to be located near one another to optimise travel flow.

The impact of a single worker having to walk an inefficient path from location to location to do their job may seem minute, but multiplied across the total workforce over the span of a maintenance availability, an inefficient yard design can significantly extend the time it takes to repair a ship.

The workforce collectively walks the circumference of the earth every day getting to and from the worksite. A 5 percent inefficiency in time spent walking around the yard could be eliminated with a more appropriate flow from the shops down to the dry docks.
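To make the arithmetic concrete, here is a minimal sketch, not the Navy’s actual model, of how a digital twin could compare daily walking time under two candidate yard layouts. All facility names, coordinates, trip counts, workforce size and walking speed are invented for illustration.

```python
# Toy layout comparison: straight-line walking distances only.
from math import dist

WALK_SPEED_M_PER_MIN = 80.0  # assumed average walking pace (metres/minute)

# Hypothetical facility coordinates (metres) for two candidate layouts
layouts = {
    "current":  {"weld_shop": (0, 0), "pump_shop": (400, 50),
                 "dry_dock": (900, 0), "office": (100, 600)},
    "proposed": {"weld_shop": (0, 0), "pump_shop": (150, 80),
                 "dry_dock": (300, 0), "office": (120, 150)},
}

# Hypothetical trips each worker makes per day between facility pairs
daily_trips = {("weld_shop", "dry_dock"): 6,
               ("pump_shop", "dry_dock"): 4,
               ("office", "weld_shop"): 2}

def daily_walk_minutes(layout):
    """Walking minutes per worker per day under a given layout."""
    return sum(n * dist(layout[a], layout[b]) / WALK_SPEED_M_PER_MIN
               for (a, b), n in daily_trips.items())

for name, layout in layouts.items():
    per_worker = daily_walk_minutes(layout)
    # Scale to a notional 10,000-person workforce over a 180-day availability,
    # expressed as eight-hour work days spent walking
    lost_days = per_worker * 10_000 * 180 / 60 / 8
    print(f"{name}: {per_worker:.0f} min/worker/day, ~{lost_days:,.0f} work days per availability")
```

Even this toy comparison shows how small per-trip savings compound across a large workforce and a months-long availability.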

Workers train weekly to maintain readiness, so when the call comes to get the job done, they are prepared. “This trainer allows firefighters to work in a virtual, controlled environment that detects whether agent is applied correctly to extinguish fires. The trainer mimics the heat a firefighter feels and presents a multitude of fire scenarios, including building and aircraft fires.”

Not only is the simulator safer, but it is also more cost effective and easier to use. Now firefighters do not have to load up their rigs with equipment, water, gas and wear heavy equipment when training. They are able to train in their training rooms without leaving the facility.

The simulator consists of a head-mounted display, nozzle and protective gear. The computer’s central processing unit is housed in the simulated firefighter air tank system that the individual wears on their back. This simulated tank is much lighter than what firefighters would normally wear during a call.

“Lightweight means less physical stress. Our vehicles and equipment stay in service, allowing for faster responses to real-world emergencies that occur while we’re training. While the simulator allows skill sharpening in a safe environment, the authenticity of a real fire is the best way to learn.

“There’s no substitute for fighting real fires. In order to do what we do, we have to know how fire is going to react when you introduce something new to that environment: water, foam, ventilation, etc. If you apply any of those in the wrong way, you could end up with accidents.”

We’re confident that transparently laying out those upcoming challenges will encourage industry to make the right investments in their infrastructure and workforce and avoid further ship maintenance troubles in the coming decades.

Today, the Navy does not have enough dry docks at its disposal to care for cruisers, destroyers and Littoral Combat Ships. A mismatch in the number of ships and number of dry docks is one issue; ships coming into the yards late due to deployment extensions and ships coming out of repairs late due to unplanned work popping up exacerbate the situation.

“When we looked at it, we had spent a lot of time analyzing and building the 30-year shipbuilding plan, which gave the industrial base a real clear signal on what was to come and then they could start making capital investment decisions. We were less clear and less transparent about the repair work.”

“And that, in combination with kind of awarding each ship repair individually, was causing us to sub-optimise performance. So the purpose of the plan was to make sure we were very clear with the Navy, with Congress, with all of our industrial partners, the workload coming forward and then get better at planning the work, which would allow our industrial partners to get more efficient at executing the work.”

The long-range plan does multiple things. First, it lays out the expected workload for the repair and modernisation industry based on anticipated ship inventory. Identifying the number of availabilities, the number of man hours, and the dollar value of the work helps both industry leaders and Navy planners see the wave of work coming in terms that make sense to them.

It also outlines ongoing initiatives with the ship yards to help address today’s maintenance backlogs by investing in the yards’ layout and capital equipment to create a more efficient place to do ship repairs and upgrades. The plan looks to create better workload stability and thereby be able to use new contracting models that promote efficiency and on-time delivery of ships.

Today, the Navy awards contracts an average of 90 days before the start of a maintenance availability, limiting yards’ ability to hire new people or rearrange their workload to accommodate a new ship. Our goal is to bring that average to 180 days to allow for better planning. 

We have already reduced the number of inspections associated with ship maintenance work by 30 percent and hope to eventually reach a 50 percent reduction. We are also increasing on-the-scene leaders’ ability to approve changes to the work package once an availability starts, helping to curtail what can be lengthy pauses in work while an approval request goes up the chain of command.

Better business practices will help ships get in and out of maintenance in a timelier manner, as well as encourage industry to spend money on yard and workforce investments that will be pivotal to the Navy’s and industry’s ability to keep up with demand in the next few decades.

“Our on-time availability is improving out of both the public and the private yards, but it’s not yet to the point where we need it to be. We’re starting to see performance improve. We’re starting to see we’re using shipyards we hadn’t used previously. As noted, the demand is there, the challenge is to meet that demand as efficiently as we can with the right acquisition strategy.”

The service is in talks with shipyards that have existing dry dock capacity to see if they want to do business with the Navy; leaders are offering to certify facilities before companies even bid on repair contracts, to reduce the risk a yard takes on and to grow the field of possible repair yards. The Navy has also spoken previously of gauging industry interest in adding new dry docks at existing yards.

Despite the efforts to find more dry docks to work with, it’s unlikely the Navy will increase its dry dock capacity by 60 percent to keep up with the 60 percent increase in ships needing them. Business practice improvements will make up some of the difference, but the Navy needs to take additional measures going forward to deal with the upcoming surge in surface ship maintenance needs.

When we’ve looked at it, industry responds to the demand signal we put out there. We were not clear in showing the composite demand signal, so a key element of that 30-year maintenance plan was so we could show the entire demand signal. Our experience has been, when we clearly articulate the demand, industry makes really good decisions on how to invest to help us deliver on that demand.

You have to fight with the fleet you have now; there is no other option. Our goal is to do that better and to lay the groundwork for the future fleet by focusing on availability of assets. We have identified two principal questions. First, how do we get our availability rates higher? Second, how do we get ships to the fight more effectively and more often? The essential assets required to fight and win are not going to make much difference sitting in job site dry docks. We can provide enhanced deterrence through enhanced availability; ships are not going to deter anybody if they are not available and capable of going to sea.

We have put a major effort into getting much greater availability from many of our ships, and the ways we have done so will shape our approach, our expectations and our template for the operation of the new ship classes. We have seen a dramatic improvement in our upgrade and maintenance programmes. For example, our maintenance engineering planning programme is already doing a better job of predicting the maintenance needs of specific ship hulls and should continue getting more accurate over the next few years.

We put as much effort into infrastructure design as we did into combat readiness, which is about numbers today. We want to shape infrastructure that is all about availability of assets required for mission success, and not just readiness determined with poorly designed metrics. Getting the right infrastructure to generate fleet innovation on a sustained basis is what is crucial for mission success. We are strongly promoting continuous build processes.

We have established technical foundation instructions that look at each class of ship and, based on where a hull is in its service life and what type of maintenance availability it is approaching, outline what type of work the ship is likely to need. We also maintain instructions for each individual hull and actively monitor deferred maintenance, in addition to the other things engineering teams need to know about that particular ship.

The job sites at the yards have created learning centers to help new hires become proficient at their trades faster, so that schedule delays and cost increases due to workforce challenges become less of a budgeting problem going forward. We want the trainers there as well, so that when we’re maintaining a part of the system deployed in theatre, it’s the same people in the same building maintaining those things, which will allow us to make future decisions about outdated operations and training requirements. We want these teams sitting next to each other and learning together.

Once all the ships have been through a docking availability, where they are more thoroughly taken apart and inspected, we will have a very clear idea of the state of each ship and what to expect for future maintenance periods. We anticipate the problem of work package growth will be reduced in the future, but it will probably never completely go away. There is always something that will surprise you, such as taking a pump off its foundation and finding erosion you couldn’t see before.

The good news is that we anticipate work growth when drafting plans for future years, and availabilities generally stay within that margin. Where we see growth today is still on ships that have not gone through that docking process, really getting into the tanks and understanding what those conditions are. It is a constantly improving process, with the goal of knowing exactly what the condition of the ship is so we can properly plan for it, order the materiel and do the work on schedule and on time.

We have initiated periodic meetings with each shipyard commander to get an update on the progress of the ships and to find ways to empower the yards to do what it takes to deliver the ships on time or early. The aim is a quick update on where they are, where they’re having challenges, and where headquarters can provide help: do you need our help getting materiel, or clearing a technical issue that needs to be adjudicated before you can get back to testing? All of that has begun to contribute to the success of operations.

There are things we can do at headquarters to advance the quality of operations; if it’s a technical issue, we can give them additional technical resources. We can provide some focused effort from headquarters: if our chief engineer, sitting there with the shipyard commander, brings an issue up, it cuts through the normal layers these things have to get through. It has fostered key ingredients for the future fleet, most importantly that time matters and there must always be a sense of urgency, since ships need to get back to the fight as soon as possible.

We cannot overstate the importance of periodic reviews. We aren’t about to claim we are the reason these things have gotten better, but reviews do give the shipyard commanders an additional sense of urgency. We have established a mechanism to get headquarters’ attention, and headquarters is there to support each worker at the yard: if they’ve come up against a roadblock they’re having a problem getting solved, we can muster resources to get the issue solved quicker than they could through the normal channels. We want to create better supervisors at the deckplate, and initiatives are being put in place to train new hires more quickly so they can start contributing to the workload even before they’re qualified to work on the ship.

Many availabilities that experience problems, on the other hand, are much more complex. The biggest factor is that many availabilities take much longer than anticipated, not due to unexpected maintenance work but because modernisation work suddenly started driving schedules. Modernisation, in the past, has generally not been a schedule driver in availabilities; modernisation efforts were mostly specific to particular parts of the ship, particular machinery, or some capability like that.

We’re now getting into modernisation that really takes the ship apart completely. The scope and duration of some systems installations are now understood to some degree, but oftentimes we will not know until between the budget process and the beginning of the year. We have shifted some money over to help address those challenges, though ultimately the shortfall is about the same size now as it was at the beginning of the fiscal year.

The fact that the deficit hasn’t shrunk much over the last period isn’t for lack of trying, though. We have begun awarding firm fixed-price contracts for surface ships instead of the old multi-ship/multi-option setup. Preliminary observations show that costs are coming down, freeing up money to spend on other emerging ship maintenance work.

But sometimes we find ourselves facing a big unplanned bill this fiscal year when ships return from a deployment that was not only extended for a significant period but was also essentially the second in a back-to-back deployment with only bare-bones maintenance work in between.

We have been warning Congress for years that extended deployments lead to more severe maintenance problems when ships finally go into an availability. Insufficient time and funding have led to partial completion of the work in some cases, which then creates bigger problems down the road; for example, tank inspections get skipped and we then have to deal with major corrosion issues later on.

With the actual testing of equipment prior to the availability, and the additional work ships are tasked with over the course of a deployment, we’re seeing a lot more work coming into the package. The consequence is an availability that is much bigger than we anticipated.

So as we grow the size of the workforce and look at all the ship work on the plate, we’re trying to get far enough out in front that we can tell the fleet commanders we don’t have the capacity at the naval shipyards, and then talk to industry earlier than we’ve typically done before. If you look at the list of ships out there, there are several cases where, looking into the future, we may have to go to industry earlier than we do today.

In some years, Pentagon-level reprogramming can take money from other Service acquisition programmes, but most of the time we cannot get money from the other services to cover the year’s shortfalls. Due to funding shortfalls, we have reduced contract support levels, intermediate-level repairs, and the ability to provide after-hours support in specific areas. Although extensive efforts have been expended to limit adverse impact to the ships undergoing maintenance, fiscal realities have forced us into these actions.

Specifically, we are forced to stop engineering support to include tank and void inspections, infrared surveys, underway vibration analysis and surface ship availability work certifications. Reduction in parts procurement means a stop to all major diesel work, surface ship torpedo tube repairs and refurbishment, air compressor overhauls, communication receiver and transmitter repairs, and repairs to electronic warfare and anti-ship missile decoy systems. When supplies of on hand materiel run out, repairs to additional systems will be impacted.

Delaying maintenance periods and pressing them into the next fiscal year under the budget currently being considered is not optimal, even when it affects only a small number of ships, and it shapes the final decision on how to deal with the operations and maintenance shortfall. We do not want to embark on a path that partially accomplishes all availabilities across the entire fleet. That is a dangerous practice that rapidly builds maintenance and capability backlogs. Indeed, we are still digging out from that sort of policy implemented more than a decade ago.

The fleet takes on operational risk when it has less than full operations and maintenance funding: acceptance of less readiness across the whole of the Fleet, less capacity to surge in crisis, or perhaps living with reduced readiness in ships that would keep them from reaching the end of their service lives. In any case, recovering from these situations will cost us more time and money in the future, limiting the utility of the Force.

1. How do you fight with the fleet you have and prepare at the same time for tomorrow’s fleet, especially when you have several new programmes in the pipeline?

2. How do you execute the initial steps of a successful maintenance availability, like proper planning to determine what people and materiel will be needed at each step along the way?

3. By learning how to ramp up availability with today’s fleet, are you preparing solid templates for future operations?

4. Several months before the availability starts, do you commit to having a resource plan -- in other words, the people you need and when you’re going to need them -- so you can finish on time?

5. Isn’t one broad aspect of the changes you are responsible for clearly setting out solid goals for building/upgrading the Fleet of the 21st Century?

6. Isn’t it important for you to have periodic calls with each shipyard commander to get updates on progress and find ways to empower the yards to do what it takes to deliver on time?

7. How important in your view is building a new shipyard training infrastructure to support a 21st century combat force?

8. If you have work backlog, how do you plan to move availabilities around so you have workers with the capacity to do work?

9. Overtime is one of the most important factors to adjust, but isn’t it difficult to fine-tune how many man-days of work get accomplished without taking major workforce shortages into account?
10. So it is apparent your focus is on advanced planning, the growth in the workforce, worker-efficiency initiatives and more—how do you plan to achieve success in these areas?



Top 10 Reasons to Use Digital Twin Tool Designed by Workshop to Gain Insights on Behaviour/Condition of Equipment

4/21/2019


 
Does “Digital Twin” hype have your head spinning about the mission space of the future and the connectivity of tomorrow’s equipment? Tired of suppliers pitching new, interconnected “Digital Twin” equipment that operates the same way, at the same speed?

Welcome to the “Digital Twin” No-Spin Zone. Workshop solutions non-invasively upgrade today’s equipment today, so you can monitor and stay ahead of upcoming service issues within critical mission systems.

Workshop makes it super simple and inexpensive to upgrade your existing equipment with easily attached component and electrical sensors that monitor critical systems and assets. With Workshop, you’ll know exactly what’s going on with today’s machines, so you’ll avoid unplanned downtime and dramatically increase productivity and profits.

Workshop’s affordable solutions let you look into the future to predict machine malfunction and failure.

Workshop sensors are placed directly on your machines or components to automatically monitor condition. The sensor data transmits to an easy-to-install gateway and is then sent to cloud servers running powerful analytic software. Results are transmitted from the server to a user-friendly app, where you will view real time machine condition and maintenance advice.
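As a rough illustration of that sensor-to-app flow (Workshop’s actual pipeline is proprietary), the sketch below mimics a gateway forwarding samples into a rolling-average check that emits maintenance advice. The window size, alarm threshold and readings are all assumptions.

```python
# Toy condition monitor standing in for the gateway + cloud analytics + app chain.
from collections import deque
from statistics import mean

class ConditionMonitor:
    def __init__(self, window=50, limit=8.0):
        self.readings = deque(maxlen=window)  # recent vibration samples (mm/s)
        self.limit = limit                    # assumed alarm threshold

    def ingest(self, value):
        """Called as the gateway forwards each sensor sample."""
        self.readings.append(value)
        if len(self.readings) == self.readings.maxlen and mean(self.readings) > self.limit:
            return "Advice: sustained vibration above limit; schedule maintenance"
        return None

monitor = ConditionMonitor()
stream = [3.1, 3.4, 3.2] * 10 + [9.2, 9.8, 10.1, 9.6] * 20  # simulated sensor feed
for sample in stream:
    advice = monitor.ingest(sample)
    if advice:
        print(advice)   # the "app" surfaces the advice to the user
        break
```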

Far beyond the monitoring capabilities and trending packages being touted by new entrants to the field of predictive maintenance and analytics, Workshop products ‘listen’ to key assets and components to detect changing health and operating conditions long before the machine controller issues a warning or alarm condition.

Machine control data typically only shows process data. Workshop actually analyzes the data from vital machine components, such as gearboxes, heaters, pumps and more.

The Workshop prescriptive maintenance solution provides actionable maintenance advice and allows you to schedule maintenance at a convenient time rather than suffer a costly shut down.

Power Analyzer diagnoses everyday electrical problems including costly issues that can result in lengthy downtime such as motor burnout, control and display memory loss, and failing transformers, capacitors and more.

Workshop Power Analyzer uses patented local cloud-based analytics technology to capture, interpret and diagnose the data obtained from electrical components and internal power distribution.

Easy-to-understand dashboard gauges warn users of pending failure of stator windings, motor bearings and heaters, as well as all common power line issues such as sag, swell, harmonics, ground faults and imbalances.

The data is presented in a manner that is easily understood by personnel regardless of electrical engineering experience. Every manufacturer has electrical issues. These issues can, and will, affect productivity, machine performance and electrical costs.

Workshop Power Analyzer technology is designed to help diagnose everyday electrical issues and present them in a way that can be easily understood by maintenance and management alike, regardless of experience on electrical components and related issues.

All Workshop products work in conjunction with the Digital Twin Package, which provides a factory dashboard of all connected assets. Through the tools, the user can set up alarms and alarm recipients.

Historic data is available through equipment performance trend lines. And, a streamlined dashboard interface delivers a clear summary of the powerful analytics for all monitored equipment, neatly organised by department and location.

The tool offers easy-to-understand diagnostics and actionable maintenance advice via text or email messages and handheld or desktop dashboards, in advance of machine or component failure. No manual data analysis is required.

The Workshop Vacuum Pump Analyzer monitors operating condition and trend health of pumps and blowers in various industrial applications. The Vacuum Pump Analyzer sensors easily attach to the pump exterior and constantly measure vibration and vacuum, plus ambient temperature and pressure. A plug-in connection reliably powers the sensor, unlike competitive units that rely on batteries for power.

The Workshop Vacuum Pump Digital Twin tracks and compares the collected data against a baseline to determine overall pump health. Filter status, oil status, pump utilisation and vacuum or pressure level trends are all constantly monitored. Visualisation of valuable operating metrics is delivered through easy-to-understand mobile and desktop dashboards.
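A minimal sketch of that baseline comparison, with hypothetical metric names, readings and drift limits:

```python
# Compare current pump readings against a recorded healthy baseline.
baseline = {"vibration_mms": 2.8, "vacuum_kpa": -85.0, "temp_c": 41.0}
current  = {"vibration_mms": 4.1, "vacuum_kpa": -78.5, "temp_c": 43.2}
drift_limits = {"vibration_mms": 0.30, "vacuum_kpa": 0.05, "temp_c": 0.10}  # allowed fractional drift

def health_report(baseline, current, limits):
    report = {}
    for key, base in baseline.items():
        drift = abs(current[key] - base) / abs(base)  # fractional deviation from baseline
        report[key] = "ALERT" if drift > limits[key] else "ok"
    return report

print(health_report(baseline, current, drift_limits))
# {'vibration_mms': 'ALERT', 'vacuum_kpa': 'ALERT', 'temp_c': 'ok'}
```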

The easy-to-install Workshop Component Analyzer constantly tracks the operating condition and trend health of rotating components in industrial machinery. Easy-to-understand icons direct users to view component health trends displaying months of operating history.

Customisable threshold values allow users to identify preferred maintenance conditions, unhealthy operating conditions, or deteriorated component health.

Time-domain data sampling methods enable continuous trend monitoring to spot anomalies and repeatable events that occur under changing process or operating conditions, allowing early detection of developing component health issues, and the operating trends that may be causing them. Traditional Frequency-Domain data and analytics add enhanced diagnosis capabilities for bearing and gearbox faults initially, with other device libraries being added each month. 
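The frequency-domain half of that approach can be illustrated with a toy example: synthesise a vibration signal, take its spectrum, and look for a tone near a bearing’s characteristic defect frequency. The sample rate, frequencies and detection margin below are invented; real defect frequencies come from bearing geometry and shaft speed.

```python
# Toy frequency-domain bearing check using an FFT.
import numpy as np

FS = 2000          # sample rate (Hz)
DEFECT_HZ = 157.0  # hypothetical ball-pass defect frequency

t = np.arange(0, 2.0, 1 / FS)
signal = np.sin(2 * np.pi * 29.5 * t)              # shaft rotation tone
signal += 0.4 * np.sin(2 * np.pi * DEFECT_HZ * t)  # simulated defect tone
signal += 0.1 * np.random.randn(t.size)            # measurement noise

spectrum = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / FS)

band = (freqs > DEFECT_HZ - 2) & (freqs < DEFECT_HZ + 2)
if spectrum[band].max() > 10 * np.median(spectrum):  # tone well above noise floor
    print(f"Possible bearing fault: tone near {DEFECT_HZ} Hz")
```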

Visualisation of key operating and analytic results is conveniently delivered 24/7 via web browser or mobile app in easy-to-understand dashboard gauges, and via email/text alerts. No advanced training or expertise is needed.

In the manufacturing world, improving the utilisation of assets and increasing productivity are among the most important goals. Reliable assets reduce downtime, improve production quality and get product out the door faster.

The most effective approach is to use a predictive maintenance solution that readily installs to your existing equipment and has a powerful Digital Twin package that will collect data from your monitored equipment in real time, compare the data with established baselines, assess the condition of the equipment and use analytics to effectively manage maintenance activities.

With Workshop you’ll know exactly what’s going on with your machines so that you can avoid unplanned downtime and dramatically increase productivity and profits. Ask us how we can help implement a predictive maintenance program for you today!
“Rough Conditions in Maintenance Shops Place Risk on Potential for Successful Operational Missions”

A pilot on a routine training sortie was forced to jettison fuel tanks and immediately return to base when the aircraft’s engine caught fire, an issue originally stemming from incorrect practices and the “general disarray” of the base’s maintenance shop several years earlier, according to a Pentagon report.

The primary cause of the incident was an obsolete part, a turbine frame forward fairing, that had been installed on an incompatible bracket several years earlier during a service life extension maintenance overhaul. The part failed during the flight and caused the engine fire. Between the engine damage and the loss of the external tanks, the mishap was estimated to cost several million dollars. Routine inspections on the aircraft would not have caught the issue with the fairing, according to the report.

Through interviews with maintainers, the investigation board found that “poor enforcement of standard maintenance protocols” inside the maintenance shop led to improper completion of paperwork meant to ensure parts accountability, severe disorganisation at the shop, improper handling of parts, and a failure to follow proper procedures for cannibalising parts from other aircraft.

When a new senior officer arrived, he found the shop in disarray and without organisation. “It was mix and match, with no standardisation of where things went,” the report states. “The shop had their own ‘method of the madness.’”

Because things were in such disarray, the shop “received millions of dollars” to revamp and get back to standards.

“Given the strict protocols governing how parts are ordered and installed, a preponderance of the evidence shows the poor enforcement of standard maintenance protocols several years back created an environment within the Propulsion Flight that was a substantially contributing factor to the order and installation of the obsolete forward fairing that caused the fire,” the report states.


The investigation into another crash found that an uncommanded closure of the aircraft’s main fuel shutoff valve caused the crash. That investigation found that the maintenance squadron did not have a corrosion preventative compound, which was prescribed by a task order for the aircraft and could have helped prevent an uncommanded closure of the valve.

Forecasts of aircraft spare parts requirements typically calculate the ratio of future flying hours to past flying hours for the aircraft using a specific part and then apply that ratio to the historical demand for the part. We conclude this technique produces poor estimates, mostly because 1) the correlation of specific parts to specific aircraft types is not accurate, 2) it does not base projections on the many combinations of aircraft employing a part, and 3) it does not incorporate the influence of lengthy procurement times into the future flying-hours estimate.
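For concreteness, here is the ratio technique the passage critiques, worked through with invented numbers:

```python
# The naive flying-hour ratio forecast described above.
past_flying_hours = 12_000       # hours flown by aircraft using the part
projected_flying_hours = 15_000  # hours programmed for the forecast period
historical_demand = 40           # units of the part consumed historically

ratio = projected_flying_hours / past_flying_hours  # 1.25
forecast_demand = round(historical_demand * ratio)  # 50 units

print(f"Forecast demand: {forecast_demand} units")
# Per the critique: this ignores which aircraft mixes actually drive the
# part's wear and ignores long procurement lead times, so the 50-unit
# figure can badly over- or under-shoot real requirements.
```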


Quality control during production is an important aspect of reducing cost caused by misaligned equipment or faulty equipment. When a machine is misaligned, operating incorrectly, or faulty, then parts or products produced by that piece of equipment may have poor fitting, tolerances that do not meet specifications, or incorrect yields for a desired product.

To better illustrate this, consider a tire production company that uses a screw compressor in production with a faulty oil filter that lets oil pass through. The oil is carried through the pneumatic pressure pipes and accumulates on the controllers, often damaging them. A secondary issue is that oil is harmful to rubber, and only two parts per million of oil are allowed to pass through the filter. Using pressure sensors and the differential-pressure specification the oil filter is rated for, the leak can be prevented by properly maintaining or replacing the filter.
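A small sketch of that filter check, assuming hypothetical pressure readings and a made-up rated differential-pressure band:

```python
# Flag an oil filter whose pressure drop falls outside its rated band.
RATED_DP_KPA = (20.0, 70.0)  # assumed healthy differential-pressure range

def filter_status(upstream_kpa, downstream_kpa):
    dp = upstream_kpa - downstream_kpa
    lo, hi = RATED_DP_KPA
    if dp < lo:
        return f"dP {dp:.0f} kPa below rating: possible bypass, oil may pass through"
    if dp > hi:
        return f"dP {dp:.0f} kPa above rating: filter clogging, schedule replacement"
    return f"dP {dp:.0f} kPa within rating"

print(filter_status(upstream_kpa=750.0, downstream_kpa=738.0))  # low dP -> bypass warning
```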

Another example is the machine that begins the first stage of tire formation. This machine has a long shaft on which the tire is produced. If the shaft is misaligned, defective tires are produced. Through vibration and gyroscopic analysis, the misalignment can be discovered and the maintenance team notified so they can quickly fix the issue.

Providing these notifications in real time during production allows operators and maintenance personnel to communicate issues effectively, reducing defective parts and downtime and ultimately reducing cost.

Test of New Visual Inspection Procedure for Cold Sprayed Protective Coating on Wear-Prone Hydraulic Tubing Extends Aircraft Service Life

We recently completed an extensive effort to help certify a maintenance process that could extend the life of aircraft hydraulic system components. Through participation in a rapid innovation team, we successfully helped develop, test, and validate the cold spray coating process for the life extension of aircraft hydraulic lines. Cold spray is a technique by which metal particles are accelerated onto a surface through high-pressure application. 

The force of the impact bonds the metal to the surface without the need for temperatures as high as those typically associated with other deposition processes. This process was identified as a potential alternative to replacing aircraft hydraulic lines, which are prone to chafing damage.

Aircraft hydraulic lines are made of titanium, a strong, lightweight metal that can be bent and routed around tight spaces without collapsing upon itself. These qualities make it ideal for aircraft hydraulic systems. However, titanium does have drawbacks. It is a very surface-sensitive material, meaning that any nick or scratch can be detrimental to its overall material properties.

Because of the proximity of the hydraulic lines to landing gear components, chafing is a common occurrence, necessitating frequent inspection and replacement of hydraulic lines. This procedure is a costly and time-consuming endeavor. 

To alleviate this problem, the rapid innovation fund team began investigating the use of the cold spray process to apply a protective titanium layer to chafe-prone tubing areas. Behind this effort was the notion that the sacrificial titanium layer could endure considerable wear while preventing harm to the material beneath.

Certifying the process for implementation on the aircraft hydraulic tubes was a complex and multi-faceted effort that included identifying shortfalls in the technology, developing process controls, establishing test plans and acceptance criteria, performing material analysis, and developing inspection techniques.

The final step was approval by a change evaluation team. It was a tough task and one that required a thoughtful approach. “We knew we had to do this the right way, with safety first and make sure the use of this system met our acceptable risk levels and didn’t cause any unintended problems.”

The team began by developing a test plan that would meet all the necessary requirements for the aircraft. They also worked closely with aircraft maintainers and organisations that currently use the cold spray process for other applications.

Because of the platform-specific conditions under which this process would be used, special factors needed to be considered. In many instances, the team needed to start from square one to collect and process new data, perform necessary calculations, and develop new procedures. 

At one point the team devised a new test procedure to ensure sufficient cold spray adhesion on the rounded surface of a tube, as opposed to the flat panels on which cold spray had typically been used. The team also had to take into consideration material integrity issues that can arise when using the cold spray process on titanium tubes.

“Titanium is very sensitive to temperature changes in the range in which cold spray is applied. Applying high heat to titanium tubes in an oxygen-rich environment causes a brittle crust—called alpha case—to form on the outside. Applying a protective coating to the titanium tubes would require a process that wouldn’t destroy their inherent material integrity.

“We developed a process in which we use helium as a carrier gas that displaces oxygen, so the brittle layer doesn’t have a chance to form on the tube.”

Throughout their investigation, the team kept its focus on maintainability. Since the cold spray-applied protective coating and the underlying tubing material are both titanium, it was very important to determine that field inspectors could discern between the protective layer and the tubing beneath.

To test this, the team developed visual test procedures and asked maintainers to inspect worn areas to ensure they could tell the difference between the protective coating and the underlying titanium of the tube itself.

“It’s a slight difference, but there are different levels of reflectivity. We wanted to make sure the inspectors could determine this.” The team developed an ultrasonic thickness inspection to serve as a backup inspection method once a potential problem area is identified.

After several years of painstaking research, collaboration, analysis, development, testing, validation, and verification, the work was presented to the change evaluation team, and the process was certified for use on the majority of the aircraft hydraulic system, and on systems in a number of other aircraft as well, should the use of this process be deemed necessary.

Use of the cold spray coating process will begin soon on all aircraft as a preventive measure that holds great promise in extending the serviceability of the aircraft’s hydraulic tubes.

“By using this process, maintainers can now allow a degree of wear to occur. Now if we see damage due to chafing, as long as it does not affect the base material, we can tolerate that damage, which can save significant time and money.”

This painstaking effort exemplifies the critical role the team plays in the development, advancement, and safety of aircraft assets. “We’re here to ensure it gets done right.”


1. Implement tech and process from industry to map workflow. 

Begin the mapping effort at the very start of the maintenance process and trace the flow from one step to the next. Once the map is done, eliminate as many of the extra maintenance steps as possible.

2.  Give machine operators process ownership. 

A number of real-time digital condition monitoring and reporting systems now support the positive trend toward operator-driven reliability. The Total Productive Maintenance approach shifts basic maintenance work and problem notification to machine operators, freeing up maintenance personnel to work on planned maintenance.

When workers have ownership of their machine and the process, they maximise equipment effectiveness, increase workforce skills and reduce manufacturing costs through continuous monitoring. For their part, the maintenance team should respond to requests within a pre-determined time window.

3. Schedule your job site planned maintenance program far in advance.

As in any other operation, it is essential to maximise productivity and meet order deadlines. Annual checks will be more comprehensive than monthly or quarterly checks, meaning the machine will be down for longer, so make sure they line up with your production commitments.

4. Use your resource planning system to plan for downtime.

Tie jobs in with the procurement function: if maintenance jobs are properly described with consumption amounts and order lead times, parts will have been ordered in advance and will arrive at the job site in time for the machine to be serviced. Use your resource planning system to specify the maintenance mechanic who will work on the machine, and the parts and chemicals needed to complete the maintenance activity.

5. Schedule more frequent checks for older equipment. 

As equipment ages, parts and components will start to wear out sooner, and the maintenance window narrows. What were annual checks when the equipment was new may move to semi-annual or even monthly. The service parts required will have to be ordered more often. However, the planned downtime will affect production schedules less than a breakdown.

6.  Engineer machine improvements for maintainability and operability. 

Windows cut into guarding to give easier viewing of gauges will make the daily checks easier to perform and more likely to be completed. Access doors installed on equipment will allow for easier periodic maintenance. Consolidation of lubrication points into a single manifold also contributes to more consistently performed maintenance.

7. Conduct daily operator walk-arounds to pinpoint issues and opportunities for improvement. 

Whether it’s a check sheet, whiteboard, or in some cases, a bar-coded activity, the operator’s checks are the essential element of a successful operation.

8. Address operator alerts immediately. 

Beyond the operator and the proper performance of the checks, the maintenance organisation must be ready to respond immediately to an abnormality raised as part of the operator checks. Nothing will sap the energy of total productive maintenance efforts faster than operators raising issues that aren’t addressed immediately.

9. Continually review spare part requirements. 

Remove any spare parts from stock as equipment and machines are retired.

10. Document and continually revisit your operation’s maintenance history.

Include how critical items have been addressed. Develop and regularly review metrics that correlate equipment up and down time with production volume, quality and delivery. Whether using a computer-based maintenance system, or relying on a card file, or having a white board in the maintenance area, the strategies, opportunities and approaches remain the same.



Top 10 Digital Engineering Model Build/Speed System Predict Generate Specs/Design Virtual Test Space

4/8/2019


 
Advances in artificial intelligence have given rise to behavioural technologies capable of performing tasks that traditionally required the intelligence of commanders in the field. Realising end-to-end digital enterprises, automating tasks and processes, and making smarter, faster decisions all require a next frontier of technologies to transform the way agents and machines interact.

The architecture must support realistic virtual Job Sites to test agent behaviours. It should support construction of highly responsive physical and behavioural mission spaces that stand in for the challenging real-world counterparts in which agents are designed to function. This is a particular strength of an architecture that incorporates state-of-the-art simulation capabilities.

Simulation models are essential for giving orders directed at deploying complex interdependent systems and for communicating among team members and stakeholders.

Simulation provides a means to explore concepts, system characteristics and alternatives; open up the multi-agent trade space; facilitate informed decisions and assess overall system performance.

New technologies like automation and robotics, accompanied by upgrades to facilities and infrastructure, have enhanced productivity at Marine Corps Depots. As productivity and efficiency increase we are seeing corresponding decreases in labour, maintenance, and operational costs.

The depot readiness enterprise recently transitioned to business systems that use standard, industry recognised processes. The Logistics Modernisation Program is built on off-the-shelf tools for Resource Planning and shop floor integration.

These tools give us complete visibility on manufacturing and service operations, a capability we now have for the first time. These applications also help us improve the accuracy of our work orders; engage in more efficient production scheduling; enable interaction with our diverse vendor supply chain and reduce delays for parts.

These capabilities are increasing the speed at which materiel reaches the warfighter and provide Marines with true “factory to foxhole” asset visibility and tracking. The depot enterprise is executing a number of supply chain initiatives to improve its effectiveness, including improving demand forecasting accuracy and imposing tougher performance standards on suppliers. These efforts create the ability to purchase, manufacture, and repair critical parts required to support warfighting equipment.

“Operators want to know capability has been fully characterised, so not only do they know what it does, but they know what it doesn’t do – equally important to them when they take it into combat.” 

“When we deliver that capability to them on Day 1, it should be fully integrated with the environment they’re expecting to utilise it in, which today is done poorly, and we should give them a system on Day 1 that they can fully train with. Fully train with 100 percent of the capability we’re giving them.

“Currently we’re telling them what it can do, but typically we also say, ‘don’t worry, your training system, your simulator tools, are an iteration or two behind, but they’ll catch up.’ Well, they usually don’t catch up, because we keep rolling the operational tools forward and the training tools come along behind.”

Successful implementation of model-based systems engineering would solve the integration and training problems, as well as reduce the time and burden required to design, vet, redesign, and test and evaluate new planes, ships, weapons and more.

First, program offices need to sit down with operators and understand the requirement on a tactical level: what mission needs to be accomplished, what capability is needed, what threat is being countered, how will the system be used, who will use it, and more. If that information is all included in a computer model, NAVAIR can insert a notional placeholder aircraft or weapon into the model and pass it along to industry to actually engineer.

“We’ve got a model of the threat; we’ve got a model of our blue forces, we got environmental models, whether I’m operating in an electromagnetic warfare spectrum or in the acoustic spectrum under the water; it’s all done with models.” 

Reliability metrics captured through the maintenance process can be compared, using reliability modeling, against specified system reliability. Components that prove to be critical reliability drivers can then be submitted for review to determine the most cost-effective risk mitigation strategies.
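A minimal sketch of that comparison, with hypothetical component names, failure counts and specified MTBF values:

```python
# Flag components whose observed MTBF falls short of the specified value.
OPERATING_HOURS = 8_760  # one notional year of operation per component

components = {
    # name: (failures observed, specified MTBF in hours)
    "fuel_pump":    (4, 4_000),
    "radar_psu":    (1, 6_000),
    "hyd_actuator": (6, 2_500),
}

for name, (failures, spec_mtbf) in components.items():
    observed_mtbf = OPERATING_HOURS / failures if failures else float("inf")
    verdict = "REVIEW" if observed_mtbf < spec_mtbf else "meets spec"
    print(f"{name}: observed MTBF {observed_mtbf:,.0f} h vs spec {spec_mtbf:,} h -> {verdict}")
```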

The enterprise must focus on optimising maintenance workload tracking, both enterprise-wide and at the Sustainment Centre level across all complexes, by serving as a single entry point for outside customers, with the capability to identify workload capabilities and shortfalls across the enterprise and use this information to pursue new or repatriated workload.

This attribute focuses on the technology and processes to source and approve/certify the correct parts, where and when needed, delivered on time and at cost, regardless of the source of procurement. Sources include logistics contract repair, local manufacturing, “the boneyard” or Maintenance/Regeneration Group, surplus sources, and Defense or commercial suppliers.

The Readiness Tool allows top brass to determine which battalions and gear are most prepared for battle.

The Marine Corps is experimenting with artificial intelligence to improve the way it deploys its forces and to spot potential weaknesses years in advance.

The Marines built a tool that crunches data on personnel and equipment to measure how prepared individual battalions are for combat. The tool could ultimately help top brass deploy some 186,000 active-duty Marines and countless pieces of military hardware.

Allocating the service’s resources is an imperfect science. Leaders map out deployment strategies years or even decades in advance, but situations will invariably arise that throw a wrench in those plans.

Planners are constantly forced to “reshuffle the deck” as crises flare up in different places and figuring out which units to move around is a complicated process. Numerous factors—training, deployment history, equipment readiness and others—affect how prepared a group is for a given situation.

Today planners rely on spreadsheets, whiteboards and basic applications to track readiness and manage forces, but artificial intelligence can offer them a better understanding of the resources at their disposal and the long-term effects of the decisions they make.

The tech crunches both structured and unstructured data from multiple force management applications to create a real-time image of how prepared each unit is for combat. The tool specifically aims to build a five-year management plan for the Marine infantry battalions.

The tool has two primary functions: it flags the units that are most ready for action and explains why others come up short. Armed with that knowledge, commanders can proactively train and invest in less prepared groups before they fall even further behind.
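The Marines’ actual tool and data are not public, but a toy version conveys the two functions described: rank units by a weighted readiness score and surface the weakest factor for each. The weights, factors and unit scores below are invented.

```python
# Toy readiness ranking over notional factors.
WEIGHTS = {"training": 0.35, "equipment": 0.35, "manning": 0.20, "dwell": 0.10}

battalions = {
    "1st Bn": {"training": 0.92, "equipment": 0.78, "manning": 0.95, "dwell": 0.60},
    "2nd Bn": {"training": 0.70, "equipment": 0.88, "manning": 0.81, "dwell": 0.90},
    "3rd Bn": {"training": 0.55, "equipment": 0.62, "manning": 0.77, "dwell": 0.85},
}

def score(factors):
    return sum(WEIGHTS[k] * v for k, v in factors.items())

for name, factors in sorted(battalions.items(), key=lambda kv: score(kv[1]), reverse=True):
    weakest = min(factors, key=factors.get)  # the factor dragging the unit down
    print(f"{name}: readiness {score(factors):.2f}, weakest factor: {weakest}")
```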

“A lot of times Marines only invest more when the problem arises. Now they can see it ahead of time and say ‘OK, we’re going to take action now to prevent that from occurring.’”

The tool sheds light on how deployment decisions will affect forces in the long run. By analyzing historical trends along with real-time data, the tool could show how a unit’s readiness would change if it were, for instance, moved to a new location or given additional resources.

Marines are also building a separate AI system that ranks course of action plans based on those extrapolations, which could one day be merged with the readiness system.

“You integrate that all together and you get a full view of readiness across your force. Now you can really make some data-driven decisions.”

The next stage of the effort will include parts of the Marines’ aviation and logistics units, bringing about half the branch into the purview of the program. With that additional data, the AI can further refine its processing rules to deliver better results.

Imagine, then, artificial intelligence tasked with managing the particular deployments of troops in battle, moving them around in new and unexpected ways.

One way that future might manifest can be seen in a place where AI already manages workforce inventory: a warehouse stocking system, in which items are unloaded wherever there is space in the warehouse and then scanned into a computer system that can track where each item is located.

When it comes time to retrieve an item for delivery, the same computer system directs warehouse workers to the most efficient route for finding the item, which could be stowed throughout the warehouse. 

When modeling the warehouse system, it is interesting to consider how AI, given the same objectives as a commander, might organise and direct forces to achieve them.

“Why would an AI allocate forces in distinct areas of the battlefield? It could intermingle them and manage them at a granular level. Its categories are way more numerous, in the way that warehouse AI manages categories at the shelf level.”

Instead of distinct groupings of armour, air support, infantry, and artillery, a system run by artificial intelligence and managing a battle could coordinate a single helicopter with a pair of howitzers and an infantry platoon, directly grouping each in the same way that a warehouse worker finds an assortment of items to place into the same package.

“Anytime we’re on the road, our job, maintenance wise, is to provide safe and reliable jets for the pilots to accomplish their mission. Every new location presents a different challenge in how we get the job done, but the end goal of providing a safe jet for a pilot never changes. What does change is the environment in which we operate.”

“Every exercise you go on is different, and it can be hard to start off. It could be not having the parts we need on hand, or not knowing how the base operates to get the support we need. Over time you figure out how to acquire some of that on site, what to bring along yourself and how to solve a problem before it becomes one.”

Here we consider how AI systems could be useful in a typical work order job of launching and recovering aircraft, maintaining engines and servicing life-saving equipment -- just a few of the dozens of tasks Troops are expected to accomplish within a full day.

“We learn to operate in new environments, out here we’ve adapted our operations to give the best support possible. Maintenance is maintenance, our job never changes, but how we execute the mission does.”

“Our main mission is to enable successful sorties by generating aircraft parts, ultimately maintaining our full spectrum readiness. Our team encounters new repairs that force changes in direction and orders, but they all adapt and constantly find ways to make sure the job gets done.”

Maintaining the aging aircraft can be challenging as some parts are no longer commercially produced and the Fabrication Flight must collaborate and innovate to construct parts on their own.

“We all need each other in order to complete a task and make sure operations are done correctly. Everything revolves in a circle – sheet metal technicians hand over parts to metals technicians, who follow their technical order before sending the part to nondestructive inspection to make sure the piece is good for use on an aircraft.”

To illustrate the teamwork involved, the Troops walked us through the Fabrication Flight process.

Sheet metal technicians kick off operations by receiving technical orders for aircraft repairs. Troops survey the technical order and pull a thin, malleable sheet from their stock. The sheet is then cut to the specified measurements and handed off to a metals technician to be heat treated in a large oven.

"On our side we handle breaking the metal down and then crafting it to match the technical order for the specific part. When completed, the piece is hauled over to nondestructive inspection where tests are conducted to ensure the part is compositionally sound and aircraft ready.

"With the resources we have here, we are the final stop on a part's journey to an aircraft,” "If anything is wrong with the part, it's flagged and sent back to the workshop to either correct the issue, or start the operations all over again."

Accuracy in fabrication is essential to getting aircraft back up and flying. When the part has completed all processes and is cleared for use, it is installed on the aircraft, restoring the aircraft’s readiness.

Fabrication flight Airmen gain a sense of accomplishment by witnessing their work come to fruition each time an aircraft takes off.

“Having combatant commands and other mission partners on base only adds to the importance of mission success. We take pride in the work of the flight; seeing the aircraft out there completing missions thanks to the maintenance here is an amazing feeling.”

An alternative, specifically functional approach offers a more general way of understanding performance enhancements at Job Sites. While contractors can incorporate these principles in their business practices, the principles often do not correspond to the functional divisions within companies.

Practices with the potential to help a Job Site get smarter as a whole may actually reduce efficiency at one site versus another. This point is relevant to defense production, where collection and reporting of costs in particular categories are required.

An increase in one category is not necessarily clearly linked to a decrease in another, so it may look like inefficient cost growth rather than an expense related to overall performance improvement.

The first task in site visit efficiency implementation is identifying what value the product has and what the value stream looks like. A fighter aircraft has value to its ultimate customer according to its defense role. Value is defined “in terms of specific products with specific capabilities offered at specific prices through a dialogue with specific customers.”

Once value is specified, the next step is to determine the value stream. Contractors need to understand every step in construction, that is to say the value stream, to produce efficiently. Then contractors must continually look for unnecessary steps and reduce or eliminate them. For example, production engineers can measure the distance traveled, either by the part or by the workers involved, in the creation of a part and search for ways to reduce it.

The third principle involves making value flow through the work site. Components of the final product should flow smoothly, going from station to station without a lot of waiting time in between. The traditional approach to this is work sites organised by task. For example, there would be dedicated cutting areas, dedicated drilling areas, and so forth. Parts would be brought to the area, stored until the machines were free, worked on, and then moved onto the area where the next process would take place.

Some contractors tend to focus on the efficiency of the work station (machine utilisation rates) instead of product value flow. Bottlenecks occur when one operation slows the critical path of a product as it moves through the work site, due to insufficient machine capacity, high tool changeover times, and similar constraints.
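A simple utilisation check can flag where such a bottleneck is likely to form. In the sketch below, the station names, cycle times, changeover figures, machine counts, and the 85% alert threshold are all assumptions chosen for illustration.

```python
# Minimal sketch: flag bottleneck risk from station utilisation.
# All figures below are hypothetical.
stations = {
    # minutes of work per part, changeover minutes per part, machine count
    "cutting":  {"cycle": 12.0, "changeover": 2.0, "machines": 3},
    "drilling": {"cycle": 8.0,  "changeover": 6.0, "machines": 1},
    "welding":  {"cycle": 20.0, "changeover": 1.0, "machines": 4},
}
demand_per_shift = 30      # parts required per shift
shift_minutes = 480.0

for name, s in stations.items():
    required = demand_per_shift * (s["cycle"] + s["changeover"])
    available = shift_minutes * s["machines"]
    utilisation = required / available
    flag = "  <-- bottleneck risk" if utilisation > 0.85 else ""
    print(f"{name}: {utilisation:.0%} utilised{flag}")
```

Note how drilling gets flagged not because its cycle time is long but because changeover time eats its capacity, which is exactly the pattern described above.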

The fourth principle is that the customer pulls all activity. In short, this means that production should be tied to demand; no products should be built until downstream demand for them occurs. Pull production involves considerable collaboration with customers, to know what they require and when they require it, and with suppliers, to make sure their inputs arrive at the appropriate time. One strength of DoD practice is that it forces conformance to this principle: contractors build aircraft only when ordered, after the money has been appropriated by Congress.
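The mechanics of pull can be shown in miniature. This sketch releases work only against firm orders rather than building to forecast; the order quantities and the two-unit-per-period shop capacity are invented for illustration.

```python
# Minimal sketch of pull production: nothing is built until downstream
# demand (a firm order) exists. Quantities are hypothetical.
from collections import deque

orders = deque([3, 0, 2])   # units ordered each period
wip = 0                     # work released to the shop floor
delivered = 0

for period, ordered in enumerate(orders, start=1):
    wip += ordered          # release work only against firm orders
    built = min(wip, 2)     # notional capacity: 2 units per period
    wip -= built
    delivered += built
    print(f"period {period}: ordered={ordered} built={built} "
          f"wip={wip} delivered={delivered}")
```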

The fifth principle is the constant pursuit of high performance: contractors dedicated to continually searching for ways to improve their efficiencies, cut costs, and improve the quality of their products. A number of tools can be drawn on. For example, contractors can pursue short projects that study particular processes and look for low-cost improvements. One example is a series of "action workouts." Technically, these represent a broad approach that favours continuous improvement.

In conclusion, these principles do not stand alone. Rather, there is considerable overlap in what they involve. For example, without near-perfect production, including very high-quality products received from suppliers, value cannot flow smoothly through the work site. Out-of-control processes will create problems. The search for inefficient processes can help improve the quality of products and assist in the search for high-performance ratings.

As described, a major guiding principle for contractors is the removal of various forms of inefficiency. One major source is inefficient transit of parts throughout the work site. The time a part spends at the work site being moved from place to place rather than being worked on is a major problem.

All production activities can be separated into the planning and control of daily activities, digitally structured along with their work objects.

The digital behavioural structure for production orders and information generation is based on this grouping of planning and control activities. Each item is aligned with its associated activities, objects, and corresponding subsystems.

A digital framework for the production workflow system is designed in which each team's activities and objects are connected with administration functions.

1. Scheduled routine maintenance includes cleaning, installing updates, etc.

2. Unscheduled repairs to equipment that has stopped performing its assigned function or is performing its function inadequately.

3. Companies lack awareness of when equipment is due for maintenance, upgrade or replacement

4. Downtime is due to machinery failure/malfunction

5. Poorly maintained equipment results in lost production time and lost profits. Unscheduled repairs are costly

6. Installing predictive maintenance equipment helps determine the condition of your processes, providing actionable intelligence that warns of impending failure if reported issues are not addressed

7. The ideal predictive maintenance system will allow for scheduling of maintenance prior to equipment failure, which will help to eliminate unplanned downtime, reduce repair costs and equipment failures and slow asset deterioration.

8. A predictive maintenance program is far less expensive than a reactive program

9. Wearable solutions non-invasively upgrade today’s equipment.

10. It is simple and inexpensive to upgrade existing equipment with easy-to-attach component and electrical sensors that monitor critical systems and assets, as sketched below.
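As a concrete illustration of items 6 through 10, here is a minimal sketch of threshold-based condition monitoring on a retrofit sensor feed. The vibration readings, the averaging window, and the 7.1 mm/s alarm level are all hypothetical; real alert limits would come from the equipment vendor or an applicable vibration standard such as ISO 10816.

```python
# Minimal sketch: trend a retrofit vibration sensor against an alarm
# level so maintenance can be scheduled before failure. All readings
# and limits below are hypothetical.
import statistics

def needs_maintenance(vibration_mm_s, window=5, limit=7.1):
    """Alert when the recent average vibration exceeds the limit."""
    recent = vibration_mm_s[-window:]
    return statistics.mean(recent) > limit

readings = [2.1, 2.3, 2.2, 5.8, 6.9, 7.6, 8.2, 8.5]
if needs_maintenance(readings):
    print("Schedule repair before the asset fails unplanned.")
```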
1 Comment

Top 50 Service Centre Mission Plan Conduct/Monitor Component Agency System Test/Evaluation Intervals

4/8/2019

0 Comments

 
​Repair Development Engineering group is dedicated to developing new repairs and reliability enhancements to increase fleet readiness and decrease total cost of ownership. Our Repair Clinic process is typical of our aggressive approach to lowering the cost of ownership. 

With our Repair Clinic, we look at the high volume/high value items in the customer’s scrap bin and develop repairs that produce reliable, quality parts. Our large portfolio of repair capabilities is constantly growing, thanks to the Repair Clinics and a large team of development engineers—innovators who are constantly looking for solutions to emerging material stress conditions. 

Developing and maturing a “digital representation of that system and how it interacts with its environment” would go a long way in delivering relevant trainers from the outset, rather than today’s simulators, which often lack the latest updates or present inaccurate threat environments.

The digital representation would also help with sustainment efforts throughout the life of the program, as models could help show how the system would hold up over time and in different environments.

“We’re in the process of building that model-based specification, that model spec for industry right now. When we get that part right, build that spec, now it’s in industry’s hands” to continue to make best use of that model environment.

We want to learn a lot about model-based engineering with new systems without bogging the program down with new processes and design tools. To supplement that learning effort, Navy and industry could conduct a “Surrogate System Experiment” to help identify potential kinks in the new process: “bring this group – representation from the organisations represented in this group today – into a collaborative environment where we can actually build a surrogate program and execute that model, the capabilities-based acquisition model.”

“Find out where the hard spots are, find out where we have to go soft, find out what is that deliverable, what kind of contract will work, where do we have to hand off between military and industry, how do we truly make that integrated data environment work in secure network environment?”

NAVAIR intends to begin implementing the model-based engineering concept into any new design, capability upgrade or sustainment program it can, seeking opportunities to learn as quickly as possible, “as opposed to waiting for the big bang on a brand new program.”

It’s important for industry to engineer the new product within the threat model, instead of today’s practice of, “we write a 500-page specification with 20,000 shall-statements, and we give it to industry and go, here, design this.” We don’t give them the threat models, we don’t give them the blue force models, we don’t give them that system-of-systems family model we just built.

If industry can work within that model environment, then little changes can be made along the way – swapping one sensor for another, for example – without wondering how that change may affect the aircraft’s aerodynamics or its low-observability or other features that today are designed separately on paper. 

With a digital drawing in a threat-representative environment, the sensor could be swapped out and thousands of possible engineering solutions generated until the best one is chosen, all in a matter of hours or days rather than the months today’s processes would take to make those changes.
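In miniature, that kind of trade-space screening looks like the sketch below: enumerate candidate sensor swaps, score each against a model, keep the best. The candidate attributes and scoring weights are invented for illustration; a real model environment would evaluate aerodynamics, observability, and the other coupled effects mentioned above.

```python
# Minimal sketch: screen candidate sensor swaps against a scoring
# model. Candidates and weights are hypothetical.
candidates = [
    {"sensor": "A", "weight_kg": 14, "drag_delta": 0.010, "detect_km": 80},
    {"sensor": "B", "weight_kg": 9,  "drag_delta": 0.004, "detect_km": 65},
    {"sensor": "C", "weight_kg": 11, "drag_delta": 0.006, "detect_km": 75},
]

def score(c):
    # Reward detection range; penalise the weight and drag impacts
    # that would otherwise be assessed separately on paper.
    return c["detect_km"] - 2.0 * c["weight_kg"] - 500.0 * c["drag_delta"]

best = max(candidates, key=score)
print("best candidate:", best["sensor"], "score:", round(score(best), 1))
```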

Benefits to this type of design effort continue throughout the test and evaluation, fielding and sustainment phases.

For example, in lieu of a paper-based design review, as industry meets various milestones in maturing its design, “let’s take those models and let’s put them back in the tactical scenarios we developed with the operator back in step 1 and just see how it goes.

What better evaluation or assessment of how the program is maturing than to actually run the current level of maturity of performance that we see in our models through the tactical situations we’ve built with the operators? Because in the end that’s what matters, in the end capabilities-based test and evaluation is about testing the capabilities – it’s not about ensuring industry met every one of those 20,000 specs. 

That’s where we spend all our time today during T&E, validating that industry met the specs. The fleet couldn’t care less, the fleet wants to know that the attributes and the capabilities that they’re counting on will be met.”

That sort of constructive testing – all conducted within the simulated environment – could pave the way to eventual virtual testing once the first hardware is delivered, which would then pave the way to eventual live testing in the field with real operators. That flow would make the best use of everyone’s time and allow any problems to be addressed as early on as possible.

We provide facility-planning services to assist customers in establishing their own engine maintenance facility, test cell or expansion program. These services can range from a simple summary review to an on-the-spot detailed analysis. As a result of these reviews and analyses, we can provide our customers with a building layout showing department locations, work flow and equipment placement. We can also recommend required facility equipment and estimate costs.

We provide all the technical documentation to fly, operate, and maintain our aircraft engine products. We publish the engine and maintenance manuals, service bulletins, special instructions, illustrated parts catalogs, other technical documents and training materials. All publications are continuously updated to incorporate the latest configurations, repair processes, and operating recommendations. 

Most jet engines consist of a compressor, a turbine, and a combustor that are used to accelerate the flow of air through the engine, thereby producing a thrust force on the aircraft. Spare parts are ordered by our customers through a variety of contractual agreements to support flight line, intermediate and depot maintenance needs. 
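In its ideal form, the thrust mechanism described above reduces to the momentum relation F = mdot x (v_exit - v_inlet). The sketch below applies it with made-up numbers; real engine performance also involves pressure and fuel-flow terms that are ignored here.

```python
# Minimal sketch of ideal momentum thrust: accelerating the air flow
# through the engine produces thrust. Inputs are illustrative only.
def net_thrust(mdot_kg_s, v_exit_m_s, v_inlet_m_s):
    """Ideal net thrust in newtons, ignoring pressure and fuel flow."""
    return mdot_kg_s * (v_exit_m_s - v_inlet_m_s)

print(net_thrust(100.0, 600.0, 250.0), "N")  # 35000.0 N
```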

We take care of your engines so you can focus on flying.  Our Engine Centers have developed state-of-the-art materiel management and supply chain management systems, which offer complete Contractor Furnished Materiel supply capability. By using Contractor Furnished Material, our customers take advantage of our unparalleled strength in vendor and distributor relationships to minimise delays in part delivery and to reduce overall cost.

Are you getting many complaints about job site and equipment maintenance? Are work orders taking too much time to complete? If this sounds familiar, maintenance metrics can help you identify causes and solutions. Maintenance planners can use these metrics to pinpoint the real issues and how to fix them. These metrics are not too difficult to collect. Use them even when everything seems fine to get a quick measure of your maintenance performance.


Why measure? Improvement is one reason. For improvement, we need to know where we have been and, hopefully, where we want to go. Another is that people like to get a score or feedback on how the organisation sees their work. That said, while metrics reflect individual performance, the focus of metrics should be to identify issues with the business processes associated with the work, not the individuals themselves. From the metrics, we can identify trends and patterns. Adding to that, consider: "What gets measured gets done. What gets celebrated gets done well."

Recognise there are two types of metrics, leading and lagging. Both are useful. To better understand the difference, ask yourself if you are operating within the organisation as Mr. Fix It or just cleaning up the mess. Are you taking the pulse of the organisation or performing an after-action review on last month's performance?

In most organisations, the reality is that everyone is focused on the after-action reviews. Often, the reason for this focus is that those numbers are the most readily available from current reports. They are easy to identify, and it is the quickest way to satisfy the demand for metrics. Lagging metrics are like looking in the rearview mirror; they only tell you where you have been, not where you are headed.

As a rule of thumb, you should have multiple leading metrics for every lagging metric. Leading metrics are performance drivers. Utilising them allows you the opportunity to take preemptive actions to improve your chances of meeting the desired outcomes, or lagging metrics. Leading metrics often measure activities or even processes.

Understand that the metrics the organisation chooses, much like the processes it employs, will drive employee behaviour. As an example, one organisation chose to measure the number of work orders requiring re-approval when labour or materials costs exceeded the original estimate by a significant margin. This is a lagging metric because it is computed after the work has been completed. The re-approval process was designed as a heads-up, information-sharing activity to show when more dollars were spent than anticipated. What behaviours did it drive? Planners would significantly overestimate labour hours and contractor/materials costs to avoid the re-approval process.

Look at how the domino effect takes hold from there. Those labour hour estimates were used to create the following week's schedule. Now we aren't assigning enough work to the technicians as the hours were padded. Wrench time suffered. Work orders on completed work that were left open became easy targets for technicians or contractors to charge time to when working on other jobs or idle. 

Materials for other jobs were charged to those work orders as well. How can we use metrics to drive behaviours? Introducing or revising the organisational metrics requires training for all stakeholders, not just maintenance personnel. Don't assume that the standard metrics you might take for granted, such as "schedule compliance," are understood by all. Using this metric alone, questions like "What counts toward the metric?", "When is the cutoff point after which items added to the schedule no longer count?" and "What is a scheduled job?" should be addressed from a learning, even machine-based, perspective.

Before reviewing specific metrics, it should be noted that variations to the following metrics could be defined or utilised based on your requirements. The listing is not intended to be comprehensive, but to provide insight on specific behaviours related to maintenance planning and scheduling. Let's begin with those metrics directly influenced by the maintenance planning and scheduling function. 

The following metrics are more general in nature to the maintenance organisation. However, the planning or scheduling role can and often does influence these metrics. Consider the simple metric of "schedule compliance" as an example. If the planner has not correctly identified the materials and parts, or incorrectly estimated the hours required for the job, it may be very difficult to complete the number of jobs that are scheduled. If the scheduler has not coordinated the various crafts and the work cannot be completed in the scheduled window, schedule compliance may be impacted.
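To make this concrete, here is a minimal sketch of a schedule-compliance calculation with the cutoff rule made explicit, since, as noted earlier, that rule is often left undefined. The work-order records and the cutoff date are hypothetical.

```python
# Minimal sketch of "schedule compliance" with an explicit cutoff:
# jobs added to the schedule after the cutoff do not count.
# All records below are hypothetical.
from datetime import date

cutoff = date(2019, 4, 1)

work_orders = [
    {"scheduled_on": date(2019, 3, 28), "completed_in_window": True},
    {"scheduled_on": date(2019, 3, 29), "completed_in_window": False},
    {"scheduled_on": date(2019, 4, 3),  "completed_in_window": True},  # excluded
]

eligible = [w for w in work_orders if w["scheduled_on"] <= cutoff]
done = sum(w["completed_in_window"] for w in eligible)
print(f"schedule compliance: {done / len(eligible):.0%}")  # 50%
```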

Are your metrics heading south or stagnating, not improving? Wondering how to identify the problems or root causes? Do you know the behaviours the metrics are driving? There is a saying that the "product always follows the process."

"If your system is not working, don't blame the people, blame the system." To that end, where is your mission outcome program to evaluate if the processes are working? Ideally, you should be pulling several completed work orders off the pile randomly every operational time period as a minimum. Gather the planner scheduler, supervisor, technicians, storeroom person, and maybe even the plant manager, as examples and walk the jobs. 

When you get to where the work occurred, you should be stepping through metric type items to determine the process effectiveness. Did the planner scheduler estimate the job duration correctly? Did they have the right parts? Were the parts staged and kitted? What about multicraft coordination? Did operations have the equipment ready based on the schedule? Did the job get completed before the due date? Was any follow-up work required? Was the work order completed and closed in a timely fashion? The primary goal is to determine if the operational processes worked, but you can also determine performance issues or the need for training, as examples.

1. System Test and Evaluation. The use of pilot, prototype, production, or specifically designed networks to obtain or validate engineering data on the performance of the system during the developmental phase normally funded from RDT&E of the program. 

2.  Effort associated with the development of any specialised tools or data in support of the system level test program. 

3. Test articles that are functionally configured to represent and test the complete defense materiel end item being developed or produced, 

4. Detailed planning, conduct, support, data reduction and reports excluding the contract data requirements list data 

 5. Test articles functionally configured to represent and test subsystems/components of the defense materiel end item being developed or produced 

6. Conduct testing up through the subsystem level, which can be associated with network element acceptance testing 

7. Demonstrate the engineering design and development process is complete, risks have been minimised, the system will meet specifications, and estimate the system's military utility when introduced

8.  Determine whether the engineering design is supportable, practical, maintainable, safe, etc. for operational use 

9.  Provide test data with which to examine and evaluate trade-offs against specification requirements, life cycle cost, and schedule 

10. Perform the logistics testing efforts to evaluate the achievement of supportability goals

11. Evaluate adequacy of the support package for the system, e.g., deliverable maintenance tools, test equipment, technical publications, maintenance instructions, and personnel skills and training requirements, etc. 

12.  Cost, schedule, performance measurement management, warranty administration, contract management, data management, vendor liaison, subcontract management, etc. 

13.  Support element management, defined as the logistics tasks management effort and technical control, and the business management of the support elements. 

14. Logistics management function encompasses the support evaluation and supportability assurance required to produce an affordable and supportable defense materiel system. 

15. Planning and management of all the functions of logistics include:  maintenance support planning and support facilities planning; 

16.  Support equipment requirements determination; supply transportation; provisioning requirements determination and planning; training system requirements determination; 

17. Network resource determination; organisational, intermediate, and depot maintenance determination management; and data management 

18. Program models, tests and associated simulations, e.g., wind tunnel, static, drop, and fatigue integration ground tests; test bed aircraft and associated support;

19. Qualification test and evaluation, development flight test, test instrumentation, ballistics, range and accuracy demonstrations, test facility operations, test equipment including its support equipment, chase and calibrated pacer aircraft support and logistics testing 

20.  Avionics integration  Test bench/laboratory, including design, acquisition, and installation of basic network and test equipments 

21.  Ability to simulate in the laboratory the operational environment of the avionics system/subsystem 

22.  Air vehicle equipment consisting of the avionics and/or other air vehicle subsystem 

23. Modules required by the bench/lab or flying test bed in order to provide a compatible airframe avionics system/subsystem for evaluation purposes 

24. Flying test bed, including requirements analysis, design of modifications, lease or purchase of test bed aircraft, 

25. Modification of aircraft, installation of avionics equipment and instrumentation, and checkout of an existing aircraft used essentially as a flying avionics laboratory 

26. Avionics test program, consisting of the effort required to develop test plans/procedures, conduct tests, and analyze hardware and software test results 

27. Verify the avionics equipments' operational capability and compatibility as an integrated air vehicle subsystem 

28. Effort required to design, code, de-bug, and document network programs necessary to direct the avionics integration test 

29.  Engine military qualification tests and engine preliminary flight rating tests 

30. For sea systems: model basin, hydrostatic, fatigue, shock, special sea tests/trials agenda preparation, data collection and analysis, dock and sea trials and hull vibration survey elements 

31.  Test articles such as Inert Measurement Vehicles, Launch Separation Vehicles, Separation and Control Test Vehicles, Boost Test Vehicles
 
32. Ordnance: test articles such as Ballistic Test rounds, Inert Measurement rounds, Dummy rounds, Launch Separation rounds, etc. 

33.  Operational test and evaluation conducted by agencies other than the developing command to assess the prospective system's military utility, operational effectiveness, operational suitability, logistics supportability 

34. Include compatibility, inter-operability, reliability, maintainability, logistic requirements, etc., cost of ownership, and need for any modifications. 

35. Initial operational test and evaluation conducted during the development of a system, such as system demonstration, flight tests, sea trials, mobility demonstrations, on-orbit tests, spin demonstration, stability tests, and qualification operational test and evaluation

36. Support  required to prove the operational capability of the deliverable system  e.g., technical assistance, maintenance, labour, material, etc. consumed during this phase of testing 

37.  Logistics testing efforts to evaluate the achievement of supportability goals and the adequacy of the support for the system 

 38. Deliverable maintenance tools, test equipment, technical publications, maintenance instructions, personnel skills and training requirements, network support elements 

39. Design engineering and production of system or subsystem mock-ups that have special contractual or engineering significance or that are not required solely for the conduct of one of the above elements of testing.

40. Risk reduction facilities where software and hardware can be developed, integrated, tested and evaluated for both stand alone functionality and/or interoperability prior to being fielded. 

41. Support elements necessary to operate and maintain, during test and evaluation, systems and subsystems, which are not consumed during the testing phase and are not allocated to a specific phase of testing. 

42. Repairable spares, repair of reparables, repair parts, consumables, warehousing and distribution of spares and repair parts

43. Test and support equipment, test bed vehicles, drones, surveillance aircraft, tracking vessels, contractor technical support 

44. Operational and maintenance personnel, special fixtures, special instrumentation, etc., that are utilised and/or consumed in a single element of testing and that should be included under that element of testing 

45. Deliverable training services, devices, accessories, aids, equipment, and parts used to facilitate instruction through which personnel will learn to operate and maintain the system with maximum efficiency. 

46. All effort associated with the design, development, and production of deliverable training equipment and its associated software as well as the execution of training services 

47. Overall planning, management, and task analysis function inherent in the work breakdown structure element systems engineering/program management 

48. Distinctive deliverable end items of training equipment, assigned by either a contractor or military service are required to meet specific training objectives. 

49. Operational trainers, maintenance trainers, and other items such as cutaways, mock-ups, and models e.g. Operational Instructional Equipment, Maintainer Instructional Equipment

50. Training course services; contractor-conducted training in-plant and service training; and the materials and curriculum required to design, execute, and produce a contractor developed training program. e.g. Operator/maintainer Instructional tools 






0 Comments

Top 10 Acquisition Program Processes Establish Requirements for Weapons System Field Utility

4/1/2019

0 Comments

 
Much of what is discussed in Pentagon meetings centers upon two broad topics: war-fighting and the material tools necessary to support a fight, now or in the future. But an examination as to how DoD goes about the actual procurement of the systems intended to support the military’s needs is not often conducted. 

This is not a good thing, because these processes may sometimes generate results that are unsuited, unanticipated, expensive to remediate or sustain, or confusing to the operators of the fleet. In short, these processes, if not carefully executed, may have serious effects on the readiness of DoD war-fighting abilities.

Still, this lack of attention regarding the process is often a function of system complexity rather than indifference: the entire process, end to end, is disjointed and confusing, and, because the key actors speak different languages and operate under different incentives, it is difficult to fully understand, even for those closest to the action.

So, what’s the deal? A key first step is recognising that we shouldn’t mob the ball and that there are multiple pathways and tools that we should use in the right combinations for the challenge at hand. Use experience and critical thinking to select the right methods amongst the full range of options and provide the rationale to the acquisition decision makers. 

Policy and practice both agree it is vital to think critically about the task at hand and then pick the right combination of tools and methods. Otherwise, we mob the ball on a certain approach even when not appropriate, resulting in an inevitable backlash from senior Defense policymakers and Congress.

For senior leadership: don't lock in on a single method – it doesn't lead to success. Let your staff know that you're open to approaches beyond the flavor of the month, and that you'll tolerate some risk if they do their homework by bringing a logical and well-reasoned program approach that fits the challenges we are up against.

DoD has stood up a new Futures Command to show it is serious about ending years-long, expensive modernisation programs and unclear requirements. While failing early and often can prevent some of those issues before a requirement becomes a program of record, there is still a risk that, in trying to be a quicker and more innovative enterprise, the command could run into cost increases and drawn-out timelines if it jumps too hard on emerging technologies.

“There are a variety of ways to fail when it comes to developing these technologies. The way to look at it is: you can make sure that you have it right, or merely believe that you’ll get it right.”

DoD has applied many best practices including close collaboration with senior leadership. But there are always parts of the plan that could be improved. Plans to develop weapons systems with emerging technology could come back to bite.

Proceeding into weapon systems development at earlier stages of technology maturity raises the risk that the resulting systems could experience cost increases, delivery delays, or failure to deliver desired capabilities.

It may not be the best idea to create programs around emerging technologies, before having a chance to test them in an operational environment.

“As DoD identifies the capability, there are technologies used to achieve that capability.  There is an advantage to maturing those technologies before you begin what’s called a program of record.”

DoD is already required to test in a “relevant” environment but a training situation, for example, might not give a good enough gauge of whether something will be useful.

When you test something in an operational environment, you might discover the technology isn’t as mature as you thought it was. And if you’ve already stood up a program for it, waiting for that technology to mature could delay delivery. And as DoD has learned, what it has in hand at the end might already be outdated.

To help encourage this mindset in the acquisition workforce, leaders launched a collection of digital tools that illustrate the full range of contracting methods. As programs develop their strategies the goal is to discourage acquisition mob ball and instead encourage acquisition professionals to apply the strategy, discipline, and other skills necessary to play the game well.

Military executives tend to grab onto a hot new contract approach and mob the ball as soon as the whistle blows. It’s the only thing they focus on, despite what their coaches told them about playing positions. If by chance the ball gets kicked free, the mob moves to another part of the field and again surrounds the ball. 

Eventually, they learn to play proper positions and be in the right place at the right time for the needs of the game. They begin to actually play and develop strategy, discipline, and teamwork skills. They improve their game, and it’s a lot more fun for everyone. But then some of them join the acquisition workforce and start playing acquisition mob ball. This is frustrating for everyone.

“Engineers” in the services are responsible for the acquisition of systems to equip the fleet. They are the organisations that interface directly with industry to develop and build new systems for the fleet.

Requirements, or descriptions of systems, are dished out along with the necessary funding, but the specific intentions often undergo unanticipated “translations” once they are delivered.

Officers responsible for the actual process of making things undertake a difficult voyage of discovery, often ranging far beyond the original expectations. As long as the top-level requirements are met, at cost, much of what happens in this part of the service, and then in industry, is typically beyond DoD staff’s “radar horizon.”

Two factors drive the command, standing far above any other considerations: cost and schedule. But these must translate into high performance in the field when systems are finally in the war fighter’s hands.

Many in DoD who are involved in this process are fully occupied by considerations related to “the budget,” acting as the primary interface between DoD and Congress, which provides changes to the funding up to the very last moment. This means that not only is it difficult to plan the large movements in the acquisition process, it is also impossible for the command to control execution.

To be plain, the fleet itself is not well connected to this process, and as far as any involvement in the discussions regarding the actual building of systems, it is outside of the loop. There are few if any mechanisms which allow the operational forces a voice at any point in the process. 

Industry presumes that the engineering shop speaks for the Fleet. This is not the case. Further, there is no sensible inquiry into what the fleet may want or need. Nor is there a considered methodology for gathering feedback from front-line operators. Writ large, the operational chain-of-command, which might actually have a sensible, real-time opinion on these material topics is not directly connected to those making the decisions and far removed from those building the systems.

Operating forces are supposed to be represented in this process by the respective Type Commanders, but these organisations are only loosely tied to the real decision makers who reside at the Pentagon. To be fair, in some communities, the relationships, authorities, responsibilities that exist between specific Type Commanders and their respective advocates in the Pentagon are close and well defined. In others, they are far looser and less specific.

The one agency technically charged to represent the fleet is Fleet Force Command, which produces a document called the “Integrated Prioritised Capability List” of the combatant commander’s highest priority requirements, defining shortfalls in key programs which impact capability of commanders to accomplish their missions. 

But the list is disconnected from the real churn in Washington: the ongoing, minute-to-minute interplay between command and Congress. Ultimately, the List is too broad as well as too brief, and it ends up, in the words of one flag officer, “little more than a top-level restatement of the obvious.”

Once the necessary funding and requirements have been provided, the command, to a certain degree, loses track of the engineering activities. The requirements are further developed, and then the system is put out for bid to industry via a Request for Proposal. It is in this process that things become even more disconnected from what the fleet would want, and fleet input may, technically speaking, only be a few layers deep.

“Cross-functional teams are a really good thing. We have a whole list of leading practices for these teams. And, really, our concern there is just that the services continue to do some of the things that they’ve been doing in those pilots.” Everything that’s gone well, and also what hasn’t, should be recorded and preserved according to the team report, but there is no agreed upon process.

“Getting those down and communicating those as this new command stands up is going to be vital, to make sure that they’re not reinventing the wheel or committing the same mistakes as they stand up this new command.”

“The directive that set up the cross-functional teams said it would be a good idea to get the lessons learned, but there was nothing formal in place. The leaders’ response was: absolutely, that’s something we need to do.”

Every program has a unique set of characteristics, circumstances, and goals. Best acquisition strategy practices are very clear that program managers and contracting officers should pick the right approaches and contract types for the challenges at hand.

The contractors usually do the deep work necessary to build the weapons system, and the results may be unexpected. In short, what comes out may differ from the expectations elemental to the entering concept. This is the price of submitting requirements that are insufficiently detailed. It is also the cost of engineers operating without the direct and deep involvement of line officers.

As for the various industrial competitors for the contract, they understand that their respective offerings will not only need to meet the stated, top-level requirements but will have to do so at a price point which will be in line with both Congress and DoD desires regarding cost and schedule. 

Leaders are launching a series of pilot programs to try to reduce the time needed to get companies on contract by setting up a schedule of regular events with key industry partners, with direct engagement continuing aside from those scheduled events.

“We want to look at suppliers from a portfolio point of view and not just get caught up in each program.” A series of direct dialogues between industry executives and DoD bosses will then try to move faster in researching and engineering new weapons systems.

Ultimately, someone wins the competition as judged by a selection board not inclusive of fleet personnel. The winner will then embark on the actual design, and eventually construction, of the weapons system, with varying degrees of oversight occurring at key Project Milestone Decision Points.

The defense acquisition system uses three milestones to oversee and manage major defense acquisition programs:

 Milestone A decision:

Approves program entry into the technology maturation and risk reduction phase, which involves developing technologies and reducing risks before committing the resources needed for complete system development.

 Milestone B decision:

Approves entry into the engineering and manufacturing development phase, which commits acquisition managers to developing a specific system for production and fielding.

 Milestone C decision:

Approves entry into the production and deployment phase, during which the contractor produces system units for fielding.

As industry works, unanticipated technical issues are discovered and costly schedule changes are introduced; DoD timelines are altered in turn. Trade-offs are then decided upon, and some of these may have significant end-product impacts. Unfortunately, these trade-offs are usually not coordinated with key DoD principals and are never known to the fleet.
 
Beyond that, many of the daily, critical decisions regarding production take place far below the level of the few DoD officers assigned to the project. As competition has increased in recent years, cost has become increasingly critical for industry to win key contracts.

One issue not receiving enough attention is that the experience level of many of those working on military contracts has declined in recent years: experience is expensive, and to win competitions for work, it is often necessary to sacrifice experience.

Finally, there is a technology factor at play in this process. Industry and DoD engineers sensibly want to keep moving forward, technologically speaking. It is more profitable to industry and more interesting in general to those who live in the material world, day-to-day.

Driving down technical risk before entering full-rate production helps bridge the gap between a promising technology and a full-rate production program by maturing technology and developing initial concepts of operation before committing to a major acquisition program.

In industry, new technology has a higher profit margin and promises the potential for additional sales. In the military, it is essential to maintain a technological air gap between our forces and those of potential adversaries.

However, it is one thing to provide a high-tech solution to the fleet, and it is entirely another to support that system through training and in-service support. 

Front line units are overwhelmed with new technologies, which may or may not be a net plus, regardless of their newness. The problem here is that while there is a large constituency for new systems, there is very little incentive associated with the support of existing systems.

While all of this may seem complex, the truth has more variables than represented here. Still, the point is that the entire process is riven with contradictions and conflicts. Whatever the solutions to these problems may be, they are well worth considering. To not consider them while focusing on officer appointments and rotations is like standing in a lake full of alligators and worrying about the mosquitos.

"We've got people in there who know what they're doing. It's time to roll up our sleeves and get on with it."

“It’s all about velocity. We’re trying to experiment. We are trying to prototype. We are trying to iterate and get stuff downrange quickly.”


Myth Busters 

Myth: All defense acquisition programs have large cost growth. 

Reality: Cost control has improved significantly. 

Not only is cost growth significantly lower than historical levels, but recent efforts have dramatically lowered cost growth further. Multiple measures show statistically lower cost growth on major programs. Historical analyses also show that cost controls are better than in decades past. We do still have legacy problems on older programs: total research, development, test and evaluation [RDT&E] cost growth is still rising due to older programs. We need to do better through continued incremental improvements, but recent improvements focused on acquisition fundamentals and an empowered DoD workforce have been more successful than hands-off acquisition reforms.

Myth: Defense programs usually cut quantity e.g., to pay for cost growth

Reality: Most major programs deliver the original baseline quantity or more. 

We don’t, as a rule, cut program quantity. Most major defense acquisition programs actually produce the quantities we originally planned at Milestone B. This runs counter to the impression given by focusing only on certain high-visibility programs that incurred major cuts in quantity.

Myth: Swings in O&S cost estimates indicate poor program management. 

Reality: The dynamics of cost estimates indicate that O&S costs appear to be heavily driven by external inflation factors. 

Analysis shows that the recent dynamics of program O&S costs estimated during acquisition correlate with the dynamics of labor, fuel, and maintenance costs. While this aligns with intuition, it also indicates that O&S cost increases involve both factors the acquisition system cannot control and some that can, in part, be controlled [e.g., system reliability, fuel efficiency, maintenance]. Operational tempo also affects O&S costs through many of these factors [e.g., the amount of fuel consumed and maintenance costs], and changes in forecast tempo will affect O&S costs independent of both inflation and weapon system performance. Thus, while the acquisition system needs continued attention to the levers it can control, with full knowledge that their effects often will not be seen for decades, stakeholders need to recognise the strong influence of other factors on O&S costs.
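A toy decomposition makes the point. In the sketch below, total O&S cost is split into labour, fuel, and repair terms so the effect of an external price change can be isolated from the levers a program controls; every coefficient and quantity is invented for illustration.

```python
# Minimal sketch: decompose annual O&S cost so an external price shock
# (here, fuel) can be separated from controllable factors such as
# reliability. All inputs are hypothetical.
def os_cost(labour_hours, labour_rate, fuel_gal, fuel_price,
            repairs, repair_cost):
    return (labour_hours * labour_rate   # moves with wage inflation
            + fuel_gal * fuel_price      # moves with fuel prices and tempo
            + repairs * repair_cost)     # moves partly with reliability

base = os_cost(10_000, 50.0, 80_000, 3.00, 120, 8_000.0)
fuel_shock = os_cost(10_000, 50.0, 80_000, 4.50, 120, 8_000.0)
print(f"fuel inflation alone adds {fuel_shock - base:,.0f} per year")
```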

Myth: Program requirements are unstable.
 
Reality: High-level requirements seldom change on major programs, and very few programs have many changes. 

Most Major Defense Acquisition Programs showed no changes that we could trace from the original baseline to the latest Selected Acquisition Report for the program. Moreover, of the few programs with any traced changes, most had only one. This is commensurate with experts’ experience indicating that changes are largely made at the engineering level as development seeks ways to meet high-level requirements. Changes, however, are not always bad. Some changes reflect prudent requirement reductions in response to unforeseen high costs of options uncovered in development, or to new affordability pressures. Other changes address new threats that would otherwise render an unmodified system obsolete upon delivery. Thus, flexibility, measured response, and continued trade-offs, together with smart management attention to cost implications, are more important than rigid requirement stability.

Myth: The DoD cannot acquire systems quickly. 

Reality: DoD acquisition can be timely and responsive. 

Despite criticism that defense acquisition is too slow, the highlights below show that schedule growth is lower than cost growth in development, and cycle times for major programs have not increased much from decades past, even while challenged with dramatic increases in weapon system complexity. This is not to say that internal processes cannot be improved, so efforts continue to institutionalise streamlining and smart decision making.

Myth: Defense acquisition is broken.
 
Reality: The acquisition system for decades has given DoD the most capable military in the world and has been improving both in the past and more recently. 

While there is no absolute definition of sufficiency, annual performance reports indicate the system functions reasonably well compared to the past and continues improving. We cannot look at a single metric to measure the performance of the defense acquisition system, and many metrics work at odds with each other. For example, the so-called “iron triangle” of cost, schedule, and technical performance has long shown that emphasising one or two dimensions often comes at the expense of the others. While cost metrics, followed by schedule metrics, are the easiest to quantify, we find all three dimensions indicate stability and, in many cases, significant improvement.

Systems engineering can be defined as an iterative process of top-down synthesis, development, and operation of a real-world system that satisfies, in a near optimal manner, the full range of requirements for the system. Systems engineering teams must work together on a set of inputs to achieve the desired output where the output is a system/capability that meets the user’s needs and requirements in a near optimal manner. 

Systems engineering teams must elicit requirements from customers and potential product/service users; validate and prioritise customer/user requirements; define requirements in a manner that is executable and verifiable; identify alternative solutions to achieve requirements; isolate balanced and robust solutions that “best” meet requirements; and verify that implemented solutions satisfy requirements.

Systems engineering must account for the entire life cycle of the system/capability acquisition. The life cycle functions that systems engineering accounts for are development, manufacturing/production/construction, deployment/fielding, operation, support, disposal, training, and verification. Systems engineering ensures that the correct technical tasks are accomplished during the acquisition process through planning, tracking, and coordinating phases.

1. Apply a systems engineering approach that balances total system performance and total ownership costs within the family-of-systems, system-of-systems context.

2. Develop a systems engineering plan for approval that describes the program's overall technical approach, including processes, resources, metrics, and applicable performance incentives.

3. Detail the timing, conduct, and success criteria of technical reviews.

4. Develop a total system design solution that balances cost, schedule, performance, and risk.

5. Develop and track the technical information required for decision making.

6. Verify that technical solutions satisfy customer requirements.

7. Develop a cost-effective and supportable system throughout the life cycle.

8. Adopt an open systems approach to monitor internal and external interface compatibility for systems and subsystems.

9. Establish baselines and configuration control.

10. Create the focus and structure of interdisciplinary teams for system and major subsystem level design.

0 Comments

Top 10 Operational Requirements Performance Specifications for Development of Deployed Systems

4/1/2019

0 Comments

 
In many situations, not enough attention is applied to establishing correct requirements up front. This causes problems when unclear requirements surface later in the life cycle, and more time and money are spent fixing problems.

So it is necessary that requirements are established in a systematic way to ensure their accuracy and completeness, but this is not always an easy task. This difficulty in establishing good requirements often makes it more of an art than a science. The difficulty arises from the fact that establishing requirements is a tough problem, and often the implementation gets mixed in with the requirements.

 In addition, it requires people with both communication and technical skills. As requirements are often weak about what a system should not do, this poses potential problems in the development of dependable systems, where these requirements are necessary to ensure that the system does not enter an undefined state. 

The development of dependable embedded systems requires even more complicated requirements as the embedded system not only interacts with networks but also with the outside world. Therefore, the importance of establishing good requirements is even greater in embedded systems design. 

Requirements and specifications are very important components in the development of any embedded system. Requirements analysis is the first step in the system design process, where a user's requirements should be clarified and documented to generate the corresponding specifications. 

While it is a common tendency for designers to be eager to start the design and implementation, discussing requirements with the customer is vital in the construction of operationally critical systems. Activities in this first stage have significant impact on downstream results in the system life cycle. For example, errors introduced during the requirements and specifications stage may lead to errors in the design stage.

When this error is discovered, the engineers must revisit the requirements and specifications to fix the problem. This leads not only to more time wasted but also the possibility of other requirements and specifications errors. 

Many accidents are traced to requirements flaws, incomplete implementation of specifications, or wrong assumptions about the requirements. While these problems may be acceptable in non-safety-critical systems, safety-critical systems cannot tolerate errors due to requirements and specifications. Therefore, it is necessary that the requirements are specified correctly to generate clear and accurate specifications. 

There is a distinct difference between requirements and specifications. A requirement is a condition needed by a user to solve a problem or achieve an objective. A specification is a document that specifies, in a complete, precise, verifiable manner, the requirements, design, behavior, or other characteristics of a system, and often, the procedures for determining whether these provisions have been satisfied. 

For example, a requirement for a car could be that the maximum speed be at least 120 mph. The specification for this requirement would include technical information about specific design aspects. Another term commonly seen in reports is requirements specification, which is a document that specifies the requirements for a system or component. It includes functional requirements, performance requirements, interface requirements, design requirements, and development standards. So the requirements specification is simply the requirements written down on paper.
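To make the distinction concrete in code, here is a minimal sketch built around the car example above. The class and field names are illustrative, not a standard representation.

```python
# Minimal sketch: a requirement states the user's need; a specification
# makes it precise and verifiable. Names here are illustrative.
from dataclasses import dataclass

@dataclass
class Requirement:
    statement: str                 # the user's need, in user terms

@dataclass
class Specification:
    requirement: Requirement
    metric: str                    # what is measured
    threshold: float               # pass/fail boundary

    def verify(self, measured: float) -> bool:
        """Procedure for determining whether the provision is satisfied."""
        return measured >= self.threshold

req = Requirement("Maximum speed shall be at least 120 mph.")
spec = Specification(req, metric="top speed (mph)", threshold=120.0)
print(spec.verify(measured=126.0))  # True: provision satisfied
```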

The first step toward developing accurate and complete specifications is to establish correct requirements. As easy as this sounds, establishing correct requirements is extremely difficult. There are different steps we can take toward establishing correct requirements. Although some of the suggestions sound fairly obvious, actually putting them into practice may not be as easy as it sounds.

The first step is to negotiate a common understanding. "There's no sense being exact about something if you don't even know what you're talking about." Communication between the designer and customer is vital. There is no point in trying to establish exact specifications if the designers and customers cannot even agree on what the requirements are. 

Problems stem from ambiguities in stating requirements. For example, say the requirement states that we want to create a means of transporting a group of people from one theatre to another. Possible interpretations of this requirement include building a ship or an aircraft, among other possibilities. Although each of these transportation devices satisfies the requirement, they are certainly very different. Ambiguous requirements can be caused by missing requirements, ambiguous words, or introduced elements.

The requirement does not state how fast the troops should be transported; taking an aircraft would certainly be faster than riding on a ship. This is a missing requirement. "A group of people" in the above requirement is an example of unclear wording: what exactly does "group" mean?

The requirement states to "create a means" and not "design a transportation device". This is an example of introduced elements where an incorrect meaning slipped into the discussion. It is important to eliminate or at least reduce ambiguities as early as possible because the cost of them increases as we progress in the development life cycle. 

Often the problem in establishing correct requirements is how to get started. One of the most important things in getting started is to ask questions. Context-free questions are high-level questions posed early in a project to obtain information about global properties of the design problem and potential solutions. Examples of context-free questions include: Who is the client? What is the reason for solving this problem? What environment is this product likely to encounter? What is the trade-off between time and value?

These questions force both sides, designer and customer, to look at the higher issues. Also, since these questions are appropriate for any project, they can be prepared in advance. Another important point is to get the right people involved. There is no point in discussing requirements if the appropriate people are not involved in the discussion. 

Related to getting the right people involved is making meetings work. Having effective meetings is not as easy as it sounds. However, since they play a central role in establishing requirements it is essential to know how to make meetings work. There are important points to keep in mind when creating effective meetings, which include creating a culture of safety for all participants, keeping the meeting to an appropriate size, and other points.

Exploring possibilities is another important step toward generating correct requirements. Ideas are essential in establishing correct requirements, so it is important that stakeholders can get together and generate ideas. Every project will also encounter conflicts. Conflicts can arise from personality clashes, people who cannot get along, intergroup prejudice such as that between technical people and marketing people, and level differences.

It is important that a facilitator is present to help resolve conflicts. In establishing requirements, it is important to specifically establish the functions, attributes, constraints, preferences, and expectations of the product. Usually in the process of gaining information, functions are the first ones to be defined. Functions describe what the product is going to accomplish.

It is also important to determine the attributes of a product. Attributes are characteristics desired by the client, and while "twin" products can have similar functions, they can have completely different attributes. After all the attributes have been clarified and attached to functions, we must determine the constraints on each of the attributes. Preferences, desirable but optional conditions placed on an attribute, can also be defined in addition to its constraints. Finally, we must determine what the client's expectations are. This will largely determine the success of the product.

Technical reviews function as a testing tool for indicating progress of the requirements work. They can be formal or informal and generally deal only with technical issues. Technical reviews are necessary because it is not possible to produce error-free requirements, and it is usually difficult for producers to see their own mistakes.

User satisfaction tests are used on a regular basis to determine whether a customer will be satisfied with a product. Black-box test cases are constructed primarily to test the completeness, accuracy, clarity, and conciseness of the requirements. Existing products are useful in determining the desirable and undesirable characteristics of a new product.

At some point it is necessary to end the requirements process as the fear of ending can lead to an endless cycle. This does not mean that it is impossible to revisit the requirements at a later point in the development life cycle if necessary. However, it is important to end the process when all the necessary requirements have been determined, otherwise stakeholders will never proceed to the design cycle. 

Establishing good requirements requires workers with both technical and communication skills. Technical skills are required as the embedded system will be highly complex and may require knowledge from different engineering disciplines such as electrical engineering and mechanical engineering. Communication skills are necessary as there is a lot of exchange of information between the customer and the designer. Without either of these two skills, the requirements will be unclear or inaccurate. 

It is essential that requirements in safety-critical embedded systems are clear, accurate, and complete. The problem with requirements is that they are often weak about what a system should not do. In a dependable system, it is just as important to specify what a system is not supposed to do as to specify what a system is supposed to do. In these systems there is an even greater urgency that the requirements be complete, because the system will only be dependable if we know exactly what it will do in a certain state and the actions it should not perform.

Requirements with no ambiguities will also make the system more dependable. Extra requirements are usually needed when developing a dependable embedded system. For example, in developing a dependable system for non-computer-literate stakeholders, extra requirements should be specified to keep the system safe even in exceptional situations.
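
One lightweight way to capture the "shall not" side of a specification is to encode the prohibited behaviour as an explicit guard. The pump controller below is invented for illustration; a real safety case would demand far more rigour:

    class PumpController:
        """Sketch of a controller whose negative requirement is explicit:
        the pump shall NOT run while the supply tank is empty."""

        def __init__(self):
            self.tank_empty = True
            self.pump_running = False

        def start_pump(self) -> bool:
            # Negative requirement enforced before the positive action.
            if self.tank_empty:
                return False          # refuse: running dry would damage the pump
            self.pump_running = True
            return True

    ctrl = PumpController()
    assert ctrl.start_pump() is False     # prohibited state: request refused
    ctrl.tank_empty = False
    assert ctrl.start_pump() is True      # normal state: pump starts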

A system is defined very loosely and can be understood through any of the following definitions: a combination of elements forming a complex or unitary whole; a set of correlated members; an ordered and comprehensive assemblage of facts, principles, or doctrines in a particular field of knowledge; a coordinated body of methods; a complex scheme or plan of procedure; or any regular or special method or plan of procedure, as in a system of marking.

The important characteristics of a system are unity, functional relationships, and useful purpose. Systems engineering is not a technical specialty but a process used in the evolution of systems from the point when a need is identified, through production and construction, to deployment of the system for consumer use.

Systems engineering requires knowledge from different engineering disciplines such as aeronautical engineering, civil engineering, and electrical engineering. The development of embedded systems also requires the knowledge of different engineering disciplines and can follow the techniques used for systems engineering. Therefore, it is appropriate that the steps used in establishing system requirements also be applicable to requirements for embedded systems. 

The system operational requirements define how the system will be used in the field by the customer. In defining system requirements, the tendency is usually to cover areas related to performance rather than areas related to support. This means that emphasis is placed on only part of the system, not the whole system. It is essential to take the entire system into consideration when defining system requirements. The system maintenance concept describes the overall support environment in which the product is supposed to exist.

After the system operational requirements and system maintenance concept are defined, the preliminary system analysis is performed to determine which approach for system development should be adopted. 

The following process is usually applied: after data is collected and input, the model may be used, and analysis of the model's output leads to a recommendation for some kind of action. After the preliminary system analysis, advanced system planning is done. Early system planning takes place from the identification of a need through the conceptual design phase. The results of this planning are defined either as technical requirements included in the specifications or as management requirements included in a program management plan.
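
A minimal sketch of that collect, model, analyse, and recommend loop might look like the following. The candidate approaches, criteria, and weights are assumptions made up for illustration:

    # Hypothetical trade study: score candidate development approaches
    # against weighted evaluation criteria, then recommend the best one.
    criteria_weights = {"cost": 0.4, "schedule": 0.3, "risk": 0.3}

    candidates = {                     # collected input data (1 = poor, 5 = good)
        "modify existing system": {"cost": 4, "schedule": 5, "risk": 3},
        "new development":        {"cost": 2, "schedule": 2, "risk": 4},
    }

    def score(ratings: dict) -> float:
        """The 'model': a simple weighted sum of criterion ratings."""
        return sum(criteria_weights[c] * r for c, r in ratings.items())

    # Analysis of the model output leads to a recommendation for action.
    best = max(candidates, key=lambda name: score(candidates[name]))
    print(f"Recommended approach: {best} (score {score(candidates[best]):.2f})")

A real preliminary system analysis would iterate this loop as better input data arrives, but the shape of the process is the same.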

The system specification includes information from the operational requirements, maintenance concept, and feasibility analysis. The System Engineering Management Plan contains the following sections. 

The technical program planning and control part describes the program tasks that have to be planned and developed to meet system engineering objectives such as work breakdown structure, organisation, risk management, etc. 

The system engineering process part describes how the system engineering process applies to program requirements. The engineering specialty integration part describes the major system-level requirements in the engineering specialty areas such as reliability, maintainability, quality assurance, etc. 

The conceptual design review is also performed during the conceptual design stage. It usually occurs early in the systems engineering development life cycle, after the operational requirements and the maintenance concept have been defined.

It is very important to verify that the requirements are correctly implemented in the design. This is done with requirements traceability, which is usually defined as "the ability to follow the life of a requirement, in both a forward and backward direction, i.e. from its origins, through its development and specification, to its subsequent deployment and use, and through periods of ongoing refinement and iteration in any of these phases."

Requirements traceability captures the relationships between the requirements, specifications, and design. Standards for systems development require that requirements traceability be used. Although requirements traceability has been around for decades, there has been no consensus on what kind of information should form part of a traceability scheme.

The problem is that the definition of traceability differs when taken from different points of view of the system; the view of the system is different for the customer, the project manager, the test engineer, and so on. Each organisation has a different purpose and methodology for requirements tracing.

The projects typically involved are real-time, embedded, in vitro diagnostic instruments. Traceability aids project managers in verification, cost reduction, accountability, and change management. Traceability helps verify that system requirements are satisfied in the design process and are tested for during the verification process. Traceability also allows product requirements to be allocated early in the development cycle, reducing the cost of correcting defects due to untraceable components in the integration and system test phase.

Providing quantitative traceability analyses also allows for accountability in making sure that project milestones are approved, deliverables are verified, and customers are satisfied. The documentation from traceability also keeps information organised during changes in staff or management. 

While the division also had an automated traceability system that allowed it to automate many of the tasks, it was the process, not the tool, that led to its success. The main purpose of the traceability program is to identify links and determine that the links are complete and accurate.

The traceability analysis consists of the following aspects: forward requirements analysis, reverse requirements trace, forward test analysis, and reverse test trace. These steps trace each requirement through its design elements and tests, so it is possible to give feedback during the life cycle about design components that are not yet implemented. Each requirement is stored as a separate element, and graphical interfaces show how the individual pieces of data relate to each other and trace back to their sources.
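
The forward and reverse traces can be pictured as simple lookups over stored links. The requirement, design, and test identifiers below are hypothetical:

    # Hypothetical trace links: requirement -> design elements -> test cases.
    req_to_design = {"R1": ["D1"], "R2": ["D2", "D3"], "R3": []}
    design_to_test = {"D1": ["T1"], "D2": ["T2"], "D3": []}

    # Forward requirements trace: every requirement must reach a design element.
    untraced_reqs = [r for r, d in req_to_design.items() if not d]

    # Forward test trace: every design element must be covered by a test.
    untested_designs = [d for d, t in design_to_test.items() if not t]

    # Reverse trace: every design element must originate from some requirement.
    claimed = {d for ds in req_to_design.values() for d in ds}
    orphan_designs = [d for d in design_to_test if d not in claimed]

    print("Requirements with no design element:", untraced_reqs)    # ['R3']
    print("Design elements with no test:", untested_designs)        # ['D3']
    print("Design elements with no requirement:", orphan_designs)   # []

Run over a real project database, the same three checks flag uncovered requirements, untested design elements, and orphan design elements with no originating requirement.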

There also exist requirements simulation tools that allow designers to capture system requirements and designs in graphical, executable system models. These models can be "simulated" to ensure requirements correctness. Such tools focus on behavioural and performance modeling.
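
In the same spirit, a requirement can be made executable as a small state model and simulated against a scenario. The states and events here are invented for illustration:

    # Executable sketch of the requirement: "From STANDBY, an ALARM event
    # shall always lead to SUPPRESSING."
    transitions = {
        ("IDLE", "arm"): "STANDBY",
        ("STANDBY", "alarm"): "SUPPRESSING",
        ("SUPPRESSING", "clear"): "STANDBY",
        ("STANDBY", "disarm"): "IDLE",
    }

    def simulate(events, state="IDLE"):
        trace = [state]
        for e in events:
            state = transitions.get((state, e), state)   # unknown events: no change
            trace.append(state)
        return trace

    trace = simulate(["arm", "alarm", "clear", "disarm"])
    print(" -> ".join(trace))   # IDLE -> STANDBY -> SUPPRESSING -> STANDBY -> IDLE

    # Requirement check: an alarm in STANDBY must reach SUPPRESSING.
    assert simulate(["arm", "alarm"])[-1] == "SUPPRESSING"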

Designers can use an object-oriented modeling language for the specification of complex systems, which provides a common means of describing complex systems using object orientation. It is a specification language and not a development process, so no details are given on the features or on how and when to "link" them together during systems development.

The main concerns about using an object-oriented language for real-time embedded systems are the speed and size of the application. Points in support of object-oriented programming for embedded systems are that objects are efficient and that developers can write larger systems with fewer defects in less time.

Restrictions in architectural modeling are cited as one shortfall. For example, there are no predefined network node stereotypes to help improve standardisation, and the lack of these predefined stereotypes means there is no capability to capture information in enough depth to fully describe the operational properties of the system. Another shortfall concerns deficiencies in concurrency modeling and scheduling.

It would also be very helpful to have a technique that can assist in the structured development of correct and accurate requirements. In particular, Quality Function Deployment is a method for structured product planning and development that enables a development team to specify clearly the customer's wants and needs, and then evaluate each proposed product or service capability systematically in terms of its impact on meeting those needs.

Quality Function Deployment is based on matrices that show the relationships between, for example, a customer need and a feature of the system. Say the rows define customer wants for a car: want A is "the car looks cool" and want B is "the car never breaks". The columns are specific features of the car: say one is good gas mileage and the other is aerodynamic styling. Each cell represents the relationship between a customer want and a feature of the car. Good gas mileage is not related to the car looking cool, so there is no relation. However, a car with aerodynamic styling would look cool, so there is a strong relation. Similarly, a car that never breaks has a possible relation with good gas mileage but not much relation with aerodynamic styling.
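
That matrix is easy to sketch in code. The wants, features, and relationship strengths below simply restate the car example above, graded 0 for no relation, 1 for a possible relation, and 3 for a strong relation:

    # QFD relationship matrix for the car example above.
    # Rows are customer wants, columns are product features.
    wants    = ["looks cool", "never breaks"]
    features = ["good gas mileage", "aerodynamic styling"]

    matrix = {
        ("looks cool",   "good gas mileage"):    0,  # no relation
        ("looks cool",   "aerodynamic styling"): 3,  # strong relation
        ("never breaks", "good gas mileage"):    1,  # possible relation
        ("never breaks", "aerodynamic styling"): 0,  # not much relation
    }

    # A feature's column total hints at how much customer value it carries.
    for f in features:
        total = sum(matrix[(w, f)] for w in wants)
        print(f"{f}: {total}")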

It is essential to establish correct requirements and specifications early in the development process to prevent errors later in the system life cycle. However, we still need to accept the reality that requirements change over time. There are several reasons why good requirements are difficult to produce. First, requirements for embedded systems are even more complex than for other systems because embedded systems must interact with the outside world and not just with other system components. Establishing correct requirements requires people with both technical and communication skills. In addition, it is a tough problem, and the implementation often gets mixed in with the requirements.

The complexity of establishing correct requirements makes it more than just an engineering skill. People often have the attitude of "I'll know it when I see it," which makes it difficult to establish the requirements early. Over the past years, the field has focused mainly on tool development: prototyping/execution tools, requirements traceability tools, and specification/requirements tools. The current trend is toward executable requirements/simulation tools.

The conceptual system design is the first stage in the systems design life cycle and an example of the system requirements definition process. In establishing system requirements, the first step is to define a need, based on a want or desire. Usually, an individual unit or organisation identifies a need for an item or function, and then a new or modified system is developed to fulfill the requirement.

After a need is defined, feasibility studies should be conducted to evaluate various technical approaches that can be taken. The system operational requirements should also be defined. This includes the definition of system operating characteristics, maintenance support concept for the system, and identification of specific design criteria. In particular, the system operational requirements should include the following elements:

1. Define the mission - The first step always begins with clarifying the objectives, identifying the primary operating mission of the system in addition to alternative and secondary missions, and limiting the problem so that it can be effectively implemented. 

2. Identify feasible alternatives - All the alternatives should be considered to make sure that the best approach is chosen. 

3. Select the evaluation criteria - The criteria for the evaluation process can vary considerably, so the appropriate ones must be chosen. 

4. Performance and physical parameters - Definition of the operating characteristics or functions of the system.

5. Use requirements - Anticipation of the use of the system.

6. Operational deployment or distribution - Identification of transportation and mobility requirements. Includes quantity of equipment, personnel, etc. and geographical location.

7. Operational life cycle - Anticipation of the time that the system will be in operational use.

8. Effectiveness factors - Quantitative figures specified as system requirements, including cost/system effectiveness, mean time between maintenance, failure rate, maintenance downtime, etc. A worked availability example follows this list.

9. Generate model input data - The requirements for appropriate model input data should be specified.

10. Manipulate the model - Apply modeling techniques; a series of models should be used.
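
As a worked example of the effectiveness factors in item 8, availability can be computed from standard reliability figures: inherent availability is MTBF / (MTBF + MTTR), and operational availability is MTBM / (MTBM + MDT). The numbers below are invented:

    # Worked example for item 8: availability from reliability figures.
    mtbf = 1000.0   # mean time between failure, hours (invented)
    mttr = 8.0      # mean time to repair, hours (invented)
    mtbm = 400.0    # mean time between maintenance, hours (invented)
    mdt  = 24.0     # mean maintenance downtime, hours (invented)

    ai = mtbf / (mtbf + mttr)   # inherent availability
    ao = mtbm / (mtbm + mdt)    # operational availability
    print(f"Inherent availability:    {ai:.4f}")   # ~0.9921
    print(f"Operational availability: {ao:.4f}")   # ~0.9434

Specifying such figures as requirements gives the design and test teams concrete, measurable targets rather than vague intentions.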
