Mission readiness depends on secure, reliable connectivity to maintain electronic workflows. Without it, the Department of Defense cannot connect its bases to provide effective and efficient communications, and personnel cannot seamlessly communicate with each other, order parts, access technical documents and manuals, upload and analyze aircraft data, or reach secure information in maintenance hangars, munitions bunkers and airfields.
The DoD needs to prioritize and establish a path to enabling the flight line of the future; that is, ensuring and providing connectivity everywhere.
The flight line of the future encompasses the entire communications system on a military base, including applications, databases, logistics, facilities and more. To meet future mission requirements, the DoD can upgrade and streamline communications with advanced technology that meets every military base’s unique requirements so all airmen have access to the information they need, when they need it.
Successful use cases can be used to replicate communication systems globally across all military bases to realize the full mission value of electronic workflow. This will enable the DoD to save time and money, while becoming much more efficient in accomplishing its mission.
A focus on data-driven decisions helped solve a problem that couldn’t be fixed with more money alone. By embracing data and developing the Naval Sustainment System and Performance to Plan frameworks for aviation readiness, the Navy had 340 more aircraft flying in March 2020 than in March 2019, including 90 more Super Hornets.
As an example of how data changed the approach to boosting aviation readiness, an initial dataset in 2018 showed the two quickest ways to improve readiness were to tackle intermediate-level maintenance and manning at the squadrons.
Improvements in both areas were ordered, but continued data analysis showed that, “even if you do that – this was specifically for strike fighters – it said you are not going to get the 80-percent mission capable rate in your strike fighters.
“That’s when we, all the sudden, because of that, made the decision to go ahead and do reform basically across all the tenets of naval aviation: at our squadron level, our intermediate level, our depot level, the way we do engineering, the way we run the operations center now and make decisions – all of that was because of the insights provided to us through data. And the results are, we’re at 341 Super Hornets mission capable. We’re at 340 more airplanes of all types flying today than last year. So it’s really a testament to the power of data-driven decisions.”
Leaders have made sure that everyone – from the squadrons up to the chief of naval operations – has access to the same data on unit readiness and manning through the weekly Air Boss Report Card, to ensure the proper alignment and prioritization of money, manpower, parts and leadership attention.
Dedicated Data Analytics Team
The Force Readiness Analytics Group (FRAG) was stood up to take existing data in the aviation realm and apply machine learning and data analytics models, turning routinely collected information into something more actionable and predictive.
“We have to be able to man, train and equip better, more granular, more with outcomes in mind: mission capability rates, fire execution. How do we be very precise with our resourcing of that to enable those outcomes?”
Though the military sometimes wants to wait until a system is perfected before using it, that wasn’t an option because “we need answers today. Leadership can’t wait a month to go gather the data and figure out what your null sets are. We learn by doing.”
“We know what data we have, and working with the experts we know what the data means. And by doing that, by bringing it together in one place, we understand where our gaps are at. And the question is, how do we close those gaps: is it other data sets, do we need another software tool or something?”
The early days of the FRAG’s work weren’t easy, with data being siloed and not always kept in compatible formats. The Navy has 20-plus years of data on every maintenance action that has taken place on every airframe, as well as every time a pilot flies, every training event associated with each sortie, the financials for every flight hour flown or maintenance action taken, and so forth – but some of the data is recorded daily versus monthly, and other inconsistencies made it difficult to mesh everything together into a single database.
“Over the years we’ve had different organizations stand up different software and different databases to collect information to answer their questions or to answer their interests. And it wasn’t really in the scope of the technology to say, how do we do this in one big cloud.”
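The kind of meshing problem described above can be made concrete with a minimal sketch. The example below uses Python and pandas with hypothetical column names and made-up records (not the Navy’s actual schemas): a daily aircraft-status feed is rolled up to the month so it can be joined with a monthly financial feed, and any mismatched keys surface as gaps.

```python
# A minimal sketch, with hypothetical columns and made-up records, of
# aligning a daily status feed with a monthly cost feed before analysis.
import pandas as pd

# Hypothetical daily aircraft-status records (one row per aircraft per day).
daily_status = pd.DataFrame({
    "buno": ["165801", "165801", "165802", "165802"],
    "date": pd.to_datetime(["2020-03-01", "2020-03-02", "2020-03-01", "2020-03-02"]),
    "mission_capable": [1, 0, 1, 1],
})

# Hypothetical monthly financials (one row per aircraft per month).
monthly_cost = pd.DataFrame({
    "buno": ["165801", "165802"],
    "month": pd.to_datetime(["2020-03-01", "2020-03-01"]),
    "maintenance_cost": [125000.0, 98000.0],
})

# Roll the daily data up to the month so both tables share a common grain.
daily_status["month"] = daily_status["date"].dt.to_period("M").dt.to_timestamp()
mc_by_month = (daily_status
               .groupby(["buno", "month"], as_index=False)["mission_capable"]
               .mean()
               .rename(columns={"mission_capable": "mc_rate"}))

# Merge into one table; keys present in only one source show up as NaN "gaps".
combined = mc_by_month.merge(monthly_cost, on=["buno", "month"], how="outer")
print(combined)
```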
An early win for the FRAG was looking at 84-day aircraft inspections. Under the Naval Sustainment System, a squadron-level initiative was to improve scheduled maintenance, including inspections on each airframe that take place about every 84 days.
There was no real standard for how long those inspections should take, and completion times ranged from two days to 12 days based on how each squadron prioritized that routine scheduled maintenance.
The FRAG wanted to better measure those maintenance evolutions – for most squadrons, at least one aircraft is undergoing an 84-day inspection during any given week, meaning slow completion times can have real consequences for the number of up aircraft and therefore the number of flight hours the pilots can achieve that week.
After some work diving into the maintenance action forms and other available data, the FRAG determined that the goal should be three days for the inspection and that the bulk of inspections were actually taking about five days.
Armed with that knowledge, the air wings on the East Coast and West Coast have both been able to bring the inspection durations down by transparently sharing these numbers weekly and looking at what resources are needed to help squadrons that aren’t meeting the three-day goal.
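As an illustration of that measurement (hypothetical field names and records, not the FRAG’s actual tooling), inspection durations can be derived from the start and completion dates on the maintenance records and compared against the three-day goal, squadron by squadron:

```python
# A minimal sketch: compute 84-day inspection durations per squadron and
# flag squadrons whose average exceeds the three-day goal. All records
# and field names here are hypothetical.
import pandas as pd

inspections = pd.DataFrame({
    "squadron": ["VFA-11", "VFA-11", "VFA-136", "VFA-136"],
    "buno": ["165801", "165803", "165902", "165904"],
    "start": pd.to_datetime(["2020-01-06", "2020-02-10", "2020-01-13", "2020-02-03"]),
    "complete": pd.to_datetime(["2020-01-09", "2020-02-17", "2020-01-16", "2020-02-12"]),
})

GOAL_DAYS = 3  # inspection-duration goal from the analysis described above

inspections["duration_days"] = (inspections["complete"] - inspections["start"]).dt.days
by_squadron = inspections.groupby("squadron")["duration_days"].agg(["mean", "max", "count"])
by_squadron["over_goal"] = by_squadron["mean"] > GOAL_DAYS
print(by_squadron)
```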
In a more complex initiative, the FRAG looked at how safety affects readiness – specifically, how towing mishaps that cause Class C and Class D damage affect readiness rates in squadrons.
“For the longest time we’ve been reporting mishaps A through D by day and by community, but we were never able to really understand the impact to the jets’ mission capability. So we asked the question: due to a mishap, what’s the average time it takes to get back to mission capable?
“So we realized, in our safety database we have an aircraft number, and in our maintenance reporting system we have the status of that jet for every day. So we could actually tie when the mishap happened to the status of that jet, and then how long it took to get back to MC.”
“And that was just one of the questions he’s had. So now we can actually measure for [Fiscal Year 2019], due to ground mishaps, a jet takes this long to get back to MC, and then track the status of that jet.
“And then the next question is, how many maintenance man hours went into building that jet back up? So we can go into another data set, same thing, tie the maintenance action form to the date-time period and aggregate all those maintenance man hours.”
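A rough sketch of that join, using hypothetical schemas and made-up records in place of the real safety and maintenance reporting systems, looks like this: match each mishap to the jet’s daily status to find the first day it is mission capable again, then sum the maintenance man hours charged against the jet in that window.

```python
# A minimal sketch, with hypothetical schemas, of tying a ground mishap to
# daily status records (days back to MC) and to maintenance man hours.
import pandas as pd

mishaps = pd.DataFrame({
    "buno": ["165801"],
    "mishap_date": pd.to_datetime(["2019-05-02"]),
    "mishap_class": ["C"],
})

daily_status = pd.DataFrame({
    "buno": ["165801"] * 5,
    "date": pd.date_range("2019-05-02", periods=5, freq="D"),
    "status": ["NMC", "NMC", "NMC", "NMC", "MC"],
})

maint_actions = pd.DataFrame({
    "buno": ["165801", "165801"],
    "action_date": pd.to_datetime(["2019-05-03", "2019-05-05"]),
    "man_hours": [220.0, 140.0],
})

rows = []
for _, m in mishaps.iterrows():
    # First day on or after the mishap that the jet reported mission capable.
    after = daily_status[(daily_status["buno"] == m["buno"]) &
                         (daily_status["date"] >= m["mishap_date"])]
    back_mc = after.loc[after["status"] == "MC", "date"].min()
    days_down = (back_mc - m["mishap_date"]).days

    # Labor charged against the jet between the mishap and return to MC.
    labor = maint_actions[(maint_actions["buno"] == m["buno"]) &
                          (maint_actions["action_date"].between(m["mishap_date"], back_mc))]
    rows.append({"buno": m["buno"], "days_to_mc": days_down,
                 "man_hours": labor["man_hours"].sum()})

print(pd.DataFrame(rows))
```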
Once the knowledge exists on how a Class C mishap affects the entire squadron’s readiness, smart decisions can be made specific to an individual squadron and where it is in its deployment cycle.
“So if we just had another mishap, the aircraft is fine, the pilot is fine, but there’s going to be an impact to this squadron: it’s going to take them probably 40 days and 1,000 maintenance man hours to rebuild this jet, and if they’re in this phase of the deployment cycle when they’re trying to execute their flight schedule, there’s going to be this much more drain on resources for this jet.
“So maybe we move that jet to another squadron, or maybe we move it to another location so they’re not having to rebuild it, or bring in contractors to do that. So it’s understanding the result not only on the jet but on the squadron and where they’re at.”
The dashboard shows the readiness of each squadron and the status of each individual jet, for leadership up the chain of command to use in making decisions about applying resources.
The dashboard, used to generate the Air Boss Report Card, is the culmination of culling data from different sources, generating a user-friendly interface and applying machine learning tools to auto-update the display every morning.
Through the dashboard, a viewer could look at the entire Super Hornet inventory, for example, and see all relevant information: total inventory, required versus actual number of mission capable aircraft, 30-day mission capable trends and more across the top; a look at each squadron, organized by carrier air wing, with each jet color-coded as mission capable, down for maintenance or down for supply; a look at other Super Hornets outside of carrier air wings, such as training and testing units; and more.
Clicking on an individual squadron shows more information: flight hours; the fit, fill and experience level of the maintainers; and additional notes for the air wing about the squadron’s performance or needs.
The key is the data being automatically pulled and updated daily. To generate the dashboard’s information by hand for any given day would take several weeks, since the data comes from many sources.
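A rough illustration of what such an automated daily refresh might look like is sketched below; the loader functions are placeholders standing in for the real source systems, and the column names are invented for the example.

```python
# A minimal sketch of a daily refresh job: pull each (placeholder) source,
# merge on a common key, and write out the snapshot the dashboard reads.
from datetime import date
import pandas as pd

def load_readiness() -> pd.DataFrame:
    # Placeholder for a pull from the maintenance/status reporting system.
    return pd.DataFrame({"squadron": ["VFA-11", "VFA-136"], "mc_aircraft": [8, 10]})

def load_manning() -> pd.DataFrame:
    # Placeholder for a pull from the manning/fit-fill-experience source.
    return pd.DataFrame({"squadron": ["VFA-11", "VFA-136"], "maintainer_fill": [0.92, 0.87]})

def load_flight_hours() -> pd.DataFrame:
    # Placeholder for a pull from the flight-hour execution source.
    return pd.DataFrame({"squadron": ["VFA-11", "VFA-136"], "hours_flown": [310.5, 288.0]})

def build_daily_snapshot() -> pd.DataFrame:
    snapshot = load_readiness()
    for source in (load_manning, load_flight_hours):
        snapshot = snapshot.merge(source(), on="squadron", how="left")
    snapshot["as_of"] = date.today().isoformat()
    return snapshot

if __name__ == "__main__":
    # In practice this would run under a scheduler (e.g., a cron job) each morning.
    build_daily_snapshot().to_csv("daily_dashboard_snapshot.csv", index=False)
```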
The team implemented the Performance to Plan (P2P) mandate to increase fleet readiness and established a rapid, data-driven decision-making process for senior leadership.
The team created machine-learning models for determining the monthly number of mission capable jets per squadron by incorporating manning-training-equipment datasets from Naval F/A-18 squadrons.
The team “developed machine-learning models to track and predict aviation fleet readiness. Their work analyzes historical information to seamlessly view data and predict future performance, improving the efficiency, effectiveness, and readiness of the Naval Aviation Enterprise.”
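That description implies a supervised regression problem: predict a squadron’s monthly count of mission capable jets from its manning, training and equipment indicators. The sketch below uses entirely synthetic data and scikit-learn only to show the shape of such a model; the features, target and algorithm are assumptions, not the team’s actual work.

```python
# A minimal sketch of a readiness-prediction model on synthetic data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500  # synthetic squadron-months

# Hypothetical features: maintainer fill rate, average experience (years),
# training completion rate, and parts availability.
X = np.column_stack([
    rng.uniform(0.6, 1.0, n),   # maintainer_fill
    rng.uniform(2.0, 8.0, n),   # avg_experience_yrs
    rng.uniform(0.5, 1.0, n),   # training_completion
    rng.uniform(0.5, 1.0, n),   # parts_availability
])
# Synthetic target: mission capable jets out of a 10-aircraft squadron.
y = np.clip(10 * (0.4 * X[:, 0] + 0.02 * X[:, 1] + 0.2 * X[:, 2] + 0.3 * X[:, 3])
            + rng.normal(0, 0.5, n), 0, 10)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print("held-out R^2:", round(model.score(X_test, y_test), 3))
```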
The next step is using the data to become even more predictive, to not just reveal hidden readiness deficiencies but to actually avoid them in the first place.
“We’re just tackling the easy stuff. Now it’s, how do we do more, how do we start to get predictive with certain things, and how do we tackle some of the gaps in the data and augment that? There’s a lot of work, there are a lot of people trying to do stuff in this realm. We were told to run with scissors and be first to the whiteboard; people are now starting to come online and ask, how do we do what you guys are doing?”
“The whole goal is to help others join the practice so they can use their data in the same way.”
“That’s where we’re really finding that, at least through that leadership and that alignment of the data, we can get in front of a lot of the issues we’re facing.”
“That is a big piece of our business now, and that’s not going away any time soon; it’s just going to continue evolving.”
Depot job site performance consists of multiple dimensions. Often, performance is measured in terms of ratios of output to input factors, along dimensions such as the following (a brief calculation sketch follows the list):
1. Deployed equipment orders, lines and units
2. Quality measures such as order completeness, error-free and on-time deployment
3. Flexibility to cope with changes in demand
4. Agility to meet and adapt to changing requirements
5. Innovation: use of new supply line concepts to yield components required for successful missions
6. Number of full-time work hour equivalents
7. Investment in modern information systems
8. Creation of top-notch physical work sites
9. Process organization
10. Assortment of equipment items carried
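As a trivial illustration of the output-to-input framing mentioned before the list, the sketch below divides made-up output measures by the corresponding inputs; every figure is invented.

```python
# A minimal sketch of output/input performance ratios with invented figures.
deployed_units = 1240          # output: equipment units deployed this quarter
error_free_orders = 415        # output: orders completed without error
total_orders = 430
labor_hours = 18600            # input: full-time work-hour equivalents expended

units_per_labor_hour = deployed_units / labor_hours
error_free_rate = error_free_orders / total_orders

print(f"units per labor hour: {units_per_labor_hour:.3f}")
print(f"error-free order rate: {error_free_rate:.1%}")
```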
1. Managing Equipment Structural Maintenance
2. Equipment Inventory, Status, and Utilization Reporting
3. Improving Equipment Reliability and Maintainability
4. Performance Management
5. Core Automated Maintenance System Manuals
6. Technical Order System
7. Maintenance Inspection Data Documentation
8. Comprehensive Engine Management System
9. Engine Configuration and Status
10. Deficiency Reporting and Investigating System
1. Satisfaction of Equipment Mission Agents
2. Use of Work Order Job Space
3. Condition of Technical Installations
4. State of Materiel Contact
5. Teamwork & Motivation
6. Storage & Order Picking Tech
7. Equipment Inventory Strategies
8. Supply Line Coordination
9. Level & Use of Information Systems
10. Commitment to Quality Services
Top 10 Constant Communication Problems between Sortie Generation and Plans/Scheduling Elements Vital to Inspection
1. Lead operations scheduler changes week to week.
2. Continuation requests are approved by whoever is around.
3. Late landings or early takeoffs are the norm.
4. Reconfiguring instead of servicing in the turn.
5. Ground abort and Cannibalization rates are on the rise.
6. Squadron regularly flies more hours off its fleet than the phase dock can regenerate.
7. Major changes occur to next week’s flying schedule.
8. All scheduled maintenance is saved for phase.
9. Maintenance schedule does not receive the same attention as the flying schedule.
10. Maintenance scheduling effectiveness is 100 percent, but there are many overdue items in the planning requirements report.
Top 10 Ways a Rapid Response Capability Enhances Maintenance/Operations Planning
1. Perform test and evaluation of F/A-18 A-G Automated Maintenance Environment (FAME) products (Data Courier, Tech Data, MAPA, hardware, software, etc.) using systems engineering principles.
2. Review and provide technical inputs/recommendations on the FAME Functional Requirements Documentation (FRD). Conduct laboratory testing of system hardware and software.
3. Develop and document test points and test procedures for hardware and software components. Coordinate and consolidate AME testing events among the multiple agencies supporting the F/A-18 FAME program.
4. Identify and set test schedules by leveraging the knowledge base of AME development teams, training activities, software development groups, the program office, etc.
5. Interface with team, product and program leads on a weekly basis, communicating test results, schedules, resource allocations, etc.
6. Responsible for test reporting, submission of Software Anomaly Reports (SARs), data collection, analysis, and posting of data to the common data-sharing systems.
7. Required to participate in periodic meetings supporting the FAME system, SCS, and other FAME-impacted systems.
8. Continuous interaction with FAME test activities and software developers such as SPAWAR, Tech Data, etc.
9. Serve as IPT FAME technical subject matter expert (SME). Selectee should possess experience in developmental test and FAME Software Suite systems (hardware and software).
10. Identify, document and monitor system deficiencies, including verification of corrected deficiencies. Maintain proficiency in the use of data analysis tools, systems, and software programs.