Leaders want to see readiness improvements in three areas: operational readiness, or the effectiveness of today’s fleet to fight and win; structural readiness, ensuring the Navy has the right relationships and processes in place to respond quickly and with agility as challenges arise; and industrial base readiness, or the readiness of the public and private shipyards and depots that maintain ships, submarines and aircraft.
For each of those pieces of readiness, leaders want metrics to aim for and data that could measure the success of readiness-building efforts – rather than relying on subjective assessments.
For example, that means “reviewing what we are being asked to do from the operational level down to the tactical level. So another way of looking at this is, we want metrics associated with every item on the mission essential task list to answer the question: ready for what?
And it will ensure we understand, using objective metrics, how any one unit’s readiness affects the fleet’s ability to perform its wider mission.”
The command wants to examine the readiness of all enterprises – aviation, submarine, surface, information warfare and expeditionary communities – and create an actionable feedback loop between the command, the type commanders and the systems commands so that all organisations can contribute to measuring and improving readiness.
Units hold large amounts of unused or underused data that would primarily support the industrial base readiness line of effort. The Navy needs to find a way to take advantage of the data units already have, as well as identify other data that should be collected to create a clearer picture of Navy and industrial base readiness.
“The final frontier of analytical capabilities is prescriptive analytics, where our automated analytical tools would accelerate our decision-making by suggesting decision options we might not have been aware of had we relied on human analysis alone.”
“To accomplish this, we are now establishing a Fleet Analytics Office that would develop the dashboards and reporting tools to see in real time what is going on, and develop a risk matrix that helps us assess risk against the mission and drive accountability. We also need to assess whether we have a modern information technology structure in place to support these efforts across the Navy’s readiness enterprise.”
“The Revolutionize Readiness Campaign Plan is perhaps the most important line of effort we have as Fleet Forces” and was developed after working with leaders across industry.
Leaders see a great opportunity for the services to leverage data to improve readiness and performance by collecting data for the surface navy and understanding how to use past data for predictive purposes.
“There is a tremendous opportunity as we get performance data to get it into a database that’s accessible and authoritative and then map out against a sort of a model of each of our ships and assess how combat-ready that ship is.
And then you can also use it with some artificial intelligence types of agents or capabilities to do even some predictive types of analysis – so as we get prepared to deploy, what are those things we need to be paying attention to, where perhaps are we most vulnerable based on historical patterns.
The Navy won’t have to make much of an investment in hardware to do this, but rather embrace new processes and “just doing business a little bit differently. So there’s a tremendous use for data-driven decisions improving our readiness.”
“We’re getting after this with this ‘Perform to Plan’ approach, and we’re seeing some improvements already in aircraft maintenance. We’re turning that same approach to surface ships and submarines, and so there is great potential.”
Our conclusions call for the creation of an industrial policy to “inform current and future acquisition practices,” including “reengineering and qualification of new suppliers,” and to “modernise” the industrial base to ensure readiness.
DoD policy calls for the expansion of “direct investment in the lower tier of the industrial base” to address “critical bottlenecks, support fragile suppliers, and mitigate single points-of-failure.”
Recent efforts by Congress and the administration have been encouraging, but more must be done. The Pentagon wants companies to identify and volunteer information about weak spots in their industrial supply chains, as part of a broader review and war-gaming effort to discover potential failure points for the defense industrial base.
DoD requirements and contracting practices play a critical role in shaping the manufacturing and defense industrial base. The current organisational structure of DoD favors a top-down, requirements-driven process, often to the detriment of innovation.
While it is possible to achieve technological breakthroughs or innovative capabilities through such a process, requirements-driven acquisition solicits solutions for specific capabilities rather than for outcomes, potentially imposing an opportunity cost on innovation.
There appear to be few opportunities for companies to offer services or capabilities that do not already fit within DoD’s stated requirements and scope. The tendency to focus on requirements rather than solutions, compounded by various barriers to entry, the cost of doing business, and skewed market incentives, can inhibit competition and new entrants.
Companies with established success in the contracting space are often necessarily structured to comport with guidelines, rules, and regulations, and are typically unable, or not incentivised, to challenge the requirements-driven process.
Many of the current policies and practices of DoD strain the industrial base and reduce incentives to supply to DoD, resulting in an inability to meet national security demands, and a DoD challenged to meet its goals in an era of expanding strategic competition.
The Pentagon is looking for industrial base risks including sole-source suppliers, single-source suppliers, fragile suppliers, and suppliers that may not be looking to stay in the market. There are specific concerns about some companies that DoD fails to “move the needle on” but which are vital to maintaining a technological edge.
The Pentagon must examine how the industrial base would be stressed under specific operations, an area where DoD currently has few metrics, by distributing a questionnaire to the defense industry to gather information about potential weak spots. It is a targeted request that gives industry an opportunity to provide information to support these assessments.
The threat to the industrial base has been years in the making as supply chains deteriorate, so the Pentagon must focus on expanding programs like the manufacturing institutes to renew and broaden the production capacity of the defense industrial base. The ability to procure goods critical to the common defense is threatened by an inability to obtain commercial sub-components that are sometimes only indirectly related to DoD weapons systems.
The survey seeks to flag imbalances in key sectors of the defense industrial base and establish policy for prioritising industrial base resilience. But the measure isn’t perfect: it needs to be calibrated to ensure national security leaders have complete visibility into a diverse industrial base with secure supply chains.
A useful exercise when working with groups of supply professionals is to ask whether their organisations measure supplier performance. Usually, even those who say yes are not satisfied with their supplier measurement systems.
An unmistakable conclusion after reviewing the state of affairs associated with supplier performance measurement is that not much is written on this topic. What is written about it reveals overwhelming agreement about its importance. Yet the development of effective measurement systems is still on the “to do” list for many organisations, particularly smaller ones.
Even though some companies believe they have attained a “leading practice” level of supplier risk management, there is no shortage of examples where companies sustained painful hits as a result, such as delays in the launch of major new products and services, or disruption of supply.
Why is effective supplier risk management so elusive? For starters, companies often lack a comprehensive view of the impact that suppliers can have on the organisation. Companies also struggle to obtain the data needed to assess supplier risk, even though some of that data is readily available.
Companies rarely establish programs that go beyond monitoring. Structured risk responses are rarely planned proactively, leaving different functions to scramble to address risk reactively when it is identified.
Even organisations that boast mature systems of supplier metrics recognise that continuous improvement is an ongoing challenge—and that many such metrics systems have shortcomings. We have identified several shortcomings, and provide guidance about how to create an ideal supplier performance measurement and scorecard system.
Supplier performance measurement is the business process that includes the methods and systems used to collect and provide information in order to measure, rate, or rank suppliers on a continuous basis. Many companies use the term “scorecard” to describe the report that conveys performance information to suppliers.
The types of scorecards in use typically fall into one of three categories: categorical, weighted point, or cost-based. A categorical system uses simple ratings of a supplier’s performance across different categories. For some items, this may be an effective way to evaluate supplier performance.
As it relates to supplier scorecards, most supply chain organisations use a weighted point system that includes a variety of performance categories, provides weights for each category, and defines the scales used for scoring within each category.
Cost-based systems are used least; they attempt to quantify the total cost of doing business with a supplier over time. Some companies use a hybrid system comprising several of these approaches.
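To make the weighted point approach concrete, here is a minimal sketch in Python. The category names, weights, and scores are illustrative assumptions, not a standard; a real system would draw its scores from quality, delivery, and cost data.

```python
# Minimal weighted-point supplier scorecard sketch.
# Categories, weights, and scores are hypothetical examples.

CATEGORIES = {            # weight per performance category (sums to 1.0)
    "quality": 0.35,
    "delivery": 0.30,
    "cost": 0.20,
    "responsiveness": 0.15,
}

def weighted_score(scores: dict) -> float:
    """Combine 0-100 category scores into a single weighted supplier score."""
    return sum(CATEGORIES[c] * scores[c] for c in CATEGORIES)

supplier = {"quality": 92, "delivery": 85, "cost": 70, "responsiveness": 88}
print(round(weighted_score(supplier), 1))  # 84.9
```

The same structure supports the scale definitions the text mentions: each category's 0-100 score would come from a defined rubric, and the weights express the relative importance of each category.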
No standard measurement approach exists across industries, although supply chain organisations should strive internally for some consistency, particularly with respect to the technical aspects of their systems.
Some organisations have joined consortiums that share best measurement practices or attempt to follow standards in supply chain operations metrics models. It does not make sense for every business unit or internal location to re-invent how they measure performance. The challenge today is to develop a measurement process and scorecard system that offers some flexibility to a company's internal operations while maintaining company-wide consistency.
Most supplier assessment capabilities and results are unsatisfactory. Where supplier scorecards do exist, most are not designed well, so at times it might be better if they were not used at all. Far too often, measurement is an activity that fails to lead to improved results.
When a consumer products company developed a scorecard to evaluate its suppliers, many of them substantially larger than the company itself, the effort went poorly: the scorecard was not pilot-tested and was less than professional in appearance.
The system failed when many larger suppliers challenged the accuracy of the company’s scores, particularly when the scores were lower than those the suppliers received from other customers with more resources to put toward doing well in the evaluation. Suffice it to say, this experience deterred the company from moving forward with its measurement objectives.
Procurement teams must take a hard look at their measurement processes long before suppliers can challenge the legitimacy of the metrics. The processes must not turn into the kind of exercise that one supplier's executive described as “they present and we reject.”
Another example highlights a variety of shortfalls that confront too many supplier measurement systems. Almost every supply chain organisation has at least considered developing a supplier scorecard system. Those that are serious about the process have most likely committed serious time, budget, and resources toward development and maintenance of systems of measurement.
One such company's system appears ideal. Do senior managers need a ranking of supplier performance sorted by product type group? Do they want a listing of the company's best or worst performing suppliers? This, and much more, is available at the push of a button.
But during a training session at this company, an instructor asked a buyer to name one of his best performing suppliers—what the company called an elite supplier. The intent was to use examples of real suppliers to demonstrate the data features of the system.
Without hesitation, the buyer provided a supplier's name. But from across the room, another participant responded by saying that the supplier just named was one of the worst suppliers that his operations group worked with every day.
How can one person cite a supplier as being worthy of preferred status while another, in the same supply chain organisation, indicates they would rather discontinue the relationship with that supplier? And what are the dangers of a system that awards high scores to poorly performing suppliers?
These differences of opinion led to some conclusions that almost everyone in attendance could agree upon. The consensus was that although the scorecard system was supported by an extensive database that allowed all kinds of rigorous analyses, the data to support the system was still collected and keyed in manually.
Furthermore, many scorecard items required subjective judgments. On top of this, most buyers had responsibility for inputting data periodically for dozens of suppliers, a heavy burden on top of their “normal” workload.
Many in attendance also agreed that the data for the scorecards was keyed in just before, and sometimes after, the cutoff period, meaning that the emphasis was hardly on the quality of the data. Attendees also acknowledged that supplier scores were used as an indicator of a buyer’s job performance.
The group also agreed their suppliers were held to the same criteria and weights, even though not all suppliers were equally important to the company's success. Participants further agreed that internal customers or stakeholders had no way to be part of the measurement process.
There was also some confusion about what kind of organisation qualified as a supplier since some suppliers provided material from multiple locations. Finally, no clear agreement emerged that the measurement process was contributing to higher performance.
What are some lessons here? Clearly, an effective scorecard system requires much more than a fancy database that can present data in many ways. While that capability is important, technical capabilities do not guarantee system success. And scorecards should not ignore the voices of internal customers. Managers at manufacturing plants, warehouses, distribution centers, and logistics hubs are often perfectly positioned to evaluate suppliers' day-to-day performance.
Another lesson is scorecards often place a serious work burden on the individuals responsible for maintaining them, which often results in scorecards that are late or completed at the last minute—which raises concerns about data integrity. Is a reliance on subjective and last-minute evaluations affecting the integrity of the scores?
A final lesson is that scorecard systems can drive the wrong behaviour. The results will be skewed, and not fit for their intended purpose, if a buyer’s performance evaluation is based even partly on the performance of their suppliers.
It’s even worse if their performance is determined by scorecards the buyers themselves are responsible for completing. The conflict of interest is obvious. While almost everyone at the company may agree that supplier measurement can be a good thing, it is also evident that the system in place is far from ideal.
Implementing an effective scorecard model requires the right foundation: an operating model for supplier performance scorecards. Processes and policies, performance management, data, architecture and tools, and service delivery models must all be aligned in support of risk management capabilities.
No doubt, effective supplier relationship management takes work, but with all that is at stake, companies should take stock of their current capabilities and approach for scorecard risk management and aim for an integrated model that includes the following characteristics:
1. Build an Integrated, Enterprise-wide View of the Supply Base
For many companies, an integrated, up-to-date view of enterprise relationships with suppliers and other third parties is not readily available. Organisations need to analyze their spend on an ongoing basis, identify active suppliers, and clean up and normalise the data to arrive at a complete view of the supply base.
Companies should identify suppliers that drive the greatest level of risk. Which suppliers pose the greatest risk to the organisation’s major product launches, growth plans or projects? Which suppliers can disrupt ongoing operations if their deliveries are interrupted? Which suppliers pose the greatest risk to the organisation’s bottom line?
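As a sketch of the first step, cleaning and normalising spend data so the supply base can be seen as one list, the snippet below merges differently keyed records for the same supplier and ranks suppliers by spend to surface the largest exposures first. The supplier names, amounts, and the crude normalisation rule are all hypothetical.

```python
# Sketch: normalise raw spend records and rank the supply base by spend.
# Names, amounts, and the normalisation rule are illustrative assumptions.
from collections import defaultdict

raw_spend = [  # (supplier name as keyed in, annual spend in dollars)
    ("Acme Corp", 120_000), ("ACME CORP.", 80_000),
    ("Beta Industries", 150_000), ("Gamma LLC", 30_000),
]

def normalise(name: str) -> str:
    """Crude normalisation: case-fold, strip trailing punctuation and 'Corp'."""
    return name.lower().rstrip(".").replace(" corp", "").strip()

spend = defaultdict(int)
for name, amount in raw_spend:
    spend[normalise(name)] += amount  # merge duplicate supplier records

# Rank descending so the highest-spend suppliers surface first for risk review.
ranked = sorted(spend.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)
```

A production version would use a richer matching approach (DUNS numbers, fuzzy matching) and would weight the ranking by criticality and disruption impact, not spend alone.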
2. Don’t Fall into the Trap of a “One Size Fits All” Approach
Smart supplier risk systems will allow adjustments to the performance categories and their weights to reflect the realities of different supply requirements. The best scorecards align directly with the outcomes sought from doing business with a particular supplier. One approach is to assign specialists in cost, technology, quality, and logistics who are responsible for posting supplier data periodically on a global supplier portal.
If a supply chain organisation is serious about measuring most of its suppliers, then the less critical suppliers should receive a basic scorecard—even one that is categorical. At some point, depending on the level of effort required to obtain scorecard data, the cost to measure a supplier could outweigh the value of measuring that supplier. When this is the case, a logical response is to not measure, measure less frequently, or simplify the type of scorecard used.
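The tiering logic above can be sketched as a simple decision function: match the measurement effort to the supplier's criticality and spend. The spend threshold, tier names, and review cadences are assumptions for illustration; each organisation would set its own.

```python
# Sketch of tiered measurement: scorecard type and cadence depend on
# supplier criticality and spend. Thresholds and labels are hypothetical.

def scorecard_tier(annual_spend: float, is_critical: bool) -> str:
    """Decide how (and whether) to measure a supplier."""
    if is_critical:
        return "full weighted-point scorecard, monthly"
    if annual_spend >= 250_000:
        return "basic categorical scorecard, quarterly"
    # Below this point the cost of measuring may outweigh its value.
    return "no formal scorecard"

print(scorecard_tier(1_000_000, is_critical=True))
print(scorecard_tier(400_000, is_critical=False))
print(scorecard_tier(50_000, is_critical=False))
```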
3. Suppliers with More than One Location Receive Multiple Scorecards
Some systems are geared toward counting suppliers as a single entity, yet many suppliers provide material from multiple locations. To aggregate different locations into a single scorecard can be misleading. It also makes it harder to assign scores to specific locations.
A possible solution is to evaluate each supplier's shipping locations across a basic set of operational metrics such as cost, quality, and delivery while the supplier as a corporate entity is evaluated by a set of higher-level metrics to include assessments of supplier innovation, responsiveness, and willingness to invest in the buyer-seller relationship.
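That two-level structure can be sketched in Python: each shipping location carries operational metrics (cost, quality, delivery), while the corporate entity carries the higher-level ones. The names and scores are invented for illustration.

```python
# Sketch: per-location operational scorecards under one corporate-level
# scorecard. All names and numbers are illustrative.
from dataclasses import dataclass, field

@dataclass
class LocationScorecard:
    site: str
    cost: float       # 0-100 operational scores
    quality: float
    delivery: float

@dataclass
class SupplierScorecard:
    name: str
    innovation: float        # corporate-level, relationship metrics
    responsiveness: float
    locations: list = field(default_factory=list)

    def worst_location(self) -> str:
        """Flag the site dragging down operational performance."""
        return min(self.locations,
                   key=lambda l: l.cost + l.quality + l.delivery).site

s = SupplierScorecard("Acme", innovation=80, responsiveness=75, locations=[
    LocationScorecard("Plant A", 90, 88, 92),
    LocationScorecard("Plant B", 70, 65, 60),
])
print(s.worst_location())  # Plant B
```

Keeping the two levels separate means a weak plant cannot hide behind a strong corporate average, which is exactly the distortion the text warns about.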
4. Internal Customers Evaluate Supplier Performance
In the information age, internal customers should be able to submit comments and ratings about a supplier's performance directly into a scorecard system. These individuals are usually in the best position to evaluate a supplier's operational performance.
Procurement teams should consider allowing suppliers to enter a Web-based portal to view any free-form comments or scores submitted by internal customers. This supports the efficient and open exchange of information, something that is widely practiced with other supply chain applications such as sharing demand forecasts. Most supply chain experts would agree that information-sharing across the supply chain is a good thing, so why should sharing of supplier performance data be any different?
5. Scorecards are Reviewed and Acknowledged by Suppliers' Top Managers
Key executives at each supplier should receive electronic copies of the scorecards. Perhaps most importantly, the party sending the scorecard should track acknowledgements that the scorecards were received and reviewed, along with any responses to specific queries.
Forwarding scorecards directly to executive managers serves at least two purposes. First, these executives will have access to information that their own personnel may not willingly share; more than one executive has been caught off guard because they were unaware of issues that affected customers. Second, information will likely reach the individuals who can effect meaningful change when it is required.
6. Scorecards are Updated in Real Time
Too many scorecards still resemble a batch updating system, featuring periodic, manual input of data each period. In a perfect world, anyone granted access to a scorecard system should be able to view supplier performance levels in real time. Whenever an event occurs, such as a quality evaluation at a receiving dock, data records should flow seamlessly into the scorecard database with real-time updating of supplier performance. Of all the attributes of an ideal measurement system described in this article, this is the one that is most rarely implemented.
For real-time updating to work, the scorecard system must be linked to other supply chain constituencies. Any system that stresses objective rather than subjective assessment, particularly in a real-time environment, should receive serious consideration.
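A minimal sketch of event-driven updating: each receiving-dock inspection immediately refreshes the supplier's running acceptance rate, rather than waiting for a periodic batch load. The event shape, supplier name, and counts are illustrative assumptions.

```python
# Sketch: each inspection event at the receiving dock updates the
# supplier's running quality score in real time. Data is illustrative.

scoreboard = {}  # supplier -> {"accepted": int, "rejected": int}

def on_receiving_inspection(supplier: str, accepted: int, rejected: int) -> float:
    """Handle one dock event and return the updated acceptance rate."""
    record = scoreboard.setdefault(supplier, {"accepted": 0, "rejected": 0})
    record["accepted"] += accepted
    record["rejected"] += rejected
    total = record["accepted"] + record["rejected"]
    return record["accepted"] / total  # real-time quality score

print(on_receiving_inspection("Acme", accepted=95, rejected=5))   # 0.95
print(on_receiving_inspection("Acme", accepted=90, rejected=10))  # 0.925
```

In practice the event would arrive from a quality or warehouse management system over a message bus, and the updated score would flow into the scorecard database the moment the inspection is recorded.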
It's safe to conclude most supply chain systems are moving toward real-time data visibility. Some purchasing organisations are beginning to rely on suppliers to self-report and submit their performance to the scorecard system on a frequent basis. A few leading companies are even beginning to solicit performance data from or about second-tier suppliers.
7. The Metrics Database Allows User Flexibility in Retrieving and Displaying Data
An effective system will not only generate the scorecard itself; it will enable data to be presented in a variety of reporting formats, along with easy generation of useful reports. Various on-demand reports can show side-by-side supplier rankings, demonstrate performance changes by category, and highlight the suppliers that improved or deteriorated in performance over a certain period. A database that allows the slicing and dicing of raw data is an essential element of an ideal scorecard system.
Companies should consider many different types of data from a variety of sources in order to understand risk, such as vendor management plans for suppliers that rely on subcontractors to support business requirements. Some data, such as capacity, is not routinely captured, nor is it readily available from a single reliable source.
8. The Measurement System Provides Early-warning Performance Alerts
Most measurement systems are reactive in that they report what has happened, not what is likely to happen. As with a statistical process control system, an ideal measurement system would be able to “look ahead” to spot troublesome trends and non-random changes in a supplier's performance before it becomes out of control. An ideal system would notify supply chain managers of potential problems before the impacts of those problems are even realised.
The system must have predictive capabilities. Consider the possibility of generating early warnings when using advance shipping notices. Any time data reveals a possible late delivery after comparing expected transit times against a due date, a material planner would receive a warning of the potential delay. Or consider real-time GPS tracking systems that could reveal that supply chain delays are occurring, with a notification sent to the appropriate personnel. It is almost always better to be proactive.
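The advance-shipping-notice check described above can be sketched in a few lines: add the expected transit time to the ship date, and alert the planner if the projected arrival slips past the due date. The dates and the function name are illustrative.

```python
# Sketch of the ASN early-warning check: project arrival from ship date
# plus expected transit time and flag a potential late delivery.
from datetime import date, timedelta
from typing import Optional

def delivery_alert(ship_date: date, transit_days: int,
                   due_date: date) -> Optional[str]:
    """Return a warning when the projected arrival slips past the due date."""
    eta = ship_date + timedelta(days=transit_days)
    if eta > due_date:
        return f"ALERT: projected arrival {eta} is after due date {due_date}"
    return None  # shipment is projected to arrive on time

print(delivery_alert(date(2024, 3, 1), 7, date(2024, 3, 5)))  # late: alert
print(delivery_alert(date(2024, 3, 1), 3, date(2024, 3, 5)))  # on time: None
```

The same pattern extends to GPS tracking: replace the static transit estimate with a live position-derived ETA and route the alert to the appropriate material planner.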
9. Suppliers Can View and Compare their Performance Online
For many years, almost every supply chain organisation refused to identify the scores and names of competing suppliers within a category or product type group. Later, most organisations became more willing to show relative comparisons against competing suppliers. The time has come to accept that scorecards present a good way to create competition among suppliers. That means permitting and enabling them to access their scores online, complete with comparisons to other suppliers in the same or similar product type groups.
Performance benchmarking involves comparing products, practices, processes, or strategies against key competitors or companies that are considered best-in-class. Benchmarking methodologies can involve working directly with other companies to compare scorecard practices, searching databases and the Internet to find information on performance measurement and working with professional contacts to obtain scorecard information.
It's a challenge for supply chain organisations today to step back and take an unbiased view of their supplier performance measurement systems. The objective should be to take a poor measurement system and make it better—or to transform a good system into an excellent one. Any gaps that exist between the current and future states—and there could be many—will require a clear plan to bring an existing system closer to a preferred system.
10. Develop Your Organisation’s Playbook and Tools for Addressing Supplier Risk
The organisation’s key actions for risk mitigation must be identified and formalised as “playbooks” to enable structured, repeatable and coordinated execution across the enterprise. Playbooks should include specific action plans with clearly defined triggers, cross-functional execution steps and workflows, roles and responsibilities, and metrics. When executing to a playbook, an organisation can address risk more quickly, efficiently and effectively.
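One way to make a playbook structured and repeatable is to capture it as data rather than prose, so that a detected trigger maps deterministically to an owner and a scripted set of steps. The trigger names, steps, and roles below are hypothetical examples.

```python
# Sketch: a risk playbook as data, mapping each trigger to an owner and
# predefined response steps. Triggers, steps, and roles are hypothetical.

PLAYBOOK = {
    "sole_source_disruption": {
        "owner": "supply chain risk lead",
        "steps": ["activate buffer stock",
                  "engage qualified alternate supplier",
                  "notify affected product teams"],
    },
    "supplier_financial_distress": {
        "owner": "procurement finance",
        "steps": ["increase monitoring cadence",
                  "review open purchase orders",
                  "begin second-source qualification"],
    },
}

def execute(trigger: str) -> list:
    """Look up the scripted, owner-tagged response for a detected trigger."""
    play = PLAYBOOK[trigger]
    return [f"{play['owner']}: {step}" for step in play["steps"]]

print(execute("sole_source_disruption")[0])
```

Because the response is looked up rather than improvised, different functions execute the same coordinated steps each time the trigger fires, which is the repeatability the playbook concept is after.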
Supplier risk management works best when risks are predicted in advance and when they are related to the type and magnitude of business impact that they can drive. Understanding risk in the context of the business activities at risk is vital for prioritisation and for mobilising an appropriate response. This step requires a well-conceived framework and an assessment model for supplier risk analysis. It also requires linking supplier risks to the products and services they may affect.
New technology for supplier risk scorecards is a vital enabler. In an era when almost too much information is available, there is no excuse for not staying current on the trends and technologies related to supplier performance scorecards.