The missions, threats, environments, constraints, and measures of effectiveness are continually reviewed. Constraints considered include technological availability, physical and staff resources, cost/budgetary impacts, and the risk of not meeting the stated schedule. The output of this process answers the question “what?” for the functionality and “how well?” for the performance requirements.
Congress’ recent interest in reforming the Pentagon’s acquisition system is all well and good, but its efforts will be wasted unless it addresses the root cause of the system’s dysfunction: the requirements definition process. Requirements definition is the phase of acquisition in which the technological capabilities of a new weapon are determined. Problems in the requirements phase of the acquisition process are therefore carried forward and contribute to a host of far-ranging issues down the line.
If you find yourself in a requirements role on a project, it’s likely that at some point you’ll need to create a functional specification. But what is a functional specification? Why do you create one? And, perhaps more importantly, what goes into a document like this?
The purpose of a functional specification is to define the requirements to be implemented by the engineering solution. Now, as business analysts, not all aspects of our solutions are software-based. A perfectly legitimate solution to an engineering problem could involve a process change, organisational change, or even a configuration adjustment.
But since so much of business today is supported directly by engineering systems, many times solving a problem means upgrading or building new products and that means specifying functional requirements. Depending on your methodology and analysis practices, a functional specification can come in a variety of different formats. Let’s look at a few of the most common formats.
Whatever template is in place at your organisation, the purpose of the functional specification is to capture what the product needs to do to support a user. Often it is reviewed and approved by both business and technical stakeholders. The business users confirm that yes, this is what they really want the system to do. The technical users confirm that, yes, these requirements are feasible, implementable, and testable.
It’s the functional spec that sits in the middle and holds everything together.
Teams typically create long requirements specifications that include information about the project, the project team, open issues, environment, assumptions, dependencies, constraints, key dates, the business model, data requirements, and, finally, the functional requirements. The functional requirements typically take up most of the document. These documents are thorough, but they lack spunk.
Teams should move towards a shorter scope document that consolidates many of those overview sections, paired with a set of use cases to drill into the functional details. On agile projects, user stories are often the preferred format.
Whatever the format, the focus should be on creating alignment between what the users want and need the system to do and what the product team is prepared to build for them. And that’s really the essence of the functional spec.
So you need to know what you actually need to put into your functional specification. In addition to contextual information about the project, a functional specification includes a list of features and functions to be supported by the software. How these features and functions are represented depends largely on the template in use.
Functional requirements are often represented as “system shall” statements. You’ll typically have a list of shall statements, often organised into sections by feature, each with a priority identified. For example, “The system shall enable course participants to submit a question.” or “The system shall enable the course instructor to view all course participant questions.”
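As a concrete (and purely hypothetical) sketch, statements like these stay trackable when each one is stored as structured data with an identifier, a feature grouping, and a priority. The field names and ID scheme below are illustrative, not a standard:

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    req_id: str    # hypothetical identifier scheme, e.g. "FR-1"
    feature: str   # feature/section grouping used to organise the spec
    priority: str  # e.g. "must", "should", "could"
    text: str      # the requirement statement itself

requirements = [
    Requirement("FR-1", "Q&A", "must",
                "The system shall enable course participants to submit a question."),
    Requirement("FR-2", "Q&A", "must",
                "The system shall enable the course instructor to view all "
                "course participant questions."),
]

# Group requirement IDs by feature, mirroring the sectioned layout of a spec
by_feature: dict[str, list[str]] = {}
for r in requirements:
    by_feature.setdefault(r.feature, []).append(r.req_id)
```

Keeping the statements in a structure like this is what makes them easy to sort, filter, and trace in a requirements management tool.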
In a use case, functional requirements are typically represented as a series of steps. The use case puts a collection of functional requirements into the context of a user action, which eliminates much of the ambiguity that makes its way into an out-of-context list of “system shall” statements.
For example, “Course participant selects to submit a question. Course participant provides their name, selects a question category, and provides a textual question. System sends an email to the course instructor containing the information provided by the course participant.”
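The flow of steps above can be sketched as a thin workflow function. This is a hypothetical illustration (the function and parameter names are invented, and the email sender is injected so the sketch stays self-contained):

```python
def submit_question_use_case(name: str, category: str, question: str, send_email) -> None:
    """Use case sketch: a course participant submits a question.

    send_email is a callable taking (to, body), supplied by the caller.
    """
    # Steps 1-2: participant provides their name, a category, and a question.
    body = f"New question from {name} [{category}]: {question}"
    # Step 3: the system emails the course instructor with what was provided.
    send_email(to="course-instructor", body=body)
```

Writing the use case out this way makes it obvious that each step is a testable functional requirement in context.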
In a user story, functional requirements are typically captured in the following syntax: “As a [user], I can [do something] so that [some benefit is received].” When used appropriately, the user story syntax is brilliant at capturing user goals, functional requirements, and business benefits altogether in one concise statement. For example, “As a course participant, I can submit a question so that I get my concerns about the course materials addressed” and “As a course instructor, I can view all course participant questions so I can respond in a timely manner.”
Each of these ways of capturing functional requirements has its pros and cons. “System shall” statements are easy to track in requirements management systems but difficult to implement and test, as they are often presented without context.
Use cases provide great context, which helps get the right functional requirements approved and implemented, but it’s also easy for the scope inside a use case to expand while meeting user goals, or for individual requirements to get lost in larger use case documents.
User stories link together benefits, functionality, and user goals, and are often at the right level of detail to facilitate easy planning, but they frequently lack the detail needed for implementation. It is also easy to lose the big picture in the midst of working through individual user stories.
The approach you choose will often be dictated by organisational standards. In the absence of standards, you get to define your own. It’s a good idea to start by asking the user and technical stakeholders what they’d like to see in a spec, as this can help you avoid a lot of issues down the line.
Our favourite approach to specifying functional requirements is use cases with accompanying user interface models, and a short scope statement to get aligned around the big picture. There are many templates and work samples covering this approach.
“Aircraft Design Breaks the System into Self-Contained Modular Components That Connect with a Standardised Set of Interfaces”
“The Kiowa Warrior was a great aircraft at the time, but there was no open architecture…. If you wanted to change the code for the multi-function display, for example, you’d have to rewrite the entire code for the whole aircraft and flight test it, because… you might impact the flight controls.”
“It was a big deal just to change a screen, let alone make a major upgrade like a new weapon, sensor, or defensive jammer. That was tolerable when upgrades were infrequent and each subsystem on an aircraft was effectively stand-alone.
It’s unacceptable in the modern world, where technology advances at such a quick pace and each aircraft has multiple interdependent systems all networked together. What you want is something like the iPhone, where any vendor can offer a new app as long as it meets the manufacturer’s technical standards, and the manufacturer itself updates the underlying operating system every few weeks.
So why does it take multiple years just to upgrade an existing aircraft? “We’ve got to build our equipment faster. We’ve got to deliver improvements to the warfighter more quickly. We need to look … introducing upgrades over the network, much like you get upgrades to your iPhone.”
Software upgrades in particular should be swift because they don’t require any physical changes to install. But in a weapons system like the Kiowa Warrior, a change in one section of code, like the displays, could inadvertently affect another, like the flight controls. So any upgrade requires not only meticulous debugging of the software, but time-consuming flight tests to prove and certify that the aircraft is still safe.
What’s more, because software and hardware were not standardised, it was often impossible to change them without knowing the original manufacturer’s trade secrets — their proprietary intellectual property. That often led to so-called “vendor lock,” when only that original manufacturer could do upgrades, with no competition on either quality or price.
The solution, much easier said than done, is to break the system into self-contained components — modules — that connect to each other like Legos using a standardised, well-defined set of interfaces — the architecture — and allow all potential competitors to know how those interfaces work.
The “open” part allows you to take a single module of software and/or hardware made by one vendor, e.g. the cockpit displays, and replace it with a new and improved module from another vendor. Because each module is basically a black box that plugs-and-plays into the wider architecture, interacting with other modules only in strictly limited ways that everybody knows about, replacing it shouldn’t have unexpected impacts on the rest of the system and shouldn’t require learning anyone’s trade secrets to make things work together.
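In software terms, this amounts to coding against an interface rather than against any one vendor’s implementation. Here is a minimal, hypothetical sketch (the module and class names are invented for illustration, not drawn from any real avionics standard):

```python
from abc import ABC, abstractmethod

class DisplayModule(ABC):
    """The standardised interface: the only thing every vendor must honour."""
    @abstractmethod
    def render(self, flight_data: dict) -> str: ...

class VendorADisplay(DisplayModule):
    """One vendor's black-box implementation."""
    def render(self, flight_data: dict) -> str:
        return f"ALT {flight_data['altitude']} ft"

class VendorBDisplay(DisplayModule):
    """A competitor's drop-in replacement; internals are its own trade secret."""
    def render(self, flight_data: dict) -> str:
        return f"Altitude: {flight_data['altitude']} ft | HDG {flight_data['heading']}"

class FlightControls:
    """A separate module: never touched when the display is swapped."""
    def current_state(self) -> dict:
        return {"altitude": 1200, "heading": 270}

def cockpit(display: DisplayModule, controls: FlightControls) -> str:
    # The cockpit knows only the interface, not any vendor's internals,
    # so swapping VendorADisplay for VendorBDisplay changes nothing here.
    return display.render(controls.current_state())
```

Replacing `VendorADisplay` with `VendorBDisplay` requires no change to `FlightControls` or to the `cockpit` integration code, which is exactly the property that lets a new module be tested and certified in isolation.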
“We probably don’t need to understand how that box works; we just need to understand what the input is and what the output is. No longer is it appropriate to go out and simply ask for the patent for the entire piece of equipment” — a chronic sticking point with contractors, especially small businesses whose intellectual property is often their main asset.
The modular approach applies most obviously to software, but has implications for hardware as well. “We don’t need a whole helicopter in order to prove out a rotor head, a landing gear assembly, or a control feature. Instead, you can just put the new component, whatever it is, on a current aircraft and see how well it works: “We can fly it, we can test it, and we can get feedback.”
Now, actually developing interface standards to allow easy plug-and-play of new components is not easy. There’s at least a foundation out there, the Future Airborne Capability Environment, which is developed and updated by a DoD-industry coalition, but FACE may need more refinement and detail.
“Our first open systems architecture cockpit is in the UH-60 Victor,” which will soon go into operational testing. “That’s really the forerunner to open systems architecture. It’s informing us as we speak.”
“We turn software changes in that cockpit sometimes overnight.” Instead of having to test everything every time you change anything, the modules are “fully partitioned, such that this part of the aircraft doesn’t have to touch that part of the aircraft.”
You may think that’s relatively simple. “That’s not simple.”
“So the question is: How will DoD adapt requirements to maintain the resiliency of our warfighters? What steps must we take to protect our installations? How can installations sustain warfighters when networks are damaged or degraded? ‘Business as usual’ is not an option in today’s budget environment.”
Acquisition programs often see major increases in development costs during the production phase of the process. Since this phase occurs long after the capability requirements of a program have been set, development costs should stay relatively flat. Usually, cost increases stem from the addition of unplanned capabilities, but some are due to deficiencies discovered in testing.
Upfront cost increases for capability upgrades are significant, but the new requirements also add complexity to the production process that balloons costs later. There are three possible explanations for these capability revisions. First, changing operational requirements could mean that the originally planned capabilities no longer meet current operational needs. Second, new or emerging technologies could have become available following the initial requirements definition phase. Third, administrative processes in the military services and their subcomponents allowed unrealistic requirements at the outset of the process that had to be revised later.
While the first two explanations do not stem entirely from the requirements definition process, they are affected by it. The fact that capabilities determined during the initial planning process might need to be changed to account for changing operational needs suggests that both the requirements and production processes are moving too slowly. The same goes for the introduction of new technology late in the process.
The third explanation ties into the first two. Process roadblocks allow for overstated capability requirements that are really technological “wish lists,” because no service has an incentive to veto another service’s plans, no matter how costly or unfeasible they may be. The military services want weapons that push the technological envelope, and attempt to predict what will eventually become state-of-the-art technology five to ten years out with little actual knowledge of whether their dream can become reality.
Those interested in fixing the Pentagon’s acquisition system need to actually address it. Requirements reform serves the goals of the current acquisition reform effort, and ignoring it will inhibit the effectiveness of current proposals. It will not be easy, but it should take greater priority than addressing the other phases of the acquisition process. The requirements process feeds into and ultimately drives the other phases of acquisition. Addressing it will target a root cause of acquisition dysfunction rather than just treating a symptom.
“Processes such as developing requirements, contracting, making investments, or obligating money are often driven not by a sound business case, but by arbitrary deadlines and outside pressures.”
“DoD could get more from its funds if it functioned in a flexible system that allowed more effective resource allocation. An opportunity cost arises each time DoD makes a spending choice that could have been invested in developing other capabilities, delivering more units, or funding other critical requirements.”
“Many regulations can remove or dilute authority and accountability. Regulations that dictate contract type can deprive acquisition personnel of the discretion needed to get the best deal for DoD. Additionally, the management structure and decision-making process within DoD are too bureaucratic and encumbered by numerous layers of review.
Successive reviews do not necessarily add substantive value, but they do add time to the process and add to the number of people who can say no or influence a program, including people who do not have a stake in the outcome of the acquisition. Because nobody holds actual authority to manage a program, there is no one to hold accountable. The fundamental reason for the continued underperformance in acquisition activities is fragmentation of authority and accountability for performance.”
“Defense acquisition is a human activity dependent on the judgments, considerations, interests, and decisions of people operating in the real world regardless of how impressive policy initiatives look on paper, or how effective the acquisition system is.
The ultimate effectiveness and efficiency of defense acquisition depends on and is determined by the people who are responsible for all phases of acquisition.”
“The dynamic defense marketplace is vastly different from the defense-centric marketplace of the past in which DoD could set the rules of acquisition. To effectively benefit from and compete in the dynamic defense marketplace, DoD must understand where it fits into the current business environment and adapt to this new reality.
DoD needs to be a better informed buyer, one that understands market dynamics and the interests of companies — including cash flow, profit motive, and opportunity costs — as well as the broader economy.”
“Systems and capabilities must be developed, deployed, and integrated into operations within the arc of the threat, not after the threat has passed or after DoD has spent billions of dollars on technologies or capabilities that already are obsolete or will be obsolete by the time they are deployed. The private sector now drives much of the technological innovation, which makes it difficult for DoD to keep pace.”
1. What are the reasons behind the system development?
2. What are the customer’s expectations?
3. How will the performance of the system be measured?
4. Who are the users and how do they intend to use the product?
5. What environmental characteristics (e.g., salt corrosion aboard a carrier, or strong electromagnetic fields) does the system have to take into account?
6. What are the existing and planned interfaces?
7. What is the customer’s level of expertise?
8. What functions will the system perform, expressed in the customer’s terms?
9. What are the constraints with which the system must comply?
10. What will be the final form of the product—model, prototype, or mass production?