“Advanced technology must be integrated into producible systems and deployed in the shortest time practicable. Approved, time-phased capability needs matched with available technology and resources enable bold acquisition strategies as the preferred approach to satisfying operational needs. Incremental development is the preferred process for executing such strategies.”
“A bias toward evolutionary development mitigates against over-engineered solutions, requirements creep, and premature technology insertion. Over time, with quick learning, rapid course corrections, and more opportunities to insert evolving technology, a series of incremental projects fulfills the overall portfolio mission needs by following a step-wise path, instead of attempting large and risky episodic jumps in capability.”
Acquisition, contracting, engineering, and testing strategies should reflect the iterative structure to include the processes, resources, and decisions needed to field each release.
Many operational commands overly define system requirements for the acquisition community, who in turn overly define system specifications for industry. This results in many stovepiped solutions that are not responsive to changes in operations, threats, and technologies. Most DoD systems don’t operate as standalone stovepipes, but are part of a systems-of-systems or portfolio of capabilities. Instead of overly defining system requirements, operational commands can enable greater speed and flexibility by capturing operational requirements at a broader portfolio level.
DoD’s requirements system is under-resourced and lacks the speed, agility, and innovative approaches needed to effectively exploit leading technologies for military advantage. DoD’s requirements processes, including implementation of JCIDS policies, contribute to lengthy development timelines, limited flexibility, and stove-piped systems. Although this process is important to provide joint warfighting priorities, the lengthy series of system-centric analyses, requirements documents, and reviews can limit innovation and interoperability by prematurely defining and constraining requirements.
“The purpose of an Initial Capabilities Document (ICD) is to document joint military capability requirements and associated capability gaps in cases where the Sponsor deems the operational risk of unmitigated capability gaps to be unacceptable. The ICD provides traceability to the operational context, threats, and other relevant factors that determine the joint military capability requirements. The ICD quantifies the capability gaps associated with the requirements and the operational risks across the joint force, and proposes materiel and/or non-materiel approaches to closing or mitigating some or all of the identified capability gaps.”
ICDs are traditionally scoped for an acquisition program to address. DoD organizations should consider authoring Portfolio ICDs to cover a broad mission or capability area and align with the scope of a portfolio rather than that of a single program. They would be broad documents central to ensuring that the operational, acquisition, and intelligence communities align around common outcomes, priorities, and expectations. In coordination with stakeholder operational commands, operational sponsors could manage capstone requirements via portfolio ICDs as living documents. This could include periodic updates to reflect their current concept of operations, strategic guidance, priorities, threats, capability gaps, and desired effects.
A Portfolio ICD must serve as an umbrella requirements document to allow future programs and rapid acquisition efforts to leverage and iteratively define capability gaps within that mission or capability area via capability development documents or check box alternatives. As operational commands develop and coordinate ICDs, they should consider structuring them to cover a broader capability or mission area vice a specific materiel solution to enable speed in future efforts. As new threats, operational demands, or technology opportunities arise, organizations will no longer have to develop and staff an ICD, but rather proceed to the analysis and subsequent requirements documentation activities.
During the early stages, the program works with its operational sponsors to refine and structure the program’s requirements. To set the stage, the team should have a clear understanding of the users’ Concept of Operations (CONOPS), performance objectives, and operational environment.
The requirements are prioritized to support an Analysis of Alternatives (AoA), tradeoffs, and an assessment of the advantages and disadvantages of structuring development via releases. As requirements mature, frequent collaboration with end users is critical to ensure the requirements capture users’ priority needs and that all stakeholders have a common understanding of current and future CONOPS.
Collaboration with other stakeholders (e.g., engineers, testers, and enterprise architects) is also critical during the shaping of the requirements. The program identifies a Product Owner, structures the requirements documents, and establishes the requirements management processes. Once a contractor is selected, the development team refines the program and release backlogs with the Product Owner and PMO.
The program backlog contains all desired functionality and requirements. A release backlog typically comprises the highest priority requirements from a program backlog that a team can complete within the established timeframe. Each sprint consists of the highest priority requirements from the release backlog. Once the development team commits to the scope of work for a sprint, that scope is locked. Sprint demonstrations conducted by the contractor at the end of a sprint may identify new features or defects that the team would add to the release or program backlogs.
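The backlog tiers described above can be sketched in code. This is an illustrative sketch only; the `Requirement` class and the `pull_release`/`pull_sprint` helpers are hypothetical, not part of any DoD tool or policy.

```python
# Sketch of the program -> release -> sprint backlog hierarchy.
# Assumption: priority is an integer where a lower number means higher priority.
from dataclasses import dataclass, field

@dataclass(order=True)
class Requirement:
    priority: int                      # lower number = higher priority
    title: str = field(compare=False)  # title does not affect ordering

def pull_release(program_backlog, capacity):
    """Move the highest-priority items the team can complete within the
    established timeframe from the program backlog into a release backlog."""
    program_backlog.sort()             # highest priority first
    release = program_backlog[:capacity]
    del program_backlog[:capacity]
    return release

def pull_sprint(release_backlog, capacity):
    """Commit the highest-priority release items to a sprint. Returning a
    tuple reflects that the sprint scope is locked once committed."""
    sprint = release_backlog[:capacity]
    del release_backlog[:capacity]
    return tuple(sprint)

program = [Requirement(3, "map overlay"), Requirement(1, "secure login"),
           Requirement(2, "offline mode"), Requirement(4, "export report")]
release = pull_release(program, capacity=3)
sprint = pull_sprint(release, capacity=2)
# sprint now holds the two highest-priority items: secure login, offline mode
```

New features or defects found during a sprint demonstration would simply be appended to `release` or `program` and reprioritized on the next pull.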
The Product Owner, in collaboration with operational sponsors, requirements organizations, legacy system operators, a broad user base, architects, systems engineers, enterprise architects, and other stakeholders, captures, integrates, refines, and prioritizes items on a program backlog.
The portfolio’s operational representative should be empowered to dynamically reprioritize, add or delete, and shape capability requirements based on operational needs, threats, technical performance, systems engineering, security, feedback from earlier releases, and other factors. These representatives would actively collaborate with operational commanders, end users, organizations providing threat assessments, and enterprise architects to curate the portfolio backlog. During portfolio reviews with Military Service leadership and operational commands, operational representatives could present the requirements backlog to ensure alignment with operational priorities and outcomes.
Each program or increment must also manage its requirements via dynamic backlogs. As interim developments are demonstrated or fielded, user feedback and system performance might generate new capability requirements or shift priorities for the backlog. The goal should be to ensure that each successive iteration addresses the users’ highest priority needs and strengthens force effectiveness.
A close partnership between users and materiel developers is critical to the success of defense acquisition programs and is a key tenet. Users must remain actively involved throughout the development process to ensure a mutual understanding across the acquisition and user communities. While most users maintain operational responsibilities associated with their day job, the more actively they can engage in the development, the better the chances of success. Operational commanders must make a commitment to allocate time for users to engage in development activities.
Users share the vision and details of their concepts of operations, business processes, and the desired effects of the intended capabilities. Through ongoing discussions, the program office and developers gain a better understanding of the operational or business environment, identify alternatives, and explore solutions.
Users can describe and validate the requirements, user stories, and acceptance criteria. The program office must make certain that the requirements can be put on contract and are affordable given funding, schedule, and technological constraints. Testers should also take an active part in these discussions to ensure common expectations and tests of performance.
User forums enhance collaboration and ensure that all stakeholders understand and agree on the priorities and objectives of the program. The forums can serve as a valuable mechanism for gathering the full community of stakeholders and fostering collaboration. They give users an opportunity to familiarize developers with their operational requirements and to communicate their expectations for how the system would support these needs.
Continuous engagement of users, developers, acquirers, testers, trainers, and the many other stakeholders at these forums also enables responsive updates and a consistent understanding of the program definition.
Accelerating the pace of delivery requires making adjustments to the design of the system being developed. It is not sufficient to simply “turn the crank faster”; we must also take a different approach to system design.
Reuse – someone has probably solved your problem already.
“Systems thinking has been defined as an approach to problem solving that attempts to balance whole thinking and reductionist thinking. By taking the overall system as well as its parts into account, systems thinking is designed to avoid potentially contributing to further development of unintended consequences.”
Systems thinking applies to problem solving when the components of an entity are best understood in the context of their relationships with other components of the entity, rather than in isolation. Consider the following elements:
Structure – the orientation of and connections between the project components, and how the project itself fits within the larger environmental structure. The internal structure drives the project’s function, while the external structure influences and modifies the project’s operations and outcomes.
Relationships – the interactions, links, and dependencies between the project components, as well as the connections between the project and the components and systems in the external environment. These relationships drive the system dynamics, behavior, and performance of the project.
Environment – the characteristics of the environment and context within which the project exists. These set boundaries and conditions for the project’s operations and may be inherent to the environment or dictated by the sponsor.
Assumptions – the initial conditions and definitions that impact a project’s definition, scope, and performance. Assumptions can cover different aspects of the project (e.g., conceptual, geographical, temporal, stakeholder), and their applicability varies.
Although there are numerous highly popular process models for design approaches, they are consistent in that they begin with problem exploration, move into ideation of solution concepts, and include some element of prototyping and testing those solution concepts for feedback.
While the process is depicted as linear, in practice it is often highly iterative and may require jumping across process steps depending on the circumstances of the team, problem complexity, and other variables unique to the space in which the problem exists.
The START point. The design is simple, basic, immature, and delivers little value. The best move involves additive strategies that increase complexity and introduce new components, features, or concepts (e.g., brainstorming, prototyping).
The SHIFT point. The design has accumulated a critical mass of complexity and now delivers significant value. Now the best move is to adopt reductive strategies that decrease complexity.
The STOP point. The design has accumulated too much complexity, which overwhelms value. The best move is to take a pause, then use reductive tools to significantly reduce complexity. At this point, it may be wise to scrap it and start over from scratch.
The SHIP point. The design is elegant: simple and effective, providing maximum value. Declare it complete and send it out into the field.
Trimming is an iterative technique for removing unnecessary elements from a design. It can be applied to technical designs and system architectures, as well as process designs, communications products, and organizational structures. The ideal candidate for trimming is an object that has been accumulating components over time (e.g., a system with many features, a process with many steps, or a presentation with many charts).
This method is typically used during the middle to late phases of a design effort, at a point when the design has accumulated a significant quantity of components, parts, or steps. It can be applied to a finished, mature design as well as to a work-in-progress.
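The iterative flavor of trimming can be loosely illustrated in code. The value scores, threshold, and `trim` function below are hypothetical, invented for illustration; they are not drawn from any trimming methodology.

```python
# Sketch of iterative trimming: repeatedly remove the element that
# contributes the least value until every remaining element earns its place.
def trim(components, min_value):
    """Return a copy of the design with low-value elements trimmed away.

    components: mapping of element name -> assumed value score
    min_value:  assumed threshold an element must meet to be kept
    """
    components = dict(components)          # work on a copy
    while components:
        name, value = min(components.items(), key=lambda kv: kv[1])
        if value >= min_value:
            break                          # everything left justifies itself
        del components[name]               # trim the weakest element

    return components

design = {"core feature": 9, "legacy chart": 2,
          "rarely used step": 1, "key workflow": 8}
print(trim(design, min_value=5))  # {'core feature': 9, 'key workflow': 8}
```

In practice the "value" judgment comes from user feedback and operational priorities rather than a fixed score, but the loop structure captures the method: remove one element, reassess, repeat.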
The Modular Open Systems Approach (MOSA) is a systems engineering approach that involves dividing large entities into smaller modules with standard interfaces. It builds on principles such as “high cohesion, low coupling” to produce systems that are more robust, maintainable, and easier to update over time.
The term “modular design” refers to building a large system out of discrete pieces, usually built to standard dimensions and attributes. This often allows for rapid assembly of modules into a system, as well as easy customization, depending on which modules are incorporated. Lego blocks are an example of modular design.
Open Systems use standards that are independent of a particular vendor or designer, increasing the diversity of potential suppliers. These standards are particularly important at interfaces. Again, a Lego block’s standardized interfaces allow for connections between any two blocks.
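The “standard interface” idea above can be sketched in code. This is a minimal illustration, not a real MOSA standard; the `Sensor` interface and the module classes are hypothetical.

```python
# Sketch of "high cohesion, low coupling" via a standard interface:
# the system depends only on the interface, so modules can be added,
# removed, or replaced independently -- like Lego blocks with standard studs.
from typing import List, Protocol

class Sensor(Protocol):
    """Standard interface: any module that implements read() plugs in."""
    def read(self) -> float: ...

class RadarModule:
    def read(self) -> float:
        return 42.0          # stubbed measurement for illustration

class InfraredModule:
    def read(self) -> float:
        return 7.5           # stubbed measurement for illustration

def fuse(sensors: List[Sensor]) -> float:
    """Average readings without knowing any vendor-specific internals."""
    return sum(s.read() for s in sensors) / len(sensors)

print(fuse([RadarModule(), InfraredModule()]))  # 24.75
```

Because `fuse` is written against the interface rather than a specific vendor’s module, a new supplier’s sensor can be competed in without changing the rest of the system.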
Program management is responsible for evaluating and implementing a modular approach to the maximum extent feasible and cost effective. This approach integrates technical requirements with contracting mechanisms to support a more rapid evolution of capabilities and technologies throughout the product life cycle through the use of architecture modularity, open systems standards, and appropriate business practices.
The modular approach is subject to verification to ensure that major system interfaces comply with widely supported, consensus-based standards, if available and suitable. It uses a system architecture that allows severable major system components, at the appropriate level, to be incrementally added, removed, or replaced throughout the life cycle of a major system platform, affording opportunities for enhanced competition and innovation.
1. New programs without an approved requirements document are ideal candidates for shaping an Agile requirements management structure.
2. Programs with an existing requirements document could explore tailoring the implementation details or restructuring the document.
3. Work with the Program Manager to scope out high-level requirements and manage requirements details via backlogs of user stories.
4. Outline key roles and responsibilities and identify the right individual(s) to serve as Product Owner and related roles.
5. In collaboration with the flag-level requirements board, a program could propose that the board approve the release backlog prior to the start of each release.
6. Sprint backlogs should be agreed upon by the Product Owner and development team and should not require requirements board approval prior to beginning each sprint.
7. Provide insight into the larger grouping of requirements/user stories planned, the priorities of items on the backlog, and the notional items planned for the next few releases.
8. Assign cost estimators to develop annual and lifecycle cost estimates. Identify sources of funding from applicable budget portfolios.
9. Have the operational command identify high-level capability requirements, attributes, and initial minimum values.
10. Hold regularly scheduled user forums and fund travel by stakeholders across the user community; alternatively, or in addition, provide virtual participation.
11. Arrange for developers to demonstrate existing capabilities, prototypes, and emerging technologies.
12. User feedback guides developers and acquirers in shaping the program and R&D investments.
13. Give stakeholders the opportunity to convey expectations and obtain informed feedback.
14. Communicate the acquisition pathways and associated program execution strategies – especially the operational impacts associated with experimentation and testing as they relate to the user training and fielding activities.
15. Identify the current engagements between the acquisition and operational/business organizations, to include the level, frequency, and methods of engagement.
16. Look to reduce the number of components, minimize the tight coupling of components, and break feedback loops in the system.
17. Limit the autonomous, independent behaviors of the components.
18. Prototype and iterate early and often. Welcome failure and learn from each iteration.
19. Apply a framework for front-end requirements in agile systems development efforts.
20. Incorporate principles and tools in the context of collaborative engineering of decisions among diverse stakeholders.
21. Decompose Complex Challenge Areas into Problem Statements
22. Enhance competition – open architecture with severable modules, allowing elements to be openly competed.
23. Facilitate technology refresh – delivery of new capabilities or replacement technology without requiring change to all elements in the entire system.
24. Incorporate innovation – operational flexibility to configure and reconfigure available assets to meet rapidly changing operational requirements.
25. Enable cost savings/cost avoidance – reuse of technology, modules, and/or elements from any supplier across the acquisition life cycle.
26. Improve interoperability – modules that can be changed independently.
27. Understand and discuss transition likelihood across several types of technology transitions.
28. Quickly identify portfolio technology transition imbalances for both the S&T and program communities.
29. Draft transition and/or integration strategy templates that individual efforts can tailor for their efforts
30. Identify forums for S&T project leads, program offices, innovation groups to share capabilities and program needs and opportunities
31. Regularly collaborate with traditional and non-traditional companies via multiple-award contracts
32. Deep understanding of system and sub-system designs and architectures
33. Ability to conduct end-to-end performance models of the system combined with a continuous technical effort to update and validate system models, using testing and engineering data
34. Quantitative understanding of how related legacy systems, or the system being upgraded, are used and how they perform operationally (e.g., reliability/availability, key performance metrics, etc.).
35. Establishment and maintenance of open interface standards, with the ability of the government program office to compete block upgrades to the system
36. Perform a Business Case Analysis to determine where, why, and when to prioritize limited engineering resources.
37. Improve chances of program success by failing fast, continual learning through experimentation, customer feedback, and iterative design.
38. Structuring programs and processes around small, frequent capability releases
39. Ask potential users, purchasers, and partners for feedback on all elements of the business model, including product features, pricing, distribution channels, and affordable customer acquisition strategies
40. Eliminate wasted time and resources by developing the product iteratively and incrementally – the process start-ups use to create and test minimum viable products.
41. Visit customer organizations to meet face-to-face with end users to understand their environment, frustrations, constraints, new threats, and operational priorities.
42. Tailor a business model canvas for your program to understand who your true customers are and what they value.
43. Make investments in rapid prototyping and experimentation to share notional solutions with users to solicit their feedback on features, tradespace, and design considerations.
44. Establish a regular collaborative environment to engage users, testers, and maintainers as part of a continual learning mindset throughout development.
45. Limit the number of core team members. Connect with outside experts as needed, but keep the core team small for day-to-day operations.
46. Collaboratively identify the team’s priority outcomes and shared responsibilities in achieving the outcomes.
47. Include functional representatives from program management, engineering, contracting, finance, cost, and logistics to ensure holistic discussion across the program’s lifecycle.
48. Provide the teams a set of guiding principles (e.g., speed and simplicity are priorities) and identify the core processes, reviews, and documentation for the type of acquisition and the applicable policies and statutes.
49. Maintain direct connectivity to those who will use the systems provided, enabling tight coordination on how requirements will be met over time within a portfolio construct.
50. Review upcoming contract actions for similar scope, sole source or restricted competitors, and related requirements.