In today’s acquisition practice, this means competition between various materiel solutions prior to Milestone A. Prototyping at this early stage of development takes the form of conceptual prototypes: breadboard designs of individual components and subsystems coupled with estimates of system performance provided by traditional modeling and simulation techniques.
Conceptual prototypes provide only basic approximations of system performance and cost, but enough insight to challenge assumptions and open the door for more innovation. They are also less costly to pursue than other types of prototypes and, given their timing, provide benefits that can be exploited over a much longer term. Others have picked up on this theme of non-system-specific prototyping as a means to innovate and have taken it in another direction: technology transition.
In terms of budget categories, innovation has been found most likely to reside in the applied research and advanced technology development budget categories. Technologies that are prototyped at this stage of development are not ready for operational use, but they can be demonstrated and assessed for operational utility.
More prototyping at these early stages of development has been proposed as a means of focusing research, bridging the worlds of technology development and systems development, and expanding operational performance at lower cost. Prototypes that appear in these budget categories are known as Advanced Technology Demonstrators, and they have become a valuable tool for technology transition.
Competing prototypes in the context of technology transition can take the form of a competition between system upgrades. A competition between system upgrades is typically more cost-effective and less risky than competing entirely new systems. The incremental improvements that upgrades provide go a long way towards facilitating innovation and, where still cost-effective, likely should be pursued.
Competing system upgrades in a cost-effective manner is again a matter of how objectives are framed. When objectives are framed in the most general terms, such as mission effectiveness and expected life-cycle costs, more options for pursuing competition result.
One option is to compete a combination of upgrades to existing systems to determine which combination provides the most cost-effective capability. The Air Force’s Enhanced Tactical Fighter program, which competed a combination of upgrades to the F-15 and F-16 fighter aircraft and led to the F-15E Strike Eagle aircraft, provides an example of this approach.
Another option is to compete an upgrade to an existing system against the capabilities of a new one. This option is a particularly attractive means for providing competitive pressures when resources are too constrained to compete multiple new designs. What this means for program planners is that competition may be more a question of program planning than anything else.
Given a limited supply of resources, the objective is always to allocate those resources most effectively to obtain the best mix of performance, cost, schedule, and risk. Accommodating competition is one way to obtain a better mix. The challenge is to approach competition in such a way that its benefits are worth its costs.
Workshop is a vehicle for exploring, testing, and hopefully proving an AI-driven tool to generate physical designs. Implementing a divergent design flow, Workshop builds off the traditional flow: define, create, explore, visualize, analyze, and fabricate. Where that flow departs from the norm is in who — or rather, what — is performing or assisting in those development stages, and the extent of that assistance.
With Workshop, engineers and designers specify design goals, along with parameters such as mass, volume, and engineering constraints, as well as materials and manufacturing processes available for production.
All that data gets digested by the AI’s deep computational networks to create a model it concludes is the optimal balance of the given constraints. More typically, the network will generate many versions, creating tens, hundreds, or even thousands of variants, all of which balance those constraints in slightly different ways.
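To make that concrete, here is a minimal sketch, in Python, of what specifying goals and constraints and generating a pool of variants might look like. All names here (DesignSpec, generate_variants) are hypothetical illustrations, not Workshop’s actual API, and the random generator merely stands in for the AI engine.

```python
from dataclasses import dataclass
import random

@dataclass
class DesignSpec:
    max_mass_kg: float        # mass budget
    max_volume_cm3: float     # bounding volume
    min_strength_mpa: float   # engineering constraint
    materials: list[str]      # materials available for production
    processes: list[str]      # manufacturing processes available

def generate_variants(spec: DesignSpec, n: int = 100) -> list[dict]:
    """Stand-in for the AI engine: emit n candidate designs, each
    balancing the given constraints in a slightly different way."""
    variants = []
    for i in range(n):
        variants.append({
            "id": i,
            "mass_kg": random.uniform(0.5, spec.max_mass_kg),
            "strength_mpa": random.uniform(spec.min_strength_mpa, 400.0),
            "material": random.choice(spec.materials),
            "process": random.choice(spec.processes),
        })
    return variants

spec = DesignSpec(2.0, 500.0, 150.0, ["al_6061", "ti_6al4v"], ["cnc", "3d_print"])
candidates = generate_variants(spec, n=250)  # tens, hundreds, or thousands
```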
AI-based generative design is no replacement for a product designer or engineer. Rather, think of it more as a right-hand man — or better put, a right-hand team — with countless hours to spend on each task. It gives designers the means to explore options far more exhaustively than they otherwise ever could, make effective decisions more quickly and earlier in the process, and eventually implement a design — from concept all the way to manufacturing.
The computational network proposes options, along with the data to indicate how well that option meets various goals and constraints, but the designer remains the guide of that process, making the key decisions and tradeoffs. Effectively, you tell your team what you’re trying to accomplish and what the limitations are, and that team reports back, “You said you wanted to do this, so here’s the avenue you should probably pursue.”
Ultimately, transformational AI-driven generative design will earn its place in product design toolboxes the same way any now-ubiquitous tool has: by proving its worth in making products faster to design, higher in quality and performance, and more amenable to the optimal means of production. For its part, generative design assists in all those goals by allowing designers to make the most effective and thorough decisions, and to do so earlier in the process.
Workshop delivers that ability for a designer to set objectives and constraints, including limits on geometry, material type and amount, and production methods. With generative design at their fingertips, a designer can then quickly create and integrate hundreds of options, each presenting a different degree of adherence to the given constraints.
This raises a great question: How can the user avoid being overwhelmed by a potentially large number of similar-looking designs, and instead home in effectively on the one or few best suited to the set goals and priorities?
To make all those generative design results easier to consume, Workshop includes filters that let you sort designs using the most relevant and important info up front. Filters let you navigate, compare, and contrast the tens or hundreds of options by the degree to which those designs meet the goals and constraints. Filter by strength, mass, cost, manufacturing type, or stiffness, for example, to narrow in on and eventually identify the best option to pursue.
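Continuing the hypothetical sketch above, filtering and ranking that pool of candidates might look something like this (again, illustrative names and thresholds only):

```python
# Screen the generated candidates by constraint adherence, then rank them.
strong_and_light = [
    c for c in candidates
    if c["strength_mpa"] >= 200.0     # filter by strength
    and c["mass_kg"] <= 1.5           # filter by mass
    and c["process"] == "3d_print"    # filter by manufacturing type
]

# Sort the survivors by mass so the most relevant options surface first.
shortlist = sorted(strong_and_light, key=lambda c: c["mass_kg"])[:10]
for c in shortlist:
    print(f"variant {c['id']}: {c['mass_kg']:.2f} kg, {c['strength_mpa']:.0f} MPa")
```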
Arriving at the right model, or at least the best one to explore first, shouldn’t end with creating a conceptual, stand-alone structural representation. Rather, the key is to also provide the means to bridge the gap from that machine-generated model to engineering verification and styling, and eventually on to getting it manufactured.
A simple output is of limited value: it yields an interesting shape, but one that can’t serve any use beyond the visual. A productive workflow needs that geometry embedded with all the rich data sets required to directly feed into simulation, verification, rendering, and on to prototyping and manufacturing.
If the machine produced just a shell of geometry, the designer might have to re-create the whole thing manually. Bridging the gap from synthesis to the rest of the workflow was a key design goal for Workshop’s generative design functionality, and its implementation is unique in the way it delivers the synthesized design in a complete, usable, and editable format ready for verification, re-design, and ultimately, physical creation.
A generative design–based workflow should seamlessly culminate in a manufacturable model, be it for prototype or volume production. Workshop generative design considers the constraints and capabilities of the manufacturing process and materials available.
This capability is particularly relevant in the context of additive methods like 3D printing, especially given recent advancements in printing with metal. Such methods allow for shapes and structures that conventional manufacturing methods can’t achieve, making it essential for the user to give generative design guidance on which methods are available or preferred.
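As a rough illustration of how such manufacturing guidance might be encoded, consider the following sketch; the rule names and thresholds are invented for illustration and do not reflect Workshop’s actual checks:

```python
# Hypothetical manufacturability screen: each process admits different
# geometry, so a candidate is checked against the rules of the methods
# the user has declared available or preferred.
PROCESS_RULES = {
    "3d_print_metal": {"min_wall_mm": 0.4, "max_overhang_deg": 45.0},
    "cnc":            {"min_wall_mm": 1.0, "max_overhang_deg": 90.0},
}

def manufacturable(candidate: dict, process: str) -> bool:
    rules = PROCESS_RULES[process]
    return (candidate["wall_mm"] >= rules["min_wall_mm"]
            and candidate["overhang_deg"] <= rules["max_overhang_deg"])

part = {"wall_mm": 0.6, "overhang_deg": 38.0}
print(manufacturable(part, "3d_print_metal"))  # True: suits additive methods
print(manufacturable(part, "cnc"))             # False: walls too thin to machine
```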
But you can’t ask or expect end users to navigate the appropriate performance levels, features, and virtual machine types, or to determine which configurations best suit generative design or any other workloads headed for remote execution.
As you might guess, Workshop hides the processing wizardry behind the curtain so the user doesn’t have to be concerned with how it gets done. Based on the model and constraints, Workshop assesses the workload at hand, “right-sizes” it to the appropriate machine instances, and, when complete, hands the results back to you inside the application.
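A minimal sketch of what such “right-sizing” logic might look like, assuming hypothetical instance tiers and a simple core-hour estimate (the real service’s selection criteria are hidden from the user by design):

```python
# Hypothetical right-sizing: map an estimated workload to an instance tier.
# Tier names and thresholds are invented for illustration.
INSTANCE_TIERS = [
    ("small-8core",   10.0),          # suited up to ~10 core-hours
    ("medium-32core", 100.0),         # suited up to ~100 core-hours
    ("large-96core",  float("inf")),  # everything bigger
]

def right_size(n_variants: int, hours_per_variant: float) -> str:
    """Pick the smallest tier that comfortably fits the estimated work."""
    core_hours = n_variants * hours_per_variant
    for name, limit in INSTANCE_TIERS:
        if core_hours <= limit:
            return name
    return INSTANCE_TIERS[-1][0]

print(right_size(250, 0.2))  # "medium-32core" for a 50 core-hour job
```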
Over time, machine learning will permeate virtually every corner of computing technology and applications. Of that, the majority of us have little doubt. In design computing, uses have already popped up to significantly improve the performance of existing 3D graphics and rendering that professionals require. But most certainly, these uses represent but the tip of the iceberg, and the real impact will come in more revolutionary applications.
The demands and workloads that product design represents make it fertile ground for AI infiltration. Expect machine learning advancements to both leverage and transform the existing tried-and-true design/verify/iterate/manufacture workflow.
We likely have not yet imagined most of the possibilities, but the same could have been said ten years ago of generative design, an approach that offers compelling, undeniable appeal. Combine that with its natural pairing with virtual workstations, and we should see momentum increase for both approaches.
It’s time to pay close attention to what AI offers for product design computing — assisting in design creation itself, leveraging the technology to streamline workflows, improving end products, cutting costs, and shortening time to market. Your competition likely has.
Shifting traditional design work earlier in the design process so as to avoid rework later is difficult. A number of product development practices that have been characterised as a shift from developing a single-point design to developing a set of possible designs have proven effective at reducing development rework.
Here we refine the definitions of such “set-based” development practices, which are aimed at early development phases, and show how they can be applied to the systems engineering process in order to reduce or eliminate the root causes of rework.
We use the term rework to specifically mean the work that occurs when a prior decision that was assumed to be final for that project is changed because it was later found to be defective.
A decision is considered “final” in the sense that the team does not have any reason to believe that the decision would need to change, and therefore expects that development work can proceed assuming that decision will stand for the remainder of the project.
Such rework is distinct from design iterations performed for rapid learning, where the design decisions are understood to be experimental, and thus other development work does not proceed assuming those to be the final decisions.
It is also distinct from establishing rapid project cycles to accelerate customer feedback. However, rework may occur within a design iteration or rapid project cycle if a poor decision would otherwise result in the iteration or project cycle failing to achieve its goals.
Rework has become so commonplace that most development managers seem to consider it inevitable: they schedule multiple prototype build-and-test cycles into their project plans, create elaborate engineering change processes to manage multiple layers of rework, and pad schedules to account for the unknowns. Such practices do not address the root causes so that the rework can be eliminated.
But much of the rework that product development organisations experience is not inevitable. Development organisations can eliminate rework from their product development processes. But to do so requires a very different approach to the systems engineering of new products and technology, all starting with the front end of the product development process.
Several models and tools have been proposed to reduce the need for rework, but these approaches will have limited impact if they do not address the root causes of that rework.
Systems engineers and development managers are all too familiar with the frustration of seeing development teams revisit decisions made earlier in their projects and watching the ripple effects of violated assumptions, associated design changes, and reworked plans, assessments, and designs they know are coming as a result. Here are several examples:
First, during prototype testing, the team discovers that two of the design specifications are incompatible, meaning one of them must be revised and all the specification work done to date reworked.
Second, the product team realises to their surprise that customers do not like the trade-offs made among competing objectives, so key product features must be changed late in the game at significant cost to the organisation, or the organisation must accept lower-than-projected sales.
Third, in developing the manufacturing process, the manufacturing engineering team learns that the current equipment is not capable of producing an important product feature, forcing the tough decision either to redesign the feature, thereby delaying product delivery, or to invest much more than planned in updating manufacturing capability.
Finally, after market introduction or delivery to the customer, a latent design flaw results in warranty claims that eat up most of the profitability of that product, possibly even blowing up into a product recall.
The bottom-line impact of unfortunate but common occurrences such as these is staggering. The cost to extract such defects can significantly increase the original development costs, depending on when they are discovered.
Such defects squander market opportunities, damage brand reputation, and otherwise take huge bites out of return on investment and growth potential. Often missed are the compounding effects of frustrated employees, conflicts across organisations, and related inefficiencies.
Much of a project’s life-cycle cost is determined by decisions made by the end of the concept phase, and most capital is committed before detailed design starts.
Systems engineers face the dilemma that very little is often known about the full impact of those decisions when they are being made, so it is no surprise that such decisions are frequently revisited and the associated development work redone.
We tried many tools to get to the final iteration. You can see in the picture that we show two design iterations: the middle one, which is still quite similar visually to the original node, was our first iteration. And you can see that version 2.0, the second iteration, is where we really made the big step with the freedom of form.
Using generative design, engineers specify parameters, such as weight-to-strength ratios, efficient material use, and temperature, pressure and force ranges. The generative design engine creates several design options through an iterative approach. Engineers then evaluate and select from among the generatively designed options—more options than would be possible with traditional design tools, and likely many options that the engineers would never have considered.
No one group has complete control over the final results, so a lot of different engineers have to move at the same time, or at least follow in series. What you see in these processes is that they all start with great concepts, but throughout the phases they fall victim to planning pressure and costs, so a lot of new ideas and challenges are simply designed out.
In a Set-Based Design approach, which has been identified as a preferred approach for future design efforts, discipline-specific designs are developed in parallel across a broad design space to improve the flexibility of the design by delaying key decisions until the design space is fully understood. The parallel nature of the approach also makes it an ideal fit for high-performance computing.
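A brief sketch illustrates why the approach parallelises so naturally: each point in the design space can be evaluated independently, so the work maps directly onto parallel workers or, at scale, HPC nodes. The evaluation function below is a toy stand-in, not a real discipline solver:

```python
# Each point in the design space is evaluated independently, so the work
# maps directly onto parallel workers (or, at scale, HPC nodes).
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def evaluate(point):
    """Toy stand-in for a discipline-specific analysis run."""
    span, thickness = point
    return {"point": point, "score": span / thickness}

design_space = list(product([10, 20, 30], [1.0, 1.5, 2.0]))  # 9 design points

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(evaluate, design_space))
    # Keep the whole set; eliminate regions only as knowledge accumulates.
    feasible = [r for r in results if r["score"] <= 20]
```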
Since computers are already widely used in design work, building the assembly process in simulation can greatly reduce manufacturing time, particularly when assembly sequencing is considered as part of the process.
Tools need to be verified and validated; problems must be easy to set up and run; structural process generation must be easy and quick; tools must be built to run effectively and efficiently on massively parallel computers; and results must be timely.
Geometric assembly constraints are easily identifiable and definable, but certain constraints, especially component constraints and soft constraints, are difficult to identify without a good, realistic feel for the assembly process.
In the mixed prototyping concept, several questions need to be clarified: (1) Which parts should be real prototypes? (2) Which parts should be virtual? (3) How much manipulation of the virtual parts is feasible and needed? Although the answers to these questions are very context dependent, a decision can basically be made based on the following aspects.
For specific cases, it is difficult to obtain an obviously optimal solution across all these aspects. The trade-offs among them must be considered carefully in terms of the application and requirements in order to define a proper strategy for assembly evaluation based on mixed prototyping; a sketch of such decision logic follows the list of aspects below.
Initial constraints are imported into the Automatic Assembly Planning System to generate the feasible sequences. Planners then view and verify those sequences in the virtual reality workspace, identify new constraints, and decide on requirement-change criteria (e.g., cost, number of orientations). Next, the users go back to the Automatic Assembly Planning System to re-plan the sequences. The planners repeat this process until they find a satisfactory sequence.
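In Python-flavoured pseudocode, the loop the planners execute might be sketched as follows; plan_assembly and review_in_vr are hypothetical stand-ins for the Automatic Assembly Planning System and the VR review step:

```python
# Hypothetical sketch of the plan/verify/re-plan loop described above.
def plan_assembly(constraints: set[str]) -> list[str]:
    """Generate feasible sequences under the current constraint set."""
    return [f"sequence satisfying {len(constraints)} constraints"]

def review_in_vr(sequences: list[str], round_no: int) -> tuple[bool, set[str]]:
    """Planners view the sequences in the VR workspace; toy behaviour:
    the first review uncovers a new cost constraint, the second accepts."""
    if round_no == 0:
        return False, {"limit number of reorientations (cost)"}
    return True, set()

constraints = {"initial geometric constraints"}
for round_no in range(10):                      # repeat until satisfied
    sequences = plan_assembly(constraints)      # generate feasible sequences
    satisfied, new = review_in_vr(sequences, round_no)
    if satisfied:                               # a satisfactory sequence found
        break
    constraints |= new                          # re-plan with new constraints
```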
1. Design strategy must obtain economies of scale in customised production, which is why standard components have become very popular in the manufacturing industry.
2. In the mixed prototyping concept, standard parts should normally be real components, since they can easily be found in stock.
3. For fixed designs that do not need to change much, real components can be produced through conventional rapid prototyping technologies.
4. Customised parts must be evaluated and revised many times, so virtual prototypes are used, since they are flexible to modify.
5. During an assembly process, it is impossible to connect two real components using a virtual component and obtain realistic feedback; likewise, a real component cannot be stacked on a virtual component.
6. Using the largest component of an assembly as a virtual part is not ideal if several other real and virtual parts are connected to it.
7. Parts onto which several other components are assembled, such as the base part, serve better if they are real.
8. Some workspaces and assembly parts cannot be completely defined and simulated, so real components must be used as much as possible.
9. If the prototyping cost of some components is very high, virtual prototypes should be used even if the designs are already fixed.
10. Users obtain more realistic sensory feedback from real components than from virtual components during assembly.
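As promised above, here is a rough sketch of how these aspects might be encoded as decision rules; the attribute names and thresholds are invented for illustration:

```python
# Hypothetical rule-of-thumb classifier for real vs. virtual parts,
# following aspects 1-10 above. Attribute names are illustrative.
def choose_prototype_kind(part: dict) -> str:
    if part["standard"]:                 # aspect 2: standard parts from stock
        return "real"
    if part["prototyping_cost"] > 1000:  # aspect 9: too costly to build
        return "virtual"
    if part["customised"]:               # aspect 4: revised many times
        return "virtual"
    if part["is_base"]:                  # aspect 7: many parts attach to it
        return "real"
    if part["design_fixed"]:             # aspect 3: stable enough to build
        return "real"
    return "virtual"                     # default to the cheaper option

bracket = {"standard": False, "prototyping_cost": 120, "customised": True,
           "is_base": False, "design_fixed": False}
print(choose_prototype_kind(bracket))    # "virtual"
```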