Marine  Magnet Dispatch Service Centre

Top 10 Product Configuration Customer Demand Key to Market Security Advantage in Digital Space

11/20/2019

Can DoD Get Speed and Security With New Digital Networks? The Pentagon is struggling to speed up acquisition at the same time it’s imposing new network security controls. But plans are in the works to do both.

If the Pentagon gets real-life scenarios showing how a Blockchain implementation can leverage DevOps practices and modern Docker network solutions, it just might square the circle between accelerating acquisition and improving security.

“There is a trade between the objective of tool development and the assurance properties that we’re trying to achieve. But there are ways to reconcile them.”

Right now, DoD has the worst of both worlds. Network security certification can be so laborious and bureaucratic that tools are often obsolete before testing is finished. Yet the final product isn’t necessarily secure anyway, because new threats emerge faster than the Pentagon can upgrade its defenses.

So being too slow on acquisition is bad for security as well. But speeding up acquisition creates risks of its own. Getting the Pentagon to move faster doesn’t win any arms races if adversaries steal the new tech as fast as we can field it, eroding the lethality of the joint force.

“We talk a lot about acquisition going faster, we have to have the speed. That’s great, it’s clearly slow and cumbersome the way it is, but let’s not get out in front of ourselves and get so ahead, so fast, that we are not considering security.”

Network security needs to be as important as speed, not an afterthought to it. “We need to put security in our requirements and acquisition process. Until we grade people on security as well as cost, schedule, and performance, why would you go and do the extra credit work on security?”

Yes, there could still be a waiver process to skip security requirements when speed is of the essence – but it needs to be controlled at a much higher level than it is today. “Today there are far too many instances where there’s a program manager making risk decisions. In the system we’re putting together, the decisions are made at higher levels.”

Centralizing decisions can certainly improve consistency and control – but it rarely speeds things up. The armed services agree they need to work together better — they just don’t agree on how. 

DevSecOps to the Rescue?

So, we asked, how do you reconcile security and speed? “The DevSecOps approach comes to the rescue.”

DoD is desperately trying to play catch up by borrowing an industry practice known as DevOps.  Instead of having development and operations done by separate teams with little contact, DevOps merges the two, so the people writing new code can get instant feedback from the people who actually have to use it, and the users can request upgrades directly from the developers. 

The common variant is DevSecOps, which brings security experts into the mix so they can check the code both as it’s being written and while it’s being used.

One best practice that DevSecOps teams often use is to keep most of the code constant and introduce new features as plug-and-play modules that don’t affect the fundamentals of how the network tools work. That way, you can upgrade one aspect without affecting the others, such as security.
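As a rough sketch of this practice (all class and module names here are invented for illustration), a stable core can expose a narrow plug-in interface so that new features are added as modules without touching the security-critical foundation:

```python
# Hypothetical sketch: a stable core with plug-and-play feature modules.
# The core, and the security checks it enforces, never changes;
# new features register themselves without modifying the fundamentals.

class Core:
    def __init__(self):
        self._modules = {}

    def register(self, name, handler):
        # A new capability arrives as a module; the core itself is untouched.
        self._modules[name] = handler

    def run(self, name, payload):
        # The security check lives in the stable core and applies to every module.
        if not isinstance(payload, dict):
            raise ValueError("payload must be a dict")
        return self._modules[name](payload)

core = Core()
core.register("echo", lambda p: p["msg"])
print(core.run("echo", {"msg": "hello"}))  # -> hello
```

Upgrading the "echo" module here never requires re-testing the core's checks, which is the point of the practice.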

How Does Configuration Management Fit With DevOps?

Yes, it is true that you cannot really do DevOps without configuration management in place. Here we present principles and examples of comprehensive configuration management, without which artifacts and other useful information end up scattered and disorganized.

Remember the objective of DevOps: developing tools as quickly as possible. That objective can only be achieved through proper organization and planning. Comprehensive configuration management gives you the ammunition to power up the DevOps machine.

But be advised that there is some confusion around promoting configuration management solutions as “DevOps in a box”: some say DevOps is about collaboration between people, while others say configuration management technologies are simply tools for automating the application of configuration states.

DevOps is the extension of flexible practices across both the development and operations departments. In fact, DevOps seeks to unify the goals of both departments. At some companies, the development department seeks change while the operations department seeks stability. But companies that embrace DevOps want both stability of their deployed assets and frequency of change. However, achieving this outcome requires change.

Like flexible tools, configuration management gives teams the confidence to move quickly with their changes. Under flexible practices, the company gives configuration management responsibilities to the development teams, empowering them to provision, configure, and manage their own infrastructure: you build it, you run it.
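A minimal sketch of what that self-service model looks like in code, assuming a declarative "desired state" style (the package names and versions below are invented): the team declares the state it wants, and a reconciler computes what must change, instead of anyone hand-editing servers.

```python
# Hedged sketch of configuration as code: declare a desired state and
# compute the delta against the actual state, rather than editing machines
# by hand. Package names and versions are illustrative only.

desired = {"nginx": "1.24", "openssl": "3.0"}
actual  = {"nginx": "1.18"}

def reconcile(desired, actual):
    # Everything that must change so the machine matches the declared state.
    return {pkg: ver for pkg, ver in desired.items() if actual.get(pkg) != ver}

print(reconcile(desired, actual))  # -> {'nginx': '1.24', 'openssl': '3.0'}
```

Real configuration management tools apply the computed delta as well, but the declare-then-reconcile loop is the core idea.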

If the provider sets certain network security specifications, the clients must comply – in fact, if they don’t, their application tools may not be able to run on the infrastructure at all. Conversely, the users no longer have to reinvent the network security wheel for each of their databases and applications: They can rely on new architectures to protect them much of the time.

The goal is to design the overall architecture of the new structure – the “container” into which users’ code must fit – to be “loaded with as many security properties” as possible. Then you keep that foundation as stable as possible, making changes only slowly and deliberately, and with extensive testing.

Meanwhile, on a different and much shorter cycle, you can allow what’s built on top of that foundation – the “business logic” used by a given client – to evolve rapidly.

Of course, none of this is easy to do. DoD has hundreds of different network systems already, many of them incompatible, and it keeps adding more. Actually building the new computing infrastructure – and building it securely – will be harder still.

Today’s consumers demand frequent product refreshes. This poses significant challenges in managing product configuration in the race to deliver the right connected products at the right time.

Risks of escalating costs, increasing product complexity, failing business cases, weakening information security, and growing regulatory compliance burdens further aggravate the situation.

How can DoD stay ahead amidst the disruption? Ensuring collaboration, adopting a customer-centric mindset, and creating sustainable business models and processes are key to competitive success. Amidst the disruption, systems engineering remains relevant and fundamental to product configuration.

However, it is important to combine it with information-based management using emerging digital technologies. This requires repositioning of configuration management principles to focus on customer centricity. 

We illustrate a pragmatic approach to reimagining product configuration from a consumer’s perspective, thereby transforming configuration management from the traditional static model to a dynamic one.

Product definition has progressed over time, with changes outpacing their adoption.

With advancing technology and evolving customer expectations, product configuration has gone through dramatic change. It has moved from standardized configuration to mass customization, and finally to infinite product configuration. 

The journey of product progression has had numerous milestones and will continue. First came the product economy, with empirical design facilitating volume production of standardized products but providing very little opportunity for product variation. With the continual development and stabilization of product and process, however, product customization took center stage.

Later came the rise of the service economy, with an increased focus on mass customization. Salient features of products conceived in this phase include system design with product variations, product configurability, and real-time management of supply and demand.

The service-as-a-product phase was quickly superseded by the experience economy, with customer experience identified as the key; intuitively designed products and innovative business models were built around it.

Now, a typical product definition admits infinite configurations. In the global digital market, consumers are becoming increasingly hyperconnected across multiple devices. They are looking for continuous product refreshes while aspiring to gain full control over them. As the industry gears up for the next transition, the digital data economy is ushering in an immense range of product features, enabled by emerging technologies and advanced analytics.

Through all these transformations, product complexity has grown manifold and has reached an inflection point, making it critical for businesses to re-imagine the entire product value chain from the perspective of the consumer. Digital Disruption is upending the classical configuration management style.

Digital disruption is now challenging the traditional configuration management lifecycle and its inherent waterfall structure. Every function creates a structure around its processes, which tends to get heavy and gradually becomes outdated as it moves along the lifecycle. 

Classical configuration management, for instance, is highly structured comprising part information, bill of material, and other components. Such a static approach lacks modularity and fails to meet the configuration needs of complex digital products and system requirements. Its rigidity leads to time and cost pressures, and makes it unsuitable to meet consumer demands for frequent product refreshes.

What does this mean for DoD business? Configuration management will need to be repositioned in the context of digital disruption to ensure sustained success, so that it is compatible with connected, smart products and services.

In the age of digital disruption, there is a need to shift gears from traditional change-driven static configuration management to data-driven dynamic configuration management, ensuring near real-time responsiveness. 

In the future, instead of taking a structure-centric approach, product information must adopt an attribute-centric approach to incorporate rich consumer context. Given the multidisciplinary nature of digital products, such an approach will provide enormous configuration flexibility in the customer arena and foster new business models that create deeper insights.  

Early movers in this arena include aero-engine builders that not only sell or lease engines but also offer flight-efficiency services on an analytics-as-a-service model, leveraging the data generated by the engines and their surrounding systems.

In essence, product configuration will move closer to the consumer ecosystem in the digital age as consumers demand greater control of product configurability after taking possession of the product.  As a result, the focus will shift to maximizing product features that are configurable during runtime in the customer arena, using embedded systems, smart materials and so on.

However, it is important to note that the extent of configuration in the consumer arena will depend on regulatory and compliance requirements. Embracing the new paradigm of customer-centric dynamic configuration involves several levers.

Given the complexity of digital products, their definition and requirements should be mapped top-down and validated bottom-up for conformity.

Model-based concepts help simplify the complexity of a multi-disciplinary product and its system of systems by representing the set of connected systems and their structure, behavior, and requirements in a model-based arena, using digital models.

Agile methodology enhances effectiveness of product development by increasing responsiveness and ability to accommodate multiple refreshes in the product to meet changing customer demands. 

Knowledge management infrastructure secures the pipeline of data and builds the configuration pipeline, which manages the bill of information across the lifecycle of the product in conjunction with the Digital Twin.
 
Analytics and insights convert the abundant data available across the digital thread into meaningful information that informs smart runtime updates to product configuration.

These levers enable the transformation of multiple versions of systems engineering across the value chain into a single model representing the end-consumer arena – comprising infrastructure, mission space, and product-to-product consumer behavior – supported by the data backbone of the Digital Twin and enabled by forward engineering and reverse correlation capabilities. Emerging technologies provide the flexibility required to achieve this.

At the same time, emerging business architecture allows enterprises to expand the boundary of their core system with end-to-end management of data pipeline, creating a connected enterprise. 

Ensuring data coherence and building a dynamic configuration pipeline to manage the bill of information across the product lifecycle, in conjunction with the Digital Twin, is the first step towards customer-centric configuration management. Dynamic configuration management drives competitive differentiation.

Dynamic configuration management propelled by data is fundamental to postponing configuration to the customer arena – a key competitive differentiator in today’s digital world.

In achieving this transformation, organizations need to mitigate digital waste and network threats through robust data governance, and build a culture of innovation by demonstrating credibility of technology and systems. 

In addition, it is important to identify relevant data specific to consumer context and integrate configuration rules with a knowledge management framework that can be leveraged by product teams for continuous innovation. The key to success of product configuration lies in the ability of organizations to develop well-defined business models aligned with their goals and transition trajectory.  

The digital configuration management program consists of analyzing and transforming product configuration processes with digital technologies, to accelerate the response to customer requests, standardize product offerings, reduce costs and delivery times, and increase quality.

The combined use of digital catalogs – which increase productivity by digitizing parts catalogs to manage technical information, costs, research, and inventory – and digital configurators can be the right tools to:

1. Guide the user during the selection of the product in the design phase

2. Standardise the products used with certain costs and specific delivery times

3. Provide the supply department with the tools to negotiate with suppliers

4. Reduce deviation from the standard

5. Map the products in stock with a series of unique parameters and search them later, greatly reducing the creation of new parts

6. Accelerate and reduce waste of time in a simple, standard and defined workflow process

7. Give traceability and guide the user through a non-standardised workflow

8. Connect the functions of offer, engineering, procurement, production, warehouse

9. Enable continuous and real-time communication with Enterprise Resource Platform and product life cycle management system to build a unique and integrated architecture
​
10. Leverage product configuration management to handle more options with less 
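Point 5 above is the easiest to make concrete. A minimal sketch, with invented part numbers and parameters: index the parts in stock by a tuple of their defining parameters, so a designer searches the catalog before creating a new part.

```python
# Hypothetical sketch of point 5: map stock parts to a tuple of unique
# parameters and search that index later, greatly reducing the creation
# of new parts. Part numbers and parameters are illustrative only.

catalog = {}  # (material, diameter_mm, length_mm) -> part number

def add_part(part_no, material, diameter_mm, length_mm):
    catalog[(material, diameter_mm, length_mm)] = part_no

def find_or_flag(material, diameter_mm, length_mm):
    # Reuse an existing part when one with these parameters is already in stock.
    return catalog.get((material, diameter_mm, length_mm), "NEW PART NEEDED")

add_part("P-1001", "steel", 8, 40)
print(find_or_flag("steel", 8, 40))   # -> P-1001 (reuse, no new part created)
print(find_or_flag("steel", 10, 40))  # -> NEW PART NEEDED
```

A production catalog would add tolerance-aware matching and approval workflow, but the parameter-keyed lookup is the core of the deduplication benefit.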

Top 10 Product Configuration System Retains Test/Design Data Update Consent/Approval Transfer Speeds

11/20/2019

Digital Twin configuration is determined by the type of input data, the number of data sources, and the defined metrics. The configuration determines the value an organization can extract from the Digital Twin: a Twin with a higher configuration can yield better predictions than a Twin with a lower configuration.

What if the product configuration data delivered via the DevOps method could be recorded on a Blockchain? That would make the process easier and more transparent for both sides, and would eventually improve quality.
Blockchain and DevOps are growing and gaining wide acceptance. These two technologies already offer many advantages alone, but together they can provide tactical and strategic advantages.

The main idea centers on making the product delivery chain (DevOps) more transparent and reliable by applying Blockchain technology.

Configuration management is important in DevOps because it helps you automate otherwise tedious tasks and allows an organization to increase agility. Moreover, configuration management supports the DevOps big picture, and it’s widely agreed upon that configuration management is essential to DevOps, as opposed to being just another component of the process.

Configuration management takes on the primary responsibility for three broad categories required for DevOps transformation: identification, control, and audit processes.

Seeing configuration management as separate from DevOps creates problems of perspective. Comprehensive configuration management is essential to a properly functioning DevOps organization: it lays the groundwork for far more automation than it impacts directly, so enterprise businesses communicate better and function as a more agile development unit focused on continuous integration and continuous delivery.

By manipulating simple configuration files, a DevOps team can use application development best practices, such as version control, testing, small deployments, and design patterns. In short, this means Blockchain code can be written to provision and manage an infrastructure as well as automate processes.

Product Configuration Management has often been thought of as a design problem, but the reality is that decisions are made about what product variants to offer long before designers begin work. The implications of those decisions impact teams across the product lifecycle.

With product configuration inside product lifecycle management, you can meet the demand for more advanced and varied product offerings, without driving up costs. Leverage the product configuration blockchain backbone to provide a common definition of variability across the lifecycle. 

Take control of your product configurations – whether you deliver products with a defined set of supported variants, more complex products with an exponential number of possible variations and combinations, or engineered to order products that require new variants of parts and designs with every order.

Creating a good configuration depends on how good the map is. The map describes the selection route from the requirements to the outcomes of configuration. Configuration of complex products may include several sub-systems and have multiple loops within the quotation process. 

The design structure approach is used for the system assessment of decomposition and integration, showing the flow of interactions between elements. The elements may represent components of a product, phases of product development, people in organisations, or design parameters of a product. The main idea of the design structure approach is to map the interactions between the listed elements.

Product configuration can be roughly defined as the task of producing a specification of a product individual, a configuration, from a set of predefined component types while taking into account a set of restrictions on combining the component types. 

The process of configuration is the construction of a technical system according to the requirements of a specification, by selecting parameters and positioning instances of suitable existing component types from a catalogue; it does not involve the development of new component types. The development of new component types, e.g. for addition to the component catalogue, is rather a typical task in the domain of design.
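As a hedged sketch of this definition (the component types and the restriction are invented for illustration), a small configuration task can be expressed as enumerating assignments of predefined component types from a catalogue and keeping only the combinations the restrictions allow:

```python
# Minimal sketch of configuration as defined above: select instances of
# predefined component types from a catalogue, subject to restrictions on
# combining them. No new component types are created during configuration.

from itertools import product

catalogue = {
    "engine": ["diesel", "electric"],
    "chassis": ["light", "heavy"],
}

def valid(config):
    # Restriction on combining component types: in this toy example an
    # electric engine is only allowed on a light chassis.
    return not (config["engine"] == "electric" and config["chassis"] == "heavy")

configs = [dict(zip(catalogue, combo)) for combo in product(*catalogue.values())]
solutions = [c for c in configs if valid(c)]
print(len(solutions))  # -> 3 of the 4 possible combinations survive
```

Extending the catalogue with a new engine type would, in the terms used above, be a design task rather than a configuration task.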

Looking closely at the various variants of the configuration task, subtle differences exist that have an impact on what knowledge is involved and how the task can be performed.

A first source of variation is the set of components. On the one hand, the set of components can be entirely determined in advance, as is the case in the configuration of a block for the production of a completely defined product.

The components, their shapes and dimensions are known, and the configuration problem consists of finding the optimal layout of the block components. The example domain of Lego block boat configuration also fully specifies the components.

The set of basic blocks could also be specified as a generalised block parameterised by length, width, height, and color. In that case the specification of the component space would contain one type, i.e. the rectangular block with replacement parameters.

The relations making up the arrangement of components provide another source of variation in configuration design. One extreme in the spectrum of configuration problems is the case where the arrangement is fixed and the configuration task is reduced to assigning values to parameters of the components, i.e. parametric design.

A less restricted case is where the block skeleton arrangement is given, but the specific arrangement still has to be determined. The other extreme case is where the space of possible arrangements is not limited in any way by the specification of the problem. 

The Lego block problem is an example of a configuration problem where the block skeleton arrangement is given, but the constituent blocks and their relations still have to be determined.

Another source of variation in the task is the degree to which the requirements can be directly related to the components and their relations. For example, in some problems the requirements are specified in terms of a number of initial value assignments to component parameters.

Other variants of configuration assume the requirements to be functional, in which case some relation between the arrangement of components and the required function must be determined. In the Lego block problem, the requirements are not directly related to the components themselves, but to geometric properties of the final assembly. Testing the requirements against a possible solution will involve some aggregation over the properties of the individual components.

Although the terms component, parameter and assembly suggest that configuration design mainly applies to the configuration of physical artifacts, the components can also represent  other things, e.g. activities.

 From that perspective scheduling is in many respects similar to layout design: the components, ie activities are fully specified and the problem is to find an arrangement of the activities in time that satisfies a number of constraints. 

A planning task where the possible actions that constitute the plan are known, but where the time order and dependencies between actions are unknown, can also be viewed as an example of a configuration design task.

The set of solutions to a configuration problem is a subset of the space of all possible assemblies of components. This space is often subdivided into the space of valid configurations, which satisfy all constraints; the space of suitable configurations, which satisfy both the constraints and the requirements; and the space of optimal configurations: suitable configurations that also satisfy optimality criteria.

In addition to these distinctions, it is also possible to distinguish between configurations that can and cannot be constructed in the real world. For example, one could imagine a configuration of components that performs a motor function, but such a device would be very difficult to construct given state-of-the-art technology. In principle, constructability conditions could be formulated as constraints, but in practice they are often omitted.
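A toy illustration of these nested solution spaces, with invented numbers: each filter below narrows the previous space, from all assemblies down to the optimal ones.

```python
# Toy illustration of the nested spaces described above:
# all assemblies >= valid (constraints) >= suitable (+ requirements) >= optimal.
# The assemblies and their properties are invented for illustration.

assemblies = [{"blocks": n, "cost": 10 * n} for n in range(1, 7)]

valid    = [a for a in assemblies if a["blocks"] <= 5]          # constraints
suitable = [a for a in valid if a["blocks"] >= 3]               # requirements
optimal  = [a for a in suitable
            if a["cost"] == min(s["cost"] for s in suitable)]   # optimality criterion

print(len(assemblies), len(valid), len(suitable), len(optimal))  # -> 6 5 3 1
```

A constructability filter, as noted above, would simply be one more predicate in the chain, which is why it is easy to state and equally easy to omit.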

The dimensions on which configuration problems can differ can be summarised, showing for each dimension some of the values a problem can take. A space of configuration problem types can be constructed by taking the cross product of the values on the three dimensions; much subtler differences could be included, but we restrict ourselves to a limited range of values.

Given a knowledge base that contains the described knowledge types, the configuration-design task could in principle be solved by a straightforward search process.

But the combinatorial nature of the problem makes this option unrealistic in practice. So problem-solving methods are needed that constrain the search process in some way, by using knowledge – heuristic or otherwise – and by introducing additional assumptions that restrict the search space.

Methods for construction problems include model-based construction and case-based construction. Methods for configuration include blockchain skeleton construction, propose-and-revise, and least-commitment.

In the context of a general family of methods based on a top-level decomposition of the design task, process models of design include decompositional design, case-based reasoning, and transformational design.

Models for case-based configuration include structure-oriented approaches, where components are mapped onto a blockchain, and a resource-oriented approach based on balancing the resource use and production of components.

A simple variant of configuration design that can be handled with case-based methods arises when a set of arrangements is given from which a potential solution can be selected, which subsequently needs to be verified against the problem-dependent requirements and constraints.

The selection process can be simple or can be based on a more complex matching process where requirements are matched against properties of the given arrangements. Classification is a way of selecting and verifying solutions to configuration problems.

The select-and-verify problem-solving method assumes that a solution can be found among the given set of arrangements. In general case-based reasoning this assumption is relaxed, so that a solution can be found by modifying a given configuration.

Two aspects of case-based configuration are problematic: finding the best candidate from the set of given configurations and blame assignment, i.e. identification of those aspects of the solution stored on blockchain that cause violations of constraints and requirements. 

Other approaches have defined specific configuration domain oriented conceptual foundations. These include the three main conceptualisations of configuration knowledge as resource balancing, product structure and connections within a product. Despite efforts, there are few formal models of configuration aiming to unify the different formal and conceptual approaches. Such models are needed to facilitate rigorous assessments and comparison of the different approaches.

The resource concept is useful in configuration for modeling the production and use of characteristics. A component type specifies, via production definitions and use definitions, the resource types and amounts its individuals produce and use. Each production or use definition specifies a resource type and a magnitude, which states how much of the resource type component individuals produce or use. A resource type is represented as a domain predicate.

Only one resource individual with the same name as the type is needed, since a resource is not a countable entity. A production or use definition of a component type is represented using a multi-domain operational mission space predicate on component individuals of the producing or using component type, the individual of the produced or used resource type, and the magnitude. Use is represented as a negative magnitude.

The combinatorial nature of a configuration problem is well captured by constraint satisfaction problems, consisting of a set of variables with domains, on which the allowed value combinations are specified as constraints. Constraints can be used to express compatibility knowledge in configuration tasks.
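A tiny constraint-satisfaction formulation of such compatibility knowledge might look as follows; the variables, domains, and rules are invented, and brute-force enumeration stands in for a real constraint solver:

```python
# Compatibility knowledge as a constraint satisfaction problem: variables
# with finite domains, plus constraints over allowed value combinations.
# Variables and rules are illustrative; real solvers prune rather than enumerate.

from itertools import product

domains = {"colour": ["red", "blue"], "size": ["S", "L"], "trim": ["basic", "sport"]}

constraints = [
    lambda v: not (v["trim"] == "sport" and v["size"] == "S"),  # sport trim needs size L
    lambda v: v["colour"] != "red" or v["trim"] == "sport",     # red implies sport trim
]

solutions = [
    dict(zip(domains, combo))
    for combo in product(*domains.values())
    if all(c(dict(zip(domains, combo))) for c in constraints)
]
print(len(solutions))  # -> 4 of the 8 value combinations are compatible
```

The brute-force search makes the combinatorial blow-up discussed earlier visible: the 8 combinations here become millions as soon as a realistic product has a few dozen variables.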

One of the special features of configuration is that it is well-structured. Configuration problems approximate the class of problems with no indefinite goals, no unspecified constraints, and completely described objects and the relations and constraints between them. This provides a suitable platform for a formal description of configuration knowledge that is normally not achievable for many design problems.

1. “Design Control” requires, in part, that measures be established for design documentation and the identification and control of design interfaces, and that design changes be subject to design control measures commensurate with those applied to the original design.

2. “Instructions, Procedures, and Drawings” requires, in part, that activities affecting quality be prescribed by documented instructions, procedures, or drawings of a type appropriate to the circumstances, and that these activities be accomplished in accordance with those instructions, procedures, or drawings, including appropriate quantitative or qualitative acceptance criteria for determining that important activities have been satisfactorily accomplished.

3. “Document Control” requires, in part, that all documents that prescribe activities affecting quality, such as instructions, procedures, and drawings, be subject to controls that ensure authorized personnel review documents, including changes, for adequacy and approve them for release.

4. “Control of Purchased Material, Equipment, and Services” requires, in part, that measures be established to ensure that purchased material conforms to the specifications in procurement documents.

5. “Identification and Control of Materials, Parts, and Components” requires, in part, that parts and components be identified to prevent the use of incorrect or defective parts or components.

6. “Handling, Storage and Shipping” requires, in part, that measures be established to control handling, storage, shipping, and preservation of materials to prevent damage.

7. “Inspection, Test, and Operating Status” requires, in part, that measures be established to indicate the status of inspections and tests and the identification of items passing the inspections and tests.

8. “Corrective Action” requires, in part, that conditions adverse to quality, such as failures, malfunctions, and deficiencies, be identified; that the cause be determined; that the condition be corrected; and that the entire process be documented.

9. “Quality Assurance Records” requires, in part, that sufficient records be maintained so that data closely associated with the qualification of personnel, procedures, and equipment are identifiable and retrievable. “Quality Assurance Program” requires, in part, that activities affecting quality be accomplished under suitably controlled conditions.

10. “Audits” requires, in part, that a comprehensive system of audits be conducted, that the results of those audits be documented, and that corrective actions remedy deficiencies.


Top 10 Product Configuration Management Implement Tech Platform Space Foundation

11/20/2019

Here’s the definition of configuration management: it’s the discipline of ensuring that all the assets a company owns are known and tracked at all times, and that any future changes to these assets are known and tracked. You can think of configuration management as an always up-to-date inventory of your technology assets, a single source of verification.

With that defined, let’s talk about how it works in practice. Configuration management usually spans a few areas. It often relates to different ideas, like creating “pipelines” to build and test our products. Or it might relate to writing “infrastructure-as-code” to capture in code the current state of our infrastructure. 

Configuration management was traditionally a purely manual task, completed by a systems administrator. The role was a lot of manual work involving carefully documenting the state of the system. But the industry has changed completely. These changes came from the popularity of DevOps, advances in network computing, and new automation tooling.

Now that we’ve set the scene, we can dive into the details of configuration management. So let’s get to it.

What the World Looks Like With Configuration Management. Before we explore different tools for configuration management, we need to know what end results we’ll receive for our efforts.

What are the outcomes of well-implemented configuration management?

Let’s cover the benefits:

Disaster Recovery

If the worst does happen, configuration management ensures that our assets are easily recoverable. The same applies to rollbacks. Configuration management makes it so that when we’ve put out bad code, we can go back to the state of our tools before the change.
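Conceptually, that rollback behavior can be sketched in a few lines of Python. This is a minimal illustration, assuming configurations are plain dictionaries; the `ConfigStore` class is hypothetical, not a real tool.

```python
class ConfigStore:
    """Keeps every applied configuration so a bad change can be rolled back."""

    def __init__(self, initial):
        self._history = [dict(initial)]  # version 0 is the known-good baseline

    @property
    def current(self):
        return self._history[-1]

    def apply(self, changes):
        """Record a new version rather than mutating the old one."""
        new_version = {**self.current, **changes}
        self._history.append(new_version)
        return new_version

    def rollback(self):
        """Return to the previous known-good state after a bad deploy."""
        if len(self._history) > 1:
            self._history.pop()
        return self.current


store = ConfigStore({"app_version": "1.4.2", "workers": 4})
store.apply({"app_version": "1.5.0"})   # the bad deploy
restored = store.rollback()             # back to the state before the change
print(restored["app_version"])          # 1.4.2
```

Because every version is recorded instead of overwritten, "go back to the state before the change" is a one-line operation rather than an archaeology project.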

Uptime and Site Reliability

The term “site reliability” refers to how often your service is up. We have all worked at companies where each second of downtime causes disasters. 

A frequent cause of downtime is bad deployments, which are often caused by differences between production and test servers. With our configuration managed properly, our test environments can mimic production, so there’s less chance of an unwelcome surprise.
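A simple way to picture this is a drift check between environments. The sketch below is illustrative only, assuming each environment’s configuration is captured as a flat dictionary:

```python
def config_drift(test_cfg, prod_cfg):
    """Return the keys whose values differ between the two environments."""
    keys = set(test_cfg) | set(prod_cfg)
    return {k for k in keys if test_cfg.get(k) != prod_cfg.get(k)}


test = {"python": "3.11", "db": "postgres-15", "debug": True}
prod = {"python": "3.11", "db": "postgres-14", "debug": False}
print(sorted(config_drift(test, prod)))   # ['db', 'debug']
```

Running a check like this before a deploy surfaces the "unwelcome surprise" while it is still cheap to fix.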

Easier Scaling

Provisioning is the act of adding more resources like servers to our running application. Configuration management ensures that we know what a good state of our service is. That way, when we want to increase the number of servers that we run, it’s simply a case of clicking a button or running a script. The goal is really to make provisioning a non-event.
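That “non-event” provisioning can be sketched as stamping out copies of one known-good definition. Everything here (`GOLDEN_CONFIG`, `provision`) is hypothetical; a real version would call a cloud provider’s API instead of returning dictionaries:

```python
# The known-good server definition lives in exactly one place.
GOLDEN_CONFIG = {"image": "app-server-v7", "cpu": 2, "memory_gb": 8}


def provision(count, config=GOLDEN_CONFIG):
    """Return `count` identical servers built from the known-good config."""
    return [{"name": f"web-{i}", **config} for i in range(count)]


fleet = provision(3)        # scaling up is just asking for more copies
print(len(fleet))           # 3
```

Because every server comes from the same definition, adding capacity never means rediscovering how a server should look.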

These are just some of the benefits of configuration management. You’ll also experience faster onboarding of new team members, easier collaboration between teams, and an extended lifecycle of products and assets, among other benefits.

The World Without Configuration Management

Sometimes it’s easier to understand a concept by understanding its opposite. What does trouble look like for configuration management, and what are we trying to avoid? Let’s take a look.

A developer implementing a feature will commonly install some tools and deploy code. If things are sloppy, this developer probably makes the team and manager aware of the intention to come back later to clean it all up—that it’s simply a demonstration and will be rewritten soon.

But then the deadline starts pressing, and the task of going back through and rewriting the installation steps as a script gets pushed lower and lower in priority. Before we know it, a lot of time has passed, and a new developer gets put on the project. That developer is now left to pick up the pieces, trying to understand what happened. It’s quite likely they aren’t even going to touch the configuration of the server. Who knows what it would do?

The above situation is precisely what configuration management helps you avoid. We don’t want to be left in the dark as a result of developers setting up tools without proper documentation/traceability. Rather, we want to know the answers to questions like

What services are we running?

What state are those services in?

How did they get to their current state?

What was the purpose for the changes?

Configuration management can tell us these answers.

That hopefully paints a clearer picture of the problems that configuration management is trying to solve.

How Configuration Management Fits in With DevOps, Continuous Delivery, and More…

Hopefully by now you’re starting to get the hang of what configuration management is and what it aims to do. Before we go on to discuss tooling, let’s take a moment to address how configuration management fits in with other concepts like DevOps, continuous integration, continuous delivery, and Docker.

Is Configuration Management Compatible With Tool Flexibility?

Yes. Tool flexibility reflects the desire to make changes to our tools faster so that we can respond to market demands. Configuration management helps us safely manage our changes and keep velocity high.

Where Do Pipelines Fit Into Configuration Management?

Pipelines are the series of steps, or “value stream,” that take code from commit to production, which we usually automate with tooling. Pipelines typically involve steps such as unit testing code, integration testing code, and creating artifacts. So a pipeline is a form of configuration management. When we build with tools like Docker, we codify our build instructions into a Dockerfile. This allows us to better understand the dependencies of our artifacts.
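A pipeline’s value stream can be pictured as an ordered list of step functions. This is a toy sketch, not a real CI system; the step names and state format are placeholders:

```python
def unit_test(state):
    return {**state, "unit_tested": True}


def integration_test(state):
    return {**state, "integration_tested": True}


def build_artifact(state):
    return {**state, "artifact": f"{state['commit']}.tar.gz"}


# The ordered list of steps IS the codified configuration of the build.
PIPELINE = [unit_test, integration_test, build_artifact]


def run_pipeline(commit):
    state = {"commit": commit}
    for step in PIPELINE:
        state = step(state)
    return state


result = run_pipeline("abc123")
print(result["artifact"])   # abc123.tar.gz
```

Because the steps live in code, anyone can read exactly how an artifact came to exist, which is the configuration management payoff.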

Is Infrastructure-as-Code Configuration Management?

Infrastructure-as-code is the practice of provisioning all infrastructure through code, so that we have a written record of which services exist, where they are located, and under what circumstances. Configuration management can leverage that code to achieve a full understanding of all the technology assets a company owns.
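The idea can be sketched as a declarative desired state plus a reconcile step. This is illustrative only; real infrastructure-as-code tools do this at far greater depth:

```python
# The desired infrastructure is plain data, checked into source control.
DESIRED = {
    "web-1": {"region": "us-east", "size": "medium"},
    "web-2": {"region": "us-east", "size": "medium"},
}


def reconcile(desired, actual):
    """Compare desired infrastructure with what actually exists."""
    to_create = [name for name in desired if name not in actual]
    to_delete = [name for name in actual if name not in desired]
    return {"create": sorted(to_create), "delete": sorted(to_delete)}


actual = {
    "web-1": {"region": "us-east", "size": "medium"},
    "web-3": {"region": "us-west", "size": "small"},   # nobody remembers this one
}
plan = reconcile(DESIRED, actual)
print(plan)   # {'create': ['web-2'], 'delete': ['web-3']}
```

The written record answers “which services exist and why,” and the reconcile step turns the record into an actionable plan.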

Is Continuous Integration/Delivery Configuration Management?

Continuous delivery is the process of ensuring that tools are always in a releasable state. You can achieve this through heavy automation and testing. Continuous integration is the process of bringing separate artifacts together into a single location on a frequent basis, for the purposes of verifying that the code integrates properly. Continuous integration tools, which are typically servers that run automation-testing suites, act as a form of configuration management by providing visibility into the steps required to set up and configure a given artifact.

That should clear up some of your questions about how configuration management fits with some practices or ideas that you might be using or are familiar with. Any discussion of configuration management would be incomplete, however, without a discussion about tooling. So, let’s take a look at the different tools we have at our disposal for implementing configuration management.

What Are Configuration Management Tools?

There are many different tools for configuration management. In fact, it can get confusing, as there are tools that support configuration management without explicitly being configuration management tools.

For instance, Docker neatly packages up the steps needed to set up and run an application in a Dockerfile. However, people don’t often consider Docker a configuration management tool.

To make things clearer, let’s divide up the common tools that might fall under or relate to configuration management:

Configuration Management Tools

These are the tools typically associated with configuration management. They provide ways to codify the steps required to bring an asset in line with the current definition of how that asset should look. For instance, you might create a playbook that ensures that all of our X servers have Y installed on them.
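The playbook idea (“ensure all X servers have Y installed”) can be sketched as an idempotent function. `ensure_installed` is a stand-in for what a real configuration management tool would do over SSH or an agent:

```python
SERVERS = {
    "web-1": {"packages": {"nginx"}},
    "web-2": {"packages": set()},
}


def ensure_installed(servers, package):
    """Idempotently bring each server in line with the desired definition."""
    changed = []
    for name, state in servers.items():
        if package not in state["packages"]:
            state["packages"].add(package)   # a real tool would install here
            changed.append(name)
    return changed   # running it twice changes nothing the second time


print(ensure_installed(SERVERS, "monitoring-agent"))  # ['web-1', 'web-2']
print(ensure_installed(SERVERS, "monitoring-agent"))  # []
```

Idempotence is the key property: describing the state you want, rather than the commands to run, means re-running the playbook is always safe.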

Infrastructure-as-Code Tools

If our configuration management tools include the setup we need on our assets, our provisioning tools are how we get those assets. It’s this blurred line that explains why we need to bring these tools into our discussion of configuration management. And many consider it an anti-pattern to use configuration management tools for provisioning.

Pipeline Tools

We talked briefly about delivery pipelines, but implementing them requires tooling. By using tools to codify our build process, we make it easy for other developers to understand how our artifacts are modified and created, which is a form of configuration management.

Source Control Tools

While we need to codify our automation in scripts, if we don’t appropriately track the history of our changes, then we aren’t really achieving configuration management.

How Can I Get Started With Configuration Management?

Where to start? Do you begin by researching tools? Implementing some automation? Auditing your existing servers? Talking to others in your company?

Where you start with anything always depends on where you currently are. That said, only you are aware of your current situation and the limitations and resources available. Below are a couple different places you can begin your journey to effective configuration management:

Audit your tools—What tools do you currently have? What state are they in? Are they well documented? Are the setup and run instructions known for each tool?

Perform a tools assessment—Do an assessment of what tools exist on the market for configuration management. The ones listed above are a good start. Identify which tools could help you solve some of your configuration management problems.

Learn about best practices—Successfully implementing configuration management isn’t a one-and-done task. It takes time and work to continually ensure that all new tools are appropriately audited and tracked. So you might want to look into some different key concepts, such as Infrastructure-as-Code and build and release pipelines.

It’s Time For Everything-as-Code!

And that’s all! Hopefully that helps to clear things up for you about configuration management. It’s all about keeping track of the current state of your tools and infrastructure.

There are many ways to implement configuration management, and there are lots of different tools and processes. So when it comes to strategy, be sure to take your time assessing options and understanding how you want your configuration management processes to work.

It will all be worth it in the end, though. Get your configuration management right and your teams will be safer, more productive, and faster to make changes.

So from now on, audit, track, and write everything-as-code. The minimal set of activities to be covered by the configuration plan should accomplish the following:   

1. Identification and control of all tooling requirements, code, and design change interfaces

2. Control of tooling user, operating, and maintenance documentation

3. Control and retrieval of qualification information associated with tooling designs and code; configuration accounting

4. Control of tooling build, release, and delivery of products

5. For security system components, configuration items or controlled databases should include tooling requirements

6. Data files used and called directly or indirectly by tooling

7. Support tooling used in development, with exact versions

8. Libraries of tooling components essential to security

9. Tooling plans that could affect quality

10. Test results used to qualify tooling



Top 10 Artificial Intelligence Project Aims for Transparent Battlespace with Robots Provide Integrated View

11/10/2019


 
The Services have lots of ideas for AI. But they have to implement them without overwhelming troops or the network with too many apps and too much data.

There are a tremendous number of potential military applications for AI, from the battlefield to the back office. The trick is not overwhelming users with too many different specialized apps to figure out, and not overwhelming the network with too much data moving back and forth.

Army has launched a new effort, dubbed Project Quarterback, to accelerate tank warfare by synchronizing battlefield data with the aid of artificial intelligence.

The project aims for an AI assistant that can look out across the battlefield, taking in all the relevant data from drones, radar, ground robots, satellites, cameras mounted in soldier goggles, etc., and then output the best strategy for taking out adversaries with whatever weapons available. 

Quarterback, in other words, would help commanders do two things better and faster: understand exactly what’s on the battlefield, and then select the most appropriate strategy based on the assets available and other factors.

Just the first part of that challenge is huge. The amount of potentially usable battlefield data is rapidly expanding, and it takes a long time to synchronize it.

Building Battle Strategies

AI can quickly calculate potential lines of sight between different points on the battlefield. That’s why the AI Task Force is working on a “fields of fire” AI that uses the new IVAS targeting goggles to determine what area each soldier can cover with their weapons.

The AI tools would compile that data for the whole squad, platoon, company, or even battalion, giving commanders a map of exactly what approaches were defended and where any potential blind spots lie. Another potential application of this same technology would be to analyze potential fields of fire from suspected or confirmed enemy positions to identify the safest routes past them.
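The compilation step can be pictured as a set union over each soldier’s covered sectors, with blind spots as whatever remains uncovered. The sector labels and coverage data below are invented purely for illustration:

```python
# Simplified compass sectors standing in for real terrain analysis.
ALL_SECTORS = {"N", "NE", "E", "SE", "S", "SW", "W", "NW"}

SQUAD_COVERAGE = {
    "rifleman-1": {"N", "NE", "E"},
    "rifleman-2": {"S", "SW"},
    "gunner-1":   {"E", "SE", "S"},
}


def blind_spots(coverage, sectors=ALL_SECTORS):
    """Sectors that no one in the unit can cover with their weapon."""
    covered = set().union(*coverage.values())
    return sectors - covered


print(sorted(blind_spots(SQUAD_COVERAGE)))   # ['NW', 'W']
```

The same aggregation scales from squad to battalion: union the individual fields of fire, and whatever is left over is where a commander should look hardest.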

Ultimately, the Army is looking for a lot more than a data visualizer. They want AI to help with battle strategy.  “How do you want to make decisions based on battlefield data? How do you want to select the most efficient way to engage a target, based on probability of hit, probability of kill? Do you have indirect fire assets available to you that you can request? 

Do you have real assets that you can request? Can I use my wingman… or does the computer then recommend, ‘Red One, our wingman should take that target instead of you, for x, y reasons’? That goes back to that concept of how you make a more informed decision, faster. And who is making that decision could be a tank commander or it could be a battalion commander.

“Sensor to shooter? It’s the network. The synthetic training environment? It’s the network. IVAS is the network. If there’s one thing that’s cross cutting everything we’re working on, it’s the network. The bandwidth requirements, the latency we can’t have, there’s a lot of technical hurdles to overcome with that.”

Data from Sensors

Shooting down drones, aiming tank guns, coordinating resupply and maintenance, planning artillery barrages, stitching different sensor feeds together into a single coherent picture, analyzing how terrain blocks units’ fields of fire and warning commanders where there are blind spots in their defenses are all uses that will be tested.

“All the vast array of current and future military sensors, aviation assets, electronic warfare assets, network assets, unmanned aerial and unmanned ground systems, next-generation manned vehicles, and dismounted soldiers will detect and geolocate an enemy on our battlefield.”

The AI Task Force is focused on intelligence support to operations, where it is using “offboard data” – that is, data gathered from a wide range of sensors, not just those onboard one aircraft or vehicle – to look for targets.

Using Data Effectively

We need an AI system to help identify that threat, aggregate data on the threat with other sensors and threat data, distribute it across our command and control systems and recommend to our commanders at echelon the best firing platform for the best effects, be it an F-35, an extended range cannon or a remote controlled vehicle.

The idea is to pull data from a host of sources; curate it by putting it into standard formats, throwing out bad data, and so on; and create a central repository — not only for the data but for a family of AI models trained on that data. Users across the Army could download those proven algorithms and apply them to their own purposes.
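The curation step described here can be sketched as a small normalization pass. The field names and record format are assumptions for illustration only:

```python
def curate(raw_reports):
    """Normalize field names and types; discard records missing a position."""
    clean = []
    for r in raw_reports:
        lat = r.get("lat") or r.get("latitude")
        lon = r.get("lon") or r.get("longitude")
        if lat is None or lon is None:
            continue                      # bad data: no usable geolocation
        clean.append({"lat": float(lat), "lon": float(lon),
                      "source": r.get("source", "unknown")})
    return clean


raw = [{"latitude": "34.1", "longitude": "-77.9", "source": "drone-2"},
       {"lat": 34.2, "lon": -77.8},
       {"source": "radar-1"}]            # dropped: no position
print(len(curate(raw)))                  # 2
```

Once every source is forced into one standard shape, models trained on the repository can be reused across the force instead of rebuilt per sensor.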

This means a lot of data moving back and forth over the Army network. Modernizing that network is a major Army priority and the service is developing major increases in satellite bandwidth for its upcoming update. But the AI Task Force is also looking to reduce those data demands wherever possible.

Network Challenges

There are concerns about getting all of those systems to link up and share massive amounts of data. “The thing that keeps us up at night is the network. It’s not problems within the network; it’s that we’re relying on the network for so much.”

“There are some huge autonomy challenges, but one of the greatest challenges we’re going to have is the network. On the ground, when you have robots wanting to talk to other robots, wanting to talk to ground vehicles, and you go behind the hill, you go behind the rock, you go down in the gully; you’re in a city and you go around the corner of the building.”

The Army has had bad luck trying to institute large-scale data standards. Case in point: the Joint Tactical Radio System program spent billions in a fruitless attempt to buy a single radio to serve all of its communications needs. 

Some time ago, the military mandated the Commercial Mobile Device Implementation Plan — essentially an effort to lower its data-transfer costs by using commercial networks for unclassified data. But as the current debate over 5G networking shows, even commercial cellular providers are having trouble getting ahead of what they see as future demand.

“This is commercial technology that everyone uses and relies on, and so we are trying to take some of that and pass full-motion video in some cases. This is a big technological challenge, and everyone is going to say, ‘I’ve got a radio that will do it.’ Fine, as long as you’re 100 feet apart and can see each other. So that’s going to continue to be our biggest challenge, because we just haven’t fixed the physics yet.”

Techniques are being developed to inspect networks and see why these networks might come up with a certain recognition or solution, or something like that.

There’s important emerging research in fencing off neural networks and deep learning systems while they learn, including neural networks in robots: “How we can put this physical structure or constraints into these networks so that they learn within the confines of what we think is physically okay.”

Troops in the Loop

Commanders and soldiers will have to become more comfortable with robots and software that produce outputs via processes that can’t be easily explained, even by the programmers that produced them.

You could feed the AI surveillance imagery of “a forested area” and ask it, “show me every tank you see in the image.” The AI would rapidly highlight the targets – far faster than a human imagery analyst could go through the same volume of raw intelligence – and pass them along to troops to take action.

Military networks have so many thousands of users and devices that just figuring out what is connected at any given time is a challenge, let alone sifting through the vast amounts of network traffic to spot an ongoing intrusion. For all its faults, AI is much better than humans at quickly sorting through such masses of data.

The way to develop that trust is the same way humans develop trust in one another, slowly and with a lot of practice. “Most humans are not as explainable as we like to think… If you demonstrate to a soldier that the tool or the system that you are trying to enable them with generally functions relatively well and adds some capability to them… they will grow trust very, very rapidly.” 

But when it comes to big decision aids, “that will be much harder. You trust something because it works, not because you understand it. The way that you show it works is you run many, many, many tests, build a statistical analysis, and build trust that way.

That’s true not only of deep learning systems but of other systems as well that are sufficiently complex. You are not going to prove them correct. You will need to put them through a battery of tests and then convince yourself that they meet the bar.”

Implementation Directions

In the meantime, the Army will work with the network it has until more capability comes online at a price it can afford. “You can’t just walk away from what you had, because we invested a lot of money into the network. And so we are thickening, augmenting, and improving the network with commercial solutions, in short increments, so you can capture the very best technology you possibly can.”

Consider ride-hailing apps as an everyday example. AI matches drivers with customers, but each individual user isn’t actually sending or receiving all that data. It boils down to “here I am, I want a ride to there” or “here I am, I’m willing to give someone a ride this far.” Likewise, a combat vehicle could transmit “here I am, I have this much ammunition, I have this much fuel. Please send supplies ASAP.”

You can extend this principle to even more complex functions like spotting targets or predicting breakdowns. If you put enough processing power and software on the frontline platforms themselves – a concept called “edge computing” – they can analyze complicated input from their own sensors and only send a summary report back over the network, rather than having to transmit raw data to some central supercomputer for analysis. 

A vehicle could send, “I need X maintenance in Y hours,” for example, rather than every bit of data collected by the diagnostic sensors on its engine. Or a drone could send “I see tanks here, here, and here, and a missile launcher there” rather than transmit full-motion video of everything in view.
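That edge-computing pattern (analyze locally, transmit only a summary) can be sketched like this; the detection format is invented for illustration:

```python
def summarize_detections(frames):
    """Reduce per-frame detections to a short report instead of raw video."""
    counts = {}
    for frame in frames:
        for obj in frame:
            counts[obj] = counts.get(obj, 0) + 1
    return {"detections": counts, "frames_analyzed": len(frames)}


# The raw frames stay on the drone; only this compact summary is transmitted.
frames = [["tank"], ["tank", "truck"], [], ["tank"]]
report = summarize_detections(frames)
print(report)   # {'detections': {'tank': 3, 'truck': 1}, 'frames_analyzed': 4}
```

The bandwidth saving is the whole point: a few hundred bytes of summary replaces gigabytes of full-motion video crossing a contested network.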

Now, not every present-day weapons system is smart enough to do the analysis by itself. Many systems don’t even have sensors to collect data, and installing them would be prohibitively expensive. “We can’t afford to retrofit all our Humvees from 30 years ago.”

So one of the task force’s priorities will be to ensure that all the Army’s new weapons systems, from long-range missile launchers to targeting goggles, can collect and transmit the right kind of data on their own performance. That means embedding enough computing power to perform a lot of analysis at the edge, rather than having to transmit masses of data to a central location.

Big data gets a lot of attention and can put a lot of strain on networks. But for many applications, we can just start small.

1. Integration with existing processes

Every process running in a business is different from the others. So an automated process should integrate seamlessly with the existing processes. The integration can happen at various levels of the automation process, but it is essential that all automated processes work smoothly with their environment.

2. Consistency

The automated processes should be consistent with other processes and their corresponding inputs and outputs.

3. Step by step approach

It is not necessary to perform the automation of all the necessary tasks at once. Depending on funds, resources and overall logic of the business processes, you can divide the job into several stages.

4. Process flexibility

Over a period of time, business processes tend to change. The automation solutions should be flexible enough to incorporate and reflect these changes.

5. Simplicity

The aim of automation is to make business processes simpler, not more complicated. If, after automation, the process still requires a lot of human intervention, you must understand that something is wrong.

6. Training the staff

For efficient functioning of business processes, it is important for the staff to have a complete understanding of the workflow. The better they know a process, the better it will function.

7. Reduce business process automation operational costs

Running a business is costly. Automation can substantially help in reducing costs and increasing profitability. However, many people find it difficult to choose tasks for automation. A good rule of thumb: any task that doesn’t require human judgment, just a series of steps that can be predicted using computer logic, is a candidate for automation.

8. Complete conventional tasks 

Paper processes can be completed using automation. These days, you can digitally sign documents and contracts and easily manage organizational expenses using various tools available on the market, making it easier to manage the flow of work.

9. Avoiding mistakes

Errors cost businesses a ton of money every year: the greater the error rate, the greater the cost of rectification. A small misconfiguration can expose vulnerabilities to attackers or disrupt business processes.

Automation can help you improve efficiency and ensure accuracy. You can take basic tasks out of human hands and free people to focus on strategy and other more important functions.

10. Deliver superior customer service

With major advancements in technology, the way customer service is provided has changed a lot. Using artificial intelligence, you can create great customer service that answers customers’ questions directly and provides the team working at the back end with the required information and resources.
​
Automated customer service can help you greatly reduce the response time. Time zone constraints do not affect automated customer service. Robots can help in responding to customer queries almost instantly, something which human support cannot guarantee. Thus customers no longer have to wait for hours just to get a response and virtual assistants will be able to predict what customers are looking for by tracking their actions and understanding their needs. 



Top 10 Artificial Intelligence Enables Far Ranging Unmanned Vehicle Manoeuvres Support Logistics Operations

11/10/2019


 
Marine Corps is identifying ways to reduce manpower-intensive logistics requirements; reducing and removing manned transportation assets is critical. Unmanned air, ground, surface, and subsurface technologies “are the way of the future and will … improve the sustainment of our forces.”

Robotic systems can help reduce warfighters’ presence on the ground in dangerous areas. Robots going into these areas “have the dexterity and the capabilities to clear rubble.” These technologies have features that can be directly applied to robotic systems on the battlefield.

Unmanned platforms could be used to complement Marines working at forward arming and refueling points in remote locations. Such technology is already present in industry, like exoskeletons that provide support to manufacturing lines. 

“That’s the small, lethal footprint required for the logistics concept we seek.” The Navy is also examining how it can integrate unmanned vessels into its logistics fleet.

Specifically, the Navy and Marine Corps are looking at how they can leverage medium- and large-size vessels for logistics operations. “We don’t have specific requirements or specific plans as of this moment, but that’s going to be a big growth area for the Navy and Marine Corps over the next couple years.”

Additionally, the Navy is working to ensure that its logistics enterprise is able to support all warfighting functions “with increased speed, agility and survivability.” The service is employing multiple lines of effort to do so, which include diversifying distribution, improving sustainment, enabling logistics awareness, optimizing installations and supporting sustained operations.

Future fighting environments will push sea bases further from fleet Marine forces ashore. That means there will be a need for long-range connector capabilities that can operate from sea bases hundreds of miles off the coast. The services will also need to make their equipment lighter and more economical to use.

Robotic Applications

The Army’s future plans rely a lot not just on AI but also on ever-more-intelligent ground robots. Right now, a single Army operator can control about two ground robots. The Army’s plans are to get that ratio to one human to a dozen robots. 

That will require those future ground robots to not just collect visual data but actually perceive the world around them, designating objects in their field of perception. Those robots will have to make decisions with minimal human oversight as well, since the availability of high-bandwidth networking is hardly certain.

Teams ran robotic experiments in which ground robots demonstrated that they could collect intelligence, maneuver autonomously, and even decipher what it meant to move “covertly,” with minimal human commands. The robots learn and apply labels to objects in their environment after watching humans.

Army is determined to field a mid-sized combat robot vehicle, but the prototypes are outstripping the datalinks that would connect them.

One prototype is a 10-ton, 20-foot electrically powered treaded minitank that can carry a small aerial drone on its back and can pop a smaller ground robot out of a front compartment. Army leaders say that they’ve also been experimenting with battle concepts that combine soldiers, unmanned tanks, and small UAVs.

Beyond its quest for semi-autonomous ground robots, the Army is looking into more and more data-intensive gear, such as the Integrated Visual Augmentation System, or IVAS, a set of augmented-reality goggles intended to give soldiers a lot of visual real time data to help with tasks like targeting during operations, and also with training and simulation during downtime. 

That’s also supposed to hook up with data feeds from tanks or other robots. But the rush to develop and field the newest tech concepts, and to integrate heavy amounts of data into all facets of operation, have driven the Army’s data needs skyward. 

Open architecture will allow the Army to upgrade its communications and data networking as needed, as well as to incorporate higher levels of autonomy as those capabilities emerge. Teams experimented with integrating ground and aerial robots with the Ripsaw, but not yet in a communications-denied environment, in part because the Army has not yet published its specific needs for future mid-sized robot combat vehicles.

Robots may help extend solid data connectivity further afield, serving as flying or rolling cellular towers in a moving mesh network. “We’re also looking at unmanned vehicles to expand the network, to expand the line of sight, so we can push these robots out as far as possible, so that they go into the riskiest places on earth instead of the soldier.”

The Manoeuvre Roadmap will put forward strategies by listing the types of AI that will be needed year to year to support military strategy and to maintain a firm understanding of what AI is and how it will be used to benefit the organisation. This understanding should go beyond buzzwords and definitions.

The trickier and more relevant question for many is: how will it do so, when will it do so, and in which markets and applications will it have the most impact? Certainly, professional computing applications and virtual workspaces are among those most clearly in the crosshairs of machine learning.

“There’s a reason for the confusion between artificial intelligence and machine learning.”

Vendors are rushing AI solutions to market before the ultimate decision-makers and buyers are up to speed on what they need. But the plot thickens when military leaders are asked about the specifics of what AI can accomplish, and how AI differs from machine learning and deep learning.

Very quickly, technology leaders recognize that they need to put AI and its various subcategories (e.g., machine learning, deep learning) into practice, and into a common business vocabulary that everyone can understand.

The first step is communicating the definitions of AI, machine learning (ML), and deep learning. There is some argument that AI, ML, and deep learning are each individual technologies, but leaders should instead view AI, ML, and deep learning as successive stages of computer automation and analytics built on a common platform.

On the first tier of this platform sits AI, which analyzes data and quickly delivers analytical outcomes to users. 

Machine learning sits on the second tier: an application of AI that not only analyzes raw data but also looks for patterns in the data that can yield further insights. 

Deep learning is a third-tier application that analyzes data and data patterns, but it goes even further: the computer also uses advanced algorithms developed by data scientists that ask more questions of the data, with the ability to yield even more insights.

The best way to demonstrate these different layers of increasingly complex analytics is by finding a business example that can show the benefits to the decision makers in the business.

Let's take the example of traffic planning.

Tier one: AI

You develop an AI application that tells your traffic engineers and planners where the major traffic congestion points are located in the city. This assists them in planning for road repairs, stop lights, and other infrastructure that, hopefully, can relieve congestion in certain areas.

Tier two: Machine learning

You further develop your AI/analytics so that it also looks for patterns in the data. For instance, it notices the traffic at certain intersections is most congested in the morning between 6 am and 8 am, or that traffic queues up in the evening, ahead of a sporting event.

Knowledge of the situation gives planners and engineers more insight because now they can plan not only for traffic snarls but also for future events like football and hockey games.

Tier three: Deep learning

Deep learning is where data analytics moves beyond raw data and data patterns. Deep learning adds specific algorithms that data scientists develop to further expand the querying and insights derived from the data.

Algorithms that could be added to the traffic analysis might include: What areas of the city will see the greatest population growth over the next ten years? Or, which roads will need major repairs in the next five years? Or, do weather projections say that we will have more or less snow over the next five years? By adding these algorithms on top of pattern and data analyses, users get a more complete picture of the situation they are trying to act on and assess.
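To make the three tiers concrete, here is a minimal Python sketch of the traffic example. All intersection names, counts, and growth factors are made up for demonstration; real tier-two and tier-three systems would fit actual ML models, so simple averaging and a growth adjustment merely stand in for them.

```python
# Illustrative sketch of the three analytics tiers on synthetic traffic data.
from collections import defaultdict
from statistics import mean

# (intersection, hour_of_day, vehicle_count) observations -- hypothetical
observations = [
    ("5th & Main", 7, 410), ("5th & Main", 7, 395), ("5th & Main", 13, 120),
    ("Oak & Pine", 7, 150), ("Oak & Pine", 18, 480), ("Oak & Pine", 18, 460),
]

# Tier one (AI/analytics): where are the congestion points overall?
totals = defaultdict(list)
for site, hour, count in observations:
    totals[site].append(count)
tier1 = {site: mean(counts) for site, counts in totals.items()}

# Tier two (machine learning, simplified): *when* is each site congested?
by_hour = defaultdict(list)
for site, hour, count in observations:
    by_hour[(site, hour)].append(count)
peaks = {key for key, counts in by_hour.items() if mean(counts) > 300}

# Tier three (deep learning, stand-in): fold in an extra signal, such as
# hypothetical five-year population-growth factors, for forward planning.
growth = {"5th & Main": 1.10, "Oak & Pine": 1.25}
tier3 = {site: avg * growth[site] for site, avg in tier1.items()}

print(sorted(peaks))  # the morning rush at 5th & Main, the evening rush at Oak & Pine
```

Each tier reuses the previous tier's output, which mirrors the "successive stages on a common platform" framing above.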

Putting it all together into an AI roadmap

Being able to break down the differences between AI, machine learning, and deep learning is important because it shows Leaders not only the different tiers and capabilities of AI automation but also the increasing levels of business insights that can be gained from it. 

By visualising these different AI tiers into a strategic roadmap, an organisation can measure tangible results for mission objectives.

So a city, for instance, can say that next year it will have a comprehensive understanding of its road system, and where the traffic congestion is located. In year two, the city will be able to predict traffic jams from rush hour and special event traffic and be able to proactively inform travelers to use alternate routes. And in year three, the city will be able to develop plans for the future by assessing population/traffic growth and infrastructure repair shutdowns.

A new contract to apply artificial intelligence to Marine Corps maintenance could streamline logistics and help lessen the dependence of fighting forces on long supply lines. Ultimately, AI could enable the far-ranging manoeuvres envisioned by the multi-domain operations concept.

Most debate about military AI centers on robots, but professionals usually talk logistics. Without fuel, ammunition, spare parts, and maintenance, no weapon, manned or unmanned, is going anywhere.

What’s more, while AI has made great progress in recognising objects/targets and navigating the physical world, autonomous combat robots are far in the future.

Marines will apply AI-driven “predictive maintenance” to part of their aging fleet of troop carriers equipped with diesel engines, heavy-duty transmissions, and other features. The service has hundreds of millions of hours of metrics on diesel engines alone, and in the world of AI machine learning, the more metrics you have, the more accurate your predictions get.
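The core idea behind that prediction loop can be sketched in a few lines: flag engine readings that drift far from a historical baseline. The sensor values and the z-score threshold here are illustrative assumptions, not actual Marine Corps data or the contracted method.

```python
# Minimal sketch of AI-driven predictive maintenance: flag engine readings
# whose z-score against the fleet's historical baseline exceeds a threshold.
from statistics import mean, stdev

# Hypothetical historical coolant temperatures (degrees C)
baseline_temps = [88, 90, 91, 89, 92, 90, 88, 91]
mu, sigma = mean(baseline_temps), stdev(baseline_temps)

def needs_inspection(reading: float, threshold: float = 3.0) -> bool:
    """Flag a reading whose z-score exceeds the threshold."""
    return abs(reading - mu) / sigma > threshold

print(needs_inspection(90.5))  # typical reading -> False
print(needs_inspection(104))   # far above baseline -> True
```

With hundreds of millions of engine-hours, the baseline statistics become very well estimated, which is why more metrics mean more accurate predictions.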

The concept of self-driving cars has been around for years, but only recently have increasing advances in networking, satellites, and laser equipment made this dream a reality. 

Vehicle manufacturers realized that they could use camera systems to relay data to an onboard computer that would process images of the road and create responses. Although we do not have robotic vehicles filling our roadways as of yet, some vehicles already contain numerous autonomous features that make driving easier and safer than ever before. Some models offer assisted parking or braking systems that activate automatically if they sense an issue. Vehicles can sense lane position and make adjustments there as well.

1. Reduce the number of accidents that occur on roadways.

When we are riding along in a driverless car, what happens on the road is no longer subject to the numerous bad behaviors that human drivers exhibit as they attempt to reach their destination. The great majority of automobile crashes are the result of human error. If computers are in more control, there could be fewer behavioral incidents and fewer costs associated with damage, and overall driving times could be reduced.

While autonomous trucks aren't yet completely safe and accident-proof, especially in certain weather and road conditions, various reports claim that they will lead to a significant decrease in accidents compared to human-driven trucks. 

2. Driverless cars could work with higher speed limits.

As human populations move toward the use of driverless cars, it may become possible to raise the speed limit that vehicles can drive on extended trips. The computers would calculate the operations of the automobile to ensure the occupants remain safe. That means passengers could take care of other needs while the vehicle does the work of transportation without compromising the safety of the people who are on the roadways.

Higher speed limits might be considered as an option if more people are using self-driving cars. Since the computers calculate the operation of the vehicle safely, driving time could be reduced by the faster speeds allowed on the road.

3. It could reduce the amount of fuel that we consume for transportation needs.

Computers would make it possible for driverless cars to maximize the fuel economy of every trip in multiple ways. Platooning would allow vehicles to draft off one another, reducing the work their engines must do while on the road. Real-time updates on driving conditions could help automobiles avoid high-traffic areas, places where weather disruptions are possible, and other potential road hazards. Because these vehicles would likely communicate with each other on the roadway, they could ensure that everyone reaps the rewards of this advantage while still providing a higher level of safety.

4. Driverless vehicles could reduce commute times.

Because a driverless car would likely communicate with the other vehicles around it and with the roadway, it would know where to maximize speed and movement to ensure the quickest possible commute. Other automobiles would react when a vehicle needed to exit a highway, for example, preventing the need to force oneself into lanes, cut off drivers, or miss an exit. Vehicles could travel in bumper-to-bumper platoons while automatically merging to accommodate oncoming traffic.

5. Decrease in Traffic Jams and Congestion

Another often overlooked advantage that autonomous trucks can provide is that they can greatly reduce traffic jams and congestion, especially on highways. That's because autonomous trucks will be programmed to take the most optimal route and will also not be susceptible to delayed human reactions that often lead to traffic backups.

Because self-driving cars are expected to be involved in far fewer accidents, their potential to ease congestion is high. Not only that: because self-driving cars can communicate with each other, they could reduce the need for traffic signals. By driving at a slower rate but with fewer stops, better-coordinated traffic would lead to less congestion.

7. Cost Savings

An advantage is that autonomous trucks can produce cost savings for the companies that use them. Of course, there will be a large upfront investment in buying a vehicle that's capable of autonomous driving, but over time this is expected to pay off through a combination of increased efficiency and fewer drivers who have to be paid.

8. Nearly No Error

The incredibly complicated technology behind self-driving cars lets the onboard computer make hundreds of calculations a second. These include how far you are from objects, current speed, the behaviour of other cars, and location on the globe. These highly accurate readings have virtually eliminated driving errors for test cars on the road; most reported accidents so far have occurred while human drivers were in control.

A computer might not be 100% perfect, but it is far closer to that standard than a human driver could ever be. By using complex algorithms that guide the vehicle in the correct lane, calculate the appropriate stopping distance, and other information that’s available while on the road, there is a significant decrease in the risk of an automobile accident when using this technology.

9. Less space in between vehicles

Autonomous cars would allow vehicles to ride closer together, putting more cars on the road with actually less traffic. We could also drive faster with driverless vehicles on the road.

Because computers would be handling the driving responsibilities for long-distance trips, the design of our highway system could support a higher speed limit on straight stretches of road. That means we could arrive at our destinations faster without increasing the risk of an accident. Each autonomous vehicle could calculate its distance from surrounding vehicles and determine the highest safe speed.
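That speed calculation can be sketched as basic kinematics: the highest safe speed is the one whose reaction-plus-braking distance still fits inside the gap to the vehicle ahead. The deceleration and reaction-time figures below are illustrative assumptions, not values from any real vehicle controller.

```python
# Back-of-envelope sketch: pick the highest speed that still lets a vehicle
# stop within the measured gap ahead. Assumed figures: 7 m/s^2 braking
# deceleration, 0.1 s computer reaction time.
import math

def stopping_distance(v: float, decel: float = 7.0, react_s: float = 0.1) -> float:
    """Distance covered during reaction plus braking, in metres."""
    return v * react_s + v ** 2 / (2 * decel)

def max_safe_speed(gap_m: float, decel: float = 7.0, react_s: float = 0.1) -> float:
    """Largest v (m/s) with stopping_distance(v) <= gap_m.

    Solves v**2 + 2*decel*react_s*v - 2*decel*gap_m = 0 (positive root).
    """
    return -decel * react_s + math.sqrt((decel * react_s) ** 2 + 2 * decel * gap_m)
```

Under these assumed values, a 50 m gap allows roughly 25.8 m/s (about 93 km/h); a larger gap allows a proportionally higher speed, which is the logic behind platoon spacing.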

10. Sensor technology 
​
Sensors could potentially perceive the environment better than humans can, seeing farther ahead, seeing better in poor visibility, and detecting smaller and more subtle obstacles. Plus, several cameras might be used at once and arranged to minimize blind spots, so the vehicle could be more aware and vigilant than a human driver ever could be.


0 Comments

Top 10 Artificial Intelligence Integrate Tactical Network Improves Capabilities of Manoeuvre Formations

11/10/2019

1 Comment

 
​AI can maximize integration of complex communications networks for warfighters and ensure the message gets through to save the battle and win the war. 

There are significant risks to consider with such a powerful capability. A network attack on, or data breach of, the Artificial Intelligence capability could expose every tactical, operational, and strategic vulnerability to enemy exploitation. Another risk is loss of control of the Artificial Intelligence system, causing it to deny all warfighter communications capabilities; such an occurrence would rewind the clock, returning communications management to the largely human-led process it is today.

The Artificial Intelligence capability concept is ahead of the technology. The capability does not exist today, and critical supporting research and resourcing decisions must still be made to even pursue something as powerful as an artificial intelligence capability.

There is plenty of reason for optimism about future autonomous AI systems. The Artificial Intelligence capability concept leverages the link between AI and deep learning. AI systems trained to learn and think independently will likely dominate the field of AI, and the key to deep learning is access to large, high-quality datasets.

High-speed robot and autonomous capabilities will become features of future battlefields. The Army needs these capabilities to manage all warfighting functions, especially command and control, to win the next war. It depends on systems that rapidly and reliably communicate with our troop formations. An Artificial Intelligence capability offers an ‘always-on’ connection to command and control these complexities. 

Even lower fidelity versions can help with training and mission planning. On those lines, the synthetic training environment will start distributing One World Terrain as they work out the data management and data storage hurdles.

The trainer is looking at alternatives both for the engagement system that runs the shooting devices and the devices themselves. As the Army works to deliver new synthetic training technology, it recently set up a new network to more effectively integrate software into the program.

The synthetic training environment is a 3D training and mission rehearsal tool that brings together live, virtual, constructive and gaming environments to improve soldier and unit readiness. 

The new training environment is network dependent and will interface with operational networks and network-enabled platforms. For now, the cross-functional team is working to ensure it can stay on a glide path to reach initial operating capability in line with the current schedule.

Behind the eye-grabbing new night vision, more lethal artillery and faster aviation being developed by the Army’s new cross functional teams sits a group that gets little attention but without which many soldiers would be grabbing blindly in the dark on the battlefield.

The Assured Position, Navigation and Timing team has the task of figuring out how to move friendly troops safely in hotly contested terrain, and also to help connect a host of systems to put the right effects at the right time and place in the most precise way.

Teams have built a camera that sees through dust, helping pilots better guide and land in desert environments.
Something that’s taken for granted, the signal strength on a cell phone, for instance, isn’t something that’s readily available to soldiers using navigation or even targeting equipment. It takes specialists in certain areas to even detect and evaluate if a system is being jammed.

Efforts are working toward a way for any user of any such equipment to see their own signal strength and jamming concerns. Teams are working mounted, dismounted and situational awareness efforts currently. Soldiers will, for the first time, have an alternate way to navigate built into their vehicles that’s not GPS.

There’s also work to consolidate systems. Past equipment development often meant that each system had its own GPS. That’s created scenarios in which a single vehicle with multiple tracking, targeting and communications systems could have seven or more GPS antennae poking out.

If those efforts are successful, it can give tactical-level units beyond line-of-sight capabilities for a variety of uses without GPS. One piece of kit that’s currently under development might not seem, at first glance, like something that most infantry and close combat troops would put at the top of their list. But they should.

That’s because the Adaptive Squad Architecture program is an effort to tie together all of the current and future tech soldiers wear or carry and more. By powering all devices and using one processor, the move could streamline and lighten kit for the individual soldier.

What the architecture will do for the squad is manage power and signals, and network what a soldier sees, from their goggles to their weapon to their radio, and what they're linked into both within and outside of the squad.

And it means more than just what that soldier can see. It has effects on what their commanders can do, too. We give commanders more options than they ever had before, because through the data analysis a commander can best spread the load of equipment through the formations and account for the flood of variables that face the soldier and squad.

Essentially, the architecture becomes kind of a smartphone hub for the soldier and squad that can take on applications, algorithms and other measures to not just evaluate but also improve performance.

Teams will soon launch “Watchtower,” an Army app store similar to what smartphone users access. Within it, a lot of experimentation can take place, and leaders can tailor apps to their unit configurations and specific missions.

One example is soldier load.

Future ammunition is expected to be lighter. So the commander has a decision to make: carry the same weight but with more rounds, or carry the same number of rounds at reduced weight, keeping fatigue down and performance up?

The system can also use seemingly endless data to see if different weapons systems or communications gear match up, are redundant or ineffective.

Does the squad leader need an M4 or an M249, based on the mission? Does the squad need to carry HF communications gear or can it make do on this mission with satellite and VHF comms only? Those are some examples that the data can help mission planners explore.
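The ammunition trade-off above can be expressed as a tiny calculation of the kind such a planning app might run. All round weights and counts here are invented for illustration, not real specifications.

```python
# Hypothetical sketch of the soldier-load trade-off: given a fixed weight
# budget, carry more of the lighter future rounds, or the same round count
# at reduced weight. Per-round weights are made-up illustrative figures.
ROUND_WEIGHT_CURRENT = 0.012  # kg per current round (hypothetical)
ROUND_WEIGHT_FUTURE = 0.008   # kg per lighter future round (hypothetical)

def loadout_options(current_rounds: int) -> dict:
    """Return both sides of the commander's decision for a given loadout."""
    budget = current_rounds * ROUND_WEIGHT_CURRENT  # today's ammo weight
    return {
        "same_weight_more_rounds": round(budget / ROUND_WEIGHT_FUTURE),
        "same_rounds_weight_saved_kg": round(
            current_rounds * (ROUND_WEIGHT_CURRENT - ROUND_WEIGHT_FUTURE), 3),
    }

print(loadout_options(210))
```

With these assumed weights, a 210-round loadout becomes either 315 lighter rounds at the same weight, or the same 210 rounds with 0.84 kg saved: exactly the decision space the text describes.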

And beyond the squad, the aim is to have squad “talk” wirelessly with platforms that the soldiers are riding in, from Bradleys and Strykers to new helicopters and future vehicles. Developers envision a day when the soldier boards a chopper or vehicle and the systems automatically start charging batteries and uploading their data while downloading new information from the larger network.

Squads to tap into cellular networks and battle-track on tablets at network trials. The gear will include lighter and faster communication equipment, battle-tracking applications on tablets and new ways to communicate with one another, as well as partner forces. 

By using a secure-but-unclassified architecture, soldiers can tap into commercial cellular networks, like 4G, and a greater range of wavelengths, depending on the electronic threat environment.

Field teams are getting new capabilities that include advanced networking waveforms, which significantly improve mobile ad hoc networking capability at the lowest tactical echelon.

The soldiers will begin collective training at the squad-level to view live-streaming full-motion video from unmanned aerial vehicles through an app on approved smartphones. Dismounted troops will get 3D digital maps and the ability to send precision target coordinates.

“Ultimately it’s going to take a hard look at whether we have the right radios in the right places, and whether the new capabilities are really adding enough capability to justify price points and the overall cost of the networks.”

It’s not a new or separate network, but instead uses existing devices and gateways alongside commercial off-the-shelf products to make sure communications can continue even in limited bandwidth environments and amid electronic attacks.

“One thing we know for sure is the capabilities we’re pursuing are better, definitely better than what we had. And soldiers that don’t have it today are champing at the bit to get it.”

But the amount and specific types of gear needed for infantry, Stryker and armor brigade combat teams do vary and so each type will require different approaches to fielding equipment.

“User feedback, leader feedback and running these through a series of exercises, to include force-on-force combat training center rotations, will give us the kind of feedback we need to make the best decisions.”

“This environment allows us to secure the network but still pass data to where, if it’s perishable, time-sensitive data, we can actually share it with unified action partners. That allows us to connect better with teammates and gives troops more waveform options, regardless of who they’re working with.”

“If we go somewhere where we believe we can leverage the infrastructure, we can and we will. But if there’s a concern ... we have the ability to do a workaround. Being able to show up in an area and rapidly ‘plug and play’ into the existing infrastructure is key.”

1. Improved Analytics and Better Data

Realistically, you can’t track much data pertaining to human workers. Sure, you can measure their output and productivity in a variety of ways, but the information you collect is not always reliable either.

It’s the complete opposite with robotics and advanced automation systems. As they work, they collect data about anything and everything, which serves as an incredible database of information that you can put to use. It leads to better decision making with more successful outcomes. It reveals a lot more about your processes and it can help you unlock new avenues of improvement. Essentially, the more data you have the better. But it also has to be accurate, error-free, and reliable and that’s something automated systems can definitely provide.

2. Increasing productivity

A robot has the ability to work at a constant speed, unattended, 24/7. That means you’ve got the potential to produce more. New products can be more quickly introduced into the production process and new product programming can be done offline with no disruption to existing processes. 

Robots have been designed to handle routine functions while maintaining long-term precision, increasing productivity without affecting quality once automated workers are in place. Warehouse robots can deliver significant value by improving supply chain productivity. They can increase accuracy, accelerate routine processes, and, when used side-by-side with employees, solve a number of common problems.

Conventional operation consists of manual labor, such as product handling, cart pushing and equipment operation: employees walk the entire pick path and then guide the pushcart to the packaging area for staging. With an autonomous, smart-picking solution, employees remain in specific zones while the robotic units travel between pick and put operations. Because each robot is responsible for delivering products from the employee to the packaging area, the distance each employee travels during a standard shift is considerably reduced as well. 
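The travel-distance savings described above can be illustrated with a back-of-envelope comparison. Both per-order distances are hypothetical numbers chosen for demonstration, not measurements from any real warehouse.

```python
# Illustrative comparison: one picker walking the full pick path per order
# versus zone picking, where robots carry totes between zones and the
# picker only makes short walks. Distances are made-up figures.
FULL_PATH_M = 800  # metres walked per order on the full pick path (hypothetical)
ZONE_WALK_M = 60   # metres walked per order when confined to a zone (hypothetical)

def picker_distance(orders: int, zone_picking: bool) -> int:
    """Total metres one picker walks over a shift of `orders` orders."""
    per_order = ZONE_WALK_M if zone_picking else FULL_PATH_M
    return orders * per_order

shift_orders = 40
print(picker_distance(shift_orders, zone_picking=False))  # 32000 m walked
print(picker_distance(shift_orders, zone_picking=True))   # 2400 m walked
```

Even with these rough assumptions, the zone-picking model cuts walking by more than an order of magnitude, which is where the productivity gain comes from.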

3. Increased Quality Assurance/Reducing mistakes 

The use of robots in the warehouse means there is less chance of error relative to human labor. In addition to reducing errors, this system reduces the time employees spend traveling to different parts of the warehouse, which lets them focus more on quality control.

Robots will always deliver quality. Since they’re programmed for precise, repetitive motion, they’re less likely to make mistakes. In some ways, robots are simultaneously an employee and a quality control system. A lack of quirks and preferences, combined with the eliminated possibility of human error, will create a predictably consistent product every time.

Robotic automation eliminates risks by accurately producing items and checking that they meet the required standard without fail. With more product going out the door manufactured to a higher standard, this creates a number of new business possibilities for companies to expand upon.

4. Improving safety

Robots don’t need people alongside them 24/7. They can work with programmed sensors to avoid other robots and objects. This reduces the chances of serious accidents that could affect a worker’s safety. Some of the most common injuries to warehouse employees are caused by collapsing pallet racks, slips and trips, and falls from high ladders. Warehouse robots eliminate these common risks by handling climbing, pick-up, packaging, and movement around the warehouse.

Employee safety can be improved in highly hazardous environments. In a distribution center, robots can seamlessly zip past each other, humans, or other moving objects thanks to advanced collision-avoidance capabilities, which are processed as quickly as any human can react to a potential run-in.

Safety is the most obvious advantage of utilizing robotics. Heavy machinery, machinery that runs at hot temperatures, and sharp objects can easily injure a human being. By delegating dangerous tasks to a robot, you’re more likely to be looking at a repair bill than at medical costs. Employees who work dangerous jobs will be thankful that robots can remove some of the risks.

5. Easier scheduling/implementation 

Traditional workers need breaks for rest and recovery, but robots don’t. They can work throughout a long shift. Robots provide flexibility: they are easily retooled and repositioned for new production programs, can easily be redeployed in new applications, and can switch between a wide range of products without production lines having to be completely rebuilt.

Robots can be implemented quickly, depending on the specifics of the operation; even at the high end of the spectrum, the timeline is remarkably short, particularly when compared to other technologies. 

6. Ability to scale

Since robots are relatively easy to install in a facility, you can use a modular deployment approach, starting with a few units and adding more as your business grows and needs change. This lets you avoid an incredibly high initial investment: you can start with one or two robots and expand your fleet over time rather than buy a large number at once. Modular deployment frees up capital to pursue other initiatives, analyze the robots’ impact on your business, and take the next steps.

Any process, system, or platform that is rule-based and includes repetitive tasks or operations can be automated, in full, using robotics. That also means you — as a business — can be just as versatile. There is a nearly endless supply of opportunities in regards to how you can use the technology, and it can be scaled up or down, as much as you need.

7. Reduce both direct and indirect operating costs. 

The implementation of autonomous robots could primarily drive value by reducing direct and indirect operating costs and increasing mission success potential, using autonomous robots to drive supply chain innovation.

A quick return on investment outweighs the initial setup costs. Automation reduces production cost, and a decrease in part cycle time is crucial for increasing efficiency. Robots can work longer and faster, which increases the production rate.

8. Enable quick ramp-ups to meet increased demand period 

Available labor, particularly during peak periods, cannot always meet seasonal demands. However, companies can turn to robots to improve operations and meet labor demands. During high-demand peaks, some warehouses have even used robots to quickly train temp workers so that they could add value with minimal training. A mixed-flow production approach allows for flexibility in adjusting to demand fluctuations.

9. Consistency/Speed

Robots never need to divide their attention between a multitude of things. Their work is never contingent on the work of other people. They won’t have unexpected emergencies, and they won’t need to be relocated to complete a different time sensitive task. They’re always there, and they’re doing what they’re supposed to do. Automation is typically far more reliable than human labor.

This guarantee of quality and consistency coupled with time and efficiency savings means that you can start developing higher quality and more feature-filled products with little or no increase in production time and costs.

10. Create major efficiencies. 

Efficiency, by definition, describes the extent to which time, effort, and cost are effectively applied to the intended task or purpose. Process automation reduces the time it takes to achieve a task, the effort required to undertake it, and the cost of completing it successfully.

Autonomous robots can test, pick, pack, sort, build, inspect, count, or transport materials of various sizes and weights faster and more efficiently than ever. Robots are designed on compact bases to fit in confined spaces. In addition to being mounted on the floor, robots can be mounted on walls, ceilings, rail tracks and shelves. They can perform tasks in confined spaces, saving you valuable floor space. 
​
Drone technology combined with autonomous navigation and artificial intelligence is being used to understand inventory levels and locations within warehouses, enabling organizations to move inventory faster throughout the supply chain. Investing in aerial robots redirects staff to high-value activities and empowers their teams with rich information, which can be revisited at any point in time and used to make better decisions.
1 Comment

Top 10 Multi Domain Operations Logistics Facilitates Force Integration Down to Lowest Battlespace Level

11/1/2019

0 Comments

 
Recent reports have concluded there are gaps in Multi-domain Logistics mechanics that compromise the Joint Force's ability to provide reliable, agile, responsive and survivable sustainment, causing senior leaders to ask how the Services are going to sustain themselves and the joint force from fort to port to foxhole. 

It is imperative the Services build a robust Multi-Domain Logistics sustainment enterprise that can provide support over great distances in austere environments, both inter- and intra-theater, to prevail in conflict.

The strategic security environment described in the 2018 National Defense Strategy necessitates that DoD and the military services prepare for operations on a complex, multi-domain battlefield where business-as-usual logistics cannot succeed.

The Services’ response to this logistics challenge is embedded in a concept called Multi-Domain Operations (MDO). MDO calls for the rapid and continuous integration of all domains of warfare to ensure we can deter threats and prevail in battle. 

Doing so first requires neutralizing adversaries’ anti-access/area denial (A2/AD) capabilities as forces deploy, and then employing combat power and logistics in hostile environments. Threat A2/AD capabilities impede not only the mobility but also the sustainment of forces. 

By making sure logistics can keep up with multi-domain operations, the adversary will not just be forced to contend with multiple competing dilemmas, but be brought to the negotiating table under optimal conditions.

Pentagon vision of how it will fight in the future rests on the notion of the “dilemma.” Multi-domain operations as a concept proposes that the joint force can achieve competitive advantage over a near-peer adversary by presenting multiple complementary threats that each require a response, thereby exposing adversary vulnerabilities to other threats. It is the artful combination of these multiple dilemmas, rather than a clear overmatch in terms of any particular capability, that produces the desired advantage.

An exercise offered a glimpse of what the multi-domain operations concept will look like in practice when applied on the battlefield. The 1st Infantry Division, for example, achieved some success in its ability to rapidly erode enemy defenses through the integrated use of fires, aviation attacks, tactical deceptions, vertical envelopment by light infantry, and armored penetration.

These complementary tools, when applied rapidly and in close synchronization, exchanged mass for tempo and forced the enemy into multiple dilemmas across multiple domains. What follows is an explanation of how the division accomplished this, what we learned, and how we got better along the way.

Managing Risk in the Digital Battlespace

In our initial command post exercise, the division commander challenged us to reframe how we defined risk to the force. Received experience from previous Warfighter exercises suggested that the most dangerous of all courses of action in the face of this peer enemy would be to do nothing, or worse, to halt and await favorable conditions to be set while our very limited armor capability sat within range of the enemy’s long-range artillery.
 
In this fight, audacity, when properly seasoned with a prudent understanding of the risk, is a critical combat multiplier. We had to decide up front how we would enter the digital battlespace; how we would identify its forward edge; and, once in it, how we would proceed audaciously with the simultaneous commitment of all forms of contact that the division could generate. Once in the digital battlespace, we realized that the most dangerous and risky thing we could do was to stop attacking.

This challenge is further complicated by the presence of underground facilities throughout the exercise’s area of operations. Enemy troops effectively employed these underground facilities to deny coalition forces what has traditionally been our greatest asymmetric advantage—the ability to shape and attrite the OPFOR prior to the advance of the main body. We observed a very carefully calibrated set of triggers for the OPFOR decision to uncover its artillery from underground facilities, and a period of extreme vulnerability as the OPFOR exited those facilities—often in single file.

Our goal then was to ensure we were prepared to exploit this window of opportunity either with the timely application of attack aviation or through the insertion of light infantry forces who would then move, often many kilometers, to block the exits to these underground facilities at the opportune moment.

Purpose-Driven Operations

Operational frameworks are difficult to conceptualize, yet they are fundamental to a logical plan. Divisions utilized operational frameworks as a digital tool to clearly visualize and describe the application of combat power in time, space, and purpose. The framework provided a logical architecture and foundation on which the subsequent detail of resources, permissions, responsibilities, efforts, operations, concepts, and tasks was built. Through a clearly articulated concept of operations, the division ensured that the actions it and the brigade combat teams executed were in pursuit of the commander’s end state.

During planning, dividing an area of operations into parts categorized as deep, close, support, and consolidation only explains the plan by time and space. Meanwhile, assigning operations as either decisive, shaping, or sustaining and units as the main or supporting efforts explains actions by purpose. Combining all three frameworks achieved what the division and brigade commanders needed: to explain actions and responsibilities in time, space, and purpose.

Presenting Multiple Dilemmas

In this particular Warfighter exercise, the division observed that the terrain limited options for ground maneuver. A single penetration, though conservative and often effective, would not achieve the commander’s intent. The penetration presents the enemy with one problem—a problem that other units have presented repeatedly. Dilemmas are not the same as problems. A problem is a situation regarded as unwelcome or harmful that must be dealt with and overcome. 

A dilemma, by contrast, is a situation in which a difficult choice has to be made between two or more alternatives, especially equally undesirable ones. To present the enemy with multiple dilemmas across multiple domains and in multiple locations, the division combined penetrations with audacious turning movements and tactical deceptions, complemented and reinforced with nonlethal effects.

The turning movements were achieved by conducting air assaults across the coordinated fire line and up to the fire support coordination line. To avoid enemy air defenses, these air assaults were often offset by several kilometers and at least a major terrain feature away from their intended target. 

The targets were often key points of overwatch for particular underground facilities suspected of housing long-range artillery, or points of domination that could cover major avenues of approach. Timely execution of these air assaults forced the enemy to divert resources and attention from the advance of our armored formations along heavily defended avenues of approach and thereby dislocated the main enemy defenses. 

In the cases where we were successful, the division forced the enemy to react to our operations and enter the fight on our terms. More importantly, we were able to achieve tempo not just through the sustained geographical advance of the forward line of troops, but by persistently presenting complementary dilemmas to the enemy in unexpected ways, diminishing the enemy’s decision space and disrupting its understanding of its own plan. By the time the enemy observed and oriented on one dilemma, the division sought to present another, preventing the enemy from ever rendering a decision on the first.

Sustaining both momentum and tempo against a capable enemy required the division to reframe how we achieved synchronization during sustained and dynamic combat operations. Too often our decision-making process in combat operations mirrors the activity of a football team on the gridiron. In the midst of a long offensive drive, we seek to impose periods of planning (i.e., the huddle), an approach march, a decisive operation where synchronization is optimized, and a culminating point that leads into a period of disengagement and another planning session.

This “battle period” model, thankfully obsolete at our brigade training centers with the advent of open phasing, is equally inappropriate in a Warfighter exercise. Instead, we needed to think like a rugby team, where synchronization occurred rapidly and unexpectedly with fleeting moments of opportunity quickly identified and exploited by individual players who then become the supported effort as the team synchronizes around them. This required a different and more dynamic approach with near/long term tactical planning and targeting cycles all occurring constantly as conditions changed on the ground.

Simultaneity and Momentum

Conducted simultaneously, the penetrations, turning movements, and tactical deceptions enabled the division to achieve a degree of irreversible momentum against the enemy. The armor penetrations kept the enemy’s sensors engaged. The turning movements avoided the enemy’s principal defensive positions and seized objectives behind the enemy’s current positions, causing the enemy both to dislocate from its positions and to divert forces to meet the threat. The tactical deceptions kept the enemy fixed on sizable threats, which influenced the enemy’s decision to prematurely unmask forces in sanctuary inside its underground facilities.

Additionally, the combat aviation brigade was employed as an independent maneuver organization focused on destroying enemy high-payoff targets—in particular long-range artillery. Synchronizing all of these actions in time, space, and purpose became a tremendously complex task and the primary focus for the division main command post. 

We managed this effort with a centralized division synchronization matrix. It was incredibly detailed, included all subordinate, adjacent, and functional unit actions, remained prominently posted on our current operations floor, and was printed in placemat form at every major battle-rhythm event, especially the targeting working group and the target decision board. The synchronization matrix allowed us to forecast out in time and visualize the timing of the dilemmas that would be presented to the enemy as we fought through the typical frictions of a large and complex operation.
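At its core, a synchronization matrix like the one described above is a time-by-unit table of planned actions. A minimal digital sketch of that structure in Python; the class name, unit names, and actions are illustrative assumptions, not the division's actual format:

```python
from collections import defaultdict

# Minimal sketch of a synchronization matrix: rows are time windows,
# columns are units, cells hold the planned action for that unit in
# that window. Unit names and actions below are illustrative only.
class SyncMatrix:
    def __init__(self):
        self._cells = defaultdict(dict)  # {time_window: {unit: action}}

    def assign(self, window, unit, action):
        self._cells[window][unit] = action

    def snapshot(self, window):
        """Return every unit's planned action for one time window."""
        return dict(self._cells.get(window, {}))

matrix = SyncMatrix()
matrix.assign("H+0", "1st ABCT", "begin armored penetration")
matrix.assign("H+0", "CAB", "attack long-range artillery")
matrix.assign("H+2", "2nd IBCT", "air assault to blocking positions")
print(matrix.snapshot("H+0"))
```

Printing one window gives the placemat view for a battle-rhythm event: every unit's planned action at that hour, side by side.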

Targeting and the Plan

The division ensured that the targeting process was nested within the plan and that the plan was flexible enough to adapt with the targeting process. From the beginning, targeting was aligned with the commander’s intent—to “cause the rapid erosion of the enemy’s defenses and will to fight.” 

The division’s targeting imposed the commander’s will, in the form of physical, temporal, and cognitive effects, on the enemy. The division simultaneously employed multiple defeat mechanisms to accomplish its mission, while at the same time removing the enemy’s ability to present dilemmas to the division. The targeting process integrated all warfighting functions, but specifically integrated the enemy plan, the maneuver plan, tactical deceptions, lethal effects, and nonlethal effects to exhaust the enemy’s ability to make sound and timely decisions.

Targeting was assessment-driven, and therefore required specificity in information collection and analysis to evaluate the effects. In the deep fight, this process disintegrated the enemy—disrupting and degrading the enemy’s ability to conduct operations while leading to the collapse of enemy capabilities and will to fight. In the close fight, this process prevented the enemy from massing combat power in the battlespace.

Our experience in this Warfighter exercise confirmed in practice the central idea of multi-domain operations—that competitive advantage emerges from the skillful integration of complementary capabilities, sequenced in time, space, and purpose to create multiple dilemmas for an adversary.

When we achieved this effect, we found success. When we failed, the very capable enemy we faced quickly overwhelmed and defeated our exposed forces. Presenting multiple dilemmas required the division staff to redefine its understanding of prudent risk and to develop a natural bias toward action rather than inaction.

Once in the “digital battlespace,” the most dangerous and risky course of action was the failure to act. This required a very clear intent from the commander and a staff that could coordinate, integrate, and anticipate actions in time, space, and purpose, with higher, adjacent, and subordinate unit headquarters. 

Simultaneity across all domains and irreversible momentum can only be achieved through a well-trained and experienced team of teams. “Presenting multiple dilemmas to the enemy” is more than just a catchphrase. Achieving it requires clear intent, a culture of empowerment, a capable staff, and a level of risk tolerance that many of us may be uncomfortable with. But, when properly and artfully executed, it can yield a significant competitive advantage on the modern battlefield that ultimately saves lives and produces decisive results.

The new concept of warfighting has evolved into multi-domain operations and rapidly spread in conversation across the joint force. To take the concept beyond white papers and white boards, the Army has built a Multi-Domain Task Force, which recently saw its first field experiments.

The concept looks at how services will penetrate the more sophisticated defenses of adversaries and travel along the continuum of conflict while maintaining superiority and advantage. It means using longer range fires, better missile defense, an impenetrable network and highly refined small unit capabilities. And it means all of that in more domains at once than any commander has ever had to consider.

At some point, the task forces will simply dissolve into the way the Army does business. “You’ll have these capabilities at echelon throughout the entire theater.”

In the near future even company commanders will become used to thinking in all domains and using data, high-performance computing, sensors, nodes, networks and firing platforms across all of the services.

You can’t effectively prosecute a campaign using MDO if it’s not joint. Different domains can be applied to create the window of opportunity. Imagine effectively securing airspace and waterways by long-range precision fires or air missile defense. It’s turning everything on its head.

It’s fair to say in the future MDO is at the tactical level. At the company and battalion level, it’s still about combined arms at the lowest tactical level. MDO just lets the maneuver force continue to do combined arms maneuver.

Now what will be distinct is the reliance on the tactical level to think, assess, and employ all domains when necessary. So the thinking is for tactical-level commanders to see opportunities for all domains to be integrated, or to leverage the benefits of other domains in their tactical space. But in the end a battalion will maneuver like a battalion.

That makes operations more dispersed, and leaders must call on support from a much wider area. Operational and theater-level commanders must compete daily in order to enable any kind of penetration in conflict.

“What soldiers will see is, after forces get out into theater and geographic combatant commanders recognize the virtues they bring to their capacity, and as we learn and go through service-wide analysis, we’ll start building out those echelons we’ve talked about.”

“You’ll need to have a theater fires command, you’ll need to have an operational fires command. You’ll need to have network capacity at echelon. You’ll need to have access to space assets at echelon.”

1. Feeds the warfighter information necessary to orient the structure of the battlespace

2. Perspective gives ability to see and understand the competition

3. Provides insight into battlespace from others’ perspectives

4. Integrates information where and when it is needed

5. Aggressive collection and sourcing of information to provide behavior context

6. Indivisible effects depend on ISR and integration

7. ISR is domain-neutral, focused on capabilities and effects, not platforms

8. ISR is operations—not solely support to operations

9. Interoperability required for interservice effectiveness

10. Provides ability for quick maneuver or action between domains

Top 10 Training Exercise System Requirements Assess Logistics Process/Capable Contribute to Readiness

11/1/2019


 
Every military activity or exercise is an opportunity for assessing logistics performance, but it is rare that military exercises comprehensively test and assess operational sustainability and logistics readiness. Fewer still are those exercises that test logistics readiness through a major deployment performed at short notice; a phase of an operation that demands all supporting agencies be ready.

There must be a high state of materiel readiness across the force. In addition to appropriately funding the sustainment of equipment, and the establishment of appropriate stocks in appropriate areas to enable operational contingencies, the means of sustaining equipment must be as appropriate for support operations as they are for efficiency in garrison. 

Failures in materiel readiness are often replicated in major sustainability issues on operations, and necessitate consequential actions such as switching parts between aircraft to achieve desired operational readiness outcomes.

The 2nd Marine Division is doing its largest unscripted training in decades. About 10,000 Marines and sailors are training in California for the 2nd Marine Division's largest exercise in recent history.

New technology will allow the Marines to see how they do against a simulated adversary in the Marine Air-Ground Task Force Warfighting Exercise called MWX 1-20.

The troops will be pitted against each other in a force-on-force battle as they prepare for the kind of fight against an enemy with sophisticated equipment and skills.

In the unscripted exercise, Marines will be equipped with "laser-based equipment" that will register when hit by enemy fire. "We're interested in seeing how the Division conducts combat operations against a simulated adversary with capabilities as advanced as our own in the unforgiving terrain of the combat center."

During MWX, Marines and a simulated adversary will be using technology such as unmanned aerial surveillance and electronic warfare, which will "challenge how, and the way in which, they make decisions." Technology will provide "unique challenges across every level" during this exercise.

"The 'Follow-Me' Division is going to learn a lot from this challenge and will be a much better trained and led organization because of this experience."

Logistics has been a double-edged sword for Marines for generations. On the upside, plentiful supplies of fuel, ammunition, and spare parts in good times have kept huge armoured forces on the march. On the downside, the long supply lines, iron mountains of supply stockpiles, and the huge numbers of support troops and vehicles required slow down deployments to a crisis and restrict manoeuvre once forces have arrived.

Marines could cope with these logistical limits when they had months to build up before the shooting started, with nearby bases and a relatively short distance to drive.

But logistical demands can be much greater when distances are longer with large combat formations moving along a single axis of advance, let alone supply convoys and depots.

So emerging concepts called multi-domain operations or distributed operations envision Marines spreading out to make themselves harder targets. Relatively small units would operate “semi-independently,” moving frequently from one position to another, without resupply for days at a time.

The problem is Marines are not set up to do this today. Heavy armoured vehicles just require too much fuel and maintenance to operate this way. The long-term solution is to develop lighter and less logistically demanding vehicles, but recent efforts have been less than successful.

In the meantime, Marines need to figure out how to support the forces they have more efficiently so those forces can manoeuvre more freely, with less frequent pit stops for maintenance or supply runs for repair parts.

That’s where the new contract comes in. A lot of maintenance today is based on what the owner’s manual says: get your oil changed and your engine checked every so many miles. That schedule can function as a baseline, but it doesn’t take into account how the machine is actually being used or the wear, tear, and stresses it accumulates.

The goal is to track the performance of each major component in real time — oil pressure, turbocharger speed, battery life, etc. etc. — and predict when it’s likely to fail.

Predictive maintenance has two benefits. First, most obviously, it lets you replace or repair a part before it breaks on you. Second, it lets you skip a lot of so-called preventive maintenance, when you pull your vehicle into the shop after so many hours of operation because that’s when, on average, such-and-such a component will need an overhaul.
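A hedged sketch of how such a prediction might work, under the simplifying assumption of roughly linear degradation: fit a least-squares trend to one component's readings and extrapolate to the failure threshold. The metric, readings, and threshold are invented illustration values, not actual vehicle data:

```python
# Sketch of threshold-crossing prediction for one component metric,
# assuming roughly linear degradation. Readings and the failure
# threshold are hypothetical illustration values.
def hours_until_failure(readings, threshold):
    """readings: list of (engine_hours, oil_pressure_psi), oldest first.
    Returns estimated engine hours remaining before pressure falls to
    the threshold, or None if the trend is not downward."""
    n = len(readings)
    xs = [r[0] for r in readings]
    ys = [r[1] for r in readings]
    x_bar = sum(xs) / n
    y_bar = sum(ys) / n
    # Ordinary least-squares slope and intercept
    slope = sum((x - x_bar) * (y - y_bar) for x, y in readings) / \
            sum((x - x_bar) ** 2 for x in xs)
    if slope >= 0:
        return None  # pressure stable or rising; no predicted failure
    intercept = y_bar - slope * x_bar
    crossing = (threshold - intercept) / slope  # hours at which trend hits threshold
    return crossing - xs[-1]

readings = [(100, 58.0), (150, 55.5), (200, 53.0), (250, 50.5)]
print(round(hours_until_failure(readings, threshold=40.0), 1))  # → 210.0
```

Real predictive-maintenance models are far richer than a straight line, but even this sketch shows the shift from "overhaul every N hours" to "overhaul when this vehicle's own trend says so."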

There’s been a small blitz of media coverage of the contract, it’s focused on how predictive maintenance can improve efficiency and cut costs, but there are uniquely military benefits.

So we track not only the individual performance of specific components on specific vehicles, but also external variables like weather. Heat, cold, and humidity can all impose stress on machinery.

Where is this information coming from? It turns out manufacturers’ ability to put digital sensors on their products got ahead of their ability to do anything with the data. A lot of machines already have sensors producing metrics; it’s just that nobody’s listening.

Another problem is that when a vehicle is in a location with poor bandwidth, or when there’s a military reason to turn off all transmissions, the system can stop sending updates for a time. It can also do some of the assessments onboard the vehicle, so it may not have to send full results back to the central station, minimising bandwidth use and transmission length.
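That onboard assessment could be as simple as: keep raw readings on the vehicle, queue an immediate alert only when a reading breaches its limits, and hold a compact min/mean/max summary for later batch upload. A sketch under those assumptions; the metric names and limits are illustrative:

```python
# Sketch of onboard triage under limited bandwidth: raw readings stay
# on the vehicle; only a compact summary, plus an immediate alert when
# a reading breaches its limits, is queued for transmission.
def triage(readings, limits):
    """readings: {metric: [values]}; limits: {metric: (low, high)}.
    Returns (alerts, summary): alerts lists out-of-limit metrics,
    summary holds min/mean/max per metric for later batch upload."""
    alerts, summary = [], {}
    for metric, values in readings.items():
        low, high = limits[metric]
        if any(v < low or v > high for v in values):
            alerts.append(metric)
        summary[metric] = {
            "min": min(values),
            "mean": sum(values) / len(values),
            "max": max(values),
        }
    return alerts, summary

alerts, summary = triage(
    {"oil_pressure": [52.0, 51.5, 33.0], "battery_volts": [24.1, 24.0, 23.9]},
    {"oil_pressure": (40.0, 80.0), "battery_volts": (22.0, 29.0)},
)
print(alerts)  # → ['oil_pressure']: one reading dipped below its lower limit
```

The summary dictionary is what would ride back to the central station when bandwidth allows; only the alert needs to move now.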

But the big benefit is the ability to pool all available information in one place and then let machine learning figure out patterns, which can then be used to forecast future performance.

We can track general trends across a fleet of vehicles, but the real value is in prediction. Imagine if, instead of having to go to the shop for your scheduled work, you could know your status 24/7.

On the individual machine/equipment level, will the fighter unit make it through the day and do what it needs to do?

Our goal is for tactical commanders to know: we have this many vehicles, this is the overall status of each one, so better strategic decisions can be made.
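That commander's view is essentially a rollup: each vehicle reports a status, and the staff sees counts per status plus a mission-capable rate. A minimal sketch; the status vocabulary and vehicle IDs are illustrative assumptions:

```python
from collections import Counter

# Sketch of a fleet readiness rollup: count vehicles by reported
# status and compute an overall mission-capable rate. Status names
# and vehicle IDs are illustrative only.
def readiness_rollup(fleet):
    """fleet: {vehicle_id: status}. Returns (counts, mission_capable_rate)."""
    counts = Counter(fleet.values())
    capable = counts.get("mission_capable", 0)
    return counts, capable / len(fleet)

fleet = {
    "V-101": "mission_capable",
    "V-102": "mission_capable",
    "V-103": "degraded",
    "V-104": "deadlined",
}
counts, rate = readiness_rollup(fleet)
print(counts["mission_capable"], rate)  # → 2 0.5
```

The same rollup run per unit or per vehicle type would show where readiness is eroding fastest, which is the input the strategic decision actually needs.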

Our expertise spans the entire spectrum of Marine operational requirements, such as configuration management, material procurement, training, shipboard assessments, developing and correcting maintenance processes, and documenting material discrepancies. We provide an array of tools, processes, and support systems to manage the flow of equipment, services, and data efficiently and within budget.

Through data and systems integration, the fusion of information and transportation, and agile infrastructure, seamless and modular logistics support systems are introduced. We create and update maintenance strategies, plans, and procedures for in-service assets. For Marine customers, we develop ship overhaul work package requirements, including development and/or review of work specifications and associated requirements. 

We also support material condition assessment teams and document material deficiencies. We provide professional services in support of military supply, maintenance, and logistics military commands. This support saves cost through efficiencies in procurement and maintenance policies, and provides improved metrics to facilitate superiority in future decisions.

Marines require engineering logistics for all levels of maintenance, providing acquisition planning and support, engineering planning, lifecycle maintenance expertise, engineering technical research, and tools for rapid engineering assessments. 

Effectively identifying and replacing worn parts stateside, prior to deployment, requires extensive maintenance and logistics expertise and improves cost-efficiency and mission effectiveness. 

Our engineers and maintenance personnel have extensive experience in evaluating equipment operability risks and identifying discrepancies. Our logistics support personnel have deep experience in material identification, sourcing, ordering, tracking, and expediting. We update configuration databases to reflect as-installed equipment and workload schedule forecasting.
​
Highly intensive systems in domains such as transportation, infrastructure, aerospace and telecom have long operational life cycles and their stakeholders expect them to exhibit the necessary operational and performance characteristics during these long operational life spans. 

Often the results have been less than satisfactory, which has led many to envision alternative approaches to effectively sustaining such systems. Among the alternative approaches is Performance Based Logistics, whose essence is to define key system readiness and effectiveness criteria and to contract for threshold values of these criteria. The emphasis is on contracting for results, and not for resources as traditionally done.

We partnered with Marines to conduct a survey aimed at assessing current practices in performance-based logistics contracting, highlighting the lessons learned and outlining the primary drawbacks observed.

 We developed a framework for formulating more efficient and effective contractual agreements and identified the main topics, aspects and metrics representing system effectiveness of a successful performance-based logistics initiative. 

Marine leaders must consider the following factors for early assessment, since failure to do so could cause significant consequences in operation phases. 

1. Threat 

The sensitivity of the program to uncertainty in the threat description, the degree to which the system design would have to change if the threat’s parameters change, or the vulnerability of the program’s intelligence collection efforts to threat countermeasures

2. Requirements

How the program is impacted by uncertainty in the system description and requirements, excluding those caused by threat uncertainty. Requirements include operational needs, attributes, performance and readiness parameters (including Key Performance Parameters), constraints, technology, design processes, and work breakdown structure

3. Technical Baseline

The ability of the system configuration to achieve the program’s engineering objectives based on the available technology, design tools, design maturity, etc. Program uncertainties and the processes associated with the reliability, supportability, maintainability, etc. must be considered. The system configuration is an agreed-to description of the attributes of a product, at a point in time, which serves as a basis for defining change.

4. Test and Evaluation

The adequacy and capability of the test and evaluation program to assess attainment of significant performance specifications and determine whether the system is operationally effective, operationally suitable, and interoperable.

5. Modeling and Simulation

The adequacy and capability of model/simulation to support all life-cycle phases of a program using verified, validated, and accredited models and simulations.

6. Technology

The degree to which the technology proposed for the program has demonstrated sufficient maturity to be realistically capable of meeting all of the program’s objectives.

7. Logistics

The ability of the system configuration and associated documentation to achieve the program’s logistics objectives based on the system design, maintenance concept, support system design, and availability of support data and resources.

8. Production/Facilities

The ability of the system configuration to achieve the program’s production objectives based on the system design, manufacturing processes chosen, and availability of manufacturing and repair resources in the sustainment phase.

9. Concurrency

The sensitivity of the program to uncertainty resulting from the combining or overlapping of life-cycle phases or activities, and the ability of the system to achieve the program’s life-cycle support objectives. This includes the effects of budget and affordability decisions and the effects of inherent errors in the cost-estimating techniques used, given that the technical requirements were properly defined and taking into account known and unknown program information.

10. Schedule

The degree to which program scheduling plans and strategies exist and are realistic and consistent with a qualified support team. Schedule risk involves the sufficiency of the time allocated for performing tasks, including the effects of programmatic schedule decisions, errors in schedule estimating, and external physical constraints.
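One simple way to turn the ten factors above into an early-assessment number is a weighted scorecard: score each factor, weight it by how much it matters to this program, and average. The weights and 1-to-5 scores below are illustrative assumptions, not a prescribed DoD scale:

```python
# Sketch of a weighted early-assessment scorecard over risk factors.
# Factor scores (1 = low risk, 5 = high risk) and weights are
# illustrative assumptions.
def weighted_risk(scores, weights):
    """scores, weights: {factor: number}. Returns the weighted average."""
    total_weight = sum(weights.values())
    return sum(scores[f] * weights[f] for f in scores) / total_weight

scores = {"threat": 4, "requirements": 3, "technology": 5, "schedule": 2}
weights = {"threat": 2.0, "requirements": 1.0, "technology": 1.5, "schedule": 1.0}
print(round(weighted_risk(scores, weights), 2))  # → 3.73
```

The output is only as good as the scoring discipline behind it, but even a rough scorecard forces the early conversation the passage above calls for.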



Top 10 Decision Making Steps Simulate Battlespace Conditions Design Strategies Capable Change/Adapt

11/1/2019


 
We train Marine combat leaders to make successful decisions under pressure. In a war game, troops play their own company, a select group of their competitors, and the battlespace. A control team plays all other entities that affect the field units. The game begins with a prepared set of conditions and, when the whistle blows, anything goes - that is, anything that can happen in the real world.

To be most effective, war games should include certain specific real world conditions. These include a strongly competitive battlespace, so that players must make decisions reacting to each other’s actions. Another is unpredictability, illustrated by changing technologies and shifts in battlespace target demand. 

Long- and short-term perspectives are required, to show how decisions made now will affect field-level success later on. One important result makes wargames supremely worthwhile: troops learn the importance of being absolutely clear in their communications.

When the dust has settled, Marines look back on these simulations as one of the most challenging and stimulating exercises of their careers. 

In general, the decision making process helps troops solve problems by examining alternative choices and deciding on the best route to take. Using a step-by-step approach is an efficient way to make informed decisions that have a positive impact on mission results.

1. Identify the decision

The first step in making the right decision is recognizing the problem or opportunity and deciding to address it. Determine why this decision will make a difference to your organization.

You must identify exactly what decision is being made. If it’s a choice between two things, such as choosing between two jobs being offered, it’s fairly simple to name the decision. But if it’s a question of whether to start an operation and what exactly the team should do, or how to run an expansion, that’s trickier.

Write down what you think the decision is that you're making, then iterate your description of the decision until it expresses exactly what you're trying to decide. 

If you misidentify the problem to solve, you’ll knock the decision train off the track before it even leaves the station.

2.  Build the proper infrastructure

It isn’t safe to delegate before you have built the proper infrastructure: you need to assign the right people, those with the potential to return the time you invest in training them.

You must document processes, which means literally writing down how you want things done. Yes, this is work, and no one is going to celebrate your efforts. However, documentation is the best means for communicating how you want things done when you can’t be there.

Develop robust metrics so you get regular reports that quantify the performance of your team, with solid operating metrics that tell you how things are going in almost real time.

3. Put data access processes in place

Once you’ve identified what you want to monitor, make sure your processes can deliver your data. For example, if you don’t currently update your estimates with change orders, you can add that to your workflow to get a truer picture of your estimate-to-actual variance.

The data won’t do much good if your decision makers can’t get to it. Look for technology that allows commanders in the field to access the information they need from their mobile devices. Use customized dashboards and easy to understand graphs and other visuals to make the data easily consumable and understandable.

4. Gather information 

Now it’s time to gather information so that you can make a decision based on facts and data. This requires making a value judgment, determining what information is relevant to the decision at hand, along with how you can get it. Ask yourself what you need to know in order to make the right decision, then actively seek out anyone who needs to be involved.

So you know you have a decision to make; now you need to learn more about it. Identify what kinds of information you need, how you will get that information, and how you will use it, so you can learn more about the situation and zero in on prospective alternatives.

5. Identify alternatives

Once you have a clear understanding of the issue, it’s time to identify the solutions at your disposal. It’s likely that you have many different options when it comes to making your decision, so it is important to come up with a range of options to determine which course of action is the best way to achieve your objective.

You will need to weigh the evidence to “evaluate for feasibility, acceptability and desirability” and determine which alternative is best. You need to be able to weigh pros and cons to select the option that has the highest chance of success. It may be helpful to seek out a trusted second opinion to gain a new perspective on the issue at hand.

6. Choose among alternatives

When it’s time to make your decision, be sure that you understand the risks involved with your chosen route. You may also choose a combination of alternatives now that you fully grasp all relevant information and potential risks.

If you notice that one alternative dominates all the rest, that clearly becomes your decision. But if you cannot decide, you’ll need to make tradeoffs using a process called even swaps: increasing the value of an alternative in terms of one objective while decreasing its value by an equivalent amount in terms of another objective.
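The dominance check described above is mechanical enough to sketch: an alternative dominates another if it scores at least as well on every objective and strictly better on at least one. The options, objectives, and scores below are illustrative assumptions (higher is better):

```python
# Sketch of the dominance screen that precedes even swaps: drop any
# alternative that another alternative beats or ties on every
# objective. Options and scores are illustrative assumptions.
def dominates(a, b):
    """True if a scores >= b on every objective and > on at least one."""
    return all(a[k] >= b[k] for k in a) and any(a[k] > b[k] for k in a)

def remove_dominated(alternatives):
    """alternatives: {name: {objective: score}}. Keep only options not
    dominated by any other; what remains needs tradeoffs (even swaps)."""
    names = list(alternatives)
    return {
        n: alternatives[n]
        for n in names
        if not any(dominates(alternatives[m], alternatives[n])
                   for m in names if m != n)
    }

options = {
    "route_a": {"speed": 4, "cover": 2, "fuel": 3},
    "route_b": {"speed": 3, "cover": 2, "fuel": 2},  # beaten by route_a everywhere
    "route_c": {"speed": 2, "cover": 5, "fuel": 4},
}
print(sorted(remove_dominated(options)))  # → ['route_a', 'route_c']
```

Only after the dominated options drop out do the genuine dilemmas remain, and that is where the even-swaps tradeoffs begin.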

7. Design solutions

Now that you have collected and assessed the data, you can start to design solutions. This can take many different forms, such as creating battlespace area layouts or new processes, to name a few.

You need to create a plan for implementation. This involves identifying what resources are required and gaining support from stakeholders since getting them onboard with your decision is a key component of executing your plan effectively. Be prepared to address any questions or concerns that may arise.

8. Validate the preferred design option 

Before you pull the trigger on the decision, you want to know whether it will actually solve the problem. To help validate the preferred option, you can run simulations or model the system so you can observe its behavior and test different worst-case scenarios.

Considering the consequences is key because it will help you determine how your final decision will affect you and others involved. In this step, ask yourself what the results of your decision are likely to be. How will it affect you now? And how will it affect your future?
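A lightweight way to run the kind of simulation described above is a Monte Carlo sweep over uncertain inputs. The sketch below is purely illustrative: the cost model, parameter ranges, and disruption probability are hypothetical stand-ins for whatever actually drives your decision.

```python
# Hypothetical Monte Carlo stress test of a preferred option.
# All parameters below are illustrative assumptions.
import random

random.seed(42)  # reproducible runs

def simulate_outcome():
    # Draw uncertain inputs, including an occasional worst-case spike
    demand = random.uniform(80, 120)
    unit_cost = random.uniform(9, 11)
    disruption = 1.5 if random.random() < 0.05 else 1.0  # rare worst case
    return demand * unit_cost * disruption

runs = [simulate_outcome() for _ in range(10_000)]
avg = sum(runs) / len(runs)
worst = max(runs)
print(f"expected cost: {avg:.0f}, worst observed: {worst:.0f}")
```

Looking at both the expected and the worst observed outcome gives you a feel for the downside risk before you commit, rather than only after the decision is executed.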

9. Take Action

Here comes the fun/scary part. You have identified your problem or your goal. You've gathered all of the possible information, gotten information from the experts, and weighed the consequences. Now it's time to make the choice. You've gotten rid of all the alternatives that are not practical or simply do not fit.

Time can be a tough friend. Sometimes it is good, and sometimes it is not. When making major decisions and taking action, understanding the timing is crucial: sometimes it is best to delay a decision, while other times delaying a response can cause more problems. There are also times when making a quick decision is advantageous because it allows you more time to make necessary changes should problems arise.

Once you’ve made your decision, act on it! Execute your plan to set the team loose on their tasks.

10.  Review your decision 

An often-overlooked but important step in the decision making process is evaluating your decision for effectiveness. Ask yourself what you did well and what can be improved next time.
​
Even the most experienced commanders can learn from their mistakes … be ready to adapt your plan as necessary, or to switch to another potential solution. If you find your decision didn’t work out the way you planned, you may want to revisit some of the previous steps to identify a better choice.