
A Model-Based Approach to CD: From "Everything as Code" to "Everything Is a Model"

Techniques such as "Infrastructure as Code" (IAC) have proven to be a popular paradigm for codifying and managing infrastructure as versioned software to drive automated deployments in Continuous Delivery pipelines.

"Everything as Code" (EAC) extends that paradigm to other aspects of DevOps such as testing, security, databases, and operations.

While treating everything as code provides many benefits, it has its drawbacks: code sprawl and complexity create their own quality and maintenance challenges.

In this blog, I will discuss the "Everything Is a Model" (EAM) approach to Continuous Testing and Delivery. This is an innovative evolution of the EAC concept that addresses its drawbacks and provides significant benefits.

What Is a Model?

A model, in our context, is a form of abstraction for the different types of entities in a continuous delivery system: code, tests, data, infrastructure, and so on. For example, model-based testing is an emerging discipline that lets us represent tests as a model from which the actual tests are generated. Similarly, model-based software development lets us design the software as a model from which code is generated.

A model-based approach offers many advantages, such as:  

  1. Models are visual, easy to understand, and better at representing relationships between components. Modeling is a great way to express complex behavior, as well as to disambiguate it.
  2. The associated entity (for which the model is an abstraction, such as code or tests) can be generated from the model with greater precision, accuracy, and quality using sophisticated algorithms.
  3. Change management is easier, since the model can be used to define a change and predict its impact, thereby promoting greater agility and optimization of effort.

Model-Based Approaches for Continuous Delivery Processes

All the benefits of model-based approaches are very pertinent to continuous delivery, where the emphasis is on agile delivery of small packages of continuous change to application systems. In other words, being able to manage change, that is, to understand the impact of a change and to implement, test, and deploy it in an automated manner, is of paramount importance.

Let's look at the key processes involved in Continuous Delivery and how model-based approaches can be applied to each:

  1. Planning, or backlog/requirements management ("Continuous Planning")
  2. Development ("Continuous Development")
  3. Testing ("Continuous Testing")
  4. Configuration, Deployment, and Release ("Continuous Deployment/Release")

Model-Based Requirements

Traditional approaches to requirements/backlog management, where requirements are captured textually (e.g. as feature descriptions or user stories), are fraught with challenges such as ambiguity and poor maintainability. My good friend and colleague of many years, Alex Martins, does a great job of explaining in this blog how traditional requirements approaches represent a significant impediment to continuous delivery. He finds: "Ambiguity is the cause of approximately 56% of defects introduced in the application code."

Representing requirements as a model addresses much of the ambiguity associated with requirements by explicitly capturing the intended behavior of the application (with support for complex logic) as a visual flow. Here's an example of a model developed in CA Agile Requirements Designer.

[Figure: an example requirements model built in CA Agile Requirements Designer]


Such models can be expressed at many different levels: business process, feature, story, and so on. We can also establish relationships between the models at different levels, which is much easier to do than with traditional text-based techniques. This allows much more detailed traceability between different requirements components, as well as automated impact analysis when a requirement is changed.
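
To make this concrete, here is a minimal Python sketch of layered requirement models with traceability links and automated impact analysis. The ModelNode structure and impacted_by function are hypothetical illustrations, not the API of any particular tool:

```python
# Minimal sketch: layered requirement models with traceability links.
from dataclasses import dataclass, field

@dataclass
class ModelNode:
    id: str
    level: str                                    # "business-process" | "feature" | "story"
    description: str
    refines: list = field(default_factory=list)   # ids of the higher-level nodes it refines

def impacted_by(changed_id: str, nodes: dict) -> set:
    """Walk traceability links downward to find every node impacted by a change."""
    impacted, frontier = set(), {changed_id}
    while frontier:
        current = frontier.pop()
        children = {n.id for n in nodes.values() if current in n.refines}
        frontier |= children - impacted
        impacted |= children
    return impacted

nodes = {n.id: n for n in [
    ModelNode("BP1", "business-process", "Customer onboarding"),
    ModelNode("F1", "feature", "Identity verification", refines=["BP1"]),
    ModelNode("S1", "story", "Upload ID document", refines=["F1"]),
]}
print(impacted_by("BP1", nodes))   # -> {'F1', 'S1'}: the full downstream impact
```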

Model-based requirements also allow better collaboration between stakeholders (such as product owners, business analysts, developers, testers, data and service engineers, and release engineers) through progressive requirements refinement. For example, the product owner expresses the needs at the highest level in a simplified model. This model can then be refined by a requirements specialist (or a modeler) to flesh out the details and add more behavioral logic. Developers can review the model to get an unambiguous view of the requirements and may themselves enrich the model with additional detail. Test data engineers can then extend the model to specify test data requirements. For a more detailed description of this collaborative, model-based, progressive requirements development approach, please refer to Alex's blog on this subject.

Most importantly, the requirements model serves as the single source of truth for what the application does. The power of the model-based approach comes from the fact that a variety of SDLC artifacts can then be generated automatically from this model, including code, tests, test data, and APIs/services, based on the progressive refinement described above (see Figure below).

[Figure: SDLC artifacts (code, tests, test data, services) generated from the requirements model]


Model-Based Development

As described above, detailed feature- and story-level requirements (which encapsulate the behavioral logic of the application) can be defined using a model-based approach. This enables developers to easily translate such logical behavior into code, either manually or automatically. Model-based development has long been practiced in domains such as embedded systems, GUI design, and database development (where the "code" is, for example, database schemas and tables). Regardless of whether the code is auto-generated, application development based on a model allows developers to build software that closely matches the stated requirements. Some tools manage traceability between the model and the code, so it is possible to perform automated change impact analysis (and often automated code updates) when the model is changed.
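
As an illustration, the sketch below generates code stubs from a simple story-level flow model, so developers start from skeletons that match the modeled behavior. The flow structure and the generator are assumptions made for this example, not the output format of any specific tool:

```python
# Hedged sketch: generating code stubs from a story-level flow model.
flow = [  # ordered steps with their modeled behavioral notes (illustrative)
    ("validate_input", "reject malformed requests"),
    ("check_inventory", "fail if stock is below the requested quantity"),
    ("reserve_items", "reserve stock and emit an audit event"),
]

def generate_stubs(flow_model) -> str:
    """Emit one stub function per modeled step, carrying the behavior as a docstring."""
    lines = []
    for step, behavior in flow_model:
        lines.append(f"def {step}(context):")
        lines.append(f'    """{behavior}"""')
        lines.append("    raise NotImplementedError\n")
    return "\n".join(lines)

print(generate_stubs(flow))  # developers fill in the bodies; the model stays the source of truth
```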

In addition, such software can be tested more easily and quickly by having tests generated automatically from the model. These automatically generated tests help accelerate Test-Driven Development (TDD) and Behavior-Driven Development (BDD), as well as functional testing of the application. This is discussed next under Model-Based Testing.

Model-Based Testing

No process area benefits more from a model-based approach than testing. One of the biggest challenges in effective testing (and time sinks in an agile context) is manual test design and maintenance. Understanding frequently changing requirements and turning them into optimal tests requires significant time and skill. As a result, most application systems are either under-tested (resulting in defect leakage or failure risk) or sometimes over-tested (resulting in higher costs and elapsed time); they are rarely tested optimally.

Most of these challenges are circumvented by having tests generated automatically from the requirements model rather than created manually. This not only obviates the labor-intensive work of designing tests, but also generates the most exhaustive (or most optimal) set of tests needed to validate the requirements at the touch of a button. Tools such as CA Agile Requirements Designer (which you can take for a test run by starting your free trial) can generate tests for a variety of coverage goals depending on customer needs: most exhaustive (thorough testing of all possible scenarios), optimal (smallest number of tests for maximum coverage), based on risk/complexity, based on past defect history, and so on (for example, see Figure below).

[Figure: test generation coverage options in CA Agile Requirements Designer]
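
The sketch below illustrates the underlying idea on a toy flow graph: exhaustive coverage enumerates every loop-free path through the model, while a greedy heuristic picks a smaller subset of paths that still covers every transition. The graph and the algorithms are illustrative only; real tools use far more sophisticated optimizers:

```python
# Sketch: generating tests from a requirements flow graph at two coverage levels.
graph = {  # node -> possible next nodes; "end" terminates a scenario
    "start": ["login"],
    "login": ["ok", "fail"],
    "ok": ["browse"],
    "fail": ["end"],
    "browse": ["search", "basket"],
    "search": ["basket"],
    "basket": ["pay_card", "pay_paypal"],
    "pay_card": ["end"],
    "pay_paypal": ["end"],
    "end": [],
}

def all_paths(node="start", path=()):        # exhaustive coverage
    path = path + (node,)
    if node == "end":
        yield path
        return
    for nxt in graph[node]:
        if nxt not in path:                  # skip cycles
            yield from all_paths(nxt, path)

def optimal_subset(paths):                   # greedy minimal edge-coverage set
    uncovered = {(a, b) for p in paths for a, b in zip(p, p[1:])}
    chosen = []
    while uncovered:
        best = max(paths, key=lambda p: len(uncovered & set(zip(p, p[1:]))))
        chosen.append(best)
        uncovered -= set(zip(best, best[1:]))
    return chosen

paths = list(all_paths())
print(len(paths), "exhaustive tests;", len(optimal_subset(paths)), "optimal tests")
# -> 5 exhaustive tests; 3 optimal tests
```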


In addition to automated test optimization, model-based testing provides the following benefits:

  • Change-impact-based testing: the model automatically (and almost precisely) identifies which tests are impacted by a change in the model (for a requirements change or addition). This allows us to do agile testing (for example, build verification testing) that quickly exercises only what has been impacted, reducing valuable test cycle time (see the sketch after this list).
  • Automated test data generation: the model allows synthetic test data to be generated for use in testing. This "test data matching" is very useful for story-level tests, where traditional test data management practices are somewhat onerous.
  • Automated virtual service generation: virtual services that stand in for an API can be generated automatically from the model, using synthetic request-response pairs of data.
  • Automated test automation: the tedious task of creating automated test scripts can itself largely be automated by attaching the model to automation engines such as Selenium and HP UFT.
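
Here is a minimal sketch of the change-impact selection mentioned in the first bullet above: if each generated test corresponds to a path through the model, the tests impacted by a change are simply those whose paths touch the changed nodes. The test names and paths are hypothetical:

```python
# Sketch: change-impact-based test selection over model-generated tests.
tests = {  # test name -> the model path it exercises (illustrative)
    "T1": ["start", "login", "ok", "browse", "basket", "pay_card", "end"],
    "T2": ["start", "login", "fail", "end"],
    "T3": ["start", "login", "ok", "browse", "search", "basket", "pay_paypal", "end"],
}

def tests_impacted_by(changed_nodes: set, tests: dict) -> list:
    """Select only the tests whose paths traverse a changed model node."""
    return [name for name, path in tests.items() if changed_nodes & set(path)]

# A requirements change touches only the "search" step of the model:
print(tests_impacted_by({"search"}, tests))   # -> ['T3'], the minimal re-test set
```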

Model-Based Configuration, Deployment, and Release Management

One of the biggest impediments in Continuous Delivery is environment configuration, provisioning, and deployment of applications. "Infrastructure as Code" approaches automate configuration and deployment, but often require detailed scripting to define the topologies, provision them, and then deploy applications into the environments. In addition, such scripts proliferate with the number and types of environments (e.g. containerized, cloud, on-premises, hybrid). By comparison, model-based approaches abstract the elements of the environment, the application, and the deployment logic into an overall topology from which lower-level artifacts can be produced by code generation (see Figure below):

[Figure: a topology model from which lower-level environment artifacts are generated]
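
As a rough illustration of that generation step, the sketch below renders a small declarative topology model into per-service provisioning commands. The model schema and the deploy command line are invented for this example:

```python
# Sketch: rendering a declarative topology model into provisioning steps.
topology = {
    "env": "qa",
    "services": [
        {"name": "web", "image": "shop-web:1.4", "replicas": 2},
        {"name": "db",  "image": "postgres:13",  "replicas": 1},
    ],
}

def render_provisioning_plan(model: dict) -> list:
    """Generate one (hypothetical) deploy command per service in the model."""
    return [
        f"deploy --env {model['env']} --name {svc['name']} "
        f"--image {svc['image']} --replicas {svc['replicas']}"
        for svc in model["services"]
    ]

for step in render_provisioning_plan(topology):
    print(step)   # the generated 'code'; no hand-written script per environment
```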


Essentially, this allows modeling of the entire CD pipeline (tools such as Jenkins 2.0 already support the concept of "pipeline as code") along with the topologies of the environments that make up the pipeline. Model-based topologies also allow us to test environments efficiently, to ensure that defects do not arise from functional differences between environments such as development, QA, staging, and production. The idea behind such testing is to ensure that the environment topologies in a CD pipeline are functionally equivalent (or congruent), even though they may differ in scale. Manual verification would obviously be an onerous task for complex environments. However, model-based representations of topologies allow us to perform static functional equivalence testing across those environments to detect anomalies. An approach to doing this is shown in the Figure below:

[Figure: static functional equivalence testing across environment topologies]
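
A minimal sketch of that equivalence check, assuming topologies are captured as structured data: functional attributes are compared, while scale attributes (replicas, CPU, memory) are ignored. The split between functional and scale attributes is an assumption made for illustration:

```python
# Sketch: static functional-equivalence testing between environment topologies.
SCALE_ATTRS = {"replicas", "cpu", "memory"}   # allowed to differ across environments

def functional_view(topology: dict) -> dict:
    """Project each service down to its functional (scale-independent) attributes."""
    return {
        svc["name"]: {k: v for k, v in svc.items() if k not in SCALE_ATTRS}
        for svc in topology["services"]
    }

def equivalence_anomalies(env_a: dict, env_b: dict) -> dict:
    a, b = functional_view(env_a), functional_view(env_b)
    return {name: (a.get(name), b.get(name))
            for name in a.keys() | b.keys() if a.get(name) != b.get(name)}

qa      = {"services": [{"name": "db", "image": "postgres:13", "replicas": 1}]}
staging = {"services": [{"name": "db", "image": "postgres:12", "replicas": 4}]}
print(equivalence_anomalies(qa, staging))   # flags the postgres version mismatch
```

The same diff, run against a checked-in baseline topology, can also flag components that have drifted out of conformance, as discussed next.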


In addition, we can establish traceability between different environment topologies. This allows us to perform change impact testing every time the topology of an environment is changed (e.g. by a developer or an operations engineer), by flagging topology components (e.g. an OS patch version) that have become non-conformant. The same approach can be used for regulatory compliance verification in regulated environments.

Release modeling also helps us map requirements (features/stories) to release payloads. Large enterprises typically have multiple release trains active at the same time, tied to different teams working on different backlogs (see Figure below).

[Figure: multiple concurrent release trains mapped to teams and backlogs]


As the payloads of these trains change, manually tracking and adapting to the changes can be a challenge. Modeling release trains with application payload information allows the deployment packages to be re-configured automatically, without manual intervention.
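
For example, a release-train model might carry its payload as data and re-derive the deployment manifest on every change, along these (illustrative) lines:

```python
# Sketch: a release-train model that re-derives its deployment manifest on change.
release_train = {
    "train": "2024-Q3-R2",
    "payload": {                       # story/feature id -> deployable artifact
        "S101": "cart-service:2.3.0",
        "S117": "pricing-service:1.8.1",
    },
}

def deployment_manifest(train: dict) -> list:
    """The manifest is always derived from the payload, never edited by hand."""
    return sorted(set(train["payload"].values()))

release_train["payload"].pop("S117")                 # a story slips to the next train
release_train["payload"]["S121"] = "web-ui:5.0.2"    # a new story joins this train
print(deployment_manifest(release_train))            # manifest re-derived automatically
```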

Pulling It All Together: "Everything Is a Model" (EAM)

Now that we have considered how different CD processes can take advantage of model-based approaches, let's consolidate them into an overall approach to model-based CD. Essentially, we propose a versioned model repository (analogous to a code repository) for the various processes (and artifacts), which are inter-related and together drive the end-to-end CD pipeline (see Figure below).

[Figure: a versioned model repository driving the end-to-end CD pipeline]


The model repository serves as the source of truth for all artifacts, and nothing can be changed without the check-out/check-in/verification process that is a core software engineering discipline. All changes (for example, a requirement change) originate in the models, which triggers change-based impact analysis across multiple models, generation of artifacts (for example, tests are the "code" generated from the model), and execution of pipeline processes (for example, a Continuous Integration tool like Jenkins is triggered by a committed change to a code model).
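
A minimal sketch of such a repository, with hypothetical hook names, showing how a check-in can trigger impact analysis and downstream generation:

```python
# Sketch: a versioned model repository whose check-in step fires pipeline hooks.
class ModelRepository:
    def __init__(self):
        self.versions = {}   # model id -> list of checked-in versions
        self.hooks = []      # callables run on every check-in

    def check_in(self, model_id: str, model: dict):
        self.versions.setdefault(model_id, []).append(model)
        for hook in self.hooks:          # e.g. impact analysis, artifact generation,
            hook(model_id, model)        # or triggering a CI engine like Jenkins

repo = ModelRepository()
repo.hooks.append(lambda mid, m: print(f"running impact analysis for {mid}"))
repo.hooks.append(lambda mid, m: print(f"regenerating tests and data from {mid}"))
repo.check_in("requirements/checkout-flow", {"nodes": ["start", "pay", "end"]})
```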

A typical model-based CD process could be as follows (a skeletal sketch of the flow appears after the list):

  1. The requirements model is checked out from the repository and updated to reflect the changes in requirements (e.g. a new feature or an enhancement to an existing feature). The appropriate release model is updated to reflect the new payload.

  2. Updates to the model trigger updates to source code, test cases, test data, and virtual services, which are also updated in the repository. Change impact analysis also identifies the minimal test set that needs to be executed.

  3. If a topology update is needed (for example, the new feature requires a new version of a dependent third-party library), the developer updates the dev topology and checks it back in, which triggers alerts (or auto-updates where feasible) to the downstream environment topology models.

  4. Updates to the code trigger a continuous integration engine (like Jenkins) to perform the usual CI tasks.

  5. Upon build completion, a deployment engine provisions the appropriate build verification environment and deploys the build (and other application components) there, along with the build verification tests, test data, and virtual services.

  6. The change-impact-based test sets are executed for build verification.

  7. Upon success, the deployment engine provisions and deploys the build to the QA environment, where testing happens at the integration level with the appropriate test data and virtual services, as described in the integration test topology model.

  8. Upon success, the deployment engine deploys the build to the next environment (such as staging), where end-to-end system testing happens with the appropriate test data and virtual services, as described in the staging topology model.

  9. The release model tracks the progression of the payload along the pipeline and deploys the release to production using the appropriate process embedded in the model (for example, a blue-green release, or a dark deploy for a future business-driven go-live date).
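
A skeletal sketch of this flow as code, with each stage gated on the success of the one before it (the stage bodies are placeholders, not real integrations):

```python
# Skeletal sketch: the model-based CD flow as an ordered chain of gated stages.
def run_pipeline(change):
    stages = [
        ("update requirements and release models", lambda c: True),
        ("regenerate code, tests, data, virtual services", lambda c: True),
        ("continuous integration build", lambda c: True),
        ("provision build-verification environment and deploy", lambda c: True),
        ("run change-impact test set", lambda c: True),
        ("promote through QA, staging, production per release model", lambda c: True),
    ]
    for name, stage in stages:
        if not stage(change):            # any failing stage stops the pipeline
            print(f"pipeline stopped at: {name}")
            return False
        print(f"passed: {name}")
    return True

run_pipeline({"requirement": "new feature or enhancement"})
```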

As described before, the principal benefit of this approach is to abstract one level above "everything is code" to enable better dependency management, automated change impact management, and mostly script-less automation of pipeline processes.

Continuous Modeling?

In the spirit of the "Continuous Everything" philosophy in DevOps, we propose "Continuous Modeling" as a new approach to managing CD processes. Continuous Modeling is in fact an established concept in mathematics, where modelers continuously update models of systems with new data to test their validity, resulting in continuous refinement (or rollback if validation fails). In many ways, our Continuous Modeling approach for CD is similar. It allows for rapid management of change, early testing (with fast failure detection), and continuous refinement of the application system. In an age of digital transformation driven by big data, analytics, and autonomics, we feel this capability is critical as we build self-healing systems that analyze large data streams, use machine learning to make intelligent decisions, and continuously refine our applications.

In subsequent blogs, we will delve into each of the model-based processes in detail and demonstrate how they are supported by CD solutions from CA Technologies. Of course, we can count on Alex to continue to lead the charge on Model-Based Requirements and Testing as we break new ground in this space.

In conclusion, we believe that the modeling approaches described above will be infused into every aspect of CD going forward and will evolve the way CD is performed. Stay tuned, my friends, and may all our CD be model-based!
