Ranger4 DevOps Blog

Service Virtualization: Considerations at Implementation Time

Posted by Steve Green on Sat, Feb 8, 2014 @ 00:02 am

The evolution of applications is accelerating. Applications are no longer discrete islands; they are built on complex, interconnected sets of services spanning disparate technologies, development teams, deployment topologies and organizations. Developers are expected to deliver high-quality applications while testing budgets are often constrained. A combination of automated integration testing and test virtualization can help test teams improve quality and keep pace with the rate of change.

Measuring success

A simple measure of success for a test manager is the ratio of captured defects versus escaped defects. However, success or failure is not simply determined by the number of defects that have escaped into production. Categorization of defects to determine where the defect should have been found can dramatically reveal the efficiency or inefficiency of your testing. For example, if a functional defect is found during end-to-end system testing, the costs of remediation would far exceed the costs of fixing the defect as it was introduced in an earlier development phase. The increased costs would be due to factors such as: more regression testing, more test resources, usage of more live-like environments and greater requirement for coordination.
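The capture-versus-escape ratio and the phase-weighted cost idea above can be sketched in a few lines. Note that the phase names and cost weights below are illustrative assumptions, not figures from this post:

```python
# Minimal sketch of the defect-capture metric described above.
# PHASE_COST weights are invented for illustration; calibrate to your project.

def defect_detection_efficiency(captured: int, escaped: int) -> float:
    """Fraction of all known defects caught before production."""
    total = captured + escaped
    return captured / total if total else 1.0

# Relative remediation cost by the phase in which an escaped defect
# should have been found - later escapes cost disproportionately more.
PHASE_COST = {"unit": 1, "integration": 5, "system": 10, "production": 50}

def weighted_escape_cost(escapes_by_phase: dict) -> int:
    """Sum the cost-weighted count of escaped defects per phase."""
    return sum(PHASE_COST[phase] * count
               for phase, count in escapes_by_phase.items())

print(defect_detection_efficiency(90, 10))                    # 0.9
print(weighted_escape_cost({"system": 2, "production": 1}))   # 70
```

Categorizing escapes by phase, rather than just counting them, is what makes the inefficiency visible: two system-test escapes and one production escape weigh far more than a dozen unit-level misses.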

Don’t let developers write the tests for their own code

Having developers write the testing simulations or stubs used to validate their own new or changed code is a bad idea - analogous to having a student mark their own homework or exam. Testers, who know how the functionality should be exercised and what data is required to properly cover the various scenarios, can easily create virtual components and tests and make them available to the development team.

Manage requirements: establish an incremental integration plan

Accurate and unambiguous requirements are particularly vital for integration testing. Business processes must be defined and broken down into their constituent parts so that corresponding testing requirements can be defined at the most granular level of service interaction. Requirements coverage is thus tracked at:

- The functional level

- The service-interaction level

- The business-process level

- The full-system level

When granular requirements are fully visible, project leads and test managers can more easily collaborate to plan an effective and controlled release schedule. Gradually controlling the introduction of functions and components into test environments makes it faster and easier to isolate faults and defects.
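Tracking coverage at each of these levels need not be complicated. As a hedged sketch (the level names and requirement IDs below are illustrative, not prescribed by any particular tool):

```python
# Sketch: tracking requirements coverage at the four granularity levels
# discussed above. Level names and requirement IDs are illustrative.
from collections import defaultdict

LEVELS = ("functional", "service_interaction", "business_process", "full_system")

class CoverageTracker:
    def __init__(self):
        self.required = defaultdict(set)  # level -> all requirement IDs
        self.covered = defaultdict(set)   # level -> IDs with passing tests

    def register(self, level: str, req_id: str):
        self.required[level].add(req_id)

    def mark_passed(self, level: str, req_id: str):
        self.covered[level].add(req_id)

    def report(self) -> dict:
        """Coverage ratio per level, for levels with registered requirements."""
        return {lvl: len(self.covered[lvl]) / len(self.required[lvl])
                for lvl in LEVELS if self.required[lvl]}

tracker = CoverageTracker()
tracker.register("functional", "REQ-1")
tracker.register("functional", "REQ-2")
tracker.mark_passed("functional", "REQ-1")
print(tracker.report())  # {'functional': 0.5}
```

A per-level report like this is what lets project leads sequence the release schedule: a level with thin coverage is a level whose components are not yet safe to introduce into the shared test environment.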

Identify services to virtualize

You need to identify the right services to virtualize - start by bringing together the key development stakeholders involved in the application life cycle and find out where your organization experiences the most testing pain. Then ask these questions:

  1. Do you have all the environments you need for integration testing?

  2. Are these test environments available to teams throughout the development cycle?

  3. Do you experience downtime due to unavailable test environments?

  4. How often does downtime occur? How long do your teams usually have to wait?

  5. What’s the impact on time and cost due to testing downtime?

  6. Does your application interface with third-party services?

  7. Do you need to pay for and schedule access to these third-party interfaces prior to scheduling your tests? How much does this cost?

  8. Who controls the information needed for creating test environments?

  9. Do individuals or teams conflict with each other when scheduling the sharing of test environments (or parts of a test environment)?

The answers to these questions should enable you to identify a prioritised area where you can start your project.

Techniques to consider

After establishing an incremental integration plan for the project, test professionals should consider the following techniques to help achieve a more proactive test procedure:

Employ test virtualization

In test virtualization, real components are replaced by virtual components (“stubs”) that model and simulate real system behaviour. Virtualizing the environment helps to eliminate application test dependencies and reduces the setup and infrastructure costs of traditional testing environments. A proactive test plan should use test virtualization to make virtual components available for key service components, allowing situations to be simulated and tested more easily.
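The core idea can be sketched in plain Python. The `CreditCheckService` interface and its responses below are invented for illustration; real virtualization tooling typically records and replays actual service traffic rather than hand-coding behaviour:

```python
# Sketch of a virtual component ("stub") standing in for a real downstream
# service. CreditCheckService and its scores are illustrative inventions.

class CreditCheckService:
    """Interface the application expects from the real service."""
    def score(self, customer_id: str) -> int:
        raise NotImplementedError

class VirtualCreditCheck(CreditCheckService):
    """Virtual component: canned, deterministic behaviour for testing."""
    def __init__(self, canned_scores: dict):
        self.canned = canned_scores

    def score(self, customer_id: str) -> int:
        # Unknown customers get a mid-range default score.
        return self.canned.get(customer_id, 600)

def approve_loan(service: CreditCheckService, customer_id: str) -> bool:
    """Application logic under test, unaware the service is virtual."""
    return service.score(customer_id) >= 650

stub = VirtualCreditCheck({"good-customer": 720, "bad-customer": 480})
print(approve_loan(stub, "good-customer"))  # True
print(approve_loan(stub, "bad-customer"))   # False
```

Because the application depends only on the interface, the virtual component can be swapped in without code changes, and awkward scenarios (timeouts, edge-case scores) can be simulated on demand rather than waiting for the real service to produce them.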

Use continuous system-level testing and asset sharing

Running full regression cycles whenever new or virtualized components are introduced provides an immediate feedback loop to the development team, who can run the exact same scripts and replicate and resolve issues with minimal effort. This promotes innovation rather than remediation. Testing tools encourage developers and test teams to collaborate by sharing integration tests and virtualized services throughout the software development lifecycle (DevOps).
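That feedback loop can be sketched as a suite that reruns in full against the current system whenever a component changes. The suite contents and the toy `reserve` component below are illustrative assumptions:

```python
# Sketch: rerun the full regression suite whenever a new or virtualized
# component is introduced. The tests and component are illustrative.

def run_suite(tests: dict, system: dict) -> list:
    """Run every named check against the system; return the failures."""
    results = {name: check(system) for name, check in tests.items()}
    return [name for name, passed in results.items() if not passed]

# The shared regression suite - the same scripts devs and testers run.
REGRESSION_TESTS = {
    "reserve_within_stock": lambda s: s["reserve"](2) is True,
    "reserve_beyond_stock": lambda s: s["reserve"](99) is False,
}

def make_system(stock: int) -> dict:
    """Build a fresh (virtualized) inventory component for each cycle."""
    state = {"stock": stock}
    def reserve(qty: int) -> bool:
        if state["stock"] < qty:
            return False
        state["stock"] -= qty
        return True
    return {"reserve": reserve}

print(run_suite(REGRESSION_TESTS, make_system(5)))  # [] -> no failures
```

The point is that the suite is a shared asset: when a regression appears, developers rerun the identical scripts against the identical virtual components, so replicating the failure costs minutes rather than days.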

Plan for effective data management

Representative, appropriate data is needed in order to support the necessary test coverage and achieve the appropriate level of confidence in a delivered solution. It’s recommended that data considerations begin at the requirements stage and are factored into test creation and execution. Given limited timescales and budgets it is usually necessary to use equivalence partitioning or boundary analysis to identify the data that is absolutely essential to the project. Test data management is such an important activity that it is often assigned to dedicated individuals.
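Equivalence partitioning and boundary analysis are mechanical enough to sketch directly. The 1-100 valid range below is an invented example:

```python
# Sketch of boundary-value and equivalence-partition selection for a
# numeric input with a valid range of lo..hi (range values are examples).

def boundary_values(lo: int, hi: int) -> list:
    """Classic boundary analysis: just outside, on, and just inside each edge."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def equivalence_partitions(lo: int, hi: int) -> dict:
    """One representative value per partition: below, within, above the range."""
    return {
        "below_range": lo - 10,
        "within_range": (lo + hi) // 2,
        "above_range": hi + 10,
    }

print(boundary_values(1, 100))         # [0, 1, 2, 99, 100, 101]
print(equivalence_partitions(1, 100))  # one representative per partition
```

Six boundary values plus one representative per partition is usually enough to exercise a numeric field's failure modes - a far smaller, more deliberate data set than testing arbitrary values, which is exactly what constrained timescales demand.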

Reduce E2E testing and isolate the GUI

Since integration testing is incremental, E2E testing takes on less significance. When a proactive test approach is implemented, far less time is spent performing costly end-to-end testing - by the time full end-to-end functionality has been created in the test environment, the functional, integration and business-process-level tests will have been executed many times. Incremental and continuous testing will have removed many of the risks of the integrated solution.

It’s recommended that end-to-end testing is focused on driving through the E2E processes via the various GUI front ends. Implementing automated testing that occurs at the service layer and bypasses the GUI makes it faster to create and execute tests and delivers a more robust process than using GUI-driven automated scripts.
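A service-layer test needs no browser and no GUI automation at all. The `OrderService` API below is invented for illustration; the pattern is driving the same logic the GUI would invoke, directly:

```python
# Sketch: testing at the service layer, bypassing the GUI entirely.
# OrderService and the price data are illustrative inventions.

class OrderService:
    def __init__(self, price_lookup):
        # price_lookup may itself be a virtualized dependency.
        self.price_lookup = price_lookup

    def total(self, items: list) -> int:
        """Total cost for a list of (sku, quantity) pairs."""
        return sum(self.price_lookup(sku) * qty for sku, qty in items)

# Virtualized pricing service standing in for the real one:
prices = {"apple": 2, "pear": 3}
service = OrderService(lambda sku: prices[sku])

# Service-layer assertion - fast to write, fast to run, robust to GUI change.
assert service.total([("apple", 2), ("pear", 1)]) == 7
print("service-layer test passed")
```

Tests like this run in milliseconds and do not break when a button moves, which is why they should carry the bulk of automation, leaving the GUI scripts to verify only the end-to-end journeys themselves.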

It’s recommended that GUI components are isolated in environments where they interface with virtualized services - this ensures a high rate of test execution. The GUI components can then be periodically introduced, validated and withdrawn through the project life cycle (they should only be formally introduced after completion of all other integration testing). Combining automation at multiple isolated layers of an application and a more predictable execution environment for GUI tests helps mitigate the traditional challenges of a GUI-only approach to testing.

Test earlier (shift left)

The above considerations are all undertaken in order to test more effectively at an earlier point in the software development lifecycle. It is widely accepted that testing and defect resolution are more expensive when undertaken at the later stages of integration.

Avoid the Big Bang

A traditional “Big Bang” test method brings all integration points together for end-to-end testing. With this method, there is a sudden threshold beyond which many more test cases become runnable, and this surge causes a sharp drop in the percentage of cases that have been executed and passed.

With Big Bang testing, a huge proportion of the project’s risk is deferred until late in the development cycle, when all of the components finally become available. This profile needs to be reversed by addressing integration risk earlier and more continuously (shifting left).

Topics: GreenHat, Service Virtualization, Integration Testing, DevOps