Part 1 of 3
Strategies to improve environment availability
Rapid integration testing is key to delivering frequent, high-quality software, but environment availability is often a limiting factor. This article reviews several strategies to improve environment availability and explains when to use each one.
Integration testing is where delivered systems are validated: it's where the business can actually see the applications and determine whether development has built what was required. As software systems become increasingly componentized and composed of more and more services, the lag from code change to integration testing becomes a key predictor of time to market and developer productivity.
The ideal process is simple. Every time a developer changes code, all tests run quickly and feedback reaches the developer fast: the changed components are built, unit tested, and deployed to an integration environment, and all integration tests run in just a few minutes.
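The ideal flow above can be sketched as a sequence of stages that stops at the first failure. This is a minimal illustration, not a real CI system; the stage names and `make` targets are hypothetical placeholders for your own build, deploy, and test tooling.

```python
# Minimal sketch of the commit-to-feedback pipeline described above.
# The stage names and `make` targets are hypothetical; substitute the
# build, deploy, and test commands your project actually uses.
import subprocess

STAGES = [
    ("build",            ["make", "build"]),
    ("unit-test",        ["make", "unit-test"]),
    ("deploy-to-integ",  ["make", "deploy", "ENV=integration"]),
    ("integration-test", ["make", "integration-test"]),
]

def run_pipeline(runner=subprocess.run):
    """Run each stage in order; stop and report on the first failure."""
    for name, cmd in STAGES:
        result = runner(cmd)
        if result.returncode != 0:
            # Fast feedback: the developer learns which stage broke.
            return (name, "failed")
    return (None, "passed")
```

The `runner` parameter exists so the pipeline logic can be exercised without actually shelling out, which is also how the sketch itself can be tested.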
Unfortunately, that ideal is not reality for many teams. Automated tests can be too few or take too long. Continuous integration might not be set up. Automated deployments of complex applications can require special tools.
Solutions to these challenges are fairly well understood today. Tests should be automated, with a heavy weighting toward API tests. Setting up a continuous automated build process is simple, so there is no excuse for not having one. Deployment automation tools are now well established.
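To make the API-testing emphasis concrete, here is a hedged sketch of an automated API test. It starts a tiny in-process HTTP stub standing in for a real service, then asserts on the response; the `/status` endpoint and its payload are hypothetical, and in practice the test would point at a deployed service rather than a stub.

```python
# Sketch of an automated API test. The stub server, /status endpoint,
# and {"status": "ok"} payload are hypothetical examples.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubHandler(BaseHTTPRequestHandler):
    """Tiny stand-in for the real service under test."""
    def do_GET(self):
        body = json.dumps({"status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep test output quiet

def test_status_endpoint():
    # Port 0 asks the OS for any free port, so tests don't collide.
    server = HTTPServer(("127.0.0.1", 0), StubHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    try:
        url = f"http://127.0.0.1:{server.server_port}/status"
        with urllib.request.urlopen(url) as resp:
            assert resp.status == 200
            assert json.loads(resp.read()) == {"status": "ok"}
    finally:
        server.shutdown()
    return True
```

Because API tests exercise the service boundary rather than the UI, they tend to be faster and less brittle, which is why weighting the suite toward them pays off.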
However, an increasingly common challenge for many organizations is a lack of integration testing environments. They may be incomplete. They may be inconsistent. There just may not be enough of them. This article looks at why these problems exist and what to do about it.
Limitations on environments
To get additional, higher-quality testing environments and speed up feedback, you first need to understand the constraints on environments. That knowledge helps you resolve the right issues.
- Limited hardware: Resources are required to run test environments. Those resources aren't free.
- Expensive to set up: Setting up a new test environment requires provisioning servers, configuring middleware, and getting the applications running. Those tasks take considerable effort.
- Expensive to maintain: As the number of test environments grows, it takes effort to keep configuration, patch levels, and so on consistent across them.
- Inconsistent utilization: Sometimes a team needs many environments; at other times, just a few.
- Precious components: Some application components are expensive to use for testing, which limits how frequently you want to test against them. Third-party web services that charge per transaction, mainframe components, and appliance-based applications are common examples.
- Missing components: Sometimes another team owns a service you need to test against but they haven't yet delivered it. This leaves you with an incomplete solution.
- Broken components: When many components change frequently, the likelihood that some component is broken at any given time is high.
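The missing-components case in particular is often worked around with a hand-rolled test double while waiting for the other team to deliver. The sketch below assumes the business logic talks to the dependency through an interface; `FakeInventoryService`, `check_stock`, and `can_fulfil_order` are all hypothetical names invented for illustration.

```python
# Hypothetical example: unblocking integration testing when a dependency
# (an inventory service owned by another team) hasn't been delivered yet.
class FakeInventoryService:
    """Stand-in for the undelivered inventory service."""
    def __init__(self, stock):
        self._stock = stock  # canned data instead of a live backend

    def check_stock(self, sku):
        return self._stock.get(sku, 0)

def can_fulfil_order(sku, qty, inventory):
    # The code under test depends only on the check_stock interface,
    # so the fake can be swapped for the real client once it ships.
    return inventory.check_stock(sku) >= qty
```

The key design choice is that the fake implements the same interface the real client will, so swapping it out later requires no change to the code under test.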
In general, these characteristics of integration test environments reinforce each other. For example, expensive setup is tolerable for long-lived environments, but inconsistent utilization means the need may be short-lived. Maintaining environments is easier if they are always on; unfortunately, hardware costs make it desirable to shut them down when not in use.
In part 2, we will look at techniques to resolve these bottlenecks.