Testing can mean very different things, depending on the software to be tested, the organization in which the testing takes place, and several other factors. Since testing can be so different, it is useful to have a classification system for testing at hand. It can guide test management during planning and preparation of the various test activities.

A notable classification of testing has been proposed by Robert L. Glass, along with recommendations for test planning. I came across it in late 2008, when I read one of his columns in IEEE Software magazine. Let’s have a look at Glass’s testing classification and discuss what it means for test management practice. At the end of the article, you will find references to additional information.

Glass classifies testing along two dimensions. The first dimension is the goal of testing, i.e., the main principle that drives the identification of test cases. There are four goal-driven approaches to testing: (1) Requirements-driven testing derives test cases from the defined requirements. (2) Structure-driven testing derives test cases from the software’s implementation structure. (3) Statistics-driven testing derives test cases from the software’s typical, most frequently occurring usage scenarios. (4) Risk-driven testing derives test cases from the most critical software-related risks.
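To make the contrast between the first two approaches concrete, here is a minimal sketch in Python; the discount function and its rules are purely hypothetical. The requirements-driven cases follow directly from the stated rules, while the structure-driven case is added only to make sure every branch of the implementation gets executed.

```python
# Hypothetical example: a tiny discount function used only to contrast
# requirements-driven and structure-driven test case derivation.

def discount(order_total, is_member):
    """Hypothetical rules: members always get 10%; non-members get 5%
    on orders above 100, otherwise 0%."""
    if order_total > 100:
        return 0.10 if is_member else 0.05
    return 0.10 if is_member else 0.0

# Requirements-driven: one test case per stated rule.
assert discount(150, is_member=True) == 0.10   # "members always get 10%"
assert discount(150, is_member=False) == 0.05  # "5% above 100"
assert discount(50, is_member=False) == 0.0    # "otherwise 0%"

# Structure-driven: add whatever cases are needed so that every branch
# of the implementation is executed, independent of the requirements text.
assert discount(50, is_member=True) == 0.10    # remaining branch: member, total <= 100
```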

The second dimension of Glass’s model refers to the phase of the software development lifecycle in which the testing takes place: (1) Unit testing involves the lowest-level components of the software product. (2) Integration testing involves the intermediate levels of the software product and lifecycle. (3) System testing involves the final level of software development.
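The phase dimension can be illustrated the same way, again with entirely hypothetical components: the same functionality is exercised once at the unit level, in isolation, and once at a higher level of aggregation.

```python
# Hypothetical sketch of the phase dimension: the same functionality,
# tested once in isolation and once at a higher level of aggregation.

def parse_amount(text):
    """Lowest-level component: convert a string like '12.50' to cents."""
    return round(float(text) * 100)

def total_in_cents(amounts):
    """Intermediate-level component: aggregate several parsed amounts."""
    return sum(parse_amount(a) for a in amounts)

# Unit testing: the lowest-level component, exercised on its own.
assert parse_amount("12.50") == 1250

# Integration testing: the components working together.
assert total_in_cents(["12.50", "0.50"]) == 1300
```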

Glass argues that both dimensions must be combined. For each combination, he recommends the degree to which the respective type of testing should be carried out. The following table summarizes the recommendations.

Testing approach    | Unit testing           | Integration testing       | System testing
--------------------|------------------------|---------------------------|----------------------------------------
Requirements-driven | 100% unit requirements | 100% product requirements | 100% system requirements
Structure-driven    | 85% logic paths        | 100% modules              | 100% components
Statistics-driven   | –                      | –                         | 90-100% of usage profiles, if required
Risk-driven         | As required            | As required               | 100%, if required
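
Glass states these figures as coverage targets, not as tool settings, but a structure-driven target such as “85% logic paths” can be approximated in practice with branch-coverage tooling. As one hedged illustration, assuming a Python code base measured with coverage.py, a configuration along the following lines makes the coverage report fail below that threshold:

```ini
# .coveragerc -- illustrative only; the 85% figure mirrors Glass's
# unit-level, structure-driven recommendation and is not a tool default.

[run]
# Measure branch (decision) coverage rather than line coverage alone.
branch = True

[report]
# Make "coverage report" exit with an error when total coverage is below 85%.
fail_under = 85
```

Running the unit tests with “coverage run -m pytest” and then “coverage report” enforces the threshold; most other coverage tools offer a comparable threshold option.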

How can we use this classification system and its recommendations? First, it tells us that the focus of testing should differ across lifecycle phases, or stages of product aggregation. Second, it recommends that we aim for complete requirements coverage in every testing phase, while the other testing approaches should be emphasized mainly during the later phases of testing. In this way, we get guidance for focused and more efficient testing.
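Operationally, “complete requirements coverage” comes down to traceability from requirements to test cases. The following minimal sketch (all identifiers are hypothetical) shows the idea: map each requirement ID to the tests that exercise it and report whatever is left uncovered.

```python
# Minimal sketch (all identifiers are hypothetical) of tracking
# requirements coverage: map each requirement ID to the test cases
# that exercise it and report whatever is left uncovered.

requirements = {"REQ-1", "REQ-2", "REQ-3"}

# Traceability data, e.g. collected from test metadata, a spreadsheet,
# or a test-management tool.
tests_to_requirements = {
    "test_member_discount": {"REQ-1"},
    "test_bulk_discount": {"REQ-2"},
}

covered = set().union(*tests_to_requirements.values())
uncovered = requirements - covered

print(f"Requirements coverage: {100 * len(covered) / len(requirements):.0f}%")  # 67%
print(f"Uncovered requirements: {sorted(uncovered)}")  # ['REQ-3']
```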

While I value the essence of Glass’s classification very much, I partly question the first dimension of testing approaches, and I am particularly sceptical of the recommended 100% testing degree for the requirements-driven approach. In my opinion, only the first two approaches, requirements-driven and structure-driven testing, are basic and fundamentally different categories. The other two, statistics-driven and risk-driven testing, are variants of the basic approaches: statistics-driven usage scenarios and risks can only be determined on the basis of requirements or structure. The latter two approaches are therefore means of focusing requirements-driven and structure-driven testing.
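Seen that way, usage statistics and risk act as weights that order scenarios which were already derived from the requirements or the structure. A hedged sketch of that idea, with purely illustrative scenario names and numbers:

```python
# Hedged sketch: usage statistics and risk expressed as weights that
# order requirement-derived test scenarios. All names and numbers are
# purely illustrative.

scenarios = [
    # (scenario, relative usage frequency, risk = likelihood x impact)
    ("checkout with saved card", 0.55, 0.9),
    ("checkout with new card",   0.30, 0.7),
    ("gift-card redemption",     0.10, 0.4),
    ("bulk corporate order",     0.05, 0.8),
]

# Spend testing effort first on scenarios that are heavily used or risky.
prioritized = sorted(scenarios, key=lambda s: s[1] + s[2], reverse=True)

for name, usage, risk in prioritized:
    print(f"{name:<28} usage={usage:.2f} risk={risk:.2f}")
```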

Why am I sceptical of the 100% degree for requirements-driven testing? I find it impractical for several reasons. First, most testing that I have encountered suffered from severe time and resource constraints, which clearly demanded “less than 100% testing!” Second, requirements are often vague and incomplete, and 100% of something vague is just an illusion of 100%. Third, hardly any project explicitly states unit and product requirements; as a consequence, there is no basis for stipulating any kind of test coverage for those requirement types.

However, the basic messages of Glass’s testing classification remain valid and important: distinguish between requirements-driven and structure-driven testing, apply different kinds of testing at different stages of system aggregation, and use the statistics-driven and risk-driven approaches to focus the testing. In an earlier article, Two Essential Stages of Testing, I proposed a pragmatic approach for establishing good test coverage based on these principles.

Further information about Robert L. Glass’s classification can be found in two of his columns in IEEE Software magazine and in a publicly available excerpt from one of his latest books: “An Ancient (but Still Valid?) Look at the Classification of Testing” (IEEE Software, Nov/Dec 2008), “A Classification System for Testing, Part 2” (IEEE Software, Jan/Feb 2009), and “The Many Flavors of Testing” (book excerpt).

Two Essential Stages of Testing: https://makingofsoftware.com/archives/358