This article is part 3 of our series about “requirements traceability”.

If you are dealing with the term requirements coverage in the context of systems or software development, you might notice the following tension:
On the one hand, requirements coverage leaves room for interpretation, because it is not well defined. On the other hand, several well-defined process standards and maturity models demand that coverage be measured, especially in safety-critical development. In this post, I report on the experiences we gained when we were confronted with this tension and on how we arrived at an “adjustable definition” of the term coverage.
Let’s have a look at three different sources for a definition of the term requirements coverage. As we will see, all of these sources treat requirements coverage as a direct relationship between requirements and test cases.
A Google search shows several definitions of the term, but none of them is universal. Most of these definitions are related to commercial requirements management tools and are therefore tool-specific. Still, they have one thing in common: they consider a requirement covered if at least one test case is assigned to it.
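To make this naive notion concrete, here is a minimal sketch; the artifact names and the assignment table are made-up sample data, not any particular tool’s data model:

```python
# Naive coverage as found in most tool definitions: a requirement counts as
# covered as soon as at least one test case is assigned to it.
requirements = ["Cust-Req 1", "Cust-Req 2", "Cust-Req 3"]

tests_for = {                       # hypothetical test case assignments
    "Cust-Req 1": ["Test a"],
    "Cust-Req 2": [],               # no test case assigned
    "Cust-Req 3": ["Test b", "Test c"],
}

covered = [req for req in requirements if tests_for.get(req)]
print("covered:", covered)                                                # ['Cust-Req 1', 'Cust-Req 3']
print(f"requirements coverage: {len(covered) / len(requirements):.0%}")   # 67%
```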
Wikipedia (at least the English and German editions) does not provide a definition of requirements coverage. However, it defines code coverage as a measure of “the degree to which source code is executed when a test suite runs”.
Automotive SPICE is a standard for assessing the maturity of the development process for control units in the automotive industry. The A-SPICE Process Assessment Model (PAM) uses the V-Model. When it comes to verifying development artifacts of a given type, the PAM demands that “the selection of test cases shall have sufficient coverage”.
The following picture roughly illustrates a development process according to the V-Model. I am going to use it to report on our experiences.
In one specific project, the customer demanded regular progress reports based on their customer requirements. They wanted to know how many of them were analyzed, implemented, and covered.
The terms analyzed, implemented, and covered were defined based on coverage metrics along the layers of the V-Model.
Let me illustrate this by means of some images showing development artifacts and their traceability links:
Customer requirement Cust-Req 3 is covered: high-level requirement HL-Req 11 and low-level requirement LL-Req 18 have been derived from it, and both have an adequate test case assigned (HL-Test g and LL-Test d, respectively).
Customer requirement Cust-Req 2 is analyzed. It is not considered implemented, because HL-Req 19 lacks a low-level requirement.
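The following sketch shows one plausible way to derive these three states from trace links, following the two examples above. The rules are my reading of the images, and the dictionaries are illustrative sample data rather than the real project data:

```python
# Progress states of a customer requirement, derived from trace links (one
# plausible interpretation of the examples above, not the project's exact rules).
hl_reqs_for = {   # customer requirement -> derived high-level requirements
    "Cust-Req 2": ["HL-Req 12", "HL-Req 19"],
    "Cust-Req 3": ["HL-Req 11"],
}
ll_reqs_for = {   # high-level requirement -> derived low-level requirements
    "HL-Req 11": ["LL-Req 18"],
    "HL-Req 12": ["LL-Req 17"],
    "HL-Req 19": [],              # no low-level requirement derived yet
}
tests_for = {     # requirement -> assigned test cases
    "HL-Req 11": ["HL-Test g"],
    "LL-Req 18": ["LL-Test d"],
}

def status(cust_req):
    hl = hl_reqs_for.get(cust_req, [])
    if not hl:
        return "not analyzed"
    if not all(ll_reqs_for.get(h) for h in hl):
        return "analyzed"         # high-level requirements derived, low-level ones missing
    ll = [l for h in hl for l in ll_reqs_for[h]]
    if all(tests_for.get(r) for r in hl + ll):
        return "covered"          # every derived requirement has a test case assigned
    return "implemented"

print(status("Cust-Req 2"))  # analyzed
print(status("Cust-Req 3"))  # covered
```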
So, what did we learn from this project?
The definition of requirements coverage given above (a direct relationship between requirements and test cases) is not sufficient.
Summing it up, we define coverage as follows:
A development artifact (requirement, test case, test result, code, etc.) is “covered by a given type of artifacts (target type)” if there is a chain of trace links leading from the artifact to at least one artifact of the target type; the relationship may be direct or indirect via intermediate artifacts.
Based on this definition, we can state the following (please refer to the images above): Cust-Req 3 is covered by low-level requirements and by test cases, whereas Cust-Req 2 is covered by high-level requirements but not by low-level requirements.
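A minimal sketch of this generalized check, assuming that trace links are stored as a simple adjacency list and artifact types in a lookup table (both are simplifications for illustration):

```python
# Generalized coverage: an artifact is covered by a target type if a chain of
# trace links leads from it to at least one artifact of that type.
links = {                            # artifact -> directly linked artifacts
    "Cust-Req 3": ["HL-Req 11"],
    "HL-Req 11": ["LL-Req 18", "HL-Test g"],
    "LL-Req 18": ["LL-Test d"],
}
artifact_type = {
    "Cust-Req 3": "customer requirement",
    "HL-Req 11": "high-level requirement",
    "LL-Req 18": "low-level requirement",
    "HL-Test g": "high-level test",
    "LL-Test d": "low-level test",
}

def is_covered_by(artifact, target_type):
    """Walk the trace links depth-first until an artifact of target_type is found."""
    stack, seen = [artifact], set()
    while stack:
        current = stack.pop()
        if current in seen:
            continue
        seen.add(current)
        if current != artifact and artifact_type.get(current) == target_type:
            return True
        stack.extend(links.get(current, []))
    return False

print(is_covered_by("Cust-Req 3", "low-level test"))         # True
print(is_covered_by("Cust-Req 3", "low-level requirement"))  # True
```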
This post is a simplified summary of several project experiences; a number of details have been left out for the sake of brevity.
Here’s another quote from the Automotive SPICE PAM: “Bidirectional traceability supports coverage, consistency and impact analysis.”
So, if you want (or need) to measure your requirements coverage, you need to establish traceability.
But how can you do that efficiently? Several guides recommend maintaining a traceability matrix as a baseline solution. This looks feasible at first glance; however, it does not scale. I cannot even imagine what a matrix with n dimensions (one for each layer of the V) and more than a thousand rows and columns would look like, let alone how one would maintain it.
Instead, what you need is a tool for your traceability management. Good traceability tools support you in creating, maintaining, and analyzing trace links. But the most important thing is: look for a tool that provides the following:
The previous section listed some criteria for the evaluation of traceability tools. Our tool recommendation is YAKINDU Traceability. The following screenshot shows an example of YAKINDU Traceability's analysis perspective: a dynamic query at the top, with the query result below.
The result is a requirements traceability matrix. As opposed to a classical two-dimensional matrix, YAKINDU Traceability shows a list of trace chains. If any artifacts are not covered, these trace chains contain gaps, both in the forward and in the backward direction. This representation scales and can easily be analyzed both by humans and by further queries. For example, let’s focus on the three rows for Cust-Req 2: the first row means that this requirement is linked to HL-Req 12, which in turn is linked to LL-Req 17. The second row is analogous to the first one. The third row, however, is not completely filled, which means that Cust-Req 2 is not covered by low-level requirements. For illustration purposes, the YAKINDU Traceability Overview is also included; it displays the trace graph of the artifacts linked to Cust-Req 2.
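The same idea can be sketched in a few lines: enumerate the chains from customer requirement to high-level requirement to low-level requirement and mark every missing link as a gap. The data mirrors the three rows described above, the identifiers of the second, analogous row are placeholders, and this is not YAKINDU Traceability’s actual data model:

```python
# Trace chains for a customer requirement; None marks a gap in the chain.
hl_for = {"Cust-Req 2": ["HL-Req 12", "HL-Req 13", "HL-Req 19"]}   # "HL-Req 13" is a placeholder
ll_for = {"HL-Req 12": ["LL-Req 17"], "HL-Req 13": ["LL-Req 16"], "HL-Req 19": []}

def trace_chains(cust_req):
    chains = []
    for hl in hl_for.get(cust_req, []) or [None]:
        for ll in (ll_for.get(hl, []) if hl else []) or [None]:
            chains.append((cust_req, hl, ll))
    return chains

for chain in trace_chains("Cust-Req 2"):
    print(chain, "GAP" if None in chain else "ok")
# ('Cust-Req 2', 'HL-Req 12', 'LL-Req 17') ok
# ('Cust-Req 2', 'HL-Req 13', 'LL-Req 16') ok
# ('Cust-Req 2', 'HL-Req 19', None) GAP
```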
Please note that in this post we’re only using a small set of sample test data. To create the screenshots, I configured YAKINDU Traceability to recognize all requirements and test cases from the very same Excel sheet shown below.
Even for this tiny amount of data, imagine what a “conventional” traceability matrix would look like and how hard it would be to find the relevant data in it.
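For readers who would like to experiment with such sample data themselves, here is a hedged sketch of how requirements, test cases, and their links could be read from a similar Excel sheet with pandas; the file name and the column names are assumptions about the sheet layout, and this is not how YAKINDU Traceability itself is configured:

```python
# Read artifacts and trace links from a sample Excel sheet (assumed layout:
# one row per artifact with columns "ID", "Type", and a comma-separated "Links to").
import pandas as pd

df = pd.read_excel("sample_artifacts.xlsx")   # hypothetical file name

artifact_types = dict(zip(df["ID"], df["Type"]))
links = {
    row["ID"]: [t.strip() for t in str(row["Links to"]).split(",")
                if t.strip() and t.strip().lower() != "nan"]
    for _, row in df.iterrows()
}

print(len(artifact_types), "artifacts,", sum(map(len, links.values())), "trace links loaded")
```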
If you want to learn more about YAKINDU Traceability and its features, check out our blog.