Six steps for removing subjectivity from testing system requirements

07-19-2013 06:49 AM - edited 09-30-2015 06:57 AM


By: James (Jim) R. Hughes, Global Business Analysis Capability Leader, Hewlett Packard Company


Author’s note: My blogs address the challenges that limit applications solution delivery success, and how to overcome them.


Good requirements are the foundation of every successful project. If we cannot specify what an application must do, then it will be difficult for developers to build software which meets the requirements. In addition, it has long been known that correcting a defect during requirements analysis is orders-of-magnitude less costly than correcting the defect if it is detected late in the development life-cycle or when the application is in production. Therefore, we should apply discipline to ensuring that our requirements are complete and accurate—in other words, we should test them.


Testing system requirements is different from testing software. When we test software, we can test it against a set of requirements to ensure that it produces the expected results. However, when we test requirements we have not yet produced a fleshed-out baseline against which to compare them—at best, we have a set of higher-level business requirements or scope statements against which we can map the system requirements. So we need a different strategy for testing requirements. This blog will consider only how to test a set of narrative requirements used to specify what a system should do. We will not consider how to assess the quality of Use Cases or User Stories (although they can also be tested).


There are many factors which could be assessed to test requirements. However, in our Business Analysis Capability at HP, we determined that many factors are correlated and redundant. We settled on the following basic criteria to test requirements statements:


  • Clear – An individual requirement is clear (unambiguous) if it addresses a single need, does not include multiple scenarios, has no ambiguous terms, and has no implied or hidden assumptions.
  • Complete – A requirement is complete if it has no missing information (including attributes such as priority, release, risk level, constraints, etc.), has precise calculations (where applicable, such as in a business rule), and can be interpreted only one way.
  • Consistent – A requirement is consistent if it does not contradict any other requirement or compromise the possibility of fulfilling another requirement, and complements or completes a set of related requirements.
  • Verifiable – A requirement is verifiable if it has an acceptance criterion and its fulfillment can be tested using one of the standard system testing approaches to produce an unambiguous result.
  • Solution Independent – A requirement is solution independent if it states what is required but not how the requirement should be met, and does not impose unnecessary or non-essential design or implementation constraints.
  • Traceable – A requirement is traceable if it can be mapped clearly to a business requirement or scope statement.

We use these criteria to test the quality of a set of requirements during requirements review and validation, which takes place at the end of the detailed requirements phase but before the requirements are delivered to the customer for sign-off. We also use them to assess the quality of a set of requirements provided by a customer or third party. We produce a Requirements Quality Index (RQI), which allows us to present an objective assessment of the quality of each requirement and of the set of requirements, thereby eliminating subjective opinions about the quality of the requirements from the review process.
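The blog does not publish the RQI formula, but the idea can be sketched in code. The following is a minimal illustration, assuming each requirement is scored pass/fail against the six criteria and the index is the average fraction of criteria met; the equal weighting and pass/fail scoring are assumptions for illustration, not HP's actual scheme:

```python
# Hypothetical sketch of a Requirements Quality Index (RQI).
# The six criteria come from the blog; the scoring scheme (equal
# weights, pass/fail per criterion) is an assumption for illustration.

CRITERIA = ["clear", "complete", "consistent",
            "verifiable", "solution_independent", "traceable"]

def requirement_rqi(results: dict) -> float:
    """Fraction of the six criteria a single requirement passes (0.0-1.0)."""
    return sum(bool(results.get(c)) for c in CRITERIA) / len(CRITERIA)

def set_rqi(requirements: list) -> float:
    """Average RQI across a set of requirements."""
    return sum(requirement_rqi(r) for r in requirements) / len(requirements)

reqs = [
    {c: True for c in CRITERIA},                           # passes all six
    {**{c: True for c in CRITERIA}, "verifiable": False},  # no acceptance criterion
]
print(round(set_rqi(reqs), 3))  # → 0.917
```

Because each requirement gets a numeric score against named criteria, reviewers debate whether a specific criterion is met rather than arguing a general impression of quality.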


This approach to verifying the quality of requirements moves testing earlier in the software development life cycle and helps to reduce development costs.



About the Author


James (Jim) R. Hughes, Global Strategic Capability Leader, Hewlett Packard Company

Jim has been with HP for 33 years and currently leads a global Strategic Capabilities Management team, with a specific focus on Business Analysis and Configuration Management. Jim also manages a team within the US State, Local, and Education division of HP. He was a member of the IIBA committee that created a Business Analyst Competency Model and he participated in the development of IEEE standards. Jim graduated from Harvard University, the University of British Columbia, and Ottawa Theological Hall. He currently lives in Toronto, Canada.


on 07-19-2013 09:16 AM

Excellent points, Jim. Specifically, I very much liked what you said in "Testing system requirements is different from testing software." I fully agree. It is a lot easier to test software than to test a requirement. Typically, it requires a seasoned BA and experienced testers working together.


I also have a point to add to your traceability factor. A requirement should have both upward and downward traceability. Upward, it should map to a scope statement or business requirement; downward, it should map to design, code, test cases, and defects. If not design and code, then at least each requirement should have traceability to test cases, to analyse the impact and priority of defects. When in question, the priority of a defect is the same as the priority of the requirement it impacts. That makes life much easier for test engineers.

on 07-22-2013 06:33 AM

Good point, Srini. I assumed, but should have spelled out, that traceability is bi-directional. As you say, 'upward' traceability is for scope management. Downward traceability is for impact assessment: if you change a requirement, you want to be able to determine which objects instantiate that requirement (code, test cases, user documentation, etc.).
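The bi-directional traceability described above can be sketched as two simple mappings: one from each requirement up to its business requirement, and one down to the artifacts that instantiate it. This is a minimal illustration only; the IDs and artifact names are hypothetical, and a real project would use a requirements management tool rather than in-memory dictionaries:

```python
# Minimal sketch of bi-directional requirement traceability.
# All identifiers (REQ-42, BIZ-7, TC-101, ...) are hypothetical examples.

from collections import defaultdict

upward = {}                  # requirement ID -> business requirement / scope item
downward = defaultdict(set)  # requirement ID -> {code, test cases, docs, ...}

def trace(req_id, business_id, artifacts):
    """Record both directions of traceability for one requirement."""
    upward[req_id] = business_id
    downward[req_id].update(artifacts)

trace("REQ-42", "BIZ-7", {"login_module", "TC-101", "TC-102"})

# Upward: which scope item does REQ-42 satisfy? (scope management)
print(upward["REQ-42"])           # BIZ-7
# Downward: what is impacted if REQ-42 changes? (impact assessment)
print(sorted(downward["REQ-42"]))
```

With the downward map in place, Srini's point about defect priority falls out directly: a defect found in TC-101 traces back to REQ-42, and inherits that requirement's priority.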

Thanks for reading. :)
