When discovering and resolving issues, two attributes are absolutely necessary: a calm head and an experimental mindset. Uncertainty is inevitable; the sooner this is accepted, the better you will become at solving problems.
Numbers that expose issues are of little use unless the system under investigation is improved. A strong increase in the outflow of bugs during Test Rounds C and D offers an opportunity for improvement. In The Personal MBA: Master the Art of Business, Josh Kaufman recommends breaking a process into the following categories to enable better analysis:
- Inflows
- Processes
- Triggers
- Conditionals
- Endpoints
- Outflows
Although these categories are geared more toward manufacturing, they are easily applied to software testing; a short sketch following the list below shows one way to model these flows in code.
- Inflows (What is required to QA a software application?)
- Use Cases
- Test Cases
- Automated Test Scripts
- Automated testing solutions, such as those offered by HP
- Test Data
- QA Servers
- Testers
- Processes (What processes are undertaken to conduct a test?)
- Review the requirements documentation for the enhancements.
- Review the enhancement use cases.
- Train testers in the basics of the application under test.
- Update existing test cases and scripts.
- Create new test cases based on new use cases.
- Analyze the test round risk.
- Decide on the test round scope. Using the test matrix, apply test methods and test types to each test level based on the risk the enhancements pose to the system, and decide which samples of the enhancements will be scrutinized based on this risk assessment (see part one of this series).
- Create the test round’s critical path.
- Begin testing based on the critical path.
- Triggers (What must occur before the next set of tasks may proceed?)
- How dependent is the current process upon the previous process?
- How dependent is the following process on the current process?
- Conditionals (What impact does a failed test or set of failed tests have on the test round, given the schedule?)
- What impact does a “blocker” (break in the critical path) have on the overall project schedule?
- What tests should be delayed if previous tests fail?
- Under which circumstances should issues cause all testing to stop?
- When should a deployment be delayed due to insufficient testing?
- What impact do new developers and a large number of use cases have on the quality of testing, given a fixed timeline?
- Endpoints (Where should endpoints be placed during testing?)
- After a test level is complete?
- After all the tests for a given type are finished?
- After all the tests for a given method are finished?
- After all tests with a given risk score have passed?
- Outflows (What does the test round produce, and when may the application be deployed?)
- How far along in the test cycle must we be before we can determine whether an application may be deployed?
- Do you wait until all selected test cases have been executed? What happens when you run out of time before all test cases have been executed?
- If certain tests fail, should the failed features be removed from the application so the deployment may proceed as scheduled?
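To make these categories concrete, here is a minimal sketch in Python. It is not a real framework: the names (TestRound, blocked_by, the sample processes) are hypothetical illustrations. It shows how, once the flows are written down as plain data, a trigger question such as "how dependent is the following process on the current process?" can be answered mechanically.

```python
# A minimal sketch, assuming hypothetical names; not a real framework.
from dataclasses import dataclass, field

@dataclass
class TestRound:
    inflows: list[str] = field(default_factory=list)        # use cases, test data, testers...
    processes: list[str] = field(default_factory=list)      # review, update, create, test...
    triggers: dict[str, str] = field(default_factory=dict)  # process -> process it depends on
    conditionals: list[str] = field(default_factory=list)   # "stop all testing if..." rules
    endpoints: list[str] = field(default_factory=list)      # checkpoints, e.g. "test level done"
    outflows: list[str] = field(default_factory=list)       # bug reports, deployment decision

    def blocked_by(self, failed: str) -> set[str]:
        """Return every process that cannot proceed because it depends,
        directly or transitively, on the failed process (a 'blocker'
        in the critical path)."""
        blocked: set[str] = set()
        frontier = {failed}
        while frontier:
            # Find processes triggered by anything in the current frontier.
            frontier = {p for p, dep in self.triggers.items()
                        if dep in frontier and p not in blocked}
            blocked |= frontier
        return blocked

# Example: a failed test-case update blocks everything downstream of it.
round_c = TestRound(
    processes=["review requirements", "update test cases",
               "run regression tests", "run acceptance tests"],
    triggers={"update test cases": "review requirements",
              "run regression tests": "update test cases",
              "run acceptance tests": "run regression tests"},
)
print(round_c.blocked_by("update test cases"))
# -> {'run regression tests', 'run acceptance tests'}
```

Writing the flows down this explicitly is the point of the exercise: it forces the trigger and conditional questions to be answered rather than assumed.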
As we see, asking the right questions is central to analyzing issues. After a set of basic questions has been identified, I recommend conducting a simple root cause analysis. Such an analysis can be very technical, but I prefer the simplest approach: asking the "why" question five times. Take this Outflow question as an example: "How far along in the test cycle must we be before we can determine whether an application may be deployed?" If the answer is 75% of the test cases with a risk score of 5, then ask why. Continue asking why five times, as demonstrated below:
- Why is running 75% of the tests with a risk score of 5 considered sufficient? Because we did not have time to run all the tests with this risk score.
- Why did we not have time to run the other tests? Because the customer was promised the application on the date given.
- Why was this promise made? Because it seemed sensible at the time.
- Why did it seem sensible? Because six weeks is plenty of time to develop and test any software.
- Why is six weeks considered plenty of time? Because that is the salesperson's best estimate, and no other criteria are used to determine the development and test deadlines.
Based on the answers to this Outflow question, the root cause behind the failures of Test Rounds C and D is that the salesperson is setting delivery dates.
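To see the arithmetic behind the first answer above, here is a minimal sketch, again with hypothetical names (TestResult, ready_to_deploy), that checks whether the "75% of the test cases with a risk score of 5" threshold has been met. The five-whys analysis suggests the policy behind the threshold, not the arithmetic, is what actually needs fixing.

```python
# A minimal sketch of the outflow check discussed above. The names and
# the 75% policy are illustrative assumptions, not a recommended gate.
from dataclasses import dataclass

@dataclass
class TestResult:
    name: str
    risk_score: int   # e.g. 1 (low) to 5 (high), from the risk assessment
    executed: bool
    passed: bool

def ready_to_deploy(results: list[TestResult],
                    risk_score: int = 5,
                    threshold: float = 0.75) -> bool:
    """True if at least `threshold` of the tests at `risk_score`
    have been executed and passed."""
    at_risk = [r for r in results if r.risk_score == risk_score]
    if not at_risk:
        return True  # nothing at this risk level to gate on
    passed = sum(1 for r in at_risk if r.executed and r.passed)
    return passed / len(at_risk) >= threshold

results = [
    TestResult("login", 5, executed=True, passed=True),
    TestResult("checkout", 5, executed=True, passed=True),
    TestResult("refund", 5, executed=True, passed=True),
    TestResult("audit log", 5, executed=False, passed=False),
]
print(ready_to_deploy(results))  # True: 3 of 4 (75%) risk-5 tests passed
```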
Depending on how thorough your investigation must be, you may need to conduct a root cause analysis on each answer to the Conditional, Endpoint and Outflow questions. I also recommend taking a close look at the Inflows and Processes and developing questions for these as well. Here is a short list that comes to mind:
- Do you have sufficient time to review the requirements?
- Does the team have enough time to review the application before they begin testing?
- Are the use cases available on time so new test cases can be written?
- Are the existing test cases and scripts sufficiently updated to reflect the lessons learned from previous test rounds?
- Are you giving your testers room to learn from each test round and enhance the quality of their tests accordingly?
- Are you succeeding in increasing the depth of testing each time a test is run?
- Are you having regular risk-oriented discussions with stakeholders?
The more you understand your own software testing processes and the policies that influence decisions, the better you will become at discovering the root cause behind issues.
Writing a Corrective Action Plan is beyond the scope of this essay, but once you have identified the root cause, hold a meeting with your stakeholders and explain what was discovered. Since software quality involves multiple departments, in many cases you must obtain cross-departmental cooperation to make any corrective action work. In the case of the salesperson setting delivery dates, a discussion must be held about how well this policy supports the values and goals of the company.
Life, like software quality assurance, is an iterative process. The only real failure is failing to learn from mistakes. Analyzing the inflows, processes, triggers, conditionals, endpoints and outflows is vital to the learning process, and running a root cause analysis against the answers to the questions raised ensures that lessons learned are transformed into improved systems.
For additional information, I recommend the following books:
The Personal MBA: Master the Art of Business by Josh Kaufman
Lessons Learned in Software Testing: A Context-Driven Approach by Cem Kaner, James Bach and Bret Pettichord