Now that we understand how to assess project risk, we are ready to plan and manage our test rounds. Because software development is a dynamic process, each level of testing must allow a great deal of flexibility. Offering every testing method and type at each level gives the quality assurance manager the ability to tailor each test round to address the unique risks each system change presents.
I recommend the following matrix be applied to each test level (unit, integration, system and acceptance).
|Type|Method|Test Case ID|Test Case Risk Score|Test Use Case ID|Assigned To|
|---|---|---|---|---|---|
|Smoke or Sanity| | | | | |
|A/B Alternate Route| | | | | |
This matrix lists the test types in rows and the variables that may impact them in columns. The layout offers a great amount of flexibility: any test method (white, gray, or black box) can be applied to any test type. Each test case is assigned a risk score indicating the risk posed to the application if the case is not run; this value allows for rapid prioritization if the test round must be cut short. The use case column ensures every use case is tested at least once, and indicating the owner of each test allows for quick follow-up.
I also find it helpful to add a column entitled "special instructions" and another entitled "complete." If no other management tools are available, these additional columns should suffice to create a brief overview of the test round's progress.
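To make the risk-score prioritization concrete, here is a minimal sketch of the matrix as a list of records, sorted so the highest-risk open cases run first if the round is cut short. The test cases, IDs, scores, and owners are hypothetical, for illustration only.

```python
# Hypothetical test-round matrix: each record mirrors one row of the table above.
test_round = [
    {"type": "Smoke or Sanity", "method": "black box", "case_id": "TC-001",
     "risk_score": 9, "use_case_id": "UC-01", "assigned_to": "dana",
     "special_instructions": "", "complete": False},
    {"type": "Regression", "method": "gray box", "case_id": "TC-014",
     "risk_score": 7, "use_case_id": "UC-03", "assigned_to": "lee",
     "special_instructions": "compare against last release's bug list", "complete": False},
    {"type": "Usability", "method": "black box", "case_id": "TC-022",
     "risk_score": 3, "use_case_id": "UC-05", "assigned_to": "sam",
     "special_instructions": "", "complete": False},
]

def prioritize(cases):
    """Order incomplete test cases by risk score, highest first, so the
    most important cases still run if the round is cut short."""
    return sorted((c for c in cases if not c["complete"]),
                  key=lambda c: c["risk_score"], reverse=True)

for case in prioritize(test_round):
    print(case["case_id"], case["risk_score"])
```

The same sort can of course be done with a spreadsheet's sort-by-column feature; the point is that a single numeric risk column makes the cut-off decision mechanical.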
If using Excel or Numbers to organize test rounds, I find it helpful to associate a note containing a short question with each test type. If I don’t know the answer, I should run the test.
- Installation – Does the application run after installation?
- Compatibility – Does the system support the application?
- Smoke or sanity – Are there serious problems? Is it safe to proceed with further testing?
- Regression – Have any old bugs resurfaced?
- Acceptance – Does the application support the new business requirements?
- Alpha – Are the new features stable enough for a simple end user review?
- Beta – Do the new features initially meet the end users’ expectations?
- Functional – Does the application support the new technical requirements?
- Destructive – Do unexpected inputs cause the system to fail?
- Performance – (the numbers given serve as an example only)
  - Does the system support a load of 100,000 users?
  - Is the system scalable when adding 1 to 100,000 users?
  - Does the system display endurance when 100,000 users hit the system non-stop for 48 hours?
- Usability – Is the system user friendly?
- Accessibility – Does the system meet the requirements for special needs users?
- Security – Does the system satisfy the organization’s required security standards?
- Internationalization – Does the system support the expectations of international users?
- Development – Did the system pass the code review? (Was the best technical approach used for this solution?)
- A/B Alternate Route – Have alternative workflows been identified and tested?
Click here for Part Four – Test Round Scenario