Quality Assurance Planning – Part Four (Selecting the Appropriate Test Type)

Select the type that fits your test.

Here is how I break down quality assurance test types and the questions I ask to determine which should be included in the test cycle:

  • Development (Have the coding and architectural errors been identified and resolved? This can only be done as a white box test and requires a code review.)
  • Installation (Does the application come online after installation? Depending on where the application is installed, I usually run a smoke test of basic automated scripts. I recommend those with a risk score of three through five, depending on how much time is allowed; a short script-selection sketch follows this list.)
  • Compatibility (Can the application live in the environment you have prepared for it?  Again, depending on how much time is available, you may want to run all your test scripts and perform a full performance test.)
  • Smoke / Sanity (This is a quick test to see if further testing is warranted. Any show-stoppers? Run your basic set of scripts testing essential functionality, probably those with a risk score of four or five.)
  • Alpha (Is the application ready for an external user to take a peek? This may be a good time to run your scripts with a risk score of three through five.)
  • Beta (This test confirms that the new features are headed in the correct direction. This is done by external testers. Add a good mix of both regression and functional test cases. This is a black box test.)
  • Acceptance (Do the changes work as the end users expect? These are black box tests conducted by the end users. Add all your functional and a good mix of regression test cases.)
  • Functional (Does the new functionality work as defined in the use case? Create and run new test scripts during this test. Don’t forget to add these new scripts to your library.)
  • Regression (Have old bugs returned? Has the software fallen back into old habits? To flush these out you should run your complete library of automated scripts.)
  • Performance (How well does the application perform with a set amount of end users? Refer to this essay for more details.)
  • Destructive (How does the system respond when unexpected input is introduced? This is best done as a black box test.)
  • Usability (Is the system designed to be user friendly? This too is best done as a black box test. Don’t forget to include tab order tests; a short automated example follows this list. Most end users use the tab key and do not appreciate being forced to use the mouse.)
  • Accessibility (Does the application accommodate special needs users?)
  • Security (How secure is the application? This will require a well-designed set of gray box tests. Security risks constantly change, so ensure your tests address the most recently documented intrusion techniques.)
  • Localization (Does the application accommodate international use? This too is best done as a gray box test. A good understanding of code page standards comes in handy when creating these tests.)
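
Several of the types above come down to selecting automated scripts from your library by risk score. Here is a minimal sketch of that selection step; the script names, the scores assigned to them, and the one-to-five scale are illustrative assumptions rather than output from any particular tool:

```python
# Illustrative sketch: selecting automated scripts for a test cycle by risk score.
# The script names and scores below are made-up examples; substitute your own library.
from dataclasses import dataclass


@dataclass
class TestScript:
    name: str
    risk_score: int  # 1 (low impact) through 5 (show-stopper)


LIBRARY = [
    TestScript("login_basic", 5),
    TestScript("checkout_flow", 4),
    TestScript("profile_edit", 3),
    TestScript("report_export", 2),
    TestScript("theme_toggle", 1),
]


def select_scripts(min_risk, max_risk=5):
    """Return the scripts whose risk score falls within the given range."""
    return [s for s in LIBRARY if min_risk <= s.risk_score <= max_risk]


# Smoke/sanity cycle: essential functionality only (risk 4 or 5).
smoke_cycle = select_scripts(4)

# Installation or alpha cycle: broader coverage (risk 3 through 5), time permitting.
alpha_cycle = select_scripts(3)

# Regression cycle: the complete library.
regression_cycle = select_scripts(1)

for label, cycle in [("smoke", smoke_cycle), ("alpha", alpha_cycle), ("regression", regression_cycle)]:
    print(label, [s.name for s in cycle])
```

The same idea carries over to whatever test-management tool or CI tag filter you already use: the risk scores assigned during planning are what decide which scripts run in each cycle.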
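
The tab order check mentioned under usability can also be automated. The sketch below assumes Selenium with Chrome and a hypothetical login page; the URL, element IDs, and expected order are made up for illustration:

```python
# Illustrative sketch of an automated tab-order check using Selenium.
# The URL, element IDs, and expected order are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

EXPECTED_ORDER = ["username", "password", "remember_me", "login_button"]

driver = webdriver.Chrome()
try:
    driver.get("https://example.test/login")  # hypothetical page
    driver.find_element(By.ID, EXPECTED_ORDER[0]).click()  # start at the first field

    actual_order = []
    for _ in EXPECTED_ORDER:
        # Record which element currently has focus, then press TAB to move on.
        actual_order.append(driver.switch_to.active_element.get_attribute("id"))
        driver.switch_to.active_element.send_keys(Keys.TAB)

    assert actual_order == EXPECTED_ORDER, f"Tab order was {actual_order}"
finally:
    driver.quit()
```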

This essay is written with the assumption that the reader has enough experience in software testing to understand how to combine each test element (levels, approaches, and types) into a successful test round. It is important to take full responsibility if a test fails and to develop corrective action plans so you learn from the failure. Experience is by far the best teacher. Her bitter lessons will transform into valuable insight if both ownership and corrective action are fully realized.

Click here for Part Five – A Final Word
