Burnt Toast & Other Myths
When it comes to writing black box test scripts, I don’t have any way to make it sound pleasant, but it’s part of the job and it’s also pretty important. You probably already know this, but let me put this step in perspective. The developer usually does the unit testing, which ensures his code works within itself. He also usually does the integration testing, checking whether his stuff blew out anyone else’s code. By the time it gets to you, you can be assured the system sort of works. I liken a pre-tested application to one of those images of a newborn giraffe you may see on Animal Planet. It’s there and can stand on its own – sort of.
The name “black box” testing comes from the idea that the tester should have no knowledge of what’s going on under the hood. They should click a button and something predictable should happen. It’s that simple. But what am I saying? Simple isn’t always easy, and writing test scripts until your fingers fall off, under the pressure of a deadline that is totally too short, is just plain hard, stressful work.
In a way, there is a bright side. You can now show your mastery of the application everyone has worked so hard to create. You’re the person who sat with the stakeholders for countless hours listening, asking questions and watching them or the domain experts teach you their job. They showed you their forms and explained how and why they do their job the way they do. In some instances, there are several years of hard-earned experience rolled into the enterprise application. By now you may have a greater knowledge of the overall business process than anyone around. The only step between you and all that contentment is lots of typing. You are the person best qualified to write these scripts, so dude, just do it.
There are two simple approaches to developing test scripts. The best-case scenario is that you have Use Cases from which you can develop your test scripts. If, however, things got out of hand and there are no Use Cases, then you have to sit down with the application and try ALL the possible ways of accomplishing a task. When I say all, I have to admit probably only God would write test scripts that don’t miss something, but do your best with the time you have. Remember, don’t write scripts to test only what the application should do, but also create scripts that require actions the application may not be designed to do. Testing only the expected case is not sufficient.
If you have to “drink the hemlock” and write the scripts from scratch, begin by quickly identifying and prioritizing the various execution paths. Start with those that represent the most direct route to a result or completion of a task. Work your way down the list: most direct, less direct, least direct. Ask your customer support team for execution-path ideas. They interface with the end users every day and most likely have a very good idea of how the end users work with the system.
If writing the scripts were that simple, we might all feel a bit better about the whole thing. However, the greatest hindrance to getting the test scripts done on time is running into bugs and being unable to proceed. This is why you should have your paths prioritized ahead of time. When you hit a wall following one path, report the bug and quickly begin another. All that “click and break” stuff can wear down even the best and most patient analyst, but even this has an important perspective.
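The prioritize-then-fall-back routine above can be sketched in a few lines of code. This is a minimal illustration with made-up path names and fields, not any particular tool’s format:

```python
# Hypothetical prioritized execution-path list: lower "directness" = more
# direct route to completing the task, so it gets tested first.
execution_paths = [
    {"name": "Create invoice via toolbar button", "directness": 1},
    {"name": "Create invoice via File menu", "directness": 2},
    {"name": "Create invoice via keyboard shortcut", "directness": 3},
]

def next_path(paths, blocked):
    """Return the most direct path not currently blocked by a reported bug."""
    for path in sorted(paths, key=lambda p: p["directness"]):
        if path["name"] not in blocked:
            return path
    return None  # every path is blocked; time to chase the developers
```

When a bug blocks the toolbar path, you report it, add the path to the blocked set, and `next_path` hands you the File-menu path so testing never stalls.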
Your developer teammates are doing their best. You must believe this, or you’ll have the most miserable time of your life. I won’t bore you with explaining the culture and general mentality of a software developer, but there is a lot of pride involved in getting the solution done the best way possible (given the time constraints and the scope of the project). Developing a solution takes a lot of hard mental work, and the developer’s frame of mind and how many disturbances they experienced while writing their code are often reflected in the bugs you encounter.
When you report a bug, never blame anyone. Mistakes happen and new approaches don’t always pan out. Software development is as much a process of becoming a better person as it is developing better solutions. Be polite and communicate the issue accurately and thoroughly. If the company you’re working for doesn’t have a bug tracking system, use a bug reporting template; never simply send an email or make a phone call, because that doesn’t help at all. Check the comments in the Bug Report section for details on how to properly report a bug.
If you must report a problem on the fly, remember a couple of things. What helps the developer the most is first letting them know what happened before the bug surfaced. Next, let them know how the system responded to the action that triggered it. Pass along any system messages you encountered.
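As a sketch, the three pieces of information above could be captured in a simple structure. The field names here are my own illustrative assumptions; your team’s tracker or template will define its own:

```python
from dataclasses import dataclass, field

@dataclass
class BugReport:
    # Hypothetical on-the-fly bug report, per the paragraph above.
    steps_before: list       # what happened before the bug surfaced
    system_response: str     # how the system responded to the triggering action
    system_messages: list = field(default_factory=list)  # any messages encountered

    def summary(self) -> str:
        """One-line summary a developer can act on immediately."""
        msgs = "; ".join(self.system_messages) or "none"
        return (f"Before: {'; '.join(self.steps_before)} | "
                f"Response: {self.system_response} | Messages: {msgs}")
```

Even an informal report that covers these three fields beats a vague “it broke” phone call.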
Your test scripts should follow the “Actor Action” and “System Response” established in the Use Case. Simply enter the “Actor Action” and “System Response” from the Use Case, if the case is accurate. If not, follow the prioritized paths mentioned earlier.
Make sure you note the pre-conditions from the Use Case in the introduction section of the test script; there should be no ambiguity as to the state of the system when the operation begins.
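Put together, a test script built from a Use Case might look something like this sketch. Every name and value here is illustrative, not from any real application or template:

```python
# Hypothetical test script mirroring the Use Case's "Actor Action" /
# "System Response" pairs, with pre-conditions stated up front so there is
# no ambiguity about the system's starting state.
test_script = {
    "title": "Submit expense report",
    "preconditions": [
        "User is logged in with the Employee role",
        "A draft expense report exists",
    ],
    "steps": [
        {"actor_action": "Click the Submit button",
         "system_response": "A confirmation dialog is displayed"},
        {"actor_action": "Click OK in the dialog",
         "system_response": "Report status changes to Submitted"},
    ],
}
```

The tester works down the steps in order; any mismatch between the observed behavior and the recorded “System Response” is a bug to report.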
The introduction part of the template includes everything I’ve ever had to document in a test script. You may want to add or delete certain sections to meet your needs.
I’ve added a signatory to each test script to make clear to all concerned that if the script passes, anything the system “should” have done requires a change request. Unmanaged change requests can be expensive and extend the development time. They can also make a lot of people very upset. It is important that the client understands they got what they asked for, so turn on your charm again and get those signatures.
It goes without saying that managing and prioritizing the library of test cases or test scripts is vitally important. This library must be updated by adding every new script created for each new cycle. Additionally, I recommend each script be assigned a risk score. The score is assigned in relation to the risk the application is exposed to if the script is not run: one means not running the script represents very low risk, and five very high risk. Any script with a risk score of four or five should be run for each smoke test. If there is “absolutely no time for testing at all”, then run scripts with a risk score of five. Scripts with a score of five through three should be run during alpha testing, and for a full regression test, scripts with a score of five through one must be run. You may use any risk assessment system you find convenient. I use these risk assessment values to track and report all my critical numbers.
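The selection rules above boil down to a simple threshold per kind of test pass. Here is a sketch; the test-type labels are my own, not standard terminology:

```python
# Risk scores run 1 (skipping the script is very low risk) to 5 (very high
# risk). Each kind of test run has a minimum score cutoff per the rules above.
def scripts_to_run(scripts, test_type):
    """Select which scripts to run for a given kind of test pass."""
    cutoffs = {
        "no_time":    5,  # "absolutely no time": score 5 only
        "smoke":      4,  # smoke test: scores 4 and 5
        "alpha":      3,  # alpha testing: scores 3 through 5
        "regression": 1,  # full regression: every script
    }
    cutoff = cutoffs[test_type]
    return [s for s in scripts if s["risk"] >= cutoff]
```

A library of scripts with risk scores 1, 3, 4, 5 and 2 would yield two scripts for a smoke test, three for alpha, and all five for a full regression.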
Also, while creating test scripts, keep in mind who will run the script. Each tester has a test approach which best supports their level of system knowledge. If the script is primarily for the developer’s unit test, then create white box tests. If the script is for integration and system testing, then create gray box tests. If the script is for acceptance tests, then black box tests are most appropriate.
Knock yourself out!
Note – this post does not define the difference between a test case and a test script. A test script is generally used for automated testing. A test case is generally used for manual testing. The test represented by either should be the same. Test scripts are generally developed directly from test cases. As indicated above, test cases are created either from Use Cases supplied by the business analyst, or from direct interaction with the application.