
Test Process

05/03/10

Permalink 10:04:30 am, by victor.ewert, 2501 words, English (CA)
Categories: Articles


Note: this article first appeared in the April 2010 Issue (Volume 23, No. 4) of ASPects, The Monthly Newsletter of the Association of Shareware Professionals.


So you have decided you want to implement a more formal approach to testing your software, but don't really know where to begin. In this article I will outline a basic structure of the test process, including some tips for test case writing, and suggestions for some tools to use in testing. There is no one set process that works for all types of projects or for all teams, so I'm hoping this will get you started, and you can adapt the process to fit your own needs.

For this article, I will assume a generalized approach to software development in which requirements are gathered first, followed by the actual coding of the application.  Once the coding is more or less complete, the software goes into a testing phase consisting of several iterations: the developers create a build, the testers test it and report defects, the developers fix the defects and create a new build, and so on, until the application is ready to be released.

Test Planning
Before embarking on any kind of project, it is important to plan ahead.  This is true when designing and coding your application, and it is equally true when testing it.  Any of you who have worked in a structured software development organization will have heard of the test plan.  A test plan is a document that captures all of the test planning effort and then drives the testing itself.  You can find a lot of complex templates for writing a test plan, and you can use them if you like, but here I will just cover some of the basic things that should be included in any test plan and, more importantly, the thinking that should go into creating it.

One of the first things to think about in your test planning is the test environment.  This includes what operating systems, browsers, configurations, etc. you will want to test on or with.  You will also want to think about how much testing will be done on each environment; for example, will you want to execute all the tests on Windows XP, then again on Vista and again on Windows 7, or will it be enough to cover only some of the tests on Windows Vista and Windows 7?  There is no right answer to these types of questions, but it is something you need to consider and then document in your test plan.
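To make those environment decisions concrete, the combinations can be enumerated programmatically.  Below is a minimal sketch in Python, assuming a hypothetical plan where one primary OS gets full coverage and the others get only a smoke subset; the OS names, browser names and coverage rule are examples, not a recommendation:

```python
from itertools import product

# Hypothetical test environment matrix; all names are examples only.
operating_systems = ["Windows XP", "Windows Vista", "Windows 7"]
browsers = ["Internet Explorer 8", "Firefox 3.6"]
full_coverage = {"Windows XP"}  # the plan runs every test here

# Expand the matrix into one planned line per OS/browser pair.
plan = []
for os_name, browser in product(operating_systems, browsers):
    depth = "all tests" if os_name in full_coverage else "smoke tests only"
    plan.append((os_name, browser, depth))

for os_name, browser, depth in plan:
    print(f"{os_name} / {browser}: {depth}")
```

Writing the matrix down this way, even informally, makes it obvious how many environment/test-depth combinations the plan is actually committing you to.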

You will also want to consider how much testing you want to do.  If this is the first release of your product, you will probably want to execute all of your tests so you can thoroughly cover all of its features.  On the other hand, if you are doing a fairly minor release, you may not need to execute all of your previous test cases and can instead run tests only on the new features.

Next, you need to think about how much testing you will do on a build-by-build basis.  During the testing phase, as defects get found and fixed, there are likely to be several builds of the software before the final release.  You need to plan whether all your planned tests will be executed with each build, or whether they only need to be executed once during the entire test phase.  A common scheme is to execute each test case at least once during the testing phase and then, on the final build, to execute again at least those test cases marked as regression tests.

Finally, you will want to think about your criteria for completing testing.  If you wanted, you could test a build forever; there is always one more test to try and one more defect to find.  At some point, however, you will actually want to release this new version of your application, and it is better to decide up front what determines that the software is ready.  A common set of criteria would be something like: all planned tests have been executed at least once, all regression tests have been executed on the final build, all high-priority test cases have passed, and there are no critical defects left to fix.  This may mean you are releasing your product with some minor defects, but at least you will be aware of them, so when any support calls come in you shouldn't be taken by surprise.
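Exit criteria like these can be expressed as a simple checklist function.  The sketch below is illustrative only; the field names (`executed`, `regression` and so on) are invented for the example and are not taken from any particular tool:

```python
# Hypothetical exit-criteria check; all field names are invented
# for this example.
def ready_to_release(test_results, defects):
    all_executed = all(t["executed"] for t in test_results)
    regression_rerun = all(
        t["rerun_on_final"] for t in test_results if t["regression"]
    )
    high_priority_pass = all(
        t["passed"] for t in test_results if t["priority"] == "high"
    )
    no_critical_defects = not any(
        d["severity"] == "critical" for d in defects
    )
    return (all_executed and regression_rerun
            and high_priority_pass and no_critical_defects)

results = [
    {"executed": True, "passed": True, "priority": "high",
     "regression": True, "rerun_on_final": True},
    # A failing low-priority test does not block the release on its own.
    {"executed": True, "passed": False, "priority": "low",
     "regression": False, "rerun_on_final": False},
]
print(ready_to_release(results, [{"severity": "minor"}]))  # True
```

Note that the example ships with a known minor defect and a failing low-priority test, which is exactly the trade-off the criteria above allow.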

Test Writing
The heart of all testing is the set of test cases.  A test case is a document that contains the details of how to execute a particular test, including the actions to take, the data to input and the expected results.  The input into designing test cases should be the set of requirements for the application.  It is the requirements that drive the test cases, and ideally for every requirement there will be one or more test cases that test that requirement.  This mapping of requirements to test cases is often called the Requirements Traceability Matrix.  It is important that this matrix is complete so that no requirements get missed.
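A traceability matrix can be as simple as a mapping from requirement IDs to test case IDs, which also makes gaps easy to detect.  A minimal sketch, with hypothetical IDs:

```python
# Hypothetical requirement and test case IDs.
requirements = ["REQ-1", "REQ-2", "REQ-3"]
traceability = {
    "REQ-1": ["TC-001", "TC-002"],
    "REQ-2": ["TC-003"],
    # REQ-3 has no test cases yet -- a gap in the matrix
}

# Any requirement with no mapped test cases is uncovered.
uncovered = [r for r in requirements if not traceability.get(r)]
print("Requirements without test cases:", uncovered)  # ['REQ-3']
```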

To make managing the test cases easier, individual test cases should be grouped and organized.  There are several different ways to group test cases, including grouping by feature, grouping by requirement and grouping by testing type (e.g. GUI testing vs. security testing).  Groups of test cases are sometimes called test suites.  There is no single best way to group them; the main thing is to find something that works for you.

So how do you go about actually writing a test case?  Again, the exact format you use is up to you, but there are several elements each test case should have:

Test Case ID: some kind of unique identifier for the Test Case
Title: a descriptive title
Environment: the test environment needed
Pre-conditions: Any setup (of data etc.) that needs to be done prior to executing the test.
Steps: The actual instructions of the test
Expected Results: the results expected if the application is working correctly

In addition, things like author, version and date created/modified should also be included if you want traceability and version tracking.
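These elements map naturally onto a simple record type.  Below is a sketch of one possible shape; the field values are invented for illustration:

```python
from dataclasses import dataclass

# One possible shape for a test case record; the optional fields at
# the bottom cover traceability and version tracking.
@dataclass
class TestCase:
    case_id: str
    title: str
    environment: str
    preconditions: str
    steps: list
    expected_results: str
    author: str = ""
    version: str = "1.0"

tc = TestCase(
    case_id="TC-001",
    title="Create a new user with no last name",
    environment="Windows 7",
    preconditions="Application installed; logged in as an administrator",
    steps=["Open the New User dialog",
           "Fill in all fields except Last Name",
           "Click Save"],
    expected_results="A validation error is shown and no user is created",
)
```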

When writing the actual steps, or instructions, of the test, the question often arises as to how much detail to include.  One approach is to write highly detailed steps that read almost like a user guide and include exactly what button to press, the text of each user prompt, exact warning messages, etc.  The advantage of this approach is that it makes the testing easy, especially for someone not familiar with the application.  The other approach is to simply provide general instructions, such as "Create a new user with no last name".  This assumes you know how to use the GUI to create a new user, and focuses only on what the test case is actually trying to test, which, in this case, is what happens if the last name is left blank.  The advantage of this approach is that test cases can be written more quickly and there is less maintenance: when the GUI changes slightly, the test can probably stay the same, whereas with the detailed approach the test steps would most likely need to be changed.

As for the tests themselves, the details will obviously be different for different applications, and should be based on the software requirements.  There are, however, several general things to test for in nearly all applications:

Boundary Conditions: Most applications will have some data entry fields and will store data in some way.  Most fields have a valid range of expected values or expect values of a certain type.  When testing these fields, make sure to test both valid and invalid values, including values on either side of the boundary.  For example, if a field is meant to be used to input the minute part of a time value, then the valid range would be 0 to 59.  Some good values to try would include:

Value    Reason
0        lower boundary
25       valid value
59       upper boundary
-1       just below lower boundary
60       just above upper boundary
-854     well below lower boundary
125      well above upper boundary
ABC      letters instead of numbers
<blank>  test what happens if the field is left blank
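If the minute field were backed by a validation routine, the table above translates directly into checks.  A sketch follows, with a hypothetical `is_valid_minute` function standing in for the real application logic:

```python
def is_valid_minute(raw):
    """Hypothetical validator: accept only whole numbers from 0 to 59."""
    try:
        minute = int(raw)
    except (TypeError, ValueError):
        return False
    return 0 <= minute <= 59

# The boundary cases from the table above, with expected outcomes.
cases = {"0": True, "25": True, "59": True,
         "-1": False, "60": False, "-854": False, "125": False,
         "ABC": False, "": False}

for raw, expected in cases.items():
    assert is_valid_minute(raw) == expected, f"unexpected result for {raw!r}"
```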

Maximum Characters: Most data fields have some kind of maximum length allowed.  For example, in a database application, a name field may be set to a maximum of 25 characters.  The user interface should not allow the user to enter more than 25 characters, so when testing, it is a good idea to try to enter more than 25 characters into the text field.  In general, enter the maximum allowable characters into each data input field and see what happens.

Special Characters: Another good test is to add special characters to each field and make sure they are handled properly.  Many applications use other technologies such as SQL or XML and may have problems with certain characters.  The set of characters I typically use is: =-+_)(*&^%$#@!~`|}{\][":';?></.,

Test Execution
Now that you have written a complete set of test cases that cover all of the requirements, and have documented your plan for how you want to test, you are ready to execute the tests.  Test execution is simply executing the test cases according to your test plan.  For each test case you execute, you follow the test steps described in the test case and check the expected results.  When you have finished all of the steps, if the expected results were achieved, the test passes; otherwise, the test case fails.  Any failure should be reported as a defect; include any screen shots or other attachments that help explain the defect, if applicable.  Sometimes while executing a test, you may find a defect that is not directly related to the test you are running.  It is important to still enter it as a defect.  It is also important to then either update the test case you were executing, update a different test case, or create a new test case that will test for the defect you found.  Often finding a defect will trigger you to think of other related tests to try.  These new tests should also be added to your set of test cases.  The set of test cases is never really static but is always being enhanced.

While working through the tests, the results need to be tracked.  This can be done with a simple spreadsheet as follows:

             Cycle 1  Cycle 2  Cycle 3
Test Case #1 Pass     Pass     Pass
Test Case #2 Pass     Pass     Pass
Test Case #3 Fail     Pass     Pass
Test Case #4 Pass     Fail     Pass
Test Case #5 Fail     Pass     Pass

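When a field can be driven through an API rather than only through the GUI, the special-character check described earlier can even be automated.  Below is a sketch, with a trivial `store_name` stand-in for the real application code (a real test would drive the GUI or the application's API instead):

```python
# The special-character set from the article, as a Python string
# (the backslash and double quote are escaped).
SPECIAL_CHARS = "=-+_)(*&^%$#@!~`|}{\\][\":';?></.,"

def store_name(name):
    # Stand-in for the application under test: it simply echoes the
    # value back, as a correct round-trip should.
    return name

# Append each special character to a normal value and check that the
# stored result comes back unmangled.
for ch in SPECIAL_CHARS:
    value = "Smith" + ch
    assert store_name(value) == value, f"character {ch!r} was mangled"
```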
Testing Complete?
So when is testing complete?  Without a test process, this question is difficult to answer, and quite possibly the answer would be never.  If, however, you have created a test plan and followed it carefully, then testing is done when all the criteria you set have been met.  At this point you can feel confident in knowing what has been tested, what areas may be at risk and what defects may have been left unfixed, and you can release the software with less chance of any major surprises coming your way.  Software development is all about balance and managing risks, and while a properly implemented test process won't eliminate risk (or defects, for that matter), it should minimize risk and provide you with the information you need to make wise decisions about your software application.

Tools of the Trade
To be as effective as possible when testing, it is key to have a good set of tools.  Firstly, it is important to have a computer specifically dedicated to testing.  This means the tester should have their own personal desktop computer for activities such as test planning, defect tracking, report writing, etc., as well as access to a separate computer to test on.  This test computer (or computers) should be as clean as possible, to minimize the side effects of other software, and should be able to be baselined, so various configurations can be set up and the system then returned to a known baseline.  For this task I recommend using virtual machine software such as VMware or Sun VirtualBox.  With these you can create different guest operating systems, configurations, etc., storing each as its own virtual hard drive on a single host computer.

Test planning, test case writing and test execution tracking can all be done using standard office productivity software such as Microsoft Office or OpenOffice.  Test plans are typically written using a word processor, while for test cases, some people prefer to use word processors and others prefer spreadsheets.  Tracking test execution and results would definitely go into a spreadsheet.  Using word processors and spreadsheets works, but they are not ideal, especially when you have more than one person on your team.  Sharing test cases and test results is not that easy, and there is also the concern of where to store all the files, how to version them and how to back them up.  Instead, I would highly recommend using dedicated test management software.  Most of these programs provide the ability to write test plans and test cases, and to track execution results.  Some also provide linking between requirements and test cases, and may also provide pretty graphs and charts that managers like to look at.  There are several commercial products available from the big names in testing, such as IBM Rational, HP Mercury and Borland Silk, but there are free alternatives as well.  A product I have been using is TestLink, which is an open source, web-based test manager with a MySQL backend.  It is a fairly mature product that is actively being developed and enhanced.

A basic tool for testers is a good text editor.  This is important for inspecting data files, creating test data, etc.  There are many text editors out there, both free and commercial, and I must admit I am a bit of a text editor junkie, constantly trying out different editors.  Over my career I have used TextPad, JEdit and Crimson Editor, and most recently I seem to have settled on Notepad++.

Often when writing up a defect, you want to include a screen shot that shows the error.  While using Print Screen with Paint works, I find a dedicated screen capture product makes me much more efficient, allowing me to easily capture entire screens or only portions of a screen and then annotate and highlight different parts, making the screen capture much more effective.  A couple of commercial screen capture products are HyperSnap and SnagIt.

A final tool I'll mention is a stopwatch.  This can be either a separate stopwatch, or stopwatch software.  This is particularly useful during performance testing, when seeing how long it takes for pages to load, processing to complete, calculations to occur, etc.

References
VMWare - http://www.vmware.com/
Sun VirtualBox - http://www.virtualbox.org/
OpenOffice - http://www.openoffice.org/
IBM Rational - http://www.ibm.com/software/rational/
Mercury Interactive - https://h10078.www1.hp.com/cda/hpms/display/main/hpms_content.jsp?zn=bto&cp=1-11-127_4000_100__
Borland Silk - http://www.borland.com/us/products/silk/index.html
TestLink - http://www.teamst.org/
TextPad - http://www.textpad.com/
JEdit - http://www.jedit.org/
Crimson Editor - http://www.crimsoneditor.com/
Notepad++ - http://notepad-plus.sourceforge.net/uk/site.htm
HyperSnap - http://www.hyperionics.com/
SnagIt - http://www.techsmith.com/
