You’ve Come A Long Way, Baby
If you’ve been following along, you now know all about the first three phases of a software development project: requirements analysis, software design and implementation. But in case you thought your job ended with the completion of the code, think again - you’ve still got a way to go before you can call it a wrap.
In this fourth article of a five-part series, I’m going to be spending some time discussing what happens after you’ve finished writing the code that meets your customer’s requirements. I’ll be covering the various types of testing your code must go through, providing a brief overview of how test cases should be prepared and software problems reported, and explaining how software is released to a customer. I’ll also be discussing that bane of developers everywhere - documentation - with some tips and tricks that might make the process less painful. So let’s get started.
Code To Zero
Writing code is only half the battle - the other half involves testing it. Typically, three types of tests are performed across the software development lifecycle.
- Unit testing: This involves testing the units, or individual code modules, that make up the entire system. This type of test is performed to ensure that each module meets the individual requirements laid down for it, and it highlights bugs or errors within each module. Typically, unit testing is not performed by the test team; rather, it is performed by the developer, or the developer’s peer group.
Unit testing is meant to confirm the functionality, data processing, logic flow and exception handling of a particular unit of code. In most cases, it involves supplying the code unit with a variety of input data and verifying that the output produced matches the requirements (a short example follows this list). Errors that are encountered can be rapidly fixed by the developer in this phase, before the code moves to the system test team for more exhaustive testing. Unit testing thus serves as the first line of defense in the battle against software flaws.
- System testing: In this second phase, the integrated system is tested to verify that all the modules within it are working properly, communicating with each other where required, and satisfying all inter-module dependencies. This second level of testing highlights flaws in the linkages between modules, and also serves to verify the complete functionality of the system as it will be delivered to the customer.
Typically, system testing is performed by a separate test team, which installs the software in an independent test environment and proceeds to verify it using the previously-defined test plan. This type of testing is intended to find flaws in the communication between modules, and also bugs within each module that may have been missed during the developer’s unit testing. The test team may also provide feedback to the development team on the usability and performance of the software being tested; this feedback may be used to improve the product during subsequent test cycles.
- Acceptance testing: As the name implies, an acceptance test is the test that the software must undergo in order for it to be formally accepted by the customer. Although the test plan for this activity is defined near the beginning of the project (you might remember this from the second article in this series), this test is typically organized last, once the internal tests are concluded to the satisfaction of the project manager.
Acceptance testing may be performed by the customer, or by an internal team acting on behalf of the customer. Note that it may be necessary to train the customer on proper operation of the system prior to acceptance testing. Depending on the nature of the application and the customer’s requirements, acceptance testing may be performed on-site, or in a simulated environment that replicates the conditions of the real one.
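To make the unit testing phase a little more concrete, here is a minimal sketch of what such a test might look like. It uses Python and the standard unittest module purely for illustration; the cart_total() function is a hypothetical unit under test, not code from any real project.

```python
import unittest

def cart_total(prices, tax_rate=0.0):
    """Hypothetical unit under test: total cost of items in a shopping cart."""
    if tax_rate < 0:
        raise ValueError("tax_rate cannot be negative")
    return round(sum(prices) * (1 + tax_rate), 2)

class CartTotalTest(unittest.TestCase):
    def test_empty_cart(self):
        # Boundary case: an empty cart should produce a zero total
        self.assertEqual(cart_total([]), 0)

    def test_typical_cart_with_tax(self):
        # Average case: a couple of items plus 10% sales tax
        self.assertEqual(cart_total([10.00, 5.50], tax_rate=0.10), 17.05)

    def test_negative_tax_rate_rejected(self):
        # Exception handling: invalid input should raise, not return garbage
        with self.assertRaises(ValueError):
            cart_total([10.00], tax_rate=-0.05)

if __name__ == "__main__":
    unittest.main()
```

Note how the three cases exercise normal operation, a boundary condition and exception handling - precisely the aspects of a unit that this phase is meant to confirm.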
Each of these phases is essential to the software development process, and plays an important role in achieving the goal of high-quality, zero-defect software. Sadly, however, many organizations neglect this phase of the development cycle, and this in turn leads to bugs appearing at a later stage. Since it is generally understood that software bugs are rectified by the developer at no charge, the lack of adequate attention to software testing can cause project costs to mushroom…which is bad for the bottom line in the long run.
It is for this reason that most successful companies pay rigorous attention to their quality control procedures, and ensure that processes are in place to discover and correct bugs before the software is released. Regardless of whether you’re an independent developer or a manager of a large development team, remember to budget enough time in your schedule for a thorough test of your software, together with time for error rectification and re-test - your customers will thank you, and your organization’s accountants will think nice things about you.
Casing The Joint
In order to verify the integrity and functionality of the software, it is necessary to first develop a series of test cases that produce a positive or negative result depending on whether or not the software functions as advertised. In order to avoid bias, these test cases should be developed by an independent quality assurance team, and should focus on verifying the functionality and integrity of the software against the various line items in the requirements document.
Typically, a test case contains the following information:
- Overview: This introductory section contains a brief description of the purpose of this test case - for example, “calculating total cost of items in shopping cart”.
- Input: This section contains a description of the data that is to be input into the system, with samples if necessary. Make it a point to include specific test values in this section, so that the test can be repeated consistently.
- Output: This section contains a description of the expected output from the system or module.
- Procedure: This section contains a list of the steps to be followed to execute the test.
- Evaluation criteria: This section lists the criteria used to determine whether the test was a success or a failure.
- Results: Added once the test is complete, this section records the actual results observed by the tester while performing the test, together with a pass or fail notice.
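To see how these sections fit together, here is what a test case for the shopping-cart example above might look like; the identifiers and values are invented for the purpose of illustration.

```
Test case ID:        TC-012
Overview:            Calculating total cost of items in shopping cart
Input:               Cart containing two items priced $10.00 and $5.50; sales tax rate of 10%
Output:              Checkout page displays an order total of $17.05
Procedure:           1. Log in as a registered user
                     2. Add both items to the shopping cart
                     3. Proceed to the checkout page and note the total displayed
Evaluation criteria: Pass if the displayed total is exactly $17.05; fail otherwise
Results:             (completed by the tester once the test has been executed)
```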
Since test cases have a direct impact on the overall quality of the software that is being released, due attention should be paid to making them as stringent as possible. Here are a few tips to keep in mind when designing a test case:
- Ensure that each requirement defined in the requirements specification has one or more test cases linked to it. If you have a requirements traceability matrix, this task becomes much easier.
- Ensure that the test case checks for both boundary and average input values.
- Ensure that the test case checks for input of incorrect data types (the sketch following this list exercises both this and the previous point).
- For highly interactive applications with multiple click-through paths, or applications that accept multiple combinations of data (and data types), ensure that the test case checks all combinations.
- Ensure that the test case contains clearly-defined evaluation criteria for judging the test a success or failure.
- Learn from the past. Use knowledge of previous errors in similar projects to make your test cases more effective.
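As a quick illustration of boundary checking and data type checking, here is a small sketch - again in Python's unittest, and built around a hypothetical input validator called parse_quantity() - showing how boundary values, average values and incorrect data types might all be covered:

```python
import unittest

def parse_quantity(value):
    """Hypothetical validator: accepts an integer quantity between 1 and 99."""
    if isinstance(value, bool) or not isinstance(value, int):
        raise TypeError("quantity must be an integer")
    if value < 1 or value > 99:
        raise ValueError("quantity must be between 1 and 99")
    return value

class ParseQuantityTest(unittest.TestCase):
    def test_boundary_and_average_values(self):
        # Lower boundary, upper boundary and a typical mid-range value
        for value in (1, 99, 50):
            self.assertEqual(parse_quantity(value), value)

    def test_values_just_outside_the_boundaries(self):
        # Off-by-one errors usually surface here
        for value in (0, 100, -1):
            with self.assertRaises(ValueError):
                parse_quantity(value)

    def test_incorrect_data_types(self):
        # Strings, floats, None and booleans should all be rejected outright
        for value in ("5", 5.0, None, True):
            with self.assertRaises(TypeError):
                parse_quantity(value)

if __name__ == "__main__":
    unittest.main()
```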
The various test cases for each phase of the testing process ultimately get compiled into a comprehensive test plan. Let’s take a look at that next.
A Man With A Plan
Since test cases map directly into the requirements specification, the comprehensive test plan functions as a blueprint of the full capabilities of the system, and allows an assessment of how closely the final release meets the defined needs of the customer. This test plan is a detailed document outlining the test schedule, level of testing, test cases, quality control processes and resources required. Typically, this document contains the following sections:
- Introduction: This section introduces the test plan, indicating the goals of the test, the schedule, the level of testing and the resolution procedure in the event of test case failures.
- Test strategy: This section contains information on test procedures, the various levels and types of tests that are to be conducted, the tools to be used, report formats, evaluation criteria and retest procedures.
- Prerequisites: This section outlines the prerequisites for the tests to be conducted, including the hardware, software and network configuration. In case special tools or particular environmental conditions are needed for the test, these requirements are noted in this section.
- Traceability matrix: This matrix maps each test case to a requirement from the requirements document, in order to ensure that every requested feature is correctly implemented (a small example follows this list). While this matrix may seem like overkill for small- and medium-sized projects, it plays a critical role in the QA process for larger ones, and its omission can result in important test cases being missed.
- Test cases: This section consists of test cases to verify that the software meets the various requirements specified in the previously-agreed requirements document. Each test case must be accompanied by a description outlining the test procedure, required input(s), expected output(s) and rules for determining whether or not the test was successful.
- Test schedule and resources: This section lists the schedule for test completion, together with a list of the types and number of persons required and their responsibilities. The schedule listed in this section must map into the broad schedule listed in the software development plan; deviations, if any, should be reported to the project manager.
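By way of illustration, a traceability matrix need not be anything fancier than a table mapping requirement identifiers to the test cases that cover them; the identifiers and statuses below are invented for the purpose of this example.

```
Requirement   Description                                  Test case(s)     Status
REQ-04        Calculate cart total, including sales tax    TC-012, TC-013   Passed
REQ-05        Reject invalid item quantities at checkout   TC-014, TC-015   Open (SPR-031)
REQ-06        Display order history to registered users    TC-020           Not yet executed
```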
Appropriate managers need to sign off on this test plan prior to execution. Unit and system testing plans need internal approval; the acceptance test plan also requires customer approval. Once these approvals are received and the development team confirms that unit testing is complete, the software is checked out of the project repository and handed over to the test team so that system testing can begin.
Bug-bustin’
Software defects encountered during the unit and system testing phases are noted in a software problem report. This report contains a detailed description of each problem, complete with information on where it occurred, the data input or procedure that caused it, its classification (fatal, major or minor), and its impact and suggested resolution. This report is then returned to the development team for evaluation and software rectification where required.
Every organization needs a sound defect management process, in order to manage and resolve software defects found during testing. The best process is still the simplest: the project manager assigns every bug a unique number in a so-called “defect log”, and updates the status (assigned/resolved/verified/deferred) of each on a daily basis. This provides a fast snapshot of all the bugs found during a particular test cycle, and makes it easy to see the current disposition of each.
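A defect log of this sort needs no sophisticated tooling - a spreadsheet is usually enough - but if you prefer to keep it in code, the short Python sketch below shows the minimal information worth recording for each entry. The fields and sample bugs are, of course, purely illustrative.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Defect:
    """One entry in the project's defect log."""
    number: int               # unique identifier assigned by the project manager
    summary: str              # short description of the problem
    classification: str       # "fatal", "major" or "minor"
    status: str = "assigned"  # assigned / resolved / verified / deferred
    reported_on: date = field(default_factory=date.today)

# A hypothetical snapshot of the log during one test cycle
defect_log = [
    Defect(31, "Cart total ignores sales tax", "major", status="resolved"),
    Defect(32, "Checkout page crashes on empty cart", "fatal"),
    Defect(33, "Misaligned button on login form", "minor", status="deferred"),
]

# Print the current disposition of every bug found in this cycle
for bug in defect_log:
    print(f"#{bug.number:03d} [{bug.classification}] {bug.status:10s} {bug.summary}")
```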
Once all the bugs found during a particular testing cycle have been resolved and verified, the software may be returned to the test team for another round of testing. This process continues until no further bugs are found. Care should be taken at each stage to ensure that bug fixes do not slip through the cracks, that all test cases relevant to the bugs found are fully re-executed, and that the resolution of a particular bug does not introduce new bugs.
Bug tracking systems like Bugzilla can play an important role in managing these software fixes, but they are not sufficient by themselves; tools must go hand in hand with a clear and error-free process flow, such as the one described above, to ensure that all detected problems are corrected and deltas sent back to the test team for re-verification.
Once the test plan has been completely executed and the test team has verified that the product has zero defects, the project manager(s) can begin organizing the acceptance test. Typically, this involves notifying the customer that the software is ready for release, setting a date for the acceptance test, providing the customer (or the acceptance test team) with training and sample data for the test, and closing all open items related to the project. In case the acceptance test is to be conducted on-site, the final release code, together with all tools and related assets, needs to be checked out of the project repository and placed on appropriate storage media for delivery to the customer’s site.
It is very important that the code tested during the final acceptance phase be identical to the code released by the project’s development and QA team, and that no undocumented changes to the code tree take place between the close of system testing and the beginning of acceptance testing.
In case problems are reported during the acceptance test, they need to be communicated back to the development team for resolution. At this time, each problem reported by the acceptance test team needs to be clearly classified as a bug or a change; bugs should be rectified, while changes need to be handled as per the process laid down for change requests (more on this in the next article). Once all open issues are closed and all bugs fixed, the software can be re-tested and - assuming no further errors - formally accepted for release.
The Write Stuff
Running in parallel with testing is documentation, the activity of providing your customer with written information on the software being developed. This is not the most interesting of activities; however, it has tremendous value to the customer and should therefore be considered an important deliverable of the project.
Documentation comes in many flavours; when dealing with software, one of the most common ones is the user manual, which demonstrates to customers how the software may be used. This manual is critical in training your customers on correct operation of the software, and - if clearly written - can substantially reduce the time you spend on post-release support. This user manual typically contains detailed information on the features and goodies built into the application, and focuses on demonstrating how to accomplish common tasks within the application; it also provides detailed examples, complete with screenshots and sample data, and explains the significance of the various status and error messages displayed by the application. This material is written for, and targeted towards, the user profile and skill set defined in the earlier phases of the project.
In addition to a user manual, some customers also require a developer’s guide, which contains technical information on the software that has been developed. This guide is usually targeted at more technical users, such as system administrators or developers, and should therefore contain as much information as possible on the application design, data processing and internal logic flow, performance and security considerations, data storage constraints, interprocess communication, exception handling, resource management and so on. It should also contain high-level architectural diagrams of the system design (including models of component relationships) and a detailed function reference, or API documentation, for all the functions used within the application.
Some projects or customers may additionally demand detailed design documents, architectural flowcharts, API specifications, and technical software specifications; you should try and provide this information if possible.
Since documentation is usually customized to each project, expect to go through a couple of reviews and revisions until it fully satisfies your customer. Typically, documentation is developed near the end of the project; however, depending on your workflow, you may even have a writer working on it through the different phases of the project, revising it as per customer feedback at different points, and delivering the final version simultaneous to the software release.
It should be clearly understood that documentation is an art in itself - it’s unfair to both your customer and your team to treat it as a second-tier deliverable and not assign it adequate attention or resources. Remember that your customer is paying for well-written documentation, and that he or she expects to use it extensively, either for internal user training or for future development of the software; it therefore constitutes an important deliverable of the contract you have undertaken.
If you can afford it, always consider hiring a professional technical writer to develop documentation for your project - the returns, both in terms of customer satisfaction and lower stress levels, will be well worth the additional cost.
Endgame
Once the software has been formally accepted, all that’s left is to release it to the customer, together with the required documentation. This release may be delivered either on a CD-ROM or via the Internet, depending on the nature of the software. In case you’re also supposed to turn over the software source code to the customer, make sure that it’s included in your release.
Every release should be prominently tagged with a version number, and should include a release note which provides information on the release. This note should list the version number of the software being released, the release date, and a brief description of the changes in the release (if any), giving both customer and vendor an easy-to-understand trace of the software release history.
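The exact format is up to you; a hypothetical release note, with invented version numbers, dates and change descriptions, might look something like this:

```
Product:       Example Online Store
Version:       1.2.0
Release date:  15 Sep 2002
Changes:
  - Cart totals now include sales tax (fixes defect #31)
  - New "save cart for later" feature (change request CR-07)
Known issues:  None at the time of release
```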
In case the software requires installation, it should be packaged appropriately with an installer; if it’s a Web site or Web application, installation typically involves merely placing the files in a specific location on the Web server. Either way, specific installation instructions should be included with your deliverable, either as part of the manual or as an additional appendix on the delivery media.
At this point, you’ve successfully designed, implemented, tested and delivered a software application to the specified requirements of your customer. All that’s left now is to wait for the rest of the cash to roll in, and get ready for what many consider to be the most commercially profitable phase of the software lifecycle: software updates. I’ll be discussing that in the next (and final) article in this series…but until then, lie back and take a well-deserved break. You’ve earned it!
Note: Examples are illustrative only, and are not meant for a production environment. Melonfire provides no warranties or support for the source code described in this article. YMMV!
This article was first published on 24 Sep 2002.