|Build Acceptance Test||The build acceptance test is a simple check of a product's functionality, used to determine whether the product is stable enough for more extensive testing. Every new build should undergo a build acceptance test before further testing is executed. Examples of build acceptance checks: |
The product can be installed with no crashes or errors that terminate the installation. (Development needs to install the software from the same source accessed by QA, e.g. Drop Zone, CD-ROM, or Electronic Software Distribution archives.)
Clients can connect to associated servers.
Simple client/server communication can be achieved.
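A minimal sketch of a build acceptance check in Python; the `Server`, `Client`, and `build_acceptance` names are hypothetical stand-ins for whatever the real build exposes:

```python
# Build-acceptance (smoke) sketch: verify the build installs, clients can
# connect, and a simple client/server round trip works. All classes here
# are illustrative stand-ins, not a real product API.

class Server:
    """Stand-in for the product's server component."""
    def __init__(self):
        self.running = True

    def ping(self):
        return "pong" if self.running else None


class Client:
    """Stand-in for the product's client component."""
    def connect(self, server):
        self.server = server
        return server.running

    def echo(self):
        # Simple client/server round-trip communication.
        return self.server.ping()


def build_acceptance():
    """Return True only if every smoke check passes."""
    server = Server()
    client = Client()
    assert client.connect(server), "client could not connect to server"
    assert client.echo() == "pong", "client/server round trip failed"
    return True
```

If `build_acceptance()` raises, the build is rejected and deeper testing is not attempted.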
|Bottom-up||Start testing with the lowest-level modules of the program, exercising them through temporary test drivers; higher-level modules are integrated and tested afterwards. In a bottom-up strategy the program as a whole does not exist until the last (topmost) module is added.|
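A small sketch of the bottom-up strategy, assuming an illustrative two-layer program (`parse_line` at the bottom, `load_config` above it):

```python
# Bottom-up sketch: the lowest-level module (parse_line) is tested first
# through a throwaway driver; the higher layer (load_config) is integrated
# only after the bottom module passes. Names are illustrative.

def parse_line(line):
    """Lowest-level module: split a 'key=value' line into a pair."""
    key, _, value = line.partition("=")
    return key.strip(), value.strip()


def load_config(lines):
    """Next layer up, built on top of the already-tested parse_line."""
    return dict(parse_line(line) for line in lines if line.strip())


def driver():
    """Temporary test driver exercising the bottom module in isolation."""
    assert parse_line("host = example") == ("host", "example")
    return load_config(["a=1", "b=2"])
```

The driver is discarded once the upper layers exist to call `parse_line` for real.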
|CET (Customer Experience Test)||An in-house test performed before the Alpha, Beta, and FCS milestones to determine whether the product can be installed and used without any problems, assistance, or support from others.|
|Client-Server Test||Testing Systems that operate in client/server environments.|
|Compatibility Test||This test is used to test compatibility between different client/server version combinations as well as other supported products.|
|Confidence Test||The confidence test ensures a product functions as expected by verifying that platform-specific bugs are not introduced and that functionality has not regressed from drop to drop. A typical confidence test is designed to touch all major areas of a product's functionality. Once the Functional Freeze milestone is reached, these tests are run regularly throughout the remaining development cycle.|
|Configuration Tests||These tests are run for product testing across various system configuration combinations. Examples of configurations: |
Cross platform (e.g. Windows Clients against a UNIX server).
Client/server network configurations.
Operating system and database combinations (also including version combinations).
Web servers and web browsers (for web products). The system configurations to test are determined from the product's compatibility matrix. This test is sometimes called a 'Platform test'.
|Depth Test||The depth tests are designed to test all of the product's functionality.|
|Error Test||The error test is designed to test the dialogs, alerts, and other feedback provided to the user when an error situation occurs. The difference between this test and a Negative Test is that an Error Test simply verifies that the correct dialogs are seen, whereas the Negative Test primarily examines robustness and recovery.|
|Event-Driven||Testing event-driven processes, such as unpredictable sequences of interrupts from input devices or sensors, or sequences of mouse clicks in a GUI environment.|
|Final Installation Test||Verification that the final media, prior to hand off to Operations for duplication, contains the correct code which was previously tested and is installable on all the supported platforms and databases. The product demo is executed and product Release Notes verified.|
|Functionality Test||This is designed to test the full functionality, features, and user interfaces of software based upon the functional specifications.|
|Full Test||A full test is Build Acceptance + Sanity + Confidence + Depth. It is designed to test the full functionality, features, and user interfaces of software based upon the functional specifications.|
|Graphical User Interface (GUI)||Testing the front-end user interfaces to applications which use GUI support systems and standards such as MS Windows or Motif.|
|GUI Roadmaps||A step-by-step walk-through of a tool or application, exercising each screen or window's menus, toolbars, and dialog boxes to verify the execution and correctness of the Graphical User Interface. Typically this is handled by automated scripts and is rarely used as a manual test, due to the low number of bugs found this way.|
|Module testing||To test a large program, it is necessary to use module testing. Module testing (or unit testing) is the process of testing individual subprograms (small blocks) rather than testing the program as a whole. Module testing eases the task of debugging: when an error is found, it is known in which particular module it occurred.|
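A minimal module-test sketch using Python's standard `unittest` library; the module under test (`word_count`) is illustrative:

```python
# Module (unit) test sketch: a single subprogram is tested in isolation
# from the rest of the program. The function under test is illustrative.
import unittest


def word_count(text):
    """Module under test: count whitespace-separated words."""
    return len(text.split())


class WordCountTest(unittest.TestCase):
    def test_simple(self):
        self.assertEqual(word_count("one two three"), 3)

    def test_empty(self):
        self.assertEqual(word_count(""), 0)


# Run just this module's tests. A failure here points directly at
# word_count, which is what makes module testing easier to debug than
# testing the program as a whole.
suite = unittest.TestLoader().loadTestsFromTestCase(WordCountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```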
|Multi-user Test||Test with the maximum number of users specified in the design working concurrently, to simulate the real environment in which users use the product.|
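A sketch of a multi-user test simulating concurrent users with threads; the `CounterService` class is an assumed stand-in for a real shared service, and the user count is an assumed design limit:

```python
# Multi-user sketch: simulate the design's maximum concurrent users with
# threads hitting a shared service, then check the service stayed
# consistent. The service and the limit of 50 users are illustrative.
import threading


class CounterService:
    """Shared resource that must stay consistent under concurrent use."""
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:
            self.value += 1


def multi_user_test(max_users=50, ops_per_user=100):
    """Run max_users concurrent simulated users; return the final count."""
    service = CounterService()

    def user():
        for _ in range(ops_per_user):
            service.increment()

    threads = [threading.Thread(target=user) for _ in range(max_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return service.value
```

If the final count differs from `max_users * ops_per_user`, the product lost updates under concurrency.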
|Negative Test||Tests that deliberately introduce an error to check an application's behavior and robustness. For example, erroneous data may be entered, or attempts made to force the application to perform an operation that it should not be able to complete. Generally a message box is generated to inform the user of the problem. If the program terminates, it should exit gracefully.|
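A small negative-test sketch; `parse_age` is an illustrative function, not part of any real product:

```python
# Negative-test sketch: deliberately feed bad input and verify that the
# program reports each error clearly instead of crashing. The function
# under test is illustrative.

def parse_age(value):
    """Reject non-numeric or out-of-range ages with a clear message."""
    if not value.isdigit():
        raise ValueError(f"'{value}' is not a valid age")
    age = int(value)
    if not 0 <= age <= 150:
        raise ValueError(f"{age} is out of range")
    return age


def negative_test():
    """Each bad input must produce an error message, never a crash."""
    results = []
    for bad in ["abc", "-5", "999"]:
        try:
            parse_age(bad)
            results.append(None)       # no error raised: the test fails
        except ValueError as err:
            results.append(str(err))   # graceful, informative error
    return results
```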
|Object-Oriented||Testing systems designed or coded using an object-oriented approach or development environment, such as C++ or Smalltalk.|
|Parallel Testing||Testing by processing the same (or at least closely comparable) test workload against both the existing and new versions of a product, then comparing results.|
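A parallel-testing sketch, assuming two illustrative versions of the same sorting routine standing in for the existing and new product versions:

```python
# Parallel-testing sketch: run the same workload through the existing
# version and the new version, then compare results. Both implementations
# are illustrative stand-ins.

def sort_v1(items):
    """Existing version: simple insertion sort."""
    out = []
    for x in items:
        i = 0
        while i < len(out) and out[i] < x:
            i += 1
        out.insert(i, x)
    return out


def sort_v2(items):
    """New version under test."""
    return sorted(items)


def parallel_test(workload):
    """Return the first mismatching case, or None if the versions agree."""
    for case in workload:
        old, new = sort_v1(case), sort_v2(case)
        if old != new:
            return case, old, new
    return None
```

Any non-`None` result pinpoints the exact input on which the new version diverges from the old one.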
|Performance||Measurement and prediction of performance (e.g. response time and/or throughput) for a given benchmark workload.|
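A minimal performance-measurement sketch; the benchmark workload (summing integers) is purely illustrative:

```python
# Performance sketch: time a fixed benchmark workload and report
# throughput. The workload here is illustrative; a real test would run
# the product's own benchmark.
import time


def benchmark(n=100_000):
    """Run the workload once; return (elapsed seconds, ops/sec, result)."""
    start = time.perf_counter()
    total = sum(range(n))              # the benchmark workload
    elapsed = time.perf_counter() - start
    ops_per_sec = n / elapsed if elapsed > 0 else float("inf")
    return elapsed, ops_per_sec, total
```

Tracking these numbers across builds turns a one-off measurement into a regression check.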
|Phased Approach||A testing strategy where test cases are developed in stages so that a minimally acceptable level of testing can be completed at any time. As new features are coded and frozen, they receive priority for a given amount of time, so that a concentrated effort is directed toward testing those new features before the effort returns to validating the preexisting functionality. When no new features are available, preexisting features are targeted, with priorities set by Project Leads. |
1st level - Build Acceptance Test
2nd level - Sanity Test
3rd level - Confidence Test
4th level - Depth Test
5th level - Error, Negative, and other Tests
6th level - System level tests
|Sanity Test||Sanity tests are subsets of the confidence test and are used only to validate high-level functionality.|
|Security Testing||Testing how easily a program's security mechanisms can be broken or circumvented.|
|Stress Test||These tests are used to validate software functionality at its stated limits (e.g. maximum throughput) and then beyond those limits.|
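A stress-test sketch against an illustrative bounded queue; the capacity is an assumed design limit, not taken from any real product:

```python
# Stress-test sketch: drive a bounded queue exactly to its stated limit,
# then one step beyond, verifying behavior both at and past the boundary.
# The queue and its capacity are illustrative.

class BoundedQueue:
    """Queue with a hard capacity, standing in for a product limit."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []

    def push(self, item):
        if len(self.items) >= self.capacity:
            raise OverflowError("queue full")
        self.items.append(item)


def stress_test(capacity=100):
    """Fill to the limit (must succeed), then exceed it (must fail cleanly)."""
    q = BoundedQueue(capacity)
    for i in range(capacity):          # at the limit: every push succeeds
        q.push(i)
    try:
        q.push("one too many")         # beyond the limit: must fail cleanly
        return False                   # no error at all is itself a failure
    except OverflowError:
        return len(q.items) == capacity
```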
|System Level Test||These tests check for factors such as Cross-Tool testing, memory management and other operating system factors.|
|Top-Down strategy||Start testing with the top of the program: the top-level module is tested first, with the lower-level modules it calls replaced by stubs until they are integrated.|
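A top-down sketch, assuming an illustrative top-level module whose lower layer is replaced by a stub:

```python
# Top-down sketch: the top-level module is tested first, with the
# lower-level module it depends on replaced by a stub. Names are
# illustrative.

def fetch_price_stub(item):
    """Stub standing in for a not-yet-tested lower-level module."""
    return {"apple": 2, "pear": 3}.get(item, 0)


def invoice_total(items, price_of=fetch_price_stub):
    """Top-level module under test; the lower layer is injected, so the
    stub can later be swapped for the real price lookup."""
    return sum(price_of(i) for i in items)
```

When the real price-lookup module is ready, it is passed in place of the stub and the same tests are re-run.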
|Volume Testing||The process of feeding a program a heavy volume of data.|
|Usability||The effectiveness, efficiency, and satisfaction with which specified users can achieve specified goals in a particular environment. Synonymous with "ease of use".|