Testing Topics

TESTING:
·         White box testing: Requires knowledge of the internal program design and code. It is usually done by the developers to test the program from the inside.
·         Black box testing: Testing the system's functionality against the requirements, without reference to the internal design.
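As a rough illustration of the difference (the discount function and its tests below are entirely hypothetical, written as plain pytest-style test functions):

```python
# Hypothetical function under test: a pricing rule with an internal "premium" branch.
def discount(price, customer_type):
    if customer_type == "premium":      # the internal branch a white-box test targets
        return price * 0.8
    return price * 0.95


# White-box tests: written with knowledge of the code, so each internal branch
# (premium and default) is exercised explicitly.
def test_discount_premium_branch():
    assert discount(100, "premium") == 80.0


def test_discount_default_branch():
    assert discount(100, "regular") == 95.0


# Black-box test: written only from the requirement
# "premium customers receive 20% off", with no reference to the implementation.
def test_premium_customers_get_20_percent_off():
    assert discount(200, "premium") == 160.0
```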

System Testing: System Testing tests all components and modules that are new, changed, affected by a change, or needed to form the complete application. The system test may require involvement of other systems but this should be minimized as much as possible to reduce the risk of externally-induced problems. Testing the interaction with other parts of the complete system comes in Integration Testing. The emphasis in system testing is validating and verifying the functional design specification and seeing how all the modules work together. For example, the system test for a new web interface that collects user input for addition to a database doesn’t need to include the database’s ETL application—processing can stop when the data is moved to the data staging area if there is one.
The first system test is often a smoke test. This is an informal quick-and-dirty run through of the application’s major functions without bothering with details. The term comes from the hardware testing practice of turning on a new piece of equipment for the first time and considering it a success if it doesn’t start smoking or burst into flame. System testing requires many test runs because it entails feature by feature validation of behavior using a wide range of both normal and erroneous test inputs and data.

The Test Plan is critical here because it contains descriptions of the test cases, the sequence in which the tests must be executed, and the documentation needed to be collected in each run. When an error or defect is discovered, previously executed system tests must be rerun after the repair is made to make sure that the modifications didn’t cause other problems.
This will be covered in more detail in the section on regression testing.
Functional testing (look and feel, field validation, error messages, and so on) includes the following:
·         Usability Testing: Evaluates the look and feel of the application and whether an end user finds it easy to understand and operate.
·         Regression Testing: Regression testing is also known as validation testing and provides a consistent, repeatable validation of each change to an application under development or being modified. Each time a defect is fixed, the potential exists to inadvertently introduce new errors, problems, and defects. An element of uncertainty is introduced about the ability of the application to repeat everything that went right up to the point of failure. Regression testing is the selective retesting of an application or system that has been modified to ensure that no previously working components, functions, or features fail as a result of the repairs. Regression testing is conducted in parallel with other tests and can be viewed as a quality control tool to ensure that the newly modified code still complies with its specified requirements and that unmodified code has not been affected by the change. It is important to understand that regression testing does not test that a specific defect has been fixed; it tests that the rest of the application up to the point of repair was not adversely affected by the fix. In short, it is the testing of existing or old features and functionality whenever new functionality is added (one way of organizing regression, sanity, and smoke suites is sketched after this list).
·         Sanity check testing: A sanity test is a narrow regression test that focuses on one or a few areas of functionality. Sanity testing is usually narrow and deep, and is usually unscripted.
A sanity test is used to determine whether a small section of the application is still working after a minor change.
Sanity testing is cursory testing; it is performed whenever a cursory pass is sufficient to prove that the application is functioning according to specification. This level of testing is a subset of regression testing.
Sanity testing verifies whether specific requirements are met, concentrating on the features affected by the change rather than sweeping across every feature.
It tests high-level functionality to decide whether the build can be accepted for further testing, for example by requiring that a minimum set of test conditions pass.

·         Smoke Testing: Smoke testing originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch fire and smoke. In the software industry, smoke testing is a shallow and wide approach in which all areas of the application are tested without going into too much depth.
A smoke test is scripted, either using a written set of tests or an automated test.
A smoke test is designed to touch every part of the application in a cursory way. It is shallow and wide.
Smoke testing is conducted to ensure that the most crucial functions of a program are working, without bothering with finer details (for example, build verification).
Smoke testing is a normal health check on a build of an application before taking it into in-depth testing; it should validate some core functionality.
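As a minimal sketch of how these suites are often organized in practice (assuming pytest and its marker mechanism; the App class, marker names, and test bodies are illustrative only):

```python
import pytest


class App:
    """Stand-in for the application under test (hypothetical)."""

    def login(self, user, password):
        return bool(user) and bool(password)

    def export_report(self, name):
        return f"{name}.csv"


@pytest.mark.smoke
def test_application_starts():
    # Shallow, wide check: the application can be constructed at all.
    assert App() is not None


@pytest.mark.sanity
def test_login_still_works_after_patch():
    # Narrow, deep check on the one area touched by a recent change.
    assert App().login("user", "secret") is True


@pytest.mark.regression
def test_existing_report_export_unchanged():
    # Re-validates previously working behaviour after a modification elsewhere.
    assert App().export_report("sales") == "sales.csv"
```

A smoke run would then select only the wide, shallow checks (for example "pytest -m smoke"), a sanity run the narrow checks for the changed area, and a regression run the full retest suite; the custom markers would normally be registered in pytest.ini to avoid warnings.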

Testing Process:
1) Test Plan
2) Test Strategy/Approach
3) Test Cases
1) Test Plan:
The plan of activities to be carried out throughout testing of the application. A test plan includes scope, approach, risks, resources, and schedule.
THE TEST PLAN
The test plan is a mandatory document. You can't test without one. For simple, straightforward projects the plan doesn't have to be elaborate, but it must address certain items. As identified by ANSI/IEEE Standard 829-1983 for Software Test Documentation, the following components should be covered in a software test plan.

Items Covered by a Test Plan

Component | Description | Purpose
Responsibilities | Specific people and their assignments | Assigns responsibilities and keeps everyone on track and focused
Assumptions | Code and systems status and availability | Avoids misunderstandings about schedules
Test | Testing scope, schedule, duration, and prioritization | Outlines the entire process and maps specific tests
Communication | Communications plan: who, what, when, how | Everyone knows what they need to know when they need to know it
Risk Analysis | Critical items that will be tested | Provides focus by identifying areas that are critical for success
Defect Reporting | How defects will be logged and documented | Tells how to document a defect so that it can be reproduced, fixed, and retested
Environment | The technical environment, data, work area, and interfaces used in testing | Reduces or eliminates misunderstandings and sources of potential delay

Test Plan:
A test specification is called a test plan. The developers are made aware of which test plans will be executed, and this information is also made available to management. The idea is to make them more cautious when developing their code or making additional changes. Some companies have a higher-level document called a test strategy.
A formal, detailed document that describes:
·         Scope, objectives, and the approach to testing
·         People and equipment dedicated/allocated to testing
·         Tools that will be used
·         Dependencies and risks
·         Categories of defects
·         Test entry and exit criteria
·         Measurements to be captured
·         Reporting and communication processes
·         Schedules and milestones
Test Case:      
A document that defines a test item and specifies a set of test inputs or data, execution conditions, and expected results. The inputs and data used by a test case should include both normal values intended to produce a 'good' result and intentionally erroneous values intended to produce an error. A test case is generally executed manually, but many test cases can be combined for automated execution.
A test case normally consists of a unique identifier, requirement references from a design specification, preconditions, events, a series of steps (also known as actions) to follow, input, output, expected result, and actual result. Clinically defined, a test case is an input and an expected result.[34] This can be as pragmatic as 'for condition x your derived result is y', whereas other test cases describe the input scenario and the expected results in more detail. A test case can occasionally be a series of steps (though often the steps are contained in a separate test procedure that can be exercised against multiple test cases, as a matter of economy) but with one expected result or expected outcome.

The optional fields are a test case ID, test step or order of execution number, related requirement(s), depth, test category, author, and check boxes for whether the test is automatable and has been automated. Larger test cases may also contain prerequisite states or steps, and descriptions. A test case should also contain a place for the actual result. These records can be stored in a word processor document, spreadsheet, database, or other common repository. In a database system, you may also be able to see past test results, who generated the results, and what system configuration was used to generate them; these past results would usually be stored in a separate table.
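In that clinical sense, a test case can be expressed directly as an input paired with an expected result. A minimal sketch (using Python's math.sqrt purely as an illustrative item under test, and pytest for parameterization):

```python
import math
import pytest

# Each tuple is one test case: (input, expected result).
# These are normal inputs expected to produce a 'good' result.
CASES = [
    (0, 0.0),
    (4, 2.0),
    (9, 3.0),
]


@pytest.mark.parametrize("value,expected", CASES)
def test_square_root_normal_inputs(value, expected):
    assert math.sqrt(value) == expected


def test_square_root_erroneous_input():
    # An intentionally erroneous input is expected to produce an error, not a value.
    with pytest.raises(ValueError):
        math.sqrt(-1)
```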
Test Script :
Step-by-step procedures for using a test case to test a specific unit of code, function, or capability.
The test script is the combination of a test case, test procedure, and test data. Initially the term was derived from the product of work created by automated regression test tools. Today, test scripts can be manual, automated, or a combination of both.
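As a sketch of how these three pieces fit together in an automated script (the application, its methods, and the data below are hypothetical stand-ins):

```python
# Test data and expected result for this script.
TEST_DATA = {"username": "qa_user", "password": "secret", "item": "SKU-001"}
EXPECTED_RESULT = "Order confirmed"


class FakeApp:
    """Minimal stand-in for the system under test, so the sketch is self-contained."""

    def login(self, user, password):
        self.user = user

    def add_to_cart(self, item):
        self.item = item

    def checkout(self):
        return "Order confirmed"


def run_checkout_procedure(app, data):
    """Test procedure: the ordered steps of the script."""
    app.login(data["username"], data["password"])   # step 1
    app.add_to_cart(data["item"])                    # step 2
    return app.checkout()                            # step 3


def test_checkout_script():
    # Test case: running the procedure with this data must yield the expected result.
    assert run_checkout_procedure(FakeApp(), TEST_DATA) == EXPECTED_RESULT
```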

Test Scenario :
A chronological record of the details of the execution of a test script. Captures the specifications, tester activities, and outcomes. Used to identify defects.
Test Run:
 A series of logically related groups of test cases or conditions.

TYPES OF SOFTWARE TESTS
 The V-Model of testing identifies five software testing phases, each with a certain type of test associated with it.
Phase | Guiding Document | Test Type
Development Phase | Technical Design | Unit Testing
System and Integration Phase | Functional Design | System Testing, Integration Testing
User Acceptance Phase | Business Requirements | User Acceptance Testing
Implementation Phase | Business Case | Product Verification Testing

Regression Testing applies to all phases.
Test Case Template:
1)      Test case title
2)      Customer Requirement- The name of the customer requirement (or functional specification) this test case is used for. Logging this will ensure you have adequate testing for all customer requirements.
3)      Owner- This is the person ultimately responsible for the test case
4)      Assignee –Person the test case is assigned to.
5)      Group/Sub group-Optional, used to categorize the test case
6)      Steps –List the exact steps you take to perform the test.
7)      Expected results- List the results you expect to happen when the test is run.
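A minimal sketch of this template as a structured record, assuming Python dataclasses (the field names simply mirror the list above):

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class TestCase:
    title: str                       # 1) test case title
    customer_requirement: str        # 2) requirement or functional specification covered
    owner: str                       # 3) person ultimately responsible for the test case
    assignee: str                    # 4) person the test case is assigned to
    steps: List[str]                 # 6) exact steps taken to perform the test
    expected_results: List[str]      # 7) results expected when the test is run
    group: Optional[str] = None      # 5) optional group/sub-group for categorization


case = TestCase(
    title="Premium discount applied at checkout",
    customer_requirement="REQ-017",
    owner="QA Lead",
    assignee="Tester A",
    steps=["Log in as a premium customer", "Add an item priced 100", "Open the cart"],
    expected_results=["Cart total shows 80"],
)
```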
Defect Tracking:
In engineering, defect tracking is the process of finding defects in a product (by inspection, testing, or recording feedback from customers) and making new versions of the product that fix them. Defect tracking is important in software engineering because complex software systems typically have tens, hundreds, or even thousands of defects, and managing, evaluating, and prioritizing them is a difficult task. Defect tracking systems are database systems that store defects and help people manage them.
The objective of defect prevention is to identify defects and take corrective action to ensure they are not repeated over subsequent iterative cycles. Defect prevention can be implemented by preparing an action plan to minimize or eliminate defects, generating defect metrics, defining corrective actions, and producing an analysis of the root causes of the defects.
Defect prevention can be accomplished by actioning the following steps:
1.     Calculate defect data with periodic reviews using test logs from the execution phase; this data should be used to segregate and classify defects by root cause, producing defect metrics that highlight the most prolific problem areas (a small sketch of such metrics follows this list);
2.     Identify improvement strategies;
3.     Escalate issues to senior management or customer where necessary;
4.     Draw up an action plan to address outstanding defects and improve development process. This should be reviewed regularly for effectiveness and modified should it prove to be ineffective.
5.     Undertake periodic peer reviews to verify that the action plans are being adhered to;
6.     Produce regular reports on defects by age. If the defect age for a particular defect is high and the severity is sufficient to cause concern, focused action needs to be taken to resolve it.
7.     Classify defects into categories such as critical defects, functional defects, and cosmetic defects.
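Step 1 above calls for defect metrics grouped by root cause. A rough sketch of producing such a metric, assuming the defects have already been classified (the categories and records below are made up):

```python
from collections import Counter

# Hypothetical defect log entries after root-cause classification.
defects = [
    {"id": "D-101", "root_cause": "requirements gap", "severity": "critical"},
    {"id": "D-102", "root_cause": "coding error",     "severity": "major"},
    {"id": "D-103", "root_cause": "coding error",     "severity": "minor"},
    {"id": "D-104", "root_cause": "environment",      "severity": "major"},
]

# Defect metric: count per root cause, highlighting the most prolific problem areas.
by_root_cause = Counter(d["root_cause"] for d in defects)
for cause, count in by_root_cause.most_common():
    print(f"{cause}: {count}")
```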
Evaluating a bug tracking system requires that you understand how specific features, such as configurable workflow and customizable fields, relate to your requirements and your current bug tracking process. After a defect has been found, it must be reported to development so that it can be fixed. Much has been written about identifying defects and reproducing them, but very little has been done to explain the reporting process and what developers really need.

Everyone knows that it is the responsibility of the tester not only to find and report bugs, but also to manage and handle bugs efficiently until they have been fixed.

Defect tracking systems influence business-critical decisions. Building and installing a corporate-wide defect tracking system takes a small but well-balanced development team. Your implementation may be as simple as opening the package and typing "setup" or it may take months of programming.

Defect tracking Procedure:
At any point in development (during testing or not), any defect found will be entered into the database provided by the team.  The person who discovered the defect will assign it to a team member, and it will be that person's responsibility to correct the problem and update the database by the due date.  When a defect is assigned, it is also the discoverer's responsibility to notify the person assigned to the correction.
           
      Steps to follow when a defect is found
1.      Create a new entry in the Teamatic database
2.      Email the person who was assigned to correct the defect (automatic with Teamatic)
Necessary Field Descriptions:
Release ID - stage in which defect is found
Created by - person reporting defect
Assigned to - person assigned to fix the defect
Type - the type of defect found (i.e. bug, enhancement, etc)
Status - the current state of the defect
Priority - how important it is to fix the defect quickly
Severity - amount of impact it has on the program
Short Description - brief narrative describing the problem
Module - (if known) the class the defect resides in
Description - a more detailed narrative on the problem that includes information on reproducing the defect.
Resolution - how the defect was fixed (to be added by the person who fixed the defect)
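As a sketch, these fields map naturally onto a simple record type; the Python dataclass below merely mirrors the descriptions above and is not tied to Teamatic or any particular tool:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class DefectRecord:
    release_id: str                    # stage in which the defect was found
    created_by: str                    # person reporting the defect
    assigned_to: str                   # person assigned to fix the defect
    defect_type: str                   # type of defect found (bug, enhancement, etc.)
    status: str                        # current state of the defect
    priority: str                      # how important it is to fix the defect quickly
    severity: str                      # amount of impact it has on the program
    short_description: str             # brief narrative describing the problem
    description: str                   # detailed narrative, including reproduction steps
    module: Optional[str] = None       # class/module the defect resides in, if known
    resolution: Optional[str] = None   # how the defect was fixed, added after the fix
```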
Defect tracking is an important part of software testing. In most organizations the defect tracking process is broadly the same. The steps below describe a sample defect tracking process. Depending on the size of the project or project team, this process may be substantially more complex.
1. Execute test and log any discrepancies.

The tester executes the test and compares the actual results to the documented expected results. If a discrepancy exists, the discrepancy is logged as a “defect” with a status of “open.” Supplementary documentation, such as screen prints or program traces, is attached if available.

2. Determine if discrepancy is a defect.

The Test Manager or tester reviews the defect log with an appropriate member of the development team to determine if the discrepancy is truly a defect and is repeatable. If it is not a defect, or is not repeatable, the log should be closed with an explanatory comment.

3. Assign defect to developer.

If a defect exists it is assigned to a developer for correction. This may be handled automatically by the tool, or may be determined as a result of the discussion in
step 2.

4. Defect resolution process.

When the developer has acknowledged the defect is valid, the resolution process begins. The four steps of the resolution process are:

·         Prioritize the correction – Three recommended prioritization levels are: “critical”, “major”, and “minor”. “Critical” means there is a serious impact on the organization’s business operation or on further testing. “Major” causes an output of the software to be incorrect or stops or impedes further testing. “Minor” means something is wrong, but it does not directly affect the user of the system or further testing, such as a documentation error or cosmetic GUI error. The purpose of this step is to initiate any immediate action that may be required after answering the questions: Is this a new or previously reported defect? What priority should be given to correcting this defect? Should steps be taken to minimize the impact of the defect before the correction, such as notifying users, finding a workaround?

·         Schedule the correction – Based on the priority of the defect, the correction should be scheduled. All defects are not created equal from the perspective of how quickly they need to be corrected, although they may all be equal from a defect-prevention perspective. Some organizations actually treat lower priority defects as changes.

·         Correct the defect – The developer corrects the defect, and upon completion, updates the log with a description of the correction and changes the status to “Corrected” or “Retest”. The tester then verifies that the defect has been removed from the system. Additional regression testing is performed as needed based on the severity and impact of the correction applied. In addition, test data, checklists, etc., should be reviewed and perhaps enhanced, so that in the future this defect will be caught earlier. If the retest results match the expected results, the tester updates the defect status to “Closed”. If the problem remains, the tester changes the status back to “Open” and this step is repeated until closure. (One way to make these status transitions explicit is sketched after this list.)

·         Report the resolution – Once the defect has been corrected and the correction verified, appropriate developers, users, etc., need to be notified that the defect has been corrected, the nature of the correction, when the correction will be released, and how the correction will be released. As in many aspects of defect management, this is an area where an automated process would help. Most defect management tools capture information on who found and reported the problem and therefore provide an initial list of who needs to be notified. Computer forums and electronic mail can help notify users of widely distributed software.
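A minimal sketch of those transitions as a lookup table (the statuses mirror the ones named above; the helper function is purely illustrative):

```python
# Allowed defect status transitions, following the flow described above.
ALLOWED_TRANSITIONS = {
    "Open": {"Corrected", "Retest"},   # developer applies a fix and hands back
    "Corrected": {"Closed", "Open"},   # retest passes -> Closed, fails -> Open again
    "Retest": {"Closed", "Open"},
    "Closed": set(),                   # no further transitions modelled here
}


def change_status(current, new):
    """Return the new status if the transition is allowed, otherwise raise."""
    if new not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"Illegal status change: {current} -> {new}")
    return new


status = "Open"
status = change_status(status, "Corrected")   # developer marks the defect corrected
status = change_status(status, "Closed")      # tester verifies the retest and closes
```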

Traceability matrix:
A traceability matrix is a table that correlates requirements or design documents to test documents. It is used to change tests when the source documents are changed, or to verify that the test results are correct.
Traceability matrices can be established using a variety of tools including requirements management software, databases, spreadsheets, or even with tables or hyperlinks in a word processor.
A traceability matrix is created by associating requirements with the work products that satisfy them. Tests are associated with the requirements on which they are based and the product to be tested against the requirement.
Traceability requires unique identifiers for each requirement and product. Numbers for products are established in a configuration management (CM) plan. The configuration management plan defines how changes will be tracked and controlled. Traceability is a key part of managing change.
Traceability ensures completeness, that all lower level requirements come from higher level requirements, and that all higher level requirements are allocated to lower level ones. Traceability also provides the basis for test planning.
Below is a simple traceability matrix structure. There can be more things included in a traceability matrix than shown. In traceability, the relationship of driver to satisfier can be one-to-one, one-to-many, many-to-one, or many-to-many.
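A minimal sketch of such a matrix as a mapping from requirement identifiers to the test cases that satisfy them (all identifiers below are invented); the one-to-many and many-to-many relationships show up directly in the data:

```python
# Driver (requirement) -> satisfiers (test cases).
traceability = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-102"],     # TC-102 also satisfies REQ-001: a many-to-many link
    "REQ-003": [],             # a gap: requirement with no test against it yet
}

# Completeness check: every requirement should be covered by at least one test.
uncovered = [req for req, tests in traceability.items() if not tests]
print("Requirements without tests:", uncovered)   # -> ['REQ-003']
```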

Test Execution:
The test execution engine monitors, presents, and stores the status, results, time stamp, length, and other information for every test step of a test sequence.
Main functions of a test execution engine:
§  Select a test type to execute. Selection can be automatic or manual.
§  Load the specification of the selected test type by opening a file from the local file system or downloading it from a Server, depending on where the test repository is stored.
§  Execute the test through the use of testing tools (software test) or instruments (hardware test), while showing the progress and accepting control from the operator (for example, to abort).
§  Present the outcome (such as Passed, Failed, or Aborted) of test steps and the complete sequence to the operator
§  Store the Test Results in report files
An advanced test execution engine may have additional functions, such as:
§  Store the test results in a Database
§  Load test result back from the Database
§  Present the test results as raw data.
§  Present the test results in a processed format. (Statistics)
§  Authenticate the operators.
Advanced functions of the test execution engine may be less important for software testing, but these advanced features could be essential when executing hardware/system tests.
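A toy sketch of the core loop of such an engine for software tests, assuming the "specification" is just an ordered list of named step functions (real engines load specifications from files or a server and drive external tools or instruments):

```python
import time


def run_sequence(steps):
    """Execute each test step, recording status, timestamp, and duration."""
    report = []
    for name, func in steps:
        start = time.time()
        try:
            func()
            status = "Passed"
        except AssertionError:
            status = "Failed"
        report.append({"step": name, "status": status,
                       "timestamp": start, "duration": time.time() - start})
    return report


# Two trivial steps standing in for real test steps.
def step_addition_works():
    assert 1 + 1 == 2


def step_deliberate_failure():
    assert "expected" == "actual"


for entry in run_sequence([("addition works", step_addition_works),
                           ("deliberate failure", step_deliberate_failure)]):
    print(entry["step"], "->", entry["status"])
```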

Test Execution process:

1. Based on risk prioritization, project constraints, and any other pertinent considerations, select the test suites (from the test set) that should be run in this test cycle.
2. Assign the test cases in each test suite to testers for execution.
3. Execute the test cases, report bugs, and capture information about the tests continuously, taking into account previous test results for each subsequent test.
   3.A. Put the system under test and the test system into appropriate initial states. If this initial state is useful across multiple tests or multiple iterations of this test, save the initial states for subsequent re-use.
   3.B. Through data inputs and other stimulus, provoke the system under test into a desired test condition.
   3.C. Observe and evaluate the resulting outputs, behaviors, and states. Research any deviations from expected results.
   3.D. If appropriate, report problems in the system under test.
   3.E. If appropriate, report and/or resolve problems in the test system.
   3.F. Capture and report information about the test just executed.
4. Resolve blocking issues as they arise.
5. Report status, adjust assignments, and reconsider plans and priorities daily.
6. If appropriate, eliminate unrealizable or redundant tests in reverse-priority order (drop lowest priority tests first, highest priority tests last).
7. Periodically report test cycle findings and status.
8. Check any status documents, initial states, updated testware or other test system elements, and other useful permanent records produced into the project library or configuration management system, and place the item(s) under change control.

A sample testing cycle
Although variations exist between organizations, there is a typical cycle for testing[33]. The sample below is common among organizations employing the Waterfall development model.
§  Requirements analysis: Testing should begin in the requirements phase of the software development life cycle. During the design phase, testers work with developers in determining what aspects of a design are testable and with what parameters those tests work.
§  Test planning: Test strategy, test plan, testbed creation. Since many activities will be carried out during testing, a plan is needed.
§  Test development: Test procedures, test scenarios, test cases, test datasets, test scripts to use in testing software.
§  Test execution: Testers execute the software based on the plans and test documents then report any errors found to the development team.
§  Test reporting: Once testing is completed, testers generate metrics and make final reports on their test effort and whether or not the software tested is ready for release.
§  Test result analysis: Or defect analysis, is done by the development team, usually along with the client, in order to decide which defects should be treated and fixed, rejected (i.e., the software is found to be working properly), or deferred to be dealt with later.
§  Defect Retesting: Once a defect has been dealt with by the development team, it is retested by the testing team.
§  Regression testing: It is common to have a small test program built of a subset of tests, for each integration of new, modified, or fixed software, in order to ensure that the latest delivery has not ruined anything, and that the software product as a whole is still working correctly.
§  Test Closure: Once the test meets the exit criteria, the activities such as capturing the key outputs, lessons learned, results, logs, documents related to the project are archived and used as a reference for future projects.
Test Execution workflow:
If your test execution needs are more complex, and you need to support multiple platforms and test environments, you can take advantage of the full range of product capabilities to help automate your work. The following scenario presents one possible workflow for test execution.
1.      Create a test case.
A test case is required for test execution.
2.      Add the test case to a test plan.
A test plan helps to organize your test execution activities, but it is not a requirement for test execution.
3.      Open the Test Environments section of the test plan and define your platform coverage.
4.      Also in the Test Environments section, generate your test environments based on the defined platform coverage.
5.      Optionally, create a test script and associate it with the test case.
You can create a manual test script using Rational Quality Manager or migrate the manual test script from another tool, such as Rational Manual Tester. You can also create references to preexisting automated tests created with other tools, such as Rational Functional Tester.
Note: Certain kinds of tests, such as system verification tests, may not include a test script.
6.      Create a test execution record for the test case or generate multiple test execution records for the test case automatically.
7.      Run the test execution record.
8.      View the test execution results and file defects as needed.
9.       Update the test execution results as needed and save the results.
