

Stress testing

Stress testing is subjecting a system to an unreasonable load while denying it the resources (e.g., RAM, disk, CPU cycles, interrupts) needed to process that load. The idea is to stress the system to its breaking point in order to find bugs that would make that break potentially harmful. The system is not expected to process the overload without adequate resources, but to fail in a decent manner (e.g., without corrupting or losing data). Bugs and failure modes discovered under stress testing may or may not be repaired, depending on the application, the failure mode, the consequences, and so on. The load (the incoming transaction stream) in stress testing is often deliberately distorted so as to force the system into resource depletion.

Stress testing tries to break the system under test by overwhelming its resources or by taking resources away from it. The main purpose of stress testing is to make sure that the system fails and recovers gracefully; in other words, stress testing determines the recoverability of the application.

Stress testing deliberately induces chaos and unpredictability. To take the example of a Web application, here are some ways in which stress can be applied to the system:

- Double the number of concurrent users/HTTP connections

- Randomly shut down and restart ports on the network switches/routers that connect the servers (via SNMP commands for example)

- Take the database offline, then restart it

- Rebuild a RAID array while the system is running

- Run processes that consume resources (CPU, memory, disk, network) on the Web and database servers (see the sketch after this list)
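
As a rough sketch of the last item, the following Python script spins up one CPU-burning, memory-holding process per core on a server. The duration and allocation size are arbitrary assumptions for illustration, not values prescribed by any particular tool.

import multiprocessing
import time

def hog(seconds=60, mb=256):
    # Hold ~mb megabytes of memory to create memory pressure.
    ballast = bytearray(mb * 1024 * 1024)
    deadline = time.time() + seconds
    # Spin in a tight loop to create CPU pressure until the deadline passes.
    while time.time() < deadline:
        sum(i * i for i in range(1000))
    del ballast

if __name__ == "__main__":
    # One worker per core saturates the CPU and consumes ~256 MB each.
    workers = [multiprocessing.Process(target=hog)
               for _ in range(multiprocessing.cpu_count())]
    for w in workers:
        w.start()
    for w in workers:
        w.join()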

Stress testing does not break the system purely for the sake of breaking it; rather, it allows testers to observe how the system reacts to failure and whether it recovers sanely afterward. Does it save its state or does it crash suddenly? Does it hang and freeze or does it fail gracefully? On restart, is it able to recover from the last good state? Does it print meaningful error messages to the user? Is the security of the system compromised because of unexpected failures?



Testing Methodologies

Below are the testing methodologies commonly followed in software testing:

Black-Box Testing
With this strategy, the tester views the program as a black box and does not see the program's code. Techniques include equivalence partitioning, boundary-value analysis, and error guessing.
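
As a quick illustration (the age range and function are hypothetical), boundary-value analysis and equivalence partitioning for a field that accepts ages 18 through 60 could look like this in Python:

def accepts_age(age):
    # Hypothetical system under test: valid ages are 18..60 inclusive.
    return 18 <= age <= 60

# Boundary-value analysis: values just below, on, and just above each boundary.
boundary_cases = {17: False, 18: True, 60: True, 61: False}

# Equivalence partitioning: one representative value per partition.
partition_cases = {5: False, 40: True, 90: False}

for age, expected in {**boundary_cases, **partition_cases}.items():
    assert accepts_age(age) == expected, f"age {age}: expected {expected}"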

White-Box Testing
With this strategy, the tester examines the internal structure of the program. Techniques include statement coverage, decision coverage, condition coverage, decision/condition coverage, and multiple-condition coverage.
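
To illustrate the difference between these coverage levels with a made-up function: decision coverage requires each decision to evaluate both true and false, while condition coverage also requires each atomic condition to take both values.

def grant_discount(age, member):
    # Hypothetical function with one compound decision.
    if age >= 65 or member:
        return True
    return False

# Decision coverage: the decision as a whole is true once and false once.
assert grant_discount(70, False) is True     # decision true
assert grant_discount(30, False) is False    # decision false

# Condition coverage additionally exercises each atomic condition both ways;
# this test makes `member` true while `age >= 65` stays false.
assert grant_discount(30, True) is True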

Gray-Box Testing
With this strategy, black-box testing is combined with knowledge of the system's internals, such as database validation: using SQL to query the database, adding or loading data sets to exercise functions, and querying the database to confirm expected results.
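
A minimal sketch of this idea in Python, using an in-memory SQLite database (the table, column, and function names are invented for illustration):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

def register_user(name):
    # Stand-in for the feature exercised through the "black box".
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
    conn.commit()

register_user("alice")  # black-box action

# Gray-box check: query the database directly to confirm the expected result.
count = conn.execute(
    "SELECT COUNT(*) FROM users WHERE name = ?", ("alice",)
).fetchone()[0]
assert count == 1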

Test Script
A type of test file: a set of instructions run automatically by a software or hardware test tool.
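
For instance, a minimal pytest-style test script might look like the following (the file name and URL are placeholders); running `pytest test_login.py` executes it automatically:

# test_login.py
import requests

def test_login_page_is_up():
    # Placeholder URL; a real script would target the application under test.
    response = requests.get("http://example.com/login")
    assert response.status_code == 200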

Test Metrics

Metrics are the means by which software quality can be measured; they give you confidence in the product. They can be viewed as product management indicators, either quantitative or qualitative, and they typically provide the visibility you need into the quality of the product.



Defect Removal Efficiency:

DRE = (Defects removed during a development phase/Defects latent in the product at that phase) x 100%

Since the number of latent defects in a software product is unknown at any point in time, it is approximated by adding the number of defects removed during the phase to the number of defects found later that already existed during that phase.
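
For example (with illustrative numbers): if 80 defects are removed during system testing and 20 more defects that already existed in that phase are found later, DRE = 80 / (80 + 20) x 100% = 80%.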


Defect density:

Defect Density is a measure of the total known defects divided by the size of the software entity being measured.

Defect Density = (Number of Known Defects / Size)

The Number of Known Defects is the count of total defects identified against a particular software entity during a particular time period. Examples include:

- defects to date since the creation of the module
- defects found in a program during an inspection
- defects to date since the shipment of a release to the customer
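
For example (with illustrative numbers): a module of 15 KLOC with 30 known defects has a defect density of 30 / 15 = 2 defects per KLOC.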


Defect severity index:

An index representing the average of the severity of the defects. This provides a direct measurement of the quality of the product—specifically, reliability, fault tolerance and stability.

Two measures are required to compute the defect severity index. A number is assigned to each severity level: 4 (Critical), 3 (Serious), 2 (Medium), 1 (Low). Multiply the number of defects at each severity level by that level's number and add the totals; divide this by the total number of defects to determine the defect severity index.
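
A minimal sketch of the computation in Python, with made-up defect counts:

severity_weights = {"Critical": 4, "Serious": 3, "Medium": 2, "Low": 1}
defect_counts = {"Critical": 2, "Serious": 5, "Medium": 10, "Low": 3}  # illustrative

weighted_sum = sum(severity_weights[s] * n for s, n in defect_counts.items())
total_defects = sum(defect_counts.values())

# (2*4 + 5*3 + 10*2 + 3*1) / 20 = 46 / 20 = 2.3
severity_index = weighted_sum / total_defects
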
Test coverage:

Defined as the extent to which testing covers the product’s complete functionality. This metric is an indication of the completeness of the testing. It does not indicate anything about the effectiveness of the testing. This can be used as a criterion to stop testing. Coverage could be with respect to requirements, functional topic list, business flows, use cases, etc. It can be calculated based on the number of items that were covered vs. the total number of items.
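
For example (with illustrative numbers): if test cases cover 180 of 200 documented requirements, requirement coverage is 180 / 200 = 90%.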


Test case effectiveness:

Defined as the extent to which test cases are able to find defects. This metric provides an indication of the effectiveness of the test cases and the stability of the software.

Test case effectiveness = (Number of test cases that found defects/ Total number of test cases)
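
For example (with illustrative numbers): if 40 of 500 executed test cases found at least one defect, test case effectiveness is 40 / 500 = 8%.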


Defects per KLOC:

Defined as the number of defects per 1,000 lines of code. This metric indicates the quality of the product under test. It can be used as a basis for estimating defects to be addressed in the next phase or the next version.

Defects per KLOC = (Number of defects found / Total lines of code, in thousands)
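
For example (with illustrative numbers): 120 defects found in a product of 60 KLOC give 120 / 60 = 2 defects per KLOC.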