Software Testing
http://qainterviews.com/s
1. Load Testing Vs Stress Testing
http://qainterviews.com/load_vs_stress.htm
2. Performance Testing
http://qainterviews.com/performance_testing_concepts.htm
3. Scalability Testing
http://qainterviews.com/scalability_testing.htm
4. Smoke Testing Vs Sanity Testing
http://qainterviews.com/smoke_vs_sanity.htm
5. Functional Testing Interview Questions
http://qainterviews.com/functional_testing.htm
6. General Testing Interview Questions
http://qainterviews.com/general_testing.htm
7. Database Testing Interview Questions
http://qainterviews.com/database_testing.htm
8. Tips to design test data before executing your test cases
http://www.softwaretestinghelp.com/tips-to-design-test-data-before-executing-your-
9. 7 basic tips for testing multi-lingual web sites
http://www.softwaretestinghelp.com/7-basic-tips-for-testing-multi-lingual-web-sites/test-cases/
Cookie Testing
A cookie is a small piece of information stored in a text file on the user's local drive by the web server when the browser visits a site. The browser later sends this information back to that server, which uses it to recognize the machine. Generally a cookie contains personalized user data or information that is used to communicate between different web pages.
What is the use of Cookies?
Cookies are essentially the user's online identity and are used to track where the user navigated throughout the web site's pages.
For example, if you access http://www.testing.com/x.htm, the web browser simply requests the page x.htm from the testing.com web server. The next time you type http://www.testing.com/y.htm, a new request is sent to the testing.com web server for the y.htm page, and the web server knows nothing about whom the previous page x.htm was served to.
What if you want the history of this user's communication with the web server? You need to maintain the user state and the interaction between web browser and web server somewhere. This is where cookies come into the picture: they serve the purpose of maintaining the user's interactions with the web server.
How do cookies work?
Cookies are maintained through HTTP, the protocol used to exchange information on the web. HTTP itself is stateless: it keeps no record of previously accessed web pages. Cookies are the mechanism that adds state on top of it, allowing the web server to recognize previous browser-server interactions.
Whenever the user visits a web page that uses cookies, code in that page (typically a call to a scripting language such as JavaScript, PHP, or Perl) causes a small text file, the cookie, to be written on the user's machine.
Here is one example of the response header a web server sends to write a cookie:
Set-Cookie: NAME=VALUE; expires=DATE; path=PATH; domain=DOMAIN; secure
When the user visits the same page or domain at a later time, this cookie is read from disk and used to identify the second and subsequent visits of the same user to that domain.
Generally there are two types of cookies:
1) Session cookies: a session cookie is active as long as the browser that created it remains open. When you close the browser, the session cookie is deleted. (An expiration time can also be set for such a cookie.)
2) Persistent cookies: these cookies are written permanently to the user's machine and last for months or years.
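Both cookie types can be sketched with Python's standard http.cookies module (the cookie name "sessionid" and its value are invented for illustration): a cookie with no expiry attribute behaves as a session cookie, while adding Max-Age or Expires makes it persistent.

```python
from http.cookies import SimpleCookie

# Build a cookie the way server-side code would before emitting the
# Set-Cookie header. Without an expiry attribute this is a session
# cookie, deleted when the browser closes.
cookie = SimpleCookie()
cookie["sessionid"] = "abc123"
cookie["sessionid"]["path"] = "/"

# Adding max-age (or expires) turns it into a persistent cookie.
cookie["sessionid"]["max-age"] = 3600  # kept for one hour

header = cookie["sessionid"].OutputString()
print(header)  # e.g. sessionid=abc123; Path=/; Max-Age=3600

# On a later request the browser sends the cookie back; the server
# can parse it out of the Cookie request header:
incoming = SimpleCookie("sessionid=abc123")
print(incoming["sessionid"].value)
```
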
Where are cookies stored?
When a web application writes a cookie, it gets saved in a text file on the user's hard disk drive. The path where cookies are stored varies between browsers. E.g. Internet Explorer stores cookies under "C:\Documents and Settings\Default User\Cookies".
Here "Default User" is replaced by the user you are currently logged in as, such as "Administrator", or a user name like "Sam".
To see the stored cookies in the Firefox browser: open Firefox, click Tools->Options->Privacy and then the "Show cookies" button.
How are cookies stored?
Let's take the example of a cookie written by google.com in the Mozilla Firefox browser:
When you open the page google.com in Firefox, a cookie is written to your hard disk. To view this cookie, click the "Show cookies" button under Tools->Options->Privacy and select the google.com site in the cookie list. You can see the different cookies written by the google.com domain, each with a different name. Given below is the description of one particular cookie, named _utmz.
Name: _utmz (cookie name)
Content: 173272373.1215690934.1.1.utmccn=(direct)utmcsr=(direct)utmcmd=(none)
Domain: google.com
Path: /support/talk/
Send For: Any type of connection
Expires: Friday, January 09, 2009 5:25:34 AM
Applications where cookies can be used:
1) Shopping carts:
Cookies are used for maintaining online ordering systems: they remember what a user wants to buy. If the user adds some products to their shopping cart and then, for whatever reason, decides not to buy them this time and closes the browser window, the next time the same user visits the purchase page he can see all the products he added to the shopping cart on his last visit.
2) User sessions:
Cookies can track user sessions on a particular domain using a user ID and password.
3) Personalized sites:
When a user visits certain pages, they are asked which pages they do not want to visit or have displayed. The user's choices are stored in a cookie, and as long as the user is online those pages are not shown to him.
4) User tracking:
Cookies are used to track the number of unique visitors online at a particular time.
Disadvantages of cookies:
1) Security issues:
Cookies sometimes store a user's personal information. If a hacker gets hold of these cookies, he can gain access to that information. Corrupted cookies can also be read by different domains, potentially leading to security issues.
2) Sensitive information:
Some sites may store user's sensitive information in cookies, which should not be allowed due to privacy concerns.
Test cases for cookie testing:
1) Check whether your application is writing cookies properly to the hard disk.
2) As a cookie privacy policy, make sure from your design documents that no personal or sensitive data is stored in a cookie.
3) If you have no option but to save sensitive data in a cookie, make sure the data stored in the cookie is encrypted so that others cannot read it.
4) Make sure there is no overuse of cookies on the site under test. Overuse will annoy users if the browser prompts for cookies too often, and this could result in loss of site traffic and eventually loss of business.
5) Disable cookies in your browser settings: if your site uses cookies, its major functionality will not work with cookies disabled. Close all browsers and delete all previously written cookies, then disable cookies and try to access the web site under test. Navigate through the site and check whether appropriate messages are displayed to the user, such as "For smooth functioning of this site make sure that cookies are enabled on your browser". No page should crash because cookies are disabled.
6) Accept/reject some cookies: the best way to check web site functionality is not to accept all cookies. If your web application writes 10 cookies, randomly accept some and reject others, say accept 5 and reject 5. To execute this test case, set your browser options to prompt whenever a cookie is about to be written to disk; on that prompt you can accept or reject each cookie. Then try to access the major functionality of the web site and check whether pages crash or data gets corrupted.
7) Delete cookies: allow the site to write its cookies, then close all browsers and manually delete all cookies for the web site under test. Access the web pages and check the behavior of the pages.
8) Corrupt the cookies: corrupting a cookie is easy, since you know where cookies are stored. Manually edit a cookie in Notepad and change its parameters to some vague values, e.g. alter the cookie content, the name of the cookie, or its expiry date, and check the site's functionality. In some cases a corrupted cookie allows other domains to read the data inside it; this should not happen with your web site's cookies. Note that a cookie written by one domain, say rediff.com, cannot be accessed by another domain, say yahoo.com, unless the cookie is corrupted and someone is trying to hack the cookie data.
9) Check the deletion of cookies from your web application page: sometimes a cookie written by a domain, say rediff.com, may be deleted by a different page under the same domain. This is the general case when testing an 'action tracking' web portal: an action or purchase tracking pixel is placed on the action page, and when the user completes an action or purchase, the cookie written to disk is deleted to avoid logging multiple actions from the same cookie. Check that reaching your action or purchase page deletes the cookie properly and that no further invalid actions or purchases get logged for the same user.
10) Cookie testing on multiple browsers: this is an important case; check that your web application page writes cookies properly on different browsers, as intended, and that the site works properly using those cookies. Test your web application on the major browsers such as Internet Explorer (various versions), Mozilla Firefox, Netscape, Opera, etc.
11) If your web application uses cookies to maintain a user's logged-in state, log in to the application using some username and password. In many cases you can see the logged-in user ID parameter directly in the browser address bar. Change this parameter to a different value, say if the previous user ID is 100 make it 101, and press Enter. A proper access-denied message should be displayed, and the user should not be able to see another user's account.
These are some of the main test cases for testing website cookies. More test scenarios can be derived from these by combining the above scenarios.
White Box Testing
White box testing is testing performed at the code level of the software. When you know the internal structure of a product's code, white box testing can be performed to ensure that the internal operations are performed according to the specification. It is also known as structural testing and glass box testing.
The various types of white box testing techniques are described below:
Unit Testing:
The developer carries out unit testing to check whether a particular module or unit of code is working correctly. Unit testing sits at the most basic level, as it is carried out as and when a unit of the code is developed or a particular piece of functionality is built.
Static and dynamic Analysis:
Static analysis involves going through the code in order to find out any possible defect in the code. Dynamic analysis involves executing the code and analyzing the output.
Statement Coverage:
In this type of testing the code is executed in such a manner that every statement of the application is executed at least once. It helps in assuring that all the statements execute without any side effects.
Branch Coverage:
No software application is written as one continuous sequence of statements; at some point the code must branch in order to perform a particular piece of functionality. Branch coverage testing helps validate all the branches in the code, making sure that no branch leads to abnormal behavior of the application.
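A small made-up example shows the difference between the two coverage criteria above: a single test can execute every statement, yet branch coverage still demands a second test for the untaken branch.

```python
def apply_discount(price, is_member):
    # A simple function with one branch.
    discount = 0
    if is_member:
        discount = price * 0.1
    return price - discount

# Statement coverage: this one call with is_member=True executes
# every statement in the function at least once.
assert apply_discount(100, True) == 90

# Branch coverage additionally requires a test where the condition
# is False, exercising the implicit "else" path as well.
assert apply_discount(100, False) == 100
```
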
Security Testing:
Security testing is carried out to find out how well the system can protect itself from unauthorized access, hacking, cracking, code damage, etc., at the level of the application's code. This type of testing needs sophisticated testing techniques.
Mutation Testing:
A kind of testing in which small, deliberate changes (mutants) are introduced into the application's code and the existing tests are re-run. If the tests fail, the mutant is "killed"; mutants that survive expose gaps in the test suite. It also helps in finding out which code and which coding strategy can help in developing the functionality effectively.
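A toy sketch of the idea (the function and its mutant are invented for illustration): the test suite must distinguish the original implementation from a mutated copy, otherwise the suite has a gap.

```python
def max_of(a, b):
    return a if a > b else b

def max_mutant(a, b):
    # The mutant: ">" deliberately changed to "<".
    return a if a < b else b

def suite_passes(fn):
    # A small test suite run against a given implementation.
    return fn(2, 1) == 2 and fn(1, 2) == 2 and fn(3, 3) == 3

assert suite_passes(max_of)          # the original passes
assert not suite_passes(max_mutant)  # the mutant is "killed" by the suite
```
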
Advantages of White box testing are:
i) As knowledge of the internal coding structure is a prerequisite, it becomes very easy to work out which type of input/data can test the application effectively.
ii) White box testing helps in optimizing the code.
iii) White box testing helps in removing extra lines of code, which can bring in hidden defects.
Disadvantages of white box testing are:
i) As knowledge of the code and internal structure is a prerequisite, a skilled tester is needed to carry out this type of testing, which increases dependency and, of course, cost.
ii) It is almost impossible to look into every bit of code to find hidden errors, so some problems may go unnoticed and result in failure of the application.
http://qainterviews.com/black_box_testing.htm
Q. What is Black box testing?
Black box testing is also known as functional testing. This is a software testing technique whereby the internal workings of the item being tested are not known by the tester. For example, in a black box test on a software design the tester only knows the inputs and what the expected outcomes should be and not how the program arrives at those outputs. The tester does not ever examine the programming code and does not need any further knowledge of the program other than its specifications.
Q. What are the advantages of Black box testing?
The test is unbiased because the designer and the tester are independent of each other.
The tester does not need knowledge of any specific programming languages.
The test is done from the point of view of the user, not the designer.
Test cases can be designed as soon as the specifications are complete.
Q. What are the disadvantages of Black box testing?
The test can be redundant if the software designer has already run a test case.
The test cases are difficult to design.
Testing every possible input stream is unrealistic because it would take an inordinate amount of time; therefore, many program paths will go untested.
Q. Give real time examples of Black box testing.
In this technique, we do not use the code to determine a test suite; rather, knowing the problem that we're trying to solve, we come up with four types of test data:
Easy-to-compute data
Typical data
Boundary / extreme data
Bogus data
For example, suppose we are testing a function that uses the quadratic formula to determine the two roots of a second-degree polynomial ax² + bx + c. For simplicity, assume that we are going to work only with real numbers, and print an error message if it turns out that the two roots are complex numbers (numbers involving the square root of a negative number).
We can come up with test data for each of the four cases, based on values of the polynomial's discriminant (b² - 4ac):
Easy data (discriminant is a perfect square):
a b c Roots
1 2 1 -1, -1
1 3 2 -1, -2
Typical data (discriminant is positive):
a b c Roots
1 4 1 -3.73205, -0.267949
2 4 1 -1.70711, -0.292893
Boundary / extreme data (discriminant is zero):
a b c Roots
2 -4 2 1, 1
2 -8 8 2, 2
Bogus data (discriminant is negative, or a is zero):
a b c Roots
1 1 1 square root of negative number
0 1 1 division by zero
As with glass-box testing, you should test your code with each set of test data. If the answers match, then your code passes the black-box test.
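The tables above can be run directly against a candidate implementation. The following sketch (the function name and error-handling choices are ours) encodes one check per category of test data:

```python
import math

def quadratic_roots(a, b, c):
    """Return the real roots of ax^2 + bx + c, smallest first."""
    if a == 0:
        raise ZeroDivisionError("not a quadratic: a is zero")
    disc = b * b - 4 * a * c
    if disc < 0:
        raise ValueError("roots are complex numbers")
    r1 = (-b - math.sqrt(disc)) / (2 * a)
    r2 = (-b + math.sqrt(disc)) / (2 * a)
    return sorted([r1, r2])

# Easy data (perfect-square discriminant)
assert quadratic_roots(1, 2, 1) == [-1.0, -1.0]
assert quadratic_roots(1, 3, 2) == [-2.0, -1.0]

# Typical data (positive discriminant)
r = quadratic_roots(1, 4, 1)
assert abs(r[0] - (-3.73205)) < 1e-4 and abs(r[1] - (-0.267949)) < 1e-4

# Boundary / extreme data (zero discriminant)
assert quadratic_roots(2, -4, 2) == [1.0, 1.0]

# Bogus data raises instead of returning nonsense
for bad in [(1, 1, 1), (0, 1, 1)]:
    try:
        quadratic_roots(*bad)
        raise AssertionError("expected an exception")
    except (ValueError, ZeroDivisionError):
        pass
```
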
Q. Describe the Black box testing techniques.
Equivalence partitioning
Boundary value analysis
State transition tables
Decision table testing
Pairwise testing
Error Guessing.
Details of each types of testing are given below: (Note: Some of the definitions and examples have been borrowed from Wikipedia)
Equivalence partitioning:
Equivalence partitioning is a black box testing technique with the goal:
To reduce the number of test cases to a necessary minimum.
To select the right test cases to cover all possible scenarios.
Although in rare cases equivalence partitioning is also applied to the outputs of a software component, typically it is applied to the inputs of a tested component. The equivalence partitions are usually derived from the specification of the component's behavior. An input has certain ranges which are valid and other ranges which are invalid. This is best explained with the following example of a function which takes the parameter "month" of a date. The valid range for the month is 1 to 12, standing for January to December. This valid range is called a partition. In this example there are two further partitions of invalid ranges. The first invalid partition would be <= 0 and the second invalid partition would be >= 13.
... -2 -1 0 1 .............. 12 13 14 15 .....
------------------------------------------------------
invalid partition 1 valid partition invalid partition 2
The testing theory related to equivalence partitioning says that only one test case of each partition is needed to evaluate the behaviour of the program for the related partition. In other words it is sufficient to select one test case out of each partition to check the behaviour of the program. To use more or even all test cases of a partition will not find new faults in the program. The values within one partition are considered to be "equivalent". Thus the number of test cases can be reduced considerably.
An additional effect by applying this technique is that you also find the so called "dirty" test cases. An inexperienced tester may be tempted to use as test cases the input data 1 to 12 for the month and forget to select some out of the invalid partitions. This would lead to a huge number of unnecessary test cases on the one hand, and a lack of test cases for the dirty ranges on the other hand.
The tendency is to relate equivalence partitioning to so called black box testing which is strictly checking a software component at its interface, without consideration of internal structures of the software. But having a closer look at the subject there are cases where it applies to grey box testing as well. Imagine an interface to a component which has a valid range between 1 and 12 like in the example above. However internally the function may have a differentiation of values between 1 and 6 and the values between 7 and 12. Depending on the input value the software internally will run through different paths to perform slightly different actions. Regarding the input and output interfaces to the component this difference will not be noticed, however in your grey-box testing you would like to make sure that both paths are examined. To achieve this it is necessary to introduce additional equivalence partitions which would not be needed for black-box testing. For this example this would be:
... -2 -1 0 1 ..... 6 7 ..... 12 13 14 15 .....
------------------------------------------------------
invalid partition 1 P1 P2 invalid partition 2
valid partitions
To check for the expected results you would need to evaluate some internal intermediate values rather than the output interface.
Equivalence partitioning is no stand-alone method to determine test cases. It has to be supplemented by boundary value analysis. Having determined the partitions of possible inputs, the method of boundary value analysis has to be applied to select the most effective test cases out of these partitions.
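For the month example above, one test value per partition suffices. A minimal sketch (the validator function is our own, standing in for the component under test):

```python
def is_valid_month(month: int) -> bool:
    """Valid months are 1..12, per the partition diagram above."""
    return 1 <= month <= 12

# One representative per partition is enough, per the theory above:
assert is_valid_month(-5) is False   # invalid partition 1 (<= 0)
assert is_valid_month(6) is True     # valid partition (1..12)
assert is_valid_month(20) is False   # invalid partition 2 (>= 13)
```
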
Boundary value analysis:
Boundary value analysis is a black box testing technique for determining test cases that cover off-by-one errors. The boundaries of a software component's input ranges are areas of frequent problems.
Testing experience has shown that especially the boundaries of input ranges to a software component are liable to defects. A programmer implementing e.g. the range 1 to 12 at an input, which stands for the months January to December in a date, has in his code a line checking for this range. This may look like:
if (month > 0 && month < 13)
A typical off-by-one fault would be to write e.g. if (month >= 0 && month < 13), which wrongly accepts 0 as a valid month. For more complex range checks in a program, such a problem is not as easily spotted as in the above simple example.
Applying boundary value analysis:
To set up boundary value analysis test cases, the tester first has to determine which boundaries are at the interface of a software component. This has to be done by applying the equivalence partitioning technique; boundary value analysis and equivalence partitioning are inevitably linked together. For the example of the month in a date, we would have the following partitions:
... -2 -1 0 1 .............. 12 13 14 15 .....
------------------------------------------------------
invalid partition 1 valid partition invalid partition 2
Applying boundary value analysis a test case at each side of the boundary between two partitions has to be selected. In the above example this would be 0 and 1 for the lower boundary as well as 12 and 13 for the upper boundary. Each of these pairs consists of a "clean" and a "negative" test case. A "clean" test case should give a valid operation result of the program. A "negative" test case should lead to a correct and specified input error treatment such as the limiting of values, the usage of a substitute value, or in case of a program with a user interface, it has to lead to warning and request to enter correct data. The boundary value analysis can have 6 test cases: n, n-1, and n+1 for the upper limit; and n, n-1, and n+1 for the lower limit.
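Continuing the month example (with the same hypothetical validator as in the equivalence-partitioning sketch), the test cases on each side of the two boundaries are:

```python
def is_valid_month(month: int) -> bool:
    """Valid months are 1..12."""
    return 1 <= month <= 12

# One "negative" and one "clean" case at each boundary:
assert is_valid_month(0) is False    # just below the lower boundary
assert is_valid_month(1) is True     # lower boundary itself
assert is_valid_month(12) is True    # upper boundary itself
assert is_valid_month(13) is False   # just above the upper boundary
```
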
A further set of boundaries has to be considered when test cases are set up. A solid testing strategy also has to consider the natural boundaries of the data types used in the program. If working with signed values, for example, this may be the range around zero (-1, 0, +1). Similar to the typical range check faults, there tend to be weaknesses in programs in this range. e.g. this could be a division by zero problem where a zero value may occur although the programmer always thought the range started at 1. It could be a sign problem when a value turns out to be negative in some rare cases, although the programmer always expected it to be positive. Even if this critical natural boundary is clearly within an equivalence partition it should lead to additional test cases checking the range around zero. A further natural boundary is the natural lower and upper limit of the data type itself. E.g. an unsigned 8-bit value has the range of 0 to 255. A good test strategy would also check how the program reacts at an input of -1 and 0 as well as 255 and 256.
The tendency is to relate boundary value analysis more to so called black box testing, which is strictly checking a software component at its interfaces, without consideration of internal structures of the software. But looking closer at the subject, there are cases where it applies also to white box testing.
After determining the necessary test cases with equivalence partitioning and subsequent boundary value analysis, it is necessary to define the combinations of the test cases when there are multiple inputs to a software component.
State Transition table:
In automata theory and sequential logic, a state transition table is a table showing what state (or states, in the case of a nondeterministic finite automaton) a finite semiautomaton or finite state machine will move to, based on the current state and other inputs. A state table is essentially a truth table in which some of the inputs are the current state, and the outputs include the next state, along with other outputs.
A state table is one of many ways to specify a state machine, other ways being a state diagram, and a characteristic equation.
One-dimensional state tables
Also called characteristic tables, single-dimension state tables are much more like truth tables than the two-dimensional versions. Inputs are usually placed on the left, and separated from the outputs, which are on the right. The outputs will represent the next state of the machine. Here's a simple example of a state machine with two states, and two combinatorial inputs:
A B Current State Next State Output
0 0 S1 S2 1
0 0 S2 S1 0
0 1 S1 S2 0
0 1 S2 S2 1
1 0 S1 S1 1
1 0 S2 S1 1
1 1 S1 S1 1
1 1 S2 S2 0
S1 and S2 would most likely represent the single bits 0 and 1, since a single bit can only have two states.
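Encoded as a lookup table, the machine above can be exercised row by row; a sketch in Python:

```python
# The one-dimensional state table above, keyed by
# (input A, input B, current state) -> (next state, output):
TABLE = {
    (0, 0, "S1"): ("S2", 1),
    (0, 0, "S2"): ("S1", 0),
    (0, 1, "S1"): ("S2", 0),
    (0, 1, "S2"): ("S2", 1),
    (1, 0, "S1"): ("S1", 1),
    (1, 0, "S2"): ("S1", 1),
    (1, 1, "S1"): ("S1", 1),
    (1, 1, "S2"): ("S2", 0),
}

def step(a, b, state):
    """Apply one transition from the table."""
    return TABLE[(a, b, state)]

# Drive the machine through a couple of rows of the table:
state, out = step(0, 0, "S1")   # row 1: S1 -> S2, output 1
assert (state, out) == ("S2", 1)
state, out = step(1, 0, state)  # row 6: S2 -> S1, output 1
assert (state, out) == ("S1", 1)
```

A tester can verify every row of the specification table against the implementation this way, one transition per test case.
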
Two-dimensional state tables
State transition tables are typically two-dimensional tables. There are two common forms for arranging them.
The vertical (or horizontal) dimension indicates current states, the horizontal (or vertical) dimension indicates events, and the cells (row/column intersections) in the table contain the next state if an event happens (and possibly the action linked to this state transition).
State Transition Table Events
State E1 E2 ... En
S1 - Ay/Sj ... -
S2 - - ... Ax/Si
... ... ... ... ...
Sm Az/Sk - ... -
(S: state, E: event, A: action, -: illegal transition)
The vertical (or horizontal) dimension indicates current states, the horizontal (or vertical) dimension indicates next states, and the row/column intersections contain the event which will lead to a particular next state.
State Transition Table next
current S1 S2 ... Sm
S1 Ay/Ej - ... -
S2 - - ... Ax/Ei
... ... ... ... ...
Sm - Az/Ek ... -
(S: state, E: event, A: action, -: impossible transition)
Decision tables:
Decision tables are a precise yet compact way to model complicated logic. Decision tables, like if-then-else and switch-case statements, associate conditions with actions to perform. But, unlike the control structures found in traditional programming languages, decision tables can associate many independent conditions with several actions in an elegant way
Decision tables are typically divided into four quadrants, as shown below.
The four quadrants:
Conditions | Condition alternatives
Actions    | Action entries
Each decision corresponds to a variable, relation or predicate whose possible values are listed among the condition alternatives. Each action is a procedure or operation to perform, and the entries specify whether (or in what order) the action is to be performed for the set of condition alternatives the entry corresponds to. Many decision tables include in their condition alternatives the don't care symbol, a hyphen. Using don't cares can simplify decision tables, especially when a given condition has little influence on the actions to be performed. In some cases, entire conditions thought to be important initially are found to be irrelevant when none of the conditions influence which actions are performed.
Aside from the basic four quadrant structure, decision tables vary widely in the way the condition alternatives and action entries are represented. Some decision tables use simple true/false values to represent the alternatives to a condition (akin to if-then-else), other tables may use numbered alternatives (akin to switch-case), and some tables even use fuzzy logic or probabilistic representations for condition alternatives. In a similar way, action entries can simply represent whether an action is to be performed (check the actions to perform), or in more advanced decision tables, the sequencing of actions to perform (number the actions to perform).
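A toy limited-entry decision table (the conditions and actions are invented for illustration) can be represented directly as data, keeping the condition alternatives separate from the action entries:

```python
# Conditions (left quadrants) are the tuple keys; action entries
# (right quadrants) are the values. True/False are the condition
# alternatives, as in a simple limited-entry decision table.
RULES = {
    # (is_member, order_over_100): action to perform
    (True,  True):  "apply 15% discount",
    (True,  False): "apply 10% discount",
    (False, True):  "apply 5% discount",
    (False, False): "no discount",
}

def decide(is_member, order_over_100):
    """Look up the action for one combination of condition alternatives."""
    return RULES[(is_member, order_over_100)]

assert decide(True, True) == "apply 15% discount"
assert decide(False, False) == "no discount"
```
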
Pair wise testing:
All-pairs testing or pairwise testing is a combinatorial software testing method that, for each pair of input parameters to a system (typically, a software algorithm), tests all possible discrete combinations of those parameters. Using carefully chosen test vectors, this can be done much faster than an exhaustive search of all combinations of all parameters, by "parallelizing" the tests of parameter pairs. The number of tests is typically O(nm), where n and m are the number of possibilities for each of the two parameters with the most choices.
The reasoning behind all-pairs testing is this: the simplest bugs in a program are generally triggered by a single input parameter. The next simplest category of bugs consists of those dependent on interactions between pairs of parameters, which can be caught with all-pairs testing. Bugs involving interactions between three or more parameters are progressively less common, whilst at the same time being progressively more expensive to find by exhaustive testing, which has as its limit the exhaustive testing of all possible inputs.
Many testing methods regard all-pairs testing of a system or subsystem as a reasonable cost-benefit compromise between often computationally infeasible higher-order combinatorial testing methods, and less exhaustive methods which fail to exercise all possible pairs of parameters. Because no testing technique can find all bugs, all-pairs testing is typically used together with other quality assurance techniques such as unit testing, symbolic execution, fuzz testing, and code review.
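As a sketch of the cost-benefit claim, the helper below (parameter names and values are made up) checks whether a candidate suite covers every pair of parameter values; here six tests cover all pairs, where exhaustive testing would need twelve:

```python
from itertools import combinations, product

# Hypothetical parameters: browser, OS, and user type.
params = {
    "browser": ["IE", "Firefox", "Opera"],
    "os":      ["Windows", "Linux"],
    "user":    ["guest", "member"],
}

def uncovered_pairs(suite, params):
    """Return the parameter-value pairs not exercised by any test."""
    names = list(params)
    required = set()
    for p1, p2 in combinations(names, 2):
        for v1, v2 in product(params[p1], params[p2]):
            required.add(((p1, v1), (p2, v2)))
    covered = set()
    for test in suite:
        for p1, p2 in combinations(names, 2):
            covered.add(((p1, test[p1]), (p2, test[p2])))
    return required - covered

# Exhaustive testing would need 3 * 2 * 2 = 12 tests; this 6-test
# suite already covers every pair of parameter values:
suite = [
    {"browser": "IE",      "os": "Windows", "user": "guest"},
    {"browser": "IE",      "os": "Linux",   "user": "member"},
    {"browser": "Firefox", "os": "Windows", "user": "member"},
    {"browser": "Firefox", "os": "Linux",   "user": "guest"},
    {"browser": "Opera",   "os": "Windows", "user": "guest"},
    {"browser": "Opera",   "os": "Linux",   "user": "member"},
]
assert uncovered_pairs(suite, params) == set()
```
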
Error Guessing:
Error guessing is a test case design technique where the experience of the tester is used to postulate what faults might occur, and to design tests specifically to expose them. This is a Test data selection technique. The selection criterion is to pick values that seem likely to cause errors.
http://qainterviews.com/telecom_testing.htm
http://qainterviews.com/test_strategy.htm
http://qainterviews.com/stress_testing.htm
Stress testing
Stress testing is subjecting a system to an unreasonable load while denying it the resources (e.g., RAM, disc, mips, interrupts etc.) needed to process that load. The idea is to stress a system to the breaking point in order to find bugs that will make that break potentially harmful. The system is not expected to process the overload without adequate resources, but to behave (e.g., fail) in a decent manner (e.g., not corrupting or losing data). Bugs and failure modes discovered under stress testing may or may not be repaired depending on the application, the failure mode, consequences, etc. The load (incoming transaction stream) in stress testing is often deliberately distorted so as to force the system into resource depletion.
Stress testing tries to break the system under test by overwhelming its resources or by taking resources away from it. The main purpose behind stress testing is to make sure that the system fails and recovers gracefully, in other words stress testing determines the recoverability of the application.
Stress testing deliberately induces chaos and unpredictability. To take the example of a Web application, here are some ways in which stress can be applied to the system:
- Double the number for concurrent users/HTTP connections
- Randomly shut down and restart ports on the network switches/routers that connect the servers (via SNMP commands for example)
- Offline the database, then restart it
- Rebuild a RAID array while the system is running
- Run processes that consume resources (CPU, memory, disk, network) on the Web and database servers
Stress testing does not break the system purely for the sake of breaking it, but instead it allows testers to observe how the system reacts to failure and how it recovers sanely after the failure. Does it save its state or does it crash suddenly? Does it just hang and freeze or does it fail gracefully? On restart, is it able to recover from the last good state? Does it print out meaningful error messages to the user? Is the security of the system compromised because of unexpected failures?
http://qainterviews.com/testing_methodologies.htm
Testing Methodologies
Below are the testing methodologies followed in Software Testing:
Black - Box Testing
In this strategy, the tester views the program as a black box and does not see the program's code: equivalence partitioning, boundary-value analysis, error guessing.
White - Box Testing
In this strategy, the tester examines the internal structure of the program: statement coverage, decision coverage, condition coverage, decision/condition coverage, multiple-condition coverage.
Gray - Box Testing
In this strategy, black box testing is combined with knowledge of database validation, such as using SQL for database queries and adding/loading data sets to confirm functions, as well as querying the database to confirm expected results.
Test Script
A type of test file: a set of instructions run automatically by a software or hardware test tool.
http://qainterviews.com/web_testing.htm
http://qainterviews.com/test_metrics.htm
Test Metrics
Metrics are the means by which software quality can be measured; they give you confidence in the product. They can be considered product management indicators, either quantitative or qualitative, and they typically provide the visibility you need.
Defect Removal Efficiency:
DRE = (Defects removed during a development phase/Defects latent in the product at that phase) x 100%
Since the number of latent defects in a software product is unknown at any point in time, it is approximated by adding the number of defects removed during the phase to the number of defects found later (but that existed during that phase).
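A minimal sketch of the DRE formula above, using the approximation just described; the defect counts are invented for illustration:

```python
def dre(removed_in_phase, found_later):
    # Latent defects are approximated as: defects removed during the
    # phase + defects found later that existed during the phase.
    latent = removed_in_phase + found_later
    return removed_in_phase / latent * 100

# E.g. 80 defects removed during system testing, 20 found after release:
assert dre(80, 20) == 80.0   # DRE of 80%
```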
Defect density:
Defect Density is a measure of the total known defects divided by the size of the software entity being measured.
Defect Density= (Number of Known Defects/Size)
The Number of Known Defects is the count of total defects identified against a particular software entity during a particular time period. Examples include:
· defects to date since the creation of the module
· defects found in a program during an inspection
· defects to date since the shipment of a release to the customer
Defect severity index:
An index representing the average of the severity of the defects. This provides a direct measurement of the quality of the product—specifically, reliability, fault tolerance and stability.
Two measures are required to compute the defect severity index. A number is assigned against each severity level: 4 (Critical), 3 (Serious), 2 (Medium), 1 (Low). Multiply each remark by its severity level number and add the totals; divide this by the total number of defects to determine the defect severity index.
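A worked example of the severity index calculation above; the defect counts are invented for illustration:

```python
# Severity weights as given above: 4 (Critical), 3 (Serious),
# 2 (Medium), 1 (Low).
SEVERITY = {"Critical": 4, "Serious": 3, "Medium": 2, "Low": 1}

def defect_severity_index(defects):
    """defects: list of severity labels, one per reported defect."""
    total = sum(SEVERITY[d] for d in defects)
    return total / len(defects)

# 1 Critical + 2 Serious + 3 Medium + 4 Low
# = (4 + 6 + 6 + 4) / 10 defects = 2.0
defects = ["Critical"] + ["Serious"] * 2 + ["Medium"] * 3 + ["Low"] * 4
assert defect_severity_index(defects) == 2.0
```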
Test coverage:
Defined as the extent to which testing covers the product’s complete functionality. This metric is an indication of the completeness of the testing. It does not indicate anything about the effectiveness of the testing. This can be used as a criterion to stop testing. Coverage could be with respect to requirements, functional topic list, business flows, use cases, etc. It can be calculated based on the number of items that were covered vs. the total number of items.
Test case effectiveness:
Defined as the extent to which test cases are able to find defects. This metric provides an indication of the effectiveness of the test cases and the stability of the software.
Test case effectiveness = (Number of test cases that found defects/ Total number of test cases)
Defects per KLOC:
Defined as the number of defects per 1,000 lines of code. This metric indicates the quality of the product under test. It can be used as a basis for estimating defects to be addressed in the next phase or the next version.
Defects per KLOC= (Number of defects found/Total kilo lines of code)
Software Testing Life Cycle
Software Testing Life Cycle consists of seven phases:
Test Planning
Test Strategy
Test Design
Verification
Validation Cycles
Final Testing and Implementation
Post Implementation
Test Planning:
This is the phase where the Project Manager decides what needs to be tested, what budget is allocated for the testing activity, what time period is allocated for testing, and so on. These factors have to be planned properly during Test Planning.
Activities at this stage include preparation of a high-level test plan. According to the IEEE test plan template, the Software Test Plan is designed to address the scope, approach, resources, and schedule of all testing activities. The test plan must identify the items to be tested, the features to be tested, the types of testing to be performed, the personnel responsible for testing, the resources and schedule required to complete testing, and the risks associated with the plan. Almost all of the activities done during this stage are included in this software test plan and revolve around it.
Test Strategy:
Once the test plan is made and decided upon, the next step is to delve a little deeper into the project and decide what types of testing should be carried out at different stages of the SDLC, whether we need or plan to automate (and if so, when the appropriate time to automate is), and what specific documentation is needed for testing.
Proper and regular meetings should be held between the testing team, project managers, development team, and business analysts to check the progress of things. These give a fair idea of the movement of the project, ensure the completeness of the test plan created in the planning phase, and help refine the testing strategy created earlier. We also start creating test case formats and the test cases themselves. In this stage we need to develop a functional validation matrix based on the business requirements to ensure that all system requirements are covered by one or more test cases, identify which test cases to automate, and begin review of documentation, i.e. Functional Design, Business Requirements, Product Specifications, Product Externals, etc. We also have to define areas for stress and performance testing.
Test Design:
Test plans and cases which were developed in the analysis phase are revised. The functional validation matrix is also revised and finalized. In this stage risk assessment criteria are developed. If you plan to automate, you have to select which test cases to automate and begin writing scripts for them. Test data is prepared. Standards for unit testing and pass/fail criteria are defined here. The schedule for testing is revised (if necessary) and finalized, and the test environment is prepared.
Verification:
In this phase we have to complete all the test plans and test cases, complete the scripting of the automated test cases, and complete the stress and performance testing plans. We have to support the development team in their unit testing phase, and bugs are reported as and when they are found. Integration tests are performed and errors (if any) are reported.
Validation Cycles:
In this phase we have to execute all the test cases that were planned in the Test Planning and Verification phases. This includes both manual and automated test case execution. Whatever defects/bugs are found have to be reported using a bug tracking tool like Bugzilla or Test Director. This validation process occurs in cycles. After getting the build from the developers, the testers execute the test cases and report the defects. The developers then fix the bugs and deliver a new build. The testers pick up the new build and test it for the bug fixes as well as for regressions. This process continues for a few cycles until the build becomes stable enough to stop testing.
Final Testing and Implementation:
In this phase we have to execute the remaining stress and performance test cases, complete or update the testing documentation, and provide and complete the different metrics for testing. Acceptance, load, and recovery testing will also be conducted, and the application needs to be verified under production conditions.
Post Implementation:
In this phase, the testing process is evaluated and lessons learnt from it are documented. Approaches to prevent similar problems in future projects are identified, and plans are created to improve the processes. The recording of new errors and enhancements is an ongoing process. The test environment is cleaned up and test machines are restored to their baselines in this stage.
Given below is a tabular representation of the Testing Life Cycle.
Software Testing Life Cycle
Phase | Activities | Outcome
Planning | Create high-level test plan | Test plan, refined specification
Analysis | Create detailed test plan, functional validation matrix, test cases | Revised test plan, functional validation matrix, test cases
Design | Revise test cases; select which test cases to automate | Revised test cases, test data sets, risk assessment sheet
Construction | Script the test cases to be automated | Test procedures/scripts, drivers, test results, bug reports
Testing cycles | Complete testing cycles | Test results, bug reports
Final testing | Execute remaining stress and performance tests; complete documentation | Test results and different metrics on test efforts
Post implementation | Evaluate testing processes | Plan for improvement of testing process
Tips for a Software Testing Resume
Let's check the sample software testing resume below:
Always use MS Word or a rich text editor to write your resume; good fonts are Verdana 10 or Arial 11.
Your Name
1234 Commonwealth Ave
Boston, USA-02134
1-234-567-0171 (Home)
1-234-567-0171 (Cell)
Youremail@gmail.com
Summary: State this clearly in bulleted points. This will make your achievements stand out.
Example:
• Has been working in the IT industry for more than eight and a half years, of which over four years have been in the SQA and testing arena.
• Has the ability and experience to understand the architecture and life cycle of a project in depth, and would be an ideal candidate for process-oriented black-box as well as white-box testing and team management.
• Has in-depth knowledge of the processes and procedures needed in a professional performance management environment; well versed in testing methodologies such as data-driven tests, regression tests, and code coverage, as well as Mercury Interactive's proprietary Test Script Language (TSL).
Skills/Tools:
The list below is a fairly comprehensive set of skills. Never list skills you do not have, or too many of them, particularly in a fresher software testing resume, in the hope that people won't be able to catch it. Once you are hired, a falsehood on your resume can be grounds for termination. If your resume is examined as part of a promotion review, you could lose your job if someone finds a lie. Or, if your employer wants an excuse to fire you, they could investigate details on your resume in the hope of finding one.
• Testing Tools: Win Runner 6.0/5.0, Load Runner 6.0/5.0, Silk Test 5.03, Silk Performer 3.5, Silk Test Radar 2.1, Test Track 5.0
• Languages: Java, C++, C and Power Builder 4.0
• Internet: ASP, Java Script, VB Script, HTML, XML and DHTML
• GUI: Visual Basic 6.0/5.0/4.0/3.0, Oracle Forms 6i/4.5, Reports 6i/2.5, Crystal Reports 8.0/6.0
• Statistical Package: SAS
• Web Servers: IIS 5.0/4.0,Vignette Story Server, Web logic Server, SQL Server 7.0/6.0
• Middleware: COM/DCOM, MTS
• Trackers: PVCS Tracker, SQA Manager and Test Director
• IDE’s: Visual Interdev, Home Site and Cold Fusion RDBMS: SQL Server 7.0/6.5, Oracle 8i/7.x/6.0, MS-Access
• Operating Systems: Macintosh, Unix, Windows XP/2000/NT/98/95
Experience:
Company Name 1 and web site: It is good practice to include the web site address so that people can find it quickly. Give the name of the latest company first.
May 2001 – Till date
Project # 1 : Customization, development and maintenance of the web enabled fee based portfolio management solutions.
Clients: Give the name of the client
Duration: From September 2001.
Role: QA Lead
Technology: Development: ASP, JavaScript, HTML, Java, .NET, COM, SQL Server, etc.
Testing: WinRunner, LoadRunner, Astra QuickTest, LoadTest, PVCS Tracker, Notify, Visual Source Safe, CSE HTML Validator, etc.
Description
• Leading a team of 11 testers for various clients.
• Planning and implementation of tests for various projects at various stages.
• Generation of manual and automated test plans and test cases.
• Preparation of Automated Regression Test Suites for multiple projects.
• Execution of automated regression tests on a day-to-day basis.
• Assisting team members in generation of WinRunner, QuickTest scripts.
• Administration of on-site and local installations of PVCS Tracker for defect tracking.
• Performance testing using LoadRunner and Astra LoadTest.
• Reporting at multiple levels for multiple projects.
• Training members of QA Team in WinRunner, TSL, QuickTest and LoadTest.
• Assisting QA Consultants for the CMMI compliance activities including Gap Analysis, Internal Audit, etc.
Similarly, put project # 2 and so on. You can also use this format:
Company: Company name.
Web site:
Role: Team Leader
Duration: January 1999 to March 2001.
Responsibilities:
• Leading teams of developers/designers/testers, system analysis, and quality assurance.
• Designing, Planning, Executing White Box and Black Box tests.
• Planning and scheduling multiple projects as well as overall administration and management.
• Development of various e-commerce web sites.
• Administration and Maintenance of Windows 2000 Advanced Server based production network and MS SQL Server based database.
• Maintenance of web sites including content management, online database administration using MS SQL Server Enterprise Manager and performance analysis.
• Remote administration of collocated web server, Windows 2000/IIS5, through PcAnywhere.
• Remote administration of LINUX/Apache server through SSH.
Education:• Master in Computer Applications (MCA)
• PG Diploma in Computer Application (PGDCA)
Personal Information:
Age and Date of Birth:
Sex:
Marital status:
Nationality:
Place of Birth:
Religion:
Permanent Address:
Passport Details:
Passport Number:
Date of Issue:
Date of Expiration:
Place of Issue:
Conclusion: Do not make your resume too long; try to keep it to around three pages. If it is a fresher software testing resume, keep it to one or two pages. If it runs a little over one page, try to fit it into one page with a little editing. If you have any question or want me to personally review your resume, just send it to support@softwareqatestings.com
5 Habits to become a Good Software Tester
1. Attitude Matters: In any kind of effort, attitude is always the key to success. An inborn passion for what you do defines your interest in and enthusiasm for your work. Successful and efficient software testers often describe themselves as tough, rigorous, uncompromising and firm, and have a passion for analysis and evaluation. In general, passion is a natural quality in a person rather than an acquired characteristic.
2. Intellectual and technical capacity: A good tester should have good (perhaps above average) analytical and logical ability. He must be able to cope with complex logic and perform at a high level in this kind of setting; in short, a good software tester must be intelligent. Apart from intellectual ability, a good test engineer must have an excellent background in coding in order to grasp the concepts of the system being tested.
3. Flexibility: The ability to adapt and the willingness to learn are among the most essential qualities of a good software tester. He must have an inclination to be a continual learner and a dedication to upgrading and developing his skills. For instance, the Visual Basic platform changed dramatically from VB6 to VB.NET; a good software tester must be flexible enough to cope with such changes in technology.
4. Communication Skills: An effective test engineer must have strong written and verbal communication skills. He must be able to listen critically, speak clearly, and convey his message rationally in person and in meetings. A good software tester must be able to read and analyze product documentation, write test plans, write clear bug reports, and write coherent status reports to management, both in official reports and in ad-hoc reports over email.
5. Business sense: It is important for a good software tester to have good business sense. He must have the ability to see the larger picture of a company's overall business strategy. This allows a software tester to participate actively at a higher level than just an individual contributor.