ViOlympic Math exams for Grade 5: http://violympic.vn/Default.aspx
Reference:
http://buuduc.blogspot.com/2010/03/e-thi-va-bai-giai-violympic-lop-5-phan.html

Software Testing Glossary

Acceptance testing: Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system. [As per IEEE 610]



Accessibility testing: Testing to determine the ease by which users with disabilities can use a component or system.



Ad Hoc Testing: Testing carried out using no recognized test case design technique.



Agile testing: Testing practice for a project using agile methodologies, such as extreme programming (XP), treating development as the customer of testing and emphasizing the test-first design paradigm.



Alpha testing: Simulated or actual operational testing by potential users/customers or an independent test team at the developers’ site, but outside the development organization. Alpha testing is often employed as a form of internal acceptance testing.



Assertion Testing: A dynamic analysis technique which inserts assertions about the relationship between program variables into the program code. The truth of the assertions is determined as the program executes.

--------------------------------------------------------------------------------
Back-to-back testing: Testing in which two or more variants of a component or system are executed with the same inputs, the outputs compared, and analyzed in cases of discrepancies. [IEEE 610]

Background testing: The execution of normal functional testing while the SUT is exercised by a realistic workload. This workload is processed "in the background" as far as the functional testing is concerned.

Benchmarks: Programs that provide performance comparison for software, hardware, and systems.

Benchmarking: A specific type of performance test whose purpose is to determine performance baselines for comparison.

Beta testing: Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes. Beta testing is often employed as a form of external acceptance testing in order to acquire feedback from the market.

Big-bang testing: Integration testing where no incremental testing takes place prior to all the system's components being combined to form the system.

Black box testing: Testing, either functional or non-functional, without reference to the internal structure of the component or system.

Breadth test: A test suite that exercises the full scope of a system from a top-down perspective, but does not test any aspect in detail.

Bottom-up testing: An incremental approach to integration testing where the lowest level components are tested first, and then used to facilitate the testing of higher level components. This process is repeated until the component at the top of the hierarchy is tested. See also integration testing.

Boundary value: An input value or output value which is on the edge of an equivalence partition or at the smallest incremental distance on either side of an edge, for example the minimum or maximum value of a range.

Boundary value analysis: A black box test design technique in which test cases are designed based on boundary values.
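
To make the idea concrete, here is a minimal sketch in Python, assuming a hypothetical validate_age() function that accepts ages from 18 to 65 inclusive (the function and the range are invented for illustration):

def validate_age(age: int) -> bool:
    """Hypothetical system under test: accepts ages 18..65 inclusive."""
    return 18 <= age <= 65

# Boundary values: the edges of the valid partition and the values just outside them.
boundary_cases = {
    17: False,  # just below the lower boundary
    18: True,   # lower boundary
    19: True,   # just above the lower boundary
    64: True,   # just below the upper boundary
    65: True,   # upper boundary
    66: False,  # just above the upper boundary
}

for age, expected in boundary_cases.items():
    assert validate_age(age) == expected, f"age={age}"
print("all boundary cases passed")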

Branch coverage: The percentage of branches that have been exercised by a test suite. 100% branch coverage implies both 100% decision coverage and 100% statement coverage.
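
A small illustrative sketch of that relationship, using an invented apply_discount() function: a single test can reach 100% statement coverage while exercising only one outcome of the decision, whereas branch coverage requires both outcomes.

def apply_discount(price, is_member):
    if is_member:
        price = price * 0.9   # the only statement inside the decision
    return price

# apply_discount(100, True) alone executes every statement (100% statement coverage)
# but only the True branch of the decision.
assert apply_discount(100, True) == 90.0

# Adding the False case exercises the other branch, reaching 100% branch coverage.
assert apply_discount(100, False) == 100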

Branch testing: A white box test design technique in which test cases are designed to execute branches.

Business process-based testing: An approach to testing in which test cases are designed based on descriptions and/or knowledge of business processes.
--------------------------------------------------------------------------------
Capability Maturity Model (CMM): A five level staged framework that describes the key elements of an effective software process. The Capability Maturity Model covers practices for planning, engineering and managing software development and maintenance. [CMM]

Capability Maturity Model Integration (CMMI): A framework that describes the key elements of an effective product development and maintenance process. The Capability Maturity Model Integration covers practices for planning, engineering and managing product development and maintenance. CMMI is the designated successor of the CMM. [CMMI]

CAST: Acronym for Computer Aided Software Testing. See also test automation.

Cause-effect graphing: A black box test design technique in which test cases are designed from cause-effect graphs. [BS 7925/2]

Clean test: A test whose primary purpose is validation; that is, tests designed to demonstrate the software's correct working. (Syn: positive test)

Code coverage: An analysis method that determines which parts of the software have been executed (covered) by the test suite and which parts have not been executed, e.g. statement coverage, decision coverage or condition coverage.

Code Inspection: A manual [formal] testing [error detection] technique where the programmer reads source code, statement by statement, to a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards. Contrast with code audit, code review, code walkthrough. This technique can also be applied to other software and configuration items. Syn: Fagan Inspection

Code Walkthrough: A manual testing [error detection] technique where program [source code] logic [structure] is traced manually [mentally] by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions. Contrast with code audit, code inspection, code review.

Coexistence Testing: Testing whether applications can coexist on the same platform without interfering with each other. Coexistence alone isn't enough: behavior also depends on load order, how virtual space is mapped at the moment, hardware and software configurations, and the history of what took place hours or days before. It's probably an exponentially hard problem rather than a square-law problem.

Compatibility bug: A revision to the framework breaks a previously working feature: a new feature is inconsistent with an old feature, or a new feature breaks an unchanged application rebuilt with the new framework code.

Compatibility Testing. The process of determining the ability of two or more systems to exchange information. In a situation where the developed software replaces an already working program, an investigation should be conducted to assess possible compatibility problems between the new software and other programs or systems.

Compliance testing: The process of testing to determine the compliance of a component or system.

Concurrency testing: Testing to determine how the occurrence of two or more activities within the same interval of time, achieved either by interleaving the activities or by simultaneous execution, is handled by the component or system. [After IEEE 610]

Condition Coverage. A test coverage criteria requiring enough test cases such that each condition in a decision takes on all possible outcomes at least once, and each point of entry to a program or subroutine is invoked at least once. Contrast with branch coverage, decision coverage, multiple condition coverage, path coverage, statement coverage

Configuration management: A discipline applying technical and administrative direction and surveillance to: identify and document the functional and physical characteristics of a configuration item, control changes to those characteristics, record and report change processing and implementation status, and verify compliance with specified requirements. [IEEE 610]


Conformance directed testing. Testing that seeks to establish conformance to requirements or specification.

CRUD Testing. Build CRUD matrix and test all object creation, reads, updates, and deletion.

Cyclomatic Complexity: The number of independent paths through a program. Cyclomatic complexity is defined as L - N + 2P, where L = the number of edges/links in the graph, N = the number of nodes in the graph, and P = the number of disconnected parts of the graph (e.g. a calling graph and a subroutine). [After McCabe]
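
As an illustrative, made-up example (not from McCabe's text): a routine whose control flow graph has 9 edges (L = 9) and 8 nodes (N = 8) and forms a single connected graph (P = 1) has cyclomatic complexity 9 - 8 + 2(1) = 3, i.e. three linearly independent paths through the routine.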

--------------------------------------------------------------------------------
Data-Driven testing: An automation approach in which the navigation and functionality of the test script is directed through external data; this approach separates test and control data from the test script.
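
A minimal sketch of the approach using pytest, assuming a hypothetical login() function and a hypothetical external file login_data.csv with columns username, password, expected (neither exists in the original text):

import csv
import pytest

def login(username: str, password: str) -> str:
    """Hypothetical function under test."""
    return "ok" if (username, password) == ("admin", "secret") else "fail"

def load_rows(path="login_data.csv"):
    # The test data lives outside the script, so new cases need no code changes.
    with open(path, newline="") as f:
        return [(r["username"], r["password"], r["expected"]) for r in csv.DictReader(f)]

@pytest.mark.parametrize("username,password,expected", load_rows())
def test_login(username, password, expected):
    # The script stays the same; behaviour is driven entirely by the external data.
    assert login(username, password) == expected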

Data flow testing: Testing in which test cases are designed based on variable usage within the code.


Database testing. Check the integrity of database field values.

Debugging: The process of finding, analyzing and removing the causes of failures in software.

Decision table: A table showing combinations of inputs and/or stimuli (causes) with their associated outputs and/or actions (effects), which can be used to design test cases.

Decision table testing: A black box test design technique in which test cases are designed to execute the combinations of inputs and/or stimuli (causes) shown in a decision table.

Defect: A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g. an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system.

Defect density: The number of defects identified in a component or system divided by the size of the component or system (expressed in standard measurement terms, e.g. lines of code, number of classes or function points).

Defect Discovery Rate. A metric describing the number of defects discovered over a specified period of time, usually displayed in graphical form.

Defect Removal Efficiency (DRE). A measure of the number of defects discovered in an activity versus the number that could have been found. Often used as a measure of test effectiveness.

Defect Seeding. The process of intentionally adding known defects to those already in a computer program for the purpose of monitoring the rate of detection and removal, and estimating the number of defects still remaining. Also called Error Seeding.
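
As an illustration with made-up numbers: if 50 known defects are seeded into a program and the test effort finds 40 of them (a detection rate of 80%) along with 30 unseeded ("real") defects, the seeding model estimates roughly 30 / 0.8 ≈ 38 real defects in total, i.e. about 8 real defects still remaining after the found ones are removed.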

Defect Masking. An existing defect that hasn't yet caused a failure because another defect has prevented that part of the code from being executed.

Definition-use pair: The association of the definition of a variable with the use of that variable. Variable uses include computational (e.g. multiplication) or to direct the execution of a path (“predicate” use).


Deliverable: Any (work) product that must be delivered to someone other than the (work) product's author.



Depth test. A test case that exercises some part of a system to a significant level of detail.



Decision Coverage. A test coverage criteria requiring enough test cases such that each decision has a true and false result at least once, and that each statement is executed at least once. Syn: branch coverage. Contrast with condition coverage, multiple condition coverage, path coverage, statement coverage



Dirty testing: Negative testing.



Driver: A software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system.



Dynamic testing: Testing that involves the execution of the software of a component or system.




--------------------------------------------------------------------------------



End-to-End testing. Similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.



Entry criteria: The set of generic and specific conditions for permitting a process to go forward with a defined task, e.g. a test phase. The purpose of entry criteria is to prevent a task from starting which would entail more (wasted) effort compared to the effort needed to remove the failed entry criteria.



Equivalence Partitioning: An approach in which classes of inputs are categorized for product or function validation. This usually does not include combinations of inputs, but rather a single representative value per class. For example, a given function may have several classes of input that may be used for positive testing. If the function expects an integer and receives an integer as input, this is a positive test assertion. On the other hand, if a character or any input class other than an integer is provided, this is a negative test assertion or condition.
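
A minimal Python sketch of the technique, assuming a hypothetical shipping_fee() function with three equivalence classes (negative totals are invalid, totals under 100 pay a flat fee, totals of 100 or more ship free); one representative value is tested per class:

def shipping_fee(total: float) -> float:
    """Hypothetical function under test."""
    if total < 0:
        raise ValueError("total must be non-negative")
    return 0.0 if total >= 100 else 10.0

# One representative value per equivalence class instead of exhaustive input.
assert shipping_fee(50) == 10.0     # representative of the "flat fee" partition
assert shipping_fee(250) == 0.0     # representative of the "free shipping" partition
try:
    shipping_fee(-5)                # representative of the invalid partition
    raise AssertionError("expected ValueError for negative total")
except ValueError:
    pass
print("one test per partition passed")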



Error: A human action that produces an incorrect result. [After IEEE 610]



Error Guessing: Another common approach to black-box validation. Black-box testing is when everything other than the source code may be used for testing. This is the most common approach to testing. Error guessing is when random inputs or conditions are used for testing. Random in this case includes a value either produced by a computerized random number generator, or an ad hoc value or test condition provided by the engineer.



Error seeding. The purposeful introduction of faults into a program to test effectiveness of a test suite or other quality assurance program.



Exception handling: Behavior of a component or system in response to erroneous input, from either a human user or from another component or system, or to an internal failure.



Exception Testing. Identify error messages and exception handling processes and the conditions that trigger them.



Exhaustive Testing. (NBS) Executing the program with all possible combinations of values for program variables. Feasible only for small, simple programs.



Exit criteria: The set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished. Exit criteria are used by testing to report against and to plan when to stop testing.



Exploratory Testing: An interactive process of concurrent product exploration, test design, and test execution. The heart of exploratory testing can be stated simply: The outcome of this test influences the design of the next test.




--------------------------------------------------------------------------------



Failure: Actual deviation of the component or system from its expected delivery, service or result.



Failure Mode and Effect Analysis (FMEA): A systematic approach to risk identification and analysis of identifying possible modes of failure and attempting to prevent their occurrence.



Finite state machine: A computational model consisting of a finite number of states and transitions between those states, possibly with accompanying actions. [IEEE 610]



Follow-up testing: We vary a test that yielded a less-than-spectacular failure. We vary the operation, data, or environment, asking whether the underlying fault in the code can yield a more serious failure or a failure under a broader range of circumstances.



Free Form Testing. Ad hoc testing or brainstorming, using intuition to define test cases.



Functional Decomposition Approach. An automation method in which the test cases are reduced to fundamental tasks, navigation, functional tests, data verification, and return navigation; also known as Framework Driven Approach.



Function Point Analysis (FPA): Method aiming to measure the size of the functionality of an information system. The measurement is independent of the technology. This measurement may be used as a basis for the measurement of productivity, the estimation of the needed resources, and project control.



Functional testing: Testing based on an analysis of the specification of the functionality of a component or system. See also black box testing.




--------------------------------------------------------------------------------



Glass box testing: See white box testing.



Gray box testing: The testing approach which is a mixture of Black box and White box testing. Gray box testing examines the activity of back-end components during test case execution. Two types of problems that can be encountered during gray-box testing are:
1) A component encounters a failure of some kind, causing the operation to be aborted. The user interface will typically indicate that an error has occurred.
2) The test executes in full, but the content of the results is incorrect. Somewhere in the system, a component processed data incorrectly, causing the error in the results.




--------------------------------------------------------------------------------



High-level tests. These tests involve testing whole, complete products




--------------------------------------------------------------------------------



Impact analysis: The assessment of change to the layers of development documentation, test documentation and components, in order to implement a given change to specified requirements.



Incremental development model: A development life cycle where a project is broken into a series of increments, each of which delivers a portion of the functionality in the overall project requirements. The requirements are prioritized and delivered in priority order in the appropriate increment. In some (but not all) versions of this life cycle model, each subproject follows a ‘mini V-model’ with its own design, coding and testing phases.



Incremental testing: Testing where components or systems are integrated and tested one or some at a time, until all the components or systems are integrated and tested.



Incident: Any event occurring during testing that requires investigation.



Inspection: A type of review that relies on visual examination of documents to detect defects, e.g. violations of development standards and non-conformance to higher level documentation. The most formal review technique and therefore always based on a documented procedure. [After IEEE 610, IEEE 1028]



Interface Tests: Programs that provide test facilities for external interfaces and function calls. Simulation is often used to test external interfaces that currently may not be available for testing or are difficult to control. For example, hardware resources such as hard disks and memory may be difficult to control. Therefore, simulation can provide the characteristics or behaviors for a specific function.



Internationalization testing (I18N): Testing related to handling foreign text and data within the program. This would include sorting, importing and exporting text and data, correct handling of currency and date and time formats, string parsing, upper and lower case handling, and so forth.



Interoperability Testing: Testing which measures the ability of your software to communicate across the network on multiple machines from multiple vendors, each of whom may have interpreted a design specification critical to your success differently.



Integration testing: Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems. See also component integration testing, system integration testing.



Interface testing: An integration test type that is concerned with testing the interfaces between components or systems.



Interoperability: The capability of the software product to interact with one or more specified components or systems. [After ISO 9126] See also functionality.



Interoperability testing: The process of testing to determine the interoperability of a software product. See also functionality testing.




--------------------------------------------------------------------------------



Latent bug: A bug that has been dormant (unobserved) in two or more releases.



LCSAJ: A Linear Code Sequence And Jump, consisting of the following three items (conventionally identified by line numbers in a source code listing): the start of the linear sequence of executable statements, the end of the linear sequence, and the target line to which control flow is transferred at the end of the linear sequence.



Load testing: Testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.



Load-stability test. A test designed to determine whether a Web application will remain serviceable over an extended time span.



Load isolation test. The workload for this type of test is designed to contain only the subset of test cases that caused the problem in previous testing.




--------------------------------------------------------------------------------



Maintainability: The ease with which a software product can be modified to correct defects, modified to meet new requirements, modified to make future maintenance easier, or adapted to a changed environment.



Master Test Planning. An activity undertaken to orchestrate the testing effort across levels and organizations



Monkey Testing. Inputs are generated from probability distributions that reflect actual expected usage statistics -- e.g., from user profiles. There are different levels of IQ in smart monkey testing. In the simplest, each input is considered independent of the other inputs. That is, a given test requires an input vector with five components; in low IQ testing, these would be generated independently. In high IQ monkey testing, the correlation (e.g., the covariance) between these input distributions is taken into account. In all branches of smart monkey testing, the input is considered as a single event.



Maximum Simultaneous Connection testing. This is a test performed to determine the number of connections which the firewall or Web server is capable of handling.



Memory leak: A defect in a program’s dynamic store allocation logic that causes it to fail to reclaim memory after it has finished using it, eventually causing the program to fail due to lack of memory.



Metrics: A measurement scale and the method used for measurement.



Moderator: The leader and main person responsible for an inspection or other review process.



Mutation testing/Mutation analysis: A method to determine test suite thoroughness by measuring the extent to which a test suite can discriminate the program from slight variants (mutants) of the program.
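
A toy illustration in Python (the function and the mutant are invented): a mutant is produced by changing one operator, and a test suite that includes the boundary case "kills" it by failing against it.

def is_adult(age):            # original program
    return age >= 18

def is_adult_mutant(age):     # mutant: '>=' changed to '>'
    return age > 18

def suite(fn):
    """A test suite that includes the boundary case age == 18."""
    return fn(17) is False and fn(18) is True and fn(30) is True

assert suite(is_adult)             # the suite passes on the original program
assert not suite(is_adult_mutant)  # the suite fails on the mutant, so the mutant is killed
print("mutant killed: the suite discriminates the program from this variant")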



Multiple Condition Coverage. A test coverage criteria which requires enough test cases such that all possible combinations of condition outcomes in each decision, and all points of entry, are invoked at least once.[G.Myers] Contrast with branch coverage, condition coverage, decision coverage, path coverage, statement coverage.




--------------------------------------------------------------------------------



Negative testing. A testing method whose primary purpose is falsification; that is, tests designed to break the software.




--------------------------------------------------------------------------------



Off-the-shelf software: A software product that is developed for the general market, i.e. for a large number of customers, and that is delivered to many customers in identical format.



Orthogonal array testing: A technique that can be used to reduce the number of combinations and provide maximum coverage with a minimum number of test cases. Note that it is an old and proven technique: orthogonal arrays were introduced by Plackett and Burman in 1946 and applied by G. Taguchi in 1987.



Oracle (Test Oracle). A mechanism to produce the predicted outcomes to compare with the actual outcomes of the software under test.




--------------------------------------------------------------------------------



Pair programming: A software development approach whereby lines of code (production and/or test) of a component are written by two programmers sitting at a single computer. This implicitly means ongoing real-time code reviews are performed.



Pair testing: Two testers work together to find defects. Typically, they share one computer and trade control of it while testing.



Penetration testing: The process of attacking a host from outside to ascertain remote security vulnerabilities.



Performance Testing. Testing conducted to evaluate the compliance of a system or component with specific performance requirements



Preventive Testing: Building test cases based upon the requirements specification prior to the creation of the code, with the express purpose of validating the requirements.



Portability: The ease with which the software product can be transferred from one hardware or software environment to another. [ISO 9126]




--------------------------------------------------------------------------------



Quality: The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations. [After IEEE 610]



Quality assurance: Part of quality management focused on providing confidence that quality requirements will be fulfilled. [ISO 9000]



Qualification Testing. (IEEE) Formal testing, usually conducted by the developer for the consumer, to demonstrate that the software meets its specified requirements. See: acceptance testing.



Our definition of Quality: Achieving the target (not conformance to requirements, as used by many authors) and minimizing the variability of the system under test.




--------------------------------------------------------------------------------



Race condition defect: Many concurrent defects result from data-race conditions. A data-race condition may be defined as two accesses to a shared variable, at least one of which is a write, with no mechanism used by either to prevent simultaneous access. However, not all race conditions are defects.



Recovery testing: Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.



Reengineering: The process of examining and altering an existing system to reconstitute it in a new form. May include reverse engineering (analyzing a system and producing a representation at a higher level of abstraction, such as design from code), restructuring (transforming a system from one representation to another at the same level of abstraction), redocumentation (analyzing a system and producing user and support documentation), forward engineering (using software products derived from an existing system, together with new requirements, to produce a new system), and translation (transforming source code from one language to another or from one version of a language to another).



Reference testing. A way of deriving expected outcomes by manually validating a set of actual outcomes. A less rigorous alternative to predicting expected outcomes in advance of test execution.



Regression testing: Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software, as a result of the changes made. It is performed when the software or its environment is changed.



Release note: A document identifying test items, their configuration, current status and other delivery information delivered by development to testing, and possibly other stakeholders, at the start of a test execution phase. [After IEEE 829]



Reliability testing. Verify the probability of failure free operation of a computer program in a specified environment for a specified time.

Reliability of an object is defined as the probability that it will not fail under specified conditions, over a period of time. The specified conditions are usually taken to be fixed, while the time is taken as an independent variable. Thus reliability is often written R(t) as a function of time t, the probability that the object will not fail within time t.

Any computer user would probably agree that most software is flawed, and the evidence for this is that it does fail. All software flaws are designed in -- the software does not break, rather it was always broken. But unless conditions are right to excite the flaw, it will go unnoticed -- the software will appear to work properly.



Range Testing: For each input, identifies the range over which the system behavior should be the same.



Resumption criteria: The testing activities that must be repeated when testing is re-started after a suspension. [After IEEE 829]



Re-testing: Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions.



Risk analysis: The process of assessing identified risks to estimate their impact and probability of occurrence (likelihood).



Risk management: An organized process to identify what can go wrong, to quantify and assess associated risks, and to implement/control the appropriate approach for preventing or handling each risk identified.



Robust test. A test that compares a small amount of information, so that unexpected side effects are less likely to affect whether the test passes or fails.



Root cause analysis: Analyzing the underlying factors that caused a non-conformance and possibly should be permanently eliminated through process improvement.




--------------------------------------------------------------------------------



Sanity Testing: typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is often crashing systems, bogging down systems to a crawl, or destroying databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.



Scalability testing is a subtype of performance test where performance requirements for response time, throughput, and/or utilization are tested as load on the SUT is increased over time.



Scribe: The person who records each defect mentioned and any suggestions for improvement during a review meeting, on a logging form. The scribe has to ensure that the logging form is readable and understandable.



Security testing: Testing to determine the security of the software product.



Sensitive test. A test that compares a large amount of information, so that it is more likely to detect unexpected differences between the actual and expected outcomes of the test.



Severity: The degree of impact that a defect has on the development or operation of a component or system. [After IEEE 610]



Simulation: The representation of selected behavioral characteristics of one physical or abstract system by another system. [ISO 2382/1]



Skim Testing: A testing technique used to determine the fitness of a new build or release.



Smoke test: A subset of all defined/planned test cases that cover the main functionality of a component or system, to ascertain that the most crucial functions of a program work, without bothering with finer details. A daily build and smoke test is among industry best practices. See also intake test.



Specification: A document that specifies, ideally in a complete, precise and verifiable manner, the requirements, design, behavior, or other characteristics of a component or system, and, often, the procedures for determining whether these provisions have been satisfied. [After IEEE 610]



Specification-based test. A test whose inputs are derived from a specification.



Spike testing. Testing performance or recovery behavior when the system under test (SUT) is stressed with a sudden and sharp increase in load; it should be considered a type of load test.



State transition: A transition between two states of a component or system.



State transition testing: A black box test design technique in which test cases are designed to execute valid and invalid state transitions. See also N-switch testing.



STEP (Systematic Test and Evaluation Process) Software Quality Engineering's copyrighted testing methodology.



State-based testing: Testing with test cases developed by modeling the system under test as a state machine.



State Transition Testing. A technique in which the states of a system are first identified and then test cases are written to test the triggers that cause a transition from one state to another.
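
A minimal Python sketch of state transition testing, assuming a hypothetical order workflow (new, paid, shipped, cancelled) with the triggers pay, ship, and cancel; both valid and invalid transitions are exercised:

TRANSITIONS = {
    ("new", "pay"): "paid",
    ("new", "cancel"): "cancelled",
    ("paid", "ship"): "shipped",
}

def next_state(state: str, event: str) -> str:
    """Hypothetical system under test: returns the new state or rejects an invalid transition."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"invalid transition: {event} in state {state}")

# Valid transitions: each trigger moves the system to the expected state.
assert next_state("new", "pay") == "paid"
assert next_state("paid", "ship") == "shipped"
assert next_state("new", "cancel") == "cancelled"

# Invalid transition: shipping an unpaid order must be rejected.
try:
    next_state("new", "ship")
    raise AssertionError("expected rejection of invalid transition")
except ValueError:
    pass
print("valid and invalid state transitions behaved as expected")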



Static testing. Source code analysis: analysis of source code to expose potential defects without executing the code.



Statistical testing. A test case design technique in which a model of the statistical distribution of the input is used to construct representative test cases.



Stealth bug. A bug that removes information useful for its diagnosis and correction.



Storage test. Study how memory and space are used by the program, either in resident memory or on disk. If there are limits on these amounts, storage tests attempt to prove that the program will exceed them.



Stress testing: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements. [IEEE 610]



Structural Testing. (1) (IEEE) Testing that takes into account the internal mechanism [structure] of a system or component. Types include branch testing, path testing, statement testing. (2) Testing to ensure each program statement is made to execute during testing and that each program statement performs its intended function. Contrast with functional testing. Syn: white-box testing, glass-box testing, logic-driven testing.



Stub: A skeletal or special-purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it. It replaces a called component. [After IEEE 610]



System integration testing: Testing the integration of systems and packages; testing interfaces to external organizations (e.g. Electronic Data Interchange, Internet).



System testing: The process of testing an integrated system to verify that it meets specified requirements.




--------------------------------------------------------------------------------



Test case: A set of input values, execution preconditions, expected results and execution postconditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement. [After IEEE 610]



Test conditions: The set of circumstances that a test invokes.



Test Coverage: The degree to which a given test or set of tests addresses all specified test cases for a given system or component.



Test Criteria: Decision rules used to determine whether a software item or software feature passes or fails a test.



Test data: The actual (set of) values used in the test or that are necessary to execute the test.



Test Documentation: (IEEE) Documentation describing plans for, or results of, the testing of a system or component. Types include test case specification, test incident report, test log, test plan, test procedure, test report.



Test Driver: A software module or application used to invoke a test item and, often, provide test inputs (data), control and monitor execution. A test driver automates the execution of test procedures.



Test environment: An environment containing hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test. [After IEEE 610]



Test Harness: A system of test drivers and other tools to support test execution (e.g., stubs, executable test cases, and test drivers). See: test driver.



Test infrastructure: The organizational artifacts needed to perform testing, consisting of test environments, test tools, office environment and procedures.



Test Item: A software item which is the object of testing



Test Log: A chronological record of all relevant details about the execution of a test



Test management: The planning, estimating, monitoring and control of test activities, typically carried out by a test manager.



Test oracle: A source to determine expected results to compare with the actual result of the software under test. An oracle may be the existing system (for a benchmark), a user manual, or an individual’s specialized knowledge, but should not be the code.



Test Plan: A high-level document that defines a testing project so that it can be properly measured and controlled. It defines the test strategy and the organization of the elements of the test life cycle, including resource requirements, project schedule, and test requirements.



Test Rig: A flexible combination of hardware, software, data, and interconnectivity that can be configured by the Test Team to simulate a variety of different Live Environments on which an AUT can be delivered



Test strategy: A high-level document defining the test levels to be performed and the testing within those levels for a program (one or more projects).



Test Stub: A dummy software component or object used (during development and testing) to simulate the behavior of a real component. The stub typically provides test output.
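
A minimal Python sketch showing a test driver exercising a component whose collaborator is replaced by a stub; all names (place_order, PaymentGatewayStub) are invented for illustration:

class PaymentGatewayStub:
    """Stub: stands in for the real payment gateway and returns canned output."""
    def charge(self, amount):
        return {"status": "approved", "amount": amount}

def place_order(amount, gateway):
    """Component under test: depends on a payment gateway."""
    result = gateway.charge(amount)
    return "confirmed" if result["status"] == "approved" else "rejected"

def driver():
    """Test driver: invokes the component, supplies inputs, and checks the outcome."""
    outcome = place_order(42.0, PaymentGatewayStub())
    assert outcome == "confirmed", outcome
    print("driver: place_order confirmed with stubbed gateway")

if __name__ == "__main__":
    driver()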



Test suite: A set of several test cases for a component or system under test, where the post condition of one test is often used as the precondition for the next one.



Testing: The execution of tests with the intent of proving that the system and application under test does or does not perform according to the requirements specification.



Testware: Artifacts produced during the test process required to plan, design, and execute tests, such as documentation, scripts, inputs, expected results, set-up and clear-up procedures, files, databases, environment, and any additional software or utilities used in testing. [After Fewster and Graham]



(TPI) Test Process Improvement: A method for baselining testing processes and identifying process improvement opportunities, using a static model developed by Martin Pol and Tim Koomen.



Thread Testing: A testing technique used to test the business functionality or business logic of the AUT in an end-to-end manner, in much the same way a user or an operator might interact with the system during its normal use.



Top-down testing: An incremental approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.



Traceability: The ability to identify related items in documentation and software, such as requirements with associated tests. See also horizontal traceability, vertical traceability.



Traceability matrix: A two-dimensional matrix that maps the requirements from the requirement specifications to the test cases developed and the test cases executed, with the status of each execution. This helps in finding out test coverage for a release.


--------------------------------------------------------------------------------



Usability testing. Testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer.



Use Case: A use case is a description of a system’s behavior as it responds to a request that originates from outside of that system. The use case technique is used in software engineering to capture the functional requirements of a system. Use cases describe the interaction between a primary Actor (the initiator of the interaction) and the system itself, represented as a sequence of simple steps. Actors are something or someone which exist outside the system under study, and that take part in a sequence of activities in a dialogue with the system to achieve some goal. They may be end users, other systems, or hardware devices. Each use case is a complete series of events, described from the point of view of the Actor.



Use case testing: A black box test design technique in which test cases are designed to execute user scenarios.




--------------------------------------------------------------------------------



V-model: A framework to describe the software development life cycle activities from requirements specification to maintenance. The V-model illustrates how testing activities can be integrated into each phase of the software development life cycle.



Validation: Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled. [ISO 9000]



Verification: Confirmation by examination and through the provision of objective evidence that specified requirements have been fulfilled. [ISO 9000]



Vertical traceability: The tracing of requirements through the layers of development documentation to components.



Volume testing: Testing where the system is subjected to large volumes of data


--------------------------------------------------------------------------------



Walkthrough: A step-by-step presentation by the author of a document in order to gather information and to establish a common understanding of its content. [Freedman and Weinberg, IEEE 1028]



White box testing: Testing based on an analysis of the internal structure of the component or system.



Wide Band Delphi: An expert based test estimation technique that aims at making an accurate estimation using the collective wisdom of the team members.

Download Consolidated Study Material for Certification Exams like:

Full Study Material for ISTQB – Foundation & All 3 Advanced level Certification Exams

I am bringing to you Full-Fledged Crash Courses cum Self Learn Study Material for ISTQB Foundation Level as well as Advanced CTAL level exams for Test Managers, Test Analysts & Technical Test Analysts exams.

Use the following link to access the new study material for All 4 types of ISTQB Certification exams.

ISTQB Foundation Level Exam- (Crash Course & Study Material)

- Quickstart to ISTQB Foundation Exam - Key Questions Answered

- ISTQB Foundation Exam - Full Crash Course (Set of 30 Parts)

- ISTQB Foundation Exam - K-Level wise Special Questions with Explanation (Set of 60 Questions)

Largest Database of Sample Papers - 750 Unique Questions for Practice Just before the Exam

- ISTQB Certification Preparation Help Guide

- Practical Roadmap to ISTQB Certification

- Twelve Top Questions about ISTQB Certification


ISTQB Advanced CTAL Test Manager Exams (Crash Course & Study Material)


- Quickstart to ISTQB CTAL Test Managers Exam - Key Questions Answered

- K Level Wise Topics CTAL Test Managers Exam as per the CTAL Syllabus

- CTAL Test Managers Exam Sample Papers Set of 70 Questions

- Set of 120 Descriptive Question / Answers and Preparatory Articles

- Consolidated Crash Course for CTAL Test Managers Exam


ISTQB Advanced CTAL Test Analyst & Technical Analyst Exams (Crash Course & Study Material)

- Quickstart to ISTQB CTAL Test Analysts Exam - Key Questions Answered

- K Level Wise Topics CTAL Test Analysts Exam as per the CTAL Syllabus

- K Level Wise Topics CTAL Technical Test Analysts Exam as per the CTAL Syllabus

- CTAL Test Managers Exam Sample Papers Set of 80 Questions

- Study Guides (Set of 8) & Preparatory Articles

- Consolidated Crash Course for CTAL Test Analysts Exam

Security in Software Testing and Introduction to Security Development Lifecycle

The Software Development Life Cycle, Software Testing Life Cycle & Security Testing Life Cycle are methodologies well known across the IT industry. Let us try to learn about a less widely known methodology: the Security Development Lifecycle, or SDL.

The Security Development Lifecycle is a methodology introduced by Microsoft & IBM in the year 2002. It is a process wherein security is made a priority during every stage of the software development process.

The SDL introduces the use of several techniques like threat modeling, use of static analysis tools, code reviews, and a final review of security into a structured process that can reduce the number of security vulnerabilities found after system shipment.

Following are the four principles of the Security Development Lifecycle.

Principle-1: Ensure Security by Designing
The system has to be designed from the start to protect both itself and all information processed by it, as well as to be resistant to attacks. This design has to be carried out through implementation as well.

Principle-2: Ensure Security by Default
The default state of the system should minimize the possible risks when attacks (successful or unsuccessful) take place. This includes items such as running with least access (least privilege), turning off features not needed by the majority of users, etc.

Principle-3: Ensure Security in Deployment

The software needs to be shipped with documentation and manuals that help the end users as well as the administrators install and use the software securely. Secondly, the installation of updates must be easy.

Principle-4: Communications
There must be open and responsible communication with consumers when product vulnerabilities are found, in order to keep both the end users as well as the administrators aware of how to take proactive measures to protect themselves.

In addition to the preceding four main principles, the SDL lays out the security tasks that need to take place at each step in the traditional software development life cycle.

Probably the key aspect to the success of any attempt to adopt SDL is education - a lot of education. Most people (in all disciplines) do not come to a project already completely educated on what they need to do to ensure an effective and comprehensive job of implementing SDL. An educational effort needs to be put in place, both at the beginning of SDL adoption and on an ongoing basis. This can be an in-house or a contract effort, or even a mix, according to the needs & the size of your organization.

(Figure omitted: a graphical representation of a typical SDL.)


(A) Security in Requirements Defining Phase

The first thing to be done at this stage is to determine a person who will be the single point of contact, advisor, and resource as the release goes through the stages of the SDL. This person must have sufficient training and experience to lead and guide the project and its team. Such a person assists in reviewing the plans, making useful recommendations, & ensuring any required resources or training are received by the team.

During the requirements phase, the following decisions are made:

1) How shall security be integrated with the development process?

2) What are the main objectives of security?

3) How can security be maximized while keeping disruption to a minimum?

4) What software is likely to be used with the system under development, and how will security-related features be integrated with that other software?

5) What security feature requirements are needed for the system under development? Though some of these are discovered later (when threat analysis is done), this is the time when the features determined by customer request, certification requirements, or regulatory requirements are considered.

All these steps should be taken into account and addressed at the same time the new feature and other requirements are being collected.

(B) Security in Designing Phase
During this phase in the software development life cycle, the overall plan and architecture for the system is created. As the project goes through this stage, the SDL focus remains on the following.

a) Defining the security design guidelines & architecture:
It includes determining what functions are integral to security as well as what design techniques apply to the project globally. Basically it involves the creation of an overall security design.

b) Documenting the elements of the software attack surface:
By default, which features are automatically exposed to users?

What is the minimum possible privilege level for these features?

It is very important to find any place where the attack surface is increased and question it every time.

c) Conducting the threat modeling:
This should be done at a component level. There are several methods of threat modeling that can be used, each with its own focus and take on the process, but the intent is still to come away with a prioritized list of threats that must be mitigated, in addition to the areas that should receive careful examination to ensure that they function properly.

d) Defining Supplemental Criteria for Shipping:
This can include criteria such as the beta testing being security bug-free or having passed a security bug bash.

(C) Security in Implementation Phase
During this phase, coding and integration are performed. Note that, in the Microsoft version of the SDL, this is when the formal testing is conducted, but testing should (and usually does) continue all the way through until the system is actually shipped. Any steps that can be taken in this phase to prevent or eliminate security defects are very inexpensive, and they drastically reduce the chance that these flaws will migrate to your final system.

In the SDL, the following steps are implemented:

# Use standards for coding & testing.

# Use fuzzing tools & other relevant security-testing tools (a minimal fuzzing sketch follows this list).

# Use tools for code scanning / static analysis.

# Carry out code reviews.
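
As referenced in the list above, here is a minimal, hedged sketch of the fuzzing idea in Python; it is not tied to any particular SDL toolchain, and parse_record() is an invented function under test:

import random

def parse_record(data: bytes) -> dict:
    """Hypothetical function under test: parses 'key=value' records."""
    key, _, value = data.partition(b"=")
    return {key.decode("utf-8"): value.decode("utf-8")}

def fuzz(iterations: int = 1000, max_len: int = 64) -> None:
    random.seed(0)  # reproducible runs
    for i in range(iterations):
        blob = bytes(random.randrange(256) for _ in range(random.randrange(max_len)))
        try:
            parse_record(blob)
        except UnicodeDecodeError:
            pass  # an expected, handled rejection of malformed input
        except Exception as exc:  # anything else is a potential security defect
            print(f"iteration {i}: unexpected {type(exc).__name__} for input {blob!r}")

if __name__ == "__main__":
    fuzz()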

(D) Security in Verification Phase

This is the phase in which the features are code complete and testing (including beta testing) is being conducted. In the SDL, this is the time that more stringent code reviews & a specific security test pass are conducted. It allows review & testing of not only the new or modified code but also the unmodified legacy code in the release.

(E) Security in Release Phase

The release phase in the SDL is when the system is put through the "Final Security Review" (FSR). This review is designed to answer the question of whether the system is now ready to be released to the customers from a security standpoint. The stated ideal is to have the FSR conducted 2-6 months before the system is to be released, both to ensure that the FSR is conducted on code that is as mature as possible and as unlikely to change as possible. Of course, this depends heavily on the release schedule of the system, and the move to faster and more nimble release schedules makes this timeline an almost unattainable goal in many cases.

The "Final Security Review" is intended to be conducted by an independent team, and sometimes even by outside security review consultants. This is to try to isolate the FSR from preconceptions and biases that exist on the product design team as much as possible.

(F) Security in Support and Servicing
There is no way to ship a system that is 100 percent bug free, so there has to be a way to respond to newly discovered vulnerabilities. This process includes a way to evaluate reports of new vulnerabilities and issue fixes as needed.

The other thing that needs to occur during this part of the SDL is a postmortem assessment and analysis of the security bugs found. How, where, and when they were found may indicate a need for process change, a need for tool updates or changes, etc.

HOW TO PREDICT WHETHER A BABY WILL BE A BOY OR A GIRL?

Check whether the father's age and the mother's age are even or odd, and whether the month of conception is even or odd, and you can work it out right away:

2 even, 1 odd = boy
2 odd, 1 even = girl

But if all 3 are even = girl
All 3 are odd = boy

Example: the father is 30 (even), the mother is 29 (odd), and the mother conceived in month 7 (odd). So the baby will be a girl.


Note: The month of conception is often miscalculated, so in the old days people took the month of birth and counted back 10 months (even if the pregnancy did not last a full 10 months, it was still counted as 10). For a birth in month 5, counting backwards, month 5 is 1, month 4 is 2, ... until month 8 makes 10; that is, the mother conceived in month 8.

This calculation follows the yin-yang (lunar) calendar and the eight trigrams, and is held to be very accurate. Above, three even numbers correspond to the Khon (Kun) trigram (pure yin = a baby girl with very strong feminine traits), while three odd numbers correspond to the Can (Qian) trigram (pure yang = a baby boy with very strong masculine traits).

Therefore, couples who wish to have a boy or a girl can rely on the method above to do the calculation.

Software Testing

http://www.softwaretestinghelp.com/
http://qainterviews.com/s


1. Load Testing Vs Stress Testing
http://qainterviews.com/load_vs_stress.htm
2. Performance Testing
http://qainterviews.com/performance_testing_concepts.htm
3. Scalability Testing
http://qainterviews.com/scalability_testing.htm
4. Smoke Testing Vs Sanity Testing
http://qainterviews.com/smoke_vs_sanity.htm
5. Functional Testing Interview Questions
http://qainterviews.com/functional_testing.htm
6. General Testing Interview Questions
http://qainterviews.com/general_testing.htm
7.Database Testing Interview Questions
http://qainterviews.com/database_testing.htm
8. Tips to design test data before executing your test cases
http://www.softwaretestinghelp.com/tips-to-design-test-data-before-executing-your-
9. 7 basic tips for testing multi-lingual web sites
http://www.softwaretestinghelp.com/7-basic-tips-for-testing-multi-lingual-web-sites/test-cases/

Cookie Testing

What is a Cookie?
A cookie is a small piece of information stored in a text file on the user's local drive by a web server whenever the computer connects to that server over the internet. This information is later used by the web browser to retrieve information from that machine. Generally a cookie contains personalized user data or information that is used to communicate between different web pages.

What is the use of cookies?
Cookies are nothing but the user's online identity and are used to track where the user navigated throughout the web site's pages.

For example, if you are accessing the domain http://www.testing.com/x.htm, the web browser will simply query the testing.com web server for the page x.htm. The next time you request http://www.testing.com/y.htm, a new request is sent to the testing.com web server for the y.htm page, and the web server doesn't know anything about whom the previous page x.htm was served to.

What if you want the previous history of this user's communication with the web server? You need to maintain the user state and the interaction between web browser and web server somewhere. This is where cookies come into the picture. Cookies serve the purpose of maintaining the user's interactions with the web server.

How do cookies work?
The HTTP protocol used to exchange information files on the web is used to maintain cookies. There are two types of HTTP protocol: stateless HTTP and stateful HTTP. The stateless HTTP protocol does not keep any record of previously accessed web page history, while the stateful HTTP protocol does keep some record of previous web browser and web server interactions, and this is what cookies use to maintain the user's interactions.

Whenever the user visits a webpage that uses cookies, a small piece of code inside that HTML page writes a text file called a cookie on the user's machine. Generally this is a call to some scripting language, such as JavaScript, PHP, or Perl, to write the cookie.
Here is one example of the syntax used to write a cookie, which can be placed inside any HTML page:

Set-Cookie: NAME=VALUE; expires=DATE; path=PATH; domain=DOMAIN_NAME;

When a user visits the same page or domain later time this cookie is read from disk and used to identify the second and subsequent visits of the same user on that domain.
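
The same mechanism can be sketched with Python's standard http.cookies module (the cookie name, value, domain, and expiry date below are made up for illustration; real sites will differ):

from http.cookies import SimpleCookie

# Server side: build a Set-Cookie header for the response.
cookie = SimpleCookie()
cookie["session_id"] = "abc123"
cookie["session_id"]["path"] = "/"
cookie["session_id"]["domain"] = "testing.com"
cookie["session_id"]["expires"] = "Fri, 09 Jan 2026 05:25:34 GMT"
print(cookie.output())   # e.g. Set-Cookie: session_id=abc123; Domain=testing.com; ...

# Browser side (simulated): on the next request the cookie comes back in the
# "Cookie" request header and the server parses it to recognise the returning user.
incoming = SimpleCookie()
incoming.load("session_id=abc123")
print(incoming["session_id"].value)   # abc123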

Generally two types of cookies are there:

1) Session cookies: This cookie is active as long as the browser that invoked the cookie is open. When we close the browser, this session cookie gets deleted. (There is also a way to set an expiration time for this cookie.)
2) Persistent cookies: These cookies are written permanently on the user's machine and last for months or years.

Where are cookies stored?
When any web application writes a cookie, it gets saved in a text file on the user's hard disk drive. The path where cookies get stored varies between browsers. E.g. Internet Explorer stores cookies under the path "C:\Documents and Settings\Default User\Cookies".
Here "Default User" can be replaced by the current user you are logged in as, such as "Administrator", or a user name like "Sam", etc.
To see the cookies stored in the Firefox browser: open Firefox, click Tools->Options->Privacy and then the "Show cookies" button.

How are cookies stored?
Let's take the example of a cookie written by, say, google.com in the Mozilla Firefox browser:
When you open the page google.com in Mozilla Firefox, a cookie will get written to your hard disk. To view this cookie, click the "Show cookies" button under Tools->Options->Privacy. Click on the google.com site in the cookie list. You can see different cookies written by the google.com domain, with different names. Given below is the description of one particular cookie, named _utmz.

Name: _utmz (cookie name)
Content: 173272373.1215690934.1.1.utmccn=(direct)utmcsr=(direct)utmcmd=(none)
Domain: google.com
Path: /support/talk/
Send For: Any type of connection
Expires: Friday, January 09, 2009 5:25:34 AM

Applications where cookies can be used:

1) Shopping carts:
Cookies are used to maintain online ordering systems: they remember what a user wants to buy. If the user adds some products to the shopping cart and then, for whatever reason, closes the browser without buying them, the next time the same user visits the purchase page the products added to the cart on the last visit are still there.

2) User sessions:
Cookies can track user sessions for a particular domain using a user ID and password.

3) Personalized sites:
When a user visits certain pages, they are asked which pages they do or do not want to be shown. The user's choices are stored in a cookie, and while the user is online those pages are not shown again.

4) User tracking:
Cookies are used to track the number of unique visitors online at a particular time.


Disadvantages of cookies:

1) Security issues:
Cookies sometimes store personal information about the user. If a hacker obtains these cookies, the hacker can gain access to that personal information. Corrupted cookies may also be readable by other domains, which can lead to security issues.

2) Sensitive information:
Some sites may store the user's sensitive information in cookies, which should be avoided because of privacy concerns.



Test cases for cookie testing:

1) Check if your application is writing cookies properly on hard disk.

2) As a cookie privacy policy, verify from your design documents that no personal or sensitive data is stored in cookies.

3) If there is no option other than saving sensitive data in a cookie, make sure the data is stored in encrypted form so that others cannot read it.

4) Make sure there is no overuse of cookies on the site under test. Overuse will annoy users if the browser prompts for cookies too often, and this could result in loss of site traffic and eventually loss of business.

5) Disable cookies in your browser settings: if the site relies on cookies, its major functionality will stop working when cookies are disabled. Close all browsers, delete all previously written cookies, disable cookies, and then try to access and navigate through the site under test. Check that an appropriate message is shown to the user, such as "For smooth functioning of this site, make sure cookies are enabled in your browser", and that no page crashes because cookies are disabled.

6) Accept/reject some cookies: a good way to check the site's functionality is not to accept all cookies. If the application writes 10 cookies, randomly accept some and reject others, say accept 5 and reject 5. To execute this test case, set the browser option that prompts whenever a cookie is about to be written to disk; on that prompt you can accept or reject each cookie. Then exercise the major functionality of the site and check whether pages crash or data gets corrupted.

7) Delete cookies: allow the site to write its cookies, then close all browsers and manually delete all cookies for the site under test. Access the pages again and check how they behave.

8) Corrupt the cookies: corrupting a cookie is easy, since you know where cookies are stored. Manually edit the cookie in Notepad and change its parameters to some vague values, for example alter the cookie content, name or expiry date, and observe the site's behavior. In some cases a corrupted cookie allows its data to be read by another domain; this should not happen with your site's cookies. Note that cookies written by one domain, say rediff.com, cannot be accessed by another domain, say yahoo.com, unless the cookies are corrupted and someone is trying to hack the cookie data.

9) Check the deletion of cookies from your web application pages: sometimes a cookie written by a domain such as rediff.com may be deleted by a different page under that same domain. This is the usual case when testing an 'action tracking' web portal: an action tracking or purchase tracking pixel is placed on the action page, and when the user completes the action or purchase, the cookie written on disk is deleted to avoid logging multiple actions from the same cookie. Check that reaching your action or purchase page deletes the cookie properly and that no further invalid actions or purchases are logged for the same user.

10) Cookie testing on multiple browsers: this is an important case, checking that your web application writes cookies properly on different browsers as intended and that the site works properly using those cookies. Test the application on the major browsers, such as Internet Explorer (various versions), Mozilla Firefox, Netscape and Opera.

11) If your web application uses cookies to maintain a user's logged-in state, log in with a username and password. In many cases the logged-in user ID appears as a parameter directly in the browser address bar. Change this parameter to a different value, for example if the previous user ID is 100 then make it 101, and press Enter. A proper access-denied message should be displayed, and the user should not be able to see another user's account.

These are some of the main test cases for testing web site cookies. More test scenarios can be derived by combining the scenarios above. A minimal sketch of how a couple of these checks could be automated is given below.
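The sketch below, for example, exercises checks 1 and 7 with Selenium WebDriver. It is only a sketch: it assumes the Python selenium package and a matching browser driver are installed, and the site URL is a placeholder.

from selenium import webdriver

SITE = "http://www.example.com"   # placeholder for the site under test

driver = webdriver.Chrome()
driver.get(SITE)

# Check 1: the application writes at least one cookie on a normal visit.
assert driver.get_cookies(), "expected the site to write at least one cookie"

# Check 7: delete all cookies, reload, and confirm the page still loads sensibly
# (a real test would go on to inspect the page content and behavior).
driver.delete_all_cookies()
driver.get(SITE)
assert driver.title, "page failed to load after cookies were deleted"

driver.quit()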

White Box Testing

What is White Box Testing?

White box testing is testing performed at the code level of the software. When you know the internal structure of the code, white box testing can be performed to ensure that the internal operations conform to the specification. It is also known as structural testing or glass box testing.
The various types of white box testing techniques are described below:

Unit Testing:
The developer carries out unit testing to check whether a particular module or unit of code works correctly. Unit testing sits at the most basic level, as it is carried out as and when a unit of code is developed or a particular piece of functionality is built.

Static and dynamic Analysis:
Static analysis involves going through the code in order to find out any possible defect in the code. Dynamic analysis involves executing the code and analyzing the output.

Statement Coverage:
In this type of testing the code is executed in such a manner that every statement of the application is executed at least once. It helps in assuring that all the statements execute without any side effect.

Branch Coverage:
No application is written as one continuous, unbranched piece of code; at some point the code must branch in order to perform a particular piece of functionality. Branch coverage testing validates all the branches in the code and makes sure that no branch leads to abnormal behavior of the application.
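A small illustration of the difference between statement and branch coverage, using a made-up function:

def apply_discount(price, is_member):
    # One branch: members get 10% off.
    if is_member:
        price = price * 0.9
    return price

# Statement coverage: this single call executes every statement once,
# because the if-branch is taken.
assert apply_discount(100, True) == 90.0

# Branch coverage additionally requires the case where the branch is NOT taken,
# so a second test case is needed.
assert apply_discount(100, False) == 100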

Security Testing:
Security testing is carried out to find out how well the system can protect itself against unauthorized access, hacking, cracking and any damage at the code level of the application. This type of testing needs sophisticated techniques.

Mutation Testing:
A kind of testing in which small changes (mutants) are deliberately introduced into the code and the existing test cases are run against the changed code to see whether they detect the change. Mutants that the tests fail to detect point to gaps in the test suite, so mutation testing measures how effective the tests and the coding strategy are.
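A tiny sketch of the idea: a mutant is created by changing one operator, and a boundary test case "kills" it because the original and the mutant disagree on that input. The function is made up for illustration.

def is_adult(age):
    return age >= 18          # original code

def is_adult_mutant(age):
    return age > 18           # mutant: '>=' replaced by '>'

# The test case age == 18 kills the mutant: the same expectation that passes
# against the original fails against the mutated code, so the test suite
# detects the injected change.
assert is_adult(18) is True
assert is_adult_mutant(18) is False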



Advantages of White box testing are:
i) Because knowledge of the internal code structure is a prerequisite, it becomes easy to determine which type of input data will test the application effectively.
ii) White box testing helps in optimizing the code.
iii) White box testing helps in removing extra lines of code, which can harbor hidden defects.

Disadvantages of white box testing are:
i) As knowledge of the code and internal structure is a prerequisite, a skilled tester is needed to carry out this type of testing, which increases dependency and, of course, the cost.
ii) It is almost impossible to look into every bit of code to find hidden errors, so some problems may go unnoticed and later cause the application to fail.


http://qainterviews.com/black_box_testing.htm

Q. What is Black box testing?

Black box testing is also known as functional testing. This is a software testing technique whereby the internal workings of the item being tested are not known by the tester. For example, in a black box test on a software design the tester only knows the inputs and what the expected outcomes should be and not how the program arrives at those outputs. The tester does not ever examine the programming code and does not need any further knowledge of the program other than its specifications.







Q. What are the advantages of Black box testing?

The test is unbiased because the designer and the tester are independent of each other.

The tester does not need knowledge of any specific programming languages.

The test is done from the point of view of the user, not the designer.

Test cases can be designed as soon as the specifications are complete.







Q. What are the disadvantages of Black box testing?

The test can be redundant if the software designer has already run a test case.

The test cases are difficult to design.

Testing every possible input stream is unrealistic because it would take an inordinate amount of time; therefore, many program paths will go untested.







Q. Give real time examples of Black box testing.

In this technique, we do not use the code to determine a test suite; rather, knowing the problem that we're trying to solve, we come up with four types of test data:

Easy-to-compute data

Typical data

Boundary / extreme data

Bogus data

For example, suppose we are testing a function that uses the quadratic formula to determine the two roots of a second-degree polynomial ax^2 + bx + c. For simplicity, assume that we are going to work only with real numbers, and print an error message if it turns out that the two roots are complex numbers (numbers involving the square root of a negative number).

We can come up with test data for each of the four cases, based on the value of the polynomial's discriminant (b^2 - 4ac):

Easy data (discriminant is a perfect square):

a b c Roots
1 2 1 -1, -1
1 3 2 -1, -2



Typical data (discriminant is positive):

a b c Roots
1 4 1 -3.73205, -0.267949
2 4 1 -1.70711, -0.292893



Boundary / extreme data (discriminant is zero):

a b c Roots
2 -4 2 1, 1
2 -8 8 2, 2



Bogus data (discriminant is negative, or a is zero):

a b c Roots
1 1 1 square root of negative number
0 1 1 division by zero

As with glass-box testing, you should test your code with each set of test data. If the answers match, then your code passes the black-box test.
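A sketch of how these four data sets could be run against a hypothetical implementation of the function under test (the function name and its error handling are assumptions made for the example):

import math

def quadratic_roots(a, b, c):
    # Hypothetical implementation under test: returns the two real roots of
    # a*x^2 + b*x + c, or raises ValueError for the "bogus" inputs.
    if a == 0:
        raise ValueError("not a quadratic: a is zero")
    disc = b * b - 4 * a * c
    if disc < 0:
        raise ValueError("roots are complex")
    root = math.sqrt(disc)
    return ((-b - root) / (2 * a), (-b + root) / (2 * a))

# One row from each of the four categories above.
assert quadratic_roots(1, 2, 1) == (-1.0, -1.0)                     # easy
x1, x2 = quadratic_roots(1, 4, 1)                                   # typical
assert abs(x1 + 3.73205) < 1e-5 and abs(x2 + 0.267949) < 1e-5
assert quadratic_roots(2, -4, 2) == (1.0, 1.0)                      # boundary
try:
    quadratic_roots(1, 1, 1)                                        # bogus
except ValueError:
    pass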







Q. Describe the Black box testing techniques.

Equivalence partitioning
Boundary value analysis
State transition tables
Decision table testing
Pairwise testing
Error Guessing.
Details of each technique are given below. (Note: some of the definitions and examples have been borrowed from Wikipedia.)

Equivalence partitioning:

Equivalence partitioning is a black box testing technique with the goal:

To reduce the number of test cases to a necessary minimum.
To select the right test cases to cover all possible scenarios.
Although in rare cases equivalence partitioning is also applied to the outputs of a software component, typically it is applied to the inputs of the component under test. The equivalence partitions are usually derived from the specification of the component's behavior. An input has certain ranges which are valid and other ranges which are invalid. This is best explained with the example of a function that takes a "month" parameter of a date. The valid range for the month is 1 to 12, standing for January to December; this valid range is called a partition. In this example there are two further partitions of invalid ranges: the first invalid partition is <= 0 and the second is >= 13.

 ... -2 -1  0 | 1 .............. 12 | 13 14 15 ...
--------------+---------------------+--------------
  invalid     |   valid partition   |  invalid
  partition 1 |                     |  partition 2


The testing theory related to equivalence partitioning says that only one test case of each partition is needed to evaluate the behaviour of the program for the related partition. In other words it is sufficient to select one test case out of each partition to check the behaviour of the program. To use more or even all test cases of a partition will not find new faults in the program. The values within one partition are considered to be "equivalent". Thus the number of test cases can be reduced considerably.

An additional effect by applying this technique is that you also find the so called "dirty" test cases. An inexperienced tester may be tempted to use as test cases the input data 1 to 12 for the month and forget to select some out of the invalid partitions. This would lead to a huge number of unnecessary test cases on the one hand, and a lack of test cases for the dirty ranges on the other hand.

The tendency is to relate equivalence partitioning to so called black box testing which is strictly checking a software component at its interface, without consideration of internal structures of the software. But having a closer look at the subject there are cases where it applies to grey box testing as well. Imagine an interface to a component which has a valid range between 1 and 12 like in the example above. However internally the function may have a differentiation of values between 1 and 6 and the values between 7 and 12. Depending on the input value the software internally will run through different paths to perform slightly different actions. Regarding the input and output interfaces to the component this difference will not be noticed, however in your grey-box testing you would like to make sure that both paths are examined. To achieve this it is necessary to introduce additional equivalence partitions which would not be needed for black-box testing. For this example this would be:

 ... -2 -1  0 | 1 ..... 6 | 7 ..... 12 | 13 14 15 ...
--------------+-----------+------------+--------------
  invalid     |    P1     |     P2     |  invalid
  partition 1 |   (valid partitions)   |  partition 2



To check for the expected results you would need to evaluate some internal intermediate values rather than the output interface.

Equivalence partitioning is not a stand-alone method for determining test cases; it has to be supplemented by boundary value analysis. Having determined the partitions of possible inputs, boundary value analysis is applied to select the most effective test cases out of these partitions.
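As a minimal sketch of the month example, one representative value per partition is enough (the is_valid_month function is made up to stand in for the component under test):

def is_valid_month(month):
    # Hypothetical component under test.
    return 1 <= month <= 12

# One test value per partition:
#   invalid partition 1 (<= 0), valid partition (1..12), invalid partition 2 (>= 13)
assert is_valid_month(-3) is False    # invalid partition 1
assert is_valid_month(7) is True      # valid partition
assert is_valid_month(15) is False    # invalid partition 2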



Boundary value analysis:

Boundary value analysis is a black box testing technique that determines test cases aimed at off-by-one errors. The boundaries of a software component's input ranges are areas where problems frequently occur.

Testing experience has shown that the boundaries of input ranges to a software component are especially liable to defects. A programmer who has to implement, say, the range 1 to 12 for an input that stands for the months January to December will have a line in the code that checks this range. It may look like:

if (month > 0 && month < 13)

A typical boundary fault would be an off-by-one mistake in such a check, for example writing month >= 0 instead of month > 0.

For more complex range checks in a program this may be a problem which is not so easily spotted as in the above simple example.



Applying boundary value analysis:

To set up boundary value analysis test cases, the tester first has to determine which boundaries exist at the interface of the software component. This is done by applying the equivalence partitioning technique; boundary value analysis and equivalence partitioning are inevitably linked together. For the month example, a date would have the following partitions:

 ... -2 -1  0 | 1 .............. 12 | 13 14 15 ...
--------------+---------------------+--------------
  invalid     |   valid partition   |  invalid
  partition 1 |                     |  partition 2


Applying boundary value analysis, a test case on each side of the boundary between two partitions has to be selected. In the above example this would be 0 and 1 for the lower boundary as well as 12 and 13 for the upper boundary. Each of these pairs consists of a "clean" and a "negative" test case. A "clean" test case should give a valid operation result of the program. A "negative" test case should lead to the correct, specified input error treatment, such as limiting the value, using a substitute value, or, in the case of a program with a user interface, a warning and a request to enter correct data. Boundary value analysis can thus yield six test cases: n, n-1 and n+1 for the lower limit, and n, n-1 and n+1 for the upper limit.

A further set of boundaries has to be considered when test cases are set up. A solid testing strategy also has to consider the natural boundaries of the data types used in the program. If working with signed values, for example, this may be the range around zero (-1, 0, +1). Similar to the typical range check faults, there tend to be weaknesses in programs in this range. e.g. this could be a division by zero problem where a zero value may occur although the programmer always thought the range started at 1. It could be a sign problem when a value turns out to be negative in some rare cases, although the programmer always expected it to be positive. Even if this critical natural boundary is clearly within an equivalence partition it should lead to additional test cases checking the range around zero. A further natural boundary is the natural lower and upper limit of the data type itself. E.g. an unsigned 8-bit value has the range of 0 to 255. A good test strategy would also check how the program reacts at an input of -1 and 0 as well as 255 and 256.

The tendency is to relate boundary value analysis more to so called black box testing, which is strictly checking a software component at its interfaces, without consideration of internal structures of the software. But looking closer at the subject, there are cases where it applies also to white box testing.

After determining the necessary test cases with equivalence partitioning and subsequent boundary value analysis, it is necessary to define the combinations of test cases when there are multiple inputs to a software component.
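Continuing the month sketch from the equivalence partitioning section, the values on each side of the two partition edges become test cases, plus a probe around zero as one of the natural boundaries mentioned above:

def is_valid_month(month):
    # Same hypothetical range check as in the equivalence partitioning sketch.
    return 1 <= month <= 12

# Boundary pairs around the partition edges: 0/1 (lower) and 12/13 (upper).
boundary_cases = {0: False, 1: True, 12: True, 13: False}
for value, expected in boundary_cases.items():
    assert is_valid_month(value) is expected, value

# Natural boundaries of the data type are also worth probing, e.g. around zero.
assert is_valid_month(-1) is False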



State Transition table:

In automata theory and sequential logic, a state transition table is a table showing what state (or states in the case of a nondeterministic finite automaton) a finite semi automaton or finite state machine will move to, based on the current state and other inputs. A state table is essentially a truth table in which some of the inputs are the current state, and the outputs include the next state, along with other outputs.

A state table is one of many ways to specify a state machine, other ways being a state diagram, and a characteristic equation.

One-dimensional state tables

Also called characteristic tables, single-dimension state tables are much more like truth tables than the two-dimensional versions. Inputs are usually placed on the left, and separated from the outputs, which are on the right. The outputs will represent the next state of the machine. Here's a simple example of a state machine with two states, and two combinatorial inputs:

A  B  Current State  Next State  Output
0  0       S1            S2        1
0  0       S2            S1        0
0  1       S1            S2        0
0  1       S2            S2        1
1  0       S1            S1        1
1  0       S2            S1        1
1  1       S1            S1        1
1  1       S2            S2        0

S1 and S2 would most likely represent the single bits 0 and 1, since a single bit can only have two states.
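The same one-dimensional table can be turned directly into test data: each row maps (A, B, current state) to the expected (next state, output). A sketch follows; the step function here simply looks the row up, standing in for a real implementation under test.

# Each row of the table above: (A, B, current_state) -> (next_state, output)
TRANSITIONS = {
    (0, 0, "S1"): ("S2", 1),
    (0, 0, "S2"): ("S1", 0),
    (0, 1, "S1"): ("S2", 0),
    (0, 1, "S2"): ("S2", 1),
    (1, 0, "S1"): ("S1", 1),
    (1, 0, "S2"): ("S1", 1),
    (1, 1, "S1"): ("S1", 1),
    (1, 1, "S2"): ("S2", 0),
}

def step(a, b, state):
    # Placeholder for the machine under test.
    return TRANSITIONS[(a, b, state)]

# State transition testing: drive every row of the table once and check the result.
for (a, b, state), expected in TRANSITIONS.items():
    assert step(a, b, state) == expected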

Two-dimensional state tables

State transition tables are typically two-dimensional tables. There are two common forms for arranging them.

The vertical (or horizontal) dimension indicates current states, the horizontal (or vertical) dimension indicates events, and the cells (row/column intersections) in the table contain the next state if an event happens (and possibly the action linked to this state transition).
State transition table (events across the top):

State    E1       E2      ...   En
S1       -        Ay/Sj   ...   -
S2       -        -       ...   Ax/Si
...      ...      ...     ...   ...
Sm       Az/Sk    -       ...   -

(S: state, E: event, A: action, -: illegal transition)

The vertical (or horizontal) dimension indicates current states, the horizontal (or vertical) dimension indicates next states, and the row/column intersections contain the event which will lead to a particular next state.
State transition table (next states across the top):

current  S1       S2      ...   Sm
S1       Ay/Ej    -       ...   -
S2       -        -       ...   Ax/Ei
...      ...      ...     ...   ...
Sm       -        Az/Ek   ...   -

(S: state, E: event, A: action, -: impossible transition)



Decision tables:

Decision tables are a precise yet compact way to model complicated logic. Decision tables, like if-then-else and switch-case statements, associate conditions with actions to perform. But, unlike the control structures found in traditional programming languages, decision tables can associate many independent conditions with several actions in an elegant way

Decision tables are typically divided into four quadrants, as shown below.

The four quadrants:

Conditions   |   Condition alternatives
Actions      |   Action entries

Each decision corresponds to a variable, relation or predicate whose possible values are listed among the condition alternatives. Each action is a procedure or operation to perform, and the entries specify whether (or in what order) the action is to be performed for the set of condition alternatives the entry corresponds to. Many decision tables include in their condition alternatives the don't care symbol, a hyphen. Using don't cares can simplify decision tables, especially when a given condition has little influence on the actions to be performed. In some cases, entire conditions thought to be important initially are found to be irrelevant when none of the conditions influence which actions are performed.

Aside from the basic four quadrant structure, decision tables vary widely in the way the condition alternatives and action entries are represented. Some decision tables use simple true/false values to represent the alternatives to a condition (akin to if-then-else), other tables may use numbered alternatives (akin to switch-case), and some tables even use fuzzy logic or probabilistic representations for condition alternatives. In a similar way, action entries can simply represent whether an action is to be performed (check the actions to perform), or in more advanced decision tables, the sequencing of actions to perform (number the actions to perform).
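As a sketch, a small decision table with two true/false conditions and a don't-care entry can be held as data and evaluated directly; the condition and action names below are invented for illustration.

# Condition alternatives -> action entries; None plays the role of the
# "don't care" hyphen. Conditions: (order_is_paid, items_in_stock).
DECISION_TABLE = [
    ((True,  True),  "ship order"),
    ((True,  False), "back-order items"),
    ((False, None),  "request payment"),   # stock level is irrelevant here
]

def decide(order_is_paid, items_in_stock):
    for (paid, in_stock), action in DECISION_TABLE:
        if paid == order_is_paid and in_stock in (None, items_in_stock):
            return action

assert decide(True, True) == "ship order"
assert decide(False, True) == "request payment"   # the don't-care rule matched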



Pair wise testing:

All-pairs testing or pairwise testing is a combinatorial software testing method that, for each pair of input parameters to a system (typically, a software algorithm), tests all possible discrete combinations of those parameters. Using carefully chosen test vectors, this can be done much faster than an exhaustive search of all combinations of all parameters, by "parallelizing" the tests of parameter pairs. The number of tests is typically O (nm), where n and m are the number of possibilities for each of the two parameters with the most choices.

The reasoning behind all-pairs testing is this: the simplest bugs in a program are generally triggered by a single input parameter. The next simplest category of bugs consists of those dependent on interactions between pairs of parameters, which can be caught with all-pairs testing. Bugs involving interactions between three or more parameters are progressively less common, whilst at the same time being progressively more expensive to find by exhaustive testing, which has as its limit the exhaustive testing of all possible inputs.

Many testing methods regard all-pairs testing of a system or subsystem as a reasonable cost-benefit compromise between often computationally infeasible higher-order combinatorial testing methods, and less exhaustive methods which fail to exercise all possible pairs of parameters. Because no testing technique can find all bugs, all-pairs testing is typically used together with other quality assurance techniques such as unit testing, symbolic execution, fuzz testing, and code review.
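The all-pairs criterion itself is easy to state in code: for every pair of parameters, every combination of their values must appear in at least one test. The sketch below only checks a candidate suite against that criterion (the parameters and values are made up); generating a small suite that satisfies it is what dedicated pairwise tools do.

from itertools import combinations, product

# Hypothetical parameters and their possible values.
PARAMS = {
    "browser": ["Firefox", "IE", "Opera"],
    "os": ["Windows", "Linux"],
    "connection": ["dsl", "dialup"],
}

def covers_all_pairs(tests):
    # Each test is a dict mapping parameter name -> chosen value. Return True
    # if every value pair of every parameter pair occurs in at least one test.
    for p1, p2 in combinations(PARAMS, 2):
        needed = set(product(PARAMS[p1], PARAMS[p2]))
        seen = {(t[p1], t[p2]) for t in tests}
        if needed - seen:
            return False
    return True

# Exhaustive testing needs 3 * 2 * 2 = 12 tests; an all-pairs suite covers the
# same pairs with fewer tests. Here we only verify the exhaustive suite.
exhaustive = [dict(zip(PARAMS, values)) for values in product(*PARAMS.values())]
assert covers_all_pairs(exhaustive)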



Error Guessing:

Error guessing is a test case design technique where the experience of the tester is used to postulate what faults might occur, and to design tests specifically to expose them. It is a test data selection technique: the selection criterion is to pick values that seem likely to cause errors.
http://qainterviews.com/test_plan_template.htm




http://qainterviews.com/telecom_testing.htm
http://qainterviews.com/test_strategy.htm
http://qainterviews.com/stress_testing.htm


Stress testing

Stress testing is subjecting a system to an unreasonable load while denying it the resources (e.g., RAM, disc, mips, interrupts etc.) needed to process that load. The idea is to stress a system to the breaking point in order to find bugs that will make that break potentially harmful. The system is not expected to process the overload without adequate resources, but to behave (e.g., fail) in a decent manner (e.g., not corrupting or losing data). Bugs and failure modes discovered under stress testing may or may not be repaired depending on the application, the failure mode, consequences, etc. The load (incoming transaction stream) in stress testing is often deliberately distorted so as to force the system into resource depletion.

Stress testing tries to break the system under test by overwhelming its resources or by taking resources away from it. The main purpose behind stress testing is to make sure that the system fails and recovers gracefully, in other words stress testing determines the recoverability of the application.

Stress testing deliberately induces chaos and unpredictability. To take the example of a Web application, here are some ways in which stress can be applied to the system:

- Double the number of concurrent users/HTTP connections

- Randomly shut down and restart ports on the network switches/routers that connect the servers (via SNMP commands for example)

- Take the database offline, then restart it

- Rebuild a RAID array while the system is running

- Run processes that consume resources (CPU, memory, disk, network) on the Web and database servers

Stress testing does not break the system purely for the sake of breaking it, but instead it allows testers to observe how the system reacts to failure and how it recovers sanely after the failure. Does it save its state or does it crash suddenly? Does it just hang and freeze or does it fail gracefully? On restart, is it able to recover from the last good state? Does it print out meaningful error messages to the user? Is the security of the system compromised because of unexpected failures?
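A very rough sketch of applying load from a test script, using only the Python standard library. The URL and the numbers are placeholders, and real stress tests normally rely on dedicated load tools and on the resource-starving techniques listed above.

from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen
from urllib.error import URLError

URL = "http://www.example.com/"   # placeholder for the system under test

def hit(_):
    # Fire one request and report the status code or the failure mode.
    try:
        with urlopen(URL, timeout=5) as resp:
            return resp.status
    except URLError as exc:
        return "failed: %s" % exc.reason

# Ramp the number of concurrent "users" up until errors appear, then observe
# whether the system degrades and recovers gracefully rather than corrupting data.
with ThreadPoolExecutor(max_workers=200) as pool:
    results = list(pool.map(hit, range(1000)))

print({outcome: results.count(outcome) for outcome in set(results)})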



http://qainterviews.com/testing_methodologies.htm
Testing Methodologies

Below are the testing methodologies followed in Software Testing:

Black - Box Testing
When using this strategy, the tester views the program as a black box and does not see the program's code. Techniques include equivalence partitioning, boundary value analysis and error guessing.

White - Box Testing
When using this strategy, the tester examines the internal structure of the program. Techniques include statement coverage, decision coverage, condition coverage, decision/condition coverage and multiple-condition coverage.

Gray - Box Testing
When using this strategy, black box testing is combined with knowledge of the internals, such as SQL for database queries and adding/loading data sets, in order to confirm functions and to query the database to confirm the expected results.

Test Script
A type of test file: a set of instructions run automatically by a software or hardware test tool.

http://qainterviews.com/web_testing.htm

http://qainterviews.com/test_metrics.htm
Test Metrics

Metrics are the means by which software quality can be measured; they give you confidence in the product. You can consider them product management indicators, which can be either quantitative or qualitative, and they typically provide the visibility you need.



Defect Removal Efficiency:

DRE = (Defects removed during a development phase/Defects latent in the product at that phase) x 100%

Since the number of latent defects in a software product is unknown at any point in time, it is approximated as the number of defects removed during the phase plus the number of defects found later (that already existed during that phase).


Defect density:

Defect Density is a measure of the total known defects divided by the size of the software entity being measured.

Defect Density= (Number of Known Defects/Size)

The number of known defects is the count of total defects identified against a particular software entity during a particular time period. Examples include:
· defects to date since the creation of the module
· defects found in a program during an inspection
· defects to date since the shipment of a release to the customer


Defect severity index:

An index representing the average of the severity of the defects. This provides a direct measurement of the quality of the product—specifically, reliability, fault tolerance and stability.

Two measures are required to compute the defect severity index. A number is assigned to each severity level: 4 (critical), 3 (serious), 2 (medium), 1 (low). Multiply the number of defects at each level by that level's number, add the totals, and divide by the total number of defects to obtain the defect severity index.

Test coverage:

Defined as the extent to which testing covers the product’s complete functionality. This metric is an indication of the completeness of the testing. It does not indicate anything about the effectiveness of the testing. This can be used as a criterion to stop testing. Coverage could be with respect to requirements, functional topic list, business flows, use cases, etc. It can be calculated based on the number of items that were covered vs. the total number of items.


Test case effectiveness:

Defined as the extent to which test cases are able to find defects. This metric provides an indication of the effectiveness of the test cases and the stability of the software.

Test case effectiveness = (Number of test cases that found defects/ Total number of test cases)


Defects per KLOC:

Defined as the number of defects per 1,000 lines of code. This metric indicates the quality of the product under test. It can be used as a basis for estimating defects to be addressed in the next phase or the next version.

Defects per KLOC= (Number of defects found/Total kilo lines of code)
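The formulas above are simple ratios; here is a worked sketch with made-up counts, just to show how the numbers combine:

# Hypothetical counts from one test phase.
defects_removed_in_phase = 80
defects_found_later = 20                        # escapes attributed to the phase
known_defects = 100
size_in_kloc = 25.0                             # size in thousands of lines of code
severity_counts = {4: 5, 3: 15, 2: 50, 1: 30}   # critical / serious / medium / low
test_cases_total = 400
test_cases_that_found_defects = 60

dre = defects_removed_in_phase / (defects_removed_in_phase + defects_found_later) * 100
defect_density = known_defects / size_in_kloc   # size is in KLOC, so this is also defects per KLOC
severity_index = (sum(level * count for level, count in severity_counts.items())
                  / sum(severity_counts.values()))
test_case_effectiveness = test_cases_that_found_defects / test_cases_total

print("DRE = %.0f%%" % dre)                                   # 80%
print("Defect density = %.1f defects/KLOC" % defect_density)  # 4.0
print("Defect severity index = %.2f" % severity_index)        # 1.95
print("Test case effectiveness = %.0f%%" % (test_case_effectiveness * 100))  # 15%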

Software Testing Life Cycle

Software testing has its own life cycle that intersects with every stage of the SDLC. The software testing life cycle identifies the different test activities to be carried out at different stages of the overall software development process, in order to achieve the best results in terms of software quality.

The Software Testing Life Cycle consists of the following phases:

Test Planning

Test Strategy

Test Design

Verification

Validation Cycles

Final Testing and Implementation

Post Implementation.


Test Planning:

This is the phase where the project manager decides what needs to be tested, what budget is allocated for the testing activity, what time period is allocated for testing, and so on. These factors have to be planned properly during test planning.

Activities at this stage include preparation of a high-level test plan (according to the IEEE test plan template). The software test plan is designed to address the scope, approach, resources and schedule of all testing activities. It must identify the items to be tested, the features to be tested, the types of testing to be performed, the personnel responsible for testing, the resources and schedule required to complete testing, and the risks associated with the plan. Almost all of the activities during this stage are included in this software test plan and revolve around it.



Test Strategy:

Once the test plan is made and agreed upon, the next step is to look a little deeper into the project and decide what types of testing should be carried out at the different stages of the SDLC, whether to automate and, if so, when the appropriate time to automate is, and what specific documentation is needed for testing.

Proper and regular meetings should be held between the testing team, project managers, development team and business analysts to check the progress of the work. This gives a fair idea of the movement of the project, ensures the completeness of the test plan created in the planning phase, and helps refine the testing strategy created earlier. In this stage we start creating test case formats and the test cases themselves, develop a functional validation matrix based on the business requirements to ensure that every system requirement is covered by one or more test cases, identify which test cases to automate, and begin reviewing documentation such as the functional design, business requirements, product specifications and product externals. We also define the areas for stress and performance testing.


Test Design:

The test plans and cases developed in the analysis phase are revised, and the functional validation matrix is revised and finalized. In this stage the risk assessment criteria are developed. If automation is planned, the test cases to automate are selected and script writing begins. Test data is prepared, standards for unit testing and pass/fail criteria are defined, the testing schedule is revised (if necessary) and finalized, and the test environment is prepared.



Verification:

In this phase all the test plans and test cases are completed, the scripting of the automated test cases is finished, and the stress and performance testing plans are completed. We support the development team during their unit testing phase, and bugs are reported as they are found. Integration tests are performed and errors (if any) are reported.


Validation Cycles:

In this phase we execute all the test cases that were planned in the test planning and verification phases, including both the manual and the automated test case executions. Whatever defects/bugs are found are reported using a bug tracking tool such as Bugzilla or Test Director. This validation process occurs in cycles: after getting a build from the developers, the testers execute the test cases and report defects, the developers fix the bugs and deliver a new build, and the testers pick up the new build and test both the bug fixes and possible regressions. This continues for a few cycles until the build becomes stable enough to stop testing.


Final Testing and Implementation:

In this phase the remaining stress and performance test cases are executed, the testing documentation is completed and updated, and the different test metrics are provided and completed. Acceptance, load and recovery testing are also conducted, and the application is verified under production conditions.


Post Implementation:

In this phase the testing process is evaluated and the lessons learnt from it are documented. Approaches to prevent similar problems in future projects are identified, and plans to improve the processes are created. The recording of new errors and enhancements is an ongoing process. The test environment is cleaned up and the test machines are restored to their baselines in this stage.



Given below is a tabular representation of the testing life cycle.

Software Testing Life Cycle

Phase: Planning
Activities: Create high-level test plan
Outcome: Test plan, refined specification

Phase: Analysis
Activities: Create detailed test plan, functional validation matrix, test cases
Outcome: Revised test plan, functional validation matrix, test cases

Phase: Design
Activities: Test cases are revised; select which test cases to automate
Outcome: Revised test cases, test data sets, risk assessment sheet

Phase: Construction
Activities: Scripting of the test cases to automate
Outcome: Test procedures/scripts, drivers, test results, bug reports

Phase: Testing cycles
Activities: Complete testing cycles
Outcome: Test results, bug reports

Phase: Final testing
Activities: Execute remaining stress and performance tests, complete documentation
Outcome: Test results and different metrics on test efforts

Phase: Post implementation
Activities: Evaluate testing processes
Outcome: Plan for improvement of the testing process