Software Testing

Software testing is the process of evaluating a software item to detect differences between given input and expected output, and to assess the features of the software item. Testing assesses the quality of the product.

What is the fundamental test process in software testing? Testing is a process rather than a single activity. This process starts with test planning, moves through designing test cases and preparing for execution, and continues with evaluating status until test closure. Software testing is the process of executing a program or application with the intent of finding software bugs. It can also be stated as the process of validating and verifying that a software program, application, or product meets the requirements that guided its design and development.
What is Testing?
Testing is the process of evaluating a system or its component(s) with the intent to find whether it satisfies the specified requirements or not. In simple words, testing is executing a system in order to identify any gaps, errors, or missing requirements in contrary to the actual requirements.
According to ANSI/IEEE 1059 standard, Testing can be defined as - A process of analyzing a software item to detect the differences between existing and required conditions (that is defects/errors/bugs) and to evaluate the features of the software item.
Testing must be planned, and it requires discipline to act upon that plan. The quality and effectiveness of software testing are primarily determined by the quality of the test processes used.

Software testing is an organizational process within software development in which business-critical software is verified for correctness, quality, and performance. It is used to ensure that business systems and product features behave as expected, and it may be either a manual or an automated process.
Who does Testing?
It depends on the process and the associated stakeholders of the project(s). In the IT industry, large companies have a team with the responsibility of evaluating the developed software in the context of the given requirements. Moreover, developers also conduct testing, which is called Unit Testing. In most cases, the following professionals are involved in testing a system within their respective capacities −
- Software Tester
- Software Developer
- Project Lead/Manager
- End User
Different companies have different designations for people who test the software on the basis of their experience and knowledge such as Software Tester, Software Quality Assurance Engineer, QA Analyst, etc.
Software cannot be tested meaningfully at just any point in its life cycle. The next two sections describe when testing should start and when it should end during the SDLC.
When to Start Testing?
An early start to testing reduces the cost and time of rework and helps deliver error-free software to the client. In the Software Development Life Cycle (SDLC), testing can be started from the Requirements Gathering phase and continued till the deployment of the software.
It also depends on the development model that is being used. For example, in the Waterfall model, formal testing is conducted in the testing phase; but in the incremental model, testing is performed at the end of every increment/iteration and the whole application is tested at the end.
Testing is done in different forms at every phase of SDLC −
- During the requirement gathering phase, the analysis and verification of requirements are also considered as testing.
- Reviewing the design in the design phase with the intent to improve the design is also considered as testing.
- Unit testing performed by a developer on completion of the code is also categorized as testing.
When to Stop Testing?
It is difficult to determine when to stop testing, as testing is a never-ending process and no one can claim that software is 100% tested. The following aspects should be considered for stopping the testing process −
- Testing Deadlines
- Completion of test case execution
- Completion of functional and code coverage to a certain point
- Bug rate falls below a certain level and no high-priority bugs are identified
- Management decision
Verification & Validation
These two terms are very confusing for most people, who use them interchangeably. The following table highlights the differences between verification and validation.
Sr.No. | Verification | Validation |
---|---|---|
1 | Verification addresses the concern: 'Are you building it right?' | Validation addresses the concern: 'Are you building the right thing?' |
2 | Ensures that the software system meets all the specified functionality. | Ensures that the functionalities meet the intended behavior. |
3 | Verification takes place first and includes the checking for documentation, code, etc. | Validation occurs after verification and mainly involves the checking of the overall product. |
4 | Done by developers. | Done by testers. |
5 | It has static activities, as it includes collecting reviews, walkthroughs, and inspections to verify a software. | It has dynamic activities, as it includes executing the software against the requirements. |
6 | It is an objective process and no subjective decision should be needed to verify a software. | It is a subjective process and involves subjective decisions on how well a software works. |
Given below are some of the most common myths about software testing.
Myth 1: Testing is Too Expensive
Reality − There is a saying: pay less for testing during software development, or pay more for maintenance and correction later. Early testing saves both time and cost in many aspects; however, cutting costs by skipping testing may result in the improper design of a software application, rendering the product useless.
Myth 2: Testing is Time-Consuming
Reality − During the SDLC phases, testing is never a time-consuming process. However, diagnosing and fixing the errors identified during proper testing is a time-consuming but productive activity.
Myth 3: Only Fully Developed Products are Tested
Reality − No doubt, testing depends on the source code, but reviewing requirements and developing test cases is independent of the developed code. However, an iterative or incremental development life cycle model may reduce the dependency of testing on fully developed software.
Myth 4: Complete Testing is Possible
Reality − It becomes an issue when a client or tester thinks that complete testing is possible. Even if all paths appear to have been tested by the team, complete testing is never possible. There might be some scenarios that are never executed by the test team or the client during the software development life cycle and may be executed once the project has been deployed.
Myth 5: A Tested Software is Bug-Free
Reality − This is a very common myth that clients, project managers, and the management team believe in. No one can claim with absolute certainty that a software application is 100% bug-free, even if a tester with superb testing skills has tested the application.
Myth 6: Missed Defects are due to Testers
Reality − It is not a correct approach to blame testers for bugs that remain in the application even after testing has been performed. This myth relates to constraints of time, cost, and changing requirements. However, the test strategy may also result in bugs being missed by the testing team.
Myth 7: Testers are Responsible for Quality of Product
Reality − It is a very common misinterpretation that only testers or the testing team should be responsible for product quality. Testers' responsibility is to identify bugs and report them to the stakeholders; it is then the stakeholders' decision whether to fix the bugs or release the software. Releasing the software anyway puts more pressure on the testers, as they will be blamed for any error.
Myth 8: Test Automation should be used wherever possible to Reduce Time
Reality − Yes, it is true that test automation reduces testing time, but it is not possible to start test automation at any time during software development. Test automation should be started when the software has been manually tested and is stable to some extent. Moreover, test automation can never be used if requirements keep changing.
Myth 9: Anyone can Test a Software Application
Reality − People outside the IT industry think, and even believe, that anyone can test a software and that testing is not a creative job. However, testers know very well that this is a myth. Thinking up alternative scenarios and trying to crash a software with the intent of exploring potential bugs is not possible for the person who developed it.
Myth 10: A Tester's only Task is to Find Bugs
Reality − Finding bugs in a software is the task of the testers, but at the same time, they are domain experts of the particular software. Developers are only responsible for the specific component or area that is assigned to them but testers understand the overall workings of the software, what the dependencies are, and the impacts of one module on another module.
Testing, Quality Assurance, and Quality Control
Most people get confused when it comes to pinning down the differences among Quality Assurance, Quality Control, and Testing. Although they are interrelated and, to some extent, can be considered the same activities, there are distinguishing points that set them apart. The following table lists the points that differentiate QA, QC, and Testing.
Quality Assurance | Quality Control | Testing |
---|---|---|
QA includes activities that ensure the implementation of processes, procedures, and standards in the context of verifying developed software against the intended requirements. | It includes activities that ensure the verification of a developed software with respect to documented (or not, in some cases) requirements. | It includes activities that ensure the identification of bugs/errors/defects in a software. |
Focuses on processes and procedures rather than conducting actual testing on the system. | Focuses on actual testing by executing the software with an aim to identify bug/defect through implementation of procedures and process. | Focuses on actual testing. |
Process-oriented activities. | Product-oriented activities. | Product-oriented activities. |
Preventive activities. | It is a corrective process. | It is a preventive process. |
It is a subset of Software Test Life Cycle (STLC). | QC can be considered as the subset of Quality Assurance. | Testing is the subset of Quality Control. |
Audit and Inspection
Audit − It is a systematic process to determine how the actual testing process is conducted within an organization or a team. Generally, it is an independent examination of processes involved during the testing of a software. As per IEEE, it is a review of documented processes that organizations implement and follow. Types of audit include Legal Compliance Audit, Internal Audit, and System Audit.
Inspection − It is a formal technique that involves formal or informal technical reviews of any artifact by identifying any error or gap. As per IEEE94, inspection is a formal evaluation technique in which software requirements, designs, or codes are examined in detail by a person or a group other than the author to detect faults, violations of development standards, and other problems.
Formal inspection meetings may include the following processes: Planning, Overview, Preparation, Inspection Meeting, Rework, and Follow-up.
Testing and Debugging
Testing − It involves identifying bugs/errors/defects in a software without correcting them. Normally, professionals with a quality assurance background are involved in bug identification. Testing is performed in the testing phase.
Debugging − It involves identifying, isolating, and fixing the problems/bugs. Developers who code the software conduct debugging upon encountering an error in the code. Debugging is a part of White Box Testing or Unit Testing. Debugging can be performed in the development phase while conducting Unit Testing or in phases while fixing the reported bugs.
Many organizations around the globe develop and implement different standards to improve the quality of their software. This chapter briefly describes some of the widely used standards related to Quality Assurance and Testing.
ISO/IEC 9126
This standard deals with the following aspects to determine the quality of a software application −
- Quality model
- External metrics
- Internal metrics
- Quality in use metrics
This standard presents some set of quality attributes for any software such as −
- Functionality
- Reliability
- Usability
- Efficiency
- Maintainability
- Portability
The above-mentioned quality attributes are further divided into sub-factors, which you can explore when studying the standard in detail.
ISO/IEC 9241-11
Part 11 of this standard deals with the extent to which a product can be used by specified users to achieve specified goals with Effectiveness, Efficiency and Satisfaction in a specified context of use.
This standard proposes a framework that describes the usability components and the relationships between them. In this standard, usability is considered in terms of user performance and satisfaction. According to ISO 9241-11, usability depends on the context of use, and the level of usability will change as the context changes.
ISO/IEC 25000:2005
ISO/IEC 25000:2005 is commonly known as the standard that provides the guidelines for Software Quality Requirements and Evaluation (SQuaRE). This standard helps in organizing and enhancing the process related to software quality requirements and their evaluations. In reality, ISO-25000 replaces the two old ISO standards, i.e. ISO-9126 and ISO-14598.
SQuaRE is divided into sub-parts such as −
- ISO 2500n − Quality Management Division
- ISO 2501n − Quality Model Division
- ISO 2502n − Quality Measurement Division
- ISO 2503n − Quality Requirements Division
- ISO 2504n − Quality Evaluation Division
The main contents of SQuaRE are −
- Terms and definitions
- Reference Models
- General guide
- Individual division guides
- Standard related to Requirement Engineering (i.e. specification, planning, measurement and evaluation process)
ISO/IEC 12119
This standard deals with software packages delivered to the client. It does not focus or deal with the clients’ production process. The main contents are related to the following items −
- Set of requirements for software packages.
- Instructions for testing a delivered software package against the specified requirements.
Miscellaneous
Some of the other standards related to QA and Testing processes are mentioned below −
Sr.No | Standard & Description |
---|---|
1 | IEEE 829 A standard for the format of documents used in different stages of software testing. |
2 | IEEE 1061 A methodology for establishing quality requirements, identifying, implementing, analyzing, and validating the process, and product of software quality metrics. |
3 | IEEE 1059 Guide for Software Verification and Validation Plans. |
4 | IEEE 1008 A standard for unit testing. |
5 | IEEE 1012 A standard for Software Verification and Validation. |
6 | IEEE 1028 A standard for software inspections. |
7 | IEEE 1044 A standard for the classification of software anomalies. |
8 | IEEE 1044-1 A guide for the classification of software anomalies. |
9 | IEEE 830 A guide for developing system requirements specifications. |
10 | IEEE 730 A standard for software quality assurance plans. |
11 | IEEE 1061 A standard for software quality metrics and methodology. |
12 | IEEE 12207 A standard for software life cycle processes and life cycle data. |
13 | BS 7925-1 A vocabulary of terms used in software testing. |
14 | BS 7925-2 A standard for software component testing. |
This section describes the different types of testing that may be used to test a software during SDLC.
Manual Testing
Manual testing includes testing a software manually, i.e., without using any automated tool or any script. In this type, the tester takes over the role of an end-user and tests the software to identify any unexpected behavior or bug. There are different stages for manual testing such as unit testing, integration testing, system testing, and user acceptance testing.
Testers use test plans, test cases, or test scenarios to test a software to ensure the completeness of testing. Manual testing also includes exploratory testing, as testers explore the software to identify errors in it.
Automation Testing
Automation testing, which is also known as Test Automation, is when the tester writes scripts and uses other software to test the product. This process involves the automation of a manual process. Automation testing is used to re-run, quickly and repeatedly, test scenarios that were performed manually.
Apart from regression testing, automation testing is also used to test the application from a load, performance, and stress point of view. It increases test coverage, improves accuracy, and saves time and money in comparison to manual testing.
What to Automate?
It is not possible to automate everything in a software. Areas where users perform transactions, such as login or registration forms, and any area where a large number of users can access the software simultaneously, should be automated.
Furthermore, all GUI items, connections with databases, field validations, etc. can be efficiently tested by automating the manual process.
When to Automate?
Test Automation should be used by considering the following aspects of a software −
- Large and critical projects
- Projects that require testing the same areas frequently
- Requirements not changing frequently
- Accessing the application for load and performance with many virtual users
- Stable software with respect to manual testing
- Availability of time
How to Automate?
Automation is done by using a supporting scripting language, such as VBScript, and an automated software application. There are many tools available that can be used to write automation scripts. Before mentioning the tools, let us identify the process that can be used to automate the testing process (a short script sketch follows the list) −
- Identifying areas within the software for automation
- Selecting the appropriate tool for test automation
- Writing test scripts
- Developing test suites
- Executing the scripts
- Creating result reports
- Identifying any potential bugs or performance issues
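As an illustration, the sketch below automates a simple login check in the style produced by this process. It is a minimal example, assuming the Python selenium package and a Chrome driver are installed; the URL and element names are hypothetical placeholders.

```python
# Minimal automated UI test sketch (hypothetical URL and element names).
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_login_form():
    driver = webdriver.Chrome()
    try:
        # Open the page under test (placeholder URL).
        driver.get("https://example.com/login")

        # Fill in the form fields (hypothetical element names).
        driver.find_element(By.NAME, "username").send_keys("testuser")
        driver.find_element(By.NAME, "password").send_keys("secret")
        driver.find_element(By.ID, "submit").click()

        # Verify the expected post-login state.
        assert "Dashboard" in driver.title, "login did not reach the dashboard"
    finally:
        driver.quit()  # release the browser even if an assertion fails
```

Such a script can then be scheduled to run on every build, which is where automation pays back its up-front cost.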
Software Testing Tools
The following tools can be used for automation testing −
- HP Quick Test Professional
- Selenium
- IBM Rational Functional Tester
- SilkTest
- TestComplete
- Testing Anywhere
- WinRunner
- LoadRunner
- Visual Studio Test Professional
- WATIR
There are different methods that can be used for software testing. This chapter briefly describes the methods available.
Black-Box Testing
The technique of testing without having any knowledge of the interior workings of the application is called black-box testing. The tester is oblivious to the system architecture and does not have access to the source code. Typically, while performing a black-box test, a tester will interact with the system's user interface by providing inputs and examining outputs without knowing how and where the inputs are worked upon.
The following table lists the advantages and disadvantages of black-box testing.
Advantages | Disadvantages |
---|---|
Well suited and efficient for large code segments. | Limited coverage, since only a selected number of test scenarios is actually performed. |
Code access is not required. | Inefficient testing, due to the fact that the tester only has limited knowledge about an application. |
Clearly separates user's perspective from the developer's perspective through visibly defined roles. | Blind coverage, since the tester cannot target specific code segments or error-prone areas. |
Large numbers of moderately skilled testers can test the application with no knowledge of implementation, programming language, or operating systems. | The test cases are difficult to design. |
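To make the black-box idea concrete, here is a minimal sketch in Python: the tests exercise a discount function purely through documented inputs and expected outputs. The function and its business rule are invented for illustration, and its body is shown only so the example runs; a real black-box tester would never see it.

```python
import pytest

def apply_discount(total):                # hypothetical system under test
    return total * 0.9 if total >= 100 else total

# One test case per input partition around the assumed 100-unit threshold.
@pytest.mark.parametrize("total, expected", [
    (50.0, 50.0),      # partition: below threshold, no discount
    (100.0, 90.0),     # boundary: exactly at the threshold, 10% off
    (200.0, 180.0),    # partition: above threshold, 10% off
])
def test_apply_discount(total, expected):
    assert apply_discount(total) == pytest.approx(expected)
```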
White-Box Testing
White-box testing is the detailed investigation of internal logic and structure of the code. White-box testing is also called glass testing or open-box testing. In order to perform white-box testing on an application, a tester needs to know the internal workings of the code.
The tester needs to have a look inside the source code and find out which unit/chunk of the code is behaving inappropriately.
The following table lists the advantages and disadvantages of white-box testing.
Advantages | Disadvantages |
---|---|
As the tester has knowledge of the source code, it becomes very easy to find out which type of data can help in testing the application effectively. | Due to the fact that a skilled tester is needed to perform white-box testing, the costs are increased. |
It helps in optimizing the code. | Sometimes it is impossible to look into every nook and corner to find out hidden errors that may create problems, as many paths will go untested. |
Extra lines of code that could introduce hidden defects can be identified and removed. | It is difficult to maintain white-box testing, as it requires specialized tools like code analyzers and debugging tools. |
Due to the tester's knowledge about the code, maximum coverage is attained during test scenario writing. |
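The white-box mindset can be sketched in a few lines: having read the source and seen that the function below has two branches, the tester deliberately picks one input to drive each path. The function is invented for illustration.

```python
def classify(n):                  # hypothetical unit under test
    if n < 0:
        return "negative"
    return "non-negative"

def test_negative_branch():
    assert classify(-1) == "negative"       # exercises the `if` path

def test_non_negative_branch():
    assert classify(0) == "non-negative"    # exercises the fall-through path
```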
Grey-Box Testing
Grey-box testing is a technique to test the application with limited knowledge of its internal workings. In software testing, the phrase 'the more you know, the better' carries a lot of weight while testing an application.
Mastering the domain of a system always gives the tester an edge over someone with limited domain knowledge. Unlike black-box testing, where the tester only tests the application's user interface; in grey-box testing, the tester has access to design documents and the database. Having this knowledge, a tester can prepare better test data and test scenarios while making a test plan.
Advantages | Disadvantages |
---|---|
Offers combined benefits of black-box and white-box testing wherever possible. | Since access to the source code is not available, the ability to go over the code and test coverage is limited. |
Grey box testers don't rely on the source code; instead they rely on interface definition and functional specifications. | The tests can be redundant if the software designer has already run a test case. |
Based on the limited information available, a grey-box tester can design excellent test scenarios especially around communication protocols and data type handling. | Testing every possible input stream is unrealistic because it would take an unreasonable amount of time; therefore, many program paths will go untested. |
The test is done from the point of view of the user and not the designer. |
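A rough grey-box sketch: the test drives the application through its public function, but verifies the outcome directly in the database, using design-document knowledge of the (hypothetical) schema.

```python
import sqlite3

def register_user(conn, name):            # hypothetical unit under test
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
    conn.commit()

def test_register_user_persists_row():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

    register_user(conn, "alice")          # exercise the public API

    # Grey-box step: query the table directly, using schema knowledge,
    # to confirm the expected change was reflected in the database.
    rows = conn.execute("SELECT name FROM users").fetchall()
    assert rows == [("alice",)]
```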
A Comparison of Testing Methods
The following table lists the points that differentiate black-box testing, grey-box testing, and white-box testing.
Black-Box Testing | Grey-Box Testing | White-Box Testing |
---|---|---|
The internal workings of an application need not be known. | The tester has limited knowledge of the internal workings of the application. | Tester has full knowledge of the internal workings of the application. |
Also known as closed-box testing, data-driven testing, or functional testing. | Also known as translucent testing, as the tester has limited knowledge of the insides of the application. | Also known as clear-box testing, structural testing, or code-based testing. |
Non-Functional Testing

This section is based upon testing an application from its non-functional attributes. Non-functional testing involves testing a software against requirements which are non-functional in nature but important, such as performance, security, and user interface. Some of the important and commonly used non-functional testing types are discussed below.

Performance Testing

It is mostly used to identify any bottlenecks or performance issues rather than finding bugs in a software. There are different causes that contribute to lowering the performance of a software −
Performance testing is considered one of the important and mandatory testing types in terms of the following aspects −
Performance testing can be either qualitative or quantitative and can be divided into different sub-types such as load testing and stress testing.

Load Testing

It is a process of testing the behavior of a software by applying the maximum load in terms of accessing and manipulating large input data. It can be done at both normal and peak load conditions. This type of testing identifies the maximum capacity of the software and its behavior at peak time. Most of the time, load testing is performed with the help of automated tools such as LoadRunner, AppLoader, IBM Rational Performance Tester, Apache JMeter, Silk Performer, and Visual Studio Load Test. Virtual users (VUsers) are defined in the automated testing tool, and the script is executed to verify the load testing for the software. The number of users can be increased or decreased concurrently or incrementally based upon the requirements.

Stress Testing

Stress testing includes testing the behavior of a software under abnormal conditions. For example, it may include taking away some resources or applying a load beyond the actual load limit. The aim of stress testing is to test the software by applying load to the system and taking over the resources used by the software to identify the breaking point. This testing can be performed by testing different scenarios such as −
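Whatever the scenario, the core of load testing is many virtual users exercising the same operation concurrently. The standard-library sketch below illustrates the idea at toy scale; the operation and user count are placeholders for what a tool such as Apache JMeter or LoadRunner would drive with far more users and richer reporting.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def operation_under_test():               # placeholder for a real request
    time.sleep(0.01)                      # stand-in for server-side work
    return True

def timed_call(_):
    start = time.perf_counter()
    ok = operation_under_test()
    return ok, time.perf_counter() - start

VIRTUAL_USERS = 50                        # the "VUsers" described above
with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
    results = list(pool.map(timed_call, range(VIRTUAL_USERS)))

latencies = [t for ok, t in results if ok]
print(f"{len(latencies)}/{VIRTUAL_USERS} calls succeeded, "
      f"worst latency {max(latencies):.3f}s")
```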
Usability Testing

Usability testing is a black-box technique and is used to identify any error(s) and improvements in the software by observing the users through their usage and operation.

According to Nielsen, usability can be defined in terms of five factors: efficiency of use, learnability, memorability, errors/safety, and satisfaction. According to him, the usability of a product will be good and the system is usable if it possesses the above factors. Nigel Bevan and Macleod considered that usability is the quality requirement that can be measured as the outcome of interactions with a computer system. This requirement can be fulfilled and the end-user will be satisfied if the intended goals are achieved effectively with the use of proper resources. Molich in 2000 stated that a user-friendly system should fulfill the following five goals: easy to learn, easy to remember, efficient to use, satisfactory to use, and easy to understand. In addition to these definitions of usability, there are standards, quality models, and methods that define usability in the form of attributes and sub-attributes, such as ISO-9126, ISO-9241-11, ISO-13407, and IEEE Std 610.12.

UI vs Usability Testing

UI testing involves testing the Graphical User Interface of the software. UI testing ensures that the GUI functions according to the requirements and is tested in terms of color, alignment, size, and other properties. On the other hand, usability testing ensures a good and user-friendly GUI that can be easily handled. UI testing can be considered a sub-part of usability testing.

Security Testing

Security testing involves testing a software in order to identify any flaws and gaps from a security and vulnerability point of view. Listed below are the main aspects that security testing should ensure −
Portability Testing

Portability testing includes testing a software with the aim of ensuring its reusability and that it can be moved from one environment to another as well. Following are the strategies that can be used for portability testing −
Portability testing can be considered as one of the sub-parts of system testing, as this testing type includes overall testing of a software with respect to its usage over different environments. Computer hardware, operating systems, and browsers are the major focus of portability testing. Some of the pre-conditions for portability testing are as follows −
Testing documentation involves the documentation of artifacts that should be developed before or during the testing of Software. Documentation for software testing helps in estimating the testing effort required, test coverage, requirement tracking/tracing, etc. This section describes some of the commonly used documented artifacts related to software testing such as −
Test Plan

A test plan outlines the strategy that will be used to test an application, the resources that will be used, the test environment in which testing will be performed, the limitations of the testing, and the schedule of testing activities. Typically, the Quality Assurance Team Lead is responsible for writing a test plan. A test plan includes the following −
Test Scenario

It is a one-line statement that notifies what area in the application will be tested. Test scenarios are used to ensure that all process flows are tested from end to end. A particular area of an application can have as few as one test scenario or as many as a few hundred scenarios, depending on the magnitude and complexity of the application. The terms 'test scenario' and 'test case' are sometimes used interchangeably; however, a test scenario has several steps, whereas a test case has a single step. Viewed from this perspective, test scenarios are test cases, but they include several test cases and the sequence in which they should be executed. Apart from this, each test is dependent on the output from the previous test.

Test Case

Test cases involve a set of steps, conditions, and inputs that can be used while performing testing tasks. The main intent of this activity is to ensure whether a software passes or fails in terms of its functionality and other aspects. There are many types of test cases, such as functional, negative, error, logical, physical, and UI test cases. Furthermore, test cases are written to keep track of the testing coverage of a software. Generally, there are no formal templates that can be used during test case writing. However, the following components are always available and included in every test case −
Many test cases can be derived from a single test scenario. In addition, sometimes multiple test cases are written for a single software; collectively they are known as test suites.

Traceability Matrix

A Traceability Matrix (also known as a Requirement Traceability Matrix, or RTM) is a table that is used to trace requirements during the Software Development Life Cycle. It can be used for forward tracing (i.e., from Requirements to Design or Coding) or backward tracing (i.e., from Coding to Requirements). There are many user-defined templates for an RTM. Each requirement in the RTM document is linked with its associated test case so that testing can be done as per the mentioned requirements. Furthermore, the Bug ID is also included and linked with the associated requirements and test case. The main goals for this matrix are −
Estimating the effort required for testing is one of the major and important tasks in the SDLC. Correct estimation helps in testing the software with maximum coverage. This section describes some of the techniques that can be useful in estimating the effort required for testing.

Functional Point Analysis

This method is based on the analysis of the functional user requirements of the software with the following categories −
Test Point Analysis

This estimation process is used for function point analysis for black-box or acceptance testing. The main elements of this method are: Size, Productivity, Strategy, Interfacing, Complexity, and Uniformity.

Mark-II Method

It is an estimation method used for analyzing and measuring the estimation based on the end-user's functional view. The procedure for the Mark-II method is as follows −
Miscellaneous

You can use other popular estimation techniques such as −
Software testing is an investigation conducted to provide stakeholders with information about the quality of the software product or service under test.[1] Software testing can also provide an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation. Test techniques include the process of executing a program or application with the intent of finding software bugs (errors or other defects), and verifying that the software product is fit for use.
Software testing involves the execution of a software component or system component to evaluate one or more properties of interest. In general, these properties indicate the extent to which the component or system under test:
- meets the requirements that guided its design and development,
- responds correctly to all kinds of inputs,
- performs its functions within an acceptable time,
- is sufficiently usable,
- can be installed and run in its intended environments, and
- achieves the general result its stakeholders desire.
As the number of possible tests for even simple software components is practically infinite, all software testing uses some strategy to select tests that are feasible for the available time and resources. As a result, software testing typically (but not exclusively) attempts to execute a program or application with the intent of finding software bugs (errors or other defects). The job of testing is an iterative process as when one bug is fixed, it can illuminate other, deeper bugs, or can even create new ones.
Software testing can provide objective, independent information about the quality of software and risk of its failure to users or sponsors.[1]
Software testing can be conducted as soon as executable software (even if partially complete) exists. The overall approach to software development often determines when and how testing is conducted. For example, in a phased process, most testing occurs after system requirements have been defined and then implemented in testable programs. In contrast, under an agile approach, requirements, programming, and testing are often done concurrently.
Overview
Although software testing can determine the correctness of software under the assumption of some specific hypotheses (see the hierarchy of testing difficulty below), testing cannot identify all the defects within the software.[2] Instead, it furnishes a criticism or comparison that compares the state and behavior of the product against test oracles—principles or mechanisms by which someone might recognize a problem. These oracles may include (but are not limited to) specifications, contracts,[3] comparable products, past versions of the same product, inferences about intended or expected purpose, user or customer expectations, relevant standards, applicable laws, or other criteria.
A primary purpose of testing is to detect software failures so that defects may be discovered and corrected. Testing cannot establish that a product functions properly under all conditions, but only that it does not function properly under specific conditions.[4] The scope of software testing often includes the examination of code as well as the execution of that code in various environments and conditions, and examining the aspects of code: does it do what it is supposed to do and do what it needs to do. In the current culture of software development, a testing organization may be separate from the development team. There are various roles for testing team members. Information derived from software testing may be used to correct the process by which software is developed.[5]:41–43
Every software product has a target audience. For example, the audience for video game software is completely different from banking software. Therefore, when an organization develops or otherwise invests in a software product, it can assess whether the software product will be acceptable to its end users, its target audience, its purchasers and other stakeholders. Software testing aids the process of attempting to make this assessment.
Defects and failures
Not all software defects are caused by coding errors. One common source of expensive defects is requirement gaps, i.e., unrecognized requirements that result in errors of omission by the program designer.[5]:426 Requirement gaps can often be non-functional requirements such as testability, scalability, maintainability, performance, and security.
Software faults occur through the following processes. A programmer makes an error (mistake), which results in a defect (fault, bug) in the software source code. If this defect is executed, in certain situations the system will produce wrong results, causing a failure.[6] Not all defects will necessarily result in failures. For example, defects in the dead code will never result in failures. A defect can turn into a failure when the environment is changed. Examples of these changes in environment include the software being run on a new computer hardware platform, alterations in source data, or interacting with different software.[6] A single defect may result in a wide range of failure symptoms.
Input combinations and preconditions
A fundamental problem with software testing is that testing under all combinations of inputs and preconditions (initial state) is not feasible, even with a simple product.[4]:17–18[7] This means that the number of defects in a software product can be very large and defects that occur infrequently are difficult to find in testing. More significantly, non-functional dimensions of quality (how it is supposed to be versus what it is supposed to do)—usability, scalability, performance, compatibility, reliability—can be highly subjective; something that constitutes sufficient value to one person may be intolerable to another.
Software developers can't test everything, but they can use combinatorial test design to identify the minimum number of tests needed to get the coverage they want. Combinatorial test design enables users to get greater test coverage with fewer tests. Whether they are looking for speed or test depth, they can use combinatorial test design methods to build structured variation into their test cases.[8]
Economics
A study conducted by NIST in 2002 reports that software bugs cost the U.S. economy $59.5 billion annually. More than a third of this cost could be avoided if better software testing was performed.[9]
Outsourcing software testing because of costs is very common, with China, the Philippines and India being preferred destinations.[10]
Roles
Software testing can be done by dedicated software testers. Until the 1980s, the term 'software tester' was used generally, but later it was also seen as a separate profession. Regarding the periods and the different goals in software testing,[11] different roles have been established, such as test manager, test lead, test analyst, test designer, tester, automation developer, and test administrator. Software testing can also be performed by non-dedicated software testers.[12]
History
Glenford J. Myers initially introduced the separation of debugging from testing in 1979.[13] Although his attention was on breakage testing ('A successful test case is one that detects an as-yet undiscovered error.'[13]:16) it illustrated the desire of the software engineering community to separate fundamental development activities, such as debugging, from that of verification.
Testing approach
Static, dynamic and passive testing
There are many approaches available in software testing. Reviews, walkthroughs, or inspections are referred to as static testing, whereas executing programmed code with a given set of test cases is referred to as dynamic testing.[14][15]
Static testing is often implicit, like proofreading, plus when programming tools/text editors check source code structure or compilers (pre-compilers) check syntax and data flow as static program analysis. Dynamic testing takes place when the program itself is run. Dynamic testing may begin before the program is 100% complete in order to test particular sections of code, and is applied to discrete functions or modules.[14][15] Typical techniques for this are either using stubs/drivers or execution from a debugger environment.[15]
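As a small illustration of the stub technique, the sketch below tests a checkout function whose payment dependency is not implemented yet by substituting a canned stand-in; all names are invented.

```python
from unittest.mock import Mock

def checkout(cart_total, gateway):        # hypothetical unit under test
    receipt = gateway.charge(cart_total)
    return receipt["status"] == "ok"

def test_checkout_with_stubbed_gateway():
    gateway = Mock()                                 # stub for the missing module
    gateway.charge.return_value = {"status": "ok"}   # canned response

    assert checkout(42.0, gateway) is True
    gateway.charge.assert_called_once_with(42.0)     # the interface was exercised
```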
Static testing involves verification, whereas dynamic testing also involves validation.[15]
Passive testing means verifying the system behavior without any interaction with the software product. Contrary to active testing, testers do not provide any test data but look at system logs and traces. They mine for patterns and specific behavior in order to make some kind of decisions.[16] This is related to offline runtime verification and log analysis.
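A toy illustration of the passive approach: rather than driving the system, the script scans an existing log for a pattern that signals misbehavior. The log format is invented; in practice the lines would be read from the system's real log files.

```python
import re

# Hypothetical log excerpt; in practice this would come from a file.
LOG = """\
2024-01-01 10:00:01 INFO  request id=17 ok
2024-01-01 10:00:02 ERROR request id=18 timeout
2024-01-01 10:00:03 INFO  request id=19 ok
"""

errors = [line for line in LOG.splitlines() if re.search(r"\bERROR\b", line)]
print(f"{len(errors)} error line(s) found:")
for line in errors:
    print(" ", line)
```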
Exploratory approach
Exploratory testing is an approach to software testing that is concisely described as simultaneous learning, test design and test execution. Cem Kaner, who coined the term in 1984,[17] defines exploratory testing as 'a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the quality of his/her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project.'[18]
The 'box' approach
Software testing methods are traditionally divided into white- and black-box testing. These two approaches are used to describe the point of view that the tester takes when designing test cases. A hybrid approach called grey-box testing may also be applied to software testing methodology.[19][20] With the concept of grey-box testing—which develops tests from specific design elements—gaining prominence, this 'arbitrary distinction' between black- and white-box testing has faded somewhat.[21]
White-box testing
White Box Testing Diagram
White-box testing (also known as clear box testing, glass box testing, transparent box testing, and structural testing) verifies the internal structures or workings of a program, as opposed to the functionality exposed to the end-user. In white-box testing, an internal perspective of the system (the source code), as well as programming skills, are used to design test cases. The tester chooses inputs to exercise paths through the code and determine the appropriate outputs.[19][20] This is analogous to testing nodes in a circuit, e.g., in-circuit testing (ICT).
While white-box testing can be applied at the unit, integration, and system levels of the software testing process, it is usually done at the unit level.[21] It can test paths within a unit, paths between units during integration, and between subsystems during a system–level test. Though this method of test design can uncover many errors or problems, it might not detect unimplemented parts of the specification or missing requirements.
Techniques used in white-box testing include:[20][22]
- API testing – testing of the application using public and private APIs (application programming interfaces)
- Code coverage – creating tests to satisfy some criteria of code coverage (e.g., the test designer can create tests to cause all statements in the program to be executed at least once)
- Fault injection methods – intentionally introducing faults to gauge the efficacy of testing strategies
- Mutation testing methods
- Static testing methods
Code coverage tools can evaluate the completeness of a test suite that was created with any method, including black-box testing. This allows the software team to examine parts of a system that are rarely tested and ensures that the most important function points have been tested.[23] Code coverage as a software metric can be reported as a percentage for:[19][23][24]
- Function coverage, which reports on functions executed
- Statement coverage, which reports on the number of lines executed to complete the test
- Decision coverage, which reports on whether both the True and the False branch of a given test has been executed
100% statement coverage ensures that every statement in the code is executed at least once, though it does not guarantee that every branch or path (in terms of control flow) is taken. It is helpful in ensuring correct functionality, but not sufficient, since the same code may process different inputs correctly or incorrectly.[25] Pseudo-tested functions and methods are those that are covered but not specified (it is possible to remove their body without breaking any test case).[26]
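The insufficiency caveat can be seen in a tiny sketch: the single test below achieves 100% statement coverage of the (deliberately buggy, invented) function, yet never exposes the defect.

```python
def average(values):                  # invented function with a latent bug
    return sum(values) / len(values)  # fails when `values` is empty

def test_average_covers_every_statement():
    assert average([2, 4]) == 3       # every statement runs; the bug stays hidden

# A second input, average([]), would raise ZeroDivisionError --
# statement coverage alone never forced us to try it.
```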
Black-box testing
Black box diagram
Black-box testing (also known as functional testing) treats the software as a 'black box,' examining functionality without any knowledge of internal implementation, without seeing the source code. The testers are only aware of what the software is supposed to do, not how it does it.[27] Black-box testing methods include: equivalence partitioning, boundary value analysis, all-pairs testing, state transition tables, decision table testing, fuzz testing, model-based testing, use case testing, exploratory testing, and specification-based testing.[19][20][24]
Specification-based testing aims to test the functionality of software according to the applicable requirements.[28] This level of testing usually requires thorough test cases to be provided to the tester, who then can simply verify that for a given input, the output value (or behavior) either 'is' or 'is not' the same as the expected value specified in the test case. Test cases are built around specifications and requirements, i.e., what the application is supposed to do. It uses external descriptions of the software, including specifications, requirements, and designs to derive test cases. These tests can be functional or non-functional, though usually functional.
Specification-based testing may be necessary to assure correct functionality, but it is insufficient to guard against complex or high-risk situations.[29]
One advantage of the black box technique is that no programming knowledge is required. Whatever biases the programmers may have had, the tester likely has a different set and may emphasize different areas of functionality. On the other hand, black-box testing has been said to be 'like a walk in a dark labyrinth without a flashlight.'[30] Because they do not examine the source code, there are situations when a tester writes many test cases to check something that could have been tested by only one test case or leaves some parts of the program untested.
This method of test can be applied to all levels of software testing: unit, integration, system, and acceptance.[21] It typically comprises most if not all testing at higher levels, but can also dominate unit testing.
Component interface testing
Component interface testing is a variation of black-box testing, with the focus on the data values beyond just the related actions of a subsystem component.[31] The practice of component interface testing can be used to check the handling of data passed between various units, or subsystem components, beyond full integration testing between those units.[32][33] The data being passed can be considered as 'message packets' and the range or data types can be checked, for data generated from one unit, and tested for validity before being passed into another unit. One option for interface testing is to keep a separate log file of data items being passed, often with a timestamp logged to allow analysis of thousands of cases of data passed between units for days or weeks. Tests can include checking the handling of some extreme data values while other interface variables are passed as normal values.[32] Unusual data values in an interface can help explain unexpected performance in the next unit.
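As a sketch of the idea, the 'message packet' produced by one unit is checked for type and range validity before being handed to the next unit; the packet layout and valid range are assumptions made up for the example.

```python
def produce_reading():                    # hypothetical upstream unit
    return {"sensor_id": 7, "celsius": 21.5}

def validate_packet(packet):
    # Gate between the two units: check types and value ranges.
    assert isinstance(packet["sensor_id"], int)
    assert -273.15 <= packet["celsius"] <= 1000.0   # assumed valid range

def test_producer_emits_valid_packet():
    validate_packet(produce_reading())
```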
Visual testing
The aim of visual testing is to provide developers with the ability to examine what was happening at the point of software failure by presenting the data in such a way that the developer can easily find the information she or he requires, and the information is expressed clearly.[34][35]
At the core of visual testing is the idea that showing someone a problem (or a test failure), rather than just describing it, greatly increases clarity and understanding. Visual testing, therefore, requires the recording of the entire test process – capturing everything that occurs on the test system in video format. Output videos are supplemented by real-time tester input via picture-in-a-picture webcam and audio commentary from microphones.
Visual testing provides a number of advantages. The quality of communication is increased drastically because testers can show the problem (and the events leading up to it) to the developer as opposed to just describing it and the need to replicate test failures will cease to exist in many cases. The developer will have all the evidence she or he requires of a test failure and can instead focus on the cause of the fault and how it should be fixed.
Ad hoc testing and exploratory testing are important methodologies for checking software integrity, because they require less preparation time to implement, while the important bugs can be found quickly.[36] In ad hoc testing, where testing takes place in an improvised, impromptu way, the ability of the tester(s) to base testing off documented methods and then improvise variations of those tests can result in more rigorous examination of defect fixes.[36] However, unless strict documentation of the procedures are maintained, one of the limits of ad hoc testing is lack of repeatability.[36]
Grey-box testing
Grey-box testing (American spelling: gray-box testing) involves having knowledge of internal data structures and algorithms for purposes of designing tests while executing those tests at the user, or black-box level. The tester will often have access to both 'the source code and the executable binary.'[37] Grey-box testing may also include reverse engineering (using dynamic code analysis) to determine, for instance, boundary values or error messages.[37] Manipulating input data and formatting output do not qualify as grey-box, as the input and output are clearly outside of the 'black box' that we are calling the system under test. This distinction is particularly important when conducting integration testing between two modules of code written by two different developers, where only the interfaces are exposed for the test.
By knowing the underlying concepts of how the software works, the tester makes better-informed testing choices while testing the software from outside. Typically, a grey-box tester will be permitted to set up an isolated testing environment with activities such as seeding a database. The tester can observe the state of the product being tested after performing certain actions such as executing SQL statements against the database and then executing queries to ensure that the expected changes have been reflected. Grey-box testing implements intelligent test scenarios, based on limited information. This will particularly apply to data type handling, exception handling, and so on.[38]
Testing levels
Broadly speaking, there are at least three levels of testing: unit testing, integration testing, and system testing.[39][40][41][42] However, a fourth level, acceptance testing, may be included by developers. This may be in the form of operational acceptance testing or be simple end-user (beta) testing, testing to ensure the software meets functional expectations.[43][44][45] Tests are frequently grouped into one of these levels by where they are added in the software development process, or by the level of specificity of the test.
Unit testing
Unit testing refers to tests that verify the functionality of a specific section of code, usually at the function level. In an object-oriented environment, this is usually at the class level, and the minimal unit tests include the constructors and destructors.[46]
These types of tests are usually written by developers as they work on code (white-box style), to ensure that the specific function is working as expected. One function might have multiple tests, to catch corner cases or other branches in the code. Unit testing alone cannot verify the functionality of a piece of software, but rather is used to ensure that the building blocks of the software work independently from each other.
Unit testing is a software development process that involves a synchronized application of a broad spectrum of defect prevention and detection strategies in order to reduce software development risks, time, and costs. It is performed by the software developer or engineer during the construction phase of the software development life cycle. Unit testing aims to eliminate construction errors before code is promoted to additional testing; this strategy is intended to increase the quality of the resulting software as well as the efficiency of the overall development process.
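A minimal unit-test sketch in this spirit: several tests for one small function, including a corner case, written alongside the code in the developer's white-box style. The function is invented for illustration.

```python
import unittest

def slugify(title):                       # invented unit under test
    return "-".join(title.lower().split())

class SlugifyTests(unittest.TestCase):
    def test_basic_title(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_extra_whitespace_corner_case(self):
        self.assertEqual(slugify("  Hello   World "), "hello-world")

    def test_empty_title(self):
        self.assertEqual(slugify(""), "")

if __name__ == "__main__":
    unittest.main()
```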
Depending on the organization's expectations for software development, unit testing might include static code analysis, data-flow analysis, metrics analysis, peer code reviews, code coverage analysis, and other software testing practices.

Integration testing

Integration testing is any type of software testing that seeks to verify the interfaces between components against a software design. Software components may be integrated in an iterative way or all together ('big bang'). Normally the former is considered a better practice since it allows interface issues to be located more quickly and fixed.
Integration testing works to expose defects in the interfaces and interaction between integrated components (modules). Progressively larger groups of tested software components corresponding to elements of the architectural design are integrated and tested until the software works as a system.[47]
Integration tests usually involve a lot of code and produce traces that are larger than those produced by unit tests. This has an impact on the ease of localizing the fault when an integration test fails. To overcome this issue, it has been proposed to automatically cut large tests into smaller pieces to improve fault localization.[48]
System testing
System testing tests a completely integrated system to verify that the system meets its requirements.[49] For example, a system test might involve testing a logon interface, then creating and editing an entry, plus sending or printing results, followed by summary processing or deletion (or archiving) of entries, then logoff.
Operational acceptance testing
Operational acceptance is used to conduct operational readiness (pre-release) of a product, service or system as part of a quality management system. OAT is a common type of non-functional software testing, used mainly in software development and software maintenance projects. This type of testing focuses on the operational readiness of the system to be supported, or to become part of the production environment. Hence, it is also known as operational readiness testing (ORT) or Operations readiness and assurance (OR&A) testing. Functional testing within OAT is limited to those tests that are required to verify the non-functional aspects of the system.
In addition, the software testing should ensure that the portability of the system, as well as working as expected, does not also damage or partially corrupt its operating environment or cause other processes within that environment to become inoperative.[50]
Testing types, techniques and tactics
Different labels and ways of grouping testing may be testing types, software testing tactics or techniques.[51]
Installation testing
Most software systems have installation procedures that are needed before they can be used for their main purpose. Testing these procedures to achieve an installed software system that may be used is known as installation testing.
Compatibility testing
A common cause of software failure (real or perceived) is a lack of its compatibility with other application software, operating systems (or operating system versions, old or new), or target environments that differ greatly from the original (such as a terminal or GUI application intended to be run on the desktop now being required to become a Web application, which must render in a Web browser). For example, in the case of a lack of backward compatibility, this can occur because the programmers develop and test software only on the latest version of the target environment, which not all users may be running. This results in the unintended consequence that the latest work may not function on earlier versions of the target environment, or on older hardware that earlier versions of the target environment were capable of using. Sometimes such issues can be fixed by proactively abstracting operating system functionality into a separate program module or library.
Smoke and sanity testing
Sanity testing determines whether it is reasonable to proceed with further testing.
Smoke testing consists of minimal attempts to operate the software, designed to determine whether there are any basic problems that will prevent it from working at all. Such tests can be used as a build verification test.
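As a minimal sketch of what a smoke test might look like (in Python with pytest conventions; `myapp`, `create_app`, and `health_check` are hypothetical names, not a real API): the checks only establish that the build is not dead on arrival, which is enough to gate further testing.

```python
import importlib

def test_package_imports():
    # The package can be loaded at all -- the build installs and imports.
    assert importlib.import_module("myapp") is not None

def test_app_reports_healthy():
    # The application object can be constructed and reports itself alive.
    myapp = importlib.import_module("myapp")
    app = myapp.create_app()           # hypothetical application factory
    assert app.health_check() == "ok"  # hypothetical liveness probe
```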
Regression testing
Regression testing focuses on finding defects after a major code change has occurred. Specifically, it seeks to uncover software regressions, such as degraded or lost features, including old bugs that have come back. Such regressions occur whenever software functionality that was previously working correctly stops working as intended. Typically, regressions occur as an unintended consequence of program changes, when the newly developed part of the software collides with the previously existing code. Regression testing is typically the largest test effort in commercial software development,[52] due to checking numerous details in prior software features; even new software can be developed while using some old test cases to test parts of the new design to ensure prior functionality is still supported.
Common methods of regression testing include re-running previous sets of test cases and checking whether previously fixed faults have re-emerged. The depth of testing depends on the phase in the release process and the risk of the added features. Regression suites can either be complete, for changes added late in the release or deemed to be risky, or very shallow, consisting of positive tests on each feature, if the changes are early in the release or deemed to be of low risk. In regression testing, it is important to have strong assertions on the existing behavior. To that end, it is possible to generate and add new assertions to existing test cases; this is known as automatic test improvement.[53]
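A hedged sketch of a regression guard in Python: a previously fixed fault is pinned by a dedicated test so it cannot silently return. The `slugify` function, its module path, and the bug number are invented for illustration.

```python
from myapp.text import slugify  # hypothetical module under test

def test_slugify_basic():
    # Ordinary functional expectation.
    assert slugify("Hello World") == "hello-world"

def test_slugify_empty_string_regression():
    # Regression guard for (imaginary) bug #1234: empty input once raised
    # IndexError. Keeping this test in the suite ensures the fix stays fixed.
    assert slugify("") == ""
```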
Acceptance testing
Acceptance testing can mean one of two things:
- A smoke test is used as a build acceptance test prior to further testing, e.g., before integration or regression.
- Acceptance testing performed by the customer, often in their lab environment on their own hardware, is known as user acceptance testing (UAT). Acceptance testing may be performed as part of the hand-off process between any two phases of development.
Alpha testing
Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing before the software goes to beta testing.[54]
Beta testing
Beta testing comes after alpha testing and can be considered a form of external user acceptance testing. Versions of the software, known as beta versions, are released to a limited audience outside of the programming team known as beta testers. The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Beta versions can be made available to the open public to increase the feedback field to a maximal number of future users and to deliver value earlier, for an extended or even indefinite period of time (perpetual beta).[55]
Functional vs non-functional testing
Functional testing refers to activities that verify a specific action or function of the code. These are usually found in the code requirements documentation, although some development methodologies work from use cases or user stories. Functional tests tend to answer the question of 'can the user do this' or 'does this particular feature work.'
Non-functional testing refers to aspects of the software that may not be related to a specific function or user action, such as scalability or other performance, behavior under certain constraints, or security. Testing will determine the breaking point, the point at which extremes of scalability or performance lead to unstable execution. Non-functional requirements tend to be those that reflect the quality of the product, particularly in the context of the suitability perspective of its users.
Continuous testing
Continuous testing is the process of executing automated tests as part of the software delivery pipeline to obtain immediate feedback on the business risks associated with a software release candidate.[56][57] Continuous testing includes the validation of both functional requirements and non-functional requirements; the scope of testing extends from validating bottom-up requirements or user stories to assessing the system requirements associated with overarching business goals.[58][59][60]
Destructive testing
Destructive testing attempts to cause the software or a sub-system to fail. It verifies that the software functions properly even when it receives invalid or unexpected inputs, thereby establishing the robustness of input validation and error-management routines. Software fault injection, in the form of fuzzing, is an example of failure testing. Various commercial non-functional testing tools are linked from the software fault injection page; there are also numerous open-source and free software tools available that perform destructive testing.
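The following Python sketch illustrates fuzzing in its simplest form: random byte strings are fed to a parser, which is expected either to succeed or to fail with its documented exception, never to crash otherwise. `parse_config` and its `ValueError` contract are assumptions made for the example.

```python
import random

def fuzz_parse_config(parse_config, iterations=10_000, seed=0):
    """Feed random bytes to a parser; only ValueError is an accepted failure."""
    rng = random.Random(seed)  # fixed seed keeps any failure reproducible
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(64)))
        try:
            parse_config(data)
        except ValueError:
            pass  # documented, acceptable failure mode
        # Any other exception propagates and fails the fuzz run,
        # pointing at a robustness defect in input handling.
```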
Software performance testing
Performance testing is generally executed to determine how a system or sub-system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate or verify other quality attributes of the system, such as scalability, reliability and resource usage.
Load testing is primarily concerned with testing that the system can continue to operate under a specific load, whether that be large quantities of data or a large number of users; this is generally referred to as software scalability. When load testing is carried out over an extended period as a non-functional activity, it is often referred to as endurance testing. Volume testing is a way to test software functions even when certain components (for example a file or database) increase radically in size. Stress testing is a way to test reliability under unexpected or rare workloads. Stability testing (often referred to as load or endurance testing) checks whether the software can continue to function well over an acceptable period.
There is little agreement on what the specific goals of performance testing are. The terms load testing, performance testing, scalability testing, and volume testing, are often used interchangeably.
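As a rough illustration only (real load tests use dedicated tools), the Python sketch below drives a hypothetical HTTP endpoint with concurrent threads and reports latency percentiles. The URL, user count, and request counts are assumptions, not recommendations.

```python
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8000/health"  # assumed endpoint of the system under test

def timed_request(_):
    # Issue one request and return its wall-clock latency in seconds.
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=5) as resp:
        resp.read()
    return time.perf_counter() - start

def run_load(users=50, requests_per_user=20):
    # N worker threads approximate N concurrent users.
    with ThreadPoolExecutor(max_workers=users) as pool:
        latencies = list(pool.map(timed_request, range(users * requests_per_user)))
    latencies.sort()
    print(f"median: {statistics.median(latencies) * 1000:.1f} ms")
    print(f"p95:    {latencies[int(len(latencies) * 0.95)] * 1000:.1f} ms")

if __name__ == "__main__":
    run_load()
```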
Real-time software systems have strict timing constraints. To test if timing constraints are met, real-time testing is used.
Usability testing
Usability testing checks whether the user interface is easy to use and understand. It is concerned mainly with the use of the application. This is not a kind of testing that can be automated; actual human users are needed, monitored by skilled UI designers.
Accessibility testing
Accessibility testing may include compliance with standards such as:
- Web Accessibility Initiative (WAI) of the World Wide Web Consortium (W3C)
Security testing
Security testing is essential for software that processes confidential data to prevent system intrusion by hackers.
The International Organization for Standardization (ISO) defines this as a 'type of testing conducted to evaluate the degree to which a test item, and associated data and information, are protected so that unauthorised persons or systems cannot use, read or modify them, and authorized persons or systems are not denied access to them.'[61]
Internationalization and localization
Testing for internationalization and localization validates that the software can be used with different languages and geographic regions. The process of pseudolocalization is used to test the ability of an application to be translated to another language, and to make it easier to identify when the localization process may introduce new bugs into the product.
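A minimal pseudolocalization sketch in Python: letters are swapped for accented look-alikes and the string is padded, so hard-coded or length-fragile UI text stands out before any real translation exists. The transformation rules here are illustrative, not a standard.

```python
# Map plain vowels to accented look-alikes; still readable, clearly "foreign".
ACCENTED = str.maketrans("AEIOUaeiou", "ÁÉÍÓÚáéíóú")

def pseudolocalize(message: str, expansion: float = 0.3) -> str:
    body = message.translate(ACCENTED)
    padding = "~" * int(len(message) * expansion)  # simulate longer translations
    return f"[{body}{padding}]"  # brackets reveal clipped or concatenated strings

print(pseudolocalize("Save changes"))  # -> [Sávé chángés~~~]
```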
Globalization testing verifies that the software is adapted for a new culture (such as different currencies or time zones).[62]
Actual translation to human languages must be tested, too. Possible localization and globalization failures include:
- Software is often localized by translating a list of strings out of context, and the translator may choose the wrong translation for an ambiguous source string.
- Technical terminology may become inconsistent, if the project is translated by several people without proper coordination or if the translator is imprudent.
- Literal word-for-word translations may sound inappropriate, artificial or too technical in the target language.
- Untranslated messages in the original language may be left hard coded in the source code.
- Some messages may be created automatically at run time and the resulting string may be ungrammatical, functionally incorrect, misleading or confusing.
- Software may use a keyboard shortcut that has no function on the source language's keyboard layout, but is used for typing characters in the layout of the target language.
- Software may lack support for the character encoding of the target language.
- Fonts and font sizes that are appropriate in the source language may be inappropriate in the target language; for example, CJK characters may become unreadable, if the font is too small.
- A string in the target language may be longer than the software can handle. This may make the string partly invisible to the user or cause the software to crash or malfunction.
- Software may lack proper support for reading or writing bi-directional text.
- Software may display images with text that was not localized.
- Localized operating systems may have differently named system configuration files and environment variables and different formats for date and currency.
Development testing
Development Testing is a software development process that involves the synchronized application of a broad spectrum of defect prevention and detection strategies in order to reduce software development risks, time, and costs. It is performed by the software developer or engineer during the construction phase of the software development lifecycle. Development Testing aims to eliminate construction errors before code is promoted to other testing; this strategy is intended to increase the quality of the resulting software as well as the efficiency of the overall development process.
Depending on the organization's expectations for software development, Development Testing might include static code analysis, data flow analysis, metrics analysis, peer code reviews, unit testing, code coverage analysis, traceability, and other software testing practices.
A/B testing
A/B testing is a method of running a controlled experiment to determine if a proposed change is more effective than the current approach. Customers are routed to either a current version (control) of a feature, or to a modified version (treatment) and data is collected to determine which version is better at achieving the desired outcome.
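A sketch of the mechanics in Python, assuming a hashed 50/50 split and a two-proportion z-test; the conversion numbers are invented, and real experiments need more care (sample sizing, guardrail metrics, multiple-testing corrections).

```python
import hashlib
import math

def assign_variant(user_id: str) -> str:
    # Deterministic bucketing: the same user always sees the same variant.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "treatment" if bucket < 50 else "control"

def z_score(conv_a, n_a, conv_b, n_b):
    # Two-proportion z-test for the difference in conversion rates.
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Invented example: 120/1000 control vs 150/1000 treatment conversions.
z = z_score(120, 1000, 150, 1000)
print(f"z = {z:.2f}; |z| > 1.96 suggests significance at the 5% level: {abs(z) > 1.96}")
```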
Concurrent testing
Concurrent or concurrency testing assesses the behaviour and performance of software and systems that use concurrent computing, generally under normal usage conditions. Typical problems this type of testing will expose are deadlocks, race conditions and problems with shared memory/resource handling.
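The Python sketch below shows a concurrency test that tries to expose a lost-update race on a shared counter. Because `self.value += 1` is a non-atomic read-modify-write, updates can be lost under thread interleaving; note that races are nondeterministic, so such a test may still pass on runs where no unlucky interleaving occurs.

```python
import threading

class Counter:
    def __init__(self):
        self.value = 0

    def increment(self, times=100_000):
        for _ in range(times):
            self.value += 1  # racy: load, add, store are separate steps

def test_counter_survives_concurrent_increments():
    counter = Counter()
    threads = [threading.Thread(target=counter.increment) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Fails whenever an update was lost; a correct implementation would
    # guard the update with a lock or use an atomic primitive.
    assert counter.value == 400_000
```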
Conformance testing or type testing
In software testing, conformance testing verifies that a product performs according to its specified standards. Compilers, for instance, are extensively tested to determine whether they meet the recognized standard for that language.
Output comparison testing
Comparing the program's output against a stored expected output, whether as a data comparison of text or screenshots of the UI,[63]:195 is sometimes called snapshot testing or golden master testing. Unlike many other forms of testing, this cannot detect failures automatically; instead, it requires that a human evaluate the output for inconsistencies.
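A golden-master sketch in Python: output is compared byte-for-byte with a stored reference that a human vetted once, and the test only flags divergence. `render_report`, its module path, and the golden-file location are hypothetical.

```python
from pathlib import Path

from myapp.reports import render_report  # hypothetical function under test

GOLDEN = Path("tests/golden/report.txt")

def test_report_matches_golden_master():
    actual = render_report(customer_id=42)
    if not GOLDEN.exists():
        # First run: record the output, then have a human review it once.
        GOLDEN.write_text(actual)
    # Subsequent runs flag any divergence from the vetted reference.
    assert actual == GOLDEN.read_text()
```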
Testing process
Traditional waterfall development model
A common practice in waterfall development is that testing is performed by an independent group of testers. This can happen:
- after the functionality is developed, but before it is shipped to the customer.[64] This practice often results in the testing phase being used as a project buffer to compensate for project delays, thereby compromising the time devoted to testing.[13]:145–146
- at the same moment the development project starts, as a continuous process until the project finishes.[65]
However, even in the waterfall development model, unit testing is often done by the software development team even when further testing is done by a separate team.[66]
Agile or XP development model
In contrast, some emerging software disciplines such as extreme programming and the agile software development movement, adhere to a 'test-driven software development' model. In this process, unit tests are written first, by the software engineers (often with pair programming in the extreme programming methodology). The tests are expected to fail initially. Each failing test is followed by writing just enough code to make it pass.[67] This means the test suites are continuously updated as new failure conditions and corner cases are discovered, and they are integrated with any regression tests that are developed. Unit tests are maintained along with the rest of the software source code and generally integrated into the build process (with inherently interactive tests being relegated to a partially manual build acceptance process).
The ultimate goals of this test process are to support continuous integration and to reduce defect rates.[68][67]
This methodology increases the testing effort done by development, before reaching any formal testing team. In some other development models, most of the test execution occurs after the requirements have been defined and the coding process has been completed.
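One red-green-refactor cycle of the test-driven process described above might look like the following Python sketch: the test is written first and fails (red), then just enough code is written to make it pass (green), after which the test acts as a safety net for refactoring. `fizzbuzz` is a toy example, not taken from the text.

```python
import unittest

def fizzbuzz(n: int) -> str:
    # Written *after* the test below, with just enough logic to pass it.
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

class TestFizzBuzz(unittest.TestCase):
    # Step 1 (red): this test existed before fizzbuzz() and initially failed.
    def test_multiples(self):
        self.assertEqual(fizzbuzz(3), "Fizz")
        self.assertEqual(fizzbuzz(5), "Buzz")
        self.assertEqual(fizzbuzz(15), "FizzBuzz")
        self.assertEqual(fizzbuzz(7), "7")

if __name__ == "__main__":
    unittest.main()
```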
A sample testing cycle
Although variations exist between organizations, there is a typical cycle for testing.[2] The sample below is common among organizations employing the Waterfall development model. The same practices are commonly found in other development models, but might not be as clear or explicit.
- Requirements analysis: Testing should begin in the requirements phase of the software development life cycle. During the design phase, testers work to determine what aspects of a design are testable and with what parameters those tests work.
- Test planning: Test strategy, test plan, testbed creation. Since many activities will be carried out during testing, a plan is needed.
- Test development: Test procedures, test scenarios, test cases, test datasets, test scripts to use in testing software.
- Test execution: Testers execute the software based on the plans and test documents, then report any errors found to the development team. This step can be difficult when test execution requires programming knowledge the testers lack.
- Test reporting: Once testing is completed, testers generate metrics and make final reports on their test effort and whether or not the software tested is ready for release.
- Test result analysis: Or defect analysis, is done by the development team, usually along with the client, in order to decide which defects should be assigned, fixed, rejected (i.e., the software is found to be working properly), or deferred to be dealt with later.
- Defect Retesting: Once a defect has been dealt with by the development team, it is retested by the testing team.
- Regression testing: It is common to have a small test program built of a subset of tests, for each integration of new, modified, or fixed software, in order to ensure that the latest delivery has not ruined anything and that the software product as a whole is still working correctly.
- Test Closure: Once the testing meets the exit criteria, key outputs such as lessons learned, results, logs, and documents related to the project are archived and used as a reference for future projects.
Automated testing
Many programming groups rely increasingly on automated testing, especially groups that use test-driven development. There are many frameworks for writing tests in, and continuous integration software will run tests automatically every time code is checked into a version control system.
While automation cannot reproduce everything that a human can do (and all the ways they think of doing it), it can be very useful for regression testing. However, it does require a well-developed test suite of testing scripts in order to be truly useful.
Testing tools
Program testing and fault detection can be aided significantly by testing tools and debuggers. Testing/debug tools include features such as:
- Program monitors, permitting full or partial monitoring of program code including:
- Instruction set simulator, permitting complete instruction level monitoring and trace facilities
- Hypervisor, permitting complete control of the execution of program code including:
- Program animation, permitting step-by-step execution and conditional breakpoint at source level or in machine code
- Code coverage reports
- Formatted dump or symbolic debugging, tools allowing inspection of program variables on error or at chosen points
- Automated functional Graphical User Interface (GUI) testing tools are used to repeat system-level tests through the GUI
- Benchmarks, allowing run-time performance comparisons to be made
- Performance analysis (or profiling tools) that can help to highlight hot spots and resource usage
Some of these features may be incorporated into a single composite tool or an Integrated Development Environment (IDE).
Measurement in software testing
Quality measures include such topics as correctness, completeness, security and ISO/IEC 9126 requirements such as capability, reliability, efficiency, portability, maintainability, compatibility, and usability.
There are a number of frequently used software metrics, or measures, which are used to assist in determining the state of the software or the adequacy of the testing.
Hierarchy of testing difficulty
Based on the number of test cases required to construct a complete test suite in each context (i.e., a test suite such that, if it is applied to the implementation under test, we collect enough information to precisely determine whether the system is correct or incorrect according to some specification), a hierarchy of testing difficulty has been proposed.[69][70] It includes the following testability classes:
- Class I: there exists a finite complete test suite.
- Class II: any partial distinguishing rate (i.e., any incomplete capability to distinguish correct systems from incorrect systems) can be reached with a finite test suite.
- Class III: there exists a countable complete test suite.
- Class IV: there exists a complete test suite.
- Class V: all cases.
It has been proved that each class is strictly included in the next. For instance, testing when we assume that the behavior of the implementation under test can be denoted by a deterministic finite-state machine for some known finite sets of inputs and outputs and with some known number of states belongs to Class I (and all subsequent classes). However, if the number of states is not known, then it only belongs to all classes from Class II on. If the implementation under test must be a deterministic finite-state machine failing the specification for a single trace (and its continuations), and its number of states is unknown, then it only belongs to classes from Class III on. Testing temporal machines where transitions are triggered if inputs are produced within some real-bounded interval only belongs to classes from Class IV on, whereas testing many non-deterministic systems only belongs to Class V (but not all, and some even belong to Class I). The inclusion into Class I does not require the simplicity of the assumed computation model, as some testing cases involving implementations written in any programming language, and testing implementations defined as machines depending on continuous magnitudes, have been proved to be in Class I. Other elaborated cases, such as the testing framework by Matthew Hennessy under must semantics, and temporal machines with rational timeouts, belong to Class II.
Testing artifacts
A software testing process can produce several artifacts. Which artifacts are actually produced depends on the software development model used and on stakeholder and organisational needs.
- Test plan
- A test plan is a document detailing the approach that will be taken for intended test activities. The plan may include aspects such as objectives, scope, processes and procedures, personnel requirements, and contingency plans.[43] The test plan could come in the form of a single plan that includes all test types (like an acceptance or system test plan) and planning considerations, or it may be issued as a master test plan that provides an overview of more than one detailed test plan (a plan of a plan).[43] A test plan can be, in some cases, part of a wide 'test strategy' which documents overall testing approaches, which may itself be a master test plan or even a separate artifact.
- Traceability matrix
- A traceability matrix is a table that correlates requirements or design documents to test documents. It is used to update tests when related source documents change, and to select test cases for execution when planning regression tests, by considering requirement coverage.
- Test case
- A test case normally consists of a unique identifier, requirement references from a design specification, preconditions, events, a series of steps (also known as actions) to follow, input, output, expected result, and the actual result. Clinically defined, a test case is an input and an expected result.[71] This can be as terse as 'for condition x your derived result is y', although normally test cases describe in more detail the input scenario and what results might be expected. It can occasionally be a series of steps (but often steps are contained in a separate test procedure that can be exercised against multiple test cases, as a matter of economy) but with one expected result or expected outcome. The optional fields are a test case ID, test step, or order of execution number, related requirement(s), depth, test category, author, and check boxes for whether the test is automatable and has been automated. Larger test cases may also contain prerequisite states or steps, and descriptions. A test case should also contain a place for the actual result. These steps can be stored in a word processor document, spreadsheet, database, or other common repositories. In a database system, you may also be able to see past test results, who generated the results, and what system configuration was used to generate those results. These past results would usually be stored in a separate table.
- Test script
- A test script is a procedure or programming code that replicates user actions. Initially, the term was derived from the product of work created by automated regression test tools. A test case will be a baseline to create test scripts using a tool or a program.
- Test suite
- The most common term for a collection of test cases is a test suite. The test suite often also contains more detailed instructions or goals for each collection of test cases, along with a section where the tester identifies the system configuration used during testing. A group of test cases may also contain prerequisite states or steps, and descriptions of the following tests.
- Test fixture or test data
- In most cases, multiple sets of values or data are used to test the same functionality of a particular feature. All the test values and changeable environmental components are collected in separate files and stored as test data. It is also useful to provide this data to the client along with the product or project. There are techniques to generate test data.
- Test harness
- The software, tools, samples of data input and output, and configurations are all referred to collectively as a test harness.
Certifications
Several certification programs exist to support the professional aspirations of software testers and quality assurance specialists. Note that a few practitioners argue that the testing field is not ready for certification, as mentioned in the Controversy section.
Controversy
Some of the major software testing controversies include:
- Agile vs. traditional
- Should testers learn to work under conditions of uncertainty and constant change or should they aim at process 'maturity'? The agile testing movement has grown in popularity since 2006, mainly in commercial circles,[72][73] whereas government and military[74] software providers use this methodology but also the traditional test-last models (e.g., in the Waterfall model).
- Manual vs. automated testing
- Some writers believe that test automation is so expensive relative to its value that it should be used sparingly.[75] The test automation then can be considered as a way to capture and implement the requirements. As a general rule, the larger the system and the greater the complexity, the greater the ROI in test automation. Also, the investment in tools and expertise can be amortized over multiple projects with the right level of knowledge sharing within an organization.
- Is the existence of the ISO 29119 software testing standard justified?
- Significant opposition has formed out of the ranks of the context-driven school of software testing about the ISO 29119 standard. Professional testing associations, such as the International Society for Software Testing, have attempted to have the standard withdrawn.[76][77]
- Some practitioners declare that the testing field is not ready for certification.[78]
- No certification now offered actually requires the applicant to show their ability to test software. No certification is based on a widely accepted body of knowledge. Certification itself cannot measure an individual's productivity, their skill, or practical knowledge, and cannot guarantee their competence, or professionalism as a tester.[79]
- Studies used to show the relative expense of fixing defects
- There are opposing views on the applicability of studies used to show the relative expense of fixing defects depending on their introduction and detection. For example:
It is commonly believed that the earlier a defect is found, the cheaper it is to fix. The following table shows the cost of fixing a defect depending on the stage at which it was found.[80] For example, if a problem in the requirements is found only post-release, then it would cost 10–100 times more to fix than if it had been found by the requirements review. With the advent of modern continuous deployment practices and cloud-based services, the cost of re-deployment and maintenance may lessen over time.
Cost to fix a defect, by the time it was introduced (rows) and the time it was detected (columns):

| Time introduced | Requirements | Architecture | Construction | System test | Post-release |
|---|---|---|---|---|---|
| Requirements | 1× | 3× | 5–10× | 10× | 10–100× |
| Architecture | – | 1× | 10× | 15× | 25–100× |
| Construction | – | – | 1× | 10× | 10–25× |
The data from which this table is extrapolated is scant. Laurent Bossavit says in his analysis:
The 'smaller projects' curve turns out to be from only two teams of first-year students, a sample size so small that extrapolating to 'smaller projects in general' is totally indefensible. The GTE study does not explain its data, other than to say it came from two projects, one large and one small. The paper cited for the Bell Labs 'Safeguard' project specifically disclaims having collected the fine-grained data that Boehm's data points suggest. The IBM study (Fagan's paper) contains claims that seem to contradict Boehm's graph and no numerical results that clearly correspond to his data points.
Boehm doesn't even cite a paper for the TRW data, except when writing for 'Making Software' in 2010, and there he cited the original 1976 article. There exists a large study conducted at TRW at the right time for Boehm to cite it, but that paper doesn't contain the sort of data that would support Boehm's claims.[81]
Related processes
Software verification and validation
Software testing is used in association with verification and validation:[82]
- Verification: Have we built the software right? (i.e., does it implement the requirements?)
- Validation: Have we built the right software? (i.e., do the deliverables satisfy the customer?)
The terms verification and validation are commonly used interchangeably in the industry; it is also common to see these two terms defined with contradictory definitions. According to the IEEE Standard Glossary of Software Engineering Terminology:
- Verification is the process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase.
- Validation is the process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified requirements.
And, according to the ISO 9000 standard:
- Verification is confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled.
- Validation is confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled.
The contradiction is caused by the use of the concepts of requirements and specified requirements but with different meanings.
In the case of IEEE standards, the specified requirements, mentioned in the definition of validation, are the set of problems, needs and wants of the stakeholders that the software must solve and satisfy. Such requirements are documented in a Software Requirements Specification (SRS). And, the products mentioned in the definition of verification, are the output artifacts of every phase of the software development process. These products are, in fact, specifications such as Architectural Design Specification, Detailed Design Specification, etc. The SRS is also a specification, but it cannot be verified (at least not in the sense used here, more on this subject below).
But, for the ISO 9000, the specified requirements are the set of specifications, as just mentioned above, that must be verified. A specification, as previously explained, is the product of a software development process phase that receives another specification as input. A specification is verified successfully when it correctly implements its input specification. All the specifications can be verified except the SRS because it is the first one (it can be validated, though). Examples: The Design Specification must implement the SRS; and, the Construction phase artifacts must implement the Design Specification.
So, when these words are defined in common terms, the apparent contradiction disappears.
Both the SRS and the software must be validated. The SRS can be validated statically by consulting with the stakeholders. Nevertheless, running some partial implementation of the software or a prototype of any kind (dynamic testing) and obtaining positive feedback from them can further increase the certainty that the SRS is correctly formulated. On the other hand, the software, as a final and running product (not its artifacts and documents, including the source code), must be validated dynamically with the stakeholders by executing the software and having them try it.
Some might argue that, for SRS, the input is the words of stakeholders and, therefore, SRS validation is the same as SRS verification. Thinking this way is not advisable as it only causes more confusion. It is better to think of verification as a process involving a formal and technical input document.
Software quality assurance
Software testing may be considered a part of a software quality assurance (SQA) process.[4]:347 In SQA, software process specialists and auditors are concerned with the software development process rather than just the artifacts such as documentation, code, and systems. They examine and change the software engineering process itself to reduce the number of faults that end up in the delivered software: the so-called defect rate. What constitutes an acceptable defect rate depends on the nature of the software; a flight simulator video game would have a much higher defect tolerance than software for an actual airplane. Although there are close links with SQA, testing departments often exist independently, and there may be no SQA function in some companies.
Software testing is an activity to investigate software under test in order to provide quality-related information to stakeholders. By contrast, QA (quality assurance) is the implementation of policies and procedures intended to prevent defects from reaching customers.
References
- ^ abKaner, Cem (November 17, 2006). Exploratory Testing(PDF). Quality Assurance Institute Worldwide Annual Software Testing Conference. Orlando, FL. Retrieved November 22, 2014.
- ^ abPan, Jiantao (Spring 1999). 'Software Testing' (coursework). Carnegie Mellon University. Retrieved November 21, 2017.
- ^Leitner, Andreas; Ciupa, Ilinca; Oriol, Manuel; Meyer, Bertrand; Fiva, Arno (September 2007). Contract Driven Development = Test Driven Development – Writing Test Cases(PDF). ESEC/FSE'07: European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering 2007. Dubrovnik, Croatia. Retrieved December 8, 2017.
- ^ abcKaner, Cem; Falk, Jack; Nguyen, Hung Quoc (1999). Testing Computer Software, 2nd Ed. New York, et al.: John Wiley and Sons, Inc. ISBN978-0-471-35846-6.
- ^ abKolawa, Adam; Huizinga, Dorota (2007). Automated Defect Prevention: Best Practices in Software Management. Wiley-IEEE Computer Society Press. ISBN978-0-470-04212-0.
- ^ ab'Certified Tester Foundation Level Syllabus'(pdf). International Software Testing Qualifications Board. March 31, 2011. Section 1.1.2. Retrieved December 15, 2017.
- ^'Certified Tester Foundation Level Syllabus'(PDF). International Software Testing Qualifications Board. July 1, 2005. Principle 2, Section 1.3. Retrieved December 15, 2017.
- ^Ramler, Rudolf; Kopetzky, Theodorich; Platz, Wolfgang (April 17, 2012). Combinatorial Test Design in the TOSCA Testsuite: Lessons Learned and Practical Implications. IEEE Fifth International Conference on Software Testing and Validation (ICST). Montreal, QC, Canada. doi:10.1109/ICST.2012.142.
- ^'The Economic Impacts of Inadequate Infrastructure for Software Testing'(PDF). National Institute of Standards and Technology. May 2002. Retrieved December 19, 2017.
- ^Sharma, Bharadwaj (April 2016). 'Ardentia Technologies: Providing Cutting Edge Software Solutions and Comprehensive Testing Services'. CIO Review (India ed.). Retrieved December 20, 2017.
- ^Gelperin, David; Hetzel, Bill (June 1, 1988). 'The growth of software testing'. Communications of the ACM. 31 (6): 687–695. doi:10.1145/62959.62965.
- ^Gregory, Janet; Crispin, Lisa (2014). More Agile Testing. Addison-Wesley Professional. pp. 23–39. ISBN9780133749564.
- ^ abcMyers, Glenford J. (1979). The Art of Software Testing. John Wiley and Sons. ISBN978-0-471-04328-7.
- ^ abGraham, D.; Van Veenendaal, E.; Evans, I. (2008). Foundations of Software Testing. Cengage Learning. pp. 57–58. ISBN9781844809899.
- ^ abcdOberkampf, W.L.; Roy, C.J. (2010). Verification and Validation in Scientific Computing. Cambridge University Press. pp. 154–5. ISBN9781139491761.
- ^Lee, D.; Netravali, A.N.; Sabnani, K.K.; Sugla, B.; John, A. (1997). 'Passive testing and applications to network management'. Proceedings 1997 International Conference on Network Protocols. IEEE Comput. Soc: 113–122. doi:10.1109/icnp.1997.643699. ISBN081868061X.
- ^Cem Kaner, 'A Tutorial in Exploratory Testing', p.2
- ^Cem Kaner, A Tutorial in Exploratory Testing, p. 36.
- ^ abcdLimaye, M.G. (2009). Software Testing. Tata McGraw-Hill Education. pp. 108–11. ISBN9780070139909.
- ^ abcdSaleh, K.A. (2009). Software Engineering. J. Ross Publishing. pp. 224–41. ISBN9781932159943.
- ^ abcAmmann, P.; Offutt, J. (2016). Introduction to Software Testing. Cambridge University Press. p. 26. ISBN9781316773123.
- ^Everatt, G.D.; McLeod Jr., R. (2007). 'Chapter 7: Functional Testing'. Software Testing: Testing Across the Entire Software Development Life Cycle. John Wiley & Sons. pp. 99–121. ISBN9780470146347.
- ^ abCornett, Steve (c. 1996). 'Code Coverage Analysis'. Bullseye Testing Technology. Introduction. Retrieved November 21, 2017.
- ^ abBlack, R. (2011). Pragmatic Software Testing: Becoming an Effective and Efficient Test Professional. John Wiley & Sons. pp. 44–6. ISBN9781118079386.
- ^As a simple example, the C function `int f(int x) { return x*x - 6*x + 8; }` consists of only one statement. All tests against a specification `f(x) >= 0` will succeed, except if `x = 3` happens to be chosen.
- ^Vera-Pérez, Oscar Luis; Danglot, Benjamin; Monperrus, Martin; Baudry, Benoit (2018). 'A comprehensive study of pseudo-tested methods'. Empirical Software Engineering. 24 (3): 1195–1225. arXiv:1807.05030. Bibcode:2018arXiv180705030V. doi:10.1007/s10664-018-9653-2.
happens to be chosen. - ^Vera-Pérez, Oscar Luis; Danglot, Benjamin; Monperrus, Martin; Baudry, Benoit (2018). 'A comprehensive study of pseudo-tested methods'. Empirical Software Engineering. 24 (3): 1195–1225. arXiv:1807.05030. Bibcode:2018arXiv180705030V. doi:10.1007/s10664-018-9653-2.
- ^Patton, Ron (2005). Software Testing (2nd ed.). Indianapolis: Sams Publishing. ISBN978-0672327988.
- ^Laycock, Gilbert T. (1993). The Theory and Practice of Specification Based Software Testing(PDF) (dissertation). Department of Computer Science, University of Sheffield. Retrieved January 2, 2018.
- ^Bach, James (June 1999). 'Risk and Requirements-Based Testing'(PDF). Computer. 32 (6): 113–114. Retrieved August 19, 2008.
- ^Savenkov, Roman (2008). How to Become a Software Tester. Roman Savenkov Consulting. p. 159. ISBN978-0-615-23372-7.
- ^Mathur, A.P. (2011). Foundations of Software Testing. Pearson Education India. p. 63. ISBN9788131759080.
- ^ abClapp, Judith A. (1995). Software Quality Control, Error Analysis, and Testing. p. 313. ISBN978-0815513636. Retrieved January 5, 2018.
- ^Mathur, Aditya P. (2007). Foundations of Software Testing. Pearson Education India. p. 18. ISBN978-8131716601.
- ^Lönnberg, Jan (October 7, 2003). Visual testing of software(PDF) (MSc). Helsinki University of Technology. Retrieved January 13, 2012.
- ^Chima, Raspal. 'Visual testing'. TEST Magazine. Archived from the original on July 24, 2012. Retrieved January 13, 2012.
- ^ abcLewis, W.E. (2016). Software Testing and Continuous Quality Improvement (3rd ed.). CRC Press. pp. 68–73. ISBN9781439834367.
- ^ abRansome, J.; Misra, A. (2013). Core Software Security: Security at the Source. CRC Press. pp. 140–3. ISBN9781466560956.
- ^'SOA Testing Tools for Black, White and Gray Box' (white paper). Crosscheck Networks. Archived from the original on October 1, 2018. Retrieved December 10, 2012.
- ^Bourque, Pierre; Fairley, Richard E., eds. (2014). 'Chapter 5'. Guide to the Software Engineering Body of Knowledge. 3.0. IEEE Computer Society. ISBN978-0-7695-5166-1. Retrieved January 2, 2018.
- ^Bourque, P.; Fairley, R.D., eds. (2014). 'Chapter 4: Software Testing'(PDF). SWEBOK v3.0: Guide to the Software Engineering Body of Knowledge. IEEE. pp. 4–1–4–17. ISBN9780769551661. Retrieved July 13, 2018.
- ^Dooley, J. (2011). Software Development and Professional Practice. APress. pp. 193–4. ISBN9781430238010.
- ^Wiegers, K. (2013). Creating a Software Engineering Culture. Addison-Wesley. pp. 211–2. ISBN9780133489293.
- ^ abcLewis, W.E. (2016). Software Testing and Continuous Quality Improvement (3rd ed.). CRC Press. pp. 92–6. ISBN9781439834367.
- ^Machado, P.; Vincenzi, A.; Maldonado, J.C. (2010). 'Chapter 1: Software Testing: An Overview'. In Borba, P.; Cavalcanti, A.; Sampaio, A.; Woodcook, J. (eds.). Testing Techniques in Software Engineering. Springer Science & Business Media. pp. 13–14. ISBN9783642143342.
- ^Clapp, J.A.; Stanten, S.F.; Peng, W.W.; et al. (1995). Software Quality Control, Error Analysis, and Testing. Nova Data Corporation. p. 254. ISBN978-0815513636.
- ^Binder, Robert V. (1999). Testing Object-Oriented Systems: Objects, Patterns, and Tools. Addison-Wesley Professional. p. 45. ISBN978-0-201-80938-1.
- ^Beizer, Boris (1990). Software Testing Techniques (Second ed.). New York: Van Nostrand Reinhold. pp. 21, 430. ISBN978-0-442-20672-7.
- ^Xuan, Jifeng; Monperrus, Martin (2014). 'Test case purification for improving fault localization'. Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering - FSE 2014. arXiv:1409.3176. doi:10.1145/2635868.2635906.
- ^IEEE (1990). IEEE Standard Computer Dictionary: A Compilation of IEEE Standard Computer Glossaries. New York: IEEE. ISBN978-1-55937-079-0.
- ^Woods, Anthony J. (June 5, 2015). 'Operational Acceptance – an application of the ISO 29119 Software Testing standard' (Whitepaper). Capgemini Australia. Retrieved January 9, 2018.
- ^Kaner, Cem; Bach, James; Pettichord, Bret (2001). Lessons Learned in Software Testing: A Context-Driven Approach. Wiley. pp. 31–43. ISBN9780471081128.
- ^Ammann, Paul; Offutt, Jeff (January 28, 2008). Introduction to Software Testing. Cambridge University Press. p. 215. ISBN978-0-521-88038-1. Retrieved November 29, 2017.
- ^Danglot, Benjamin; Vera-Pérez, Oscar Luis; Baudry, Benoit; Monperrus, Martin (2019). 'Automatic test improvement with DSpot: a study with ten mature open-source projects'. Empirical Software Engineering. arXiv:1811.08330. doi:10.1007/s10664-019-09692-y.
- ^'Standard Glossary of Terms used in Software Testing'(PDF). Version 3.1. International Software Testing Qualifications Board. Retrieved January 9, 2018.
- ^O'Reilly, Tim (September 30, 2005). 'What is Web 2.0'. O’Reilly Media. Section 4. End of the Software Release Cycle. Retrieved January 11, 2018.
- ^Auerbach, Adam (August 3, 2015). 'Part of the Pipeline: Why Continuous Testing Is Essential'. TechWell Insights. TechWell Corp. Retrieved January 12, 2018.
- ^Philipp-Edmonds, Cameron (December 5, 2014). 'The Relationship between Risk and Continuous Testing: An Interview with Wayne Ariola'. Stickyminds. Retrieved January 16, 2018.
- ^Ariola, Wayne; Dunlop, Cynthia (October 2015). DevOps: Are You Pushing Bugs to Clients Faster?(PDF). Pacific Northwest Software Quality Conference. Retrieved January 16, 2018.
- ^Chickowski, Ericka (June 11, 2015). 'DevOps and QA: What's the real cost of quality?'. DevOps.com.
- ^Auerbach, Adam (October 2, 2014). 'Shift Left and Put Quality First'. TechWell Insights. TechWell Corp. Retrieved January 16, 2018.
- ^'Section 4.38'. ISO/IEC/IEEE 29119-1:2013 – Software and Systems Engineering – Software Testing – Part 1 – Concepts and Definitions. International Organization for Standardization. Retrieved January 17, 2018.
- ^'Globalization Step-by-Step: The World-Ready Approach to Testing. Microsoft Developer Network'. Msdn.microsoft.com. Retrieved January 13, 2012.
- ^Kaner, Cem; Falk, Jack; Nguyen, Hung Q. (April 26, 1999). Testing Computer Software. John Wiley & Sons. ISBN9780471358466.
- ^'Software Testing Lifecycle'. etestinghub. Testing Phase in Software Testing. Retrieved January 13, 2012.
- ^Dustin, Elfriede (2002). Effective Software Testing. Addison-Wesley Professional. p. 3. ISBN978-0-201-79429-8.
- ^Brown, Chris; Cobb, Gary; Culbertson, Robert (April 12, 2002). Introduction to Rapid Software Testing.
- ^ ab'What is Test Driven Development (TDD)?'. Agile Alliance. December 5, 2015. Retrieved March 17, 2018.
- ^'Test-Driven Development and Continuous Integration for Mobile Applications'. msdn.microsoft.com. Retrieved March 17, 2018.
- ^Rodríguez, Ismael; Llana, Luis; Rabanal, Pablo (2014). 'A General Testability Theory: Classes, properties, complexity, and testing reductions'. IEEE Transactions on Software Engineering. 40 (9): 862–894. doi:10.1109/TSE.2014.2331690. ISSN0098-5589.
- ^Rodríguez, Ismael (2009). 'A General Testability Theory'. CONCUR 2009 - Concurrency Theory, 20th International Conference, CONCUR 2009, Bologna, Italy, September 1–4, 2009. Proceedings. pp. 572–586. doi:10.1007/978-3-642-04081-8_38. ISBN978-3-642-04080-1.
- ^IEEE (1998). IEEE standard for software test documentation. New York: IEEE. ISBN978-0-7381-1443-9.
- ^Strom, David (July 1, 2009). 'We're All Part of the Story'. Software Test & Performance Collaborative. Archived from the original on August 31, 2009.
- ^'IEEE Xplore - Sign In'(PDF). ieee.org.
- ^Willison, John S. (April 2004). 'Agile Software Development for an Agile Force'. CrossTalk. STSC (April 2004). Archived from the original on October 29, 2005.
- ^An example is Mark Fewster, Dorothy Graham: Software Test Automation. Addison Wesley, 1999, ISBN0-201-33140-3.
- ^'stop29119'. commonsensetesting.org. Archived from the original on October 2, 2014.
- ^Paul Krill (August 22, 2014). 'Software testers balk at ISO 29119 standards proposal'. InfoWorld.
- ^Kaner, Cem (2001). 'NSF grant proposal to 'lay a foundation for significant improvements in the quality of academic and commercial courses in software testing''(PDF).
- ^Kaner, Cem (2003). Measuring the Effectiveness of Software Testers(PDF). STAR East. Retrieved January 18, 2018.
- ^McConnell, Steve (2004). Code Complete (2nd ed.). Microsoft Press. p. 29. ISBN978-0735619678.
- ^Bossavit, Laurent (November 20, 2013). The Leprechauns of Software Engineering: How folklore turns into fact and what to do about it. Chapter 10: leanpub.
- ^Tran, Eushiuan (1999). 'Verification/Validation/Certification' (coursework). Carnegie Mellon University. Retrieved August 13, 2008.
Further reading
- Meyer, Bertrand (August 2008). 'Seven Principles of Software Testing'(PDF). Computer. Vol. 41 no. 8. pp. 99–101. doi:10.1109/MC.2008.306. Retrieved November 21, 2017.
External links
- Software testing tools and products at Curlie