Most people do not distinguish between the terms Quality Assurance (QA), Quality Control (QC), and Testing, considering them synonymous.

In short, the difference is as follows.

Quality Assurance (QA), the broadest of the three concepts, is a combination of activities at all stages of the software development life cycle that are undertaken to ensure the required level of product quality.

Quality Control (QC) covers the actions carried out to obtain information about a product’s current status: what functionality is ready and whether it meets the quality requirements at each specific point in time.

Software Testing is one of the QC techniques; it includes planning test activities, designing test cases, executing them, and analyzing the results.

Thus, we can build a hierarchy model of the QA process: testing is a part of QC, and QC is a part of QA.

In this article, we’ll discuss the Software Testing Process, its algorithms, types, and automation.

Testing in SDLC

The testing phase of the software development life cycle (SDLC) is the stage when you focus on investigation and discovery. Yet exactly when testing happens depends on the software development model you choose.

Waterfall is considered to be the traditional model. In Waterfall, testing is performed at the final stage, only after planning, analysis, design, and implementation. That can cause difficulties: errors and bugs detected late in the SDLC can be very expensive to fix. That’s why Agile methods may sometimes be more beneficial.

In Agile, development runs in parallel with testing, which makes the whole project cheaper and faster.

DevOps is one of the newest SDLC methodologies. The method is characterized by small and frequent product updates, continuous feedback, and the automation of manual tasks. 

You can find a more detailed overview of the pros and cons of all the above-mentioned models in our article: 6 SDLC Methodologies: Which One to Choose?

Software Testing as a Process

It goes without saying that the effectiveness of Software Testing heavily depends on careful planning and specific goals. The test management process can be divided into three phases: planning, execution, and reporting.

Test Planning

Step 1: Strategy and Artifacts

Let’s recall the testing life cycle: each iteration begins with planning and ends with reporting, which becomes the basis for planning the next iteration, and so on. Planning and reporting are closely interrelated, and problems with one of these activities inevitably escalate into problems with the whole project. 

Without decent planning, it is not clear who should do what, and team performance suffers. When the exact reasons for poor work are unclear, the performance report becomes weak, and it is impossible to understand how to improve the situation and create a good plan for further work. The vicious circle is closed.

A test plan is a document that describes and regulates the testing work, as well as the relevant techniques and approaches, features to be (and not to be) tested, strategy, areas of responsibility, resources (software, human, financial), and a schedule with milestones and risk evaluation. A test plan is created at the beginning of the project and corrected when necessary throughout the project by the team involved in quality assurance. The lead tester (“test lead”) is usually responsible for creating it.

A test strategy guides the QA team in defining test coverage and testing scope, providing a clear picture of the project at any given moment.

There are 7 types of strategies, which can be used separately or together:

  1. Analytical strategy. The strategy centers on risk and requirements analysis to create the basis for planning, building, and estimating tests. It works perfectly when building a well-defined product from scratch.
  2. Model-based strategy. The system is tested against a predefined model and must fully correspond to it to be considered valid. This strategy is applied when using an existing product as the basis for a new one or when enhancing a legacy system.
  3. Methodical strategy. This strategy adheres to a custom pre-planned, systematic approach. A methodical strategy is often used in heavily-regulated industries to build a product that complies with the requirements.
  4. Standards- or process-compliant strategy. Unlike the previous one, this strategy follows an existing standard or process, with little or no adaptation. It is used by teams that lack the experience or time to build a custom testing approach.
  5. Dynamic strategy. It prioritizes finding as many errors and defects as possible. Applicable when there is a need to find and fix the issues with minimum time and effort.
  6. Consultative strategy. It relies on users or developers to define the areas of testing or even to handle the tests themselves. The strategy is applied to domain-specific products that require additional expert guidance.
  7. Regression-averse strategy. This strategy prioritizes the automation of functional tests either before the release or after. It is best used with live, well-established products.

Step 2: Design and Execution

Test design is the stage of the software testing process at which test scenarios (test cases) are designed and created in accordance with previously defined quality criteria and testing purposes. Not all testing involves interaction with a running application; therefore, within this classification, static and dynamic testing are distinguished.

Static testing differs from dynamic testing in that it is performed without executing the product’s code. Testing is carried out by analyzing the source code (code review) or compiled code. The analysis can be performed either manually or using special tools. Its purpose is the early detection of errors and potential problems in the product.
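To make the distinction concrete, here is a minimal Python sketch (the function and values are invented for illustration). The defect in it can be caught by a static analysis tool such as mypy before the code is ever run:

```python
# A hypothetical pricing helper, used only to illustrate static testing.
def total_price(prices: list[float], discount: float) -> float:
    """Return the sum of the prices with a discount rate applied."""
    subtotal = sum(prices)
    return subtotal * (1 - discount)

# A static type checker such as mypy flags the call below without running
# anything: argument 2 has type "str" where "float" is expected. Executing
# this line would crash at runtime, which static testing helps prevent.
total = total_price([9.99, 4.50], "10%")
```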

Dynamic testing is testing with code execution. It can be run against the entire application code (system testing), the code of several complementary parts (integration testing), individual parts (unit or component testing), and even individual sections of code. The main idea of this type of testing is that it checks the real behavior of the application or its parts.

Now let’s discuss the testing methods, classified by access to code and application architecture.

  • White box testing is the situation when the tester has access to the internal structure and code of the application, and also has enough knowledge to understand what they see.
  • Black box testing means that the tester either has no access to the internal structure and code of the application, or does not have enough knowledge to understand them, or does not refer to them during the testing process. The tester runs the application (and checks its reactions) in the same way that users or other applications do. In black box testing, the main sources of information for creating test cases are the documentation and common sense (for cases when the application’s behavior is not explicitly specified; this is sometimes called “testing based on implicit requirements”, though the approach has no canonical definition).
  • Grey box testing is a combination of the white and black box methods: the tester has access to part of the code and architecture, but not to the entire system. Usually, the white or black box method is applied to certain parts of the application, while the application as a whole is tested using the grey box method.

To sum up, the white and black box methods are not mutually exclusive; on the contrary, they complement each other harmoniously, compensating for each other’s shortcomings.
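A minimal sketch of the difference, assuming a hypothetical shipping_fee function and pytest-style tests. The black box case is derived purely from documented behavior, while the white box case targets a branch boundary that is visible only in the code:

```python
# A hypothetical function under test.
def shipping_fee(weight_kg: float) -> float:
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    return 5.0 if weight_kg < 10 else 12.0

# Black box: based only on the documented behavior (inputs and outputs).
def test_light_parcel_has_small_fee():
    assert shipping_fee(2.0) == 5.0

# White box: based on the code itself, targeting the boundary between
# the two internal branches at exactly 10 kg.
def test_fee_at_branch_boundary():
    assert shipping_fee(10.0) == 12.0
```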

Naturally, there are many more methods of testing. Each team should evaluate the risks and objectives and find the best combination for every project.

Step 3: Documentation and Reporting

Perfect software would be the eighth Wonder of the World: perfection is unattainable, so there is no point at which testing simply ends. It is rather an ongoing activity. Nevertheless, there is a set of common “exit” criteria that indicate testing completeness. Among them, we could highlight:

  • All test cases have passed, and the bugs found have been fixed and rechecked.
  • The remaining bugs are not vital to the app’s performance.
  • The software product runs well on all the mandatory platforms.

A project closes as soon as its goal (all or some of the above criteria) is met.

Reporting is a big deal as well. For instance, the defect report might be written considering the following objectives: 

  • to provide information about the problem: notify the project team and other interested parties about the problem, describe the essence of the problem; 
  • prioritize the problem: determine its severity and the desired time frame for eliminating it; 
  • contribute to the elimination of the problem: a quality report on the defect not only provides all the necessary details for understanding the essence of what happened but also contains an analysis of the causes of the problem and recommendations for correcting the situation.

The Levels of Software Testing

In this section, we will talk about test levels classification, which includes unit, integration, system, and acceptance testing.

Unit or component testing targets separate small parts of the application, which (as a rule) can be studied in isolation from other similar parts. This test can check individual functions or class methods, classes themselves, class interactions, small libraries, and individual parts of the application. This type of testing is often implemented using special automation tools that greatly simplify and accelerate the development of the relevant test cases.
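A minimal pytest sketch, assuming a hypothetical Cart class, shows the idea: each test checks one small behavior of one component, in isolation from the rest of the application.

```python
import pytest

# A hypothetical component under test.
class Cart:
    def __init__(self):
        self._items = []

    def add(self, name: str, price: float) -> None:
        if price < 0:
            raise ValueError("price cannot be negative")
        self._items.append((name, price))

    def total(self) -> float:
        return sum(price for _, price in self._items)

# Unit tests exercise the class in isolation.
def test_total_sums_added_items():
    cart = Cart()
    cart.add("book", 12.0)
    cart.add("pen", 3.0)
    assert cart.total() == 15.0

def test_negative_price_is_rejected():
    with pytest.raises(ValueError):
        Cart().add("book", -1.0)
```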

Integration testing is aimed at checking the interaction between several parts of the application (each of which, in turn, is tested separately at the unit testing stage). Unfortunately, even when the individual components are of very high quality, problems often arise at the junctions where they interact. These are the problems that integration testing reveals.
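A minimal sketch (both classes are invented for illustration): an integration test wires real components together, so a defect at their junction, rather than inside either unit, is what surfaces.

```python
# Two hypothetical components, each assumed to pass its own unit tests.
class DiscountService:
    def rate_for(self, item_count: int) -> float:
        return 0.25 if item_count >= 3 else 0.0

class Checkout:
    def __init__(self, discounts: DiscountService):
        self._discounts = discounts

    def total(self, prices: list[float]) -> float:
        rate = self._discounts.rate_for(len(prices))
        return sum(prices) * (1 - rate)

# Integration test: a mismatch at the junction (say, a rate returned as
# 25 instead of 0.25) would surface here even if both unit suites pass.
def test_discount_is_applied_at_checkout():
    checkout = Checkout(DiscountService())
    assert checkout.total([10.0, 10.0, 10.0]) == 22.5
```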

System testing is aimed at checking the entire application as a whole, assembled from the parts tested in the previous two stages. Here, not only are defects detected “at the junctions” of the components, but there is also the opportunity to interact fully with the application from the end user’s point of view.

Acceptance testing is testing whose purpose is to determine the suitability of the product for release. It is usually performed using a set of tests (automated or manual) that are prepared in advance and target a certain result. The main task is to ensure that the customer receives a product that has all the required functional properties.

The Types of Software Testing

The International Software Testing Qualifications Board (ISTQB) is an organization that offers software testing standards recognized worldwide. Here we present the most popular testing types according to its classification.

Test Types according to ISTQB

Functional testing is a type of testing aimed at checking the correctness of the application’s functionality (the correct implementation of functional requirements). It is often associated with the black box method; however, the white box method can also be used to verify that functionality is implemented correctly.

Performance testing is a study of the system’s reaction to a certain load, sudden or continuous.
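As a crude sketch of the idea, using only the Python standard library (the endpoint URL is a placeholder; a real project would more likely use a dedicated tool such as JMeter, mentioned below):

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

# Hypothetical endpoint; point a load test at a test environment only.
URL = "https://staging.example.com/api/health"

def timed_request(_: int) -> float:
    start = time.perf_counter()
    with urlopen(URL, timeout=10) as response:
        response.read()
    return time.perf_counter() - start

# Apply a continuous load: 100 requests through 10 concurrent workers.
with ThreadPoolExecutor(max_workers=10) as pool:
    latencies = sorted(pool.map(timed_request, range(100)))

print(f"median: {latencies[49]:.3f}s  p95: {latencies[94]:.3f}s")
```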

Use case testing is a testing technique, according to which test cases are developed based on common use cases or user requirements in any form. 

Exploratory testing is a partially formalized approach: the tester works with the application according to a chosen scenario, which is refined in the course of execution. This helps to examine the application more fully. The key success factor in exploratory testing is the tester’s ability to adapt the scenario as new information about the application emerges.

Usability testing is a testing method aimed at establishing the degree of usability, learnability, comprehensibility, and attractiveness for users.

Security testing is aimed at testing the ability of an application to withstand malicious attempts to gain access to data or functions.

Automated Testing

Test automation is a set of techniques, approaches, and tools that allows you to exclude a person from performing some tasks in the testing process. Test cases are partially or fully executed by a special tool; however, a person still develops the test cases, prepares the data, evaluates the execution results, and writes the reports on detected defects, among much else. Automated test cases can also run several times faster than a human could execute them. Among the indisputable advantages, we can also find:

  • The absence of the influence of the human factor during the execution of test cases (fatigue, inattention, etc.).
  • Minimization of costs during repeated execution of test cases (human participation is required only occasionally here).
  • The ability of automation tools to perform test cases that a person can’t handle due to their complexity, speed, or other factors.
  • The ability of automation tools to collect, store, analyze, aggregate, and present colossal amounts of data in a form that is convenient for human perception.

If we express all the advantages and disadvantages of test automation in one phrase: automation allows you to significantly increase test coverage, but at the same time it significantly increases the risks.
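For instance, a minimal automated UI check with Selenium WebDriver (one of the tools mentioned below) might look like this sketch; the URL, element names, and credentials are hypothetical placeholders:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Requires a local Chrome installation; Selenium 4 manages the driver itself.
driver = webdriver.Chrome()
try:
    # Hypothetical login page and element names.
    driver.get("https://staging.example.com/login")
    driver.find_element(By.NAME, "username").send_keys("qa_user")
    driver.find_element(By.NAME, "password").send_keys("not-a-real-secret")
    driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
    # The assertion replaces a human visually checking the result.
    assert "Dashboard" in driver.title
finally:
    driver.quit()
```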

At Cuspy, dedicated QA engineers use the most popular automated testing tools, including Selenium, AutomatedQA, and JMeter.

However, the most effective testing approaches combine manual and automated testing activities to achieve the best results.

Regression Testing

Regression testing is aimed at verifying that changes in the application or its operating environment have not introduced new errors. Frederick Brooks wrote in The Mythical Man-Month: “The fundamental problem with program maintenance is that fixing a defect has a substantial (20-50 percent) chance of introducing another.” Therefore, regression testing is an integral quality assurance tool and is actively used in almost any project.

There are multiple regression testing techniques:

  • Retesting all test cases;
  • Selecting specific test cases;
  • Prioritizing test cases to verify the most critical ones first and then test the rest (see the sketch after this list);
  • Hybrid techniques.
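As a sketch of the selection and prioritization techniques, pytest markers can slice a regression suite so that the most critical cases run first. The marker name and the function under test are our own inventions:

```python
import pytest

# A trivial function standing in for real application logic.
def apply_discount(price: float, rate: float) -> float:
    return price * (1 - rate)

# Register the marker in pytest.ini ([pytest] markers = critical: ...)
# to avoid "unknown marker" warnings.
@pytest.mark.critical
def test_discount_math():
    assert apply_discount(100.0, 0.25) == 75.0

def test_zero_rate_keeps_price():
    assert apply_discount(100.0, 0.0) == 100.0

# Prioritized regression run:
#   pytest -m critical          # the most critical cases first
#   pytest -m "not critical"    # then the rest
```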

Conclusion

No matter how simple a product seems, its quality still rests on a ton of work. As Don Norman noted, “Good design is much harder to notice than bad”. In most cases, QA engineers allow people to enjoy the product by verifying that everything works well.