0.1 Three levels of Systematic Thinking for Testing
Do testing personnel lack systematic thinking?
How to build a quality assurance system for new product teams or projects?
Many of us have come across countless articles and training courses on testing and quality. They do not lack testing practices or technical content, yet we still find it hard to build a testing system of our own. Based on similar doubts raised by many friends, and on my own team practice and consulting experience over the years, I will discuss how to build systematic thinking for testing at three levels: basic, intermediate, and advanced.
- "Building Systematic Thinking for Testing (Basics)"
- "Building Systematic Thinking for Testing (Intermediate)"
- "Building Systematic Thinking for Testing (Advanced)"
0.2 Overview of the Basics
I previously wrote an article "Great QA" targeting graduates who want to pursue a career in QA. The article discusses five basic responsibilities of QA:
- Understanding and clarifying business requirements
- Formulating strategies and designing tests
- Implementing and executing tests
- Defect management and analysis
- Quality feedback and risk identification
Recently, a friend asked me to share what aspects of systematic testing they should pay attention to, and I thought of these five basic responsibilities again. The original article explained the five responsibilities using a requirement for producing cups as the example. This article expands on each responsibility and analyzes what testers need to do, and how to do it, from the perspective of testing practice and methodology.
01 First Basic Responsibility: Understanding and Clarifying Business Requirements
Business requirements are the source of software development, and correctly understanding requirements is crucial. Understanding and clarifying requirements is also an essential part of testing work.
1.1 Dimensions of Understanding and Clarifying Business Requirements
How can testers understand and clarify requirements? I believe testers can understand and clarify business requirements from the following three dimensions:
- End user
- Business process
- Business impact
Detailed content about these dimensions is introduced in the article "How Agile Testing Optimizes Business Value".
1.2 Testability of Requirements
In addition to understanding business requirements, the quality of requirement description also needs to be taken into account. Testability of requirements is the most important aspect of requirement quality, for the following reasons:
- If requirements are not testable, they cannot be accepted, and it is impossible to know whether the project has been completed successfully.
- Writing requirements in a testable manner can ensure that the requirements are correctly implemented and verified.
The testability of requirements is mainly reflected in the following three dimensions:
1. Completeness

The completeness of requirements mainly means that all process paths are considered, logical links are complete, and both positive and negative scenarios are covered. For example, a login requirement needs clear definitions both for successful login with a correct username and password and for what happens when an incorrect username or password is entered.
2. Unambiguity

Requirement descriptions should not use subjective language; they should be supported with objective data and examples. For example, the following subjective description is a very poor requirement:

The system should be easy for experienced engineers to use and should minimize user errors as much as possible.

It is recommended to write requirement documents using the method of "", which expresses business rules through examples. This method is not only easy for different roles in the team to understand, but also avoids ambiguity.
3. Independence

Independence mainly means that individual business function points (user stories in agile development) should be as independent as possible, with clear boundaries to other functions, to reduce the untestability caused by dependencies. Inputs and outputs should be verifiable within the same function point; the input of Function A should not have to be verified through the output of Function B.

In agile development, user stories follow the INVEST principle, which lists testability and independence as separate criteria. However, I believe that independence also affects testability and should be considered a factor in it.
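To make the login example above concrete, here is a minimal sketch of example-based checks covering both the positive and the negative scenarios; `authenticate` and the stored credentials are illustrative assumptions standing in for the real system under test.

```python
# A minimal sketch of example-based checks for the login requirement
# above. `authenticate` and USERS are illustrative assumptions, not a
# real API.

USERS = {"alice": "s3cret"}  # hypothetical stored credentials

def authenticate(username, password):
    """Return True only for a known username with the matching password."""
    return USERS.get(username) == password

# Positive scenario: correct username and password.
assert authenticate("alice", "s3cret") is True
# Negative scenarios: wrong password, unknown user.
assert authenticate("alice", "wrong") is False
assert authenticate("bob", "s3cret") is False
```

Writing the negative cases down alongside the positive one is exactly what makes the requirement complete and testable.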
02 Second Basic Responsibility: Developing Strategies and Designing Tests
Developing strategies and designing tests is the most critical of the five responsibilities and covers a wide range of content. It may look like just two parts, strategy and test design, but it actually includes every aspect that needs to be considered for testing. Below I have selected some of the most valuable aspects to introduce separately.
2.1 One-page Test Strategy
Strategy is direction, and to do a good job in software testing, guidance from a testing strategy is essential. Testing strategies may be challenging for testers with little experience. However, the "One-page Test Strategy" that I proposed can help testers think and develop testing strategies suitable for their projects. The "One-page Test Strategy" is shown below:
The "One-page Test Strategy" clearly defines the three parts that need to be considered in a testing strategy:
- Guiding Principles: The team is responsible for quality.
- What to Test: The content to be tested.
- How to Test: Shift Left testing, Shift Right testing, and Lean testing.
For more details, please refer to my article on the "One-page Test Strategy".
2.2 Testing Process and Quality Gate
We often find that some teams have a clearly defined testing process, but no strict criteria for what each step should achieve. As a result, many quality-related tasks are not done well, leading to huge pressure on testers in the later stages of testing or low quality of the final delivery.
The "One-page Test Strategy" already includes the testing process, but it is mentioned again here mainly to emphasize the importance of quality gates. The testing process may differ between projects and teams, but regardless of the steps it includes, the output of each step must be clearly defined; that is, the quality gate of each step must be defined clearly, as shown in the following figure:
Note: This figure is for illustration purposes only. The actual situation needs to be adapted according to the team's own situation.
2.3 Typical types of testing
The testing process figure above lists various types of testing, but there are many more types than those shown. Due to length limitations, and because this is not the focus of this article, I will introduce only four typical types of testing that are closely related to testers. These four types come from different classification dimensions, and I will not attempt a rigorous taxonomy here; those who are interested but unclear can search online for more information.
1. Smoke testing
Smoke testing originated from the testing of circuit boards, which involved powering on the board to see if it emitted smoke. If smoke was produced, it meant that the board could not function properly and there was no need to validate other functions.
In software, smoke testing verifies the basic behavior of the software, such as "Does the program run?", "Does the user interface open?" or "Is the click event effective?" Only when the smoke test passes is it necessary to carry out further validation of the software's functional testing; otherwise, a new version must be repaired before continuing.
We have found that some teams perform smoke testing only on newly developed features. That is not quite right; or rather, such a test should not be called a smoke test. Smoke testing should verify the basic behavior of the entire system, regardless of whether a feature is old or new.
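As a sketch of this idea, a smoke run can be modeled as a short list of basic-behavior checks that aborts on the first failure; the check names and placeholder lambdas below are illustrative, not a real harness.

```python
# A minimal smoke-run sketch: a few basic-behavior checks, aborting on
# the first failure, since deeper functional testing is pointless when
# the basics do not work. The checks are illustrative placeholders.

def run_smoke_checks(checks):
    """Run (name, check) pairs in order; return the first failing name,
    or None when every basic behavior works."""
    for name, check in checks:
        try:
            ok = bool(check())
        except Exception:
            ok = False  # a crash counts as failing the smoke test
        if not ok:
            return name
    return None

# The three basic questions from the text, as placeholder checks.
checks = [
    ("program runs",      lambda: True),  # e.g. the process started
    ("UI opens",          lambda: True),  # e.g. the home page rendered
    ("click event works", lambda: True),  # e.g. a button handler fired
]
print(run_smoke_checks(checks))  # None → smoke test passed
```

Note that the checks span the whole system's basics, not just the newest feature.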
2. Regression testing
The purpose of regression testing is to verify whether new feature development or bug fixes have affected existing features. Regression testing therefore focuses mainly on existing features; testing new features is not called regression testing.
There are usually four strategies for regression testing:
- Full regression: This means testing all existing features, regardless of their importance. This strategy is costly, but it provides comprehensive coverage, and is often used for financial products with high quality requirements.
- Selective regression: Testing and development communicate to identify the areas of code that could affect existing functionality, and the affected feature modules are selected for regression testing. This may miss some unanticipated but related features, but it is more economical.
- Index-based regression: The team sets a coverage requirement for regression testing, such as a mandate to cover at least 50% of existing feature test cases. Relying solely on a coverage number makes this the least recommended method: even when the coverage target is met, some affected features may have been missed.
- Accurate regression: A very popular method that uses technical means to associate the scope of code changes with test cases, so that only the affected cases are executed. It provides the most reliable quality signal, but implementing such precise testing is very costly.
Regression testing can be done manually or through automation, but the amount of regression testing required is usually large, so automated testing is more efficient.
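Selective regression, for instance, can be approximated by tagging test cases with the feature areas they cover and intersecting those tags with the areas a change touches; the case names and tags below are illustrative assumptions.

```python
# A minimal sketch of selective regression: pick only the cases whose
# tags intersect the modules touched by a change. Names and tags are
# illustrative, not a real suite.

def select_regression_cases(cases, changed_modules):
    """Return the names of cases tagged with any changed module."""
    changed = set(changed_modules)
    return [name for name, tags in cases if changed & set(tags)]

cases = [
    ("test_login",    {"auth"}),
    ("test_checkout", {"payment", "cart"}),
    ("test_search",   {"catalog"}),
]
print(select_regression_cases(cases, ["payment"]))  # ['test_checkout']
```

Accurate regression tools do essentially this, but derive the mapping from code-level change analysis instead of hand-maintained tags.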
3. End-to-End Testing
End-to-end testing is classified by the granularity of test coverage; along this dimension it sits alongside unit testing and interface testing.
End-to-end testing verifies the entire software and its integration with external interfaces from start to finish. Its purpose is to test the dependencies and data integrity of the entire software, as well as its communication with other systems, interfaces, and databases, in order to simulate complete business processes. Therefore, end-to-end testing is the most valuable type of testing as it reflects users' real business behavior.
However, because end-to-end testing involves many components of the system and external dependencies, it is relatively unstable and expensive to run. That is why interface testing and unit testing, with smaller coverage scope, exist; they are usually implemented by isolating dependencies and will not be discussed in detail here.
4. Exploratory Testing
Exploratory testing was proposed by Dr. Cem Kaner in 1983 and is in contrast to scripted testing.
Dr. Cem Kaner defined exploratory testing as follows:
"Exploratory testing is a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the quality of his/her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project."
The core of exploratory testing is to iterate quickly by treating test-related learning, test design, test execution, and test result interpretation as a cycle in order to continuously collect feedback, adjust testing, and optimize value.
Exploratory testing particularly requires testers to have subjective initiative and to work in an environment that encourages test innovation. If the requirements for testing metrics are too high and the testers' subjective initiative cannot be fully utilized, the effectiveness of exploratory testing is limited.
Exploratory testing is a relatively free testing style that should not be restricted by various testing models, and its execution method should not be strictly defined, as this would affect its effectiveness.
For more information on exploratory testing, please refer to the article "" by my Thoughtworks colleague Liu Ran and the article "" by Shi Xiangyang.
2.4 Automated Testing Layered Strategy
When introducing end-to-end testing earlier, different coverage ranges of tests were mentioned, including unit testing and interface testing. The layered strategy for automated testing is to stratify these different granularities of test types and suggest that automated testing should consider the coverage ratio of different layers based on factors such as cost and stability.
According to Google's testing experience, shown in the figure below, we can clearly see the difference in repair cost when problems are found at different test layers: the cost of fixing a problem found by unit testing is much lower than one found by end-to-end testing. Therefore, it is generally recommended that the test layers tend toward the pattern of the pyramid, as shown on the right side of the figure below. Ham Vocke, a colleague at Thoughtworks, provides a detailed introduction to this in his article "".
It is worth noting that the Testing Pyramid is not a silver bullet and the testing strategy is not fixed. It needs to be adjusted and evolved periodically according to the actual situation to meet the current product/project quality objectives.
For more information about automated testing layering, you can also refer to the following articles:
- "Lean Testing"
- "Thinking and Practice of Microservices Testing"
- "Testing Pyramid Is Not a Silver Bullet"
2.5 Test Cases
Designing test cases is a basic skill that every tester must have. The quality of test cases directly affects the effectiveness of testing, and the importance of test cases is self-evident. However, designing good test cases is not a simple task. Here, test cases are not distinguished between manual cases and automated cases.
1. Good Test Cases
First of all, it is necessary to understand what kind of test cases are considered good.
Good test cases should be able to completely cover the tested software system and be able to detect all issues. Therefore, good test cases should have the following characteristics:
- Overall completeness without excessive design: A set of effective test cases that can completely cover the testing requirements without exceeding them.
- Accuracy of equivalence partitioning: Each equivalence class can guarantee that if one of the inputs passes the test, the other inputs will also pass the test.
- Completeness of equivalence class collection: All possible boundary values and boundary conditions have been correctly identified.
Of course, due to the complexity of software systems, test cases usually cannot achieve 100% coverage; they can only be made as complete as practical.
2. Test Case Design Method
To strive for complete test cases, it is necessary to understand the corresponding test case design methods. Test cases should be designed by considering both business requirements and system characteristics. The following test case design methods are commonly recommended:
- Data flow method: A method of dividing test scenarios based on data flow in the business process. Consider the data flow in the business process, cut off the process at the point where data is stored or changes, and form multiple scenario cases. This is described in my article "What Do You Think of When We Talk About BDD?".
- Equivalence partitioning method: Divide all possible input data of the program into several parts and select a few representative data from each part as test cases. Equivalence classes are divided into valid and invalid equivalence classes, and designing test cases based on equivalence partitioning method requires attention to non-redundancy and completeness.
- Boundary value method: The boundary value analysis method is a supplement to the equivalence partitioning method. Typically, test data that is just equal to, just greater than, or just less than the boundary is taken, including testing input-output boundary values and cases from equivalence class boundaries.
- Exploratory testing model: The book "" by Shi Liang and Gao Xiang classifies exploratory testing into system interaction testing, interaction feature testing, and single feature testing, and introduces different exploratory models for each level. Although I do not believe exploratory testing needs to strictly follow these models, they can help testers think during exploration and are also a valuable reference when designing test cases.
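Equivalence partitioning and boundary value analysis combine naturally in a table-driven test. Here is a minimal sketch for a hypothetical rule "age must be an integer in [18, 60]"; the rule and validator are illustrative assumptions.

```python
# Sketch of equivalence partitioning plus boundary values for the
# hypothetical rule "age must be an integer in [18, 60]".

def is_valid_age(age):
    """Illustrative validator for the rule under test."""
    return isinstance(age, int) and 18 <= age <= 60

# One representative per equivalence class, plus values just below,
# at, and just above each boundary.
cases = [
    (30, True),     # valid class representative
    (17, False),    # just below lower boundary
    (18, True),     # lower boundary
    (19, True),     # just above lower boundary
    (59, True),     # just below upper boundary
    (60, True),     # upper boundary
    (61, False),    # just above upper boundary
    ("30", False),  # invalid class: wrong type
]
for value, expected in cases:
    assert is_valid_age(value) is expected
```

The case table makes both properties visible at a glance: non-redundancy (one representative per class) and completeness (every boundary probed from both sides).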
For test case design, the following articles can also be referenced:
03 Third Basic Responsibility: Implementation and Execution of Tests
The third basic responsibility of testing is the implementation and execution of tests.
Implementation and execution of tests means carrying out the corresponding test activities based on the testing strategy and the designed tests. This part is relatively simple and can be briefly introduced from two dimensions: manual testing and automated testing.
3.1 Manual Testing
Manual testing, as the name suggests, is testing that is done manually. Depending on whether there are pre-designed test cases (scripts), it can be divided into scripted testing and exploratory testing.
The execution of scripted testing is relatively straightforward when test cases are mature. However, some tests may require complex preparation, such as preparing test data through a long dependency chain, or driving the system into the state that triggers the test. In addition, configuration adjustments for different environments, and the preparation and management of those environments, may need to be considered. These are the things a tester may need to be involved in for manual testing.
Regarding exploratory testing, the book "" introduces the Session-Based Test Management (SBTM) method for carrying it out: the test charter is broken down into a series of sessions, and within a session the tester completes the design, execution, and recording for a specific charter.
Similarly, this method offers useful guidance for exploratory testing, but it is not recommended to follow it rigidly; otherwise it undermines the essence of exploratory testing and fails to achieve the intended effect.
3.2 Automated Testing
The previous section introduced the layering strategy of automated testing, and here we will focus on the implementation and execution of automated testing.
1. Tool Selection
The implementation of automated testing relies on automated testing tools, so the selection of tools is critical. Generally, the following factors need to be considered when selecting tools:
- Meeting requirements: Different projects have different requirements, and the selection should be based on the requirements. We should aim for suitability rather than the best tool.
- Easy to use: Usability is important, as well as matching the skills of the testers. It is also important that the tool is easy to get started with. If a tool is not user-friendly and difficult to get started with, it will be hard to motivate everyone to use it actively.
- Language support: The best practice is to use the same language as the project development to write automation scripts, which enables developers to flexibly add tests.
- Compatibility: Including compatibility between browsers, platforms, and operating systems.
- Reporting mechanism: The result report of automated testing is crucial, so it is preferred to select a tool with a comprehensive reporting mechanism.
- Easy maintenance of test scripts: Test code is as important as product code, and the maintenance of tests cannot be neglected. We need a tool that is easy to maintain.
- Tool stability: Instability can reduce the effectiveness of testing, so the stability of the tool itself should be ensured, otherwise, the gains may not outweigh the losses.
- Code execution speed: The execution speed of test code directly affects the efficiency of testing. For example, there is a big difference in the execution speed of test code written with Selenium and Cypress.
2. Test Implementation
Articles on automated testing can be found everywhere. Here I emphasize one point: do not hard-code test data in test scripts. Test data should be kept separate from the scripts and used to drive them, which improves the reusability of the test code.
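A minimal data-driven sketch of that point: the cases live outside the test logic (here a JSON string stands in for an external data file), so new cases can be added without touching test code. The file schema and `looks_like_email` check are illustrative assumptions.

```python
# Data-driven sketch: test cases live in data, not in code. The JSON
# string stands in for an external file; schema and checker are
# illustrative.
import json

DATA = json.loads("""
[
  {"input": "user@example.com", "valid": true},
  {"input": "not-an-email",     "valid": false}
]
""")

def looks_like_email(s):
    """Toy system under test: crude email shape check."""
    return "@" in s and "." in s.split("@")[-1]

def run_cases(cases):
    """Return one pass/fail result per data-driven case."""
    return [looks_like_email(c["input"]) == c["valid"] for c in cases]

print(all(run_cases(DATA)))  # True when every case matches its expectation
```

With the data externalized, the same runner serves every case, old and new.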
3. Execution of Automated Testing
Do you think that after implementing automated testing, the execution is as simple as running the tests? Not really. The execution of testing also requires certain strategies, such as setting different execution frequencies for different tests, integrating automated testing with pipelines to achieve continuous testing and feedback, and maximizing the value of automated testing.
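One part of such a strategy is mapping suites to execution frequencies. The mapping below is a sketch with illustrative frequencies, to be tuned per project, not a prescription.

```python
# Illustrative mapping of test suites to execution frequency; both the
# suite names and the frequencies are assumptions to adapt per project.
SCHEDULE = {
    "unit":            "every commit",
    "api":             "every merge to main",
    "end_to_end":      "nightly",
    "full_regression": "before each release",
}

def frequency_for(suite):
    """Look up how often a suite should run; default to every commit."""
    return SCHEDULE.get(suite, "every commit")

print(frequency_for("end_to_end"))  # nightly
```

Wired into a pipeline, such a schedule turns one-off automation into continuous testing and feedback.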
Regarding automated testing, we recommend reading the following articles:
04 Fourth Basic Responsibility: Defect Management and Analysis
The fourth basic responsibility of testing is defect management and analysis.
Defects are very valuable for software quality and software testing, and good defect management and analysis can bring great value, but it is often overlooked.
A crucial part of defect management is understanding the defect's life cycle. People often think that defects only need to be discovered, fixed, and verified, but there are more steps to the life cycle than these. I believe that the defect life cycle should include the following stages:
- Defect discovery: This is relatively simple, it means discovering system behavior that is inconsistent with expected behavior, or non-functional problems such as performance and security issues. Defects may be discovered during testing, reported by users, or discovered through routine log analysis or log monitoring alerts.
- Information collection and defect diagnosis: After a defect is discovered, relevant defect information needs to be collected and preliminary diagnosis performed. The relevant defect information should be collected as completely as possible, including complete reproduction steps, scope of impact, users, platforms, data, screenshots, log information, etc. Sometimes, development or operations personnel may need to help with this step.
- Defect recording: The collected defect information is recorded in the defect management system, associated with the corresponding functional module, and assigned a severity.
- Triage/prioritization: Not all recorded defects need to be fixed, so defects need to be classified and sorted by priority to determine whether they are valid, which ones to fix, and when to fix them. This step may need to be done together with business and development staff.
- Defect fixing: This step is completed by the developer, fixing the defect.
- Defect verification: Verify that the defect has been fixed by the developer and perform appropriate regression testing of the related functionality.
- Add corresponding automated testing: For defects that have already been discovered, it is best to add automated testing to detect similar problems in a timely manner. Automated testing can be unit testing, interface testing, or UI testing, depending on the actual situation and the layered automation testing strategy. This step may be reversed in order with the previous step.
- Defect statistics and analysis: Statistical analysis of the number and severity of defects, their year-on-year or month-on-month trends, analysis of the root causes of defects using fishbone diagrams and 5-Why method, identification of the stage where defects are introduced, and analysis of the effectiveness of previous defect prevention measures.
- Develop improvement measures to prevent defects: Based on the results of defect statistics and analysis, develop corresponding, feasible improvement measures to prevent defects from recurring.
- Regularly review and check the improvement status: Based on the statistical analysis of defects, regularly review the series of activities in defect management, check the implementation of improvement measures, continuously optimize the defect management process, and better prevent defects.
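The stages above can be read as a simple state machine. Here is a toy sketch; the abbreviated stage names and the transition graph are my reading of the list, not a standard.

```python
# Toy sketch of the defect life cycle above as allowed transitions.
# Stage names abbreviate the list; the graph is illustrative.
LIFECYCLE = {
    "discovered": {"diagnosed"},
    "diagnosed":  {"recorded"},
    "recorded":   {"triaged"},
    "triaged":    {"fixing", "deferred", "rejected"},
    "fixing":     {"verifying"},
    "verifying":  {"automated", "fixing"},  # reopen if not actually fixed
    "automated":  {"analyzed"},
    "analyzed":   {"prevention"},
    "prevention": {"review"},
}

def can_move(current, nxt):
    """True when `nxt` is a legal next stage from `current`."""
    return nxt in LIFECYCLE.get(current, set())

assert can_move("triaged", "fixing")
assert not can_move("discovered", "fixing")  # must diagnose and record first
```

Making the transitions explicit is what distinguishes managing defects from merely listing them.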
Regarding defect management and analysis, I have previously written relevant articles. Friends are welcome to read them:
- "Effective Management of Software Defects"
- "How Defect Analysis Helps Build Quality In"
- "All the trouble caused by dirty data"
05 Fifth Basic Responsibility: Quality Feedback and Risk Identification
The fifth basic responsibility of testing is quality feedback and risk identification.
Testing needs to have a clear understanding of the product's quality status, be able to identify quality risks in a timely manner, and provide feedback to the entire team.
In addition to defect information, there may be many other quality-related data, which can be collected and statistically analyzed. Visualizing this data and presenting it to the team will help team members in different roles better take responsibility for quality. It is also necessary to identify quality risks during the statistical analysis of quality data and provide feedback to the team.
Quality status information may include test coverage, defect-related data, code freeze period length, test waiting time, and other content. Specific information that needs to be collected should be customized according to the actual quality requirements of the project.
It is recommended to conduct periodic quality feedback. Testers should lead the definition of the data to be collected, and developers should work with testers to collect it; developers may also need to participate in the subsequent analysis.
This article is the foundation of building systematic thinking for testing. Starting from the basic responsibilities of testing, it introduces related methods, tools, and practices, and is mainly aimed at junior testers. Intermediate and senior testers can also use it to check whether these basic responsibilities are fulfilled in their own testing systems.
Finally, you are welcome to follow my self-published book "Beyond Testing (in Chinese)", which introduces what testers and QA need to pay attention to beyond the basic responsibilities of testing.