
In the previous blog, we discussed the basic testing traits a tester should possess. Today we’ll focus on the techniques themselves. If you missed it, here’s the link to the prior blog: Testing Versus Checking

We at the Testing Center of Excellence (TCoE) call this process the Five-fold Testing System. Any testing that you do can be described in terms of five dimensions.

  1. Testers – describes “who” is doing the testing
  2. Coverage – describes “what” gets tested (e.g., in function testing, you test every function)
  3. Potential problems – describes “why” you’re testing (i.e., what risk you’re testing for)
  4. Activities – describes “how” you test (e.g., exploratory testing)
  5. Evaluation – describes how to tell whether the test passed or failed

Most testing involves all five dimensions. You can combine a technique focused on one dimension with techniques focused on the other dimensions to achieve the result you want.

Let’s see how this might work in action.

Someone might ask you to do function testing (thoroughly test every function). This tells you what to test. You still must decide who does the testing, what types of bugs you’re looking for, how to test each function, and how to decide whether the program passed or failed.

Someone else might ask you to do extreme-value testing (test for error handling when you enter extreme values into a variable). This tells you what types of problems to look for. You still must decide who will do the testing, which variables to test, how to test them, and how you’ll evaluate the results.

Finally, someone might also ask you to do beta testing (have external representatives of your market test the software). This tells you who will test. You still must decide what to tell them about the product (and how much to tell them), what parts of the product they should look at, and what problems they should look for (and which problems they should ignore). In some beta tests, you might also tell them specifically how to recognize certain types of problems, and you might ask them to perform specific tests in specific ways.

Below are the different techniques utilized in the Testing Center of Excellence (TCoE).

People-based techniques focus on “who” does the testing.

User testing. Testing with the types of people who typically would use your product. User testing might be done at any time during development, at your site or at theirs, in carefully directed exercises or at the user’s discretion. Some types of user testing, such as task analyses, look more like joint exploration (involving at least one user and at least one member of your company’s testing team) than like testing by one person.

Alpha testing. In-house testing performed by the test team (and possibly other interested, friendly insiders).

Beta testing. A type of user testing that uses testers outside of your organization. The product under test is typically very close to completion. Many testing centers release pre-release code to customers as a beta.

Bug bashes. In-house testing using programmers, technical advisors, or anyone else who is available. A typical bug-bash lasts a half day and is done when the software is close to being ready to release.

Subject-matter expert testing. Give the product to an expert on some issue addressed by the software and request feedback (e.g., bugs, criticisms, and compliments).

Paired testing. Two testers work together to find bugs. Typically, they share one computer and trade control of it while they test.

Coverage-based techniques focus on “what” gets tested.

Function testing. Test every function, one by one. Test the function thoroughly, to the extent that you can say with confidence that the function works. White box function testing is usually called unit testing and concentrates on the functions as you see them in the code.
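As a rough illustration, here is a minimal unit-test sketch in Python, assuming a hypothetical discount() function as the unit under test (the function and its expected values are made up for the example):

```python
import unittest

def discount(price: float, rate: float) -> float:
    """Hypothetical function under test: apply a percentage discount."""
    return round(price * (1 - rate), 2)

class DiscountFunctionTest(unittest.TestCase):
    """Test one function thoroughly before moving on to the next."""

    def test_typical_value(self):
        self.assertEqual(discount(100.0, 0.25), 75.0)

    def test_zero_rate_leaves_price_unchanged(self):
        self.assertEqual(discount(100.0, 0.0), 100.0)

    def test_full_rate_gives_zero(self):
        self.assertEqual(discount(100.0, 1.0), 0.0)

if __name__ == "__main__":
    unittest.main()
```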

Equivalence class analysis. An equivalence class is a set of values for a variable that you consider equivalent. Test cases are equivalent if you believe that:

(a) they all test the same thing

(b) if one of them catches a bug, the others probably will too

(c) if one of them doesn’t catch a bug, the others probably won’t either

Once you’ve found an equivalence class, test only one or two of its members.
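As a sketch, assume a hypothetical field that accepts order quantities from 1 to 100 (the validator and the chosen representatives are illustrative, not a real API):

```python
# Hypothetical validator under test: accepts order quantities from 1 to 100.
def accepts_quantity(qty: int) -> bool:
    return 1 <= qty <= 100

# One or two representatives per equivalence class, not every possible value.
valid_values = [7, 64]    # any quantity in 1..100 is expected to behave alike
too_small_values = [-3]   # quantities below 1 should all be rejected alike
too_large_values = [250]  # quantities above 100 should all be rejected alike

for qty in valid_values:
    assert accepts_quantity(qty), f"{qty} should be accepted"
for qty in too_small_values + too_large_values:
    assert not accepts_quantity(qty), f"{qty} should be rejected"
```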

Boundary testing. An equivalence class is a set of values. If you can map them onto a number line, the boundary values are the smallest and largest members of the class. In boundary testing, you test these values, and you also test the boundary values of the nearby classes: the value just smaller than the smallest member of the class you’re testing and the value just larger than its largest member.

For example, consider an input field that accepts integer values between 10 and 50. The boundary values of interest are 10 (smallest), 9 (largest integer that is too small), 50 (largest), and 51 (smallest integer that is too large).
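A minimal sketch of those four boundary checks, assuming a hypothetical accepts_value() validator for the 10-to-50 field described above:

```python
# Hypothetical field under test: accepts integers from 10 to 50 inclusive.
def accepts_value(n: int) -> bool:
    return 10 <= n <= 50

# Boundary values of the valid class and of the neighbouring classes.
boundary_cases = {
    9: False,   # largest integer that is too small
    10: True,   # smallest valid value
    50: True,   # largest valid value
    51: False,  # smallest integer that is too large
}

for value, expected in boundary_cases.items():
    assert accepts_value(value) == expected, f"boundary case {value} failed"
```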

Logic testing. Variables have relationships in the program. For example, the program might have a decision rule that says that if PERSON-AGE is greater than 50 and if SMOKER is YES, then OFFER-INSURANCE must be NO. The decision rule expresses a logical relationship. Logic testing attempts to check every logical relationship in the program.
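A minimal sketch of checking that decision rule, assuming a hypothetical offer_insurance() implementation and exercising every combination of its two conditions:

```python
# Hypothetical implementation of the decision rule described above.
def offer_insurance(person_age: int, smoker: bool) -> bool:
    return not (person_age > 50 and smoker)

# Logic testing: exercise every combination of the conditions in the rule.
cases = [
    # (person_age, smoker, expected OFFER-INSURANCE)
    (60, True,  False),  # both conditions true -> must not offer
    (60, False, True),   # only the age condition true
    (40, True,  True),   # only the smoker condition true
    (40, False, True),   # neither condition true
]

for age, smoker, expected in cases:
    assert offer_insurance(age, smoker) == expected, (age, smoker)
```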

Specification-based testing. Testing focused on verifying every factual claim that is made about the product in the specification. This often includes every claim made in the manual, in marketing documents or advertisements, and in technical support literature sent to customers.

Combination testing. Testing two or more variables in combination with each other. Most benefits provided by the program are based on the interaction of many variables. If you don’t vary them jointly in your tests, you’ll miss errors that are triggered by specific combinations of values rather than by individual values.
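As a sketch, assuming a hypothetical render() function whose behaviour depends on three variables, the combinations can be generated rather than listed by hand (an all-pairs tool would trim the set further):

```python
import itertools

# Hypothetical function under test: its behaviour depends on three variables.
def render(fmt: str, locale: str, paper: str) -> str:
    return f"{fmt}-{locale}-{paper}"

formats = ["pdf", "html"]
locales = ["en_US", "de_DE", "ja_JP"]
papers = ["A4", "Letter"]

# Full cross-product shown for clarity; vary the variables jointly, not one at a time.
for fmt, locale, paper in itertools.product(formats, locales, papers):
    output = render(fmt, locale, paper)
    assert output, f"empty output for combination {(fmt, locale, paper)}"
```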

Activity-based techniques focus on “how” you test.

Regression testing. Regression testing involves reuse of the same tests, so you can retest (with these) after a change is made. There are three kinds of regression testing:

  1. Bug fix regression is done after reporting a bug and hearing later that it’s fixed. The goal is to prove that the fix is good.
  2. Old bugs regression aims to prove that a change to the software has caused an old bug fix to become unfixed.
  3. Side-effect regression, also called stability regression, involves retesting of substantial parts of the product. The goal is to prove that the change has broken something that used to work.
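A minimal bug fix regression sketch, assuming a hypothetical parse_amount() function and a hypothetical defect number; the test reproduces the reported input and stays in the suite so the fix cannot silently come undone:

```python
# Hypothetical bug fix regression test: added when (hypothetical) defect 1234
# was reported, kept in the suite so the old bug cannot quietly return.
def parse_amount(text: str) -> float:
    """Hypothetical function that once failed on inputs with a leading '+'."""
    return float(text.lstrip("+"))

def test_defect_1234_leading_plus_sign():
    # Reproduces the exact input from the original bug report.
    assert parse_amount("+12.50") == 12.50

if __name__ == "__main__":
    test_defect_1234_leading_plus_sign()
    print("bug fix regression passed")
```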

Smoke testing. This type of side-effect regression testing is done with the goal of proving that a new build is worth testing. This is also called build verification testing (BVT). Smoke tests are often automated and standardized from one build to the next. They test things you expect to work, and if they don’t, you’ll suspect that the program was built with the wrong file or that something basic is broken.
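A minimal smoke-test sketch; the package name and entry point below are placeholders for whatever your build actually ships, not a real API:

```python
# Rough build verification (smoke) test. "myproduct" and start_session()
# are placeholders, not a real library.
import importlib

def smoke_test() -> bool:
    try:
        app = importlib.import_module("myproduct")  # placeholder package name
        session = app.start_session()               # placeholder entry point
        return session.is_alive()                   # does it come up at all?
    except Exception:
        # Wrong file in the build, or something basic is broken: reject the build.
        return False

if __name__ == "__main__":
    # Exit code signals whether the build is worth deeper testing.
    raise SystemExit(0 if smoke_test() else 1)
```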

Exploratory testing. We expect the tester to learn, throughout the project, about the product, its market, its risks, and the ways in which it has failed previous tests. New tests are constantly created and used. They’re more powerful than older tests because they’re based on the tester’s continuously increasing knowledge.

Scenario testing. A scenario test normally has the following attributes:

  • The test must be realistic. It should reflect something that customers would do.
  • The test should be complex, involving several features in a way that challenges the program.
  • It should be easy and quick to tell whether the program passed or failed the test.
  • A stakeholder is likely to argue vigorously that the program should be fixed if it fails this test.

Tests derived from use cases are also called scenario tests or use case flow tests. A test with the above attributes is persuasive and, if the program fails it, will probably yield a bug fix.

Installation testing. Install the software in the various ways and on the various types of systems on which it can be installed. Check which files are added or changed on disk. Does the installed software work? What happens when you uninstall?

Load testing. The program or system under test is attacked by being run on a system that is facing many demands for resources. Under a high enough load, the system will probably fail, but the pattern of events leading up to the failure points to vulnerabilities in the software or system under test that might be exploited under more normal use.
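A minimal load-testing sketch, with a stand-in handle_request() in place of the real system under test; many concurrent clients are simulated and failures are counted:

```python
# Minimal load-testing sketch: many threads make demands at once and
# failures are counted. handle_request() is a stand-in, not the real system.
import concurrent.futures

def handle_request(n: int) -> int:
    """Stand-in for the real system under test."""
    return sum(range(n))  # placeholder work

def run_load(clients: int = 200, requests_per_client: int = 50) -> int:
    failures = 0
    with concurrent.futures.ThreadPoolExecutor(max_workers=clients) as pool:
        futures = [pool.submit(handle_request, 10_000)
                   for _ in range(clients * requests_per_client)]
        for future in concurrent.futures.as_completed(futures):
            try:
                future.result(timeout=5)
            except Exception:
                failures += 1  # the pattern of failures points at weak spots
    return failures

if __name__ == "__main__":
    print("failed requests:", run_load())
```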

Long sequence testing. Testing is done overnight or for days or weeks. The goal is to discover errors that short sequence tests will miss. Examples of the errors that are often found this way are memory leaks, stack overflows, and bad interactions among more than two features. This is sometimes called duration testing, reliability testing, or endurance testing.
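A minimal long-sequence sketch: repeat a stand-in operation until a deadline and watch traced memory for growth that might indicate a leak (the duration here is kept tiny so the example finishes quickly; a real run would last hours or days):

```python
# Minimal long-sequence (duration) test sketch using the standard library.
import time
import tracemalloc

def operation_under_test() -> None:
    """Stand-in for the product operation repeated over and over."""
    data = [x * x for x in range(1000)]
    del data

def run_duration_test(hours: float = 0.001) -> None:
    tracemalloc.start()
    baseline = tracemalloc.get_traced_memory()[0]
    deadline = time.monotonic() + hours * 3600
    while time.monotonic() < deadline:
        operation_under_test()
    current = tracemalloc.get_traced_memory()[0]
    print(f"memory growth after the run: {current - baseline} bytes")

if __name__ == "__main__":
    run_duration_test()
```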

Performance testing. These tests are usually run to determine how quickly the program runs, to decide whether optimization is needed. But the tests can expose many other bugs. A significant change in performance from a previous release can indicate the effect of a coding error.

For example, if you test how long a simple function test takes to run today and then run the same test on the same machine tomorrow, you’ll probably check with the programmer or write a bug report if the test runs more than three times faster or slower. Either case is suspicious because something fundamental about the program has been changed.
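A minimal sketch of that comparison, with a stand-in function and an assumed saved measurement from the previous run:

```python
# Minimal performance-comparison sketch: time the same operation today and
# flag it if it is more than three times faster or slower than last time.
import time

def function_under_test() -> int:
    """Stand-in for the simple function test being timed."""
    return sum(x * x for x in range(100_000))

def timed_run() -> float:
    start = time.perf_counter()
    function_under_test()
    return time.perf_counter() - start

yesterday_seconds = 0.010  # assumed saved measurement from the previous run
today_seconds = timed_run()

ratio = today_seconds / yesterday_seconds
if ratio > 3 or ratio < 1 / 3:
    print(f"Suspicious change in run time (x{ratio:.1f}); worth a bug report.")
```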

Evaluation-based techniques focus on how to tell whether the test passed or failed.

Self-verifying data. The data files you use in testing carry information that lets you determine whether the output data is corrupt.
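A minimal self-verifying data sketch: each record carries a checksum of its own payload, so corrupted output can be detected without an external oracle (the record layout is an assumption for the example):

```python
# Minimal self-verifying data sketch using a per-record checksum.
import hashlib
import json

def make_record(payload: dict) -> dict:
    body = json.dumps(payload, sort_keys=True)
    return {"payload": payload,
            "checksum": hashlib.sha256(body.encode()).hexdigest()}

def is_intact(record: dict) -> bool:
    body = json.dumps(record["payload"], sort_keys=True)
    return hashlib.sha256(body.encode()).hexdigest() == record["checksum"]

record = make_record({"order": 42, "total": 19.99})
assert is_intact(record)             # passes for untouched data
record["payload"]["total"] = 0.0     # simulate corruption in transit
assert not is_intact(record)         # the corruption is detected
```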

Comparison with saved results. Regression testing in which “pass or fail” is determined by comparing the results you got today with the results from last week. If the result was correct last week and it’s different now, the difference might reflect a new defect.
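A minimal sketch of comparing today’s output with a saved baseline, assuming a hypothetical saved_results.json file from a known-good run:

```python
# Minimal comparison-with-saved-results sketch: today's output is compared
# with a result file saved from an earlier, known-good run.
import json
from pathlib import Path

def produce_report() -> dict:
    """Stand-in for the program output being regression-tested."""
    return {"rows": 120, "status": "ok"}

baseline_path = Path("saved_results.json")  # hypothetical baseline file
current = produce_report()

if baseline_path.exists():
    baseline = json.loads(baseline_path.read_text())
    if current != baseline:
        print("Mismatch with saved results; the difference may be a new defect.")
else:
    baseline_path.write_text(json.dumps(current))  # first run records the baseline
```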

Comparison with a specification or other authoritative document. A mismatch with the specification is (probably) an error.

Consistency. Consistency is an important criterion for evaluating a program. Inconsistency may be a reason to report a bug, or it may reflect intentional design variation.

  • Consistent with history. Present function behavior is consistent with past behavior.
  • Consistent with user’s expectations. Function behavior is consistent with what we think users want.

In addition to the above techniques, the Testing Center of Excellence (TCoE) now offers “accessibility-as-a-service” to the entire organization. This means that we can assist in ensuring that our digital products are accessible to all individuals, including those with disabilities. If you are interested in utilizing our accessibility testing services, please contact us and we’d be happy to provide you with more information.
