Is it possible to write a unit test that covers everything?

Let's say I have a function

boolean isLessThanTen(int x) {
  if (x < 10) return true;
  return false;
}

Ideally, you would write 2^32 test cases to cover every value from INT_MIN to INT_MAX. Of course, this is not practical.

To make life easier, we write test cases for

  • x < 10: test x = 9, expect true
  • x == 10: test x = 10, expect false
  • x > 10: test x = 11, expect false
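As a minimal sketch, the three boundary cases above might look like this (the function in the question is anonymous, so the name `isLessThanTen` and the tiny test harness are assumptions for illustration):

```java
public class BoundaryTests {
    // The function under test, as in the question (name assumed).
    static boolean isLessThanTen(int x) {
        if (x < 10) return true;
        return false;
    }

    public static void main(String[] args) {
        // One test per equivalence class, right at the boundary.
        check(isLessThanTen(9),   "x < 10 should be true");
        check(!isLessThanTen(10), "x == 10 should be false");
        check(!isLessThanTen(11), "x > 10 should be false");
        System.out.println("all boundary tests passed");
    }

    static void check(boolean ok, String label) {
        if (!ok) throw new AssertionError("failed: " + label);
    }
}
```

In testing terminology this style is called equivalence partitioning combined with boundary-value analysis, which may be part of the vocabulary the question is asking for.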

These test cases are fine, but they do not cover all cases. Let's say one day someone changes the function to

boolean isLessThanTen(int x) {
  if (x == 12) return true;
  if (x < 10) return true;
  return false;
}

All the existing tests will run and still pass. How do we make sure we cover every scenario without going to extremes? Is there a keyword for the problem I'm describing?

+3




2 answers


This is partly a comment and partly an answer because of the way you phrased the question.

A comment

Is it possible to write a unit test that covers everything?

No. Even in your example you are limiting the test cases to 2^32, but what if the code moves to a 64-bit system, and then someone adds a line that needs 2^34 cases, or whatever?

Also, your question tells me you are thinking of static test cases for dynamic code. The code is "dynamic" in the sense that it is modified over time by the programmer; that does not mean the code changes itself at runtime. You should be thinking of dynamic test cases for dynamic code.

Finally, you didn't note whether this is white-, gray-, or black-box testing.

Answer

Let a tool analyze your code and generate the test data.



See: A Survey on Automatic Test Data Generation

You also asked about search keywords.

Here is a Google search that I found valuable:

automatic test case generation

I have never used any of these test tools myself, as I use Prolog DCGs to generate my test cases. In a current project I generate millions of test cases in about two minutes and run them in a few more minutes. Some of the failing test cases are ones I would never have come up with on my own, so some of this might be considered overkill, but it works.

Since many people don't know Prolog DCGs, a similar approach using C# with LINQ is explained by Eric Lippert in "Every Binary Tree There Is".
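The same underlying idea, mechanically generating a large batch of inputs and checking each one against an independent specification, can be sketched in plain Java. The function name, the input range, and the spec below are assumptions for illustration; note how generated inputs catch the sneaky `x == 12` change from the question that the three hand-picked tests miss:

```java
public class GeneratedTests {
    // The modified function from the question, with the hidden special case.
    static boolean isLessThanTen(int x) {
        if (x == 12) return true;
        if (x < 10) return true;
        return false;
    }

    public static void main(String[] args) {
        // Generate every input in a bounded range and compare against
        // the specification: "returns true iff x < 10".
        for (int x = -100_000; x <= 100_000; x++) {
            if (isLessThanTen(x) != (x < 10)) {
                System.out.println("found a failing case: x = " + x);
                return;
            }
        }
        System.out.println("all generated cases passed");
    }
}
// prints "found a failing case: x = 12"
```

Generation only helps when the oracle is written independently of the implementation; here the spec is the one-line predicate `x < 10` rather than the function itself.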

+2




No, there is currently no general algorithm for this that is not extremely computationally intensive (e.g., testing many, many cases), but you can write your unit tests in such a way that they have a higher probability of failing if the method changes. For the example in the question, write a fixed test for x = 10. For the other two cases, first pick a pair of random numbers between 11 and int.Max and test them; then test a pair of random numbers between int.Min and 9. The tests will not necessarily fail after the modification you described, but they are more likely to fail than if you simply used fixed values.
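A minimal sketch of that randomized variant (the function name and harness are assumptions; a fixed seed keeps the run reproducible, though in practice you might log a fresh seed per run so failures can be replayed):

```java
import java.util.Random;

public class RandomizedTests {
    // The original function under test (name assumed).
    static boolean isLessThanTen(int x) {
        if (x < 10) return true;
        return false;
    }

    public static void main(String[] args) {
        Random rng = new Random(42); // fixed seed: reproducible failures

        // The exact boundary is always tested with a fixed value.
        if (isLessThanTen(10)) throw new AssertionError("x == 10 should be false");

        // A pair of random values in [11, Integer.MAX_VALUE] must be false.
        for (int i = 0; i < 2; i++) {
            int x = 11 + rng.nextInt(Integer.MAX_VALUE - 10); // 11 .. MAX_VALUE
            if (isLessThanTen(x)) throw new AssertionError("expected false for x = " + x);
        }

        // A pair of random values in [Integer.MIN_VALUE, 9] must be true.
        for (int i = 0; i < 2; i++) {
            long span = 9L - Integer.MIN_VALUE + 1;                        // range size > int, use long
            int x = (int) (Integer.MIN_VALUE + (long) (rng.nextDouble() * span));
            if (!isLessThanTen(x)) throw new AssertionError("expected true for x = " + x);
        }
        System.out.println("randomized boundary tests passed");
    }
}
```

Against the `x == 12` modification, each run of the upper-range pair has a small chance of drawing 12, so repeated runs over time are more likely to expose the change than fixed values ever would.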

Also, as @GuyCoder pointed out in his excellent answer, even if you tried to do something like this, it is surprisingly difficult (or impossible) to prove that no possible change to the method will break your tests.

Also, keep in mind that no test automation (including unit testing) is a foolproof testing method; even under ideal conditions, you usually cannot prove with 100% certainty that your program is correct. Virtually all approaches to software testing are fundamentally empirical methods, and empirical methods cannot provide 100% certainty. (They can achieve significant confidence; many scientific papers work at 95% confidence or higher, sometimes much higher, so in practice the difference may not matter much.)

For example, even if you have 100% code coverage, how do you know there is no bug in your tests? Are you going to write tests for the tests? (That way lies turtles all the way down.)

If you want to get really literal about it, and you buy David Hume's problem of induction, you can't be 100% certain of anything based on empirical testing; just because a test has passed every time you've run it does not mean it will continue to pass in the future. But I digress.



If you're interested, formal verification explores methods for deductively proving that software (or at least certain aspects of it) is correct. The main problem is that it is usually very difficult or impossible to formally verify a complete program of any complexity, especially if you are using third-party libraries that are not themselves formally verified. (That, together with the difficulty of learning the techniques in the first place, is one of the main reasons formal verification has not taken off outside academia and a few very narrow industry applications.)

Final point: software ships with bugs. You would be hard-pressed to find a complex system that was 100% defect-free at the time it was released. As mentioned above, there is currently no known technique that guarantees your testing finds all bugs (and if you invented one, you would become a very rich person), so for the most part you will have to rely on statistical measures to decide whether you have tested adequately.

TL;DR: No, you can't; and even if you could, you still couldn't be 100% sure your software is correct (there might be a bug in the tests, for example). You will also still need testers for the foreseeable future. However, you can write your tests to be more resilient to change.

0








