Writing dynamic unit tests (FlexUnit) is confusing. How can I make them more modular?

I have been reading here for help for a long time, but now I can't find an answer to my current problem.

Background information

I am writing some unit tests (yay!). I have 40 objects that implement a common interface. One function in this interface takes two parameters, a Rectangle and an Array of Rectangles:

public function foobar(foo:Rectangle, bar:Array/*Rectangle*/):void;

I want to write tests for each of these 40 objects. To cover all possibilities, I need to run tests with variations of foo and variations of bar (in both length and content), so x possible foo values and 1 to x Rectangles in bar.

Each object that implements the interface runs an algorithm that performs some computation on each of the rectangles in the array and changes their properties. Each algorithm gives completely different results.

If I pick 10 possible foo objects and 10 possible objects for the bar array, I end up writing thousands of tests! I don't want to write thousands of tests.


Would it be too backwards for me to write an algorithm that takes the possible objects, runs the tests in all possible configurations, and produces results, and then I go back and check that all the results are correct? Is this just the wrong way to do unit tests?

Is it wrong to run an algorithm that produces results and THEN manually check the output?

My other thought was to write an algorithm that takes a list of possible foo Rectangles and a list of possible Rectangles to be used in bar, and generates XML or JSON in a format that works with my test harness (including the assertions). Since the generating algorithm would not know the correct assertions, I would fill those in manually before running everything through the test harness.

Is this common practice?

thanks for any feedback :)



1 answer

It's not easy to give a good answer without knowing the details of the calculations your implementations perform, but I admire your willingness to unit test, so I'll try my best and hope the answer doesn't get too long. ;)

0. Great expectations?

Honestly, there is probably no answer that matches your question exactly: there are many valid ways to get things right, and the only fundamental rule that applies to unit tests is that they should reliably help you make sure your system is stable. If they don't do that, you shouldn't write them. And if that goal can be achieved by creating an Excel sheet with millions of lines of different combinations of inputs and outputs and feeding it in CSV format into a for loop in a unit test ...

Ok, maybe there is a better way. But ultimately it all depends on how thoroughly you want to do it, and how willing you are to deviate from what you've already done to improve your tests.

1. Get ready for some smart-ass remarks

From what I've read between the lines, you don't spend a lot of time testing, because you wrote your code before writing your tests. Unfortunately, that's really not the best way to do unit testing: every line you add to production code should be covered by a failing unit test before you even write it. This is the only way you can always be sure that your system is working - because it is always tested! Sounds tedious? It isn't, once you get used to it.

I won't bore you too much with the fundamentals, but if you are really serious about unit testing, let me recommend that you start applying TDD to all future projects. To get started, maybe watch the TDD episodes on cleancoders.com - Uncle Bob explains these things way better than I can, and he's fun to watch (his demos are in Java, but that shouldn't be a big problem: the foundational principles of TDD apply to all languages).

In the meantime, I'll make some smart-ass comments based on your question anyway. Sue me ;)

2. Smart-ass note #1: How to test?

Keep in mind that the purpose of your tests is to prove that the code under test works correctly, not to repeat it for every possible combination of arguments. You should always keep the number of assertions down to the minimum required to validate your code.

So the answer to your first question is: you only need one test to prove the correctness of each algorithm you test. Various combinations of input and output values can be used for assertions within this test.
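As a sketch of what that could look like in FlexUnit 4 (the implementation class AlgorithmOneImpl and the expected values are made-up placeholders; the real expectations depend entirely on your algorithm):

```actionscript
import flash.geom.Rectangle;
import org.flexunit.asserts.assertEquals;

public class AlgorithmOneImplTest {
    [Test]
    public function foobarHandlesSeveralInputCombinations():void {
        var impl:AlgorithmOneImpl = new AlgorithmOneImpl(); // hypothetical implementation
        // Several input combinations, all asserted within this single test.
        var foo:Rectangle = new Rectangle(0, 0, 100, 100);
        var bar:Array = [new Rectangle(0, 0, 10, 10), new Rectangle(5, 5, 20, 20)];
        impl.foobar(foo, bar);
        // Placeholder expectations - replace with the values your algorithm produces:
        assertEquals(100, bar[0].width);
        assertEquals(100, bar[1].width);
    }
}
```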

The only reason for adding extra tests for each algorithm is failure behavior, i.e. what happens if you pass null as an argument, or anything else that violates constraints. Every error you throw on failure should be covered in a separate test.
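In FlexUnit 4 that could look like this (assuming, hypothetically, that your implementation throws an ArgumentError when bar is null):

```actionscript
import flash.geom.Rectangle;

public class AlgorithmOneImplFailureTest {
    // FlexUnit 4's "expects" attribute makes the test pass only if the named error is thrown.
    [Test(expects="ArgumentError")]
    public function foobarRejectsNullBar():void {
        var impl:AlgorithmOneImpl = new AlgorithmOneImpl(); // hypothetical implementation
        impl.foobar(new Rectangle(0, 0, 10, 10), null);
    }
}
```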

However, it is a little more difficult to choose at what level of abstraction to start writing your tests. There is usually no need to write a test for every method in a class, and especially not for private ones. This is another reason to use TDD: it makes you think about what you are trying to do from the outside, i.e. you test what a part of your system is supposed to do instead of testing every implementation detail. When you test before you code, it is also easy to add a test here and there as your program grows and things get complicated; it is always much harder to do after the fact.

3. Smart-ass note #2: What to test?

The goal of your program design should be to keep your units as separate from the other parts of the system as possible. This means that applying a combination of things to another combination of things in a single unit is probably not good design. You should be able to test only the code implemented in the unit under test, in isolation from everything else. That means:

  • make sure each method you test only does one thing (!), and

  • everything else needed in that method must either be passed in as arguments or provided to the class as field variables - let me rephrase that: there is no object creation inside your method, unless it is a temporary variable or a return value! External dependencies must be replaced with test doubles when testing the method.

4. Finally, trying to apply this to your problem

Why am I telling you all this? It seems to me that your approach is more like an integration test: there is a black box to test, and a gazillion things can come out of it. This is OK to some degree, but you should still try to make that black box as small as possible.

Now, since I don't know anything about the actual math you do, I'll make a few assumptions here. I'm sorry if they don't fit, but I'll be happy to add or change information if you provide some code samples.

The obvious first guess is that you repeatedly apply the same calculation to all members of the bar array, based on the values of the foo Rectangle's coordinates. This would mean that you are actually doing two things in your method: a) iterating over the bar array, and b) applying a formula:

public function foobar(foo:Rectangle, bar:Array/*Rectangle*/):void {
    for each (var rect:Rectangle in bar) {
        // things done to rect based on foo
    }
}

If so, you can easily improve your architecture. The first step would be to extract the formula:

public function foobar(foo:Rectangle, bar:Array/*Rectangle*/):void {
    for each (var rect:Rectangle in bar) {
        applyFooValuesToRect(foo, rect);
    }
}

public function applyFooValuesToRect(foo:Rectangle, rect:Rectangle):void {
    // things done to rect based on foo
}

You will now see that what you really should be testing is the method applyFooValuesToRect, which suddenly makes it a lot easier to write a test.
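A sketch of such a focused test (the expected coordinates are made up; they depend on what your actual formula does to rect):

```actionscript
import flash.geom.Rectangle;
import org.flexunit.asserts.assertEquals;

public class ApplyFooValuesToRectTest {
    [Test]
    public function movesRectToFooOrigin():void {
        var impl:MyGreatImpl = new MyGreatImpl(); // hypothetical implementation
        var foo:Rectangle = new Rectangle(10, 10, 100, 100);
        var rect:Rectangle = new Rectangle(0, 0, 5, 5);
        impl.applyFooValuesToRect(foo, rect);
        // Assuming, for illustration, that this formula translates rect to foo's origin:
        assertEquals(10, rect.x);
        assertEquals(10, rect.y);
    }
}
```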

I can also assume that there might be variations on the iteration: one implementation applies foo to all items in bar, one checks some criteria and applies the formula only to positive matches, maybe one does a chain of calculations on foo based on each of the values in bar, two formulas might be used instead of one, etc. If any of these apply to your project, you can greatly improve your API and reduce complexity by using the Strategy pattern. For each of the 40 variants, make the actual formula a separate class that implements a common Formula interface:

public interface Formula {
    function applyFooToBar(foo:Rectangle, bar:Rectangle):Rectangle;
}

public class FormulaOneImpl implements Formula {
    public function applyFooToBar(foo:Rectangle, bar:Rectangle):Rectangle {
        // do things to bar
        return bar;
    }
}

public class FormulaTwoImpl implements Formula { ... } // etc.


Now you can test each formula separately and make your assertions on the returned value.
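For example (again, the expected values are placeholders for whatever FormulaOneImpl actually computes):

```actionscript
import flash.geom.Rectangle;
import org.flexunit.asserts.assertEquals;

public class FormulaOneImplTest {
    [Test]
    public function returnsBarTransformedByFoo():void {
        var formula:Formula = new FormulaOneImpl();
        var result:Rectangle = formula.applyFooToBar(
            new Rectangle(0, 0, 10, 10), new Rectangle(0, 0, 5, 5));
        // Assuming, for illustration, that this formula scales bar up to foo's size:
        assertEquals(10, result.width);
        assertEquals(10, result.height);
    }
}
```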

The original class will then have a field variable of type Formula:

public class MyGreatImpl implements OriginalInterface {
    public var formula:Formula;

    public function foobar(foo:Rectangle, bar:Array/*Rectangle*/):void {
        for each (var rect:Rectangle in bar) formula.applyFooToBar(foo, rect);
    }
}

Then you can pass in all sorts of formulas, as long as they implement the interface. As a result, you can now use a mock Formula to test the rest of the algorithm: the mock only has to verify that applyFooToBar was called, and return a canned value that you set up for each assertion. This way you can make sure you are not actually testing the formula when you are testing, for example, the iteration over your array.
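A hand-rolled test double is enough for this (a sketch; the classes would live in separate files, and you could also use a mocking library such as mockolate instead):

```actionscript
import flash.geom.Rectangle;
import org.flexunit.asserts.assertEquals;

// Records every call instead of doing any math, so the test
// only verifies the iteration logic of foobar().
public class RecordingFormula implements Formula {
    public var calls:Array = [];

    public function applyFooToBar(foo:Rectangle, bar:Rectangle):Rectangle {
        calls.push(bar);
        return bar;
    }
}

public class FoobarIterationTest {
    [Test]
    public function appliesFormulaToEveryRect():void {
        var impl:MyGreatImpl = new MyGreatImpl();
        var fake:RecordingFormula = new RecordingFormula();
        impl.formula = fake;
        var bar:Array = [new Rectangle(), new Rectangle(), new Rectangle()];
        impl.foobar(new Rectangle(0, 0, 10, 10), bar);
        assertEquals(3, fake.calls.length); // formula invoked once per element
    }
}
```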

In fact, you can apply this to other things as well: a CriteriaMatcher would also make a good Strategy to start with ...

When you break up your code this way, you may find that you have multiple implementations that rely on the same formula but differ in their iteration logic, etc. This will probably even allow you to reduce the number of implementations of your original interface, because the variations are now encapsulated in separate strategy classes.

Boy, this is a long text. I'll stop writing now. Hopefully this helps - just add a comment or edit your question if I'm too far off with my assumptions. Perhaps we can narrow down a possible solution further.


