Tuesday, March 31, 2009

Testing Techniques

I've written quite a few tests that look like:

String[][] cases = new String[][] {
    { "something", "expected result" },
    ...
};
for (String[] c : cases)
    assertEquals(c[0], c[1], process(c[0]));


This works well enough, but one drawback is that when one case fails, there's no easy way to "go" to that case to edit it.

The testing people would probably say each case should be a separate test method, but that means a lot of extra method declarations.
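
For comparison, the one-method-per-case style would look something like this (assuming JUnit 4; the class, the method names, and the process() stand-in, which just reverses its input, are made up for illustration):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class ProcessOneMethodPerCaseTest {
    @Test
    public void reversesAbc() {
        assertEquals("cba", process("abc"));
    }

    @Test
    public void reversesHello() {
        assertEquals("olleh", process("hello"));
    }

    // ...and so on, a separate declaration for every case

    // stand-in for the real code under test
    private String process(String input) {
        return new StringBuilder(input).reverse().toString();
    }
}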

So lately I've been writing tests like:

    test("something", "expected result");
    ...

void test(String input, String expected) {
    assertEquals(input, expected, process(input));
}


Written this way, when there's a failure, Eclipse will take me straight to the failing case.
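
Filled out as a complete test class, the pattern looks roughly like this (again assuming JUnit 4; process() here is just a made-up stand-in that reverses its input, so the sketch is self-contained):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class ProcessTest {
    @Test
    public void cases() {
        // one line per case; a failing line shows up in the stack trace,
        // so the IDE can jump straight to it
        test("abc", "cba");
        test("hello", "olleh");
    }

    private void test(String input, String expected) {
        // the input doubles as the assertion message,
        // so the failure report also says which case failed
        assertEquals(input, expected, process(input));
    }

    // stand-in for the real code under test
    private String process(String input) {
        return new StringBuilder(input).reverse().toString();
    }
}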

Another Technique

Sometimes it's useful to do some manual "exploratory" testing. (I know you should program test first, but I'm afraid I don't always manage it.)

For example, while working on jSuneido's expression code generation, I wrote a simple read-eval-print loop so I could type in expressions and have them compile, run, and print the result.

One of the dangers of manual testing is that you end up not writing as many automated tests. To get around this, I had my program write each case to a text file in the correct format to paste straight into my tests.

For example, this session:

> 123 + 456
 => 579
> f = function (x, y) { x + y}; f(123, 456)
 => 579
> "hello world".Size()
 => 11
> q
bye


Added this to the log file:

test("123 + 456", "579");
test("f = function (x, y) { x + y}; f(123, 456)", "579");
test("'hello world'.Size()", "11");
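
The REPL itself isn't shown here, but the idea is roughly the following sketch in plain Java, where eval() is a placeholder for the real compile-and-run step and testcases.txt is a made-up log file name:

import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.Scanner;

public class ExprRepl {
    public static void main(String[] args) throws IOException {
        Scanner in = new Scanner(System.in);
        PrintWriter log = new PrintWriter(new FileWriter("testcases.txt", true));
        while (true) {
            System.out.print("> ");
            if (!in.hasNextLine())
                break;
            String line = in.nextLine().trim();
            if (line.equals("q"))
                break;
            String result = eval(line); // placeholder for compile-and-run
            System.out.println(" => " + result);
            // write the case in paste-ready form, e.g. test("123 + 456", "579");
            log.println("test(" + quote(line) + ", " + quote(result) + ");");
        }
        log.close();
        System.out.println("bye");
    }

    // swap double quotes for single quotes so the case nests
    // inside a Java string literal, as in the log lines above
    private static String quote(String s) {
        return "\"" + s.replace('"', '\'') + "\"";
    }

    // stand-in for compiling and running the expression
    private static String eval(String expr) {
        return expr;
    }
}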

1 comment:

Larry Reid said...

Awesome ideas, especially the second one. I'm going to start thinking of all the ways you could generate tests from ad-hoc testing sessions.

By the way, at SourceMed when we started doing WinForms .NET development, one of the guys found an add-on to NUnit to test the GUI, and it worked on a "record" model, somewhat similar to your idea, I think.