
Tuesday, Dec 28, 2010

Unit Testing in C++ and Objective-C just got easier


Back in May I hinted that I was working on a unit testing framework for C++. Since then I've incorporated the technique that Kevlin Henney proposed and a whole lot more. I think it's about time I introduced it to the world:

[Update]
This post is very old now, but is still the first point of contact with Catch for many people. Most of the material here still applies in concept, so is worth reading - but some of the specifics have changed. Please see the tutorial (and other docs) over on GitHub for more up-to-date coverage.

Introducing CATCH

CATCH is a brand new unit testing framework for C, C++ and Objective-C. It stands for 'C++ Adaptive and Automated Test Cases in Headers', although that shouldn't downplay the Objective-C bindings. In fact my initial motivation for starting it was dissatisfaction with OCUnit.

Why do we need another Unit Testing framework for C++ or Objective-C?

There are plenty of unit test frameworks for C++. Not so many for Objective-C - which primarily has OCUnit (although you could also coerce a C or C++ framework to do the job).

They all have their strengths and weaknesses. But most suffer from one or more of the following problems:

  • Most take their cues from JUnit, which is unfortunate as JUnit is very much a product of Java. The idiom-mismatch in C++ is, I believe, one of the reasons for the slow uptake of unit testing and TDD in C++.
  • Most require you to build libraries. This can be a turn-off to anyone who wants to get up and running quickly - especially if you just want to try something out. This is especially true of exploratory TDD coding.
  • There is typically a certain amount of ceremony or boilerplate involved. Ironically the frameworks that try to be faithful to C++ idioms are often the worst culprits. Eschewing macros for the sake of purity is a great and noble goal - in application development. But for a DSL for testing application code, especially since preprocessor information (e.g. file and line numbers) is required anyway, the extra verbosity seems too high a price to pay to me.
  • Some pull in external dependencies
  • Some involve a code generation step

The list goes on, but these are the criteria that really had me disappointed in what was out there, and I'm not the only one. But can these be overcome? Can we do even better if we start again without being shackled to the ghost of JUnit?

What's the CATCH?

You may well ask!

Well, to start, here's my three step process for getting up and running with CATCH:

  1. Download the headers from GitHub into a subfolder of your project
  2. #include "catch.hpp"
  3. There is no step 3!

Ok, you might need to actually write some tests as well. Let's have a look at how you might do that:

[Update: Since my original post I have made some small, interface-breaking changes - for example, the name of the header included below. I have updated this post to reflect these changes - in case you were wondering]

#include "catch_with_main.hpp"

TEST_CASE( "stupid/1=2", "Prove that one equals 2" )
{
    int one = 2;
    REQUIRE( one == 2 );
}

Short and to the point, but this snippet already shows a lot of what's different about CATCH:

  • The assertion macro is REQUIRE( expression ), rather than the now-traditional REQUIRE_EQUALS( lhs, rhs ) or similar. Don't worry - lhs and rhs are captured anyway - more on this later.
  • The test case is in the form of a free function. We could have made it a method, but we don't need to.
  • We didn't name the function. We named the test case. This frees us from couching our names in legal C++ identifiers. We also provide a longer-form description that serves as an active comment.
  • Note, too, that the name is hierarchical (as would be more obvious with more test cases). The convention is, as you might expect, "root/branch1/branch2/.../leaf". This allows us to easily group test cases without having to explicitly create suites (although this can be done too).
  • There is no test context being passed in here (although it could have been hidden by the macro - it's not). This means that you can freely call helper functions that, themselves, contain REQUIRE() assertions, with no additional overhead (the first sketch after this list shows this). Even better - you can call into application code that calls back into test code. This is perfect for mocks and fakes.
  • We have not had to explicitly register our test function anywhere. And by default, if no tests are specified on the command line, all (automatically registered) test cases are executed.
  • We even have a main() defined for us by virtue of #including "catch_with_main.hpp". We would typically #include that in just one dedicated cpp file and #include "catch.hpp" in our other test case files. We could also write our own main that drives things differently (the second sketch below shows one way).
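
As promised, here's a quick sketch of the helper-function point from the list above. The helper itself is my own illustration - its name isn't part of CATCH - but the assertion macros are used exactly as they would be inside a test case:

#include "catch.hpp"

// An illustrative helper. Because there is no test-context object to thread
// through, it can use REQUIRE directly and any failure is reported against
// the test case that called it.
static void requireWithinRange( int value, int min, int max )
{
    REQUIRE( value >= min );
    REQUIRE( value <= max );
}

TEST_CASE( "example/helper assertions", "Assertions can live in helper functions" )
{
    requireWithinRange( 5, 1, 10 );
}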

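And here's the second sketch - supplying your own main. Be aware that the exact incantation for this has changed over CATCH's lifetime; the form below (CATCH_CONFIG_RUNNER plus Catch::Session) is the one used by later releases, so treat it as indicative and check the GitHub docs for your version:

// In exactly one cpp file: tell CATCH not to supply main(), then drive the
// test run ourselves.
#define CATCH_CONFIG_RUNNER
#include "catch.hpp"

int main( int argc, char* argv[] )
{
    // Anything that needs to happen before or after the run goes here.
    return Catch::Session().run( argc, argv );
}
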
That's a lot of interesting stuff packed into the original few lines of test code. It's also got wordier than I wanted, so let's take a bit more of a tour by example.

Information is power

Here's another contrived example:

TEST_CASE( "example/less than 7", "The number is less than 7" )
{
    int notThisOne = 7;

    for( int i=0; i < 7; ++i )
    {
        REQUIRE( notThisOne > i+1  );
    }
}

In this case the bug is in the test code - but that's just to make it self contained. Clearly the requirement will be broken for the last iteration of i. What information do we get when this test fails?

    notThisOne > i+1 failed for: 7 > 7

(We also get the file and line number, but they have been elided here for brevity). Note we get the original expression and the values of the lhs and rhs as they were at the point of failure. That's not bad, considering we wrote it as a complete expression. This is achieved through the magic of expression templates, which we won't go into the details of here (but feel free to look at the source - it's probably simpler than you think).

Most of the time this level of information is exactly what you need. However, to keep the use of expression templates to a minimum we only decompose the lhs and rhs. We don't decompose the value of i in this expression, for example. There may also be other relevant values that are not captured as part of the test expression.

In these cases it can be useful to log additional information. But then you only want to see that information in the event of a test failure. For this purpose we have the INFO() macro. Let's see how that would improve things:

TEST_CASE( "example/less than 7", "The number is less than 7" )
{
    int notThisOne = 7;

    for( int i=0; i < 7; ++i )
    {
        INFO( "i=" << i );
        REQUIRE( notThisOne > i+1  );
    }
}

This gives us:

    info: 'i=6'
    notThisOne > i+1 failed for: 7 > 7

But if we fix the test, say by making the for loop go to i < 6, we now see no output for this test case (although we can, optionally, see the output of successful tests too).

A SECTION on specifications

There are different approaches to unit testing that influence the way the tests are written. Each approach requires a subtle shift in features, terminology and emphasis. One approach is often associated with Behaviour Driven Development (BDD). This aims to present test code in a language neutral form - encouraging a style that reads more like a specification for the code under test.

While CATCH is not a dedicated BDD framework it offers several features that make it attractive from a BDD perspective:

  • The hiding of function and method names, writing test names and descriptions in natural language
  • The automatic test registration and default main implementation eliminate boilerplate code that would otherwise be noise
  • Test data generators can be written in a language neutral way (not fully implemented at time of writing)
  • Test cases can be divided and subdivided into SECTIONs, which also take natural language names and descriptions.

We'll look at the test data generators another time. For now we'll look at the SECTION macro.

Here's an example (from the unit tests for CATCH itself):

TEST_CASE( "succeeding/Misc/Sections/nested", "nested SECTION tests" )
{
    int a = 1;
    int b = 2;
    
    SECTION( "s1", "doesn't equal" )
    {
        REQUIRE( a != b );
        REQUIRE( b != a );

        SECTION( "s2", "not equal" )
        {
            REQUIRE_FALSE( a == b);
        }
    }
}

Again, this is not a great example and it doesn't really show the BDD aspects. The important point here is that you can divide your test case up in a way that mirrors how you might divide a specification document up into sections with different headings. From a BDD point of view your SECTION descriptions would probably be your "should" statements.

There is more planned in this area. For example I'm considering offering a GIVEN() macro for defining instances of test data, which can then be logged.

In Kevlin Henney's LHR framework, mentioned in the opening link, he used SPECIFICATION where I have used TEST_CASE, and PROPOSITION for my top level SECTIONs. His equivalent of my nested SECTIONs are (or were) called DIVIDERs. All of the CATCH macro names are actually aliases for internal names and are defined in one file (catch.hpp). If it aids utility for BDD or other purposes, the names can be aliased differently simply by creating a new mapping file and using that.
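
As a flavour of what such a mapping might look like, here's a sketch. The file name and the exact aliases are my own invention, and I'm layering on the documented macros rather than the internal names, but the principle is the same:

// catch_bdd_names.hpp (hypothetical) - BDD-flavoured aliases for the
// standard CATCH macros.
#include "catch.hpp"

#define SPECIFICATION( name, description ) TEST_CASE( name, description )
#define PROPOSITION( name, description )   SECTION( name, description )

A test file that includes this instead of catch.hpp can then be written in terms of SPECIFICATIONs containing PROPOSITIONs, and everything else - registration, reporting and so on - works just as before.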

CATCH up

There is much more to cover but I wanted to keep this short. I'll follow up with more. For now, here's (yet another) list of some of the key features I haven't already covered:

  • Entirely in headers
  • No external dependencies
  • Even test fixture classes and methods are self registering
  • Full Objective-C bindings
  • Failures (optionally) break into the interactive debugger, if available
  • Floating point tolerances supported in an easy to use way (see the sketch just after this list)
  • Several reporter classes included - including a JUnit-compatible XML reporter. More can be supplied
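
To illustrate the floating point item: in current versions of Catch the tolerance mechanism is spelled Approx, and that's what I'm assuming in this sketch - check the docs if your version differs:

TEST_CASE( "example/approx", "Floating point comparisons with a tolerance" )
{
    double oneThird = 1.0 / 3.0;

    // Approx applies a sensible default tolerance...
    REQUIRE( oneThird == Approx( 0.333333 ) );

    // ...and epsilon() lets you set the relative tolerance explicitly.
    REQUIRE( oneThird == Approx( 0.333 ).epsilon( 0.01 ) );
}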

Are there any features that you feel are missing from other frameworks that you'd like to see in CATCH? Let me know - it's not too late. There are some limiting design goals - but within those there are lots of possibilities!

Friday, May 21, 2010

The Ultimate C++ Unit Test Framework

Last night I saw Kevlin Henney's ACCU London presentation on Rethinking Unit Testing In C++. I had been looking forward to this talk for a while as I had started working on my own C++ unit test framework. I had not been satisfied with any of the other frameworks I found, so decided to write my own. I had a few guiding principles that I felt I had come through on:

  1. I wanted to capture more information than usual. I felt I could capture the expression under test, as written, as well as the important values (that is the values on the LHS and RHS of binary expressions, or just the value for unary expressions).
  2. I wanted the test expressions to be natural C++ syntax. That is I wanted comparisons to use operators such as ==, instead of a macro like ASSERT_EQUALS.
  3. I wanted automatic test registration and descriptive test names. Tests should be implementable as functions or methods.

I called my framework YACUTS (Yet Another C++ Unit Test System) and a typical test looks something like:

YACUTS_FUNCTION( testThatSomethingDoesSomethingElse )
{
	MyClass myObj;
	myObj.setup1();

	ASSERT_THAT( myObj.someValue() ) == CAP( 7 );
	ASSERT_THAT( myObj.someOtherValue() ) == CAP( myObj.someValue() + 3 );
}

If someValue() returned 7 and someOtherValue() returned 11 I'd get a result like:

testThatSomethingDoesSomethingElse failed in expression myObj.someOtherValue() == ( myObj.someValue() + 3 ).
myObj.someOtherValue() = 11, but ( myObj.someValue() + 3 ) = 10

Which I thought was pretty good. I didn't really like the way the expression had to be broken up between the two macros, but thought it a reasonable price to pay for such an unprecedented level of expressiveness. I did think about whether expression templates could help - but didn't see a way around it.

So I sat up straight when Kevlin showed how he'd achieved the same goals with something like the following:

SPECIFICATION( something_that_does_something )
{
	MyClass myObj;
	myObj.setup1();

	PROPOSITION( "values are 7 and 7+3" )
	{
		IS_TRUE( myObj.someValue() == 7 );
		IS_TRUE( myObj.someOtherValue() == myObj.someValue() + 3 );
	}
}

What I'm focusing on here is how he pulled off his IS_TRUE macro. You pass it a complete expression and it decomposes it such that you get the values of LHS and RHS, the original expression as a string, and the evaluated result - but without any additional syntax!

The details of how he achieved this are too much to go into here - and I don't remember them sufficiently to do a good job anyway. But the core trick he used to "grab the first value", as he put it (from there it's just a case of overloading operators to pick up the RHS and the result), was to create a capturing mechanism involving the ->* operator. The reason this is significant is twofold: (1) ->* happens to have the highest precedence of the (overloadable) operators and (2) nobody else uses it (well, almost). As a result it can introduce an expression that captures everything up to the next operator (at the same level).
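
To make that a little more concrete, here's a stripped-down sketch of the idea - emphatically not Kevlin's actual code, nor CATCH's, just my illustration of the mechanism: a capturing object grabs the left-hand value via operator->*, an overloaded comparison then sees both operands and the result, and stringising the macro argument preserves the original expression text.

#include <iostream>

// Holds the left-hand side of the expression once it has been grabbed.
template<typename L>
struct CapturedLhs
{
    L lhs;

    // Overloading the comparison lets us see both operands and the result.
    // A real framework would route this to a reporter rather than printing.
    template<typename R>
    bool operator==( R const& rhs ) const
    {
        bool result = ( lhs == rhs );
        if( !result )
            std::cout << "expanded to: " << lhs << " == " << rhs << "\n";
        return result;
    }
};

// operator->* binds more tightly than ==, <, etc., so in
// `Capturer() ->* a == b` it grabs `a` before the comparison is evaluated.
struct Capturer
{
    template<typename L>
    CapturedLhs<L> operator->*( L const& lhs ) const
    {
        return CapturedLhs<L>{ lhs };
    }
};

// A toy assertion macro built on the trick; #expr preserves the source text.
#define IS_TRUE_SKETCH( expr ) \
    do { \
        if( !( Capturer() ->* expr ) ) \
            std::cout << "  in expression: " << #expr << "\n"; \
    } while( false )

int main()
{
    int six = 6;
    IS_TRUE_SKETCH( six == 7 );  // prints the operand values and the expression
}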

There is more going on here too, which is interesting. Kevlin spent most of his talk building up to the idea of "test cases" being "propositions" in a "specification". The end result is something that grammatically encourages a more declarative, specification driven, flow of assertions. His mechanism also allows him to use strings as test names (or proposition names) and to declare specification scoped variables without the need to setup a class (a specification is really a function). As well as the slight shift in emphasis it also drops a small amount of ceremony, and so is a welcome technique.

Interestingly, although I hadn't fully implemented it, I had experimented with using strings as test names too, even though my unit of test case is still the function. My mechanism was to generate and completely hide the function name, and pass the string to the auto-registration function instead. However to reuse state between tests I still had to declare a class (although tests could be functions or methods), so Kevlin's approach is still an improvement here.

What interested me most was perhaps not what Kevlin did that I couldn't (although that is very interesting), but rather how remarkably similar the rest of the code looked to mine! I know that Sam Saariste has also worked on similar ideas - and Jon Jagger was having some thoughts in the same direction (although not taken as far, I think). It seems we were all converging on not just the same goals but, to a large extent, the same implementation! Given that we were already off the beaten track, that reassures me that there is a naturalness to this progression that transcends our own efforts.

Having said all that, I think I prefer my name, "Yacuts", over Kevlin's LHR (for London Heathrow, in the environs of which most of his work was conceived) :-)

UPDATE:
I have since fleshed out my framework - now called CATCH - and posted an entry on it.