
Unit Testing in C++ and Objective-C just got easier


Back in May I hinted that I was working on a unit testing framework for C++. Since then I've incorporated the technique that Kevlin Henney proposed and a whole lot more. I think it's about time I introduced it to the world:

This post is very old now, but is still the first point of contact with Catch for many people. Most of the material here still applies in concept, so is worth reading - but some of the specifics have changed. Please see the tutorial (and other docs) over on GitHub for more up-to-date coverage.

Introducing CATCH

CATCH is a brand new unit testing framework for C, C++ and Objective-C. It stands for 'C++ Automated Test Cases in Headers', although that shouldn't downplay the Objective-C bindings. In fact my initial motivation for starting it was dissatisfaction with OCUnit.

Why do we need another Unit Testing framework for C++ or Objective-C?

There are plenty of unit test frameworks for C++. Not so many for Objective-C - which primarily has OCUnit (although you could also coerce a C or C++ framework to do the job).

They all have their strengths and weaknesses. But most suffer from one or more of the following problems:

  • Most take their cues from JUnit, which is unfortunate as JUnit is very much a product of Java. The idiom-mismatch in C++ is, I believe, one of the reasons for the slow uptake of unit testing and TDD in C++.
  • Most require you to build libraries. This can be a turn off to anyone who wants to get up and running quickly - especially if you just want to try something out. This is especially true of exploratory TDD coding.
  • There is typically a certain amount of ceremony or boilerplate involved. Ironically the frameworks that try to be faithful to C++ idioms are often the worst culprits. Eschewing macros for the sake of purity is a great and noble goal - in application development. For a DSL for testing application code, especially since preprocessor information (e.g. file and line numbers) is required anyway, the extra verbosity seems too high a price to pay to me.
  • Some pull in external dependencies
  • Some involve a code generation step

The list goes on, but these are the criteria that really had me disappointed in what was out there, and I'm not the only one. But can these be overcome? Can we do even better if we start again without being shackled to the ghost of JUnit?

What's the CATCH?

You may well ask!

Well, to start, here's my three step process for getting up and running with CATCH:

  1. Download the headers from github into a subfolder of your project
  2. #include "catch.hpp"
  3. There is no step 3!

Ok, you might need to actually write some tests as well. Let's have a look at how you might do that:

[Update: Since my original post I have made some small, interface breaking, changes - for example the name of the header included below. I have updated this post to reflect these changes - in case you were wondering]

#include "catch_with_main.hpp"

TEST_CASE( "stupid/1=2", "Prove that one equals 2" )
{
    int one = 2;
    REQUIRE( one == 2 );
}

Short and to the point, but this snippet already shows a lot of what's different about CATCH:

  • The assertion macro is REQUIRE( expression ), rather than the, now traditional, REQUIRE_EQUALS( lhs, rhs ), or similar. Don't worry - lhs and rhs are captured anyway - more on this later.
  • The test case is in the form of a free function. We could have made it a method, but we don't need to.
  • We didn't name the function. We named the test case. This frees us from couching our names in legal C++ identifiers. We also provide a longer-form description that serves as an active comment.
  • Note, too, that the name is hierarchical (as would be more obvious with more test cases). The convention is, as you might expect, "root/branch1/branch2/.../leaf". This allows us to easily group test cases without having to explicitly create suites (although this can be done too).
  • There is no test context being passed in here (although it could have been hidden by the macro - it's not). This means that you can freely call helper functions that, themselves, contain REQUIRE() assertions, with no additional overhead. Even better - you can call into application code that calls back into test code. This is perfect for mocks and fakes.
  • We have not had to explicitly register our test function anywhere. And by default, if no tests are specified on the command line, all (automatically registered) test cases are executed.
  • We even have a main() defined for us by virtue of #including "catch_with_main.hpp". If we #include that in just one dedicated cpp file, we would #include "catch.hpp" in our other test case files instead. We could also write our own main that drives things differently.

That's a lot of interesting stuff packed into just a few lines of test code. It's also got wordier than I wanted. Let's take a bit more of a tour by example.

Information is power

Here's another contrived example:

TEST_CASE( "example/less than 7", "The number is less than 7" )
{
    int notThisOne = 7;

    for( int i=0; i < 7; ++i )
        REQUIRE( notThisOne > i+1 );
}

In this case the bug is in the test code - but that's just to make it self contained. Clearly the requirement will be broken for the last iteration of i. What information do we get when this test fails?

    notThisOne > i+1 failed for: 7 > 7

(We also get the file and line number, but they have been elided here for brevity). Note we get the original expression and the values of the lhs and rhs as they were at the point of failure. That's not bad, considering we wrote it as a complete expression. This is achieved through the magic of expression templates, which we won't go into the details of here (but feel free to look at the source - it's probably simpler than you think).

Most of the time this level of information is exactly what you need. However, to keep the use of expression templates to a minimum we only decompose the lhs and rhs. We don't decompose the value of i in this expression, for example. There may also be other relevant values that are not captured as part of the test expression.

In these cases it can be useful to log additional information. But then you only want to see that information in the event of a test failure. For this purpose we have the INFO() macro. Let's see how that would improve things:

TEST_CASE( "example/less than 7", "The number is less than 7" )
{
    int notThisOne = 7;

    for( int i=0; i < 7; ++i )
    {
        INFO( "i=" << i );
        REQUIRE( notThisOne > i+1 );
    }
}

This gives us:

    info: 'i=6'
    notThisOne > i+1 failed for: 7 > 7

But if we fix the test, say by making the for loop go to i < 6, we now see no output for this test case (although we can, optionally, see the output of successful tests too).

A SECTION on specifications

There are different approaches to unit testing that influence the way the tests are written. Each approach requires a subtle shift in features, terminology and emphasis. One approach is often associated with Behaviour Driven Development (BDD). This aims to present test code in a language neutral form - encouraging a style that reads more like a specification for the code under test.

While CATCH is not a dedicated BDD framework it offers several features that make it attractive from a BDD perspective:

  • The hiding of function and method names, writing test names and descriptions in natural language
  • The automatic test registration and default main implementation eliminate boilerplate code that would otherwise be noise
  • Test data generators can be written in a language neutral way (not fully implemented at time of writing)
  • Test cases can be divided and subdivided into SECTIONs, which also take natural language names and descriptions.

We'll look at the test data generators another time. For now we'll look at the SECTION macro.

Here's an example (from the unit tests for CATCH itself):

TEST_CASE( "succeeding/Misc/Sections/nested", "nested SECTION tests" )
{
    int a = 1;
    int b = 2;
    SECTION( "s1", "doesn't equal" )
    {
        REQUIRE( a != b );
        REQUIRE( b != a );

        SECTION( "s2", "not equal" )
        {
            REQUIRE_FALSE( a == b );
        }
    }
}

Again, this is not a great example and it doesn't really show the BDD aspects. The important point here is that you can divide your test case up in a way that mirrors how you might divide a specification document up into sections with different headings. From a BDD point of view your SECTION descriptions would probably be your "should" statements.

There is more planned in this area. For example I'm considering offering a GIVEN() macro for defining instances of test data, which can then be logged.

In Kevlin Henney's LHR framework, mentioned in the opening link, he used SPECIFICATION where I have used TEST_CASE, and PROPOSITION for my top level SECTIONs. His equivalent of my nested SECTIONs are (or were) called DIVIDERs. All of the CATCH macro names are actually aliases for internal names and are defined in one file (catch.hpp). If it aids utility for BDD or other purposes, the names can be aliased differently simply by creating a new mapping file and using that.


There is much more to cover but I wanted to keep this short. I'll follow up with more. For now here's yet another list of some of the key features I haven't already covered:

  • Entirely in headers
  • No external dependencies
  • Even test fixture classes and methods are self registering
  • Full Objective-C bindings
  • Failures (optionally) break into the interactive debugger, if available
  • Floating point tolerances supported in an easy to use way
  • Several reporter classes included - including a JUnit compatible xml reporter. More can be supplied

Are there any features that you feel are missing from other frameworks that you'd like to see in CATCH? Let me know - it's not too late. There are some limiting design goals - but within those there are lots of possibilities!


Reader Comments (70)

Great stuff. Really impressive. The boilerplate overhead always put me off when trying to do TDD in C++

December 29, 2010 | Unregistered CommenterTim Reynolds

Very cool. Coming from a .Net background I see a lot of good info here.

December 29, 2010 | Unregistered CommenterEricHeadspring

is there an explicit switch to run all tests? I see you have it as default with no arguments, but I'd like to be able to call a 'run all tests' with other arguments. Thanks!

December 29, 2010 | Unregistered CommenterAndrew Helfer

Thanks Tim & Eric.

@Andrew: There's not an explicit switch (although it would be trivial to add one). When I say running all tests is the default - I just mean if you don't specify any tests to run. You can freely set other arguments (e.g. which reporter to use) and still have it run all tests. Let me know if you still feel a need for an explicit switch.

I'm hoping to have a chance to work on a bit more of the docs later - including the command line.

December 29, 2010 | Registered CommenterPhil Nash

Hm, I seem to be missing something - how do you trigger a gdb break at the failed test when it fails?

December 30, 2010 | Unregistered CommenterBenn


Make sure you are compiling with the DEBUG symbol, and pass -b on the command line.

December 30, 2010 | Registered CommenterPhil Nash

I've added an initial cut of the command line docs on the wiki. Hopefully this should clarify some of the questions in this thread.

December 30, 2010 | Registered CommenterPhil Nash

I wasn't using -b nor DEBUG before, but even with those and the latest version from git, gdb does not break.

I've tried both this:
gdb ./test
(gdb) run -b
> Program exits with code 01, all 1 tests failed <

And this:
bash$ ./test -b

December 31, 2010 | Unregistered CommenterBenn

Sorry, Benn. I neglected to ask, before: what platform are you on?

DebugBreak() for gcc is currently only implemented for OS X (it should also work for vc++ on Windows).

December 31, 2010 | Registered CommenterPhil Nash

I tried compiling the TestMain.cpp and it ran into some compile errors with RunnerConfig not existing. After fixing the RunnerConfig -> Config issues I am getting a segfault when I compiled it with g++. I then just tried to compile a main.cpp that had #include "catch_default_main.hpp" which also segfaults. Maybe you could drop a make file in Test if there are some assumed CFLAGS or something that need to be compiled with it?

January 5, 2011 | Unregistered CommenterPatrick McElwee

Sorry to hear you're having trouble building.
What did you do to "fix" the RunnerConfig issue you had (was it a path thing?)
You say you got a segfault when you *compiled* - was it the compiler giving the segfault - or do you mean that, having compiled with g++, it then segfaulted when you tried to run it?
Do you have any more information about the segfault?
There should not be any compiler flags required, other than to enable things like exception handling if that's not the default.
I am building using Apple's modified GCC. Which version are you using, and on which platform?

January 5, 2011 | Registered CommenterPhil Nash

Looks like the RunnerConfig was my fault. That should all be fixed now. Still interested to hear back on the segfaults - if they are still there.

January 6, 2011 | Registered CommenterPhil Nash

I've only used it to write a dozen or so tests for a small toy project, but so far, I love it. I used with Boost.Test until now, but this really is so much more convenient to use.

Just thought I'd give you some positive feedback. :)

So far, I think my biggest issue with it is that I find the terminology a bit odd. The SECTION name seems kind of random or unrelated to everything else, and would IMO be better named TEST_CASE. Perhaps the outer TEST_CASE could then be renamed TEST_SUITE? Or simply TEST? Ideally, I think both could be called TEST_CASE so that you could just define a test case, and then nest other test cases freely inside it, but I can't see a way to make that work with C++'s macros.
Anyway, I think TEST_CASE sounds fairly specific or narrow, so I'd expect that to name the innermost macro, with something more general for the outermost one.

But that's just minor nitpicking. Overall, you've easily won me over.

January 9, 2011 | Unregistered Commenterjalf

@jalf Thanks for the comments. Nice to know you're appreciating it.
As for the sections and terminology: I agree that it needs more thought (as I hinted at in the post). I'd ideally like to support a consistent set of macro names for "classic" TDD and BDD approaches - but I keep coming back to the idea of having different names for the two purposes that map onto the same underlying macros - which I don't entirely like either.
One thing I'm considering is limiting the use of sections to functions defined using a different macro (e.g. FIXTURE() or SPECIFICATION() ) - but don't think that could be enforced at compile time unless the fixture function takes some argument that the section depends on. That limits being able to call other functions and have SECTIONs in those, but I'm not sure that's a problem (you could still call other functions from within a SECTION, and have REQUIREs in that function - which is more useful). It just doesn't feel right to pass an argument just to stop something unrelated from compiling!

I'm still thinking in this area - any suggestions are very welcome!

January 10, 2011 | Unregistered CommenterPhil Nash

When restructuring my unit test framework to a similar style (without the really clever stuff you and Kevlin have done!) I kept the idea of optional test & test-fixture setup and teardown functions (implemented using local structs and static methods) because I found it useful to surround them with a try/catch block so that you can handle unexpected exceptions more gracefully. I didn't like the fact that exceptions from the code surrounding the tests were so disruptive. Of course you may have a much better solution already :-)

Admittedly the tests where I've used these are more in the realm of integration tests than pure unit tests (e.g. my COM + WMI wrapper libraries) so the chances of errors in these helper functions are much higher.

January 16, 2011 | Unregistered CommenterChris Oldwood

@Chris. I'm not 100% sure what you mean.
First, class based fixtures are still supported in Catch - in several ways.
Secondly exceptions thrown from within a test case are caught by the framework - either explicitly if you are testing for them - or as a test failure if they propagate out of the test function. Catch wouldn't be a very good name if it didn't ;-)

January 20, 2011 | Registered CommenterPhil Nash

The current git head still shows a number of warnings for me with g++. The compiler flags I use are:

g++ -I../catch -g -fPIC -I. -W -Wall -Wfloat-equal -Wundef -Wshadow -Wpointer-arith -Wcast-qual -Wcast-align -Wwrite-strings -Wconversion -Wsign-compare -Wmissing-noreturn -Wmissing-format-attribute -Wpacked -Winline -Wdisabled-optimization -Wctor-dtor-privacy -Wnon-virtual-dtor -Wreorder -Wold-style-cast -Woverloaded-virtual -ffor-scope -o runtests tests/

January 29, 2011 | Unregistered CommenterWichert Akkerman

Thanks Wichert - all fixed

February 1, 2011 | Registered CommenterPhil Nash

Looks like Windows support is a bit sketchy?

I just downloaded the latest version, and am now getting this error with MSVC:

error C2373: 'DebugBreak' : redefinition; different type modifiers
D:\Program Files\Microsoft SDKs\Windows\v7.0A\include\winbase.h(4550) : see declaration of 'DebugBreak'

Microsoft defines a function DebugBreak which does pretty much the same as asm(int 3) (but it's declared as a function, not a macro, so your #ifndef doesn't detect it)

They also have an intrinsic __debugbreak() which should be preferred instead, btw -- doesn't pollute the call stack like the function does (and inline asm is forbidden on MSVC 64-builds, so __debugbreak() is really the only sane option for MSVC)

Re. naming, I don't like your idea of using a separate macro for tests containing SECTIONS. There seems to be little point, other than making it harder to figure out which macro to use when.

In an ideal world, I'd have liked to see TEST_CASE replace SECTION (so you only have that one macro to worry about, and could just nest a TEST_CASE inside another TEST_CASE to achieve the same effect as SECTION currently allows), but I can't see how that could be implemented in practice.

Still, I'd go for the simplest possible setup with as few macros as possible, each being as general as they can. I don't really see why SECTION needs to look so special from the users point of view. I realize the implementation is different, but as a user, I just think of it as a "nested test case". Am I wrong to think of it like that?

Otherwise, I'd consider simply renaming them NESTED_TEST or something like that.

February 12, 2011 | Unregistered Commenterjalf

@jalf: You were right about the Windows support. All fixed now. Thanks. Hoping to get time to finish setting up CI-in-the-cloud (on Amazon EC2) to minimise such breakages going forward.

I also used your __debugbreak suggestion, thanks for that.

As for SECTIONs, yeah I'm still musing over that one.
You say, "I don't really see why SECTION needs to look so special from the users point of view". That depends on your use case. If you're just writing nested test cases, or using them as test cases within a "fixture", then I agree. But if you're using them in a BDD way I'm not so sure. Obviously I appreciate all the perspectives I can get on that - so thanks for your comments.

February 17, 2011 | Registered CommenterPhil Nash
