
Wednesday, Jul 08 2015

A Game of Tag

One of the tent-pole features of Catch is the ability to write test names as free-form strings. When you run a Catch executable from the command line you can specify a test case by name, to run just that one:

./MyTestExe "a very nice test case"

or you can use wildcards to run a group of test cases (or just one with less typing):

./MyTestExe "*very nice*"

If you want to use wildcards but you're not sure what they'll match, you can combine this with the listing option, -l, to see which test cases match the pattern:

./MyTestExe "*very nice*" -l
Matching test cases:
  a very nice test case
  a not very nice test case
2 matching test cases

This is already quite a powerful way to group test cases into ad-hoc "suites". However, we don't want to twist our test names into artificial schemes for this purpose (although, early on, that's exactly what I proposed). Instead Catch allows you to add "tags" to test cases.

TEST_CASE( "a very nice test case", "[nice][good]" ) { /* ... */ }
TEST_CASE( "a not very nice test case", "[nice][bad]" ) { /* ... */ }

Now we can run all tests with a certain tag:

./MyTestExe [good]

or a combination of tags:

./MyTestExe [nice][good]

or with exclusions:

./MyTestExe [nice]~[bad]

and unions are supported with a comma:

./MyTestExe [nice],[pleasant]

Very powerful! And this functionality has been around for a while.

More recent, and less well known (mostly because they weren't documented until recently), is a set of "special tags": Instruction Tags, Hiding Tags, Tag Aliases and some automatically generated tags.

Let's see what they're all about.

Instruction Tags

In general, all tags that start with a symbol are reserved by Catch (or, put another way, user-defined tag names must start with an alpha-numeric character). This allows a nice rich range of namespaces for special tags. Tags that start with the ! character are Instruction Tags. They tell Catch something about the test case they apply to. At the time of writing the following are defined:

  • [!hide] This "hides" the test from the default run (i.e. if you run the test executable without specifying any names or tags). This feature was originally introduced with the [hide] tag (note: no !) - which is still supported, though deprecated. There is also a shortcut form, [.], which we'll revisit in a moment.
  • [!throws] This tells Catch that an exception may be thrown in the course of executing the test - even if it is caught and dealt with. If you've ever tried to track down a rogue exception in your debugger - and so have set the debugger to break on exceptions as they're thrown - you'll know how frustrating all the false positives coming from such tests are! So Catch provides a way to suppress exceptions it is expecting - the -e or --nothrow options on the command line. These already skip over REQUIRE_THROWS... and CHECK_THROWS... assertions. The [!throws] tag covers you for cases where the exception is caught and handled in the code under test (or your test code).
  • [!shouldfail] This tells Catch that you're expecting this test to fail! Furthermore, if it does fail then it should treat that as a pass!
  • [!mayfail] Rather than explicitly inverting the pass/fail logic as the previous tag does, this tag just says that the test may fail but that's ok (although it is still reported). It's also ok if it passes. There's a short sketch of [!throws] and [!mayfail] in use just after this list.
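
For illustration, here's roughly how a couple of these might be used (parseInput() and pingServer() are hypothetical functions, not part of Catch):

TEST_CASE( "parser rejects malformed input", "[parser][!throws]" ) {
    // An exception is thrown (and caught) inside the assertion, so this
    // test is excluded when running with -e / --nothrow
    REQUIRE_THROWS( parseInput( "{unclosed" ) );
}

TEST_CASE( "server responds to ping", "[network][!mayfail]" ) {
    // May fail (e.g. no network available) - reported, but ok either way
    CHECK( pingServer() );
}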

Hiding Tags

We already looked at [!hide] (and the deprecated [hide]) above, and mentioned that [.] was a shortcut for the same.

It turns out that when one of these tags is used it is often combined with another tag that is used when you do want to run the test. The classic example is where you write integration tests in the same executable as unit tests. By default you don't want the integration tests to run because you want the shortest possible path to running just the unit tests. So you hide them but also tag them [integration], or something similar (the word "integration" has no significance to Catch). Pairings like [.][integration] or [.][performance] are frequently found together.

So, as a convenience, Catch now supports . as a tag prefix. The rest of the tag can be completely custom and works exactly like any other normal tag - except that the test is also hidden. Our examples would thus be written as [.integration] and [.performance].
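
For example (hypothetical test cases, showing both spellings):

TEST_CASE( "complete order round-trip", "[.][integration]" ) { /* ... */ }
TEST_CASE( "renders a full frame quickly", "[.performance]" ) { /* ... */ }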

One final point to mention about hiding tags: due to the way they have evolved through a number of forms (including the severely deprecated "./" name prefix), whichever form is used will not only hide the test, but any of the other forms will match it in a tag pattern. E.g. if you tag a test with [.] you can match it with [!hide].

Tag Aliases

As we saw earlier, tags can be combined in fairly complex ways. While this is powerful and flexible, it can be a bit awkward if you often want to use the same tag expression. Wouldn't it be nice if there was a way of writing the expression once then getting Catch to remember it for you - and associate it with an easier to remember name?

Well there is! You can associate any tag pattern with a name that you can use just like any normal tag - except that it must begin with the @ character.

You create a tag alias, in code, using the CATCH_REGISTER_TAG_ALIAS macro. E.g.

CATCH_REGISTER_TAG_ALIAS( "[@not nice]", "~[nice]~[!hide]" );

This registers a tag alias, [@not nice], which, when expanded, will match all tests that are not tagged [nice] but are also not hidden. The second part is important because hidden tests will usually be included any time you use a not expression (~) - the rule is that tests are only hidden if no pattern is specified!
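
Once registered, the alias can be used on the command line just like a normal tag:

./MyTestExe "[@not nice]"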

Also did you notice that we had a space in the tag name? Surprised? I never said that tags could not include spaces. Of course they can.

You can register as many aliases as you like and you can put them anywhere you like (as long as catch.hpp is #included). However I recommend keeping them all in your main source file (the one where you #define CATCH_CONFIG_MAIN, or equivalent) - simply so you only have to look in one place for them.

Filenames As Tags

The newest special tag form is the result of automatically generating a set of tags. The tags all begin with the # character (I've resisted the urge to call them "hash tags"). The rest of the tag is generated from the name of the source file that the test is implemented in. The full path (as reported by __FILE__) is stripped of its directories and extension - so all tests in /Development/Tests/SquirrelTests.cpp would be tagged [#SquirrelTests].

At the time of writing this feature is only available on the develop branch on GitHub - and must be specifically enabled by running with the --filenames-as-tags or -# command line options. It's possible that this situation may change by the time it makes it onto master.
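
With the feature enabled, running all the tests from that file would look something like:

./MyTestExe -# "[#SquirrelTests]"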

The Tag Line

So tags not only provide a rich grouping mechanism in Catch - they also allow you to control some aspects of how Catch runs and treats test cases. Some tags can be generated for you - and some tags can be expanded from simpler forms. We've covered here the complete set of special tags at the time of writing. If you're reading this in the future there may be more - I'll try and be better at keeping the docs up-to-date there. Also any stock price tips you might have from the future would be welcome too.
Thursday, Jul 11 2013

Injecting Singletons in Objective-C Unit Tests

I've promised to write this up a few times now. As I've just given another talk that covers it I thought it was time to make good on that promise.

The topic is the use of singletons in UIKit (and AppKit) and how that makes code using them hard to test. These APIs are riddled with singletons and you can't really avoid them. In case you need convincing that singletons are problematic take this contrived function:

NSString* makeWidget() {
    NSString* colour = 
        [[NSUserDefaults standardUserDefaults] stringForKey: @"defaultColour"];
    return [colour stringByAppendingString: @"Widget"];
}

NSUserDefaults is a singleton - the sole instance of which is returned when you call standardUserDefaults.


A perturbing problem

Now consider how we might test this code. Obviously in an example this trivial there are various ways we could change the code to make the problem go away. Consider this a scaled down example of a problem that may be deeper in the code - perhaps a legacy code-base (or even some third party library!).

A naive test might set the "defaultColour" key in NSUserDefaults prior to calling makeWidget(). The problem with that is that the environment is left in a changed state after the test. Subsequent tests may now pick up a different value if they use NSUserDefaults. Worse: NSUserDefaults is backed by persistent storage that can potentially leave your whole user account in a changed state!

So, at the very least, we should restore the prior value at the end of the test. This leads to further problems: if the test fails, or an exception is otherwise thrown, the clean-up would never run. So we'd need to wrap it all in a @try-@finally too. Then, can we be sure we know what value to restore it to? It's probably nil - but if it's not, the environment is still left in a changed state. So we should capture the prior value first and hold it in a variable.

Now what if you need to set more than one value? Or you change the keys used? We're starting to do a lot of bookkeeping just to compensate for the fact that a singleton is being used. Not only is it ugly, it's increasingly error prone.
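
To see just how much bookkeeping, here's a sketch for a single key (the assertion itself is elided):

NSUserDefaults* defaults = [NSUserDefaults standardUserDefaults];
NSString* prior = [defaults stringForKey: @"defaultColour"]; // capture the prior value first
[defaults setObject: @"Red" forKey: @"defaultColour"];
@try {
    // ... call makeWidget() and assert on the result ...
}
@finally {
    // always restore - distinguishing "no prior value" from "had one"
    if( prior != nil )
        [defaults setObject: prior forKey: @"defaultColour"];
    else
        [defaults removeObjectForKey: @"defaultColour"];
}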

Better if we can avoid this in the first place. If we have the option, prefer to pass dependencies in rather than have the code reach out to these Dependency Singularities. In our example: either pass in the default colour or, failing that, pass in NSUserDefaults itself.

NSString* makeWidget( NSUserDefaults* defaults ) {
    NSString* colour = [defaults stringForKey: @"defaultColour"];
    return [colour stringByAppendingString: @"Widget"];
}

At first this doesn't seem to buy us much. We still need an instance of NSUserDefaults. Even if we alloc-init one we'll get a copy of the global instance. That's better, but we'd still be dependent on the environment and have to take steps to compensate. And in other cases we may not even have that option.


If you can't make it - fake it!

We might not be able to create completely fresh instances of NSUserDefaults - but we can create instances of a stand-in class. Due to Objective-C's dynamic nature we don't even need to subclass - and we only have to implement the methods that are actually called - in this case stringForKey:. We could do that with a Mock Object, or we can build our own Fake. Let's assume you've written a Fake called FakeUserDefaults, which contains an NSMutableDictionary, a means to populate it (perhaps via an initialiser) and an implementation of stringForKey: that looks the key up in the dictionary.
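
A minimal version might look something like this (a sketch of one possible shape - just enough for the test below):

@interface FakeUserDefaults : NSObject
-(id) initWithValue: (NSString*) value forKey: (NSString*) key;
-(NSString*) stringForKey: (NSString*) key;
@end

@implementation FakeUserDefaults
{
    NSMutableDictionary* _values;
}
-(id) initWithValue: (NSString*) value forKey: (NSString*) key {
    if( (self = [super init]) )
        _values = [NSMutableDictionary dictionaryWithObject: value forKey: key];
    return self;
}
-(NSString*) stringForKey: (NSString*) key {
    // the only NSUserDefaults method the code under test actually calls
    return [_values objectForKey: key];
}
@end

Now we can test like this: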

TEST_CASE() {
    id defaults =
        [[FakeUserDefaults alloc] initWithValue: @"Red" 
                                         forKey: @"defaultColour"];
    REQUIRE_THAT( makeWidget( defaults ), StartsWith( @"Red" ) );    
}

Great. That seems to tick all the boxes. We have complete control of the default value and we haven't perturbed our environment. No clean-up is required at the end of the test (not even memory, if we're using ARC).

That's assuming you have the freedom to change the code under test, of course. If makeWidget() was buried deep in some legacy code, for example, it may not be feasible to make such a change (yet). Even when you can make the change, it can be useful to put the test in first, to watch your back while you change it. So if we need to leave the call to [NSUserDefaults standardUserDefaults] baked into the code under test, for whatever reason, what else can we do?


To catch a singleton we must think like a singleton

What we'd like is that, when standardUserDefaults is called on NSUserDefaults deep in the bowels of the code under test, it returns an instance of our fake class instead - but only while we're testing. Again, due to Objective-C's dynamic nature we can achieve this. But it starts to get messier. It involves gritty low-level functions from objc/runtime.h. Can we package that away somewhere?

Of course we can! Enter TBCSingletonInjector. I've uploaded the code to GitHub, but there's actually not much to it. It exposes one public (class) method:

+(void) injectSingleton: (id) injectedSingleton
              intoClass: (Class) originalClass
            forSelector: (SEL) originalSelector
              withBlock: (void (^)(void)) code;

The usage is best explained by example:

TEST_CASE() {
    id defaults =
        [[FakeUserDefaults alloc] initWithValue:@"Red" forKey:@"defaultColour"];

    [TBCSingletonInjector injectSingleton: defaults
                                intoClass: [NSUserDefaults class]
                              forSelector: @selector(standardUserDefaults)
                                withBlock: ^ {
            REQUIRE_THAT( makeWidget(), StartsWith( @"Red" ) );
        } ];
}

Magic! How does it work? It uses a technique known as "method swizzling" (Rubyists and Pythonistas know it as "monkey patching"). In short, we replace a singleton accessor method (such as standardUserDefaults) with one we control (actually another, not otherwise exposed, class method of TBCSingletonInjector). More specifically, we swap the two implementations - so we can swap them back again when we're done. Then we call the code block - all within a @try-@finally - so no matter what happens we always restore everything to its previous state.
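
At its heart is something like this (a sketch of the idea only - injectedSingletonAccessor stands in for the injector's internal class method; see the code on GitHub for the real implementation):

#import <objc/runtime.h>

Method original    = class_getClassMethod( [NSUserDefaults class],
                                           @selector(standardUserDefaults) );
Method replacement = class_getClassMethod( [TBCSingletonInjector class],
                                           @selector(injectedSingletonAccessor) );

method_exchangeImplementations( original, replacement ); // in with the fake...
@try {
    code(); // run the caller's block while the fake is in place
}
@finally {
    method_exchangeImplementations( original, replacement ); // ...and back again
}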

What does the method we swap in do? It returns a global variable.

Wait, what? I thought globals and singletons were basically the same thing? Aren't we out of the frying pan into the fire?

In the war against singletons we must fight them with singletons! Well it's not all bad. This global is only in our test code and we have full control over it. It gets set to our "injected" singleton instance (and set back to nil at the end). It's not perfect - we can only use this implementation to handle one singleton at a time. I've not yet needed to handle more than one but I daresay the implementation could be extended to handle it.

Keep it clean

Since we've hand-rolled our own fake class here (FakeUserDefaults) we can tidy things up further by encapsulating the use of the singleton injector within it. Just adding a method like this should do the trick:

-(void) use: (void (^)(void)) code
{
    [TBCSingletonInjector injectSingleton: self
                                intoClass: [NSUserDefaults class]
                              forSelector: @selector(standardUserDefaults)
                                withBlock: code ];
}

Now the test code becomes:

    FakeUserDefaults* defs = 
        [[FakeUserDefaults alloc] initWithValue: @"Red" 
                                         forKey: @"defaultColour"];
    [defs use:^{
            REQUIRE_THAT( makeWidget(), StartsWith( @"Red" ) );
         }];

Or, if you prefer, even:

    [[[FakeUserDefaults alloc] initWithValue: @"Red" 
                                      forKey: @"defaultColour"]
    	use:^{
            REQUIRE_THAT( makeWidget(), StartsWith( @"Red" ) );
         }];

Not too bad, really. But, still, prefer to avoid the singletons in the first place if you have the option.


Mocking a monster

Rather than hand-rolling a Fake you might prefer to use a Mock Object. I've found OCMock does the job well enough, and I'm sure other mocking frameworks would do at least as well. I prefer to use mocks when I want to test behaviour, though. In this context that might equate to testing that some code under test sets a value in a singleton (e.g. sets a key in NSUserDefaults). The Singleton Injector works just as well for that, of course.

So there we have it. When you really have to deal with the beast you now have some tools to do so. But if you do, please consider it a stop-gap - to be used only until you're able to replace the singularity with something better behaved.

Friday, Jun 28 2013

Catch 1.0


Since Catch first went public - two and a half years ago, at the time of this writing - I've made a point of describing it as a "developer preview". Think of it as you might a Google beta and you won't go far wrong. I did this because I knew that there was a lot that needed doing - and in particular that some of the public interfaces would be subject to change. While I have tried to mitigate exposure to this as much as possible (as we'll see), I wanted to reach a point where I could say things have stabilised and I'm happy to call it a true 1.0 release.

That time has come.

As of today the version of Catch available on the Master branch on GitHub is 1.0 and I would encourage you to update to it if you're already using an older version. But before you do so please read the rest of this post, as there are a few breaking changes that may impact you.

What's new?

Output

One of the biggest changes is in the console reporter's output. This has been rethought from the ground up (perhaps more accurately: it has now been thought through at all). For example, a failure now looks like this:

ClassTests.cpp:28: FAILED:
  REQUIRE( s == "world" )
with expansion:
  "hello" == "world"

That indentation is applied after a wrap too, so long lines of output are much more clearly separated from the surrounding context. This use of indentation has been used throughout.

But there's a lot more to the new look. You'll just have to try it for yourself.

Naming and tags

One of the features of Catch since day one has been the ability to name test cases (and sections) using free-form strings. Except that I then went and imposed a convention on those names so they should be hierarchical. You didn't have to follow the convention but if you did you got the ability to group related tests together in a similar manner to related files in a folder in a file system. When combined with wild-cards this gave a lot of power.

The trouble was that test names needed to be short and simple, otherwise they got very long. So I felt the need to add a second argument where you could supply a longer-form description. Of course this was rarely used (even by me!) and so you'd see a lot of this:

TEST_CASE( "project/widgets/foo/bar/size", "" ) { /*..*/ }

The name doesn't really tell you what the test does, and the description (which should have) goes unused - but must be supplied anyway, so it's just an ugly empty string.

This was not what I signed up for!

Now there is a better way.

It has all the advantages of the old system, but none of the disadvantages - and all without breaking backwards compatibility - so you won't have to go back and rewrite all your existing test cases. Phew!

Test cases can now be tagged. Tags are placed in the second argument (that description argument that nobody was using) and are each enclosed in square brackets. Anything outside of square brackets is still considered the description - although that use is now deprecated. Tags fulfil the same role (and more) as the old hierarchical names, so the name field is now freed up to be more descriptive. The previous example might now look like:

TEST_CASE( "the size changes when the bar grows", "[widgets][foo][bar]" ) 
{ /*..*/ }

But now you can run all tests with the [widgets] tag. Or with the [foo] tag. Or with the [bar] tag. Or all tests tagged [foo] and [bar] but not [widgets]. Tags are much more powerful.

Variadic macros

But if you don't need tags, the second argument is now optional (assuming your compiler supports variadic macros - or, more specifically, Catch knows that it supports them). So TEST_CASEs can be written with one argument - or even none at all (an anonymous test case is given a generated name internally - useful if you're just exploring an idea).
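
So, where variadic macro support is available, all of these are valid:

TEST_CASE( "a name and tags", "[tag1][tag2]" ) { /* ... */ }
TEST_CASE( "just a name" ) { /* ... */ }
TEST_CASE() { /* anonymous - given a generated name internally */ }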

Most, if not all, macros where it makes sense now take advantage of variadic macro support.

If you know that your compiler supports variadic macros, yet Catch is not letting you use them, please let me know and we'll see if we can add the support in.

On your best behaviour

In my first post on Catch, under "A SECTION on specifications", I talked a little about how, while Catch is not a BDD framework, it supports writing in a BDD style. Of note I said:

"There is more planned in this area. For example I'm considering offering a GIVEN() macro for defining instances of test data, which can then be logged."

Well I've taken this further and you can now write tests using the following form:

SCENARIO( "name for scenario", "[optional tags]" ) {
    GIVEN( "some initial state" ) {
        // set up initial state

        WHEN( "an operation is performed" ) {
            // perform operation

            THEN( "we arrive at some expected state" ) {
                // assert expected state
            }
        }
    }
}

You can have as many peer WHEN and THEN and even GIVEN sections as you like. You can even nest them with AND_WHEN and AND_THEN. In fact all of these macros are (currently) just aliases for SECTION. SCENARIO is an alias for TEST_CASE.

Although I mentioned BDD you do not need to assert on behaviour here. I typically use the THEN block to assert purely on state. Nonetheless I often find the GIVEN-WHEN-THEN structure useful in organising my tests. They also read well in the output. Here's an example straight from the self test suite:

-------------------------------------------------------------------------------
Scenario: Vector resizing affects size and capacity
     Given: an empty vector
      When: we reserve more space
      Then: the capacity is increased but the size remains the same
-------------------------------------------------------------------------------

That alignment of the colons of Given, When and Then is very deliberate - and is treated specially in the reporter. If the description strings get very long they will wrap after the colons.

Meet Clara

Catch has always had rich command line support. The first implementation was very ad-hoc, but as it evolved it became more like an embedded library in itself. For this release I have taken that to its logical conclusion and spun the - completely rewritten - command line parser out into its own library. At the time of writing this is still part of the Catch code-base, and depends on a couple of other parts of Catch. The intention is to break those dependencies and extract the code into its own repository on GitHub. But what of the zero-dependency ethos of Catch? Don't worry - the new library will follow the same principle of being header-only and embeddable. So a copy will continue to be included in the Catch code-base and Catch will continue to be distributed as a single header file.

A new library needs a new name. Since it's a Command Line ARgument Assigner I felt Clara was a good name.

As a result of this change some of the specific options have changed (details in the "breaking changes" section). This is to accommodate a closer adherence to POSIX guidelines for command line options. All short-form option names are now single characters and those that take no arguments can be combined after a single -. e.g. to combine -s, -a and -b you can now use -sab.

Options that take an argument always require it (and can only have one). This leads to a couple of interesting consequences: first, the separator between option and argument can be a space, a colon or an equals sign. Secondly, the non-option arguments (test specs) can appear before or after options.

So the following are all equivalent:

./CatchSelfTest "test name" -b -x 1
./CatchSelfTest "test name" -b -x:1
./CatchSelfTest -b -x 1 "test name"
./CatchSelfTest -x=1 "test name" -b

What's up, Doc?

The documentation for Catch, such as it was, had been provided in the wiki for the GitHub repo. There were a couple of drawbacks to this - most significantly it meant I couldn't have different documentation for different branches, or earlier versions. I also find it much easier to edit documents offline.

So I've now moved (and updated) all the existing documentation into markdown files in the repository itself. These are in the /docs folder, but the README.md file in the root links into them, acting as a convenient launch point.

Breaking changes

This section is only really of interest if you are an active user of an earlier version of Catch.

Under new command

As well as the improvements already described, there have had to be some changes to the command line options to accommodate them. The list of available options continues to be available by running with the -?, -h or --help switches. They are more fully described in the documentation, now in the repository (rather than the wiki). The in-depth descriptions have been removed from the code.

But here's a quick summary of the changes to be aware of:

  1. Test case specs (names or wildcarded patterns) and tags are now only specified as application arguments (previously they were introduced using the -t or -g options). In fact -t now means something different!
  2. Listing tests, tags or reporters now all have their own options. Previously you used -l for all of them, with an optional argument to disambiguate. -l no longer takes an argument and just means "list tests". Tags are listed with -t (which formerly meant "run with this/these test case(s)"). Listing reporters is less commonly used so has no short form; they can be listed with --list-reporters.
  3. -nt ("no throw") has become -e (because short-form options are single character only).
  4. -a ? has been split into -a and -x ? (because options may take zero or one arguments - but not both).

Writing your own main()

Catch can provide its own main() function, but if you write your own there were a few points at which you could hook into Catch, with different degrees of control over how it is configured.

This continues to be the case but the interface has completely changed. The new interface is more flexible, safer and better encapsulates things like the clean-up of statically allocated state (important if you do leak-detection).

The new interface is documented in the own-main.md file in the docs folder. It is based around a Session class - of which there must be exactly one instantiation in your code. Within that one instantiation, however, you can invoke Catch test runs as many times as you like (the Session class encapsulates the config and is responsible for the clean-up of statics - in the future those statics may migrate to the Session class itself).
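
A minimal main() along these lines might look like the following (a sketch - own-main.md is the authoritative reference):

#define CATCH_CONFIG_RUNNER
#include "catch.hpp"

int main( int argc, char* argv[] ) {
    Catch::Session session; // there must be exactly one instance

    // apply any command line arguments (returns non-zero on error)
    int returnCode = session.applyCommandLine( argc, argv );
    if( returnCode != 0 )
        return returnCode;

    return session.run(); // can be invoked as many times as you like
}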

Reporters

Catch has a modular reporting system and comes with three reporters bundled (console, xml and JUnit). You can also supply your own reporter which, previously, meant implementing the IReporter interface. This was one area that was often being slightly tweaked - and those tweaks would frequently break implementations of the interface. More often than not the changes were of no interest to client code - but implementors would have to update their reporters anyway!

To make the reporter interface more robust to change I've created a whole new interface, IStreamingReporter. Most of the methods of this new interface take structs instead of lists of arguments. Those structs can now change with little to no impact on client code (depending on the changes, obviously). They are also richer and provide more information than before, so I think we're set for a while now.

To ease the transition for anyone who has already implemented IReporter I've provided the INTERNAL_CATCH_REGISTER_LEGACY_REPORTER macro (which wraps your reporter in the LegacyReporterAdapter adapter class).

At the time of writing, documentation for the new reporter interface is still on its way.

It's not just me

Although I have used the personal pronoun "I" a lot in this post (and I continue to be the benevolent dictator on this project), Catch has greatly benefited from the ongoing contributions of others - whether through pull requests, bug reports, feature requests and other suggestions, actively maintained forks, or just plain evangelising. All of this has been much appreciated and I hope to grow it even more now that we have a stable base. Thanks!

Where to go from here

Catch is hosted on GitHub. The preferred url to follow is catch-lib.net, which redirects there - but may become a landing page in the future (an embryonic version of which is already at builds.catch-lib.net).

There's also a forum on Google Groups.

Wednesday, Dec 26 2012

TDD - is it worth it?

There are many articles on the subject of what TDD is, why and when it is worth it, and which attempt to counter common objections.

This is not one of those.
Well. Maybe a bit.

This is more specifically a response to Marco Arment's comments in his podcast, Build & Analyze, episodes 107 and 108. Episode 108 was the last episode, so there is an air of finality to the subject matter. Many Mac and iOS developers (as well as developers for other platforms) listen to the show and, while you'd hope they can all think for themselves and reach their own conclusions, it's undeniable that opinions, if not already well formed, may easily be swayed by what a respected figure says in a high profile, and well polished, medium. This can be unfortunate. I'm sure Marco didn't intend to do any damage. I've listened to every episode of Build & Analyze for over a year and enjoyed it. This is certainly not a flame against Marco or the show. However I'm going to walk through Marco's comments as a proxy for many who make similar statements. In doing so I quote him liberally, rearranging to fit my narrative. I'm including the time markers so you can easily listen to it in the original context.

Episode 107, from 6:50: The comments in question started in response to a listener question about testing and logging. "Test first development" was mentioned in passing, but Marco groups these all together as "a whole lot of formalism" which "matters a lot in an enterprise environment", but he "[doesn't] really do any of this stuff".

These are quite different things but let's set that aside for the moment. So far he's just told us he doesn't use any of it and it works ok for him. Ok.

But he carries on, "All of this structure and all of this overhead - I wouldn't be able to release this software as one person and have it stay competitive and have it released frequently - there wouldn't be enough time for that".

Now he's making a claim.

Some more choice phrases:

"You've gotta keep moving so fast"
"It isn't worth it in those environments"
"I don't care about all that stuff"

(The last seems to contradict an earlier statement, "Some of this stuff I do regret not knowing what it is", but we'll let that pass)

The first two continue on the theme of needing to move fast and "all those formalisms" "slowing you down".

Ok. Let's stop a moment. I want to be clear again: I'm not trying to attack Marco here. It is not my intention to pick apart the details of his extemporaneous words. He honestly appears to be speaking on the assumption that all the things the listener had asked about are excessive, "formal things", that "matter a lot in […] medical […] or banking" enterprises, or for a "space shuttle" - but are just unnecessary overhead for a one-man independent developer writing, "for apps on phones that don't do anything important".

If you accept the faulty premise the rest of his reasoning makes a lot of sense.

So after I listened to episode 107 I mailed in to explain the difference between low-level, developer-oriented disciplines - TDD in particular - and the higher-level, quality-driven testing approaches more commonly associated with large enterprises or critical industries. I attempted to briefly summarise the benefits that even a one-man developer shop might get from adopting such a discipline.

Unsurprisingly I was not the only one.

Episode 108, from 28:58: Marco brings up the feedback he received, amused to note that everyone pointed out that TDD was "not only about testing, but it's about writing 'self-contained code that makes it easier to refactor…'". He recognises that this (along with other principles of software design) is "good programming practice". He references The Pragmatic Programmer as being a good source of such principles - and it is. However he appears to believe that proponents of TDD think this is "exclusive to TDD" and, unfortunately, this takes him off-track again. (These days I would probably recommend Growing Object-Oriented Software, Guided by Tests and Clean Code as more up-to-date works that cover similar ground.)

Interestingly, The Pragmatic Programmer thoroughly recommends various forms of testing - including TDD (although not by that name - I'm not sure that term had been coined when the book was written). Pertinent to Marco's objections it has this to say:

"A good project may well have more test code than production code. The time it takes to produce this test code is worth the effort. It ends up being much cheaper in the long run" (emphasis mine).

The book also has a section on refactoring. The topic was new (by that name) at the time - it even mentions that the "first major book on refactoring" was being published around the same time. Nor had the central role of refactoring in TDD's "Red, Green, Refactor" cycle been clearly established yet. Nonetheless it makes these two points that cut to the heart of it:

  1. "Don't try to refactor and add functionality at the same time."
  2. "Make sure you have good tests before you begin refactoring. Run the tests as often as possible. That way you will know quickly if your changes have broken anything. […] [Martin] Fowler's point of maintaining good regression tests is the key to refactoring with confidence."

As to the other principles: can they be followed without using a TDD approach? Of course they can. So what was all Marco's feedback about? Simply this: driving your code from tests forces you to make the code testable. That naturally leads to code that is less coupled, has fewer responsibilities per unit and higher cohesion, and is written from the start with the idea that it should be easy to change. As a by-product you get a great set of automated tests that give you the confidence to refactor to improve all these - and other - qualities.

On their own, all of those principles would require a lot of discipline to follow - and it can be hard to measure how well you are doing. With TDD it's usually easier to follow them from the start - and if you don't, the tests will "tell" you by becoming harder to write. TDD is, itself, a discipline - and does require some experience and a prior understanding of the design principles that make it go smoothly. But in my experience it is much easier - and more gratifying - to follow the simple discipline of TDD than to remember to apply all the other individual principles with nothing to guide you.

The result is that your code will tend to be lean and supple and can respond to changes in requirements quickly and easily. You'll spend less time on finding and fixing bugs, and less time on writing code you didn't really need. The time spent writing the tests usually pays for itself almost straight away - often several times over. Saving time and being able to respond to change quickly - are these not the very qualities Marco values so much?

Back in Episode 107, from around 25:57 - right after his first comments on testing - he talks about a situation he got into with The Magazine - a brand new code base for him. "The code was just getting messier and messier". He had a bug but he "couldn't figure out what the heck was going on" and "had all these other things that were problems with that system and it was really clunky". He goes on to say, "it was starting to get to a point where I was fearing adding anything to it. When code gets to a point where it's like a giant pile of messy spaghetti and it feels very fragile and you feel like, 'Oh my God. I need to add an attribute to this - is that going to break anything?'"

It's a shame that he appears to have just dismissed what is probably the best tool we have come up with to date for avoiding this ever happening in the first place.


Thanks to all those who I coerced into reviewing this post for me: Seb Rose, Jez Higgins, Paul Grenyer, Claudius Link, Pal Balog, Hubert Matthews, Peter Pilgrim, Martin Moene, Giovanni Asproni, Yechiel Kimchi, Chris O'Dell, Peter Sommerlad and Graham Lee

Friday, May 27 2011

Unit Testing in C++ and Objective-C just got ridiculously easier still


'Spider Web in Morning Sun' by Rob van Hilten

In my previous post I introduced Catch - my unit testing framework for C++ and Objective-C.

The response was overwhelming. Thanks to all who commented, offered support - and even contributed to the code with fixes and features.

It certainly gave me the motivation to continue active development, and a lot has changed since that post. I'm going to cover some highlights, but first I want to focus on what has been one of the most distinguishing features of Catch - the one that has attracted so much attention - and how I have not rested, but made it even better!

How easy is easy enough?

Back in April I gave a five minute lightning talk on Catch at the ACCU conference in Oxford (I highly recommend the conference). With just five minutes to talk about what makes Catch special, what was I going to cover? The natural operator-based comparison syntax? The use of Sections instead of class-based fixtures? Data generators?

Well I did touch on the first point. But I decided to use the short amount of time to drive home just how quickly and easily you can get up and running with Catch. So after a 30 second intro I went to the GitHub page for Catch (now aliased as catch-test.net), downloaded the zip of the source (over a 3G connection), unzipped and copied to a central location, fired up XCode, started a fresh C++ project, added the path to Catch's headers, #include'd "catch_with_main.hpp", wrote an anonymous test case, compiled and ran it, demonstrated how it caught a bug, fixed the bug and finally recompiled and re-ran to see the bug go away.

Phew! Not bad for five minutes, I thought. And from the feedback I got afterwards it really did drive the point home.

Compare that with my first experience of using Google Test. It took me over an hour to get it downloaded and building in XCode (the XCode projects don't seem to have been maintained recently - so perhaps that is a little unfair). There are other frameworks that I've tried where I have just run out of patience and never got them going.

Of course I'm biased. But I have had several people tell me that they tried Catch and found it to be the easiest C++ Unit Test framework they have used.

But still I wasn't completely satisfied with the initial experience and ease of incorporating Catch into your own projects.

In particular, if you maintain your own open source project and want to bundle it with a set of unit tests (and why wouldn't you?) then it starts to get fiddly. Do you list Catch as an external dependency that the user must install on their own? (No matter how easy they are to install, external dependencies are one of my least favourite things.) Do you include all the source to Catch directly in your project tree? That can get awkward to maintain and makes it look like your project is much bigger than it is. If you host your project on GitHub too (or some other Git-based repository) you could include Catch as a submodule. That's still not ideal, has some of the problems of the first two options, and is not possible for everyone.

There can be only one

Since Catch, as a library, is fully header-only I decided to provide a single-header version that is ideal for direct inclusion in third-party projects.

How did I do this?

Go on guess.

Did you guess that I wrote a simple Python script to partially preprocess the headers so that the #includes within the library are expanded out (just once, of course), leaving the rest untouched?

If you did you're not far off. Fortunately some of the conventions I have used within the source meant I could drastically simplify the script. It doesn't need to be a full C preprocessor. It only needs to understand #include and #ifndef/#endif for include guards. Even those are simplified. The whole script is just 42 lines of code. 42 always seems to be the answer.

The result is https://github.com/philsquared/Catch/blob/master/single_include/catch.hpp

I see no reason why this should not be the default way to use Catch - unless you are developing Catch itself. So I'm now providing this file as a separate download from within GitHub. Think of it as the "compiled" header. The lib file of the header-only world.

Licence To Catch

But Open Source is a quagmire of licensing issues, isn't it?

Well it certainly can be. Those familiar with GPL and similar open source licences may be very wary of embedding one open source library (Catch) within another (their own).

IANAL but my understanding is that, contrary to what might seem intuitive, source code with no license at all can be more dangerous, legally speaking, than if it does have one (and if you thought that sentence was difficult to parse you should try reading a software license).

So Catch is licensed. I've used the Boost license. For a number of reasons:

  • It is very permissive. In particular it is not viral. It explicitly allows the case of including the source of Catch, along with the distribution of your own source code, with no requirements on your own code.
  • It's been around for a while now - long enough, I think, that most people are comfortable with it. I work with banks, who can be very nervous about software licensing issues - especially open source. But every one I have worked at has already got Boost through its compliance process. I'm hoping that will ease any barriers to adoption.
  • I'm familiar with Boost, know many of its contributors personally, and generally trust the spirit of the licence. Boost itself is a very well known and highly respected set of libraries - with very widespread adoption. A large part of Boost is in header-only libraries and people are already comfortable including them in their own projects.

So what's the Catch? The catch is that I retain the right to keep using that joke - well beyond its humorous lifetime.

The important bit:

In short: any open source author who wants to use Catch to write unit tests for their own projects should feel very free to do so and to include the single-header (or full) version of the library in their own repository and along with their distribution.

That fully applies to commercial projects too, of course.

What else?

Here's a quick run down of some of the other changes and features that have gone in:
  • Single evaluation of test expressions. The original implementation evaluated the expression being tested twice - once to get the result, and then again to get the component values. There were some obstacles to getting this to work whilst only evaluating the expression once, but we got there in the end. This is critical if you want to write test expressions that have side-effects (there's a small example just after this list).
  • Anonymous test cases. A little thing, but I find them really handy when starting a new project or component and I'm just exploring the space. The idea is that you don't need to think of a name and description for your test - you can just dive straight in and write code. If you end up with something more like a test case it's trivial to go back and name it.
  • Generators. These are in but not fully tested yet. Consider them experimental - but they are very cool and very powerful.
  • Custom exception handlers. (C++) Supply handlers for your own exception types - even those that don't derive from std::exception, so you can report as much detail as you like when an exception is caught within Catch. I'm especially pleased this went in - given the name of the library!
  • Low build time overhead. I've been aggressive at keeping the compile-time footprint to a minimum. This is one of the concerns when using header-only libraries - especially those with a lot of C++ templates. Catch uses a fair bit of templates, but nothing too deeply recursive. I've also organised the code so that as much of the implementation as possible is included in only one translation unit (the one with main() or the test runner). I think you'll be pushed to notice any build-time overhead due to Catch.
  • Many fixes, refactorings and minor improvements. What project doesn't have them? This is where a lot of the effort - possibly the majority - has gone, though. I've wanted to keep the code clean, well factored, and the overhead low. I've also wanted it to be possible to compile at high warning levels without any noise from Catch. This has been challenging at times - especially after the Single Evaluation work. If you see any Catch-related warnings please let me know.
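
As a tiny illustration of that first point, a side-effecting expression like this is now safe:

TEST_CASE() {
    int i = 0;
    REQUIRE( ++i == 1 ); // ++i is evaluated exactly once
    REQUIRE( i == 1 );   // so i hasn't been bumped twice
}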

Are we there yet?

As well as on my own projects, I've been using Catch on a large-scale project for a bank. I believe it is already more than just a viable alternative to other frameworks.

Of course it will continue to be refined. There are still bugs being found and fixed.

But there are also more features to be added! I need to finish the work on generators. I'd like to add the tagging system I've mentioned before. I need to look at Matchers. Whether Catch provides its own, or whether I just provide the hooks for a third-party library to be integrated, I think Matchers are an important aspect to unit testing.

I also have a stub project for an iPhone test runner - for testing code on an iOS device. Several people have expressed an interest in this so that is also on my list.

And, yes, I will fill out the documentation!