Unit Testing in C++ and Objective-C just got ridiculously easier still

Spider web in morning sun

'Spider Web in Morning Sun' by Rob van Hilten

In my previous post I introduced Catch - my unit testing framework for C++ and Objective-C.

The response was overwhelming. Thanks to all who commented, offered support - and even contributed to the code with fixes and features.

It certainly gave me the motivation to continue active development and a lot has changed since that post. I'm going to cover some highlights, but first I want to focus on what has been one of the most distinguishing features of Catch that has attracted so much attention - and how I have not rested but made that even better!

How easy is easy enough?

Back in April I gave a five minute lightning talk on Catch at the ACCU conference in Oxford (I highly recommend the conference). With just five minutes to talk about what makes Catch special what was I going to cover? The natural operator-based comparison syntax? The use of Sections instead of class-based fixtures? Data generators?

Well I did touch on the first point. But I decided to use the short amount of time to drive home just how quickly and easily you can get up and running with Catch. So after a 30 second intro I went to the GitHub page for Catch, downloaded the zip of the source (over a 3G connection), unzipped it and copied it to a central location, fired up Xcode, started a fresh C++ project, added the path to Catch's headers, #include'd "catch_with_main.hpp", wrote an anonymous test case, compiled and ran it, demonstrated how it caught a bug, fixed the bug and finally recompiled and re-ran to see the bug go away.

Phew! Not bad for five minutes, I thought. And from the feedback I got afterwards it really did drive the point home.

Compare that with my first experience of using Google Test. It took me over an hour to get it downloaded and building in Xcode (the Xcode projects don't seem to have been maintained recently - so perhaps that is a little unfair). There are other frameworks that I've tried where I have just run out of patience and never got them going.

Of course I'm biased. But I have had several people tell me that they tried Catch and found it to be the easiest C++ Unit Test framework they have used.

But still I wasn't completely satisfied with the initial experience and ease of incorporating Catch into your own projects.

In particular, if you maintain your own open source project and want to bundle it with a set of unit tests (and why wouldn't you?) then it starts to get fiddly. Do you list Catch as an external dependency that the user must install on their own? (No matter how easy they are to install, external dependencies are one of my least favourite things.) Do you include all the source to Catch directly in your project tree? That can get awkward to maintain and makes it look like your project is much bigger than it is. If you host your project on GitHub too (or some other Git based repository) you could include Catch as a submodule. That's still not ideal, has some of the problems of the first two options, and is not possible for everyone.

There can be only one

Since Catch, as a library, is fully header-only, I decided to provide a single header version that is ideal for direct inclusion in third-party projects.

How did I do this?

Go on guess.

Did you guess that I wrote a simple Python script to partially preprocess the headers so that the #includes within the library are expanded out (just once, of course), leaving the rest untouched?

If you did you're not far off. Fortunately some of the conventions I have used within the source meant I could drastically simplify the script. It doesn't need to be a full C preprocessor. It only needs to understand #include and #ifndef/#endif for include guards. Even those are simplified. The whole script is just 42 lines of code. 42 always seems to be the answer.

The result is a single header file containing the whole of Catch.

I see no reason why this should not be the default way to use Catch - unless you are developing Catch itself. So I'm now providing this file as a separate download from within GitHub. Think of it as the "compiled" header. The lib file of the header-only world.

Licence To Catch

But Open Source is a quagmire of licensing issues, isn't it?

Well it certainly can be. Those familiar with GPL and similar open source licences may be very wary of embedding one open source library (Catch) within another (their own).

IANAL but my understanding is that, contrary to what might seem intuitive, source code with no license at all can be more dangerous, legally speaking, than if it does have one (and if you thought that sentence was difficult to parse you should try reading a software license).

So Catch is licensed. I've used the Boost license. For a number of reasons:

  • It is very permissive. In particular it is not viral. It explicitly allows the case of including the source of Catch along with the distribution of your own source code, with no requirements on your own code.
  • It's been around for a while now - long enough, I think, that most people are comfortable with it. I work with banks, who can be very nervous about software licensing issues - especially open source. But every one I have worked at has already got Boost through its compliance process. I'm hoping that will ease any barriers to adoption.
  • I'm familiar with Boost, know many of its contributors personally, and generally trust the spirit of the licence. Boost itself is a very well known and highly respected set of libraries - with very widespread adoption. A large part of Boost is in header-only libraries and people are already comfortable including them in their own projects.

So what's the Catch? The catch is that I retain the right to keep using that joke - well beyond its humorous lifetime.

The important bit:

In short: any open source author who wants to use Catch to write unit tests for their own projects should feel very free to do so and to include the single-header (or full) version of the library in their own repository and along with their distribution.

That fully applies to commercial projects too, of course.

What else?

Here's a quick run down of some of the other changes and features that have gone in:
  • Single evaluation of test expressions. The original implementation evaluated the expression being tested twice - once to get the result, and then again to get the component values. There were some obstacles to getting this to work whilst only evaluating the expression once. But we got there in the end. This is critical if you want to write test expressions that have side-effects.
  • Anonymous test cases. A little thing, but I find them really handy when starting a new project or component and I'm just exploring the space. The idea is that you don't need to think of a name and description for your test - you can just dive straight in and write code. If you end up with something more like a test case it's trivial to go back and name it.
  • Generators. These are in but not fully tested yet. Consider them experimental - but they are very cool and very powerful.
  • Custom exception handlers. (C++) Supply handlers for your own exception types - even those that don't derive from std::exception, so you can report as much detail as you like when an exception is caught within Catch. I'm especially pleased this went in - given the name of the library!
  • Low build time overhead. I've been aggressive at keeping the compile-time footprint to a minimum. This is one of the concerns when using header-only libraries - especially those with a lot of C++ templates. Catch uses a fair number of templates, but nothing too deeply recursive. I've also organised the code so that as much of the implementation as possible is included in only one translation unit (the one with main() or the test runner). I think you'll be hard pushed to notice any build-time overhead due to Catch.
  • Many fixes, refactorings and minor improvements. What project doesn't have them? This is where a lot of the effort - possibly the majority - has gone, though. I've wanted to keep the code clean, well factored, and the overhead low. I've also wanted it to be possible to compile at high warning levels without any noise from Catch. This has been challenging at times - especially after the Single Evaluation work. If you see any Catch-related warnings please let me know.

Are we there yet?

As well as my own projects I've been using Catch on a large scale project for a bank. I believe it is already more than just a viable alternative to other frameworks.

Of course it will continue to be refined. There are still bugs being found and fixed.

But there are also more features to be added! I need to finish the work on generators. I'd like to add the tagging system I've mentioned before. I need to look at Matchers. Whether Catch provides its own, or whether I just provide the hooks for a third-party library to be integrated, I think Matchers are an important aspect to unit testing.

I also have a stub project for an iPhone test runner - for testing code on an iOS device. Several people have expressed an interest in this so that is also on my list.

And, yes, I will fill out the documentation!


Unit Testing in C++ and Objective-C just got easier


Back in May I hinted that I was working on a unit testing framework for C++. Since then I've incorporated the technique that Kevlin Henney proposed and a whole lot more. I think it's about time I introduced it to the world:

This post is very old now, but is still the first point of contact with Catch for many people. Most of the material here still applies in concept, so is worth reading - but some of the specifics have changed. Please see the tutorial (and other docs) over on GitHub for more up-to-date coverage.

Introducing CATCH

CATCH is a brand new unit testing framework for C, C++ and Objective-C. It stands for 'C++ Automated Test Cases in Headers', although that shouldn't downplay the Objective-C bindings. In fact my initial motivation for starting it was dissatisfaction with OCUnit.

Why do we need another Unit Testing framework for C++ or Objective-C?

There are plenty of unit test frameworks for C++. Not so many for Objective-C - which primarily has OCUnit (although you could also coerce a C or C++ framework to do the job).

They all have their strengths and weaknesses. But most suffer from one or more of the following problems:

  • Most take their cues from JUnit, which is unfortunate as JUnit is very much a product of Java. The idiom-mismatch in C++ is, I believe, one of the reasons for the slow uptake of unit testing and TDD in C++.
  • Most require you to build libraries. This can be a turn off to anyone who wants to get up and running quickly - especially if you just want to try something out. This is especially true of exploratory TDD coding.
  • There is typically a certain amount of ceremony or boilerplate involved. Ironically the frameworks that try to be faithful to C++ idioms are often the worst culprits. Eschewing macros for the sake of purity is a great and noble goal - in application development. For a DSL for testing application code, especially since preprocessor information (e.g. file and line number) is required anyway, the extra verbosity seems too high a price to pay to me.
  • Some pull in external dependencies
  • Some involve a code generation step

The list goes on, but these are the criteria that really had me disappointed in what was out there, and I'm not the only one. But can these be overcome? Can we do even better if we start again without being shackled to the ghost of JUnit?

What's the CATCH?

You may well ask!

Well, to start, here's my three step process for getting up and running with CATCH:

  1. Download the headers from GitHub into a subfolder of your project
  2. #include "catch.hpp"
  3. There is no step 3!

Ok, you might need to actually write some tests as well. Let's have a look at how you might do that:

[Update: Since my original post I have made some small, interface breaking, changes - for example the name of the header included below. I have updated this post to reflect these changes - in case you were wondering]

#include "catch_with_main.hpp"

TEST_CASE( "stupid/1=2", "Prove that one equals 2" )
{
    int one = 2;
    REQUIRE( one == 2 );
}

Short and to the point, but this snippet already shows a lot of what's different about CATCH:

  • The assertion macro is REQUIRE( expression ), rather than the, now traditional, REQUIRE_EQUALS( lhs, rhs ), or similar. Don't worry - lhs and rhs are captured anyway - more on this later.
  • The test case is in the form of a free function. We could have made it a method, but we don't need to
  • We didn't name the function. We named the test case. This frees us from couching our names in legal C++ identifiers. We also provide a longer form description that serves as an active comment
  • Note, too, that the name is hierarchical (as would be more obvious with more test cases). The convention is, as you might expect, "root/branch1/branch2/.../leaf". This allows us to easily group test cases without having to explicitly create suites (although this can be done too).
  • There is no test context being passed in here (although it could have been hidden by the macro - it's not). This means that you can freely call helper functions that, themselves, contain REQUIRE() assertions, with no additional overhead. Even better - you can call into application code that calls back into test code. This is perfect for mocks and fakes.
  • We have not had to explicitly register our test function anywhere. And by default, if no tests are specified on the command line, all (automatically registered) test cases are executed.
  • We even have a main() defined for us by virtue of #including "catch_with_main.hpp". If we #include that in just one dedicated cpp file, we can #include "catch.hpp" in our other test case files instead. We could also write our own main that drives things differently.

That's a lot of interesting stuff packed into just a few lines of test code. It's also got more wordy than I wanted. Let's take a bit more of a tour by example.

Information is power

Here's another contrived example:

TEST_CASE( "example/less than 7", "The number is less than 7" )
{
    int notThisOne = 7;

    for( int i=0; i < 7; ++i )
    {
        REQUIRE( notThisOne > i+1 );
    }
}

In this case the bug is in the test code - but that's just to make it self contained. Clearly the requirement will be broken for the last iteration of i. What information do we get when this test fails?

    notThisOne > i+1 failed for: 7 > 7

(We also get the file and line number, but they have been elided here for brevity). Note we get the original expression and the values of the lhs and rhs as they were at the point of failure. That's not bad, considering we wrote it as a complete expression. This is achieved through the magic of expression templates, which we won't go into the details of here (but feel free to look at the source - it's probably simpler than you think).

Most of the time this level of information is exactly what you need. However, to keep the use of expression templates to a minimum we only decompose the lhs and rhs. We don't decompose the value of i in this expression, for example. There may also be other relevant values that are not captured as part of the test expression.

In these cases it can be useful to log additional information. But then you only want to see that information in the event of a test failure. For this purpose we have the INFO() macro. Let's see how that would improve things:

TEST_CASE( "example/less than 7", "The number is less than 7" )
{
    int notThisOne = 7;

    for( int i=0; i < 7; ++i )
    {
        INFO( "i=" << i );
        REQUIRE( notThisOne > i+1 );
    }
}

This gives us:

    info: 'i=6'
    notThisOne > i+1 failed for: 7 > 7

But if we fix the test, say by making the for loop go to i < 6, we now see no output for this test case (although we can, optionally, see the output of successful tests too).

A SECTION on specifications

There are different approaches to unit testing that influence the way the tests are written. Each approach requires a subtle shift in features, terminology and emphasis. One approach is often associated with Behaviour Driven Development (BDD). This aims to present test code in a language neutral form - encouraging a style that reads more like a specification for the code under test.

While CATCH is not a dedicated BDD framework it offers several features that make it attractive from a BDD perspective:

  • The hiding of function and method names, writing test names and descriptions in natural language
  • The automatic test registration and default main implementation eliminate boilerplate code that would otherwise be noise
  • Test data generators can be written in a language neutral way (not fully implemented at time of writing)
  • Test cases can be divided and subdivided into SECTIONs, which also take natural language names and descriptions.

We'll look at the test data generators another time. For now we'll look at the SECTION macro.

Here's an example (from the unit tests for CATCH itself):

TEST_CASE( "succeeding/Misc/Sections/nested", "nested SECTION tests" )
{
    int a = 1;
    int b = 2;
    SECTION( "s1", "doesn't equal" )
    {
        REQUIRE( a != b );
        REQUIRE( b != a );

        SECTION( "s2", "not equal" )
        {
            REQUIRE_FALSE( a == b );
        }
    }
}

Again, this is not a great example and it doesn't really show the BDD aspects. The important point here is that you can divide your test case up in a way that mirrors how you might divide a specification document up into sections with different headings. From a BDD point of view your SECTION descriptions would probably be your "should" statements.

There is more planned in this area. For example I'm considering offering a GIVEN() macro for defining instances of test data, which can then be logged.

In Kevlin Henney's LHR framework, mentioned in the opening link, he used SPECIFICATION where I have used TEST_CASE, and PROPOSITION for my top level SECTIONs. His equivalent of my nested SECTIONs are (or were) called DIVIDERs. All of the CATCH macro names are actually aliases for internal names and are defined in one file (catch.hpp). If it aids utility for BDD or other purposes, the names can be aliased differently simply by creating a new mapping file and using that.


There is much more to cover but I wanted to keep this short. I'll follow up with more. For now here's a (yet another) list of some of the key features I haven't already covered:

  • Entirely in headers
  • No external dependencies
  • Even test fixture classes and methods are self registering
  • Full Objective-C bindings
  • Failures (optionally) break into the interactive debugger, if available
  • Floating point tolerances supported in an easy to use way
  • Several reporter classes included - including a JUnit compatible xml reporter. More can be supplied

Are there any features that you feel are missing from other frameworks that you'd like to see in CATCH? Let me know - it's not too late. There are some limiting design goals - but within those there are lots of possibilities!


The iPad reconsidered

As promised, here are my current thoughts on the iPad - as developed on from my thoughts of four months ago.

I still find the screen too glossy and hard to read in some circumstances. This has been somewhat mitigated by the onset of autumn as my train journey is usually in the dark or half-light. However, now that I'm used to the iPhone 4's retina display I'm also very distracted by the sight of all those rough edges and pixels. I'm sure they weren't there before.

As I understand it there are some rather large technical challenges to bringing retina display technology to a panel the size of the current iPad. I do wonder if this feeds into some of the rumours of an upcoming 7" iPad (or other smaller form factor).

On the flip side I've found that, with the retina display, the iPhone is now even more useful in some situations where I might otherwise have preferred the iPad - e.g. reading. It's now almost as comfortable to read material on the iPhone, at the higher resolution, as on the iPad - and without those distracting pixels.

With this in mind I think I would welcome a smaller form factor (as an option in a range) - as long as it had higher pixel density (ideally "retina" level).

Aside from the screen, we've certainly seen a lot more apps targeting the iPad. Some more successfully than others. We're seeing innovation in this area and it's an exciting time - but I think for the casual observer it's still a bit too early to really benefit. Unless you have something specific in mind I'd advise holding off to see what the next generation - or possibly an upcoming competitor - holds. If you just want an eReader I think the latest Kindles are a good bet - and needn't preclude getting a next gen iPad later.

I think that is the mark of something truly new. We're finding our feet as a community. The hype behind Apple and the iPad seems to be sufficient to keep the momentum going until we reach the next level.

It's also been interesting to see how other vendors have responded. There has been the inevitable glut of cheap knock-off clones, of course. But I think we're starting to see some real contenders. The Android space has really taken off this year. In some ways Android is even ahead - but I think Apple is still leading on innovation at this point. There are fundamentally different philosophies behind Google's approach and Apple's. But you can't deny that Google are following the strategy that Jeff Atwood recently coined as, "Go that way, really fast". Even Blackberry are making credible contributions to the space (or are they?) - focusing, as is their wont, on the business side of things.

Competition is good. Not just because it will encourage Apple to keep innovating, and keep iterating. I genuinely think this area of computing is the one to watch. It's only just getting off the ground. I think Apple will be the thought leaders in the space for a little while yet, but it would be unhealthy for that to remain so in the long term. That was less true of the iPhone, but I still think it's good that competitors are catching up there too (and ahead of schedule). The reason I think Apple are still ahead is that they control the end-to-end user experience and that is, right now, critical. That may not always be the most important factor.

But back to the specifics of the, current, iPad. Most of the areas I commented on in the previous post were software related. I now have iOS 4.2b2 installed. How has that changed things? Well I can't talk about specifics, beyond what has already been advertised, as it is still under NDA. But I can say that just between Multitasking and Folders alone I can't imagine going back to 3.2. I actually jumped on 4.2b1 as soon as it came out - despite my usual policy of not installing first betas, if any, on devices I use in daily life (I have an iPod touch and an older iPhone specifically for testing). Android users will be quick to point out that corresponding features have been available on that platform for some time. Not having used them I can't comment, but I have heard that the experience is less satisfying.

However, despite all this I have to say that I've been using it less than I was even four months ago.


Well, in part, it's a matter of time. My typical daily computing experience is something like this:


Morning commute:
Catch up on Twitter on my iPhone while waiting for the train. Development (or writing blog posts) on my laptop on the train. I sometimes use the iPad for looking up reference material as the internet connection is more stable than my 3 Mi-Fi.

At work:
Work on a PC running Windows. Occasional emails and Twitter checks are catered for by the iPhone. Music supplied by the iPhone.

Evening commute:
Catch up on Twitter on my iPhone while waiting for the train. Development (or writing blog posts) on my laptop on the train.

Evening at home:
Watching an episode of something (currently: Lost) from my Mac Mini (possibly soon to be my Apple TV 2) while having dinner. Occasionally a little more development on my laptop, or downloading iOS betas. Sometimes some reading in bed - either on the iPhone or the iPad.

So there is not a lot of scope for the iPad to make inroads there. Currently the only regular appearance is the late night reading - which the iPhone is often more convenient for anyway. I didn't mention, too, that I sometimes use the iPad during the day as a photo frame, but it has been disappointing even in that capacity as there is, currently, no way to change the update frequency (iOS4.2 may or may not address that - I'll say no more) - and no way to create photo albums on the device. If these things are addressed I will probably use it more for that - but that's hardly revolutionary.

Once my Apple TV 2 arrives and AirPlay starts working I may well use the iPad as a source for that too - but the iPhone may serve that role just as well.

So, in summary, I'm making it sound as though the current iPad is much less useful than I had originally hoped for. There is some truth in that. I think an option of a smaller form factor and/or a higher density display will help there. But I think another obstacle has simply been finding the time to seek out apps that make better use of what the iPad has to offer (this would be true of any tablet device) - and, more importantly, to write my own!

The time for the iPad is, maybe, yet to come, but the revolution has already begun


The iPad - two ... no, fifteen, weeks in

I've had a draft post in MarsEdit for some time now called "The iPad - two weeks in". Clearly that title is a little outdated now.

What's interesting is that my opinions haven't really changed in that time. So what follows is my unedited thoughts just over four months ago. I'll follow up with what has changed in the meantime - but that is all due to external factors.

The iPad - two weeks in

I've had my iPad now for two whole weeks. I've not used it as heavily as some in that time, but I think it's long enough to give my initial impression. I've publicly been quite excited about the iPad in principle since it was announced - but now I've been able to taste the proof in the pudding.

After the initial opening, where you find out for yourself how natural using apps like Safari and Maps is on the new device, there's an inevitable awkward period where you realise that it doesn't do anything (yet) that you couldn't already do with your laptop or your phone. For some people this is all they see. That is, of course, missing the point.

First of all I'm used to taking my laptop with me everywhere - and if I don't have that I still have my iPhone. There are not many occasions where I need more than my iPhone, don't have my laptop but would have my iPad. But that's mostly because I'm a developer. If I wasn't using it for coding then most days I would probably leave my laptop at home and just use the iPad on the train.

In time, however, I've started to reach for the iPad first, even if I have the laptop with me. Why? Well it's smaller for a start. I have a 17" Macbook Pro - which is quite a lot to pull out on the train if I don't really need it. When I'm coding I really appreciate the extra screen real estate - but for just about anything else it's not needed.

I'm also finding it generally a nicer, more natural, experience to interact with apps and content through the touch metaphor - especially Apple's implementation. After three years of iPhone O..., I mean iOS I still get great satisfaction in working with the inertial scrolling views, for example.

So far this has just been a refinement of an experience I already had - it's not adding anything truly new - and there are some downsides, which I'll come on to. It's worth mentioning here, though, that we're only just getting off the ground with this. I'm very much an early adopter here. It's a little unusual that the hype around the iPhone and iPad has led to such mass adoption already. There are bound to be people who expected more or are still wondering what you can actually do with these things to make them worth their keep. It will come. It will all come (and, as alluded to in that blog post I linked earlier, I hope to have my own part in that).

So that's the positive and the realistic. What about those negatives that I mentioned.

Well the first is that with the larger display and extra power you really do miss multi-tasking. Of course that's coming soon, to a degree, and that will mitigate most of my concerns here. However I do feel that in some cases it would be nice to have more than one app on screen at a time. I wouldn't want this to be the default way of working - as it is with desktop OSes. But the ability to do this selectively, perhaps with the widget metaphor, would be a nice addition. That said I'm a power user and not everyone would need or be comfortable with this. Even if we never get it, with the service-based multi-tasking that's coming it's going to be a good experience.

On a similar note I'm finding mobile Safari to be much more frustrating than in the iPhone context. Two things - the lack of tabs is annoying. While you have a somewhat similar mechanism in the form of the page toggle view, it's not the same and if you want to do a bit of research it's very limiting. Of course this is entirely a software implementation issue and there's no reason it couldn't be added in a future release (allowing for my next point).

The other issue with Safari, which it also inherits from the iPhone, is that it doesn't seem to do any disk caching. It holds a whole page in memory. If you switch to another page and it runs low on memory it will purge the first from memory and if you then navigate back it has to load the whole page over the air again! I feel this would need to be addressed before tabbed browsing could be offered.

Finally - and I think this is the biggest grievance I have with the iPad today - is the glossy screen. It's fine in low light conditions (if you turn the brightness right down). But outdoors, especially if the sun is out - or even indoors if the lights are bright - the display is really hard to read from and tires the eyes very quickly. What concerns me most is that Apple seem to be fine with this. Their "solution" is just to crank the brightness up until it overcomes the glare. This almost works. Sometimes even that is not enough - and it certainly doesn't address the eye strain issue - tiring them even more.

Before the announcement back in January the display technology was probably the most talked about aspect of the then-rumoured device. From reading the opinions at the time it sounded like if the iPad launched with a backlit display at all - let alone a glossy one - it would be an instant failure. After the announcement those opinions became a distant minority as everyone else focused on what's great about the device. Sales so far certainly don't seem to be hindered by this weakness. This is a shame because I think it will just give Apple reason to ignore it altogether. I hope I'm wrong. After all, they did do a U-turn over the same issue with the Macbook Pros when they went glossy. I held off getting a new laptop until they finally offered a matt display option again. I'm not so hopeful with the iPad, however, since it's the glass that makes it glossy and that really needs to be there on a multi-touch display. The glimmer of hope, no pun intended, comes from the iPhone 4, which apparently pioneers a new manufacturing technique for bonding the LCD to the glass, closing the gap between them. I'm hoping this will reduce glare - at least a little - and that this technology will work its way into the next generation of iPad devices.

In summary, there are irritations and weaknesses, but all of these, with the exception of the glossy display, can be fixed with software updates - and I'm confident that some of these fixes will filter through. The display is particularly disappointing, but for many people it's fine, and it's potentially "fixable" in future hardware revisions. An anti-glare screen protector may help too, although I've been reluctant to try one just yet.

Despite these downsides, and the early stage that the eco-system is at in terms of must-have apps, I still find the iPad to be a really great device that currently has no equal. It's not yet for everybody but I really do believe that the trend is that this gap will close.

The one area where I think the iPad will really shine - and we're only seeing embryonic examples so far - is in note capture and consumption. The immediacy of iOS, the natural interaction of multi-touch and the larger display/touch surface of the iPad are, I think, the perfect ingredients for making the capture of notes and ideas directly into digital form more practical and accessible than ever before. This is the direction my app ideas lie in and I'm really excited by the possibilities now on offer.

Remember - the revolution is only just starting.


Welcome to the new decade

What makes a tweet take off? I recently had a tweet go viral and it gave me a fascinating insight into how and why these things spread. In some ways Twitter amplifies existing social epidemiology. In others it is unique.

So what was the tweet? It was a Saturday afternoon - about 4pm here in the UK. I was having a shower (where most of my best ideas are formed) and I was thinking about recent tech news. It struck me that recent events (Oracle suing Google over Java, Google's net neutrality controversy with Verizon), added to other changes (Apple's rise to the top of the mobile, online music and tablet spaces - and even eBooks - and Microsoft's inability to get into - or back into - any of these areas), have resulted in a reversal of some of the positions we have taken for granted over the last ten years or so.

So, as I was drying off, I posted a casual message on Twitter. I had around 200 followers at the time - at least half of whom I know personally. I thought I might even get a couple of retweets.

The tweet read as follows:

Welcome to the new decade: Java is a restricted platform, Google is evil, Apple is a monopoly and Microsoft are the underdogs

That's 125 characters to summarise the juxtapositions I had been pondering. Like most tweets where several thoughts are being conveyed, it took a couple of iterations to prune it enough to fit into the 140 character limit - while leaving just enough space for an RT.

Just in case.

After that I took my family off to my sister's, where we had been invited for the evening, and didn't think any more of it.

My sister's house is a bit of a 3G (or any G, for that matter) blackspot. If you've seen those adverts for femtocell repeaters that have people hanging out of windows to get a signal then you have an idea of what it's like.

However, even there, my Magical iPhone 4 antenna occasionally picked up a signal and I'd get email in bursts. During the course of our meal I heard a few emails popping in and took a look. My unread emails badge told me I had 50 emails waiting. 50! As it turns out, 50 is the maximum number of email headers the iPhone will download automatically. I don't know how many I already had at that point. But it was a lot.

So what were they? They were notifications from Twitter of new followers.

I managed to get a connection to my brother-in-law's wifi - which has a 128 character hash as a password (!) - and checked in on my twitter account. And there it was. Screen after screen of my tweet retweeted over and over again! A three letter word came to mind. It starts with W and ends with TF!

I couldn't investigate fully until I got home. By that time I found I was #1 in two categories on Reddit - and later found I had also been #1 on Hackernews all night! A little more searching showed the tweet popping up in other places too, mostly blogs. By Monday I heard I'd even had a mention on "This Week in Tech".

By Tuesday (already three whole days later) I was still seeing retweets in my timeline every few minutes, and new followers were trickling through. They seemed to have levelled out at around 900 (some people were already unfollowing) - but then over Tuesday night (UK time - so day/ evening across the US) it picked up again leaving me with 930 by Wednesday morning.

So what happened?

[Image: a visualisation of my Twitter social ego network]

Tipping Point? Or Life Of Brian?

If you've not read Malcolm Gladwell's Tipping Point you're probably at least familiar with the phrase, or can take a guess at what it means. It's all about the factors that contribute to the adoption or awareness of something taking off - usually by several orders of magnitude. Almost by definition this is not an exact science. If you want to paint a scientific face on it I'd probably paint it with Chaos Theory. In practice the elements that Gladwell covers tend to be more sociological, and usually highly anecdotal.

But a tipping point seems to have been what was reached here - so what does Gladwell have to say about the process?

Connectors, Mavens and Salesmen

Probably the most obvious Tipping Point factor here is Gladwell's "Connectors". These are people who have a lot of social connections. They know people. (Even more) people know them and generally trust their opinions. Often these people are celebrities or have some other form of media presence. Book authors, tech journalists or high profile employees of big name companies are common Connectors in the tech world. There were certainly a number of these in the mix and they would have played a huge amplifying role in the process. I can't imagine my tweet would have "tipped" without them. Some of the names I've seen are: Robert Scoble, Leo Laporte (who had mentioned me on TWiT) and Travis Swicegood (author of "Pragmatic Version Control With GIT", and who posted the tweet to Hackernews). With a bit more digging I'm sure I'll turn up more, perhaps even bigger names.

Gladwell also talks about Mavens and Salesmen. In this case I believe the Mavens involved were probably also the Connectors. Salesmen have less of a role in a Twitter epidemic.

Stickiness and tl;dr

Gladwell's concept of the "Stickiness Factor", I believe, translates to the quality of the tweet that led it to take off in the first place. After all it needed enough momentum to reach the connectors.

In retrospect I can have a good stab at what it was about the tweet that gave it Stickiness. I want to emphasise that I can't claim credit for how effective it turned out to be. That was mostly down to luck and the constraints imposed by Twitter itself.

Twitter's famous 140 character limit, while it has other historical reasons, has proved to be one of its most compelling (and sometimes frustrating) "features". It's an oasis in today's crisis of information overload. Anything else is tl;dr.

And yet at the same time we are addicted to content - especially social content. We want more of it, but in smaller amounts. And that's precisely what Twitter gives us. Furthermore we are forced to think about how we can keep within that limit - compressing paragraphs of material into a couple of laser focused sentences. We often surprise ourselves at how much unnecessary waffle we can distill down to essence of the point we wanted to make.

And that's exactly the process I went through to arrive at the wording in my tweet.

But it wasn't just the fat that was trimmed. There was no room to explain the nuances, resolve the ambiguities, or balance the controversy. Twitter editing is brutal. It has to be left to the reader to add the flesh back to the bone.

Anyone familiar with recent events in the tech world could identify with the sequence of statements - whether they agreed with them or not. But each person also read into them their own interpretation. Some took exception to what they thought I was implying. This was important. If you look at the discussion threads that exploded on Reddit and Hackernews you'll see how many possible interpretations and opinions about each word were represented.

It's a sad, but well known, fact that controversy "sells". Each point I made had enough truth in it to be discussed in serious debate, but was controversial enough that people wanted to do so. The fact that each point was also a reversal of a previous view was the light and amusing packaging for this combustible concoction. Top that off with the easy-to-read meter, practically forced on it by the Twitter limit, and it's hard to imagine a more carefully planned Stickiness assault on the Twitterverse.

Yet it really wasn't planned that way.

Anatomy of a perfect tweet

We've looked at the general Gladwell effects that probably contributed to the tweet "tipping". Considerable research has also gone into more specialised effects within the context of retweeting. What makes the difference between something being retweeted 0-5 times and something that takes off to hundreds, thousands, or more?

Perhaps the most notable Social Media expert in this area is Dan Zarrella. He has broken down vast numbers of statistics and correlations and come up with some key observations. These include things such as the type of words that are most retweetable, sentence structure, time of day, day of week, and many more. Some of the effects are more pronounced than others.

Looking at the timing: Dan suggests the best time of day to be retweeted is sometime in the early evening. According to his graph this peaks at about 5pm. Sure enough, that was almost exactly the time I posted. So that proves the point? Well, it would be a mistake to draw conclusions from a sample size of one. There are also problems with this statistic. Twitter is a global phenomenon. Most people have at least some international members in their network, so time-of-day is all but meaningless. In my case, being based in the UK, I think 5pm worked out well because the U.S. was just coming online - and the tweet spread there throughout the day.

On the flip side, the best day of the week is apparently Thursday. The worst day of the week is Saturday! As I said these are statistical biases - not absolutes. Nonetheless his findings are very interesting and can make you rethink the quality of your tweets.

I was a little surprised that he didn't even mention, at least in the linked article, network effects of the Tipping Point variety (although arguably most of his findings relate to the Stickiness of the content).

A numbers game

As this saga unfolded I became more and more fascinated with it. I wanted to see how many times it had been retweeted, by whom, seen by how many, and who the Connectors were. There are tools online to help, but they are constrained by Twitter limits (for example only the last 1500 tweets from a search). To get around this I made multiple searches using the since: and until: commands. Unfortunately these only work for dates - not times. On Sunday I had more than 1500 retweets in the final eight hours alone!

So I wasn't able to piece together the whole story, but by throwing in a few estimates for the missing data I arrived at a figure of about 3-4 million impressions (that is, people who would have seen the tweet, given the followers of those that retweeted - allowing for overlaps).
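For the curious, the back-of-an-envelope method looks something like this: sum the follower counts of everyone who retweeted, then knock off an assumed fraction for overlapping audiences. The numbers and the overlap factor below are purely hypothetical, not my actual data - just a sketch of the arithmetic:

```python
# Rough sketch of the impressions estimate.
# All follower counts and the overlap fraction are made-up illustrations,
# not the real figures from my searches.

def estimate_impressions(retweeter_followers, overlap=0.2):
    """Sum each retweeter's follower count, then discount an assumed
    fraction of the combined audience that overlaps between networks."""
    raw = sum(retweeter_followers)
    return int(raw * (1 - overlap))

# A handful of ordinary accounts plus a couple of big Connectors:
followers = [200, 850, 1200, 118000, 354000]
print(estimate_impressions(followers))  # prints 379400
```

In reality the overlap is the hardest part to pin down - the more Connectors share an audience, the bigger the discount needs to be - which is why I could only ever quote a 3-4 million range rather than a single figure.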

I think the first big Connector in the mix was Robert Scoble, about 3 hours and 30 retweets in. During the rest of Saturday evening it was picked up by five others with more than 10k followers each - including @toptweets with 354k followers alone! I suspect Sunday had the most Connectors at play. Unfortunately most of Sunday is a bit of a black hole due to those Twitter limits.


Blessed are the cheesemakers

At times it seemed like I'd got trapped in Life Of Brian - only with fewer Romans. When anything plays out on a large enough social network, the thing itself takes on a life of its own - detached from its origins. This was Dawkins' memetics at play in a highly concentrated form. But being so accelerated, even a network as large as Twitter reaches saturation fairly quickly. I was amazed that by Wednesday I was still being retweeted so much, but by Friday it was finally abating - with only about 3-4 retweets an hour.

By next week it will have gone the way of the dinosaurs. My days as a micro-celebrity are numbered.
