Friday
Jun 28, 2013

Catch 1.0


Since Catch first went public, two and a half years ago at the time of this writing, I've made a point of describing it as a "developer preview". Think of it as you might a Google beta and you won't go far wrong. I did this because I knew there was a lot that needed doing - and, in particular, that some of the public interfaces would be subject to change. While I have tried to mitigate exposure to this as much as possible (as we'll see), I wanted to reach a point where I could say things have stabilised and I'm happy to call it a true 1.0 release.

That time has come.

As of today the version of Catch available on the Master branch on GitHub is 1.0, and I would encourage you to update to it if you're already using an older version. But before you do so, please read the rest of this post, as there are a few breaking changes that may impact you.

What's new?

Output

One of the biggest changes is in the console reporter's output. This has been rethought from the ground up (perhaps more accurately: it has now been thought through at all). For example a failure now looks like this:
ClassTests.cpp:28: FAILED:
  REQUIRE( s == "world" )
with expansion:
  "hello" == "world"

That indentation is applied after a wrap too, so long lines of output are much more clearly separated from the surrounding context. Indentation is used in this way throughout.

But there's a lot more to the new look. You'll just have to try it for yourself.

Naming and tags

One of the features of Catch since day one has been the ability to name test cases (and sections) using free-form strings. Except that I then went and imposed a convention on those names: that they should be hierarchical. You didn't have to follow the convention, but if you did you got the ability to group related tests together, in a similar manner to related files in a folder in a file system. Combined with wild-cards this gave a lot of power.

The trouble was that test names needed to be kept short and simple, otherwise they became very long. So I felt the need for a second argument where you could supply a longer-form description. Of course this was rarely used (even by me!) and so you'd see a lot of this:

TEST_CASE( "project/widgets/foo/bar/size", "" ) { /*..*/ }

The name doesn't really tell you what the test does, and the description (which should) goes unused - yet it must be supplied anyway, so it ends up as an ugly empty string.

This was not what I signed up for!

Now there is a better way.

It has all the advantages of the old system, but none of the disadvantages - and all without breaking backwards compatibility - so you won't have to go back and rewrite all your existing test cases. Phew!

Test cases can now be tagged. Tags are placed in the second argument (that description argument that nobody was using) and are each enclosed in square brackets. Anything outside of square brackets is still considered the description - although that use is now deprecated. Tags fulfil the same role (and more) as the old hierarchical names, so the name field is now freed up to be more descriptive. The previous example might now look like:

TEST_CASE( "the size changes when the bar grows", "[widgets][foo][bar]" ) 
{ /*..*/ }

But now you can run all tests with the [widgets] tag. Or with the [foo] tag. Or with the [bar] tag. Or all tests tagged [foo] and [bar] but not [widgets]. Tags are much more powerful.

Variadic macros

But if you don't need tags the second argument is now optional (assuming your compiler supports variadic macros - or, more specifically, Catch knows that it supports them). So TEST_CASEs can be written with one argument - or even none at all (an anonymous test case is given a generated name internally - useful if you're just exploring an idea).

Most, if not all, macros where it makes sense now take advantage of variadic macro support.

If you know that your compiler supports variadic macros, yet Catch is not letting you, please let me know and we'll see if we can add the support in.

On your best behaviour

In my first post on Catch, under "A SECTION on specifications", I talked a little about how, while Catch is not a BDD framework, it supports writing in a BDD style. Of note I said,
There is more planned in this area. For example I'm considering offering a GIVEN() macro for defining instances of test data, which can then be logged.
Well I've taken this further and you can now write tests using the following form:
SCENARIO( "name for scenario", "[optional tags]" ) {
    GIVEN( "some initial state" ) {
        // set up initial state

        WHEN( "an operation is performed" ) {
            // perform operation

            THEN( "we arrive at some expected state" ) {
                // assert expected state
            }
        }
    }
}

You can have as many peer WHEN and THEN and even GIVEN sections as you like. You can even nest them with AND_WHEN and AND_THEN. In fact all of these macros are (currently) just aliases for SECTION. SCENARIO is an alias for TEST_CASE.

Although I mentioned BDD you do not need to assert on behaviour here. I typically use the THEN block to assert purely on state. Nonetheless I often find the GIVEN-WHEN-THEN structure useful in organising my tests. They also read well in the output. Here's an example straight from the self test suite:

-------------------------------------------------------------------------------
Scenario: Vector resizing affects size and capacity
     Given: an empty vector
      When: we reserve more space
      Then: the capacity is increased but the size remains the same
-------------------------------------------------------------------------------
That alignment of the colons of Given, When and Then is very deliberate - and is treated specially in the reporter. If the description strings get very long they will wrap after the colons.

Meet Clara

Catch has always had rich command line support. The first implementation was very ad-hoc but as it evolved it became more like an embedded library in itself. For this release I have taken this to its logical conclusion and spun the - completely rewritten - command line parser out into its own library. At the time of writing this is still part of the Catch code-base, and depends on a couple of other parts of Catch. The intention is to break those dependencies and extract the code into its own repository on GitHub. But what of the zero-dependency ethos of Catch? Don't worry - the new library will follow the same principle of being header-only and embeddable. So a copy will continue to be included in the Catch code-base and Catch will continue to be distributed as a single header file.

A new library needs a new name. Since it's a Command Line ARgument Assigner I felt Clara was a good name.

As a result of this change some of the specific options have changed (details in the "breaking changes" section). This is to accommodate a closer adherence to POSIX guidelines for command line options. All short-form option names are now single characters, and those that take no arguments can be combined after a single -. For example, to combine -s, -a and -b you can now write -sab.

Options with arguments always take an argument (and only one). This leads to a couple of interesting consequences: first, the separator between option and argument can be a space, a : or an =. Secondly, the non-option arguments (test specs) can appear before or after options.

So the following are all equivalent:
./CatchSelfTest "test name" -b -x 1
./CatchSelfTest "test name" -b -x:1
./CatchSelfTest -b -x 1 "test name"
./CatchSelfTest -x=1 "test name" -b

What's up, Doc?

The documentation for Catch, such as it was, had been provided in the wiki for the GitHub repos. There were a couple of drawbacks to this - most significantly it meant I couldn't have different documentation for different branches, or earlier versions. I also find it much easier to edit documents offline.

So I've now moved (and updated) all the existing documentation into markdown files in the repository itself. These are in the /docs folder, but the README.md file in the root links into them, acting as a convenient launch point.

Breaking changes

This section is only really of interest if you are an active user of an earlier version of Catch.

Under new command

As well as the improvements described, there have had to be some changes to the command line options to accommodate them. The list of available options continues to be available by running with the -?, -h or --help switches. They are more fully described in the documentation, now in the repository (rather than the wiki). The in-depth descriptions have been removed from the code.

But here's a quick summary of the changes to be aware of:

  1. Test case specs (names or wild-carded patterns) and tags are now only specified as application arguments (previously they were introduced using the -t or -g options). In fact -t now means something different!
  2. Listing tests, tags or reporters now all have their own options. Previously you used -l for all of them, with an optional argument to disambiguate. -l no longer takes an argument and just means "list tests". Tags are listed with -t (which formerly meant "run with this/these test case(s)"). Listing reporters is less commonly used so has no short form; they can be listed with --list-reporters.
  3. -nt ("no throw") has become -e (because short-form options are single character only).
  4. -a ? has been split into -a and -x ? (because options may have zero or one arguments - but not both).

Writing your own main()

Catch can provide its own main() function, but if you write your own there were a few points at which you could hook into Catch, with different degrees of control over how it is configured.

This continues to be the case but the interface has completely changed. The new interface is more flexible, safer and better encapsulates things like the clean-up of statically allocated state (important if you do leak-detection).

The new interface is documented in the own-main.md file in the docs folder. It is based around a Session class, of which there must be exactly one instantiation in your code. However, within that instantiation you can invoke Catch test runs as many times as you like (the Session class encapsulates the config and is responsible for the clean-up of statics - in the future those statics may migrate to the Session class itself).

Reporters

Catch has a modular reporting system and comes with three reporters bundled (console, xml and JUnit). You can also supply your own reporter by implementing (previously) the IReporter interface. This was one area that was often being slightly tweaked - and tweaks would frequently break implementations of the interface. More often than not the changes didn't even need to be used by client code - but implementers would have to update their interfaces anyway!

To make the reporter interface more robust to change I've created a whole new interface, IStreamingReporter. Most of the methods of this new interface take structs instead of lists of arguments. Those structs can now change with little to no impact on client code (depending on the changes, obviously). They are also richer and provide more information than before, so I think we're set for a while now.

To ease the transition for anyone who has already implemented IReporter I've provided the INTERNAL_CATCH_REGISTER_LEGACY_REPORTER macro (which wraps your reporter in the LegacyReporterAdapter adapter class).

At the time of writing, documentation for the new reporter interface is still on its way.

It's not just me

Although I have used the personal pronoun, I, a lot in this post (and I continue to be the benevolent dictator on this project), Catch has greatly benefited from the on-going contributions of others - whether through pull-requests, bug reports, feature requests and other suggestions, actively maintained forks, or just plain evangelising. All of this has been much appreciated and I hope to grow it even more now that we have a stable base. Thanks!

Where to go from here

Catch is hosted on GitHub. The preferred url to follow is catch-lib.net, which redirects there - but may become a landing page in the future (an embryonic version of which is already at builds.catch-lib.net).

There's also a forum on Google Groups.

Wednesday
Dec 26, 2012

TDD - is it worth it?

There are many articles on the subject of what TDD is, why and when it is worth it, and which attempt to counter common objections.

This is not one of those.
Well. Maybe a bit.

This is more specifically a response to Marco Arment's comments in his podcast, Build & Analyze, episodes 107 and 108. Episode 108 was the last episode so there is an air of finality to the subject matter. Many Mac and iOS developers (as well as developers for other platforms) listen to the show and, while you'd hope they can all think for themselves and reach their own conclusions, it's undeniable that opinions, if not already well formed, may easily be swayed by what a respected figure says in a high profile, and well polished, medium. This can be unfortunate. I'm sure Marco didn't intend to do any damage. I've listened to every episode of Build & Analyze for over a year and enjoyed it. This is certainly not a flame against Marco or the show. However I'm going to walk through Marco's comments as a proxy for many who make similar statements. In doing so I quote him liberally, rearranging to fit my narrative. I'm including the time markers so you can easily listen to it in the original context.

Episode 107: 6:50 The comments in question started in response to a listener question about testing and logging. "Test first development" was mentioned in passing, but Marco groups these all together as "a whole lot of formalism" which "matters a lot in an enterprise environment", but he, "[doesn't] really do any of this stuff".

These are quite different things but let's set that aside for the moment. So far he's just told us he doesn't use any of it and it works ok for him. Ok.

But he carries on, "All of this structure and all of this overhead - I wouldn't be able to release this software as one person and have it stay competitive and have it released frequently - there wouldn't be enough time for that".

Now he's making a claim.

Some more choice phrases:

"You've gotta keep moving so fast"
"It isn't worth it in those environments"
"I don't care about all that stuff"

(The last seems to contradict an earlier statement, "Some of this stuff I do regret not knowing what it is", but we'll let that pass)

The first two continue on the theme of needing to move fast and "all those formalisms" "slowing you down".

Ok. Let's stop a moment. I want to be clear again: I'm not trying to attack Marco here. It is not my intention to pick apart the details of his extemporaneous words. He honestly appears to be speaking on the assumption that all the things the listener had asked about are excessive, "formal things", that "matter a lot in […] medical […] or banking" enterprises, or for a "space shuttle" - but are just unnecessary overhead for a one-man independent developer writing, "for apps on phones that don't do anything important".

If you accept the faulty premise, the rest of his reasoning makes a lot of sense.

So after I listened to episode 107 I mailed in to explain the difference between low-level, developer-oriented disciplines - TDD in particular - and the higher-level, quality-driven testing approaches more commonly associated with large enterprises or critical industries. I attempted to briefly summarise the benefits that even a one-man developer shop might get from adopting such a discipline.

Unsurprisingly I was not the only one.

Episode 108, from 28:58: Marco brings up the feedback he received, amused to note that everyone pointed out that TDD was, "not only about testing, but it's about writing, 'self-contained code that makes it easier to refactor…'". He recognises that this (and other principles of software design) are "good programming practice". He references The Pragmatic Programmer as being a good source of such principles - and it is. However he appears to believe that proponents of TDD think this is "exclusive to TDD" and, unfortunately, this takes him off-track again. (These days I would probably recommend Growing Object-Oriented Software, Guided by Tests and Clean Code as more up-to-date works that cover similar ground.)

Interestingly, The Pragmatic Programmer thoroughly recommends various forms of testing - including TDD (although not by that name - I'm not sure that term had been coined when the book was written). Pertinent to Marco's objections it has this to say:

"A good project may well have more test code than production code. The time it takes to produce this test code is worth the effort. It ends up being much cheaper in the long run" (emphasis mine).

The book also has a section on Refactoring. The topic was new (by that name) at the time - it even mentions that the "first major book on refactoring" was being published around the same time. Nor had the central role of refactoring in TDD's "Red, Green, Refactor" cycle been clearly established. Nonetheless it makes these two points that cut to the heart of it:

  1. "Don't try to refactor and add functionality at the same time."
  2. "Make sure you have good tests before you begin refactoring. Run the tests as often as possible. That way you will know quickly if your changes have broken anything. […] [Martin] Fowler's point of maintaining good regression tests is the key to refactoring with confidence."

As to the other principles: can they be followed without using a TDD approach? Of course they can. So what was all Marco's feedback about? Simply this: driving your code from tests forces you to make the code testable. This naturally leads to code that is less coupled, has fewer responsibilities per unit and higher cohesion, and is written from the start with the idea that it should be easy to change. As a by-product you have a great set of automated tests that give you the confidence to refactor to improve all these - and other - principles.

On their own that would all require a lot of discipline - and can be hard to measure (to know how well you are doing). In TDD it's usually easier to follow them from the start - and if you don't the tests will "tell" you by becoming harder to write. TDD is, itself, a discipline - and does require some experience and a prior understanding of the design principles that make it go smoothly. But in my experience it is much easier - and more gratifying - to follow the simple discipline of TDD than to remember to apply all the other individual principles with nothing to guide you.

The result is that your code will tend to be lean and supple and can respond to changes in requirements quickly and easily. You'll spend less time on finding and fixing bugs, and less time on writing code you didn't really need. The time spent writing the tests usually pays for itself almost straight away - often several times over. Saving time and being able to respond to change quickly - are these not the very qualities Marco values so much?

Back in Episode 107, from around 25:57 - right after his first comments on testing - he talks about a situation he got into with The Magazine - a brand new code base for him. "The code was just getting messier and messier". He had a bug but he "couldn't figure out what the heck was going on" and "had all these other things that were problems with that system and it was really clunky". He goes on to say, "it was starting to get to a point where I was fearing adding anything to it. When code gets to a point where it's like a giant pile of messy spaghetti and it feels very fragile and you feel like, 'Oh my God. I need to add an attribute to this - is that going to break anything?'"

It's a shame that he appears to have just dismissed what is probably the best tool we have come up with to date for avoiding this ever happening in the first place.


Thanks to all those who I coerced into reviewing this post for me: Seb Rose, Jez Higgins, Paul Grenyer, Claudius Link, Pal Balog, Hubert Matthews, Peter Pilgrim, Martin Moene, Giovanni Asproni, Yechiel Kimchi, Chris O'Dell, Peter Sommerlad and Graham Lee.

Friday
Apr 20, 2012

Upcoming speaker engagements

I seem to have gotten myself committed to some speaker engagements over the next couple of months:

Next week, on 26th April, I'm giving the Thursday keynote at the ACCU 2012 conference in Oxford. My topic there is "The Congruent Programmer": it's about aligning what we do with our motivations and gaining clarity on why we do things.


Mobile East

Then, on 29th June, I'll be over at Mobile East talking about how to TDD your iOS apps. There seems to be growing interest in this area. I've just been engaged with a team at the BBC doing just that. And a few days ago Graham Lee's new book, "Test-Driven iOS Development", was released. Graham's book mentions my C++/Objective-C test framework, CATCH, so it must be good!

Thursday
Jan 12, 2012

Finesse

I've been playing piano since I was about 11.
Not continuously, of course - my fingers would have fallen off a long time ago! In fact I've barely touched one for a decade.

I'm really more of a synth player - and I've never been a great performer - my interest lay more in composition anyway (some old pieces of mine over on soundcloud.com/phil_nash). But I appreciate a good piano action on a synth keyboard. I chose my Ensoniq TS-12 synth as it had one of the better piano actions (and piano sounds) when I bought it in the mid 90s.

But something happened in 2002 that changed the way I thought about it. I was trying to get back into playing again after a few dry years. I'd just bought a MOTU 828 (effectively a very low-latency external sound card) and a copy of Steinberg's "The Grand". The Grand was a VST instrument that was one of the first to use high-definition, full-decay samples of every key on the piano at multiple velocities. That was amazing enough. But then it could perform additional processing - to apply the sympathetic resonance of the open piano strings when the sustain pedal is down, for example, or add in the sounds of the felt and hammers themselves. The result was a breathtaking leap forward in authenticity in digital piano sound.

There was only one problem. At the time the computer processing power, as well as disk IO, was limited enough that it didn't take much layering to push the boundaries. This resulted in note-stealing (where notes deemed least audible are culled, freeing up processing power for those more to the fore), freezes or even crashes. One option to counter this was to reduce the complexity of the instruments. Turning off features such as open string resonance - or using a simpler version of the instrument (e.g. my keyboard's built-in piano sample).

In theory that was an acceptable trade-off as it only really affected live playback and recording. The finished mix could be rendered in non-real-time, including all those CPU-intensive features in the final recording.

That's when I realised something quite surprising. When I played the full-featured version of The Grand I found I played differently to when I was playing the TS-12's on-board piano sound.
Even more surprising was that even playing a simplified Grand was noticeably different to playing the fully-enabled version!
And when I say I played differently the difference really was stark! The more authentic the piano sound, the more my fingers flowed across the keys. I was more accurate, more musical, and felt more connected with the music. Remember this was using the same physical keyboard with, essentially, the same instrument.

Audibly, the difference between a no-holds-barred Grand and one with the extra processing disabled, was very subtle - especially during normal playing. If you played a chord and let it ring you could hear the harmonics "shimmer" with the processing enabled. But I wouldn't consciously notice that while playing in general.

And yet I was quite clearly picking up on it and behaving differently as a result of it. Why?
Obviously all of that extra disk IO and processing was there to make the sound more authentic. To more closely mimic the nuances of the real world instrument. That's all intended to trick the listener's brain into thinking it is the real world instrument. But the player is a listener too. And the player, even one as unaccomplished as I, has a different interaction experience with a real instrument than an artificial one.

This has been quite a long anecdote to make one point: that small, barely perceptible, differences may have a huge impact on our experience - although not necessarily in ways we are consciously aware of. This seems especially true when applied to the way we interact with digital interfaces - whether that be a synth keyboard pretending to be a piano, or a touch screen pretending to be, say, a piece of paper. The details matter. We are participating in a fragile sensual suspension of disbelief. The tiniest crack that betrays the deception brings the whole thing down.

And we're only just at the beginning of a revolution in interaction metaphor.

Wednesday
Jun 15, 2011

Could the Internet please stop changing while I finish this blog post?

Whenever I write a blog entry I iterate it a few times - minor corrections here, typographic fixes there - or often rewriting (or deleting) large chunks of it.

Sometimes I'll have a half-finished entry in draft and come back to it months later - only to make substantial changes to it.

Normally I wouldn't mention that in the final version (with this previous exception). But on this occasion the changes reflect the mercurial nature of the subject matter so nicely that I hope you'll forgive a brief aside.

I'd started an entry about a year ago with the title, "Welcome to the semi-connected age". It was meant to be a summary of the current state of the art in "connected" apps, including the stand-off between web apps and native apps - all leading up to my take on it and what I'm working on in that area.

I'd already written a lot.

But when I picked it up a couple of weeks ago to see if I could finish it I realised that just about everything I touched on in it had changed! I had referenced Silverlight as a way forward with much potential (it has since been sidelined as a desktop web technology), Mono (in connection with Moonlight in particular - which has just had its staff laid off), and the inadequacy of Javascript and HTML (Javascript is a belatedly rising star and HTML5+CSS2/3 has been thrown into the mainstream despite not yet having settled into a standard). I even touched on Dropbox as being the poster child for cloud storage (they have since been embroiled in security concerns).

So I started again from scratch, with a piece called "That syncing feeling". But even before the end of that day things were changing again! Microsoft started showing off Windows 8 - which pushes HTML5+CSS3+JS even further into the limelight on the desktop - much to the dismay of Silverlight developers. And Apple confirmed that they would be launching iCloud at the following week's WWDC. That's what prompted the tweet that I took the title of this post from.

I thought it might be better to wait until things had settled down a bit.

And I'm glad I did. The WWDC Keynote has really stirred things up. I'm not sure if most people really "get" why, yet. But what I find reassuring is that Apple seem to be moving in exactly the direction that my original post was trying to promote.

So that brings us full circle. I can now put my points across, but this time with Apple to back me up.

To The Cloud

Even after writing that intro I've abandoned the rest of this post and restarted from scratch a couple of times. There's plenty on the cutting room floor for follow-up posts.

I'm going to use this post to cover why I see Apple's iCloud services as doing it right where some people see shortcomings.

What are these "shortcomings"?

iCloud is about transferring and syncing data. It's about being the canonical store of that data. I've seen a number of complaints that this is not really "the cloud", and that Apple are giving us a half-baked solution. What is the other half? The True Cloud, they say, hosts the apps themselves - not just the data (we'll ignore the MobileMe apps for the moment).

I couldn't disagree more! Why?

Dire RIA

First: even web apps run on your local machine. They might be hosted on a server but they are effectively deployed to your desktop/device every time you use them! Caching may play a role here, but that's really just a deployment performance tweak.

So a web app is just a Javascript (or some RIA language) app that is continuously deployed then interpreted on your desktop. It has some cross-platform advantages, due to being browser hosted - although it does trade these for cross-browser issues instead.

Second: writing a good, responsive, sophisticated web app is hard. Harder than the equivalent native app. But getting sync right between distributed clients is harder. Much harder. It could be argued that no-one has got it quite right yet. You could make a case - and this is my position - that hosting data for distributed native apps is The True Cloud. Web apps, in a way, are the half-baked solution.

So, what are the pros and cons of each?

Web apps:

Pros: Continuously deployed - always up-to-date. Minimal data integrity issues (always working off canonical version).
Cons: Requires constant connectivity. Slower. Poorer UX.

Native apps:

Pros: Can work disconnected. Can be much faster. Matches look-and-feel of your chosen platform. Integration with other apps.
Cons: Installation/updates can be more onerous, may require user action, and take time. Must deal with sync issues.

In my earlier drafts I went into much more detail on these points - especially connectivity (e.g. RIA technologies that allow disconnected working). But this time I'm just going to jump straight into how last week's WWDC announcements change the score:

But let's add a third category

iCloud enabled, Mac OS-X Lion or iOS 5 app:

Pros: Can work disconnected. Can be much faster. Matches look-and-feel of your chosen platform. Integration with other apps. One click install, automatic pushed updates using delta patches (fast!). Sync issues taken care of.

What happened to the Cons field? Well, you might still have some reasons to prefer web apps - such as the cross-platform promise. But for me, at least, there are now no cons! Especially if you combine native apps with web-hosted versions. That makes sense for PIM apps, like contacts, email and calendars. Maybe it makes sense for productivity apps too, like word processors, spreadsheets and slide presentation apps.

And guess what, Apple has cloud hosted versions of all those too - which work seamlessly with their native counterparts. At time of writing the future of these is uncertain, but I think it highly likely that they will continue to exist.

Best of all worlds?

Maybe. It does severely lock you into Apple's products, of course. I'm a big fan of Apple hardware and software in general - but this is something that must transcend a single company. They're not doing anything new at the small scale but, at the moment, it's only really Apple that have everything necessary to be able to pull this off end-to-end. I hope that in doing so they pave the way for the community to piece together a more coherent alternative picture. We have all the components out there. Many of them better than Apple is offering.

That syncing feeling

There are those who have been claiming that iCloud does not sync, but merely pushes content that it holds down to devices. It's true that Jobs didn't use the word, "sync" in his WWDC Keynote coverage. In fact he seemed to be specifically avoiding the word. Does that mean there really is no syncing capability in iCloud?

Well, remember that iCloud assimilates the MobileMe services that sync contacts, mail, calendars, etc. And even for the new services sync is fundamental to how they work. You add a song on one device and the other devices get it (which may involve the song being uploaded). You take a photo on one device and it gets synced to the other devices.

However these new services seem to be designed in such a way as to avoid, or at least minimise, the possibility of conflicts. If it were just a case of holding songs and photos in a file system and then syncing the file system, all those thorny conflict-resolution challenges traditionally associated with sync would arise.


But Apple have been very careful to keep away from those issues by managing the content at a higher level. Jobs seemed particularly proud when he said, "A lot of us have been working for 10 years to get rid of the file system". This is not just about simplification - it's about the file system being the wrong tool for the cloud - and I say this as someone who has worked for a file-based Cloud Storage company.

It's this "post-file-system era" that is central to what I'm going to cover in more detail in a future post.