I've been playing piano since I was about 11.
Not continuously, of course - my fingers would have fallen off a long time ago! In fact I've barely touched one for a decade.

I'm really more of a synth player - and I've never been a great performer; my interest lay more in composition anyway (I've posted some old pieces of mine online). But I appreciate a good piano action on a synth keyboard. I chose my Ensoniq TS-12 synth as it had one of the better piano actions (and piano sounds) when I bought it in the mid 90s.

But something happened in 2002 that changed the way I thought about it. I was trying to get back into playing again after a few dry years. I'd just bought a MOTU 828 (effectively a very low-latency external sound card) and a copy of Steinberg's "The Grand". The Grand was a VST instrument that was one of the first to use high-definition, full-decay samples of every key on the piano at multiple velocities. That was amazing enough. But then it could perform additional processing - to apply the sympathetic resonance of the open piano strings when the sustain pedal is down, for example, or add in the sounds of the felt and hammers themselves. The result was a breathtaking leap forward in authenticity in digital piano sound.

There was only one problem. At the time, computer processing power, as well as disk IO, was limited enough that it didn't take much layering to push the boundaries. This resulted in note-stealing (where the notes deemed least audible are culled, freeing up processing power for those more to the fore), freezes, or even crashes. One option to counter this was to reduce the complexity of the instruments: turning off features such as open-string resonance, or using a simpler version of the instrument (e.g. my keyboard's built-in piano sample).

In theory that was an acceptable trade-off as it only really affected live playback and recording. The finished mix could be rendered in non-real-time, including all those CPU-intensive features in the final recording.

That's when I realised something quite surprising. When I played the full-featured version of The Grand I found I played differently to when I was playing the TS-12's on-board piano sound.
Even more surprising was that even playing a simplified Grand was noticeably different to playing the fully-enabled version!
And when I say I played differently, the difference really was stark! The more authentic the piano sound, the more my fingers flowed across the keys. I was more accurate, more musical, and felt more connected with the music. Remember, this was using the same physical keyboard with, essentially, the same instrument.

Audibly, the difference between a no-holds-barred Grand and one with the extra processing disabled was very subtle - especially during normal playing. If you played a chord and let it ring you could hear the harmonics "shimmer" with the processing enabled. But I wouldn't consciously notice that while playing in general.

And yet I was quite clearly picking up on it and behaving differently as a result of it. Why?
Obviously all of that extra disk IO and processing was there to make the sound more authentic - to more closely mimic the nuances of the real-world instrument. It's all intended to trick the listener's brain into thinking it is the real-world instrument. But the player is a listener too. And the player, even one as unaccomplished as I, has a different interaction experience with a real instrument than with an artificial one.

This has been quite a long anecdote to make one point: that small, barely perceptible differences may have a huge impact on our experience - although not necessarily in ways we are consciously aware of. This seems especially true of the way we interact with digital interfaces - whether that be a synth keyboard pretending to be a piano, or a touch screen pretending to be, say, a piece of paper. The details matter. We are participating in a fragile sensory suspension of disbelief. The tiniest crack that betrays the deception brings the whole thing down.

And we're only just at the beginning of a revolution in interaction metaphor.


Could the Internet please stop changing while I finish this blog post?

Whenever I write a blog entry I iterate it a few times - minor corrections here, typographic fixes there - or often rewriting (or deleting) large chunks of it.

Sometimes I'll have a half-finished entry in draft and come back to it months later - only to make substantial changes to it.

Normally I wouldn't mention that in the final version (with this previous exception). But on this occasion the changes reflect the mercurial nature of the subject matter so nicely that I hope you'll forgive a brief aside.

I'd started an entry about a year ago with the title, "Welcome to the semi-connected age". It was meant to be a summary of the current state of the art in "connected" apps, including the stand-off between web apps and native apps - all leading up to my take on it and what I'm working on in that area.

I'd already written a lot.

But when I picked it up a couple of weeks ago to see if I could finish it I realised that just about everything I touched on in it had changed! I had referenced Silverlight as a way forward with much potential (it has since been sidelined as a desktop web technology), Mono in connection with Moonlight in particular (its staff have just been laid off), and the inadequacy of Javascript and HTML (Javascript is a belatedly rising star, and HTML5+CSS2/3 has been thrown into the mainstream despite not yet having settled into a standard). I even touched on Dropbox as being the poster child for cloud storage (they have since been embroiled in security concerns).

So I started again from scratch, with a piece called, "That syncing feeling". But even before the end of that day things were changing again! Microsoft started showing off Windows 8 - which pushes HTML5+CSS3+JS even further into the limelight on the desktop - much to the dismay of Silverlight developers. And Apple confirmed that they would be launching iCloud at the following week's WWDC. That's what prompted the tweet from which I took the title of this post.

I thought it might be better to wait until things had settled down a bit.

And I'm glad I did. The WWDC Keynote has really stirred things up. I'm not sure if most people really "get" why, yet. But what I find reassuring is that Apple seem to be moving in exactly the direction that my original post was trying to promote.

So that brings us full circle. I can now put my points across, but this time with Apple to back me up.

To The Cloud

Even after writing that intro I've abandoned the rest of this post and restarted from scratch a couple of times. There's plenty on the cutting room floor for follow-up posts.

I'm going to use this post to cover why I see Apple's iCloud services as doing it right where some people see shortcomings.

What are these "shortcomings"?

iCloud is about transferring and syncing data. It's about being the canonical store of that data. I've seen a number of complaints that this is not really "the cloud", and that Apple are giving us a half-baked solution. What is the other half? The True Cloud, they say, hosts the apps themselves - not just the data (we'll ignore the MobileMe apps for the moment).

I couldn't disagree more! Why?

Dire RIA

First. Even web apps run on your local machine. They might be hosted on a server but they are effectively deployed to your desktop/device every time you use them! Caching may play a role here, but that's really just a deployment performance tweak.

So a web app is just a Javascript (or some RIA language) app that is continuously deployed then interpreted on your desktop. It has some cross-platform advantages, due to being browser hosted - although it does trade these for cross-browser issues instead.

Second. Writing a good, responsive, sophisticated web app is hard. Harder than the equivalent native app. But getting sync right between distributed clients is harder. Much harder. It could be argued that no-one has got it quite right yet. You could make a case, and this is my position, that hosting data for distributed native apps is The True Cloud. Web apps, in a way, are the half-baked solution.

So, what are the pros and cons of each?

Web apps:

Pros: Continuously deployed - always up-to-date. Minimal data integrity issues (always working off canonical version).
Cons: Requires constant connectivity. Slower, poorer UX.

Native apps:

Pros: Can work disconnected. Can be much faster. Matches look-and-feel of your chosen platform. Integration with other apps.
Cons: Installation/updates can be more onerous or require user action and take time. Must deal with sync issues.

In my earlier drafts I went into much more detail on these points - especially connectivity (e.g. RIA technologies that allow disconnected working). But this time I'm just going to jump straight into how last week's WWDC announcements change the score:

But let's add a third category

iCloud-enabled Mac OS X Lion or iOS 5 app:

Pros: Can work disconnected. Can be much faster. Matches look-and-feel of your chosen platform. Integration with other apps. One click install, automatic pushed updates using delta patches (fast!). Sync issues taken care of.

What happened to the Cons field? Well, you might still have some reasons to prefer web apps - such as the cross-platform promise. But for me, at least, there are now no cons! Especially if you combine native apps with web-hosted versions. That makes sense for PIM apps, like contacts, email and calendars. Maybe it makes sense for productivity apps too, like word processors, spreadsheets and slide presentation apps.

And guess what, Apple has cloud hosted versions of all those too - which work seamlessly with their native counterparts. At time of writing the future of these is uncertain, but I think it highly likely that they will continue to exist.

Best of all worlds?

Maybe. It does severely lock you into Apple's products, of course. I'm a big fan of Apple hardware and software in general - but this is something that must transcend a single company. They're not doing anything new at the small scale but, at the moment, it's only really Apple that have everything necessary to be able to pull this off end-to-end. I hope that in doing so they pave the way for the community to piece together a more coherent alternative picture. We have all the components out there. Many of them better than Apple is offering.

That syncing feeling

There are those who have been claiming that iCloud does not sync, but merely pushes content that it holds down to devices. It's true that Jobs didn't use the word "sync" in his WWDC Keynote coverage. In fact he seemed to be specifically avoiding the word. Does that mean there really is no syncing capability in iCloud?

Well, remember that iCloud assimilates the MobileMe services that sync contacts, mail, calendars, etc. And even for the new services, sync is fundamental to how they work. You add a song on one device and the other devices get it (which may involve the song being uploaded). You take a photo on one device and it gets synced to the other devices.

However, these new services seem to be designed in such a way as to avoid, or at least minimise, the possibility of conflicts. If it were just a case of holding songs and photos in a file system and then syncing the file system, all those thorny conflict-resolution challenges traditionally associated with sync would arise.

A lot of us have been working for 10 years to get rid of the file system

But Apple have been very careful to keep away from those issues by managing the content at a higher level. Jobs seemed particularly proud when he said, "A lot of us have been working for 10 years to get rid of the file system". This is not just about simplification - it's about the file system being the wrong tool for the cloud - and I say this as someone who has worked for a file-based Cloud Storage company.

It's this "post-file-system era" that is central to what I'm going to cover in more detail in a future post.


Unit Testing in C++ and Objective-C just got ridiculously easier still

[Image: 'Spider Web in Morning Sun' by Rob van Hilten]

In my previous post I introduced Catch - my unit testing framework for C++ and Objective-C.

The response was overwhelming. Thanks to all who commented, offered support - and even contributed to the code with fixes and features.

It certainly gave me the motivation to continue active development, and a lot has changed since that post. I'm going to cover some highlights, but first I want to focus on what has been one of the most distinguishing features of Catch - the one that has attracted the most attention - and how I have not rested but made it even better!

How easy is easy enough?

Back in April I gave a five minute lightning talk on Catch at the ACCU conference in Oxford (I highly recommend the conference). With just five minutes to talk about what makes Catch special what was I going to cover? The natural operator-based comparison syntax? The use of Sections instead of class-based fixtures? Data generators?

Well I did touch on the first point. But I decided to use the short amount of time to drive home just how quickly and easily you can get up and running with Catch. So after a 30 second intro I went to the GitHub page for Catch, downloaded the zip of the source (over a 3G connection), unzipped it and copied it to a central location, fired up Xcode, started a fresh C++ project, added the path to Catch's headers, #include'd "catch_with_main.hpp", wrote an anonymous test case, compiled and ran it, demonstrated how it caught a bug, fixed the bug, and finally recompiled and re-ran to see the bug go away.

Phew! Not bad for five minutes, I thought. And from the feedback I got afterwards it really did drive the point home.

Compare that with my first experience of using Google Test. It took me over an hour to get it downloaded and building in Xcode (the Xcode projects don't seem to have been maintained recently - so perhaps that is a little unfair). There are other frameworks that I've tried where I have just run out of patience and never got them going.

Of course I'm biased. But I have had several people tell me that they tried Catch and found it to be the easiest C++ Unit Test framework they have used.

But still I wasn't completely satisfied with the initial experience and ease of incorporating Catch into your own projects.

In particular, if you maintain your own open source project and want to bundle it with a set of unit tests (and why wouldn't you?) then it starts to get fiddly. Do you list Catch as an external dependency that the user must install on their own (no matter how easy they are to install, external dependencies are one of my least favourite things)? Do you include all the source to Catch directly in your project tree? That can get awkward to maintain and makes it look like your project is much bigger than it is. If you host your project on GitHub too (or some other Git-based repository) you could include Catch as a submodule. That's still not ideal, shares some of the problems of the first two options, and is not possible for everyone.

There can be only one

Since Catch, as a library, is fully header-only, I decided to provide a single-header version that is ideal for direct inclusion in third-party projects.

How did I do this?

Go on guess.

Did you guess that I wrote a simple Python script to partially preprocess the headers so that the #includes within the library are expanded out (just once, of course), leaving the rest untouched?

If you did you're not far off. Fortunately some of the conventions I have used within the source meant I could drastically simplify the script. It doesn't need to be a full C preprocessor: it only needs to understand #include and the #ifndef/#endif include guards - and even those only in simplified forms. The whole script is just 42 lines of code. 42 always seems to be the answer.
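The real thing is a small Python script, but to illustrate the idea, here's a rough, hypothetical sketch of the same once-only expansion in C++ (file handling simplified, and the include-guard stripping omitted):

#include <fstream>
#include <iostream>
#include <set>
#include <string>

std::set<std::string> expanded; // stands in for the include guards

void expand( const std::string& path, std::ostream& out )
{
    if( !expanded.insert( path ).second )
        return; // each header gets expanded exactly once

    std::ifstream in( path.c_str() );
    std::string line;
    while( std::getline( in, line ) )
    {
        std::string::size_type pos = line.find( "#include \"" );
        if( pos != std::string::npos )
        {
            // recurse into local includes; everything else passes through
            std::string rest = line.substr( pos + 10 );
            expand( rest.substr( 0, rest.find( '"' ) ), out );
        }
        else
            out << line << "\n";
    }
}

int main()
{
    expand( "catch.hpp", std::cout );
    return 0;
}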

The result is a single header file containing the whole library.

I see no reason why this should not be the default way to use Catch - unless you are developing Catch itself. So I'm now providing this file as a separate download from within GitHub. Think of it as the "compiled" header. The lib file of the header-only world.

Licence To Catch

But Open Source is a quagmire of licensing issues, isn't it?

Well it certainly can be. Those familiar with GPL and similar open source licences may be very wary of embedding one open source library (Catch) within another (their own).

IANAL but my understanding is that, contrary to what might seem intuitive, source code with no license at all can be more dangerous, legally speaking, than if it does have one (and if you thought that sentence was difficult to parse you should try reading a software license).

So Catch is licensed. I've used the Boost license, for a number of reasons:

  • It is very permissive. In particular it is not viral. It explicitly allows the case of including the source of Catch along with the distribution of your own source code, with no requirements on your own code.
  • It's been around for a while now - long enough, I think, that most people are comfortable with it. I work with banks, who can be very nervous about software licensing issues - especially open source. But every one I have worked at has already got Boost through its compliance process. I'm hoping that will ease any barriers to adoption.
  • I'm familiar with Boost, know many of its contributors personally, and generally trust the spirit of the licence. Boost itself is a very well known and highly respected set of libraries - with very widespread adoption. A large part of Boost is in header-only libraries and people are already comfortable including them in their own projects.

So what's the Catch? The catch is that I retain the right to keep using that joke - well beyond its humorous lifetime.

The important bit:

In short: any open source author who wants to use Catch to write unit tests for their own projects should feel very free to do so and to include the single-header (or full) version of the library in their own repository and along with their distribution.

That fully applies to commercial projects too, of course.

What else?

Here's a quick run down of some of the other changes and features that have gone in:
  • Single evaluation of test expressions. The original implementation evaluated the expression being tested twice - once to get the result, and then again to get the component values. There were some obstacles to getting this to work whilst only evaluating the expression once. But we got there in the end. This is critical if you want to write test expressions that have side-effects.
  • Anonymous test cases. A little thing, but I find them really handy when starting a new project or component and I'm just exploring the space. The idea is that you don't need to think of a name and description for your test - you can just dive straight in and write code. If you end up with something more like a test case it's trivial to go back and name it. (There's a short sketch of this after this list.)
  • Generators. These are in but not fully tested yet. Consider them experimental - but they are very cool and very powerful.
  • Custom exception handlers. (C++) Supply handlers for your own exception types - even those that don't derive from std::exception, so you can report as much detail as you like when an exception is caught within Catch. I'm especially pleased this went in - given the name of the library!
  • Low build time overhead. I've been aggressive at keeping the compile-time footprint to a minimum. This is one of the concerns when using header-only libraries - especially those with a lot of C++ templates. Catch uses a fair bit of templates, but nothing too deeply recursive. I've also organised the code so that as much of the implementation as possible is included in only one translation unit (the one with main() or the test runner). I think you'll be hard pushed to notice any build-time overhead due to Catch.
  • Many fixes, refactorings and minor improvements. What project doesn't have them? This is where a lot of the effort - possibly the majority - has gone, though. I've wanted to keep the code clean, well factored, and the overhead low. I've also wanted it to be possible to compile at high warning levels without any noise from Catch. This has been challenging at times - especially after the Single Evaluation work. If you see any Catch-related warnings please let me know.
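As a taster of those anonymous test cases, here's a minimal sketch, assuming the ANON_TEST_CASE macro (the example itself is mine):

#include <vector>
#include "catch_with_main.hpp"

ANON_TEST_CASE()
{
    // no name or description needed while exploring
    std::vector<int> v( 5 );
    REQUIRE( v.size() == 5 );

    v.push_back( 42 );
    REQUIRE( v.size() == 6 );
}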

Are we there yet?

As well as my own projects I've been using Catch on a large scale project for a bank. I believe it is already more than just a viable alternative to other frameworks.

Of course it will continue to be refined. There are still bugs being found and fixed.

But there are also more features to be added! I need to finish the work on generators. I'd like to add the tagging system I've mentioned before. I need to look at Matchers. Whether Catch provides its own, or whether I just provide the hooks for a third-party library to be integrated, I think Matchers are an important aspect of unit testing.

I also have a stub project for an iPhone test runner - for testing code on an iOS device. Several people have expressed an interest in this so that is also on my list.

And, yes, I will fill out the documentation!


Unit Testing in C++ and Objective-C just got easier

[Image: 'Day 133-365: Catching the bokeh']

Back in May I hinted that I was working on a unit testing framework for C++. Since then I've incorporated the technique that Kevlin Henney proposed and a whole lot more. I think it's about time I introduced it to the world:

This post is very old now, but is still the first point of contact with Catch for many people. Most of the material here still applies in concept, so is worth reading - but some of the specifics have changed. Please see the tutorial (and other docs) over on GitHub for more up-to-date coverage.

Introducing CATCH

CATCH is a brand new unit testing framework for C, C++ and Objective-C. It stands for 'C++ Automated Test Cases in Headers', although that shouldn't downplay the Objective-C bindings. In fact my initial motivation for starting it was dissatisfaction with OCUnit.

Why do we need another Unit Testing framework for C++ or Objective-C?

There are plenty of unit test frameworks for C++. Not so many for Objective-C - which primarily has OCUnit (although you could also coerce a C or C++ framework to do the job).

They all have their strengths and weaknesses. But most suffer from one or more of the following problems:

  • Most take their cues from JUnit, which is unfortunate as JUnit is very much a product of Java. The resulting idiom-mismatch is, I believe, one of the reasons for the slow uptake of unit testing and TDD in C++.
  • Most require you to build libraries. This can be a turn off to anyone who wants to get up and running quickly - especially if you just want to try something out. This is especially true of exploratory TDD coding.
  • There is typically a certain amount of ceremony or boilerplate involved. Ironically the frameworks that try to be faithful to C++ idioms are often the worst culprits. Eschewing macros for the sake of purity is a great and noble goal - in application development. For a DSL for testing application code - especially since preprocessor information (e.g. file and line numbers) is required anyway - the extra verbosity seems too high a price to pay to me.
  • Some pull in external dependencies
  • Some involve a code generation step

The list goes on, but these are the criteria that really had me disappointed in what was out there, and I'm not the only one. But can these be overcome? Can we do even better if we start again without being shackled to the ghost of JUnit?

What's the CATCH?

You may well ask!

Well, to start, here's my three step process for getting up and running with CATCH:

  1. Download the headers from GitHub into a subfolder of your project
  2. #include "catch.hpp"
  3. There is no step 3!

Ok, you might need to actually write some tests as well. Let's have a look at how you might do that:

[Update: Since my original post I have made some small, interface breaking, changes - for example the name of the header included below. I have updated this post to reflect these changes - in case you were wondering]

#include "catch_with_main.hpp"

TEST_CASE( "stupid/1=2", "Prove that one equals 2" )
    int one = 2;
    REQUIRE( one == 2 );

Short and to the point, but this snippet already shows a lot of what's different about CATCH:

  • The assertion macro is REQUIRE( expression ), rather than the, now traditional, REQUIRE_EQUALS( lhs, rhs ), or similar. Don't worry - lhs and rhs are captured anyway - more on this later.
  • The test case is in the form of a free function. We could have made it a method, but we don't need to
  • We didn't name the function. We named the test case. This frees us from couching our names in legal C++ identifiers. We also provide a longer form description that serves as an active comment
  • Note, too, that the name is hierarchical (as would be more obvious with more test cases). The convention is, as you might expect, "root/branch1/branch2/.../leaf". This allows us to easily group test cases without having to explicitly create suites (although this can be done too).
  • There is no test context being passed in here (although it could have been hidden by the macro - it's not). This means that you can freely call helper functions that, themselves, contain REQUIRE() assertions, with no additional overhead (there's a small sketch of this after this list). Even better - you can call into application code that calls back into test code. This is perfect for mocks and fakes.
  • We have not had to explicitly register our test function anywhere. And by default, if no tests are specified on the command line, all (automatically registered) test cases are executed.
  • We even have a main() defined for us by virtue of #including "catch_with_main.hpp". If we just #include that in one dedicated cpp file we would #include "catch.hpp" in our test case files instead. We could also write our own main that drives things differently.
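As an aside, to illustrate that helper-function point (the helper here is my own invention), assertions can live in any free function called from a test case:

#include "catch.hpp"

static void requireEven( int n )
{
    // no test context object needs to be threaded through
    REQUIRE( n % 2 == 0 );
}

TEST_CASE( "example/helpers", "Assertions can live in helper functions" )
{
    for( int i = 0; i < 10; i += 2 )
        requireEven( i );
}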

That's a lot of interesting stuff packed into just a few lines of test code. It's also got more wordy than I wanted. Let's take a bit more of a tour by example.

Information is power

Here's another contrived example:

TEST_CASE( "example/less than 7", "The number is less than 7" )
    int notThisOne = 7;

    for( int i=0; i < 7; ++i )
        REQUIRE( notThisOne > i+1  );

In this case the bug is in the test code - but that's just to make it self contained. Clearly the requirement will be broken for the last iteration of i. What information do we get when this test fails?

    notThisOne > i+1 failed for: 7 > 7

(We also get the file and line number, but they have been elided here for brevity). Note we get the original expression and the values of the lhs and rhs as they were at the point of failure. That's not bad, considering we wrote it as a complete expression. This is achieved through the magic of expression templates, which we won't go into the details of here (but feel free to look at the source - it's probably simpler than you think).
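If you're curious, here's a deliberately stripped-down sketch of the core trick - far simpler than Catch's real implementation, but the same idea. operator->* binds more tightly than any comparison operator, so it peels off the lhs first; the comparison operator then receives the rhs, and both values are available for reporting, each evaluated exactly once:

#include <iostream>

template<typename L>
struct LhsCapture
{
    L lhs;

    template<typename R>
    bool operator>( R const& rhs ) const
    {
        bool result = lhs > rhs;
        if( !result )
            std::cout << "failed for: " << lhs << " > " << rhs << "\n";
        return result;
    }
    // ==, !=, < and friends would follow the same pattern
};

struct Decomposer
{
    template<typename L>
    LhsCapture<L> operator->*( L const& lhs ) const
    {
        LhsCapture<L> capture = { lhs };
        return capture;
    }
};

#define CHECK_SKETCH( expr ) ( Decomposer() ->* expr )

int main()
{
    int notThisOne = 7;
    int i = 6;
    CHECK_SKETCH( notThisOne > i+1 ); // prints: failed for: 7 > 7
    return 0;
}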

Most of the time this level of information is exactly what you need. However, to keep the use of expression templates to a minimum we only decompose the lhs and rhs. We don't decompose the value of i in this expression, for example. There may also be other relevant values that are not captured as part of the test expression.

In these cases it can be useful to log additional information. But then you only want to see that information in the event of a test failure. For this purpose we have the INFO() macro. Let's see how that would improve things:

TEST_CASE( "example/less than 7", "The number is less than 7" )
    int notThisOne = 7;

    for( int i=0; i < 7; ++i )
        INFO( "i=" << i );
        REQUIRE( notThisOne > i+1  );

This gives us:

    info: 'i=6'
    notThisOne > i+1 failed for: 7 > 7

But if we fix the test, say by making the for loop go to i < 6, we now see no output for this test case (although we can, optionally, see the output of successful tests too).

A SECTION on specifications

There are different approaches to unit testing that influence the way the tests are written. Each approach requires a subtle shift in features, terminology and emphasis. One approach is often associated with Behaviour Driven Development (BDD). This aims to present test code in a language neutral form - encouraging a style that reads more like a specification for the code under test.

While CATCH is not a dedicated BDD framework it offers several features that make it attractive from a BDD perspective:

  • The hiding of function and method names, writing test names and descriptions in natural language
  • The automatic test registration and default main implementation eliminate boilerplate code that would otherwise be noise
  • Test data generators can be written in a language neutral way (not fully implemented at time of writing)
  • Test cases can be divided and subdivided into SECTIONs, which also take natural language names and descriptions.

We'll look at the test data generators another time. For now we'll look at the SECTION macro.

Here's an example (from the unit tests for CATCH itself):

TEST_CASE( "succeeding/Misc/Sections/nested", "nested SECTION tests" )
    int a = 1;
    int b = 2;
    SECTION( "s1", "doesn't equal" )
        REQUIRE( a != b );
        REQUIRE( b != a );

        SECTION( "s2", "not equal" )
            REQUIRE_FALSE( a == b);

Again, this is not a great example and it doesn't really show the BDD aspects. The important point here is that you can divide your test case up in a way that mirrors how you might divide a specification document up into sections with different headings. From a BDD point of view your SECTION descriptions would probably be your "should" statements.
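For a more BDD-flavoured example (this sketch is my own, not from CATCH's own tests), the SECTION descriptions can read as "should" statements:

#include <vector>
#include "catch.hpp"

TEST_CASE( "vector/resizing", "Resizing a std::vector" )
{
    std::vector<int> v( 5 );

    SECTION( "bigger", "should increase the size" )
    {
        v.resize( 10 );
        REQUIRE( v.size() == 10 );
    }
    SECTION( "smaller", "should decrease the size" )
    {
        v.resize( 2 );
        REQUIRE( v.size() == 2 );
    }
}

Each SECTION runs as its own pass through the test case, so both sections see a fresh, five-element vector - which is what makes this a lightweight alternative to class-based fixtures.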

There is more planned in this area. For example I'm considering offering a GIVEN() macro for defining instances of test data, which can then be logged.

In Kevlin Henney's LHR framework, mentioned in the opening link, he used SPECIFICATION where I have used TEST_CASE, and PROPOSITION for my top level SECTIONs. His equivalent of my nested SECTIONs are (or were) called DIVIDERs. All of the CATCH macro names are actually aliases for internal names and are defined in one file (catch.hpp). If it aids utility for BDD or other purposes, the names can be aliased differently simply by creating a new mapping file and using that.


There is much more to cover but I wanted to keep this short. I'll follow up with more. For now here's (yet another) list of some of the key features I haven't already covered:

  • Entirely in headers
  • No external dependencies
  • Even test fixture classes and methods are self registering
  • Full Objective-C bindings
  • Failures (optionally) break into the interactive debugger, if available
  • Floating point tolerances supported in an easy to use way (see the sketch after this list)
  • Several reporter classes included - including a JUnit compatible xml reporter. More can be supplied
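The floating point support deserves a quick illustration. A brief sketch, assuming the Approx wrapper with its adjustable epsilon (the example values are mine):

#include "catch_with_main.hpp"

TEST_CASE( "example/approx", "Comparing floats with a tolerance" )
{
    double oneThird = 1.0 / 3.0;

    REQUIRE( oneThird * 3.0 == Approx( 1.0 ) );            // passes despite rounding
    REQUIRE( oneThird == Approx( 0.33 ).epsilon( 0.02 ) ); // widened tolerance
}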

Are there any features that you feel are missing from other frameworks that you'd like to see in CATCH? Let me know - it's not too late. There are some limiting design goals - but within those there are lots of possibilities!


The iPad reconsidered

As promised, here are my current thoughts on the iPad - as developed on from my thoughts of four months ago.

I still find the screen too glossy and hard to read in some circumstances. This has been somewhat mitigated by the onset of autumn as my train journey is usually in the dark or half-light. However, now that I'm used to the iPhone 4's retina display I'm also very distracted by the sight of all those rough edges and pixels. I'm sure they weren't there before.

As I understand it there are some rather large technical challenges to bringing retina display technology to a panel the size of the current iPad. I do wonder if this feeds into some of the rumours of an upcoming 7" iPad (or other smaller form factor).

On the flip side I've found that, with the retina display, the iPhone is now even more useful in some situations where I might otherwise have preferred the iPad - e.g. reading. It's now almost as comfortable to read material on the iPhone, at the higher resolution, as on the iPad - and without those distracting pixels.

With this in mind I think I would welcome a smaller form factor (as an option in a range) - as long as it had higher pixel density (ideally "retina" level).

Aside from the screen, we've certainly seen a lot more apps targeting the iPad. Some more successfully than others. We're seeing innovation in this area and it's an exciting time - but I think for the casual observer it's still a bit too early to really benefit. Unless you have something specific in mind I'd advise holding off to see what the next generation - or possibly an upcoming competitor - holds. If you just want an eReader I think the latest Kindles are a good bet - and needn't preclude getting a next-gen iPad later.

I think that is the mark of something truly new. We're finding our feet as a community. The hype behind Apple and the iPad seems to be sufficient to keep the momentum going until we reach the next level.

It's also been interesting to see how other vendors have responded. There has been the inevitable glut of cheap knock-off clones, of course. But I think we're starting to see some real contenders. The Android space has really taken off this year. In some ways Android is even ahead - but I think Apple is still leading on innovation at this point. There are fundamentally different philosophies behind Google's approach and Apple's. But you can't deny that Google are following the strategy that Jeff Atwood recently coined as, "Go that way, really fast". Even Blackberry are making credible contributions to the space (or are they?) - focusing, as is their wont, on the business side of things.

Competition is good. Not just because it will encourage Apple to keep innovating, and keep iterating. I genuinely think this area of computing is the one to watch. It's only just getting off the ground. I think Apple will be the thought leaders in the space for a little while yet, but it would be unhealthy for that to remain so in the long term. That was less true of the iPhone, but I still think it's good that competitors are catching up there too (and ahead of schedule). The reason I think Apple are still ahead is that they control the end-to-end user experience and that is, right now, critical. That may not always be the most important factor.

But back to the specifics of the, current, iPad. Most of the areas I commented on in the previous post were software related. I now have iOS 4.2b2 installed. How has that changed things? Well I can't talk about specifics, beyond what has already been advertised, as it is still under NDA. But I can say that just between Multitasking and Folders alone I can't imagine going back to 3.2. I actually jumped on 4.2b1 as soon as it came out - despite my usual policy of not installing first betas, if any, on devices I use in daily life (I have an iPod touch and an older iPhone specifically for testing). Android users will be quick to point out that corresponding features have been available on that platform for some time. Not having used them I can't comment, but I have heard that the experience is less satisfying.

However, despite all this I have to say that I've been using it less than I was even four months ago.


Well, in part, it's a matter of time. My typical daily computing experience is something like this:

Morning commute:
Catch up on Twitter on my iPhone while waiting for the train. Development (or writing blog posts) on my laptop on the train. I sometimes use the iPad for looking up reference material, as the internet connection is more stable than my 3 Mi-Fi.

At work:
Work on a PC running Windows. Occasional emails and Twitter checks are catered for by the iPhone. Music supplied by the iPhone.

Evening commute:
Catch up on Twitter on my iPhone while waiting for the train. Development (or writing blog posts) on my laptop on the train.

Evening at home:
Watching an episode of something (currently: Lost) from my Mac Mini (possibly soon to be my Apple TV 2) while having dinner. Occasionally a little more development on my laptop, or downloading iOS betas. Sometimes some reading in bed - either on the iPhone or the iPad.

So there is not a lot of scope for the iPad to make inroads there. Currently the only regular appearance is the late-night reading - which the iPhone is often more convenient for anyway. I didn't mention, too, that I sometimes use the iPad during the day as a photo frame, but it has been disappointing even in that capacity as there is, currently, no way to change the update frequency (iOS 4.2 may or may not address that - I'll say no more) - and no way to create photo albums on the device. If these things are addressed I will probably use it more for that - but that's hardly revolutionary.

Once my Apple TV 2 arrives and AirPlay starts working I may well use the iPad as a source for that too - but the iPhone may serve that role just as well.

So, in summary, I'm making it sound as though the current iPad is much less useful than I had originally hoped. There is some truth in that. I think the option of a smaller form factor and/or a higher-density display will help there. But I think another obstacle has just been finding the time to seek out apps that make better use of what the iPad has to offer (this would be true of any tablet device) - and more importantly - to write my own!

The time for the iPad is, maybe, yet to come, but the revolution has already begun
