Friday, Apr 9, 2010

iPhone Multitasking Revisited

Back in January I posted to this blog my thoughts on what form background processing might take in iPhone OS. I proposed that we would not see what you might consider "full multitasking", but went on to explain why I didn't think we needed it (and why that would be a good thing) and what should happen instead. Yesterday Apple announced iPhone OS 4 and the headline feature, of course, was multitasking. So how did my musings (and those of the commenters to that article) stack up?


I listed three primary areas that most multitasking usages involve:

1. Background apps, such as music players. The iPod app does run in the background, of course, but there is certainly a case for third-party apps such as Spotify or Last.fm to have the same ability.

This was the first multi-tasking service discussed - the background audio service. Pandora was the poster child.

2. Notifications. This is at least partly addressed by the Push Notification service that Apple now provides. More on that in a moment.

I went on to discuss in more depth what I meant by this and Apple have implemented this exactly as I proposed (although it is conceivable that they arrived at this concept independently). They have called it Local Notifications.

3. Fast switching between apps

I also went on to suggest that this is what most people want most of the time they lament the lack of multi-tasking. App switching has always been fairly smooth on the iPhone and iPad, but new services that allow apps to persist and restore their state quickly make that process really seamless - and even allow apps to respond to being switched back to (the example given was Tap Tap Revenge which, when you switch back to it after switching away, has paused the game and gives a 3-second countdown before resuming).

I felt that these were probably the biggest uses of multi-tasking that could be provided without disrupting performance and battery life. They are not the only areas, of course. I invited people to suggest other areas that could be served in a similar way.

The first comment I received was from Graeme Foster, who said:

Location-tracking apps like Google Latitude and FourSquare?

And the second, from James Webster, said:

Running the GPS and a background app would be battery intensive

I'm sure Apple were reading these comments. They have addressed both of these concerns. There are two background location services. One is based on cell tower triangulation and is aimed at Graeme's use case. The other is full-on location services - aimed at navigation apps, which can assume the device will be connected to power - acknowledging James' concern.

One use, which wasn't mentioned by me or my commenters, but I did see discussed elsewhere, was for VoIP apps, such as Skype - and it was nice to see this catered for too. In fact I wonder if it becomes possible to use an iPod touch just like a normal phone now!

Another feature that I didn't bring out (but others did) was completion of upload tasks. In retrospect this is perhaps the most important from my own perspective. In another iPad-related blog post I alluded to an app that I'm working on that fits the iPad like a glove (no, not literally). I'm still holding my cards quite close on that one, but what I will say is that it involves cloud syncing. The trouble with cloud-sync-enabled apps right now on iPhone OS is that the syncing tends to happen in the background. If the user closes the app while the syncing is in progress then, at best, the app will not be fully synced up and the "cloud" version will be out of date. At worst, if the app is not carefully designed, data corruption may result.

iPhone OS 4 includes a service called Task Completion and this solves the problem cleanly. Tasks such as syncing and uploads that are kicked off while the app is running can continue to completion in the background even after the app is closed. This was one area that I was agonising over for my app and I'm really pleased that it is solved in OS 4.

Finally, one of my commenters, Yitz, posted an interesting link to his own blog with his views on another "iPhoneOS multi-tasking alternative". In fact the bulk of his post was arguably not about multi-tasking as such, but rather about allowing developers to write services that other apps could call into. In retrospect this is now somewhat provided for by the QuickLook framework.

So, in summary, Apple's multitasking features include everything I expected, everything I hoped for and everything I dared not dream of (or, indeed, blog about).

Now things get interesting.

Friday, Mar 5, 2010

The 80s called; they want their memory manager back!

Not too long ago I was presenting on the subject of iPhone development and Objective-C at the Stackoverflow conference. As I wrote at the time, the audience was predominantly .Net developers and the twitter backchat was filled with a whole range of reactions. Actually there were two themes to that reaction. One was to the syntax of Objective-C, which is certainly very different to most C-family languages. The other was to the fact that Objective-C for the iPhone is not garbage collected (although it is on the Mac these days). My favourite comment on twitter was the title of this post.[*]

Much has been said from both sides about whether the iPhone should support garbage collection or not and I won't go into the debate here, other than to say that I think there are valid reasons for it being that way - but that's not to say that it couldn't be made to work (it would be nice if it was at least optional).

A few pointers


As a grizzled C++ developer in past (and, as it happens, present) lives I'm not fazed by the sight of a raw pointer or the need to count references. That said I struggle to remember the last time I had a memory leak, a dangling pointer or a double-deletion. Is that because I'm an awesome C++ demigod who eats managed programmers for breakfast? No! Not just that! It's also because I use smart pointers.

Smart pointers make use of a feature of C++ that is disappointingly rare in modern programming languages (and often seen as incompatible with garbage collection): deterministic destruction. Put simply, when a value goes out of scope it is destroyed there and then. There is no waiting for a garbage collector to kick in at some undetermined, and indeterminate, time to clean up after you. So why does C++ have the delete keyword? Well, I said when a value goes out of scope. If you allocate an object on the heap, the value you have in scope is the pointer to the object. C++ also allows you to create objects on the stack, and these are destroyed at the end of the scope in which they were created (there is a third case, where an object is declared by value as a member of another object that lives on the heap - but I'll ignore that for simplicity). When that object is an instance of a class the developer has defined, they can supply a destructor that is run when the object is destroyed. Often this is used to delete memory owned by the object - but it could be used to clean up any resources the object is managing, such as file handles, database connections etc. Smart pointers manage objects on the heap. Depending on the type of smart pointer they may simply delete the object in their destructor - or they may offer shared ownership semantics by managing a reference count - so their destructors decrement the ref count and only delete when the count reaches zero.

I've crammed an overview of a rich and incredibly powerful language feature into a single paragraph above. I haven't even touched on how smart pointers use operator overloading to look like pointers. You can read more details elsewhere. My point is that, because of smart pointers - made possible by deterministic destruction (or, more generally, an idiom unfortunately known as RAII) - garbage collection is not really missed in C++.
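To make the idea concrete, here is a minimal sketch (the FileHandle class is invented purely for illustration - it isn't from any project mentioned in this post):

    // Minimal sketch of deterministic destruction plus a reference-counting
    // smart pointer. FileHandle is an invented example class.
    #include <cstdio>
    #include <memory>

    class FileHandle {
    public:
        explicit FileHandle(const char* path) : file_(std::fopen(path, "r")) {}
        ~FileHandle() { if (file_) std::fclose(file_); }   // clean-up runs deterministically
        FileHandle(const FileHandle&) = delete;             // only the smart pointer owns it
        FileHandle& operator=(const FileHandle&) = delete;
    private:
        std::FILE* file_;
    };

    int main() {
        // shared_ptr keeps a reference count: copies bump it, destructors drop it,
        // and the FileHandle is deleted the instant the count hits zero - no GC.
        std::shared_ptr<FileHandle> handle = std::make_shared<FileHandle>("data.txt");
        std::shared_ptr<FileHandle> alias = handle;          // count is now 2
        return 0;
    }   // both pointers go out of scope here; the count hits 0 and ~FileHandle() runs immediately

The destructor call is guaranteed at that closing brace - which is exactly the determinism described above.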

Using {

All of which is the long-winded setup to my story for this post. For the last couple of days I have been involved with tracking down an issue in a .Net app. I say "involved with" because there were three of us looking at this one problem. Three developers, with more than half a century of professional development experience between us, all on the critical path, tracking down this one problem. In .Net. The problem turned out to be with garbage collection. Actually there was more than one problem.

Is that a jagged array in your pocket?

Now this is not about knocking garbage collection. Or .Net. This is about how things are not always as they appear. In fact that's exactly what the first issue was. We have C++/CLI classes that are really just thin proxy wrappers for native C++ objects. The C++ objects are often multi-dimensional arrays and can get quite big. The problem was that the "payload" of these objects was effectively invisible to .Net. As far as it was concerned it was dealing with thousands of tiny objects. The result was that, when called from C#, it was happily creating more and more of these without garbage collecting the large number of now-unreferenced older objects!

Actually we had anticipated this state of affairs and had made sure that all these proxy objects were disposable. This meant that in C# we could wrap the usage in a using block and the objects would be disposed of at the end of the using scope. The problem with this is that there is no way to enforce the use of using. By contrast, in C++ if you implement clean-up code in a destructor it will always be called at the end of scope (ok, if you create the object on the heap there is no "end of scope" and you're back to square one - but I'd argue that you have to make more of an effort to do that, and there are also tricks you can use to detect it).

As it happens using is not really the right tool here anyway. The vast majority of these objects, whether boxed primitives (we have a Variant type) or arrays, are semantically just values. Using using with "value" types is clumsy and often impractical. What we would really like is a way to tell the GC that these proxies have a native payload - and have it take that into consideration when deciding when and what to collect. Of course we have just such a facility: GC.AddMemoryPressure() and GC.RemoveMemoryPressure(). These methods allow you to tell .Net that a particular managed object also refers to unmanaged memory of a particular size (in fact you can incrementally add and remove "pressure"). Sounds like just what we need.

Unfortunately there is more to it. First we need to know what "pressure" to apply. If the data is immutable and fully populated by the time the proxy sees it then that is one thing. But if data can be added and removed from both managed and native sides it becomes more tricky. To do it properly we'd have to add hooks deeper into the native array objects so that we know when to apply more or less pressure. Furthermore many of our arrays are really jagged arrays, implemented in terms of Variants containing Variant arrays of more Variants of Variant arrays (if that sounds appalling you probably haven't worked in a bank. Or maybe you have). How can we keep track of the memory being consumed by such structures? Well, it helps to keep in mind that we don't really need to know the exact number of bytes - it's enough to give the GC hints that it could bucket sizes into, say, orders of magnitude (I don't know exactly how the GC builds its collection strategies and I don't want to second-guess it; I'd imagine even this is more granularity than it needs most of the time, but it seems a reasonable level of accuracy to strive for). Fortunately I already have code that can walk these structures and come back with effective fixed array dimensions, so multiplying that by the size of a Variant should get us well into the ballpark we need.
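As a rough sketch of what such a walk might look like on the native side (this Variant is just a stand-in for the in-house type described above, so its layout is assumed), the estimate only needs to be good enough for order-of-magnitude bucketing:

    // Hedged sketch only: Variant here is a stand-in for the in-house type the
    // post describes - a scalar payload or a jagged array of further Variants.
    #include <cstddef>
    #include <vector>

    struct Variant {
        double scalar = 0.0;             // assumed scalar payload
        std::vector<Variant> children;   // empty for leaves, populated for arrays
    };

    // Walk the jagged structure and return an approximate byte count. Precision
    // doesn't matter much - the goal is an order-of-magnitude hint for the GC.
    std::size_t estimatedBytes(const Variant& v) {
        std::size_t total = sizeof(Variant);
        for (const Variant& child : v.children)
            total += estimatedBytes(child);
        return total;
    }

The managed proxy could then report that figure via GC.AddMemoryPressure() when it takes ownership of a structure, and hand the same figure back to GC.RemoveMemoryPressure() when it is disposed or finalized.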

[Photo: "Garbage Only" by Peter Kaminski - http://flic.kr/p/2A5db]

A weak solution

So that was one problem (with several sub-problems). The other was that a certain set of these objects were being tracked in a static Dictionary in the proxy infrastructure. Setting aside, for a moment, the horrors of globals, statics and (shudder) singletons, let's assume that there were valid reasons for needing to do this. The problem here was that the references in the dictionary were naturally keeping the objects alive beyond their useful lifetime! In a way this was a memory leak! Yes, you can get memory leaks in garbage-collected languages. Of course what we should have been doing (and now are doing) is holding weak references in the static dictionary. I'd guess, however, that many .Net developers are not even aware of WeakReference. Why should they be? Managed Code is supposed to, well, "manage" your "code", isn't it? Anyway, not surprisingly, switching to weak references here solved that problem (and somewhat more easily than the other one).

I'll say again - I'm not having a dig at .Net developers - or anyone else who primarily targets "managed" languages these days. I've used them for years myself. My point is that there is a lot more to it and you don't have to go too far to hit the pathological cases. And when you do hit them things get tricky pretty fast. In addition to the fundamental problems discussed above I spent a lot of time profiling and experimenting with different frequencies and placements of calls to GC.Collect() and GC.WaitForPendingFinalizers(). None of these things were necessary in the pure C++ code (admittedly they are not often necessary in pure C# code either) but when they are, it can be very confusing if you're not prepared.

}

phew!

Now I started out talking about Objective-C but have ended up talking mostly about C++ and C# (although the lessons have been more general). To bring us back to Objective-C, though: where does it fit into all this?

Clearly destructive

Well, on the iPhone Objective-C lacks garbage collection, as we already noted. It also lacks deterministic destruction (or destructors as a language concept at all). So does that leave us in the Wild West of totally manual C-style memory management? Not quite. Probably the biggest issue with memory management in C is not that you have to do it manually; personally I think a bigger issue is the lack of universally agreed conventions for ownership. If a function allocates some memory, who owns that memory? The caller? Maybe - except when they don't! Some libraries, and most frameworks, establish their own conventions for these things - which is fine. But they're not all the same conventions. So not only is it not always immediately obvious whether you are supposed to clean up some memory - the confusion also makes it less likely that you will remember to at all (because, sub-consciously, you're putting off the need to work it out for as long as possible).
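A contrived example (invented purely for illustration, not from any real library) shows how little the signature tells you:

    // Contrived example: the ownership rule lives only in the documentation, if anywhere.
    #include <cstdlib>
    #include <cstring>

    // Returns a freshly malloc'd greeting... or does it? Nothing in the signature says
    // whether the caller or the library is responsible for freeing the buffer.
    char* make_greeting(const char* name) {
        char* buffer = static_cast<char*>(std::malloc(std::strlen(name) + 8));  // "Hello, " + name + NUL
        std::strcpy(buffer, "Hello, ");
        std::strcat(buffer, name);
        return buffer;
    }

    int main() {
        char* greeting = make_greeting("world");
        // The caller has to *know* this free() is required; another library, with a
        // different convention, might cache and own the buffer itself.
        std::free(greeting);
        return 0;
    }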

Objective-C, at least in Apple's world, doesn't have this problem. There is a very simple, clear and concise set of rules for ownership that are universally adopted:

If a function or method begins with alloc or new, or has the word copy in it, or you explicitly call retain - the caller owns it. Otherwise they don't.

That's it! There are some extra wrinkles, mostly to do with autorelease, but they still follow those rules.

Other than some circular references I had once, and quickly resolved, I don't recall having any problems with memory management in Objective-C for a long time either.

Epilogue

There. I've discussed memory management in four languages in a single post - and not mentioned the iPad once! As it happens you can use all four languages to write for the iPhone or iPad. So take your pick of memory management trade-offs.


[*] I tried to track down who tweeted "The 80s called; they want their memory manager back" during Stackoverflow Devdays. As it's difficult to search Twitter very far into the past I couldn't find the original tweet - but thanks to a retweet by Nick Sertis it appears that it was J. Mark Pim. Thanks J.

Tuesday, Mar 2, 2010

Interlude

I don't usually use this blog to get meta about, well, this blog. But I just wanted to take a moment to reset some expectations. This is probably not of great interest to most readers so please feel free to skip it. I promise I won't do it very often.

I've had a series of posts recently about the iPad, and historically one or two others that have touched on products rather than code. Primarily this blog has been about software development. The target audience is software developers. I will continue to write about software development. However I also occasionally intersperse this with commentary on technology that I feel is relevant to software development as a scene, if not as a discipline. I feel that the iPad is going to have a big impact on the future of software - maybe not in total, but in significant part. Not everyone will agree with that, but what are blogs for if not for putting opinions out there to be dissected (don't answer that)?

It's also interesting that, while my iPad posts gave it a run for its money for a while, my highest ranking post to date remains the one about setting up Time Machine backups to network drives. I'm not sure if this is a good sign or not. But one thing is sure. I'm not going to switch my focus to writing about products just to get the page hits. That's not why I write this blog.

Well that's enough indulgence of the forbidden fruit of meta for now. We will now resume our usual programming (pun very much intended).

Sunday, Jan 31, 2010

Why the iPad may never need multi-tasking

[Image: Activity Monitor]

Like the iPhone and iPod touch, the iPad does not allow multi-tasking of third-party apps. That is, if you are using one app and want to start another you must close the first app. I wouldn't rule out that changing in a future update, but I'm more inclined to think we won't see it even on the iPad - at least not in a general form. Why?

The way I see it multi-tasking is used for three purposes:

  1. Background apps, such as music players. The iPod app does run in the background, of course, but there is certainly a case for third-party apps such as Spotify or Last.fm to have the same ability.
  2. Notifications. This is at least partly addressed by the Push Notification service that Apple now provides. More on that in a moment.
  3. Fast switching between apps

I'd say that (3) is probably the number one reason we usually have so many apps open at once. After all we can usually only interact with one at a time. As the time it takes to (save and) quit one app and start (and restore) a new one gets closer to zero our need to have the apps open just so we can switch to them diminishes or disappears. From what I have read and heard the iPad gets us pretty close to the threshold where this happens. By some reports it crosses that threshold. I'll reserve final judgement until I see it with my own hands (mixed metaphor intended).

Going back to reason (2), let me explain what I mean by being only partly addressed so far. Imagine a calendar-type app - such as Event Horizon. You set up events and appointments - which may be days, weeks or months later. Ideally the app would be able to alert you of upcoming events in much the same way that Apple's own calendar app can already do. One way to achieve that now would be to use the push notification service. This would work but has two problems. First, it's a bit heavyweight. Your event data would need to be synced up to a cloud service which could then decide when to push a notification back to the device. Cloud services certainly have their place, but bringing them in solely to provide alerts seems out of proportion. Secondly, it means you will not receive the notification if you don't happen to be connected at that time. You will get it when you do get online, but that may be too late! Furthermore you need to be connected for the alert to sync up to the cloud in the first place. These all seem like unjustifiable limitations when the data is already held locally on your device. Therefore I think it reasonable that some sort of Scheduled Notification API should be made available. This could work just like the Push Notification service, allowing badges, messages and alert tones to be "pushed" to the user, with the ability for the user to open the associated app straight away, or defer that until later. The difference would be that the notification would be posted by the app itself for delivery at a specific time in the future and would never leave the device. I don't see any technical challenges to providing such an API.

If a scheduled notification API is made available I think the only significant remaining need for background tasks would be for things like music players. If Apple do allow background tasks in the future I think it will be limited to these types of tasks, and very strictly policed in the App Store approval process. I'll say again: I don't believe Apple will ever allow general-purpose multi-tasking in App Store apps - and I don't believe they need to, with the concessions just described. Note that I am ruling out other scenarios such as video encoders or bit-torrent clients, which I don't think are appropriate apps for running on these devices in the first place. If you can think of anything genuinely useful for a significant number of people that wouldn't be covered by such provisions I'd be interested to know.


Saturday, Jan 30, 2010

What *I'll* be using the iPad for

My last post was a bit of a teaser for the app that I am writing for the iPad. I don't apologise for that. While I did want to pique your curiosity my key point was that there are developers, like me, already out there with plans for iPad applications that simply wouldn't be feasible for any other platform. Just to clarify that: they would be possible for sure - I was planning first a desktop version, then later an iPhone version, of my app - but the experience would always feel like a compromise. I still plan on desktop, iPhone, and even web clients - but they will effectively be auxiliary clients in much the same way that many iPhone apps today are auxiliary to their desktop counterparts.

Does this mean that, before such apps arrive, the iPad is just an empty slab? Perhaps useable only for the very old, the very young, or the technically illiterate? Not at all. The key reason that there has been such a backlash against the iPad is that there was so much hype about it before the announcement. It wasn't just that everyone had idyllic expectations of what it would be for them - although that certainly didn't help. Just the fact that there was hype meant that everyone was watching this event as being a turning point in the history of computing. The trouble with historical turning points is that they are often only recognisable as such when you have the historical perspective. Put it this way: if the launch of the iPad really does turn out to be the moment of a revolution in computing, when looking back on this in years to come, would it seem reasonable to think that its lack of a front-facing camera or support for Flash would be a factor in that status?

I think it's quite likely that some sort of camera facility will arrive at some point - either by way of a future hardware upgrade, or as a peripheral. There are some practical challenges there, but if they can be overcome I suspect they will. As for Flash, that's another story. I'm with the camp that hopes it doesn't get it. There is plenty of good discussion of why that should be the case around already. Whether you agree with that or not, and whether Apple does eventually allow Flash or not, I think is irrelevant to whether this device will change the way we think about computing in the future.

Some great articles and blog posts have been written already about why the iPad really is a revolutionary new device. I think I can summarise them in one sentence:

The iPad takes the tasks we use personal computers and the internet for the most and packages them into a focused, polished device with an interaction method that gets out of the way.

To add a couple more sentences to that: It does this by removing the need to know about mice and windows and multi-tasking and file systems and cables and hard drives and memory and even, to some extent, physical keyboards! Not that those things are gone forever - just that, for most tasks - tasks that even us technically-savvy power users perform much of the time - they are just a distraction! They are, or were, the means not the end. The iPad gets you to what you were trying to achieve more directly than any device before. If this is not what you wanted that's fine. That doesn't mean this device won't change the way many - perhaps most - people interact with and think about computers forever.

But that hasn't answered the question of what we can use the iPad for right away. Well, even assuming there are no additional game-changing apps available on launch (remember that's still 2-3 months away) there are still some very worthwhile apps bundled with the device, or available on the App Store (including Apple's own iWork suite). While none of these are conceptually revolutionary, the ways you interact with them, and when you can use them, may well be. Here are some of the things I can see myself doing that currently would be either impossible or a different quality of experience:

  • Reading on the train. The laptop is not always practical, and the iPhone is too small for sustained reading.
  • Reading/ watching video at the gym
  • Browsing maps. Especially when planning future travel or reflecting on earlier trips. It looks like a much better experience than a laptop.
  • Picture frame in my office. I was planning on getting one anyway - this will save me the cost and looks better than most, if not all, dedicated devices anyway.
  • Task management. It's been great having GTD apps like Things with me all the time on my iPhone, but they still suck for data entry or big re-orgs - so I usually have to wait until I have access to my laptop - by which time I often have forgotten (which was why I was using them in the first place!). I'm confident that at least one such app will be available at, or shortly after, launch. If not, the existing iPhone versions, in pixel-doubled mode, will still be more usable just by dint of a larger keyboard.
  • I'm hoping that the Remote app for controlling iTunes will soon be updated with an iPad UI, as that will be an awesome way to control my household media.
  • Showing photos. When family visit it's been great that we can browse photos on the TV or a laptop - but this will be a far more natural and intimate experience.
  • Playing vConqr. No seriously - it's going to be great! :-)
  • Of course casual browsing, email, contacts and calendar when I'm away from my laptop - or even instead of using the laptop for such things. That way I keep my workspace a little tidier. Obviously I'll still do serious web-browsing on the laptop (if only because of multiple tabs and copy-and-paste into my other desktop apps).

I should point out that I consider myself very much a power-user of my laptop/desktop computers. I have my laptop (a MacBook Pro) connected to a 30" display and still use Spaces to give me more workspace area! As a developer and technologist this will always be my primary computing experience, but I still see myself using the iPad for more and more tasks too - and not just when I'm on the move. For many people I can see it being their primary, and at some point only, computing device (for now, at least, it seems the iPad still needs to sync against a general-purpose computer, but the need for that seems to be decreasing).

I started to write about multi-tasking here too, but realised I had a lot of material that diverges from this post, so I've moved it into its own article, which I'll post a bit later.

