5 February 2010: Added link to explanation why we need touch events.
3 February 2010: corrected mistake (misunderstood iPhone model).
This paper was originally published on 2 February 2010. Original research ordered and paid for by Vodafone.
When a user touches the screen of a touchscreen phone, sufficient events should fire so that web developers know what’s going on and can decide on which actions to take.
Events can be divided into three groups: the touch events, the interface events, and the legacy mouse events.
All mobile browsers MUST support the touch events. No excuses allowed. Any mobile browser that does not support them by the end of 2010 will be out of the race.
Do we really need separate touch events? Yes, as far as I’m concerned. Touch is a new interaction mode. It requires new events.
Browsers must largely copy the iPhone touch event model. There’s no reason not to: it’s well thought-out, and it is already supported by market leaders iPhone and Android. See Touching and Gesturing on the iPhone for more information on how the system works.
These events are vital for common user interface elements such as drag-and-drop. On touchscreens dragging an element to another position and having the interface react to that action is a very natural way of interacting with a web application.
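By way of illustration, here is a minimal sketch of dragging an absolutely positioned element with the touch events, assuming the iPhone-style model in which every entry in targetTouches carries pageX and pageY coordinates (the draggable id is made up):

    var draggable = document.getElementById('draggable'); // hypothetical element
    var startX, startY, origLeft, origTop;

    draggable.addEventListener('touchstart', function (e) {
        var touch = e.targetTouches[0];
        startX = touch.pageX;
        startY = touch.pageY;
        origLeft = draggable.offsetLeft;
        origTop = draggable.offsetTop;
        e.preventDefault(); // keep the page itself from scrolling
    }, false);

    draggable.addEventListener('touchmove', function (e) {
        var touch = e.targetTouches[0];
        draggable.style.left = (origLeft + touch.pageX - startX) + 'px';
        draggable.style.top = (origTop + touch.pageY - startY) + 'px';
        e.preventDefault();
    }, false);
    // on touchend the element simply stays where the finger released it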
If operating systems support multitouch, browsers must also support the gesturestart, gesturechange, and gestureend events.
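A minimal pinch-zoom sketch, assuming the iPhone gesture events, where the event object carries a scale property (the photo element is made up):

    var photo = document.getElementById('photo'); // hypothetical element
    var currentScale = 1;

    photo.addEventListener('gesturechange', function (e) {
        // e.scale is 1 when the gesture starts and grows or shrinks as the fingers move
        photo.style.webkitTransform = 'scale(' + (currentScale * e.scale) + ')';
    }, false);

    photo.addEventListener('gestureend', function (e) {
        currentScale = currentScale * e.scale; // remember the scale for the next gesture
    }, false);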
Browsers must support the following interface events. I do not treat the resize event because I need to do more research first.
Browsers MUST fire a scroll event when the user scrolls the browser. It’s as simple as that.
Browsers often fire this event during a resize, and that’s fine. Scrolling always occurs during a resize because the scrollbars get larger or smaller even if nothing else happens. That fact should give rise to scroll events.
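For completeness, a trivial sketch of reacting to the scroll event and reading the viewport position:

    window.addEventListener('scroll', function () {
        // pageXOffset and pageYOffset give the position of the viewport on the page
        var x = window.pageXOffset;
        var y = window.pageYOffset;
        // reposition floating interface elements, lazy-load content, and so on
    }, false);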
Browsers MUST fire a zoom event whenever the user zooms in by whatever method, pinch-zooming, double-tap, sliders, hardware buttons, or as-yet uninvented zooming methods.
This zoom event should have some properties that give more information about the zooming. It should probably include the zoom scale, although I don’t think that information is very important. Much more important is the effective width and height of the screen and the position of the viewport relative to the browser canvas. All this requires further research.
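Just to make the idea concrete, here is a purely hypothetical sketch of what a zoom event handler might look like; neither the event nor any of these property names exist in any browser:

    // Purely hypothetical: no browser fires a zoom event, and these property
    // names are invented for the sake of the example.
    window.addEventListener('zoom', function (e) {
        var width = e.viewportWidth;   // effective width of the screen in CSS pixels
        var height = e.viewportHeight; // effective height
        var left = e.viewportLeft;     // position of the viewport on the browser canvas
        var top = e.viewportTop;
        // a script could now resize a fixed toolbar or redraw a map, for instance
    }, false);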
Currently some browsers fire the resize event when the user zooms, but other browsers don’t, and besides, the resize event may have another purpose altogether on mobile. I have to do some more research in this area.
When the user briefly touches a link or other active element a click event should fire. All browsers do this. The problem lies in the proper implementation of the touchclick action.
Definition: If a touchstart and a touchend action occur on roughly the same coordinates a touchclick action takes place.
The trick lies in the “roughly.”
The amount of leeway a browser gives the user for moving his finger a little bit during a touchclick action is the prime measure of its aptness for the touchscreen environment. Users must have some margin for error here, or the interface just won’t work.
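A rough sketch of the definition above, assuming the iPhone-style touch events; the 20-pixel leeway is an arbitrary number, and picking the right one is exactly the judgment call described above (element stands for the element being touched):

    var LEEWAY = 20; // invented number; finding the right value is the hard part
    var startX, startY;

    element.addEventListener('touchstart', function (e) {
        startX = e.targetTouches[0].pageX;
        startY = e.targetTouches[0].pageY;
    }, false);

    element.addEventListener('touchend', function (e) {
        var touch = e.changedTouches[0]; // the finger that was just lifted
        if (Math.abs(touch.pageX - startX) <= LEEWAY &&
                Math.abs(touch.pageY - startY) <= LEEWAY) {
            // roughly the same coordinates: treat this as a touchclick
        }
    }, false);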
The problem is that a distinction between click and scroll is usually made at the OS level, and not at the browser level.
The browsers have already come up with a model for dealing with the legacy mouse events. Eventually, in theory, support for them would have to be entirely phased out because touch phones just don’t use a mouse, but in practice that’s not possible because millions of sites around the world depend on the mouse events. Wrongly, but there you go.
I propose this set of rules, which mostly follows established practice:
:hover styles are applied when either a touchstart or a touchenter action takes place on the element, and removed when either a touchleave or a touchend action takes place. Currently some browsers don’t implement this scheme of things quite correctly. See the compatibility tables for more information.
The :hover rule is new. I feel that it makes sense, although no browser currently implements it.
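Web developers who want this behaviour today could approximate it themselves; a minimal sketch that toggles a made-up hover class instead of real :hover styles:

    var item = document.getElementById('menuItem'); // hypothetical element

    item.addEventListener('touchstart', function () {
        item.className += ' hover'; // apply the .hover styles
    }, false);

    item.addEventListener('touchend', function () {
        item.className = item.className.replace(' hover', ''); // and remove them again
    }, false);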
I feel the click event should fire before the legacy events. Interface events such as click are more important than legacy events meant for another input type entirely. Currently all browsers fire the click event last.
During my research I had some ideas for extending the touch event model. These extensions are in no way required (essentially I thought them up on the spur of the moment), but it might be interesting to have at least one implementation of every event so we can play with it and determine whether they’re useful to web developers.
Would it be a good idea to express the position of a finger not as a one-pixel point, but as an area? After all, a touch will always cover several pixels, and it might make sense to expose this information to JavaScript.
For instance, you could use it to determine which elements are overlapped by the touch, and then decide which element the user wanted to click on by measuring which one overlaps more with the touch event.
This trick would be most important on the older and crappier systems, because it’s their interfaces that are in most need of such patching.
If the touch area is exposed, should it be as a square, with top, left, width and height? Or should it be as a circle, with midpoint coordinates and radius? Right now I’m just not sure.
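To make the idea concrete, here is a purely hypothetical sketch that assumes the square representation and measures how much of a touch overlaps a given element:

    // Purely hypothetical: assumes a touch exposes a square area as
    // touch.areaLeft, touch.areaTop, touch.areaWidth and touch.areaHeight,
    // all in page coordinates. No browser does this.
    function overlapWith(touch, element) {
        var rect = element.getBoundingClientRect();
        var left = rect.left + window.pageXOffset;
        var right = rect.right + window.pageXOffset;
        var top = rect.top + window.pageYOffset;
        var bottom = rect.bottom + window.pageYOffset;
        var xOverlap = Math.max(0, Math.min(touch.areaLeft + touch.areaWidth, right) - Math.max(touch.areaLeft, left));
        var yOverlap = Math.max(0, Math.min(touch.areaTop + touch.areaHeight, bottom) - Math.max(touch.areaTop, top));
        return xOverlap * yOverlap; // overlapping area in square pixels
    }
    // The candidate element with the largest return value is the one the
    // user most probably meant to touch.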
Would touchenter and touchleave events make sense? Essentially they’d be touch-optimised copies of the mouseenter and mouseleave events (and not mouseover and mouseout).
I realise that this might change the current situation in which the touch events retain their original target even when the touch moves outside the target element. I think that might not be such a bad idea; the user starting a touch on an element and the user moving a touch into an element are different actions.
These events should not bubble up! Bubbling is what makes mouseover and mouseout unusable, and we should not port this design flaw to a whole new platform. Instead, we’d have to follow Microsoft’s lead and make the touchenter and touchleave events fire only when the touch enters or leaves the element they’re defined on.
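As a sketch of what a script could do today, a library might approximate these events from touchmove with document.elementFromPoint; the fireTouchEnter and fireTouchLeave helpers are made up and stand for whatever custom events such a library would dispatch:

    var lastElement = null;

    document.addEventListener('touchmove', function (e) {
        var touch = e.targetTouches[0];
        // elementFromPoint wants viewport coordinates, so use clientX/clientY
        var el = document.elementFromPoint(touch.clientX, touch.clientY);
        if (el !== lastElement) {
            if (lastElement) fireTouchLeave(lastElement); // hypothetical helper
            if (el) fireTouchEnter(el);                   // hypothetical helper
            lastElement = el;
        }
    }, false);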
Finally, maybe a touchhold event would make sense. It would fire when the user touches an element and keeps his finger there for a while. In some browsers context menus pop up, while other browsers do nothing. It might be interesting for web developers to open menus of their own, since a touchhold is the closest we can come to a right-click.
Who would determine what constitutes a touchhold? The browser would obviously have to have some rule in place, such as “when a touchstart is not followed by any other touch event for at least a second,” but web developers would inevitably start to complain about the too-long or too-short touchhold timeout on platform X. Would it make sense to give them the chance to actually set the timeout?
Of all suggested touch extensions the touchhold event is by far the easiest to mock up in JavaScript. The logic I sketched can be created programmatically quite easily. So this event might be more interesting to library makers than to browser vendors.
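For instance, a minimal sketch along the lines of the rule quoted above, with a hard-coded one-second timeout (element stands for whatever element should receive the touchhold):

    var TIMEOUT = 1000; // one second, as in the rule above; a library could make this configurable
    var timer;

    element.addEventListener('touchstart', function () {
        // start waiting; if nothing else happens within a second, it's a touchhold
        timer = setTimeout(holdHandler, TIMEOUT);
    }, false);

    element.addEventListener('touchmove', cancel, false);
    element.addEventListener('touchend', cancel, false);

    function cancel() {
        clearTimeout(timer); // any further touch event cancels the touchhold
    }

    function holdHandler() {
        // the finger stayed put for a second: open a custom menu, for instance
    }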