Now that we have the iPad Mini, web designers waste no time in wanting to distinguish between it and the iPad 2. Tough luck.
Yesterday Max Firtman explained in detail why that is not possible. Briefly: no JavaScript or CSS property, variable, or media query differs between the iPad 2 and the iPad Mini. Both are 1024x768, neither has a retina screen, etc.
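To make that concrete (my own illustration, not Max's), here is the sort of media query people reach for. Both devices report exactly the same values, so it can never tell them apart:

    /* The iPad 2 and the iPad Mini both report device-width: 768px,
       device-height: 1024px and a device pixel ratio of 1, so this
       query matches (or fails) on both of them identically. */
    @media only screen
           and (device-width: 768px)
           and (-webkit-device-pixel-ratio: 1) {
        /* styles meant for "the big iPad" end up on the Mini, too */
    }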
Incidentally, it appears that native developers can distinguish between the two, making the playing field for the grand match between native and web unleveled (disleveled?). If you wish you can blame Apple.
That’s not what I want to write about, though. Instead, it’s about web developers’ expectations and physical units in the W3C spec.
I saw several tweets and posts that expected media queries, particularly device-width and resolution, to be the answer to that problem. If you use cm or in as your units, the iPad Mini would give different readings than the iPad 2, wouldn't it? After all, the Mini is smaller than the 2, and since it crams the same number of pixels into that smaller space, its resolution would be higher, wouldn't it?
Actually, no. Forget for a moment that WebKit-based browsers do not support the resolution media query yet. Even if they did, it wouldn’t help.
The fundamental problem is that CSS units such as cm or in have nothing to do with the physical world. Three years ago I did some solid testing, and with the exception of one Firefox version (I can't remember which) all browsers define 1 inch as 96px (CSS pixels, obviously). And the sole Firefox version that didn't was replaced by one that did. Centimeters and the other physical units, as well as resolution in dpi and dpcm, are calculated from this base.
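You can see the consequence in any stylesheet (a quick sketch of mine, not part of those original tests): every one of the following declarations describes exactly the same length.

    /* Physical units are anchored to the CSS pixel at 96px per inch,
       so these three widths are identical in practice: */
    div { width: 1in; }    /* computes to 96px */
    div { width: 2.54cm; } /* computes to 96px */
    div { width: 72pt; }   /* computes to 96px */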
So if you say min-width: 3in you're really saying min-width: 288px. Nothing more, nothing less. And the dpi unit should really be dp-96-CSS-pixels.
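To spell out what that means for the media query (again my example): resolution counts device pixels per CSS inch, that is, per 96 CSS pixels, so both iPads would report the same value even if WebKit supported it.

    /* Both iPads have one device pixel per CSS pixel, so both would
       report 96dpi, no matter how dense the Mini's screen really is. */
    @media (min-resolution: 130dpi) {
        /* would never match on either iPad */
    }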
I used to be quite annoyed at this state of affairs. On mobile especially, it would be great to say min-width: 1cm; min-height: 0.8cm and be certain that the element retained that real-world, physical size under all circumstances, so that it would remain easily tappable.
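For illustration (a sketch; the nav a selector is just an assumed tap target), this is what I'd like to be able to write, with what the values actually mean today in the comments:

    nav a {
        /* what I'd want: a real centimeter at any zoom level;
           what it means today: a fixed number of CSS pixels */
        min-width: 1cm;    /* 96px / 2.54, about 37.8 CSS pixels */
        min-height: 0.8cm; /* about 30.2 CSS pixels */
    }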
That still sounds great in theory, but I spent some time thinking about it, and it appears to me that performance (and battery life) are in the way.
Consider.
Suppose I have an element with min-width: 1cm; min-height: 0.8cm, and these units mean real, physical centimeters. Now I zoom out, and the browser has to recalculate the width and height of the element based on the zoom level in order to keep it tappable. It's quite likely that I do not have just one such element, but an entire navigation menu full of them. All of them have to be recalculated.
If I zoom out far enough it's likely that they'll interfere with the rest of the layout, squeezing it to the side, or overlapping, or doing something else that doesn't contribute to good graphic design. The browsers would be expected to solve this problem for us, and (surprise!) they'd do it in slightly different ways, leading to premature hair loss, compatibility tables, and all the rest.
Also, it would take quite a bit of raw horsepower to actually perform the calculations. Nowadays the hardware is up to that, but it might stretch its performance to the limits if I zoomed in and out often enough. Besides, it’d eat into the battery life since every recalculation costs some juice.
In other words, it may be that using physical units would be too costly in terms of performance and battery life. And those terms are by far the most important ones in the mobile world.
So, although I’d still love to have cm mean real-life centimeters, I don’t think it’s ever going to happen. Pity, but such is life.