Metro: “Typography trumps chrome” – debunked.

Metro is fast becoming an unclear, messy, craptacular degradation of modern interface design. The current execution out there is getting out of control, turning what originally started as Microsoft’s plagiarized edition of Dieter Rams’ “Ten Principles of Good Design” into what we have before us today.

I am actually OK with that; if I ever looked back at the first year of my designs in the 90s, I’d cringe at the sight of all that Alien Skin bevel, glow and fire plugin-driven pixel vomit.

The part I’m a little nervous about, though, is how fast the Microsofties of the world have somehow collectively agreed that text is in and chrome is out – as if somehow the science is wrong, and what we really need to do is get back to the basics of the ASCII-inspired typographic design of yesteryear.

Typography is OK, in short bursts.

“Spatial visualization” is the key term you need to Google a bit more. Let me save you a little search confusion and explain what I mean.

Humans are not uniform. To assume that within HCI we are all equal in cognitive ability is dangerous; it is quite the opposite, and to be fair, our understanding of the mental conditions we often suffer from is still in its medical infancy – we have so much more to learn about the genetic mutation and variation that is ongoing.

The reality is that humans differ in how we decipher the patterns of our day-to-day lives. We aren’t getting smarter; we’re just getting faster at developing habitual comprehension of the patterns we create.

Assume, for example, that I grabbed someone from the 1960s, sat them in a room and handed them a mobile device. I then asked them to “turn it on” and measured how long it took them to navigate the device and switch it on.

You would most likely see a lot of accidental learning and trial and error, but eventually they’d figure it out, and that information would be recorded in their brain for two reasons. Firstly, pressure does that to humans: under duress we record data with surprising accuracy (which is why bank robbers often discover their disguises aren’t as effective as they thought). Secondly, it’s like discovering fire for the first time – the novelty of the event gives it meaning: “this futuristic device!”

What is my point? Firstly, brain capacity has not increased; our ability to think and react visually is, I’d argue, the primary driver of our ability to decode what’s in front of us. (Case in point: the use of H1 tags breaks this post up and helps you index and comprehend what I’ve written.)

How so?

Research in the early 80s found that we are more likely to detect misspelled words than correctly spelled ones. The research goes on to suggest the reason: we initially obtain shape information about a word via peripheral vision, and only later narrow in on the word and make a true/false decision once we’ve slowed our reading down to a fixated position.

It doesn’t stop there. By now you, the reader, have probably fixated on a few mistakes in my paragraph structure or word usage as you’ve read this, yet you’ve still persisted in comprehending the information – despite the flaws.

What’s important about this packet of information is that it hints at what I’m stating: reliance on typography is great, but for initial bursts of information only. Should the density of the data in front of you increase, your ability to decode and decipher it (scan / proof-read) becomes a balancing act between peripheral vision and fixated selection.

My point is: your CPU is maxed out.

AS I AM IMPLYING, THE HUMAN BEING IS NOW JUGGLING THE BASICS OF EXTRACTING SPATIAL CUES FROM TEXT, IMAGERY AND TASK MATCHING – ALL CRAMMED INSIDE A SMALL DEVICE. THE PROBLEM, HOWEVER, WON’T STOP THERE; IT GOES ON INTO A DEEPER CYCLE OF STUPIDITY.
INSIDE METRO, THE BALANCE BETWEEN UPPER AND LOWER CASE FLUCTUATES. THAT IS TO SAY, AT TIMES IT WILL BE PURE UPPERCASE, MIXED, OR LOWERCASE.

Did you also notice what I just did? I put all that text in uppercase, and research has gone on to suggest that when we go full uppercase, our reading speed decreases as more and more words are added. That is to say, inside Metro we now use a mixture of both – and somehow this is a good thing?

Apple has over-influenced Microsoft.

I’m all for new design patterns in pixel balancing, and I’m definitely still hanging in there on Metro, but what really annoys me the most is that the entire concept isn’t about breaking away based on scientific data centered on the average human’s ability to react to computer interfaces.

It is primarily a competitive reaction to Apple. Had Apple not existed, I highly doubt we would be having this kind of discussion, and it would probably be a glyph/charm/icon-rich, visual-thinking-friendly environment.

Instead, what we are probably doing is grabbing what appears to be a great disruption to the design status quo and declaring it “easier”, but reality kicks in fast when you go beyond the initial short burst of information or screen composition into denser territory – even Microsoft is hard pressed to come up with a Metro-inspired edition of Office.

Metro Reality Check – Typography style.

The reality is that the current execution of Metro on Windows Phone 7 isn’t built for, or ready for, dense information, and I would argue the rationale that typography replaces chrome is merely a case of being the opposite of a typical iPhone-like experience – users are more in love with the unique anti-pattern than they are with the reality of what is actually happening.

Using typography as your go-to spatial visualization pattern simply flies in the face of what we actually do know from the small packets of research we have on HCI.

Furthermore, if you think about it, the iPhone itself, when it first came out, was more of a mainstream disruption to the way we interpret UI on a mobile device; icons, for example, took on more of a candy-like quality and the chrome itself became themed.

It was almost as if Disney had designed the user interface as their digital mobile theme park, yet here is the thing – it works (notice how when Metro adds pictures to the background it seems to fit? There’s a reason for that).

Chrome isn’t a bad thing; it taps into what we are hard-wired to do when we process information: we think visually (with a minority being the exception).

The Egyptians, Asian cultures and Aboriginal peoples wrote their history on walls and paper using visual glyphs and symbols, not typography. That is the important principle to grab onto: historically speaking, we have always shown evidence of gravitating towards a pictorial view of the world and away from complex textual patterns (that’s why data visualization works better than text-based reports).

We ignore this basic principle because our technology environment has gotten more advanced, but we do not have extra brainpower as a human race; our genome has not mutated or evolved! We have just gotten better at collectively deciphering the patterns and, in turn, have built up better habitual usage of them.

Software today has a lot of bad UI out there – I mean terrible experiences – yet we are still able to use and navigate it.

Metro is more marketing / anti-competitive reaction than it is the righteous path to HCI design – never forget that part. Metro’s tagline of being “digitally authentic” is probably one of Dieter Rams’ principles being mutated and broken at the same time.

Good design is honest.
It does not make a product more innovative, powerful, or valuable than it really is. It does not attempt to manipulate the consumer with promises that cannot be kept.

I should point out that these ten principles are what have inspired Apple and other brands in the industrial design space. Food for thought.

Lastly, one more thing: what if your audience were 40% autistic/dyslexic – how would your UI differ from the current designs you have before you?