Metro: Typography trumps chrome – debunked.

Metro is fast becoming an unclear, messy, craptacular corruption of modern interface design. The current execution out in the wild is getting out of control, turning what originally started as Microsoft's plagiarized edition of Dieter Rams' “Ten Principles of Good Design” into what we have before us today.

I am actually OK with that; if I ever looked back on my first year of designs in the 90s, I'd cringe at the sight of lots of Alien Skin bevel, glow and fire plugin-driven pixel vomit.

The part I'm a little nervous about, though, is how fast the Microsofties of the world have somehow collectively agreed that text is in and chrome is out – as if science is wrong, and what we really need to do is get back to the basics of the ASCII-inspired typography designs of yesteryear.

Typography is ok, in short bursts.

Spatial visualization is the key term you need to Google a bit more. Let me save you a little Google confusion and explain what I mean.

Humans are not uniform. To assume inside HCI that we are all equal in our IQ levels is dangerous; it is quite the opposite, and to be fair, the mental conditions we often suffer from are still in the infancy of medicine – we have so much more to learn about the genetic deformations/mutations that are ongoing.

The reality is that each of us takes a different approach to deciphering the patterns of our day-to-day lives. We aren't getting smarter; we're just getting faster at developing habitual comprehension of the patterns we create.

Let us assume, for example, that I grabbed someone from the 1960s, sat him or her in a room and handed them a mobile device. I then asked them to “turn it on” and measured the time it took them to navigate the device and switch it on.

You would most likely find a lot of accidental learning and trial and error, but eventually they'd figure it out, and that information would be recorded in their brain for two reasons. Firstly, pressure does that to humans; we record surprisingly accurate data when under duress (thus bank robbers often discover their disguises aren't as effective as once thought). Secondly, it's like discovering fire for the first time – the event gave it meaning: “this futuristic device!!”

What is my point? Firstly, brain capacity has not increased; our ability to think and react visually is, I'd argue, the primary driver of our ability to decode what's in front of us. (Case in point: the usage of H1 tags breaks up the indexing and comprehension of what I've written.)

How so?

Research in the early 80s found that we are more likely to detect misspelled words than correctly spelled ones. The research goes on to suggest that the reason is that we initially obtain shape information about a word via peripheral vision (we later narrow in on the word and make a true/false decision after we've slowed our reading down to a fixated position).

It doesn't stop there. By now you, the reader, have probably fixated on a few mistakes in my paragraph structure or word usage as you've read this, yet you've still persisted in comprehending the information – despite the flaws.

What's important about this packet of information is that it hints at what I'm stating: a reliance on typography is great, but for initial bursts of information only. Should the density of data in front of you increase, your ability to decode and decipher (scan / proofread) becomes more a case of balancing peripheral vision and fixated selections.

Your CPU is maxed out is my point.

AS I AM IMPLYING, THE HUMAN BEING IS NOW JUGGLING THE BASICS WHILE GETTING SPATIAL CUES FROM TEXT, IMAGERY AND TASK MATCHING – ALL CRAMMED INSIDE A SMALL DEVICE. THE PROBLEM, HOWEVER, WON'T STOP THERE; IT GOES ON INTO A DEEPER CYCLE OF STUPIDITY.
INSIDE METRO THE BALANCE BETWEEN UPPER AND LOWER CASE FLUCTUATES; THAT IS TO SAY, AT TIMES IT WILL BE PURE UPPERCASE, MIXED OR LOWERCASE.

Did you also notice what I just did? I put all that text in uppercase, and research has gone on to suggest that when we go all-uppercase, our reading speed decreases as more and more words are added. That is to say, inside Metro we now use a mixed edition of both, and somehow this is a good thing... or a bad thing?

Apple has over-influenced Microsoft.

I'm all for new design patterns in pixel balancing, and I'm definitely still hanging in there on Metro, but what really annoys me most is that the entire concept isn't about breaking away based on scientific data centered on the average human's ability to react to computer interfaces.

It is simply a competitive reaction, primarily to Apple. Had Apple not existed, I highly doubt we would be having this kind of discussion, and Windows Phone would probably be a glyph/charms/icon-filled, visual-thinking-friendly environment.

Instead, what we are probably doing is grabbing what appears to be a great interruption to the design status quo and declaring it “more easier”, but reality kicks in fast when you go beyond the initial short burst of information or screen composition into denser territory – even Microsoft is hard-pressed to come up with a Metro-inspired edition of Office.

Metro Reality Check – Typography style.

The reality is that the current execution of Metro on Windows Phone 7 isn't built or ready for dense information, and I would argue that the rationale that typography replaces chrome is merely a case of being the opposite of a typical iPhone-like experience – users are more in love with the unique anti-pattern than they are with the reality of what is actually happening.

Using typography as your go-to spatial visualization pattern simply flies in the face of what we actually do know from the small packets of research we have on HCI.

Furthermore, if you think about it, the iPhone itself, when it first came out, was a mainstream interruption to the way we interpret UI on a mobile device; icons, for example, took on more of a candy experience, and the chrome itself became themed.

It became almost as if Disney had designed the user interface as their digital mobile theme park, yet here is the thing – it works (notice how, when the Metro UI adds pictures to the background, it seems to fit? There's a reason for that).

Chrome isn't a bad thing; it taps into what we are hard-wired to do when we process information: we think visually (with a minority being the exception).

Egyptians, Asians and Aboriginals wrote their history on walls and paper using visual glyphs/symbols, not typography. That is the most important principle to grapple with here; historically speaking, we have always shown a tendency to gravitate towards a pictorial view of the world and away from complex glyph patterns (text) – that's why data visualization works better than text-based reports.

We ignore this basic principle because our technology environment has become more advanced, but we do not have extra brainpower as a human race; our genome has not mutated or evolved! We have just gotten better at collectively deciphering patterns and, in turn, have built up better habitual usage of those patterns.

Software today is full of bad UI – I mean terrible experiences – yet we are still able to use and navigate it.

Metro is more marketing / anti-competition than it is the righteous path to HCI design; never forget that part. Metro's tagline of being “authentically digital” is probably one of Dieter Rams' principles being mutated and broken at the same time.

Good design is honest.
It does not make a product more innovative, powerful, or valuable than it really is. It does not attempt to manipulate the consumer with promises that cannot be kept.

I should point out that these ten principles are what inspired Apple and other brands in the industrial design space. Food for thought.

Lastly, one more thing: what if your audience were 40% autistic/dyslexic – how would your UI differ from the current designs you have before you?

Related Posts:

What if you had to build software for undiagnosed adults who suffer from autism, Asperger's, dyslexia, ADHD etc.?

Software today is definitely getting more and more user-experience focused; that is to say, we are preoccupied with getting minimal designs into the hands of adults for task-driven operations. We pride ourselves on knowing how the human mind works through various blog-based theories on how one should design the user interface and what the likely initial reaction of the adult behind the computer will be.

The number of conferences I've personally attended that have a person or persons on stage touting the latest and greatest in cognitive-science buzzword bingo followed by best practices in software design is, well, too many.

On a personal level, my son has a rare chromosome disorder called Trisomy 8. It's quite an unexplored condition, and I've pretty much spent the last eight years interacting with medical professionals who touch on not just the psychology of humans but zero in on the way our brains form over time.

In the last eight years of my research I have learnt quite a lot about how the human mind works, specifically how we react to information and, more importantly, our ability to cope with change – which isn't just about our environments but also plays a role in software.

I've personally read research papers that explore the impact of society's current structure on future generations and, more importantly, how our macro and micro environments play a role in the children of tomorrow coping with change and learning at the same time. That is to say, we adults cope with emerging technology advancements because for us “it's about time”, but for today's child of 5-8 this is a huge problem: having to develop coping skills for a fluid technology adoption that often doesn't make sense.

Yesterday we didn't have NUI; today we do. Icons with a 3.5” floppy disk that represent “saving” have no meaning to my son, etc.

The list goes on as to just how rapidly we are changing our environments and, more importantly, how adults who haven't formulated the necessary social skills to realistically control the way our children are parented often rely on technology as the de facto teacher or leader (the number of insights I've read on how the Xbox 360 has become the babysitter in households is scary).

Getting back to the topic at hand: what if the people you are designing software for have an undiagnosed mental illness – or, better yet, a diagnosed one? How would you design tomorrow's user interface to cope with this dramatic new piece of evidence? To you, the minimal design works; it seems fresh and clear and has definitive boundaries established.

To an adult suffering from Type-6 ADHD (if you believe in it), which carries a degree of over-focus, it's not enough; in fact, it could have the opposite effect to what you are trying to achieve in your design composition.

Autism also plays a role: grid formations in design would obviously appeal to autistic traits, given it's a pattern they can lock onto and often agree with – Asperger's sufferers may disagree with it, and it could annoy or irritate them in some way (colour choice, too much movement, blah blah).

Who's to say your designs work? If you ask people on the street a series of questions and observe their reactions, you are not really gaining insight into how the human mind reacts to computer interaction. You've automatically failed as a clinical trial, because the person on the street isn't just a generic adult; there's a whole pedigree of relevant historical information you're not factoring into the study.

At the end of the day, the real heart of HCI – and this is my working theory – is that we formulate our expectations around software design from our own personal development. That is to say, if we as children had a normal or above-average IQ, the ability to understand patterns and the ability to cope with change, we are in turn more likely to adapt to both “well designed” and “poorly designed” software.

When you sit down and think about an adult and how they will react to tomorrow's software, you really have to think about the journey that adult has taken to arrive at this point and, more to the point, how easily they are influenced.

A child who came from a broken home, whose parents left and who was raised by other adults, and who is now a receptionist within a company, is more likely to have absolutely no confidence in making decisions. That person is an easy mark for someone with the opposite background, who can easily sway them towards adoption and change.

Put those two people into a clinical trial of how the next piece of software you are going to roll out for a company works, run them through the various tests, and watch what happens.

Most tests in UX / HCI focus on the ability of the candidate to make their way through the digital maze to get the cheese (basic principles of reward / recognition). So, to be fair, it's really about how the human mind can navigate a series of patterns to arrive at a result (positive / negative) and, furthermore, how that human can recall the information at a later date (memory recall meets muscle memory).

These kinds of results will tell you, I guess, the amount of friction associated with your change, and give you a score / credit for the impact it will likely have. In reality, though, what we probably need to do as an industry is zero in on how aggressively we can decrease the friction levels associated with change before the person even arrives at the console.

How do you give Jenny the receptionist, who came from an abusive childhood, enough confidence to tackle a product like Autodesk Maya (which is hugely complex)? If you were to sit down with Jenny, you might learn that she also has a creative side that's not obvious to all – it's her way of medicating the abuse through design.

How do you get Jack the stockbroker, who spends most of his time jacked up on speed/coke and caffeine, to focus long enough to read the information in front of him through data visualisation metaphors / methodologies, when the decisions he makes could impact the future of the global financial system worldwide? (OK, a bit extreme, but you get my point.)

It's amazing how much we as a software industry separate normal from abnormal when it comes to software design. It is also amazing how we look at our personas in the design process and attach the basic “this is Mike, he likes Facebook” fluffy profiling. What you may find even more interesting is that Mike may like Facebook, but in his downtime he likes to put fireworks in cats' butts and set them off because he has a weird fascination with making things suffer – which is probably why Mike now runs Oracle's user experience program.

The persona, in my view, is simply the design team indulging a massive dose of confirmation bias. When you sit down and read research paper after research paper on how a child sees the world – which later helps define him/her as an adult – well, my designs take on a completely new shift in thinking in how I approach them.

My son has been tested numerous times and has been given an IQ of around 135, which, depending on how you look at it, puts him at around genius level. The problem, though, is my son can't focus or pay full attention to things; he relies heavily on patterns to speed through tasks, but at the same time he's not aware of how he did it.

Imagine designing software for him. I do, daily, and have to help him figure out life; it has taught me so much in the process.

Metro UI vs Apple iOS 5... pft, don't get me started on this subject, as both offer amazing insights into pros and cons.

Related Posts:

How much would you invest in a pixel?

I am a massive fan of World of Warcraft again; yes, it is sad, isn't it? Last night I was spending my usual allotment of time watching pixels update on a screen instead of interacting with real humans, and something I witnessed struck a chord with me.

The ‘flash of genius’ for me came when I was playing a typical Player vs. Player (PvP) round of Battlegrounds. This is a part of the game that essentially aggregates a random group of people from across the entire WoW realms into a 10-vs-10 (etc.) match.

It's basically unbridled chaos, and it highlights some components I find fascinating, as watching the herd mentality of us humans in an avatar-driven game is kind of predictable. For instance, I was first out of the gate when the match started; I rode my horse towards the second-closest spot, and because I was the party leader, many other players followed me. I arrived at the spot and just waited. Out of the six other players, three stayed after 20 seconds or so of decision making, whilst the other three grew what I can only imagine was bored and rode off in search of a fight.

What is so profound about this is how easily I convinced others to wait beside me in a place with no real plan other than "well, if the sh*t goes down, we defend with our lives" as the core plan. We had no vision of what was about to happen beyond that, and we had no clue how we would all work together, as we had met each other in game only five minutes beforehand. Here we were, armed, ready to fight and hoping we could figure it out as we went.

We died.

This is much like most teams I've been part of over the past few years. I keep hearing that a good team is one that is in sync with one another, but in the end that only lasts until the first flag/waypoint; beyond that, a lot of variables occur that in turn cause de-synchronization.

In the above example, had I been paired with a healer and another tank/DPS (tank and DPS are basically characters whose sole job is to hit hard and often, while a healer's job is to keep everyone alive while they do so), we may have stood a chance of survival. We would all have had a role to play, and whilst the plan would be distilled into a core class structure, we would still have a series of objectives that must be upheld.

A healer must be protected at all costs, as that character is your tipping point between living and dying, but at the same time a healer must keep back from the fight as much as possible. A tank/DPS's job is to draw fire and get deep into the melee as much as possible; the more you can tie up your enemy's focus, the greater the chance of a win.

In software, this concept is not entirely lost: a UX/UI person's job is to figure out how to keep the software from dying a bad-usability death. The coders' job is to underpin it with large amounts of code to keep its structural integrity intact; if they do not do their jobs right, they in turn create more work for the UX/UI person to fix. If the UI/UX person does not do their job right, they can in turn suffocate the work of the coder – so it is a partnership.

A great software release respects this partnership to the end. Good UI/UX and Good Code = Good software.

If you randomly put together a team of mixed classes and pin your hopes on agile as a way of life, then, well, you are no different from my WoW example: an assumed leader leads a group of you into a spot with no agenda or plan other than "don't die, please".

How you live or die is based purely on how fast you can communicate with one another about which tactics to deploy to uphold the basic principle of preserving one's life.

All it takes, however, is one person to break ranks, to be the Leeroy Jenkins (see video below), and, well, it comes unstuck, and fast. We all die a horrible, humiliating death (aka miss our deadlines, etc.).

[Video: Leeroy Jenkins]

Agile is not enough – that is my overall point. I think agile works if you are solely focused on being a tank/DPS class (coder). If you mix in UX/UI, that is where it keeps coming back with mixed results; there appears to be no right or wrong formula here.

The one concept I believe in – and it's only a theory – is that you need at times to stop fighting the code and give the healer (UX/UI person) enough time to catch their breath, to drink some mana potions if you will, and to figure out how to navigate the next fight.

Lost in my metaphor?

What I am trying to say is that UX/UI in a sprint equation needs to occur every other sprint, meaning at some point in the process you, the coders, will have to refactor your UI / UX to accommodate the new direction in the design.

It sux.

It is, however, the realistic way to accommodate the reactive design you have put in place and, to be clear, it has little return on investment other than user efficiency and satisfaction levels.

Now comes the question – how much do you invest in a pixel?

Answer that and you will have a better understanding of agile, UI/UX + code than I currently have, as you now need to think in terms of how it all comes together and what value you place on the UI/UX component. Agile won't necessarily work the way you think when it comes to integrating your healer (UX/UI person) into your battle group. At times you may not need them – that is, until you hit a wall and soon realize it would have been better to have them at the start of the fight rather than the end.

I can think of some rebuttals here – ‘well, you are doing agile wrong’ or ‘your team sounds like it wasn't assembled correctly’ – to which I simply respond: welcome to reality. Sometimes you have to play the game with a randomly aggregated team, and it is not always a case of greenfield project management.

Now, your move: how do you accommodate these variables?

Related Posts:

The 6 things that annoy me when you design my software.

1. Stop making bottleneck software


Technically, you could write most software today as one big mega-class with loads of switch / if-else statements. If you did that, not only would every other developer you come across immediately punch you in the nose, but it would also become hard to maintain over time.

We agree that would be stupid, right? I mean, one large file for all the code! Yet why do I always see software designed in such a way that it becomes the Swiss army knife of every task associated with the user – feature-heavy, based on feeble arguments of "but the user wanted.."?

The user is 80% of the time a jackass.

You are armed with a plethora of programming models today; stop crowding the user interface (thereby creating UX bottlenecks) with every single role known to man. Figure out the "personas" attached to your software and, if need be, make smaller, contextually relevant versions of the software per persona (whether modular or as separate, specific installations) – as sketched below.
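
To make that concrete, here's a minimal hypothetical sketch of persona-scoped composition in C# (the module and persona names are mine for illustration, not from any real product): each role's build of the software only ever registers the modules that role actually needs, instead of one shell loading every feature for everyone.

    // Hypothetical sketch: compose the product per persona instead of one do-everything shell.
    using System;
    using System.Collections.Generic;

    public interface IModule { void Register(); }

    public class InvoicingModule : IModule { public void Register() { Console.WriteLine("Invoicing ready"); } }
    public class ReportingModule : IModule { public void Register() { Console.WriteLine("Reporting ready"); } }
    public class AdminModule : IModule { public void Register() { Console.WriteLine("Admin console ready"); } }

    public static class PersonaComposer
    {
        // Each persona maps to the handful of modules that role actually needs.
        static readonly Dictionary<string, IModule[]> Catalog = new Dictionary<string, IModule[]>
        {
            { "Receptionist", new IModule[] { new InvoicingModule() } },
            { "Accountant",   new IModule[] { new InvoicingModule(), new ReportingModule() } },
            { "SysAdmin",     new IModule[] { new AdminModule() } }
        };

        // Only the modules for the given persona ever make it into the UI.
        public static void Compose(string persona)
        {
            foreach (IModule module in Catalog[persona])
                module.Register();
        }
    }

Calling PersonaComposer.Compose("Receptionist") lights up invoicing only; the admin console never even loads for that persona, so it can't crowd the screen.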

2. Third Party Controls do not negate the need for a designer.


When I first left Microsoft and joined the working class (mwahah), I was often thrown into the deep end of projects that needed some UX makeovers. Given I have both a programming and a design background, it seemed a natural fit, so sure, go with the flow, I say. I'd walk into a typical gig and, sure enough, I'd see 3rd party controls lurking about (i.e. Telerik, ComponentOne etc.).

Nothing against these brands, but if you are dealing with WPF or Silverlight, let me give you a heads-up on why this is a bad idea. Firstly, 3rd party controls are just a quick, dirty fix to get around bad UI design; I get it, budgets are non-existent, so you do the best you can. Secondly, these controls are made for many developers around the world, so there are many knobs to turn on and off for them to snap together – which means your controls are not on a diet. Thirdly, you need to walk a mile in the shoes of, say, a C++ programmer, or of some language where you used to have to play a game of memory Tetris, to really grasp that second point.

Diet is the keyword. If you are dealing in the Silverlight space, the leaner and smaller your code's footprint, the snappier things are going to get. I am not talking about pure no-holds-barred CPU processing time; I am talking about rendering pipeline time. I am yet to see an example of 3rd party controls improving performance rather than subtracting from it.
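
If you want to put a number on a control's "diet", one rough gauge is to count how many elements it actually expands into in the visual tree once it has loaded, since that is what the rendering pipeline has to walk. Below is a minimal sketch using the standard WPF/Silverlight VisualTreeHelper API; the comparison in the comments is illustrative, not measured.

    // Rough sketch: count the visual-tree elements a control expands into at runtime.
    // Call it after the control has loaded/rendered (e.g. in a Loaded event handler).
    using System.Windows;
    using System.Windows.Media;

    public static class VisualTreeWeight
    {
        public static int CountElements(DependencyObject root)
        {
            int count = 1; // count the root itself
            int children = VisualTreeHelper.GetChildrenCount(root);
            for (int i = 0; i < children; i++)
                count += CountElements(VisualTreeHelper.GetChild(root, i));
            return count;
        }
    }

    // Usage: int weight = VisualTreeWeight.CountElements(someThirdPartyGrid);
    // A plain ListBox might expand to dozens of elements; a heavily templated
    // 3rd party grid can expand to thousands, every one of them render work.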

Stop outsourcing design to third party controls – and I am looking at you, graph boy/girl.

3. Every screen has a soul.


In the UI principles space there's this little concept called false affordance. It means something that looks like it's supposed to do XYZ but does nothing (i.e. "push the button and all negative energy will disappear" scams).

If your software has a hierarchy of navigational elements, and clicking on the first node does nothing but expand to the second node while showing a view with some "weak" summary (i.e. a description etc.) – stop, you are doing it wrong.

Every click has a purpose for existing. If you have a dashboard, what is its purpose? Think about its relevance in the grand scheme of things. Should it have fresh content daily/weekly/monthly? Is a holding-pattern screen necessary – you know, the one screen in the app that buffers between two major waypoints and really has no purpose other than to get you from A to C, yet somehow you felt the need to keep B in place?

If you have a screen that is filled with, say, two input controls and that is it – that is a freakin' dialog box, not a screen. Stop being lazy and think about the problem, not how easy it is for you to whack up an app. It's not about you; it is about them *points to the end user*.

4. You are not a magician so quit giving me the constant "surprise" moments.


Ever used an application where you click on something inside a screen and suddenly a piece of user interface randomly appears somewhere else on the screen? Maybe hidden inside a secondary tree node somewhere?

This is not a magic show and you are not a magician. Progressive disclosure is great when done in a way that leads the user on a journey; no more "I've just modified the screen – if you can guess what I just did, you get a fluffy kitten" moments.

5. Humans are smarter than you think


I have covered this quite a lot, but let me reiterate it in the theme of this post. Over 90% of the world's computer population right now has some overly complicated, unnecessary piece-of-crap software installed on their hard drive whose inner workings they have somehow managed to partially figure out.

The benchmark for success in this space right now is so low you could trip over it and still succeed. My point is that end users are actually smarter than you give them credit for. If you are in a team and someone says, "Yeah, our users aren't smart enough to..", challenge that jackass upfront. Did he conduct a survey where one in five housewives came back dumber than he anticipated?

If an average worker-bee can sit through SAP ERP or any piece of software that Oracle/Microsoft throws at them, they can sit through your software as well.

The trick is to make it enjoyable for them, to be the software that does not feel like the others – the standout. Rather than holding them hostage to complexity because of your own arrogance, try to think less about complexity levels and more about enjoyment levels. Software should be enjoyable, as we work WITH software – we do not USE it.

6. I did not buy a cat so it could be my master.


My kids wanted a kitten, so, being the "fun" dad, I bought one. Today that cat rules the house most of the time because we react to it, not the other way around.

Something similar often happens in software. We buy software thinking it will save us time and money, that it will improve the master/slave relationship in our daily lives. Instead, we become ever more enslaved to its processes.

An example: today I went to my bank, ANZ (which I am ditching – F*K you, ANZ), and said, "I'd like a copy of my home mortgage statement to give to your competitor so I can leave you dumbasses – i.e. YOU ARE FIRED".

I watched the teller pound away at a keyboard for like five minutes before she arrived at some kind of point that then required her co-worker to give her instructions on generating a printable report.

I am sitting there thinking the following things:

  • Why are you typing so much?
  • Why can’t I do this online myself? You give me access to every other account functionality yet why not this?
  • Why, still to this day, am I giving you everything but a DNA sample to authenticate that I am who I say I am?

My point here is that, aside from ANZ Bank's crappy online service, the teller herself should have a simple input control with a button next to it. Inside that input control she types, "Print <AccountNumberXYZ> Mortgage Statement as of Today".

The input box then does the following:

  • Looks up my account number and verifies it is still active.
  • Takes the verb "Print" to mean "fetch", the words "Mortgage Statement" as what should be fetched, and the word "Today" as meaning Now(), then spits out a piece of paper with that information. In other words: "PrintMortgageStatementWorkflow(custId, date);" – a rough sketch of this idea follows below.
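
As an illustration of how little software that teller scenario actually needs, here is a minimal hypothetical sketch in C#. The parsing is naive on purpose, and PrintMortgageStatementWorkflow is the imaginary workflow from the bullet above, not a real banking API.

    // Hypothetical sketch of the teller's command box; none of this is a real banking API.
    using System;

    public static class TellerCommandBox
    {
        public static void Execute(string command)
        {
            // e.g. command = "Print 12345678 Mortgage Statement as of Today"
            string[] words = command.Split(' ');
            string verb = words[0];              // "Print" -> fetch
            string accountNumber = words[1];     // account to look up and verify as active
            DateTime asOf = command.EndsWith("Today")
                ? DateTime.Now                   // "Today" -> Now()
                : DateTime.Parse(words[words.Length - 1]);

            if (verb.Equals("Print", StringComparison.OrdinalIgnoreCase)
                && command.Contains("Mortgage Statement"))
            {
                PrintMortgageStatementWorkflow(accountNumber, asOf);
            }
        }

        // Stand-in for the workflow named above: imagine it verifying the account,
        // fetching the statement and spitting out the piece of paper.
        static void PrintMortgageStatementWorkflow(string custId, DateTime date)
        {
            Console.WriteLine("Printing mortgage statement for {0} as of {1:d}", custId, date);
        }
    }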

I think that makes my point: why are we jumping through hoops to make software do the work, when it feels like we are a separate background thread in the software's world?

Related Posts:

UX Lab: Changing the way you handle CRUD workflow

I often see consistent patterns in the way applications are built when it comes to generic create, read, update and delete (CRUD) workflows.

The usual pattern is that a screen starts off with an add/remove action followed by a very large datagrid and probably some paging. A user then refines the datagrid's result set and makes a selection, either inline on the datagrid or by opening a modal via an action like double-click, which presents them with a more detailed view of that record. This approach is so generic that I'd dare say nobody's really sat down and thought about its actual practicality – it seems to be the unofficial standard for screen design (well, for the bloody apps I see day in, day out, anyway).

This pattern isn't something I'm a fan of; maybe it's because the pattern is so common now that I simply crave an alternative approach. I crave this alternative because at times the workflow itself seems oddly backwards.

The part that catches me out is the overall approach taken. The end user has come to the screen to get a detailed view of a record – maybe a summary, but that's doubtful. They wade around in the various turn-keys (filter settings) until they settle on a pattern of data they can scan (hunt/browse), and then proceed to open the modal for a detailed view. The majority of the practical usage is saved for the end of the process pipeline, in that getting a detailed snapshot of the record seems to be an extension of the UI instead of its focal point.

Armed with this style of thinking, today I set out to try an alternative approach to this workflow. I decided to simply invert it: take a typical security (add/remove users etc.) workflow and try a different approach (see below).

[Screenshot: SecurityUserScreenBkg]

The idea is that when you click on “Find Users” the screen opens on a summary view; since I'm logged in, it reflects back my entire account profile as found within the system. There are then a number of actions one can take to decide what to do next, but the key piece here for me is that I've shown you the end point up front – I've seeded a contract with the end user around what the screen will look like once they've found a user of their choosing.

How do I change the user from me to someone else?


The change button in this screen kicks off what is traditionally the first screen: if the end user clicks on [Change..], a modal opens over the top, presenting them with search criteria. The user then fires off a search and can specify filters for it. Once the end user has found the right user, the modal closes and the original security profile (you) switches to the person in question.

[Screenshot: SecurityUserScreen]

Ok, I’m kind of with you, but what benefits does this give then?

I personally think it shifts the user into a more focused approach to the workflow. It's quite easy to snap in a datagrid + tree control and hit F5/ship. This approach, in my opinion, treats the workflow differently, in that it asks the user to be specific about what they are really after. If you're in the User Administration area of the application, then what is it you want to do? “Manage users” is probably the typical response. So let's let them manage a user in a more focused fashion by exposing other areas of interest in a screen that's more content-specific and less cramped / buried in a floating modal.

The typical “list all users” with paging approach reserves quite unnecessary real estate for prime time; it's merely a stepping stone to the end point. It's almost throwaway in the task process, should the user want to change “John Doe's” password or check when that user last logged in, etc.

You could even approach what I've done differently, by simply providing a search box at the top with the label “Find User..”. Once the user types “Scott Bar..”, an autocomplete-like experience fires, but instead of a pulldown it could go off and grab Twitter feeds, Flickr photos, Facebook profiles, LinkedIn profiles etc. and just start showing them on screen. This kind of approach is more helpful when you're trying to figure out who that “Scott” fellow was last night, as you're now met with multiple forms of media to guide your search detective skills to a more informed end point.
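
To sketch that aggregated “Find User..” idea in modern C# (the service names and the render callback are hypothetical placeholders; the real social APIs obviously differ): kick off every source lookup concurrently once the user pauses typing, and paint each source's hits as they arrive rather than waiting for all of them.

    // Hypothetical sketch: aggregate several profile sources behind one "Find User.." box.
    using System;
    using System.Threading.Tasks;

    public class ProfileHit
    {
        public string Source;   // e.g. "Twitter", "Flickr"
        public string Display;  // what the result tile would show
    }

    public static class AggregatedPeopleSearch
    {
        // Fire every source lookup concurrently; each result renders as it lands.
        public static async Task SearchAsync(string partialName, Action<ProfileHit> render)
        {
            await Task.WhenAll(
                QuerySourceAsync("Twitter", partialName, render),
                QuerySourceAsync("Flickr", partialName, render),
                QuerySourceAsync("Facebook", partialName, render),
                QuerySourceAsync("LinkedIn", partialName, render));
        }

        static async Task QuerySourceAsync(string source, string name, Action<ProfileHit> render)
        {
            await Task.Delay(100); // stand-in for the real network call to that service
            render(new ProfileHit { Source = source, Display = name + " (via " + source + ")" });
        }
    }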

The point is, it takes the CRUD equation and flips it into a more interactive experience. Why invest all this time and energy in some of the new UX platforms out there only to use generic patterns like the original one mentioned in this post? How can you evolve this pattern further, and where can users gain in terms of data + contextual view beyond what they've typically been given?

It's a new world, people; try and break a few things, as when you break something you are in turn rewarded with knowledge of where risk/failure can occur. That's a much more informative approach than “well, everyone else is doing it so I assume it works” policies :0

To be tested..

Related Posts:

The principles of Microsoft Metro UI decoded

The phrase “authentically digital” makes me want to barf rainbow pixels. It was a quote pulled from a Windows Phone 7 reviewer when he first got hold of the phone. At first you could arguably rail against the concept of what “authentically digital” means and simply file it under yet more marketing fluff jazzing up a situation unnecessarily.

I did, until I sat back and thought about it more.

Issues Presented.

Metro has its own design language attached; it cites a bunch of commandments that the overall experience is to respect and adhere to. That is to say, someone has actually sat down and thought the concept through (rare inside Microsoft UX). I like what the story is pitching, and I agree with most of the laws of Metro – that is to say, I am partially on board, but not completely.

I'm on board with what Metro could be, but I am not excited about where it's at right now. I say this because I think software's future is going through what the fashion industry has done for generations – a cultural rebirth / reboot.

Looking back at retro, not Metro.

Looking at the past: back in the late 80s and early 90s the world was filled with bold, flat-looking user interfaces that made use of a limited color palette, given video capabilities back then weren't exactly the greatest on earth. EGA was all the rage and we were seeing hints of VGA, whilst hating the idea that CGA was our first real cut at graphics.

EGA eventually faded out and we found ourselves in the VGA world (color TV vs. black and white, if you will); life was grand, and with the 32-bit vs. 16-bit color wars coming to a conclusion, the world's creative space moved forward in leaps and bounds. Photoshop users found themselves creating some seriously wicked UI, stuff that made you thank the UI gods at the time for plug-ins like Alien Skin etc., as they gave birth to what I now call the glow/bevel revolution in user interface design.

Chrome inside software started to take on an interesting approach; I actually think you could trace the origins of this creative new wave back to products like Winamp and Windows Media Player skins. The idea that you could take a few assets, feed them into mainstream products like these and in turn create an experience on the desktop that wasn't a typical application was interesting (not to mention Macromedia Director's influence here).


I think we all simply got onto a user-interface sugar-induced high; we effectively went through our awkward 80s fashion stage, where crazy, weird-looking outfits / music etc. were pretty much served up to the world to gorge on. This feast of weird UI has probably started to wind down thanks to the evolution of web applications and, more importantly, what they in turn slowly taught us.

Web taught the desktop how to design.

The first lesson the web taught us about user interface design is simple – less is more. Apple knocks this out of the park extremely well, and I'd argue Apple wasn't its creator; the Web 2.0 crowd, as they used to be known, was. The Web 2.0 crowd found ways to keep the UI basic, to the point and yet visually engaging, with minimalist views in mind. It worked, and continues to work to this day – even on Apple.com.


Companies like Microsoft saw this approach to designing user interfaces and came to a fairly swift rationale: if one were to create a platform for developers & designers to work in a fashion much like the web, desktop applications themselves could take on an entirely new approach.

History lesson is over.

I now look at Metro, thinking back on that past evolution, and can't help but think we're going back to a reboot of the EGA world, in that we are looking for an alternative design in order to attract / differentiate from the past. Innovation is a scarce commodity in today's software business, so we are in turn looking for ways to re-energize our thinking around software design, but in a way that doesn't create cognitive overload – be radical, be daring, but don't be disruptive to process/task.

Inside Microsoft, from what I can presume, the ECG group found a way to hijack existing patterns of user recognition, making use of the modern signage found inside bus stations, railways, elevator marshalling areas etc., and declared this to be the way out of the excess-UI scourge.

I like it, I like this source of inspiration, but my first instinct was simple – I hope your main source of success isn't a reliance on typography, especially in today's 7-second attention economy. Sure enough, there it is: the reliance in Windows Phone 7. Large typography takes over the areas where chrome used to live, in order to do what chrome once did. The removal of color / boundary textures to create large empty spaces filled with 70px+, half-seen, half-hidden typography is what Microsoft's vision of tomorrow looks like.

Metro isn't WP7; Metro is Microsoft's Future Vision.

My immediate reaction to seeing the phone (before the public did), back inside Microsoft, was "are you guys high? This is not what we should be doing; we are close, but keep at it, you're nearly there! Don't rush this!". This reaction was the equivalent of me looking at a Category 5 tornado and demanding it turn around and seek another town to smash to bits – brave, forward-thinking, but foolish.

"This phone has to ship, it's already had two code resets, get it done, fix it later" is pretty much the realistic vision behind Windows Phone 7 – NOT Metro.

Disbelief?

Take a look at what the Industry Innovation Group has produced via a company called Oh, Hello. In this vision of tomorrow's software (2019, to be exact) you'll see a strong reliance on the Metro laws of design.

The Principles of Metro vs. Microsoft Future Vision.

In order to start a conversation around Metro in the near future, one has to identify with the level of thinking associated with its creation. Below are the principles of Metro – more to the point, these are the design objectives and the creative brief, if you will, with which one should approach Metro.

Clean, Light, Open, Fast

  • Feels Fast and Responsive
  • Focus on Primary Tasks
  • Do a Lot with Very Little
  • Fierce Reduction of Unnecessary Elements
  • Delightful Use of Whitespace
  • Full Bleed Canvas

You could essentially distill these points down to one word – minimalist. Take a minimalist approach to your user interface and the rewards are simple: a sense of responsiveness in the user interface, reliance on less information (which in turn increases the end user's decision response) and a reduction in creative noise (distracting elements that add no value other than being cool at the time).

[Figure 1: UI from the Microsoft Sustainability video]

In Figure 1, I'd strongly argue, you could adhere to these principles. This image is from the Microsoft Sustainability video, but inside it you've got a situation which respects the concept of Metro – although, given the wide-open brief here, under this principle you could argue either side.

Personally, I find the UI in question approachable. It makes use of a minimalist approach and provides the end user with a central point of focus. Chrome is in place, but it's not intrusive and isn't overbearing. Reliance on typography is there, but at the same time it's handled in a manner that befits the task at hand.

[Figure 2: Windows Phone 7 email user interface]

Microsoft's vision of this principle comes out via the phone user interface above (Figure 2). I'm not convinced that this is the right approach to minimalism. I say this because the iconography within the UI is inconsistent – some icons are contained, others are just glyphs indicating state. The containment within the actual message isn't clear in terms of spacing – it feels as if the user interface is willing to sacrifice content in order to project who the message is from (Frank Miller). The subject itself has a lower visual priority, along with the attachment – more to the point, there is no apparent containment line in place to highlight that the message has an attachment.

[Figure 3: Microsoft's original future-device vision]

Microsoft's original vision of the device future (Figure 3, above) has a different look to where Windows Phone 7 is today. Yet I'd state that the original vision is more in line with the principles than the actual Windows Phone 7. It initially struck a balance between the objectives provided.

The iconography is consistent and contained; the typography is balanced and draws the user's attention to important specifics – what happened, where, and "oh, by the way, more below…" – and lastly it makes use of visuals such as the photo of the person in question. The UI also leverages the power of peripheral vision to give the user a sense of spatial awareness: it's subtle, but it takes on the look and feel of an "airport" scenario.

Is this the best UI for a device today? No, but its approach is more in tune with the first principle than, arguably, the current Windows Phone 7 approach, which relies on fierce amounts of whitespace, reduces iconography to the point where it is clearly secondary and, lastly, emphasizes the parts of the UI that I'd argue have the lowest importance (i.e. the previous screen would have indicated who the message is from; now I'm more focused on what the message is about!).

[Figure 4]

[Figure 5]

Celebrate Typography

  • Type is Beautiful, Not Just Legible
  • Clear, Straightforward Information Design
  • Uncompromising Sensitivity to Weight, Balance and Scale

I love a good font as much as the next designer. I hoard them like my icons; in fact, it's a disease, and if you're a font lover a must-see video is Helvetica. That being said, there is a balance between text and imagery, a balance struck daily in a variety of mediums – mainly advertising.

Imagery will grab your attention first, as it taps into a primitive component of your brain, the part that works without you realizing it's working. The reason is that your brain is often on auto-pilot, constantly scanning your everyday environment for patterns. It's programmed to identify with three primitive checks: fear, food and sex. Imagery can tap into these straight away; if you have an image of an attractive person looking down at a beverage, you can't help but first think "that person's cute (attractive bias), and what are they looking at? Oh, it's food!…" All this happens before your brain actually takes time to analyse any text on the image. To put it bluntly, we do judge a book by its cover, with an extreme amount of prejudice. We are shallow; we prefer to view attractive people over ugly ones, unless we are conveying a fear-focused point: "If you smoke, your teeth will turn into this guy's – eewwww" (notice how anti-smoking campaigns don't use attractive people?).

Back to the point at hand: celebrating typography. The flaw in this beast, despite my passion for fonts, is that given we are living in a 7-second attention economy (we scan faster than ever before), reliance on typography can be a slippery slope.

[Figure 6: a futuristic multi-touch newspaper concept]

In Figure 6, a typical futuristic newspaper with multi-touch (oh, but I dream), you'll notice the various levels of typography usage (no secret to newspapers today). The headings deliberately approach the user with different font types, font weights, uppercase vs. lowercase and, for those of you really paying attention, at times different kerning / spacing.

The point being: typography is actually processed first by your brain as a glyph, a pattern to decode. You've all seen that link online somewhere where the wrod is jumbled in a way that you are first able to read, and then straight away identify the spelling / order of the siad words. The fact I just did it then, along with the poor grammar / spelling within this blog, indicates you agree with that point. You are forgiving of this the majority of the time, given you've established a base understanding of the English language; combine that with your attention span being so fast-paced, and you are more focused on absorbing the information than picking apart how it got to you.

Typography can work in favor of this, but it comes at a price: balancing imagery / glyphs with words.

[Figure 7: Metro in the wild – the Windows Phone 7 Pictures hub]

The above image (Figure 7) is an example of Metro in the wild. Typography here is not in too bad a shape, except for a few things. The first is that the "Pictures" text makes use of a large amount of the canvas, to the point where the background image and heading are probably duking it out for your attention. The second part irritates me the most: the secondary heading and the list items are quite close in scale. Aside from the font weight being a little bolder, there is no real sense of separation here compared to what there should or could be if one were to respect the principle of celebrating typography.

Is Segoe UI the only font allowed in this vision? I hope not. Are the font weights "light" and "regular" the only two weights attached to the UI? What relevance does the background hold to the area – pictures? OK, flimsy-at-best contextual relevance, but in comparison to Figure 3 above, a subtle usage of watermarks etc. to tap into your peripheral vision would provide more basis to grapple onto – pattern-wise, that is. Take these opinions, combine them with the reality that there is no sense of containment, and I'm just not convinced this is in tune with the principle. It's like the designers of Metro on Windows Phone 7 took 5% of the objectives and just ran with them.

[Figure 8: a typography-driven UI using color, scale and case for separation]

Comparing Figure 7 and Figure 8, the usage of typography contrasts, yet both use the same one and only font – Segoe UI. The introduction of color helps you separate the elements within the user interface; the difference in scale is obvious, along with weight and transforms (uppercase / lowercase). Almost 80% of this user interface is typography-driven, yet the difference between the two is, I hope, obvious.

[Figure 9: Windows Phone 7 UI combining typography with visual elements]

Don't despair; it's not all doom and gloom for the Windows Phone 7 future. Figure 9 (above) is probably one of the strongest hints of a "yes!" moment for the said phone that I could find. Typography is used, but add visual elements and approach the design of the typography slightly differently, and you may just have a stake in this principle. The downside is the choice of color: orange and light gray on white is OK for situations with increased scale, but on a device where lighting can be hit or miss, you probably need to approach this with bolder colors. The picture in the background also creeps into your field of view over the text, especially in the far-right panel.

[Figure 10]

Alive in motion

  • Feels Responsive and Alive
  • Creates a System
  • Gives Context to Improve Usability
  • Transition Between UI is as Important as the Design of the UI
  • Adds Dimension and Depth

I can't really speak to these principles via text on a blog, but what I would say is that the Windows Phone attacks this relatively OK. I still think the FlipToBack transition is too tacky, and the way the screens transition in and out at times isn't as attractive as, for example, on the iPhone (i.e. I really dig how the iPhone zooms the UI back and to the front). The usage of kinetic scrolling also gives you a sense of control, like there are some really well-oiled ball bearings under the UI's plane, so that if you flick it up, down, right or left, the sense of velocity and friction is there.

If you zoom in and out of the UI, the sense that the UI will expand and contract fluidly also gives you the element of discovery (progressive disclosure), and can also give you a sense that less work is attached.
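
For the curious, that well-oiled ball-bearing feel usually boils down to a tiny friction model. Here is a minimal sketch with my own illustrative constants (not Windows Phone's actual implementation): the flick seeds a velocity, and every frame the offset glides forward while friction bleeds the velocity away.

    // Minimal kinetic-scrolling sketch: a flick seeds a velocity, friction decays it each frame.
    public class KineticScroller
    {
        const double Friction = 0.95;    // fraction of velocity kept per frame (tune to taste)
        const double MinVelocity = 0.5;  // pixels per frame below which the glide stops

        public double Offset { get; private set; } // current scroll position in pixels
        double velocity;                           // pixels per frame

        // Called once when the user lifts their finger at the end of a flick.
        public void Flick(double releaseVelocity)
        {
            velocity = releaseVelocity;
        }

        // Called on every frame tick (e.g. CompositionTarget.Rendering in WPF/Silverlight).
        public void Tick()
        {
            if (System.Math.Abs(velocity) < MinVelocity) { velocity = 0; return; }
            Offset += velocity;      // glide
            velocity *= Friction;    // friction bleeds off the energy
        }
    }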

[Figure 11: transition start – the Reptile node]

[Figure 12: transition end – the Reptile node expanded]

Taking Figure 11 & Figure 12 (start and end), one could imagine a lot of possibilities in terms of how the transition would work. That the Reptile node expands out to give way to types of reptiles is hopefully obvious, whilst at the same time the focus on Reptile is also kept in place (via a simple gradient / drop shadow to illustrate depth). Everything could snap together in under a second, or maybe two, but it's something you approach with a degree of purpose-driven direction. The direction is "keep your eye on what I'm about to change, but make note of these other areas I'm now introducing" – you have to move with the right speed and the right transition effect, and at the same time not distract too heavily in areas that aren't important.

Content, Not Chrome

  • Delight through Content Instead of Decoration
  • Reduce Visuals that are Not Content
  • Content is the UI
  • Direct interaction with the Content

Chrome is as important as content. I dare anyone to provide any hint of scientific data highlighting the negative effects of grouping in user interface design. Chrome can be overused, but at the same time it can be a lifesaver, especially when the content becomes overbearing (most line-of-business applications today suffer from this).

Having chrome serves a purpose: providing the end user a boundary for content within a larger canvas. Examples are below.

[Figure 13: chrome example from the Microsoft Sustainability video]

[Figure 14: chrome example from the Microsoft Sustainability video]

[Figure 15: dashboard-style chrome from the Microsoft Sustainability video]

I could list more examples, but because I'm taking advantage of the Microsoft Sustainability video, I figure these are sufficient examples of how chrome is able to break the user interface up into contextual relevance. Chrome provides a boundary – the areas of control, if you will – in order to separate content into piles of semantic action. Specifically, in Figure 15, the brown chrome is much like the dashboard of your car: your main focus is the road ahead, that's your content of focus, but at the same time having access to other pieces of information can be vital to a successful outcome. Chrome also provides access to actions through which you can carry out other principles of human interaction – e.g., adjusting window placement and separation from other areas offers the end user a chance to tuck the UI into an area for later resurrection (prospective memory).

Windows Phone 7, for example, prefers to leverage the power of typography and background imagery as its "chrome" of choice. I'm in stern disagreement with this, as the phone itself projects what I can only describe as vast, uncontained piles of emptiness, with less focus on actual content. The biggest culprit of all for me is the actual Outlook client within the said phone.

[Figure 16: the Windows Phone 7 Outlook client]

The Outlook UI for me is like an itch I have to scratch: I want the messages to have subtle separation, and I want the typography to strike a balance between "chrome" and "whitespace".

[Figure 17: Windows Phone 7 / Metro keyboards]

Chrome is also not just about the outer regions of a window/UI; it has to do with the internal components of the user interface – especially the input areas. The above (Figure 17) is an example of the Windows Phone 7 / Metro keyboards. At first glance they are simple, clean and open, but the part that captures my attention most is the lack of chrome or, more to the point, separation. I say lack, as the purpose of chrome here would be to simulate tactile touch without actually giving you tactile touch. The keyboard on the right has OK height, but the width feels cramped, and when I type on the device it feels like I'm going to accidentally hit the other keys (so I'm now more cautious as a result).

[Figure 18: a keyboard with chrome – even spacing and clear separation]

The above (Figure 18) offers the same concept but now with "chrome", if you will. Nice even spacing, solid use of the typography principles and clearly defined separation for the actions below.

[Figure 19: the iPhone keyboard]

The iPhone has also found a way to strike a balance between chrome and the previously stated principles. The thing that struck me most about the two keyboards is not which is better, but how the same problem was thought about differently. Firstly, as you type, an enlarged character shows – indicating you hit that character (reward); secondly, the actual keys have a similar scale in terms of height/width proportions, yet the key having a drop shadow (indicating depth) is to me more inviting to touch than a flat one (it's like asking: which do you prefer, a holographic keyboard or one with tactile touch, a physical embodiment?). If you also combine sound and vibration as the user types, it can help trick the end user's senses into comfortable input.

I digress from chrome, but the point I'm making is that chrome serves a purpose; don't be quick to declare the principles of Metro the "yes!" moment, as I'd argue the jury is still unable to formulate a definitive answer either way.

Authentically Digital

  • Design for the Form Factor
  • Don’t Try to be What It’s NOT
  • Be Direct

I can't speak to this too much, other than to say this isn't a principle; it's more marketing fluff. (The only one with a tenuous-at-best attachment to design principles would be "design for the form factor", meaning don't try to scale a desktop user interface down to a device. Make the user interface react to the device, not the other way around.)

Summary

Metro is a concept. Microsoft has had a number of goes at this concept, and I for one am not on board with its current incarnation inside the Windows Phone 7 device. I think the team has lost sight of the principles they themselves put forward, and given the Industry Innovation Group has painted the above picture of what's possible, it's not as if the company hasn't a clue. There is a balance to be struck between what Metro could be and what it is today. There are parts of Windows Phone 7 that are attractive, and then there are parts where I feel it's either been rushed or engineering overtook design as the reason things are the way they are (maybe the design team couldn't be bothered arguing for more time/money to be spent propping up the areas where it falls short).

People around the world will have mixed opinions about what Metro is or isn't and, lastly, what makes a good design versus what doesn't. We each pass our own judgement on what is attractive and what isn't; that's nothing new to you. What is new is the rationale that software design is taking a step back into the past in order to propel itself into the future. That is, the industry is rebooting itself again, but this time the focus is on simplicity, and by approaching Metro through the Microsoft Future Vision rather than the Windows Phone 7 of today, I have high hopes for this proposed design language.

If the future is simply taking Zune Desktop + Windows Phone 7 today and rinsing/repeating, then all this becomes is a design fad, one that doesn't offer much depth beyond limited respite from the typical desktop/device UI we've become used to. If that is enough, then in reality all it takes is a newer design methodology to hit our screens and we're off chasing the next evolution without consistency in our approach (we're simply chasing shiny objects).

I've got limited time on this earth, and I'd like to live in a world where the future is about breaking down large amounts of unreadable/unattractive information into parts that propel our race forward, not stifle it into bureaucracy-filled celebrations of mediocrity.

Apple as a company has kick-started a design evolution, and say what you will about the brand, but the iPhone has dared everyone to simply approach things differently. The Windows Phone team was at times paralyzed by a sense of "not good enough" when it came to releasing the vNext phone; it went through a number of UI and code resets to get to the point it's at now. That had everything to do with the iPhone – Microsoft had to dominate its market share again and attract consumers in a more direct fashion. Apple may not have the entire world locked to its device, but it has made a strong disruption into what's possible. And it did not do this via the Metro design language; they simply made up their own internally (who knows what that really looks like under the covers).

Microsoft has responded and declared Metro design its alternative to the Apple culture; the question now is whether the company can maintain the discipline required to respect its own proposed principles.

I'd argue that so far they haven't, but I am hopeful for Windows 8.

Lead with design, engineer second.

Related Posts:

What happens when you bring the UX person in last.

How many of you have been to a conference with a UX/UI person on stage discussing the mystic art of software development and design? In that session, at some point, they raise the slide outlining that you should engage a UX person early and think about UI/UX from the start.

How many of you then go back to your respective cubicles nodding in agreement, but then immediately start a new project ignoring that suggestion?

Don’t lie, I see you looking back in a nervous manner and shouting out reasons like “Well, we didn’t have the budget” or “My boss wouldn’t …” etc.

Meet Mr Wolf

image

Just like in Pulp Fiction, a guy like me is called in after the crime has been committed. I'm the guy you bring in after you've accidentally killed someone, and it's my job to navigate the mess in order to get you back to your life without prison time. If I succeed, you don't spend the rest of your life in jail; if I fail, well, learn how to fight using prison rules.

When I come into a team in this situation, the thing I notice most is that they are looking for guidance around a plan. It's a case of me analyzing the situation, asking a series of specific questions about the scene, and then giving them a task list to execute on – while being clear they should stick to my rules or, well, good luck in jail.

The problem with this approach at times is that you usually have one or two people in the room who ask for your help but at the same time are giving you orders on how to clean up the mess quicker – and each time they do, they increase their chances of prison time.

It's a hard balance to participate in from my perspective, as I have to figure out how to give the software's targeted end users a design and experience that isn't just screen after screen of tree controls and datagrids, while having a low impact on the codebase, and – lastly but most importantly – doing it within a very tight timeframe/budget.

It's hard work and, you know what, it's going to cost you, so don't whine about it.

Had you called me from the start, it would have been a completely different outcome, and yes, you've heard this thousands of times at whatever conference you last attended – engage a UX person early and let them direct the screens' overall composition. Design first, engineer second.

I have personally been pulled into 30+ projects in the last year with this exact situation unfolding before me: it's the last two sprints of a project, I'm playing a massive game of UX Tetris with WPF/Silverlight or WP7, and I'm constantly being harassed with time/budget questions.

It sux, but that's the reality of the role I play in this business – the guy who can code and design at the same time. It's why I charge the amounts I do, and sure, the price attracts attention, but in truth if you follow my rules and approach you will come out with a finished result. If you interject along the way with how you think it should be done, fine, I'll do it your way – but if it fails (given the inexperience so far, it will) then, to be fair, you were warned.

My way or the wrong way.

The way I approach situations where I'm brought in at the eleventh hour is via the following routine.

  • The Primitives. In every application you have what I call the primitives: the buttons, modal windows, textboxes, scrollbars, checkboxes etc. – the stuff you get out of the box for free with .NET. My first attack posture is to start building out a resource dictionary library for you to bootstrap your UI against. TextBox and Button controls, for example, I start sorting into what I call the UI shirt sizes: Large, Medium and Small (see the first sketch after this list). If your form requires the user to enter 15 characters min/max, who cares – the end user is open to the idea that this textbox is a small one that magically doesn't let them type more than those pre-defined characters.

    If your software has buttons labelled with whole sentences like "Find a user's profile" – guess what I'm going to do: re-label it "Find…". Keep it simple – less extraneous cognitive load – and assume the user has used software before they picked up yours.

  • The Layout. Chances are you've put together a UI that I can only describe as a DataGrid orgy, followed by copious amounts of modal windows and screens that look like the dashboard of a Qantas jet in terms of fields/inputs. Just for giggles, I'm also likely to find a TreeGrid control serving some random hierarchy-based navigational mutation of a need (you know who you are, and there's no shame in admitting it).

    I'm going to simplify this down to the point where the data flows in a fashion that makes sense to the purpose of the screen. I'm also going to look for ways to use a party trick called "progressive disclosure" – you do want the user to feel like they stand a chance at success should they use your software, don't you?

    This is what I call the hostage negotiation: chances are there is an entity in the room that is locked on the way it works at the moment, and it's my job to get you to release parts of the UI so I can find a happy resolution to the situation. I'm going to ask you to give me a little control over how the UI comes together, and in return I'll turn the lights back on, followed by some pizza. We need to build trust, and you've got to work with me on this one – I can make good on some promises if you do!

  • The Validations. I have seen some crazy ways developers have approached the simple concept of alerting the user that they did something wrong. What I notice most is that it tends to be an OnChange vs OnSubmit approach. The reality is validation isn't just that; you have "Hey, before we show you this form, here's where you need to focus", "Hey, I just noticed you filled out that field wrong, can you fix it?", "Hey, I'm about to send this data off and noticed the form isn't really done yet" and lastly "Hey, I know the form felt fine when you sent it, but the server just called and told me it's wrong, so can you go fix it?". Point is, an IDataErrorInfo implementation will only take you so far (see the second sketch after this list).

    I focus on this area as this is where bugs tend to get raised, and it can be where the most effort is spent trying to undo user failure. It's important to approach this in a way that makes sense to the end user, and to find ways to decode the error meaningfully – not in a way that reduces the user to a dribbling mess of "I don't speak computer geek".

    Validation styling and alert states are crucial.

  • The BackgroundWorker. It's not just about fixing the UI's look and the user-experience fail points; it's also about shifting work into areas that make the application feel snappy. In WPF the UI thread is an absolute pain in the butt when you talk to WCF – I have seen a lot of apps that keep the entire workload on that one thread. In Silverlight this is a fairly low-risk situation given async works ok, but in WPF it means your application grinds to a halt until the service layer comes back from the dead. And it isn't just a threading issue; it's a latency issue as well (see the third sketch after this list).

    Latency is a buzz killer in making the user feel the application is responsive; left unchecked, it creates an effect where users punish themselves and attempt to pay their debt by trying again and again. In situations like this I look for ways to make the user aware they did a good job while NOT reminding them of time – time is the enemy, and every millisecond you are banking hate debt with the user.

    This is where I look for sleight-of-hand techniques to convince the user there isn't a problem and everything in the software is fast and efficient. I may even lie to the user if I can, e.g. "Please wait while Security authenticates you" – damn those Security Nazis, I agree, it sux, but what can you do? It's actually an effective way to pacify users, as you all collectively shake your fists at the IT department for always riding you about security – when in reality I'm waiting for some service to wake up from its slumber.

    Point is, find a common villain to throw under a bus, or find a way to keep people's attention away from their watch (e.g. "Now herding llamas for the great stampede…" – MAXIS do this in their games, and it works).
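To make the primitives point concrete, here's a minimal sketch of the shirt-size idea (in practice this would live in a XAML resource dictionary; the class name and sizes here are hypothetical – the point is one shared set of sizes for every screen to bootstrap against):

    using System.Windows;
    using System.Windows.Controls;

    // Hypothetical "shirt sizes" for TextBox: every screen picks
    // Small/Medium/Large instead of inventing its own widths.
    public static class ShirtSizes
    {
        public static readonly Style SmallTextBox  = MakeTextBoxStyle(80, 15);
        public static readonly Style MediumTextBox = MakeTextBoxStyle(160, 50);
        public static readonly Style LargeTextBox  = MakeTextBoxStyle(320, 255);

        static Style MakeTextBoxStyle(double width, int maxLength)
        {
            var style = new Style(typeof(TextBox));
            style.Setters.Add(new Setter(FrameworkElement.WidthProperty, width));
            // The "magically can't type more than 15 chars" part.
            style.Setters.Add(new Setter(TextBox.MaxLengthProperty, maxLength));
            return style;
        }
    }

Any screen then assigns someTextBox.Style = ShirtSizes.SmallTextBox; and the sizing debate is over.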
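For the validation moments, a hedged sketch of the IDataErrorInfo half (the view-model and rule are hypothetical; a binding with ValidatesOnDataErrors=True surfaces the message). Note this only covers the "you filled that field out wrong" moment – the up-front focus hints and the server round-trip failures still need their own treatment:

    using System.ComponentModel;

    // Hypothetical view-model: the binding engine queries the indexer
    // by property name whenever the bound value changes.
    public class ProfileViewModel : IDataErrorInfo
    {
        public string UserName { get; set; }

        // Per-field error: return null/empty when the field is valid.
        public string this[string propertyName]
        {
            get
            {
                if (propertyName == "UserName" && string.IsNullOrEmpty(UserName))
                    return "User name is required.";
                return null;
            }
        }

        // Object-level error; most bindings never ask for this.
        public string Error
        {
            get { return null; }
        }
    }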
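And for the UI thread, a minimal WPF sketch of pushing a blocking WCF call onto a BackgroundWorker – the proxy, controls and messages are hypothetical stand-ins:

    using System.Collections;
    using System.ComponentModel;

    // Somewhere in a WPF view's code-behind.
    void LoadCustomers()
    {
        var worker = new BackgroundWorker();

        worker.DoWork += (s, e) =>
        {
            // Thread-pool thread: no UI access here. 'proxy' is a
            // hypothetical WCF client with a blocking GetCustomers() call.
            e.Result = proxy.GetCustomers();
        };

        worker.RunWorkerCompleted += (s, e) =>
        {
            // Back on the UI thread: safe to touch controls again.
            if (e.Error == null)
            {
                customerList.ItemsSource = (IEnumerable)e.Result;
                statusText.Text = string.Empty;
            }
            else
            {
                statusText.Text = "The service is having a bad day, try again.";
            }
        };

        // The 'common villain' message goes up before the wait starts.
        statusText.Text = "Checking security credentials...";
        worker.RunWorkerAsync();
    }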

I'll leave it there, as this is turning into a tome of gospel on how I approach my job, but the point is that I do have a process and there is a method to my madness. I've been in a lot of fire drills with WPF/Silverlight and WP7, and I've now settled on some patterns that produce results – nice UI/UX and happy customers.

The reality is this, though: you could have saved yourself a minimum of double – up to quadruple – the money it cost by bringing me in early instead of late. I can't say it enough: engage early and upfront and you will save. You may be skeptical, that's fine, but either way a person like me gets paid – it's just a matter of how much.

Related Posts:

UX Creator Tip: Fear the surrogate user.

image

Ever sat on a project and heard someone give their account of why the user base won't like feature xyz or a UI change? Ever sat in a cubicle and listened to someone rail against the idea of change for fear it would upset the user base to the point where the helpdesk would be flooded with "please explain" calls?

Surrogate User – people used as a substitute or representative for users, in order to provide information in design meetings, user testing, and so forth.

The reality is this: end users are surprising beasts and will often surprise you with what they can and can't do. The end user, especially in the enterprise, is so used to crap-tac-ula software day in, day out that almost anything you do from today onwards is highly likely to be much simpler than what they are used to (especially given the consumerization of the enterprise these days). Furthermore, should they dislike the software, they aren't likely to abandon their jobs over one bad UX decision – nine times out of ten they are already under duress from crappy software decisions made by other teams anyway.

Instead, the end user is probably as thirsty as ever for software that feels simpler to use and actually looks like someone took the time to think about them and their needs, instead of how it solves one finite problem only. Software's job is to react to the end user, not to make the user react to it! 🙂

End users also use a variety of software, so while one particular UI pattern may be "the way they are used to today", that doesn't mean they are ignorant of every other UI pattern out there on the market.

The key is to leverage existing muscle memory as much as you can, rather than showing off what you can do with the UX platforms at your disposal. Be creative, but don't be overly creative – you get no points for showing off.

Layer in complexity is what I always tell people, as it's much harder to reduce complexity later than it is to bring it in slowly. It also leads to the best discussions: if the business or end users complain that the software is too simple – and let's be clear, I dream of those discussions – then you have a baseline to draw from going forward for feature weighting and selection (which plays into UX + Agile in a way).

The surrogate user is someone you should fear in all software projects, as they often carry pre-existing bad habits forward and suffer from "I'm in touch with my audience" arrogance (sometimes without realizing it). I'm told that a surrogate user, when done right, works; I'm yet to see one of these unicorn beasts, but I'm told just the same.

Whenever I hear someone say "users don't like…", my first instinct is to respond, "Oh? Did 1 in 5 housewives tell you that, or is this something you're just assuming?" – meaning, is this "I think" or "I know"?

Surrogate users are dangerous unless they are moderated by someone with "UX" buried somewhere in their resume, as such a person can often separate the personal bias from the science of what these entities represent.

Related Posts:

Your own Multi-touch Surface Prototype-board-thingy.

Today it hit me that a co-worker and I have been using a mini-whiteboard in a way that could easily serve to prototype Surface-style applications or ideas you may have.

image

Let me explain: there is a partition between my co-worker and me which used to be a locker of some sort (no idea, actually). It's pretty high, and given it sits between the two of us it has suddenly become our meet/greet tabletop.

On top of this partition is a mini-whiteboard (approx. 50cm wide and 50cm high), and it has fast become our "hey, got an idea for the UI" or "hey, need some help with OOP composition" meeting point. We typically sketch out our problem/idea and then proceed to – yes, you heard it here first – communicate with one another about what's possible.

We jokingly call it the poor man's Surface table, but in reality, if I were to work on a Surface/iPad-style application, it's this little whiteboard I'd love to have in the room. I can sketch out ideas and walk others through them in old-skool pen + paper mode, yet iterate more quickly than "can you hand me that eraser" or "let me get another sheet of paper".

Anyway, thought I’d share this little eco-friendly prototyping tool for all to think about.

image

Related Posts:

Silverlight Installation / Preloader Experience – BarnesStyle.

When I was on the Silverlight product team, I had many visions of where I wanted to take the product, beyond what some of my teammates were comfortable with (slow, painful, incremental change).

One of the main focal areas I wanted to fix was the overall installation and preloading experience for Silverlight. I think it's essentially the Iraq war of software (meaning it's so far embedded now that fixing it is going to take generations of change).

Here is how I’d love to see it change course.

Change the way Silverlight Bootstraps.

If you new-up a project within Visual Studio or Expression Blend, you effectively get an automated bootstrapped solution, meaning inside your main Silverlight project, via App.xaml.cs for example, you should see something like this:



    private void Application_Startup(object sender, StartupEventArgs e)
    {
        // Whatever you assign to RootVisual becomes the root of the
        // application's entire visual tree – and it can only be set once.
        this.RootVisual = new MainPage();
    }

What is effectively happening here is that the Application class is the default root for Silverlight, and when you inject MainPage() into RootVisual it's pretty much the same as if you went:


    // Conceptually the same idea: the root is just a container
    // whose content is your first "real" page.
    UserControl rootControl = new UserControl();
    rootControl.Content = new MainPage();

What I would love to see, firstly, is a separate project called "BootStrapper" created as part of the new-up project template – that, or it prompts you to create one, much like it currently does with an ASP.NET website (more on that below).

The point is, it would draw the attention of developers around the world to the fact that the spinning balls are a really bad idea to hand out to public-facing websites.

Why are they bad you may ask?

It has to do with the way end users approach your experience. Assuming they have Silverlight in place, it's important that you give them some clues as to what they are loading and how long it's likely to take – or, more to the point, whether it's going to take forever.

Impatience is a virtue all users have, so it's going to be very hit-or-miss depending on the context of your application's expected usage, the end user's broadband connection, and their tolerance for plug-in experiences in general (I counted about five failure variables per user when I did some research on this back at Microsoft).

The rotating balls don't offer much value; there's nothing to keep you entertained or interested in the experience other than balls rotating and a percentage of where you're at.

Soliciting the end users.

Just like a hooker, your job is to entice the person before you to take faith in the hopeful reality that this will be an experience to remember (ok, that analogy just took a nose dive in a very bad way). Your job is firstly to convince the end user to install Silverlight should it not be in place, and secondly – just as importantly – to convince the end user that sticking around is worthwhile SHOULD they already have Silverlight installed.

You first need your webpage to say "you don't have Silverlight, go get it, and here's what you'll get in return" rather than the dreaded "Get Silverlight" medallion.

To illustrate the importance: when I was at Microsoft, we noticed an increase in Silverlight installations on Microsoft properties when we actively went out of our way to solicit end users to install, versus the default "Get Silverlight" medallion. Information is power, and users want power as much as the next person – the power of choice.

Once they jump that hurdle, you need to keep their attention on you and convince them to avoid the temptation of alt-tabbing and twittering while they wait – think of every end user as having a three-year-old's attention span and you will be better positioned for success here.

You need to create a preloading experience that is as helpful and joyful as the intended experience you've just spent thousands of dollars creating (why drop the ball at the last yard! – for you NFL fans).

Here you create something that fits the theme, or take a page out of MAXIS' games, where they insert random crap that's quite funny – for example:

"…Initializing launch codes for anti-nuclear attack…"

"…Growing llamas feet so it can walk…"

"…Handing a monkey a nail gun for entertainment value…"

Keep them informed, but not too informed – you want to balance keeping them in the loop against making them aware of "time", because time is the enemy. I've even lied once, due to a latency hit I couldn't avoid, and put "Checking Security Credentials" in the initializing splash screen (I found end users were more likely to wait for something serious like security validation than to keep staring at rotating balls of stupidity).

That all aside, this is the "why": both preloading/splash screens and install templates are critical to Silverlight's future success, as these are what end users judge the technology on (do I need to bring up the "Skip Intro" debacle of the early 2000s, where Flash intros were all the rage and bad Flash experiences occurred as a result?).

First: Install Templates.

Imagine, if you will, that you new-up a Silverlight project. You're asked, obviously, what type of project you require, and then the next step prompts you with the below:

image

You then choose your install template, which can be either an online or a local template (more on Silverlight Marketplace potential later). Once you select the template, it takes a vanilla themed experience and injects it into your MySilverlightProject.BootStrapper project. You, as a developer and/or designer, can then focus on swapping out the assets and messaging to suit your brand and intended experience context (much like the larger brands have done with Silverlight today – e.g. MSNBC).

Second: Preloaders/Splash Screens.

Same approach as the install templates, except it automatically attaches your original Silverlight project as the "first" to load (with enough breadcrumbs in code that you can swap this out should you choose to).

image

Once you have gone through these three templates, your solution should have three projects in place.

  • Project1 – MyProject.Silverlight.BootStrapper

    This project's job is to handle the preloading of Project2. To preload, you first need a project small enough for Silverlight to load almost instantly; once it's loaded, it can bring down the .XAP file (the secondary but main project) in a more controlled and aesthetically pleasing manner (a sketch of this follows the list).

  • Project2 – MyProject.Silverlight

    This is the project you originally intended to build – the exact same structure(s) as you have today in Silverlight.

  • Project3 – MyProject.Silverlight.Web

    This is the project that is in place today, automatically generating the ASP.NET / HTML project code you need to test with – except it also injects a set of files/scripts that handle the "does the end user have Silverlight?" check and, based on that Boolean result, produce a prompt that goes beyond the "Get Silverlight" medallion.
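For what Project1 would actually do, here's a hedged sketch of the standard dynamic-XAP trick in Silverlight: download the main XAP with WebClient, pull the entry assembly out of it, and swap the real UI in. All project/assembly/type names are the hypothetical ones from above, and the bootstrapper's RootVisual is assumed to be a Grid ('layoutRoot') showing the preloader and a ProgressBar ('progressBar'):

    using System;
    using System.Net;
    using System.Reflection;
    using System.Windows;
    using System.Windows.Controls;
    using System.Windows.Resources;

    // Inside the bootstrapper's preloader page.
    void BeginLoadingMainXap()
    {
        var client = new WebClient();

        client.DownloadProgressChanged += (s, e) =>
        {
            // Real progress to show instead of spinning balls.
            progressBar.Value = e.ProgressPercentage;
        };

        client.OpenReadCompleted += (s, e) =>
        {
            // Treat the downloaded XAP as a resource container...
            var xap = new StreamResourceInfo(e.Result, null);

            // ...pull the entry assembly out of it by name...
            StreamResourceInfo dll = Application.GetResourceStream(
                xap, new Uri("MyProject.Silverlight.dll", UriKind.Relative));
            Assembly assembly = new AssemblyPart().Load(dll.Stream);

            // ...then instantiate the real MainPage and swap it in.
            var mainPage = (UIElement)assembly.CreateInstance("MyProject.MainPage");
            layoutRoot.Children.Clear();
            layoutRoot.Children.Add(mainPage);
        };

        client.OpenReadAsync(new Uri("MyProject.Silverlight.xap", UriKind.Relative));
    }

The template's job would simply be to generate this plumbing so nobody ships the spinning balls by accident.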

The Marketplace.

Ok, you can technically write a VS template or a WPF/WinForms app today to do the above without having to bug Microsoft (I've started and stopped three times, stopping only out of boredom or busyness). Why this needs to come from Microsoft is, simply put, the Marketplace.

We should have a concept where we can buy/sell themes, behaviors, preloaders and install templates from one another, whether by cash, Xbox Live points or whatever currency you want to barter with. The point is, we should foster an exchange-based community that is more consolidated and branded under a single point of entry for both Silverlight and Expression (say NO to Expression and Silverlight/WPF segregation – designers and developers need to cross-pollinate).

I'd love to see a concept similar to preloaders.net and scalenine.com for the Silverlight community, only less fragmented and with a much smoother tooling-integration experience (I'll come back and work at Microsoft if need be to make this happen).

Summary.

I'd like to see us as a community leapfrog the Flash community in handling these two experiences. The illustration below highlights the fatigue gates associated with any plug-in experience.

image

Why leapfrog Flash? It's nothing to do with their community; it has to do with learning from their mistakes. At the moment the Flash folks have figured this out and have a bunch of strategies (however fragmented) in place to fix this broken situation. We, on the other hand, are like the red-headed stepchild twice removed when it comes to picking up on this, and it irks… IRKS… me (for I am IRKED) to see the rotating splash balls and the "Get Silverlight" medallion – which, incidentally, were just placeholder animations and images that someone forgot to come back and replace.

Fix this and we drive Silverlight installation completions up by a minimum of 20% per month – I guarantee you that much. It will remove the majority of friction associated with Silverlight and drive a much deeper awareness of the product amongst consumers who aren't reading the blogosphere for "What is Silverlight?"

"What is Silverlight?" is still a question being asked a lot today. It's one thing to answer it; it's another to attach friction to a user's experience of the product – via bad preloading/installation experiences that sit OUTSIDE of Silverlight itself – once they've found a satisfactory answer to that question.

This is both a Microsoft and Community problem that needs immediate resolution.

Call to Action: Contact Microsoft and hammer away at this issue; get more of a community groundswell behind it so we can all move forward. I remember that inside the team, community reaction was one thing we would often use to trigger emails with one another about why change is important.

Vote here so this can be escalated to the Silverlight Feature planning team! – : http://dotnet.uservoice.com/forums/4325-silverlight-feature-suggestions/suggestions/632735-silverlight-installation-and-preloader-experience-

Related Posts: