The Xamarin & Microsoft merger may yet prove useful to designers.

The .NET community has been fractured for quite some time when it comes to mobile development, and a large amount of hate debt has been banked as a result. Products like Xamarin have earned their adoption because they offer a more agnostic vision of how .NET could work in a truly x-platform / x-device arena.

However, the approach to date isn’t an easy stroll down success lane, as to develop a mobile app even with Xamarin you’re faced with a choice from the outset: Xamarin “native” or Xamarin “Forms”, each with its own set of pros and cons from a purely “developer-centric” perspective.

The next decision after that is how you design for three platforms (*maybe two*) and still retain constancy – yes, I said constancy, not consistency. Designing apps to work on iPhone is different from how they work on Android – but only up to a specific context (as tradeoffs and split thinking then naturally occur).

In order to achieve this, you essentially have to begin the same set of compromises you would make with the web, forking your feature design/development vision to accommodate and absorb the various limitations imposed by each platform, plus the restraints Xamarin imposes on top (i.e. there’s an element of decay implied).

To compound issues further, you then have Xamarin not really adhering to the previous iterations of XAML (aka Avalon), and whilst it looks kind of like XAML, it’s really in many ways just XML with limitations (i.e. you can’t really animate with it using the same Storyboard composition you once had with Silverlight/WPF, and so on). Xamarin’s XAML looks like the panacea we want, but it isn’t the same.
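To make that gap concrete, here’s a minimal sketch (my own illustrative example, not anything from Xamarin’s docs or roadmap). In Silverlight/WPF you’d declare the animation as a Storyboard in XAML; Xamarin.Forms XAML has no Storyboard element, so the rough equivalent gets composed in C# via the ViewExtensions animation methods:

```csharp
using System.Threading.Tasks;
using Xamarin.Forms;

public static class PulseAnimations
{
    // What was once declarative in WPF/Silverlight XAML:
    //   <Storyboard>
    //     <DoubleAnimation Storyboard.TargetProperty="Opacity"
    //                      From="1" To="0.3" Duration="0:0:0.25" AutoReverse="True"/>
    //   </Storyboard>
    // ...has to be composed imperatively in Xamarin.Forms:
    public static async Task PulseAsync(View view)
    {
        await view.FadeTo(0.3, 250, Easing.Linear); // fade out over 250ms
        await view.FadeTo(1.0, 250, Easing.Linear); // and back again
    }
}
```

Functional, sure, but it’s developer territory – there’s nothing there a designer can open in a tool and tweak.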

Now you have to programmatically design your composition with either a designer’s comps on your second monitor as a guide or, worse, the designer over your shoulder offering feedback-loop hell.

Xamarin failed, so abandon it?

Hell no. Xamarin has all the ingredients one would need to really get the .NET x-platform / x-device story going; in fact, I’m more frustrated at the execution after the platform shipped than at its original foundation. The secondary parts above can easily be fixed provided there’s some stronger thinking about how “creative influence” applies to the composition of design – that is to say, at what point does the designer get free control over composition without haggling with a developer over limitations artificially imposed by what I can only guess are resource-allocation issues on Xamarin’s part.

This, in turn, means one would need to approach the composition of a Xamarin vNext with the intent of using the XAML/C# marriage the way the .NET gods intended. That is to say, if you took the same conceptual develop/design pipeline that .appx or .xap has today and applied it to mobile development, you would unite the developer & designer workflow under the one constancy-based banner, which in turn reduces feature editing / design cut-aways.

Why is this important?

In 2007, we were faced with a mission to get designers more engaged with developers, and that’s why Silverlight/WPF was born. We had a small amount of success, but in truth we were side-tracked by conflicting priorities and poor management from really digging in on that same set of problems. Today the various technical platforms have shifted, but the core fundamental issue hasn’t gone away; in fact, it’s gotten smarter about how the two worlds collide – sadly, Microsoft has never really gotten an invite to that discussion due to its retreat positioning.

Microsoft’s answer, in general, has been to remove the designer from the equation given its complexity; instead, they gave developers a cookie-cutter template titled “metro/modern UI design” (aka paint-by-numbers developer art), thinking that if you reduce the composition of design to basic minimal aesthetics, you in turn reduce the burden or need to have a designer influence the creative process.

That strategy is an utter failure, and I’d promote the theory that the reason Windows Phone has failed as a product is solely the UI (the phone hardware is perfect, the development SDK is the easiest by far, but the design integration… too boring, too hard).

The Xamarin merger with Microsoft now has the potential to reboot the company’s mobile strategy in a way it needs more than ever before. However, if the two worlds continue to solely double down on “developers, developers, developers” without factoring in “designers, designers, designers”, all we have really achieved is a licence-model reduction, better Visual Studio support and a stronger echo chamber – but still a designer stalemate, resulting in continued “developer-only” circle-jerk sessions.


Being Playful with Industrial Software

I’ve been sitting in the Enterprise space as a UX mercenary for probably around 5+ years. In every team, sales meeting and brainstorming session I’ve always encountered resistance around “maturity” in terms of design. The more money being spent on the software, the more “serious” the design should be. This line of thinking typically comes from the concern that if the design is not serious, trust in its ability to do the various task(s) will be eroded.

The thing is, the more sales meetings I’ve been in or helped prepare for, the more I’ve come to the conclusion that “design” isn’t even a bullet point in the overall sales pipeline. Sure, design makes an appearance at the brochure / demo level, but overall nobody really sits down and discusses how the software should look or feel during the process. Furthermore, the client(s) have typically invited the sales team(s) onto the selection panel(s) based off their existing brand, known or rumoured capabilities and/or because they are legally required to.

To my way of thinking, being “playful” with your design is a very unnerving discussion to have in such a scenario. The moment you say the word “playful”, most people respond with some word association, positive or negative (usually negative), as the word may take you back to your childhood (playing with Lego or dolls… I didn’t play with dolls, they were called G.I. JOEs!). It’s that hint of immaturity in the word that makes it more appealing to me, as it forces you to think about maturity within the constraints of immaturity (cognitive dissonance).

Playful, however, doesn’t have to be immature; there are very subtle ways to invoke the feeling of something playful without being obvious about it. For example, Google+ and most of Google’s new branding is what I’d consider “playful”, yet the product(s) or body of work that goes into their solutions is quite serious.

Playful Mood Board

Why be playful? My working theory is that the reason users find software “unusable” has to do with confidence and incentive. If these two entities don’t fuel their usage furnace, the overall behaviour around usage decays; that is, they begin to taper off and reduce it to an absolute “use at minimum” behavioural pattern. This theory is what I would class as being at the heart of invoking “emotion” or “feeling” into how software is made, and is often why a lot of UX practitioners will preach that these two should be taken quite seriously in the design process.

The art of being playful in a way regresses adults back to their childhood, where they were encouraged to draw, build and decorate inanimate object(s) without consequences attached. As a child, you were encouraged to fail; you were given a blank piece of paper and asked to express your ideas without being reprimanded. You, in short, “designed” without the fear of getting it wrong or, for that matter, right (although right was usually rewarded with your art piece being put on the fridge at home or something along those lines). A playful design composition can be both serious and inviting, as a good design will make you feel as if you’re “home” again. A great design will make that temporary break away into other software and back again an obvious confidence switch – as if you’re saying out loud, “gah! that was a horrible experience, but I’m back to this app… man… it feels good to be home, and why can’t other software be like this?”





Don’t be a clone, be different.

It’s been roughly a week or so now since I got my Windows Phone 8 iPhone clone – I mean, Nokia Lumia 920 (it was a joke, relax).

The phone itself is quite large, and that for me isn’t an issue, except I find my thumbs don’t get as much surface coverage on either side of the phone. The battery life is nice, but the overall user experience within the phone drives me mad.

The camera, for instance, was annoying: when it came time to take a photo, I had forgotten I had the setting on close-up, so my shot of choice came out blurry. It took a while for me to remember that the setting had been changed, as there was no visual indication that the said phone was in a particular mode – as if having an icon on display all the time was a failure Nokia wouldn’t tolerate (you failed me, Nokia).

There are a lot of other settings that also drive me crazy, and I could list the positives & negatives all day (still trying to sort through my emotions on whether this phone will last or go). However, the one most crucial thing I dislike about the experience is the App Store clones.

What I mean to say is, despite the various ups & downs that come with the actual phone – which I can live with – the one piece of this equation is just how immature and terrible the applications on offer within the Microsoft store are. It’s like all the other kids (iPhone/Android) are riding dirtbikes, but your parents give you a new BMX bike (Windows Phone 8) with a fake muffler attached.

I’m struggling, even as I type this, to come up with examples of great apps, the ones you cannot live without. The only application I find actually useful and fairly well designed is Skype. I found the Twitter apps to be half-done, broken and prone to “an error has occurred” status messages, or the worst offender of all – the official Facebook app (which feels like it was written by a first-year programming intern). These are the two applications a smartphone today must own in terms of unique experience, as these I’d argue are probably the most frequently used outside email (would it kill the design team to use a bold font to indicate unread emails, btw? And text messaging + threads… really… threads? What is this, a texting forum?).

There is much I’d tolerate about owning this phone, but looking at my iPhone apps sitting idle and then staring at my Windows Phone, I can’t help but develop buyer’s remorse at the moment. I miss my Instagram, Twitter, Flipboard, Facebook (yes, even the iPhone Facebook app), games, XBMC remote, ANZ Bank, and the list goes on and on.

There are really only two applications within the Windows Phone 8 marketplace that stand out for me – Qantas and ZARA. The Qantas app is still a bit flat, but it looks different enough to give it a pass, whilst the ZARA app (fashion) looks elegant / tastefully done – even though I have zero use for it, I can appreciate its design.

My underlying point is this: I want to keep using this phone, I want to get off the iPhone crack and try new things, but if you keep rinsing & repeating the same stupid template-driven applications whilst touting “I’m being authentically digital”, then you are in turn killing yourselves more than my experience.

If this phone has a chance of success, it’s going to come down to development teams engaging a designer and throwing out Microsoft’s Windows Phone 8 “Design Guidelines”.

Microsoft doesn’t have a consistent, coherent clue as to what good design is, and has repeatedly shown it can’t lock onto the concept itself. It relies heavily on design agencies, contractors and partners to do the majority of the actual design for the solutions it “makes”.

There are currently 90+ designers on the Windows Phone 8 “team”, and I ask a simple question – what the f**k are you all doing? You’re not helping the community & marketplace, that’s for sure.

So please hire a designer today.


Digital Skeuomorphism decoded.

There seems to be an undercurrent of contempt towards digital skeuomorphism – the art of taking real-world subject material and dragging it kicking & screaming into your current UI design(s) (if you’re an iPad designer, mostly).

I’ve personally sat on the fence on this subject, as I see merit in both sides of the argument: those who believe it’s gotten out of hand vs. those who swear it’s the right mix to help people navigate UX complexity.




Here’s what I know.

I know personally that the human mind is much faster at decoding patterns that involve depth and mixed amounts of colour (to a degree). I know that while sight is one of our sensory radars working 24/7, it is also one that often scans ahead for known pattern(s) to then decode at sub-millisecond speeds.

I know we often think in analogies when we are trying to convey a message or point. I know designers scour the internet and use a variety of mediums (real-life subject matter and other people’s designs) to help them organise their thoughts / mojo onto a blank canvas.

Finally, I know that design propositions like the monochrome existence of Metro have created an area of conflict – like vs. dislike – in comparison to the rest of the web, which opts to ignore the principles laid out by Microsoft’s design team(s).

Here’s what I think.

I think the Apple design community’s habit of taking unrealistic-but-realistic concepts and applying them to their UI designs is more helpful than hurtful. I say this as it seems to not only work but solves a need – despite the hordes mocking its existence.

I know I have personally gone my entire life without grabbing an envelope, a photo and a paperclip and attaching them together prior to writing a letter to a friend.

Yet there is a user interface out there in the iPad App Store that is probably using this exact concept to coach the user that they are, in fact, writing a digital letter to someone, with a visual attachment paper-clipped to the fake envelope it will get sent in.


Why is this a bad idea?

For one, it’s not realistic, and it can easily turn a concept into a Fisher-Price existence quite fast. Secondly, it taps into the same ridiculous faux-UI existence commonly found in a lot of movies today (you know the ones, where a hacker worms his way into the bank’s mainframe with lots of 3D visuals to illustrate how he/she is able to overcome complex security protocols).

It’s bad simply for those two reasons.

It’s also good for those two reasons. Let’s face it: the more confidence we can build in end-users by attaching real-life analogies or metaphors to a variety of software problems, the less they are preoccupied with building large amounts of unnecessary muscle in their ability to decode patterns via spatial cognition.

Here’s who I think is right.

Apple and Microsoft are on different voyages of discovery, and both are likely to create havoc among the end-user base around which of the two is the better option – digitally authentic or digitally unauthentic.

It doesn’t matter in the end who wins; given both have created this path, it’s fair to say that the average user out there is now going to be tuned into both creative outputs. As such, there is no such thing as a virgin user when it comes to these design models.

I would, however, say out loud that when it comes down to cognitive load on the end user, between application(s) that opt for a Metro-like vs. an Apple iPad-like solution, the iPad should by rights win that argument.

The reason being, our ability to scan the associated pattern with the faux design model works in the end user’s favour, much the same way it does when you watch 30 seconds of a hacker busting their way into a mainframe.

The faux design approach will work for depth engagement, but here’s the funny and wonderful thought that I think will fester beyond this post for many.

Ever notice how the UI designs in movies opt for a flat, “metro”-like monochrome existence that at first makes you go “oh my, that’s amazing CG!”? Yet if you play with it for a long period of time, the wow factor begins to taper off fast.


I don’t have the answers on either side here, and it’s all based off my own opinion and second-hand research. I can tell you, though: sex sells, we do judge a book by its cover, and I think what makes iPad apps appeal to many is simply attractive bias in full flight.

Before I leave you with that last thought, I will say that over time I’ve seen quite a lot of iPad applications use wood textures throughout their designs. I’d love to explore the psychology of why that recurs, as I wonder if it has to do with some primitive design DNA of some sort.


Here’s some research that hints at this space [Click here].


What if you had to build software for undiagnosed adults who suffer from autism, Asperger’s, dyslexia, ADHD, etc.?

Software today is definitely getting more and more user-experience focused; that is to say, we are preoccupied with getting minimal designs into the hands of adults for task-driven operations. We pride ourselves on knowing how the human mind works through various blog-based theories on how one should design the user interface and what the likely initial reaction of the adult behind the computer will be.

The number of conferences I’ve personally attended where a person or person(s) on stage touts the latest and greatest in cognitive-science buzzword bingo, followed by best practices in software design, is, well, too many.

On a personal level, my son has a rare chromosome disorder called Trisomy 8. It’s quite an unexplored condition, and I’ve pretty much spent the last eight years interacting with medical professionals who touch on not just the psychology of humans but zero in on the way our brains form over time.

In the last eight years of my research, I have learnt quite a lot about how the human mind works, specifically how we react to information and, more importantly, our ability to cope with change – which isn’t just about our environments but also plays a role in software.

I’ve personally read research papers that explore the impacts of society’s current structure on future generations and, more importantly, how our macro and micro environments play a role in the children of tomorrow coping with change and learning at the same time. That is to say, we adults cope with the emerging technology advancements because for us “it’s about time”, but for today’s child of 5-8 this is a huge problem: they have to manifest coping skills to deal with a fluid technology adoption that often doesn’t make sense.

Yesterday we didn’t have NUI; today we do. Icons with a 3.5” floppy disc that represent “saving” have no meaning to my son, etc.

The list goes on as to just how rapidly we are changing our environments and, more importantly, how adults who haven’t formulated the necessary social skills to realistically control the way our children are parented often rely on technology as the de-facto teacher or leader (the number of insights I’ve read on how the XBOX 360 has become the babysitter in households is scary).

Getting back to the topic at hand: what if the people you are designing software for have an undiagnosed mental illness – or, better yet, are diagnosed? How would you design tomorrow’s user interface to cope with this dramatic new piece of evidence? To you, the minimal design works; it seems fresh and clear and has definitive boundaries established.

To an adult suffering from Type-6 ADHD (if you believe in it), with a degree of over-focus, it’s not enough; in fact, it could have the opposite effect of what you are trying to do in your design composition.

Autism also has a role: grid formation in design would obviously appeal to autistic traits, given it’s a pattern they can lock onto and often agree with – Asperger’s sufferers may disagree with it, and it could annoy or irritate them in some way (colour choice, too much movement, blah blah).

Who is to say your designs work? If you ask people on the street a series of questions and observe their reactions, you are not really providing an insight into how the human mind reacts to computer interaction. You’ve automatically failed as a clinical trial, as the person on the street isn’t just a normal adult; there’s a whole pedigree of historical information you’re not factoring into the study that is relevant.

At the end of the day, the real heart of HCI here – and this is my working theory – is that we formulate our expectations around software design from our own personal development. That is to say, if we as children had a normal or above-average IQ in terms of the ability to understand patterns and to cope with change, we in turn are more likely to adapt to both “well-designed” and “poorly designed” software.

That is to say, when you sit down and think about an adult and how they react to tomorrow’s software, you have to really think about the journey that adult has taken to arrive at this point – more to the point, how easily they are influenced.

A child who came from a broken home, whose parents left and who was raised by other adults, and who is now a receptionist within a company, is more likely to have absolutely no confidence in making decisions. That person is now an easy mark for someone who has the opposite background and can easily sway them towards adoption and change.

Put those two people into a clinical trial around how the next piece of software you are going to roll out for a company works, run them through the various tests, and watch what happens.

Most tests in UX / HCI focus on the ability of the candidate to make their way through the digital maze in order to get the cheese (basic principles around reward / recognition). So, to be fair, it’s really about how the human mind can navigate a series of patterns to arrive at a result (positive / negative) and, furthermore, how the said humans can recall that information at a later date (memory recall meets muscle memory).

These styles of results will tell you, I guess, the amount of friction associated with your change, and give you a score / credit around the impact it will likely have. In reality, what you probably need to do as an industry is zero in on how aggressively you can decrease the friction levels associated with change before the person arrives at the console.

How do you give Jenny the receptionist, who came from an abusive childhood, enough confidence to tackle a product like Autodesk Maya (which is largely complex)? If you were to sit down with Jenny, you might learn that she also has a creative component that isn’t obvious to all – it’s her way of medicating the abuse through design.

How do you get Jack the stockbroker, who spends most of his time jacked on speed/coke and caffeine, to focus long enough to read the information in front of him through data-visualisation metaphors / methodologies, when the decisions he makes could impact the future of the global financial system(s) worldwide? (OK, a bit extreme, but you get my point.)

It’s amazing how much we as a software industry separate normal from abnormal when it comes to software design. It is also amazing how we look at our personas in the design process and attach the basic “this is Mike, he likes Facebook” fluffy profiling. What you may find even more interesting is that Mike may like Facebook, but in his downtime he likes to put fireworks in cats’ butts and set them on fire, because he has this weird fascination with making things suffer – which is probably why Mike now runs ORACLE’s user experience program.

The persona, in my view, is simply the design team having a massive dose of confirmation bias. When you sit down and read research paper after research paper on how a child sees the world – which later helps define him/her as an adult – well… in my view, my designs take on a completely new shift in thinking in how I approach them.

My son has been tested numerous times and has been given an IQ of around 135, which, depending on how you look at it, puts him at around genius level. The problem, though, is that my son can’t focus or pay full attention to things, and relies heavily on patterns to speed through tasks, yet at the same time he’s not aware of how he did it.

Imagine designing software for him. I do, daily, and have to help him figure out life; it has taught me so much in the process.

Metro UI vs. Apple iOS 5… pft, don’t get me started on this subject, as both have an amazing mix of pros and cons.


Decoding Windows 8 UX Principles – Let Context Breathe Instead of the UI!

Last night I was sitting in a child psychologist’s office watching my son undergo a whole heap of cognitive testing (given he has a rare condition called Trisomy 8 Mosaicism), and in that moment I had what others would call a “flash” or “epiphany” (i.e. the theory is we get ideas based on a network of ideas that pre-existed).

The flash came about from watching my son do a few Perceptual Reasoning Index tests. The idea in these tests is to show a group of imagery (grid form), and the child has to basically assign semantic similarities between the images (ball, bat, fridge, dog, plane would translate to ball and bat being the semantic pair).

This was one of those aha! moments for me. You see, when I first saw the Windows 8 opening screen of boxes / tiles, presented with a mixed message around letting the user interface “breathe” combined with ensuring a uniform grid / golden-ratio style rant… I just didn’t like it.

There was something about this approach that I just instantly disliked. Was it because I was jaded? Was it because I wanted more? There was something I didn’t get about it.


Over the past few days I’ve thought more about what I don’t like about it, and the most obvious reaction I had was around the fact that we’re going to rely on imagery to process which apps to load and not load. Think about that: you are now going to have images, some static whilst others animated, to help you gauge which one of these elements you need to touch / mouse-click in order to load?

Re-imagining or re-engineering the problem?

This isn’t re-imagining the problem; it’s simply taking a broken concept from Apple and making it bigger, so instead of icons we now have bigger imagery to process.

Just like my son, you’re now being attacked at the perceptual-reasoning level on which of these “items are the same or similar”, and given we also have full control over how these boxes are to be clustered, we in turn will put our own internal taxonomy into play here as well… arrghh…

Now I’m starting to formulate an opinion that the grid-box layout approach not only doesn’t solve the problem, but is probably a lurking usability issue (more testing needs to be had and proven here, I think).

OK, I’ve arrived at a conscious opinion on why I don’t like the front screen; now what? The more I thought about it, the more I kept coming back to the question – “Why do we have apps, and why do we cluster them on screens like this?”

The answer isn’t just a prospective-memory rationale; the answer really lies in the context in which we as humans lean on software for our daily activities. Context is the thread we need to explore on this screen, not “look, I can move apps around and dock them”. That’s part of the equation, but in reality all you are doing is mucking around with grouping information or data once you’ve isolated the context to an area of comfort – that, or you’re still hunting / exploring for the said data and aren’t quite ready to release (in short, you’re accessing information in working memory and processing the results in real time).

As the idea began to brew, I thought back to sources of inspiration – the user interfaces I have loved and continue to love, the ones that get my design mojo happening. User interfaces such as the one that I think captures the concept of Metro better than what Microsoft has produced today – the Microsoft Health / Productivity video(s).


Back to the Fantasy UI for Inspiration

If you analyze the attractive elements within these videos, what do you notice the most? For me, it’s a number of things.


I notice the fact that the UI is simple and, in a sense, “metro paint-by-numbers”, which despite its basic composition is actually quite well done.


I notice the user interface is never just one composition; the UI appears to react to the context of usage for the person, and not the other way around. Each user interface has a role or approach that carries out a very simplistic answer to a problem, but does so in a way that feels a lot more organic.

In short, I notice context over and over.

I then think back to a user interface design I saw years ago at Adobe MAX. It’s one of my favourites. In this UI, Adobe were showing off what they thought could be the future of entertainment UI: they simply have a search box on screen, up top. The default user interface is somewhat blank, providing a passive “forcing function” on the end user to provide some clues as to what they want.

The user types the word “spid”, as their intent is Spiderman. The user interface reacts to this word, and the entire screen changes to the theme of Spiderman whilst spitting out movies, books, games, etc. – basically, you are overwhelmed with context.
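Mechanically there’s nothing magic here: you can think of it as a prefix query that swaps the entire presentation context rather than just filtering a result list. A toy sketch of the behaviour (the catalogue and its contents are invented purely for illustration):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class ContextualSearchSketch
{
    // Invented mini-catalogue: a title and the media attached to it.
    static readonly Dictionary<string, string[]> Catalogue = new Dictionary<string, string[]>
    {
        ["Spiderman"]     = new[] { "movies", "comics", "games" },
        ["Spirited Away"] = new[] { "movies" },
    };

    static void Main()
    {
        var query = "spid"; // the partial clue the user has typed so far

        // Re-theme the whole screen around the first match, surfacing
        // everything related to it – context, not just search results.
        var match = Catalogue.Keys.FirstOrDefault(
            t => t.StartsWith(query, StringComparison.OrdinalIgnoreCase));

        Console.WriteLine(match == null
            ? "Stay blank - keep forcing the user for clues"
            : $"Re-theme UI as '{match}'; surface its {string.Join(", ", Catalogue[match])}");
    }
}
```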

Crazy huh?

I look at Zune: I type the words “The Fray” and hit search; again, contextual relevance plays a role, and the user interface is now reacting to my clues.


I look back now at the Microsoft Health videos and then back to the Windows 8 screens. The videos are one and the same with Windows 8 in a lot of ways, but the huge difference is that one doesn’t have context; it has apps.

The reality is, most of the apps you have carry semantic data behind them (except games?), so in short, why are we fishing around for “apps” or “hubs” when we should all be re-imagineering the concept of how an operating system of tomorrow, like Windows 8, accommodates a personal level of both taxonomy and contextually driven usage – one that also respects each of our own cognitive processing capabilities?

Now I know why I dislike the Windows 8 user interface. The more I explore this thread, the more I look past the design elements and “wow” effects, and the more I come to the realisation that, in short, this isn’t a work of innovation; it’s simply a case of taking existing broken models on the market today and declaring victory over them because they’re now either bigger or easier to approach from a NUI perspective.

There isn’t much re-imagining going on here; it’s more re-engineering instead. There is a lot of potential here for smarter, more innovative and relevant improvements to the way we will interact with the software of tomorrow.

I gave a talk similar to this at a local Seattle design user group once. Here are the slides; I still think it holds water today, especially in a Windows 8 futures discussion.


How much would you invest in a pixel?

I am a massive fan of World of Warcraft again; yes, it is sad, isn’t it? Last night I was playing my usual allotment of time, watching pixels update on a screen vs. interacting with real humans, and something I witnessed struck a chord with me.

The ‘flash of genius’ for me came when I was playing a typical player-vs-player (PvP) round of Battlegrounds. This is a part of the game that randomly aggregates a group of people spread throughout the entire WoW realm(s) into a 10v10 (etc.) match.

It’s basically unbridled chaos, and it really highlights some components I find fascinating, as watching the herding mentality of us humans in an avatar-driven game is kind of predictable. For instance, I was the first out of the gate when the match started; I rode my horse towards the second-closest spot, and because I was the party leader, many other players followed me. I arrived at the spot and just waited. Out of the six other players, three stayed after 20 seconds or so of decision-making, whilst the other three grew what I can only imagine was bored and rode off in search of a fight.

What is so profound about this is how easily I convinced others to wait beside me in a place with no real plan other than “well, if the sh*t goes down, we defend with our lives” as the core plan. We had no vision of what was about to happen beyond that, and we had no clue as to how we would all work together, as we had just met each other in-game only five minutes beforehand. Here we are, armed, ready to fight and hoping we can figure it out as we go.

We died.

This is much like most teams I’ve been in over the past few years. I keep hearing that a good team is one that is in sync with one another, but in the end that only lasts for the first flag/waypoint, as beyond that a lot of variables occur that in turn cause a de-synchronisation.

In the above example, had I been paired with a healer and another tank/DPS (tank or DPS are basically characters whose sole job is to hit hard and often, while a healer’s job is to keep everyone alive while they do so), we may have stood a chance of survival. We would all have had a role to play, and whilst the plan was distilled into a core class structure, we would still have a series of objectives that must be upheld.

A healer must be protected at all costs, as that character is your tipping point between living and dying, but at the same time a healer must keep back from the fight as much as possible. A tank/DPS’s job is to draw fire and get deep into the melee as much as possible; the more you can tie up your enemy’s focus, the greater the chance of a win.

In software, this concept is not entirely lost, as a UX/UI person’s job is to figure out how to keep the software from dying a bad-usability death. The coders’ job is to underpin it with large amounts of code to keep structural integrity intact; if they do not do their jobs right, they can in turn create more work for the UX/UI person to go fix. If the UI/UX person does not do their job right, they can in turn suffocate the work of the coder – so it is a partnership.

A great software release respects this partnership to the end. Good UI/UX and Good Code = Good software.

If you randomly put together a team of mixed classes and pin your hopes on agile as a way of life, then, well, you are no different to my WoW example: an assumed leader leads a group of you into a spot with no agenda or plan other than “don’t die, please”.

How you live or die is based purely on how fast you can communicate with one another about what tactics you can deploy to uphold this basic principle of preserving one’s life.

All it takes, however, is one person to break ranks, to be the Leeroy Jenkins (see video below), and, well, it comes unstuck, and fast. We all die a horrible, humiliating death (aka miss our deadlines, etc.).


Agile is not enough is my overall point. I think agile works if you are solely focused on being a tank/DPS class (coder). If you mix in UX/UI, then that is where it keeps coming back with mixed results; there appears to be no right or wrong formula here.

The one concept I think works – and it’s only a theory – is that you need at times to stop fighting the code and give enough time for the healer (UX/UI person) to catch their breath, to drink some mana potions if you will, and figure out how to navigate the next fight.

Lost in my metaphor?

What I am trying to say is that UX/UI in a sprint equation needs to occur every other sprint, meaning at some point in the process you will arrive at a point where the coders have to refactor the UI / UX to accommodate the new direction in the design.

It sux.

It is, however, the realistic way to accommodate the reactive design you have put in place, and, to be clear, it has little return on investment other than user efficiency and satisfaction levels.

Now comes the question – how much do you invest in a pixel?

Answer that and you will have a better understanding of agile, UI/UX + code than I currently have. You now need to think in terms of how it all comes together and what value you place on the UI/UX component. Agile won’t necessarily work the way you think when it comes to integrating your healer (UX/UI person) into your battlegroup. At times you may not need them – that is, until you’re hitting a wall and soon realise it would have been better to have them at the start of the fight vs. the end.

I can think of some rebuttals here – ‘well, you are doing agile wrong’ or ‘your team sounds like it wasn’t assembled correctly’ – to which I simply respond: welcome to reality. Sometimes you have to play the game with a randomly aggregated team, and it is not always a case of greenfield project management.

Now, your move: how do you accommodate these variables?


The principles of Microsoft Metro UI decoded

The phrase “authentically digital” makes me want to barf rainbow pixels. It was a quote pulled from a Windows Phone 7 reviewer when he first got hold of the said phone. At first you could arguably rail against the concept of what “authentically digital” means and simply write it off as yet another piece of marketing fluff used to jazz up a situation unnecessarily.

I did, until I sat back and thought about it more.

Issues Presented.

Metro in itself has its own design language attached; it cites a bunch of commandments that the overall experience is to respect and adhere to. That is to say, someone has actually sat down and thought the concept through (rare inside Microsoft UX). I like what the story is pitching, and I agree in most parts with the laws of Metro; that is to say, I am partially on board, but not completely.

I’m on board with what Metro could be, but am not excited about where it’s at right now. I state this as I think the future of software is going through what the fashion industry has done for generations – a cultural rebirth / reboot.

Looking back at retro, not Metro.

Looking at the past: back in the late 80’s, the world was filled with bold, flat-looking user interfaces that made use of a limited colour palette, given the video capabilities back then weren’t exactly the greatest on earth. EGA was all the rage, and we were seeing hints of VGA whilst hating the idea that CGA was our first real cut at graphics.

EGA eventually faded out and we found ourselves in the VGA world (colour TV vs. black-and-white, if you will); life was grand, and with the 32-bit vs. 16-bit colour wars coming to a conclusion, the world’s creative space moved forward in leaps and bounds. Photoshop users found themselves creating some seriously wicked UI, stuff that made you thank the UI gods at the time for plug-ins like Alien Skin’s etc., as they gave birth to what I now call the glow/bevel revolution in user interface design.

Chrome inside software started to take on an interesting approach. I actually think you could trace the origins of this creative new wave back to products like Winamp & Windows Media Player skins. The idea that you could take a few assets, feed them into mainstream products like these, and in turn create an experience on the desktop that wasn’t a typical application was interesting (not to mention Macromedia Director’s influence here either).


I think we all simply got onto a user-interface sugar-induced high; we effectively went through our awkward 80’s fashion stage, where crazy, weird-looking outfits / music etc. were pretty much served up to the world to gorge on. This feast of weird UI has probably started to wind down thanks to the evolution of web applications – more importantly, what they in turn slowly taught us.

Web taught the desktop how to design.

The first lesson we learnt about user-interface design from the web is simple – less is more. Apple knocks this out of the park extremely well, and I’d argue Apple wasn’t its creator; the Web 2.0 crowd, as they used to be known, was. The Web 2.0 crowd found ways to keep the UI basic, to the point, and yet visually engaging, with minimalist views in mind. It worked, and continues to work to this day – even on Apple.com.


Companies like Microsoft have seen this approach to designing user interfaces and came to a fairly swift rationale: if one were to create a platform for developers & designers to work in a fashion much like the web, desktop applications themselves could take on an entirely new approach.

History lesson is over.

I now look at Metro, thinking back on that evolution, and can’t help but think we’re going back to a reboot of the EGA world, in that we are looking for an alternative approach to design in order to attract / differentiate from the past. Innovation is a scarce commodity in today’s software business, so we in turn are looking at ways to re-energise our thinking around software design, but in a way that doesn’t create cognitive overload – be radical, be daring, but don’t be disruptive to process/task.

Inside Microsoft, from what I can presume, the ECG group found a way to hijack existing patterns in terms of user recognition, making use of the modern signage found inside bus stations, railways, elevator marshalling areas etc., and declared this to be the way out of the excess-UI scourge.

I like it, I like this source of inspiration, but my first instinct was simple – I hope your main source of success isn’t a reliance on typography, especially in this 7-second attention economy of today. Sure enough, there it is: the reliance, in Windows Phone 7. Large typography taking over the areas where chrome used to live, in order to do what chrome once did. The removal of colour / boundary textures to create large empty spaces filled with 70px+, half-seen, half-hidden typography is what Microsoft’s vision of tomorrow looks like.

Metro isn’t WP7; Metro is Microsoft’s Future Vision.

My immediate reaction to seeing the phone (before the public did) back inside Microsoft was “are you guys high? This is not what we should be doing; we are close, but keep at it, you’re nearly there! Don’t rush this!”. This reaction was the equivalent of me looking at a Category 5 tornado and demanding it turn around and seek another town to smash to bits – brave, forward-thinking but foolish.

This phone has to ship; it’s already had two code resets; get it done, fix it later – that is pretty much the realistic vision behind Windows Phone 7. NOT Metro.


Take a look at what the Industry Innovation Group has produced via a company called Oh, Hello. In this vision of tomorrow’s software (2019, to be exact) you’ll see a strong reliance on the Metro laws of design.

The Principles of Metro vs. Microsoft Future Vision.

In order to start a conversation around Metro in the near future, one has to identify with the level of thinking associated with its creation. Below are the principles of Metro – more to the point, these are the design objectives and the creative brief, if you will, that one should approach Metro with.

Clean, Light, Open, Fast

  • Feels Fast and Responsive
  • Focus on Primary Tasks
  • Do a Lot with Very Little
  • Fierce Reduction of Unnecessary Elements
  • Delightful Use of Whitespace
  • Full Bleed Canvas

You could essentially distil these points down to one word – minimalist. Take a minimalist approach to your user interface and the rewards are simple: a sense of responsiveness in the user interface, a reliance on less information (which in turn increases decision response in the end user) and a reduction in creative noise (distracting elements that add no value other than being cool at the time).


In Figure 1, I’d strongly argue you could adhere to these principles. This image is from the Microsoft Sustainability video, and inside it you’ve got a situation that respects the concept of Metro – after all, given the wide-open brief here under one principle, you could argue either side of this.

Personally, I find the UI in question approachable. It makes use of a minimalist approach and provides the end user with a central point of focus. Chrome is in place, but it’s not intrusive and isn’t overbearing. Reliance on typography is there, but at the same time it’s approached in a manner that befits the task at hand.


Microsoft’s vision of this principle comes out via the phone user interface above (Figure 2). I’m not convinced this is the right approach to minimalism. I state this as the iconography within the UI is inconsistent – some icons are contained, others are just glyphs indicating state. The containment within the actual message isn’t as clear in terms of spacing; it feels as if the user interface is willing to sacrifice content in order to project who the message is from (Frank Miller). The subject itself has a lower visual priority, along with the attachment within – more to the point, the attachment has no apparent containment line in place to highlight that the message has an attachment.


Microsoft’s original vision of the device future has a different look to where Windows Phone 7 is today. Yet I’d state that the original vision is more in line with the principles than actual Windows Phone 7, as it strikes a balance between the objectives provided.

The iconography is consistent and contained; the typography is balanced and invites the user’s attention to important specifics – what happened, where, and, oh by the way, more below… – and lastly it makes use of visuals, such as the photo of the said person. The UI also leverages the power of peripheral vision to give the user a sense of spatial awareness; it’s subtle, but it takes on the look and feel of an “airport” scenario.

Is this the best UI for a device today? No, but its approach is more in tune with the first principle than, arguably, the current Windows Phone 7 approach, which relies on fierce amounts of whitespace, a reduction in iconography to the point where icons clearly have a secondary role, and an emphasis on the parts of the UI which I’d argue have the lowest importance (i.e. the screen before would have indicated who the message is from; now I’m more focused on what the message is about!).




Celebrate Typography

  • Type is Beautiful, Not Just Legible
  • Clear, Straightforward Information Design
  • Uncompromising Sensitivity to Weight, Balance and Scale

I love a good font as much as the next designer. I hoard them like my icons; in fact, it’s a disease, and if you’re a font lover, a must-see video is Helvetica. That being said, there is a balance between text and imagery, a balance struck daily in a variety of mediums – mainly advertising.

Imagery will grab your attention first, as it taps into a primitive component within your brain, the part that works without you realising it’s working. The reason being, your brain is often on auto-pilot, constantly scanning your everyday environment for patterns. It’s programmed to identify with three primitive checks: fear, food and sex. Imagery can tap into these straight away: if you have an image of an attractive person looking down at a beverage, you can’t help but first think “that person’s cute (attractive bias), and what are they looking at? Oh, it’s food!…” All this happens, despite there being text on the said image, before your brain actually takes time to analyse the image. To put it bluntly, we do judge a book by its cover, with an extreme amount of prejudice. We are shallow; we prefer to view attractive people over ugly ones, unless we are conveying a fear-focused point: “if you smoke, your teeth will turn into this guy’s – eewwww” (notice how anti-cigarette campaigns don’t use attractive people?).

Back to the point at hand: celebrating typography. The flaw in this beast, despite my passion for fonts, is that given we are living in a 7-second attention economy (we scan faster than we ever have before), reliance on typography can be a slippery slope.


In Figure 6, a typical futuristic newspaper that has multi-touch (oh, but I dream), you’ll notice the various levels of typography usage (no secret to newspapers today). The headings purposely approach the user with different font types, font weights, uppercase vs. lowercase and, for those of you out there really paying attention, at times different kerning / spacing.

The point being: typography is actually processed first by your brain as a glyph, a pattern to decode. You’ve all seen that link online somewhere where the word is jumbled in a way that you are at first able to read, but then straight away identify the spelling / order of the said words. The fact I just did it then, along with the poor grammar / spelling within this blog, indicates you agree with that point. You are forgiving the majority of the time towards this, given you’ve established a base understanding of the English language; combine that with your attention span being so fast-paced, and you are more focused on absorbing the information than picking apart how it got to you.

Typography can work in favour of this, but it comes at a price: the balance between imagery / glyphs and words.


The above image (Figure 7) is an example of Metro in the wild. Typography here is in not too bad a shape, except for a few things. The first being the “Pictures” text making use of a large amount of the canvas, to the point where the background image and heading are probably duking it out for your attention. The second part is what irritates me the most: the size of the secondary heading is quite close in scale to the list items. Aside from the font weight being a little bolder, there is no real sense of separation here compared to what it should or could be if one were to respect the principle of celebrating typography.

Is Segoe UI the only font allowed in this vision? I hope not. Are the font weights “light” and “regular” the only two weights attached to the UI? What relevance does the background hold to the area – pictures? OK, flimsy contextual relevance at best, but in comparison to Figure 3 above, a subtle usage of watermarks etc. to tap into your peripheral vision would give you more to grapple onto – pattern-wise, that is. Take these opinions and combine them with the reality that there is no sense of containment, and I’m just not convinced this is in tune with the principle. It’s like the designers of Metro on Windows Phone 7 took 5% of the objectives and just ran with it.


Comparing Figure 7 and Figure 8, the contrast in typography usage is different, yet both use the same one and only font – Segoe UI. The introduction of colour helps you separate the elements within the user interface; the difference in scale is obvious, along with weight and transforms (uppercase / lowercase). Almost 80% of this user interface is typography-driven, yet the difference between the two is, I hope, obvious.


Don’t despair; it’s not all doom and gloom for the Windows Phone 7 future. Figure 9 (above) is probably one of the strongest hints of a “yes!” moment for the said phone I could find. Typography is used, but add visual elements and approach the design of the typography slightly differently, and you may just have a stake in this principle. The downside is the choice of colour: orange and light grey on white is OK for situations with increased scale, but on a device, where lighting can be hit or miss, you probably need to approach this with bolder colours. The picture in the background also creeps into your field of view over the text, especially in the far-right panel.


Alive in Motion

  • Feels Responsive and Alive
  • Creates a System
  • Gives Context to Improve Usability
  • Transition Between UI is as Important as the Design of the UI
  • Adds Dimension and Depth

I can’t really speak to these principles via text on a blog, but what I will say is that Windows Phone attacks this relatively OK. I still think the flip-to-back transition is too tacky, and the way the screens transition in and out at times isn’t as attractive as, for example, the iPhone’s (i.e. I really dig how the iPhone zooms the UI back and to the front). The usage of kinetic scrolling is also one that gives you a sense of control, as if there are some really well-oiled ball bearings under the UI’s plane, so that if you flick it up, down, right or left, the sense of velocity and friction is there.
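As an aside, that “well-oiled ball bearings” feel is largely just a decay function under the hood. A minimal sketch of the idea, with invented constants (real implementations tune these per device, and none of the numbers here are Microsoft’s):

```csharp
using System;

class KineticScrollSketch
{
    static void Main()
    {
        // The flick gesture hands over an initial velocity; each frame the
        // list coasts and the velocity bleeds off. The friction constant is
        // what makes the surface feel well-oiled (near 1.0) or sticky (lower).
        double position = 0;            // current scroll offset in px
        double velocity = 2400;         // px/sec from the flick (illustrative)
        const double friction = 0.95;   // velocity retained per frame (illustrative)
        const double dt = 1.0 / 60;     // 60fps frame step

        while (Math.Abs(velocity) > 1)  // coast until the motion stalls
        {
            position += velocity * dt;
            velocity *= friction;
        }

        Console.WriteLine($"Flick settles {position:F0}px down the list");
    }
}
```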

If you zoom in and out of the UI, the sense that the UI will expand and contract in a fluid manner also gives you the element of discovery (progressive disclosure), but can also give you a sense of less work attached.



Taking Figure 11 & Figure 12 (start and end), one could imagine a lot of possibilities here in terms of how the transition would work. The reality that the Reptile node expands out to give way to types of reptiles is hopefully obvious, whilst at the same time the focus on Reptile is kept in place (via a simple gradient / drop shadow to illustrate depth). Everything could snap together in under a second or maybe two, but it’s something you approach with a degree of purpose-driven direction. The direction is “keep your eye on what I’m about to change, but make note of these other areas I’m now introducing” – you have to move with the right speed and the right transition effect, and at the same time not distract too heavily in areas that aren’t important.

Content, Not Chrome

  • Delight through Content Instead of Decoration
  • Reduce Visuals that are Not Content
  • Content is the UI
  • Direct interaction with the Content

Chrome is as important as content. I dare anyone to provide any hint of scientific data highlighting the negative effects of grouping in user-interface design. Chrome can be overused, but at the same time it can be a lifesaver, especially when the content becomes overbearing (most line-of-business applications today suffer from this).

Having chrome serves a purpose: to provide the end user with a boundary of content within a larger canvas. An example is below.




I could list more examples, but as I’m already taking advantage of the Microsoft Sustainability video, I figure this is a sufficient example of how chrome is able to break up the user interface into contextual relevance. Chrome provides a boundary, the areas of control if you will, in order to separate content into piles of semantic action(s). Specifically, in Figure 15 the brown chrome is much like the dashboard in your car: your main focus is the road ahead, that’s your content of focus, but at the same time having access to other pieces of information can be vital to a successful outcome. Chrome also provides access to actions through which you can carry out other principles of human interaction – e.g., adjusting window placement and separation from other areas offers the end user a chance of tucking the UI into an area for later resurrection (prospective memory).

Windows Phone 7, for example, prefers to leverage the power of typography and background imagery as its “chrome” of choice. I’m in stern disagreement with this, as the phone itself projects what I can only describe as uncontained, vast piles of emptiness, and less actual content. The biggest culprit of all for me is the actual Outlook client within the said phone.


The Outlook UI for me is like an itch I have to scratch: I want the messages to have subtle separation, and I want the typography to strike a balance between “chrome” and “whitespace”.


Chrome is also not just about the outer regions of a window/UI; it has to do with the internal components of the user interface – especially the input areas. The above (Figure 17) is an example of the Windows Phone 7 / Metro keyboard(s). At first glance they are simple, clean and open, but the part that captures my attention the most is the lack of chrome or, more to the point, separation. I say lack, as the purpose of chrome here would be to simulate tactile touch without actually giving you tactile touch. The keyboard to the right has OK height, but the width feels cramped, and when I type on the said device it feels like I’m going to accidentally hit the other keys (so I’m now more cautious as a result).


The above (Figure 18) offers the same concept but now with “chrome”, if you will: nice even spacing, solid use of the typography principles and clearly defined separation between the actions below.


The iPhone has also found a way to strike a balance between chrome and the previously stated principles. The thing that struck me most about the two keyboards is not which is better, but how the same problem was thought about differently. Firstly, as you type, an enlarged character shows – indicating you hit that character (reward). Secondly, the actual keys have a similar scale in terms of height/width proportions, yet the key itself having a drop shadow (indicating depth) is, to me, more inviting to touch than a flat one (it’s like asking which you prefer: a holographic keyboard, or one with tactile touch and physical embodiment?). If you were to also combine sound and vibration as the user types, it can help trick the end user’s senses into a comfortable input.

I digress from chrome, but the point I’m making is that chrome serves a purpose; don’t be quick to declare the principles of Metro as being the “yes!” moment, as I’d argue the jury is still unable to formulate a definitive answer either way.

Authentically Digital

  • Design for the Form Factor
  • Don’t Try to be What It’s NOT
  • Be Direct

I can’t talk to this much, other than to say this isn’t a principle; it’s more marketing fluff. The only one with a tenuous-at-best attachment to design principles would be “design for the form factor”, meaning don’t try to scale a desktop user interface down onto a device – make the user interface react to the device, not the other way around.


Metro is a concept. Microsoft has had a number of goes at this concept, and I for one am not on board with its current incarnation inside the Windows Phone 7 device. I think the team has lost sight of the principles they themselves put forward, and given the Industry Innovation Group has painted the above picture as to what’s possible, it’s not as if the company hasn’t a clue. There is a balance to be struck here between what Metro could be and what it is today. There are parts of Windows Phone 7 that are attractive, and then there are parts where I feel it’s either been rushed, or engineering overtook design in terms of the reasons why things are the way they are (maybe the design team couldn’t be bothered arguing to have more time/money spent on propping up the areas where it falls short).

People around the world will have mixed opinions about what Metro is or isn’t, and what makes a good design vs. what doesn’t. We each pass our own judgement on what is attractive; that’s nothing new to you. What is new is the rationale that software design is taking a step back into the past in order to propel itself into the future. That is, the industry is rebooting itself again, but this time the focus is on simplicity, and by approaching Metro with the Microsoft Futures vision vs. the Windows Phone 7 of today, I have high hopes for this proposed design language.

If the future is taking the Zune desktop + Windows Phone 7 of today and simply rinsing / repeating, then all this will become is a design fad, one that really doesn’t offer much depth other than limited respite from the typical desktop / device UI we’ve become used to. If this is enough, then in reality all it takes is a newer design methodology to hit our computer screens and we’re off chasing the next evolution without consistency in our approach (we are simply chasing shiny objects).

I’ve got limited time on this earth, and I’d like to live in a world where the future is about breaking down large amounts of unreadable / unattractive information into parts that propel our race forward, not stifle it in bureaucracy-filled celebrations of mediocrity.

Apple as a company has kick-started a design evolution, and say what you will about the brand, but the iPhone has dared everyone to simply approach things differently. The Windows Phone team was at times paralysed by a sense of “not good enough” when it came to releasing the vNext phone; it went through a number of UI and code resets to get it to the point it’s at now. It had everything to do with the iPhone: the team had to dominate its market share again, and it had to attract consumers in a more direct fashion. Apple may not have the entire world locked to its device, but it’s made a strong amount of interruption into what’s possible. It did not do this via the Metro design language; they simply made up their own internally (who knows what that really looks like under the covers).

Microsoft has responded and declared Metro design as its alternative to the Apple culture. The question now is whether the company can maintain the amount of discipline required to respect its own proposed principles.

I’d argue that so far they haven’t, but I am hopeful for Windows 8.

Lead with design, engineer second.
