What I have learnt being a designer, developer and product manager.

It’s approx. 19 years now since I first earned payment for my ability to use a computer. In those 19 years I’ve had a vast number of different roles; some of them have been mentally draining, while most have been built around constant growth (i.e. learning from failures etc.). In this time I’ve come across a few things that we seem destined to repeat over and over, no matter how many places I work or how experienced my fellow computer horde members are.

Before I outline these, let me start by saying I have spent more of my life programming than I get to spend designing, and when I can’t get the designs I want from the design team, well, I always end up doing the actual designs anyway. I often find the people I work with can’t accept the notion that someone can do both equally well, in that you’re always defending your position in the developer vs designer discussion. In reality you’re nearly always asked to “pick a side and stick with it” or face career penalties should you try to sit in the middle.

That was the case until I discovered the role of Product Management, and what better place to learn that role than Microsoft. I mean, I had some wins for sure, but I spent a lot of my time watching absolutely brilliant minds at work. Working with others in this field, I learnt that being a developer & designer had benefits, as I was now able to look at both sides of the equation and get a sense of what my audience (.NET developers & designers) needed most from us.

What have I learnt overall in 19 years?

Humans and Chunking.

Some would call this Agile, but at the end of the day we humans tend to break things down into small pieces as a way of coping with scale (kind of like how neurons in our body aren’t complete end-to-end cycles; they are really just iterative linkages of energy). Sure, we build ceremonies around the problem and often swear “this is the winning formula”, but in truth all we are doing is taking a really big, lumpy problem and breaking it down into manageable pieces. There is strength in doing this, but there are also problems that often outweigh the positives.

For instance, in software I’ve noticed teams almost always get caught up in the idea that once you break a problem into small pieces it becomes more manageable. In reality this often leads to context, or peripheral awareness of the problem, being lost through the fragmentation. That’s the issue: it’s not the idea of breaking things down, it’s that when the breakdown gets obfuscated or leaves others out, it gives the developer or designer only a small amount of information to work from. Without context around the problem, the intended audience, or even why the problem should be worked on at all... well... more problems emerge. To put it bluntly, you almost always end up doing keyhole surgery: some can make it work; for others it ends in a painful, career-stunting failure.

A Product Manager is like a Military General.

Those of us in the delivery teams (developers/designers) often roll our eyes at the role of Product Manager. We at times have this distrust of anyone who can’t develop or write code yet is in charge of the product’s direction. Then on the flipside I’ve seen teams punish a Product Manager for having a development background, because that person can’t resist checking in on the code base and outlining solutions to problems before they arrive (oh, by the way, Scott Guthrie used to check namespaces on code before the devs could release... so don’t fault people for that!).

A product manager, good or bad, is like a General in an army: should they give you bad orders or send you down the wrong path, you can easily take a winning, high-performance team and run it into the ground in under a month. It’s very easy to kill the morale of a development team under the vision or guidance of bad product management (including release management).

Release Management is the same as horoscope reading.

I’ve seen way too many teams held hostage to a deadline they have little or no control over. Sure, Agile processes get placed on a pedestal, that is until the business throws a tantrum and says “I need it now”, in which case Agile will not be a safe haven to hide behind if it even hints at being a reason for delays.

Agile is often used to manage this, and it can work to hold back the release-date demons, but I’ve also seen Agile become this lumpy carpet under which bad product design or strategy hides. It’s easy to bury a team with no thinking or strategy behind feature development, as all you have to say is “we’ll iterate our way out of this”, and unless someone in the room is sharp enough (and empowered enough) to catch them in the act, guess what: that beloved Agile just became the noose around your career’s neck.

Agile is also a funny and interesting thing. In probably 8 years or so travelling around Australia and the world, I’ve honestly never seen teams actually do it right to the letter of the law. I always see them cherry-pick it to death, and I’ve lost count of the number of times I’ve seen teams argue “well, we like to keep it simple” as their rationale for its adoption hackery. I’ve also seen teachers of Agile rant “you’re doing it wrong!”, to which I now wonder whether the builder blaming the tool is the right course of action here.

Suffice to say, Agile + Release Management is always an amusing sociological experiment to witness. In many ways it is like watching the TV shows “Survivor” and “The Amazing Race” inside the cubicle.

Design is on a pedestal until pressure builds.

As a UX person now (also now studying psychology), I’ve come to learn that we humans are actually quite adaptable to change and experiences. We often place ourselves in compartments and use personas as a shield, hiding behind the various matrices we assume or intend users to uphold. It’s as if we assume out loud that our users will self-divide into sub-tribes that fit our mental models around usage & expectations.

HCI is an ongoing science without any hint of complete understanding, but in the meantime we’ll continue to evolve design in a way that hopefully proves out the probabilities and our internal monologues about what users do or don’t like in designs.

Design is the operative term, though, as at the end of the day, despite all the research, it comes down to the hand of a designer using either a mouse or a Wacom graphics pen (most designers I know don’t use a mouse). We can craft the ideas or belief system, but it’s not until these folks grind the pixels out that we have a well-formed output that users can appreciate and be drawn into.

Marketing also plays a role in this, and they’ll often want more influence or say in what the design composition upholds. In fact everybody wants input, because it’s visual, and that means everyone gets a say! Yet nobody volunteers input on that line of code you wrote, or even on that decision you made around a campaign.

A designer is Queen/King, that is until he or she accidentally and stupidly shows the rest of the business what they made, and then you watch a positive or negative feeding frenzy take place. The feeding frenzy, however, is often used by developers, as now they too have a safe haven to hide behind; all they have to say out loud is “I can’t do design, so I can’t finish this until the designer finishes”.

Hiding behind that means they take no risks and never fail in the execution of an idea, or worse, they keep their efficiency returns high (i.e. why bother trying to do a design ahead of the designer when all it would mean is wasted time; time... in Agile... time... you say).

What have I really learnt?

That despite all the knowledge and experience I’ve acquired over the years, it’s really rare that I see the business, technical and design equation balanced. Almost every company I’ve consulted with, worked in, contracted for or observed has managed to have an imbalance in these three areas. If the balance tips in, say, technical’s favor, it usually means business & design are at a loss, and likewise for the other two. You may find one or two places where the balance holds or looks balanced, but it is usually a false positive, as it’s usually the “design” that’s bluffing (i.e. poor design experiences being palmed off as “good enough”).

My theory, or something I’m going to devote I guess the rest of my life to, is finding a way or rhythm to debunk this equation; there has to be a way to balance the three without cubicle combat.

Today I’d simply say this: if all three parties aren’t sharing the risk of change or failure, then that’s the starting point. In 19 years I’ve rarely seen all three take that on willingly, accepting that failure has rewards as well as losses. Giving a deadline to a developer is like yelling at a tornado to turn around: it may feel good to do so, but you will almost certainly get creamed anyway.

A designer is the user’s advocate, with well-honed instincts for how people deal with vast amounts of information & cognitive load. An engineer can work better in literal form than lateral, while a designer has only the lateral, so a balance has to be struck (cue the form vs function wars).

Lastly, a Product Manager without a 2+ year roadmap isn’t a product manager; they are just a business-development suit running around pretending to be in charge of an empire with enormous opportunity that continues to go wasted. If you haven’t got a forward-thinking General, then maybe your competitor does, and that’s why you seemingly keep looking at what they did for visual cues on success vs failure (at Microsoft we agonized over Apple, Google and Oracle’s growth; I doubt it was a two-way process, hence the huge leads they have gained in 8 years).


Creating a designer & developer workflow end to end.


When I was at Microsoft we really had one mission with the whole Silverlight & WPF platform(s): create a developer & designer workflow that keeps both parties working productively. It was a mission that I look back on even to this day with annoyance, because we never came close to scratching the surface of that potential. Today, even with the Adobe and Microsoft competitive battles in peace-time, the problem still hasn’t actually been solved; if anything it has become slightly more compounded and fragmented.

Absorbing that failure, and having to take a defensive posture over the past five years around how one injects design into a code base in the most minimal way possible, I’ve sort of settled into a pipeline of design that may hint at the potential of a solution (I say hint...).

The core problem.

Ever since around 2004, I’ve never been able to get a stable process in place that enables a designer and developer to share & communicate their intended ideas in a way that ends up in production. Sure, they end up with something of a higher quality, but it was never really what they originally set out to build; it was simply an end result of compromises, both technical and visual.

Today, it’s kind of still there, lingering. I can come up with a design that works on all platforms and browsers, but unless I sit inside the developer enclosure and curate my design through their Agile process in a concentrated, pixel-for-pixel way, it simply ends up slightly mutated or off target.

The symptoms.

A common issue in the process happens soon after the design, in either static or prototype form, gets handed off to the developer or delivery team. They look at the design, dissect it in their minds back to whatever code base they are working on, and start to iterate on transforming it from a piece of artwork into an actual living, interactive experience.

The less prescriptive I am in the design (discovery phase), the less likely I’ll end up with a result that fits the way I had initially imagined it. Given most teams are also in an Agile way of life, the idea that I have the time or luxury of doing a “big up front” design rarely presents itself these days. Instead the ask is to be iterative and to design in chunking formations, in the hope that once I’ve done my part it’s handed off to delivery and comes out unscathed, on time and without regression built in.

Nope. I, the designer, end up paying the tax bill on compromise; I’m the guy usually sacrificing design quality in lieu of “complexity”- or “time”-derived excuses.

I can sit here, as most UXers typically do, and wave my fist at “you don’t get us UI and UX people”, or argue about “you need to be around the right people” all I want, but in truth this is a formula that gets repeated throughout the world. It’s actually the very reason why ASP.NET MVC, WPF and Silverlight exist: how do we keep the designer and developer separated in the hope they can come together more cleanly in design & development?

The actual root cause of this entire issue is right back at the tooling stage. The talent is there, the optimism is there, but when you have two sets of tooling philosophies all trying to do similar or near-similar things, it tends to breed this area of stupidity. If, for example, I’m in Photoshop drawing a button on canvas and using a font to do so, at the back of my mind I realise that the chances of that font displaying on that button within a browser are lower than inside the tool, so I make compromises.

If I’m using a grid setting that doesn’t match the CSS framework I’m working with, well, guess what: one of us is about to have a bad day when it comes to the designer & developer convergence.

If I’m using 8px padding for my Accordion Panel headers in WPF and the designs outside that aren’t sharing the same consistency, well, again, someone’s in for a bad day.

It’s all about grids.

Obviously when you design these days a grid is used to help figure out proportion allocation(s), but unless the tooling from design to development shares the same (or agreed) settings, you open yourself up to failure from the outset. If my grid is 32×32 and your CSS grid uses 30%, then when we get to the design handover, someone in that discussion has to give up ground to make it work (“let’s just stretch that control”, or “nope, it’s fixed, just align it left...”, and so on).

Using a grid even at the wireframing stage can tease out the right attitude, as you’re all thinking in terms of proportion and sizing weights (t-shirt size everything). The wireframes should never be 1:1 pixel-ready in whatever unit of measure you choose; they are simply there to give a “sense” of what this thing could look like, but it won’t hurt to at least use a similar grid pattern.
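Since both sides only need to agree on a handful of numbers, the shared grid could even live as one tiny piece of shared configuration that the design templates and the CSS build both read from. The sketch below is purely illustrative; the names and values are my own assumptions, not taken from any real tool:

```typescript
// A single shared grid definition, as a sketch. If both the artboard
// template and the CSS container are generated from this object,
// neither side can drift from the agreed settings.

interface GridSettings {
  columns: number;      // number of columns in the layout grid
  columnWidth: number;  // column width in px
  gutter: number;       // gutter between columns in px
}

const sharedGrid: GridSettings = {
  columns: 12,
  columnWidth: 64,
  gutter: 16,
};

// The canvas width the designer sets their artboard to, and the width
// the CSS framework's fixed container should resolve to.
function canvasWidth(g: GridSettings): number {
  return g.columns * g.columnWidth + (g.columns - 1) * g.gutter;
}

// A quick sanity check either side can run: does a measured element
// width snap to a whole number of columns on this grid?
function spansColumns(widthPx: number, g: GridSettings): number | null {
  for (let n = 1; n <= g.columns; n++) {
    if (widthPx === n * g.columnWidth + (n - 1) * g.gutter) return n;
  }
  return null;
}
```

With something like this in place, “that panel is two columns wide” is a checkable fact rather than a handover argument.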

T-shirt size it all.

Once you settle on a grid setting (column width, gutters and N columns), you then have to reduce the complexity back to simplicity in design. Creating t-shirt sizes (small, medium, large, etc.) isn’t a new concept, but have you ever considered making that happen for spacing, padding, fonts, buttons, text inputs, icons and so on?

Keeping things simple and being able to say to a developer “actually, try using a medium button there when we get to that resolution” is at the very least a vocabulary you can all converse in and understand. Having the ability to say “well, maybe use small spacing between those two controls” is not a guessing game; it’s a simple instruction that empowers the designer to make an after-design adjustment while not causing code headaches for the developer.
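As a sketch of what that shared vocabulary could look like in code (all of the scale values here are illustrative assumptions, not a real design system):

```typescript
// T-shirt sized tokens shared between design and code. "Small", "medium"
// and "large" are the vocabulary everyone speaks; only this table knows
// the pixels behind each word.

type TShirtSize = "small" | "medium" | "large";

const spacing: Record<TShirtSize, number> = {
  small: 4,
  medium: 8,
  large: 16,
};

const buttonHeight: Record<TShirtSize, number> = {
  small: 24,
  medium: 32,
  large: 44,
};

// "Use small spacing between those two controls" becomes a one-word
// change in a style declaration, not a renegotiated pixel value.
function gapStyle(size: TShirtSize): string {
  return `${spacing[size]}px`;
}
```

The design choice is that adjustments happen by swapping token names, so a post-handover tweak never forces the developer back into layout surgery.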

Color Palettes aren’t RGB or Hex.

Simplicity in the language doesn’t end with t-shirt sizing; it also has to happen in the way we use colors. Naming colors ClrPrimaryNormal, ClrPrimaryDark, ClrPrimaryDarker, ClrSecondaryNormal etc. helps reduce the dependency on color specifics while giving the same adjustment potential as the t-shirt sizes: “try using ClrBrandingDarker instead of ClrBrandingLight”. If the developer is color blind, as in actually colorblind, this instruction helps as well.
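A minimal sketch of the same idea in code, using the color names from above; the hex values are placeholders of my own invention, not real branding:

```typescript
// A named palette: the code base only ever refers to semantic names,
// and this map is the single place where actual color values live.

const palette = {
  ClrPrimaryNormal: "#3b6ea5",
  ClrPrimaryDark: "#2a4f78",
  ClrPrimaryDarker: "#1b3350",
  ClrSecondaryNormal: "#a53b6e",
} as const;

type PaletteName = keyof typeof palette;

// "Try ClrPrimaryDarker instead of ClrPrimaryNormal" is a rename in one
// call site, not a hunt for every hard-coded hex value in the code base.
function fill(name: PaletteName): string {
  return palette[name];
}
```

A rebrand then touches one file, and the colorblind developer can follow the instruction without ever comparing swatches.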

Tools need to be the answer.

Once you sort out the typography sizing, color palette and grid settings, you’re on your way to having a slight chance of coming out of this design pipeline unscathed, but the problem still hasn’t been solved. All we have really done is create a “virtual” agreement on how we work and operate, but nothing reinforces this behavior, and the tools still aren’t being as nice to one another as they could be.

If I do a design in, say, Adobe tools, I can upload it to their Creative Cloud quite quickly, or maybe even Dropbox if I have it embedded in my OS. However, my development team uses Visual Studio’s way of life, so now I’m at this DMZ-style area of annoyance. On one hand I’m very keen to give the development team the assets they need, but at the same time I don’t want to share my source files in much the same way they have with code. We need to figure out a solution here that ticks each other’s boxes. Sure, I can make them come to my front door, cloud or Dropbox, and that will work, I guess, but they are moving to GitHub soon, so do I install some command-line terminal solution that lets me “push” artwork files into this developer world?

There is no real “bridge”, and yet these two sets of tools have been the dogma of a lot of teams’ lives for the better part of 10 years, still with no real bridge other than copying and pasting files one by one.

For instance, if you were to use the aforementioned workflow and you realize at the CSS end that the padding pixels won’t work, how do you ensure everyone sees the latest version of the design(s)? It relies heavily on your own backwater bridge process.

My point is this: for the better part of 10 years I’ve been working hard to find a solution for this developer/designer workflow. I’ve been in the trenches, I’ve been in the strategy meetings, and I’ve even been the guy evangelizing, but I’m still baffled as to how I can see this clear linear workflow while the might of Adobe, Microsoft, Google, Apple and Sun just can’t seem to get past the developer-focused approach.

Developers aren’t ready for design, because the tools assume the developer will teach the designer how to work with them. The designer won’t go to the developer tools, because simply put they have a low tolerance for solutions that carry an overburden of cognitive load mixed with shitty experiences.

Had we made Blend an intuitive experience 5 years ago, one that built a bridge between us and Adobe, we’d probably be having a different discussion about day-to-day development. Instead we competed head-on and sent the entire developer/designer workflow backwards; to this day I still see no signs of recovery.


I think therefore I know.

When you’re in a UX role you tend to have contested territory marked out around you. Everyone around you has an opinion on something that fits within your charter, so you in turn have to be the guarded diplomat constantly. I don’t mind a heated exchange of ideas; when people get passionate about something they stand their ground on a topic and make sure their voice is heard clearly and loudly (often without politeness attached). In these situations, what typically echoes at the back of my brain is the question: “do they think, or do they know?”

“I think” something, instead of “I know” something, takes on a whole new set of discussion points, because if you think something then it’s just an idea or assumption. If you know something, well, chances are you have data points with confidence attached. This is good; this tells me straight away there are more clues to be found.

“The only way you win an argument is if you get the other side to agree with you.”

That is what my dad would say when he and I used to get into the thick of it. It’s a fairly simple statement: in the end, when you have two opposing ideas on the same problem, it comes down to either compromise or an impasse. If it’s an impasse, then it will probably come down to the title you hold on the day, in my case Head of User Experience. A title like mine carries some weight, meaning I can ignore your opinion and proceed onwards without it, but doing so means I need to qualify my arrogance more.

Being the top dog in UX land isn’t an excuse to just push past people’s “I think” statements and supplant your own “I thinks” on top. Instead it means we have to be more focused on establishing the “I know” statements that absorb the two opposing ideas. My way of thinking is this: when I reach a point where there isn’t any data to support the opinions/ideas, it’s now a case of writing multiple tests to get them fact-checked and broken down, until we have the ideas transformed into behavioural facts.

“I think the users will not like the start menu being removed, so don’t touch it.”

“Now let’s remove the start menu” is my immediate thought; screw the statement, what happens when we do it? I’m assuming there will be some negative blowback, but can you imagine the data we can capture once it’s removed, and how the users react? The users will tell us so much: how they use the menu, where they like it, why they like it there, who they are, what they use, and so on.
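To make that concrete, here is a hypothetical sketch of the kind of instrumentation that turns “I think” into “I know”: assign users to a cohort, record what they actually do, and count the behaviours you care about. Every name here, and the in-memory event store, is an illustrative assumption rather than any real telemetry API:

```typescript
// Split users into control/variant cohorts and record interactions,
// so "I think they'll hate it" becomes a measurable claim.

type Cohort = "control" | "menu-removed";

const events: { userId: string; cohort: Cohort; action: string }[] = [];

// Deterministic 50/50 assignment from a user id (toy hash, not crypto).
function assignCohort(userId: string): Cohort {
  let h = 0;
  for (const ch of userId) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  return h % 2 === 0 ? "control" : "menu-removed";
}

function track(userId: string, action: string): void {
  events.push({ userId, cohort: assignCohort(userId), action });
}

// e.g. how often did the variant cohort fall back to search
// because the menu is gone?
function countActions(cohort: Cohort, action: string): number {
  return events.filter(e => e.cohort === cohort && e.action === action).length;
}
```

It is a toy, but comparing those counts across cohorts is the shape of the “I know” answer the forum arguments never have.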

That one little failure in Windows 8 is a gold mine of data, and online there are discussion forums filled with topics/messages that centre around “I think”, but nobody really has “I know” except Microsoft.

My point is this: if you’re not in a role with User Experience in its title, then fine, knock yourself out with the back and forth of “I think” arguments. If you are in UX, your job is not to settle for “I think” but to hunt for “I know”, for you will always be rewarded.


How UX & Ethnography mix.

Inside most organisations you’ll likely see a marketing team distill their customer base into a cluster of persona(s), which in their view is a core representation of a segment of their audience in a meaningful & believable form. These persona(s) are likely to be accurate, or moreover a confirmation of a series of instincts that may or may not have supporting data to underpin their factoids. The issue with these personas is that they are likely to be representative of the past; that is to say, using them isn’t really about transplanting their behaviors into the future. Instead, a persona is a snapshot in time of what happened when it was documented.

The definition of Ethnography basically distills to what I’d class as happening in the persona research space, especially when you commission design agencies to do the research. They are usually quite thorough, and often don’t miss a step in cataloguing the series of data points needed to build a picture of whom they are looking at and what behavioural traits the persona(s) in question are likely to have, in range or clustered form.

The downside for UX people like myself is that there’s no real jump-off point for this type of data. For me, it’s not really about whether “Max” is prone to water sports or is in the 25-35 age bracket; I really have no need for excessive metadata. The challenge for me is to map this series of personas back onto a timeline of graduation, both in simplicity vs complexity and in how their confidence levels are organised, in a way that outlines the cold/hot spots within a feature’s experience needs.

If you were to take a feature and break it down into its intended audience, the complexity required to use it, and the overall metrics that define its success or failure, you’d likely end up with a lot of moving parts that don’t offer any tangible qualitative value to help you at least sniff out “what just happened”. If you instead take the marketing personas, take a guesstimate at who you’re targeting and the feature’s likely markers that trigger the metric, and infer the outcome from this data, well, that would in turn be called confirmation bias.

There’s the uppercut with persona(s): you can easily set out to build on a solid foundation of healthy data, but it’s only when you transfer or map these data points onto the actual set of features & content within an experience that it starts to unravel, and threads of its truisms get caught up in a lot of inferred guesstimates.

The root cause of this failure in qualitative data is simply the past being used to dictate the future. Remember that at the time you interviewed and inspected your persona(s), it was based either on “what if” questions or on questions that point to competitors or existing experiences already set in stone. Today and tomorrow you’re not keeping those experiences locked like that; in fact you’re probably looking to move the needle or innovate in a different direction, which means you have a small-to-large impact on behaviour, and thus the experiences can involve dramatic or not-so-dramatic change(s). The only way to test or baseline the change is continuous sampling that keeps checking & rechecking the data points in the hope that change makes itself prominent.

Problem: change isn’t always obvious. It can be subtle; the slightest introduction of a new variable or experience can lead to adjustments that go unnoticed. I’ll cite an example in abstract form.

A respondent is asked to walk along a forest path from A to B, counting how many “blue” objects are lined along the path; the respondent’s heart rate is also monitored (base-lined/zeroed out). Before the respondent sets off, the testers place a stick shaped like a coiled snake midway along the path.

The respondent proceeds on the journey, counts the blue objects, and at the end of the path gives an accounting of their findings. Their heart rate stays in line with normal physical activity.

Respondents were less likely to notice the stick.

The next round of respondents is asked to do the same, only this time a seed of fear is planted in their subconscious: “oh, others noticed a snake a few hours ago along the path; be careful, and if you see it, sing out. It should be gone by now, but we couldn’t find it earlier, so just take note.”

These respondents begin the journey and notice the stick immediately, and a lot of messaging between the optics and the brain moves at lightning speed, trying to decipher the pattern(s) needed to confirm “threat or non-threat”. Heart rate spikes, and eventually they realize it’s a stick and proceed, walking past it while keeping a very close eye and a proximity buffer between the stick and themselves.

The point of that story is this: by introducing a new variable (fear) into the standard test, you’re able to affect the experience dramatically, to the point where you’ve touched on a primal instinct. In software, that “stick” moment can be anything from moving the “start button” on a menu to changing how a tabular set of data has traditionally been displayed.

As User Experience creators, we typically move the cheese a lot, and it’s mostly about controlling change in our users’ behaviour (for the greater good). Persona(s) don’t measure that change; all they measure is what happened before you made the change. All you can do is create markers in the experience that help you map your initial persona baseline to the new one, in the hope that it provides a bounty of data in which “change” is made obvious.

It doesn’t... sadly... it just doesn’t, and so all we can do is keep focusing on past behavioural patterns in the hope that new patterns emerge.

Persona(s) aren’t bad, and they aren’t good; they are just a representative sample of something we knew yesterday that may still be relevant today. The thing I do like about personas from marketing folks is this: they keep everyone focused on the behaviours they’d like to see re-appear tomorrow, and that in the end is all I ever really needed.

Where do you want to head tomorrow?

Last example: the NBC Olympics were streamed in 2009 to the entire US, with every sport captured and made available. At the time, everyone inferred that an average viewer would likely spend 2 minutes viewing. In actuality they spent 20 minutes average viewing time, which sent massive ripples through the TV/movie industry in terms of the value of “online viewing”. If we had asked candidates back then, both as content publishers and consumers, they’d probably have given us data they asserted to be relevant at the time. In this instance the Silverlight team was able to serve up HD video online to many people for the first time, and that’s what changed people’s experience. Today it’s abnormal to contemplate HD video streaming online as anything but the expected experience for “video”... 5 years ago, it didn’t exist. Personas from then and now are all dramatically different, so while change can in some parts be slow, it can easily expedite to days and months as well as years.

I don’t dislike persona(s); I just always remain skeptical of the data that fuels them. But that’s my job.


Being Playful with Industrial Software

I’ve been sitting in the Enterprise space as a UX mercenary for around 5+ years. In every team, sales meeting and brainstorming session I’ve encountered resistance around “maturity” in terms of design. The more money being spent on the software, the more “serious” the design should be. This line of thinking, I think, typically comes from the concern that if the design is not serious, then trust in its ability to do the various task(s) will be eroded.

The thing is, the more sales meetings I’ve been in, or participated in the preparation for, the more I’ve come to the conclusion that “design” isn’t even a bullet point in the overall sales pipeline. Sure, the design makes an appearance at the brochure/demo level, but overall nobody really sits down and discusses how the software should look or feel during the process. Furthermore, the client(s) have typically invited the sales team(s) onto the selection panel(s) based on their existing brand, known or rumoured capabilities, and/or because they are legally required to.

To my way of thinking, being “playful” with your design is a very unnerving discussion to have in such a scenario. The moment you say the word “playful”, most people respond with some word association, positive or negative (usually negative), as the word may take you back to your childhood (playing with Lego or dolls... I didn’t play with dolls, they were called G.I. JOEs!). It’s that hint of immaturity in the word that makes it more appealing to me, as it forces you to think about maturity within the constraints of immaturity (cognitive dissonance).

Playful, however, doesn’t have to mean immature; there are very subtle ways to invoke the feeling of playfulness without being obvious about it. For example, Google+ and most of Google’s new branding is what I’d consider “playful”, but at the same time the product(s) or body of work that goes into their solutions is quite serious.

Playful Mood Board

Why be playful? My working theory is that the reason users find software “unusable” has to do with confidence and incentive. If these two don’t fuel the usage furnace, the overall behaviour around usage decays; that is, users begin to taper off and reduce it to an absolute “use at minimum” behavioural pattern. This theory is what I would class as being at the heart of invoking “emotion” or “feeling” in how software is made, and often why a lot of UX practitioners preach that these two should be taken quite seriously in the design process.

The art of being playful in a way regresses adults back to their childhood, where they were encouraged to draw, build and decorate inanimate object(s) without consequences attached. As a child, you were encouraged to fail; you were given a blank piece of paper and asked to express your ideas without being reprimanded. You, in short, “designed” without the fear of getting it wrong or, for that matter, right (although right was usually rewarded with your art piece being put on the fridge at home, or something along those lines). A playful design composition can be both serious and inviting, as a good design will make you feel as if you’re “home” again. A great design will make the temporary break away into other software and back again an obvious confidence switch – as if you’re saying out loud, “gah! that was a horrible experience, but I’m back in this app… man… it feels good to be home, and why can’t other software be like this?”

VS2011 “Reimagined” – Class View

Note: The below is an attempt to contribute to the discussion around Visual Studio vNext and what I personally think should eventuate into features for future generations of Visual Studio. The objective is not to declare the UI examples “done” but to provoke a discussion around ways in which the tool itself could become more intelligent and contextually relevant, not just to developers but also to those of us who can do both design and code. I plan on compiling this into a more comprehensive document after public feedback.

Situation.

Today, the Class View inside Visual Studio is for the most part useless, in that when you sit down inside the tool and begin stubbing out your codebase (initial file-new creation) you are probably in the “creative” mode of object composition.

Visual Studio, in its current and proposed form, does not really aid you in a way that matches your natural approach to writing classes. That is to say, all it can really do is echo back what you’ve done or, more to the point, give you an “at a glance” view only.

Improvement.

The Class View itself should have a more intelligent, by-design visual representation. When you are stubbing out or opening an existing class, the tool should reflect more specifics around not only what the class composition looks like (the at-a-glance view) but should also enable developers to approach their class designs in a more interactive fashion. The approach should enable developer(s) to hide and show methods, properties and so on within the class itself – meaning “get out of my way, I need to focus on this method for a minute” – which in turn keeps the developer(s) focused on the task.

The Class View should also make it quick to comment out large blocks of code and display visual issues relating to those blocks, whilst at the same time highlighting which parts of the code do and don’t have attributes/annotations attached.

Furthermore, the Class View should also allow developer(s) to integrate their source and task-tracking solutions (TFS) in a fine-grained way; that is, enable both overall class-level commentary and “TODO” allocation(s), and at the same time similar approaches at a finer level such as a property, method or other area of interest (i.e. “TODO: this method is not great code, need to come back and refactor this later”).

Feature breakdown.

image

The above is the overall fantasy user interface of what a class viewer could potentially look like. Keep in mind the UI itself doesn’t accommodate every single use-case; it simply hints at the direction I am talking about.

Navigation.

image

Inside the Class View there are the following navigational items, which represent different states of usage.

Documentation
TBA.

Stats
TBA.

Usage by
TBA.

Derived By

The “Derived By” view enables developers to gain a full understanding of a class’s known inheritance chain by displaying a visual representation of how it relates to other interfaces and classes.

Minimap

image

This inheritance hierarchy will outline specifically what the class’s relationship model looks like within a given solution (obviously only indexing classes known within an opened solution).

  • The end user is able to jump around inside the minimap view, to get an insight into what metadata (properties, methods etc.) is associated with each class without having to open that class.
  • The end user is able to gain a satellite view of what is inside each class via the Class Properties panel below the minimap.

Class Properties.

image

Interactive Elements.

  • The end user is able to double-click on a minimap box (class file representation), and the file will open directly into the code view area.
  • The end user is able to select each field, property, method etc. within the Class Properties data grid. Each time the user selects a specific row, if the file is open, the code view will automatically position the cursor at the first character of that specific code block.
  • The end user is able to double-click on the first circle to indicate that this code block should be faded back, allowing the developer to focus on other parts of the code base. When the circle turns red, the code block’s foreground colour will fade back to a passive state (i.e. all grey text); whilst the code is still visible and compilable, it isn’t displayed in a prominent state.
  • The end user is able to click on the second circle to indicate that the code block should take on breakpoint behaviour (when debugging, stop here). When the circle turns red, it indicates that a debug breakpoint is in place. The circle’s right-click context menu will take on the as-is behaviour found within Visual Studio today.
  • The end user is able to click on the Tick icon (grey off, green on). If the tick state is grey, this indicates that the code block has been commented out and is in a disabled state (meaning, as per commented code, it will not show up at compile time).
  • The end user is able to click on the Eye icon to switch the code block between a private and public state (public is considered viewable outside the class itself; i.e. internal and public are one and the same here but will respect the specifics within the code itself).

Stateful Display.

  • Each row will indicate the name given to the property, its return or defined type, whether it is public or private, and the various tag elements attached to its composition.
  • When a row has a known error attached within its code block, the Class View will display a red indication that this area needs the end user’s attention.
  • The eye icon represents whether or not this class has been marked for public or private usage (i.e. public is considered whether the class is viewable from outside the class itself – internal is considered “viewable” etc.).
  • Tags associated with the row indicate elements of interest; as more per-code-block features are built in, they will in turn display here (e.g. Has Data Annotations, Code block is Read Only, Has notes attached etc.).

Tags.

My thinking is that development teams can attach tags to each code block, whilst at the same time the code itself will reflect what I call “decorators” that have been attached (i.e. attributes).

Example Tags.

  • Attribute / Annotation. This tag enables the developer to see at a glance what attributes or annotations are attached to this specific code block. This is mainly useful from a developer’s perspective to check whether the class itself has the right set of attributes (“oops, did I forget one?”) whilst at the same time providing an at-a-glance view of the types of dependencies this class is likely to have (e.g. use case: should Entity Framework Data Annotations be inside each POCO class, or should the mapping be handled in the DbContext itself? Before we answer that, let’s see which code blocks have that dependency, etc.).
  • Locked. This one’s a bit of a tricky concept, but initially the idea is to enable development teams to lock specific code blocks against manipulation by other developer(s); that is, when a developer is working on a specific set of code chunks they don’t want others to touch, they can put code-locks in place. This still allows other developers to make minor modification(s) to the code and check it in, whilst removing resolution conflicts at the end of the overall work stream (although code resolution is pretty simplified these days, this just adds an additional layer of protecting one’s sandpit).
  • Notes. When documenting issues within a task or bug, it’s at times helpful to leave traces behind that indicate or warn other developers to be careful of certain issues within a code block (make sure you close out your while loop, make sure you clean up your background threading etc.). The idea here is that developer(s) can leave both class-level and code-block-specific notes of interest.
  • Insert your idea here. The tag-specific features outlined so far aren’t an exhaustive list; they are simply thought-provokers as to how far one can go within a specific code block. The idea is to leverage the power of Visual Studio to take a context-specific approach to the way you interact with a class’s composition. The tags themselves can be injected into the code base, or they can simply reside in a database that surrounds the codebase (i.e. metadata attached outside of the physical file itself).
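
The “metadata outside the physical file” option above can be sketched quite simply. Below is a toy, language-agnostic illustration in Python of a sidecar tag store keyed by (file, member); every name in it (`TagStore`, `tag`, `tags_for`) is hypothetical, and a real implementation would sit in a database alongside the solution:

```python
from collections import defaultdict

# Toy sketch of tags as sidecar metadata: notes, locks etc. attach to a
# (file, member) key in an external store, so the source file itself is
# never polluted with tag data. All names here are hypothetical.
class TagStore:
    def __init__(self):
        self._tags = defaultdict(list)

    def tag(self, file, member, kind, note=""):
        """Attach a tag (e.g. 'note', 'locked') to a code block."""
        self._tags[(file, member)].append({"kind": kind, "note": note})

    def tags_for(self, file, member):
        """Everything the Class View would render next to this member."""
        return list(self._tags[(file, member)])

store = TagStore()
store.tag("Customer.cs", "Save", "note", "close out your while loop")
store.tag("Customer.cs", "Save", "locked")
print(store.tags_for("Customer.cs", "Save"))
```

The same shape works whether the store is a local dict, a TFS work-item link or a shared database – the point is only that the class file itself stays untouched.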

Discussion Points.

  • The idea behind the Derived By and Class Properties views is that the way developer(s) code day in, day out takes on a more helpful state; that is, you are able to make at-a-glance decisions on what you see within the code file itself, while the minimap provides an overarching view of what the composition of your class looks like – important given that most complex classes can have a significant amount of code in place.
  • Tagging code chunks is a way of attaching metadata to a given class without having to pollute the class’s actual composition; these could be attachments that are project- or solution-specific, or they could be actual code manipulation as well (private flipped to public etc.). The idea is simply to enable developer(s) to communicate with one another in a more direct and specific fashion, whilst at the same time letting them shift their coding lens and zero in on what’s important at the time of coding (i.e. fading the less important code to a grey state).

Going forward, throw your ideas into the mix – how would you see this as a positive or negative way forward?


Decoding the use of grey in Visual Studio vNext

The Visual Studio team have put out some UI updates for the vNext release. The thing that struck a chord with this update is how flat and grey it’s become; they’ve taken pretty much all colour out of the tool and pushed it back to a grey-based palette.

Here are my thoughts:

On the choice of grey.

Grey is a colour that I have used often in my UIs, and I have no issue with going 100% monochrome grey provided you can layer in depth. The thing about grey is that if it is kept flat and left in a minimalist state, it often will not work in situations where there is what I call “feature density.”

If you approach it from a pure Metro minimalist angle, it can still work, but you need to calibrate your contrast(s) to accommodate the end user’s ability to hunt and gather for tasks. This is where the Gestalt laws of perceptual organization come into play.

The main “law” to pay attention to is the “Law of Continuity” – the mind continues visual, auditory and kinetic patterns.

This law in its basic form is the process by which the brain decodes a bunch of patterns in full view and begins to assign inference to what it perceives as the flow of the design. That is to say, if you designed a data grid of numeric values that are right-aligned, with no borders, then the fact that the text is right-aligned is what the brain perceives as a column.
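
You can see that effect even in plain monospaced text. Here is a quick sketch (the data is made up) where right-alignment is the only structural cue, yet the numbers still read as a column:

```python
# Right-aligned numbers with no borders or grid lines: the shared right
# edge alone is what the brain perceives as a "column" (Law of Continuity).
rows = [("Widgets", 1200), ("Gadgets", 85), ("Sprockets", 13750)]

for name, qty in rows:
    # label left-aligned to 12 chars, quantity right-aligned to 8
    print(f"{name:<12}{qty:>8}")
```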

That law hints that, at face value, we humans rely quite heavily on pattern recognition; we are constantly streaming in data on what we see, making snap judgment calls on where similarities occur and, more importantly, how information is grouped or placed.

When you go with limited shades of grey and remove the sense of depth, you’re basically sending a scrambled message to the brain about where grouping stops and starts, and what form of continuity is still in place (is the UI composition unbroken, with a consistent experience in the way it tracks information?).

It’s not that grey is bad, but one thing I have learnt when dealing with shallow colour palettes is that when you go down the path of flat minimalist design, you often need to rely quite heavily on a secondary offsetting or complementary colour. If you don’t, it’s effectively like taking your UI, changing it to greyscale and declaring it done.

It is not that simple; colour can feed into another law in Gestalt’s bag of psychology 101 – the law of similarity can often be your ally when it comes to colour selection. Colour can lead the user into grouping data easily, despite its density, because a pattern of similarity immediately sticks out. Subtle things like a vertical border separating menus indicate that the groupings to the left and right of the border are “these things are similar.”

Using the colour red in a financial tabular summary also illustrates this: the red values immediately stand out and dictate “these things are similar,” given red indicates a negative value – arguably a bit of digital skeuomorphism at work (red pens were used in pre-digital account ledgers to indicate bad numbers).

OK, so I will never use flat grey again.

No – I’m not saying that flat grey shades are bad. What I am saying is that the way the Visual Studio team have executed this design is, to be openly honest, lazy. It’s pretty much a case of taking the existing UI, cherry-picking the parts they liked about the Metro design principles and then declaring it done.

Sure, they took a survey and found respondents were not affected by the choice of grey, but anyone who’s been in the UX business for more than five minutes will tell you that initial reactions are false positives.

I call this the 10-second wow effect: if you get a respondent to rate a UI within the first 10 seconds of seeing it, they will, the majority of the time, score it quite high. If you ask the same respondents 10 days, 10 months or a year after the initial question, the scores will most likely decline dramatically from the initial scoring – habitual usage and prolonged use determine success.

We do judge a book by its cover and we do have an attractive bias.

Using flat grey in this case simply is not executed as well as it could be, because they have not added depth to the composition.

I know – gradients equal non-Metro, right? Wrong. Metro design principles call for a minimalist approach, and while Microsoft has executed those principles with a consistently flat experience (content-first marketing), they are not correct in saying that gradients are not authentically digital.

Gradients are in place because they help us determine depth and colour-saturation levels within a digital composition; that is to say, they trick you into a digital skeuomorphism, which is a good thing. Even though the UI is technically 2D, gradients give off the signal that things are in fact 3D – and if you’ve spent enough time using GPS UIs, you’ll soon realize that we adore our inbuilt depth-perception engine.

Flattening out the UI in typical Metro-style UIs works because they are dealing with data whose density has been removed; that is to say, they take on a minimalist design that puts a high amount of energy and focus into putting data on quite a strict diet.

Microsoft has yet to come out with a UI that handles large amounts of data, and there is a reason they are not forthcoming with this: they themselves are still working through that problem. They have probably broken the first rule of digital design – they are bending their design visions to the principles, rather than letting the principles evolve and guide the design.

Examples of Grey working.

Here are some examples of a predominately grey palette being effective. Adobe have done quite well in their latest round of product design, especially in the way they have balanced a minimalist design whilst still adhering to visual depth-perception needs (gradients).

image

image

Everything inside this UI is grouped as you would arguably expect it to be, the spacing is in place, and there is no sense of crowding or abuse of colour. The gradients are not hard; they are very subtle in their use of light and dark – even though the elements appear to have different shades of grey, they are in fact the same colour throughout.

Grey can be a deceiving colour – I think it has to do with its neutral state. Looking at this brain game from National Geographic, ask yourself the question: “Are there two shades of grey here?”

image

The answer is no. The dark and light tips give you the illusion of a difference in grey, but what is actually tricking the eye is the use of colour and a consistent horizon line.

Summary.

I disagree with the execution of this new look. I think they’ve taken a lazy approach to the design and, to be fair, they aren’t really investing in improving the tool this release, as they are most likely moving all investment into keeping up with Windows 8 release schedules. The design given to us is a quick, cheap tactic to provoke the illusion of change – I am guessing the next release of Visual Studio will not have a very exciting set of feature(s). The next release is likely to be either a massive service pack with a price tag (the same tactic used with Windows 7 vs. Windows Vista – under the hood things got tidied up, but really you were paying for a service pack plus a better UI) or a radical overhaul (which I highly doubt).

Grey is a fine colour to go all-in on, but only if you can balance the composition to adhere to a whole bunch of laws that aren’t just isolated to Gestalt psychology 101 – there are hours of reading in HCI circles on how humans unpick patterns.

Flattening icons to a solid colour isn’t a great idea either, as we tend to rely on shape outlines to give us visual cues as to the meaning of objects. Redesigning or flattening a shape, if done poorly, can add friction or force a new round of learning/comprehension, and some of the choices being made are probably unnecessary. (Icons are always a thing of guess-timation, so I can’t fault this choice too harshly; in my years of doing this it’s been very hit and miss – e.g. a 3.5-inch disk represents “save” in UIs, yet my kids today wouldn’t have a clue what a floppy disk is… it’s still there though!)

I’m not keen to just sit on my ivory throne and kick the crap out of the Visual Studio team for trying something new; I like this team and it actually pains me to decode their work. Instead, I am keen to see this conversation continue with them. I want them to keep experimenting and putting UI like this out there, as to me this tool can do a lot more than it does today. Discouraging them from trying and failing is, in my view, suffocating our potential – but they also have to be open to new ideas and energy around this space (so I’d urge them to broker a better relationship with the design community).

Going forward, I have started to type quite a long essay on how I would re-imagine Visual Studio 2011 (I am ignoring DevDiv’s efforts to rebrand it VS11 – you started the 20XX naming, you are now going to finish it; marketing fail) and have sketched out some ideas.

I’ll post more on this later this week as I really want to craft this post more carefully than this one.


Metro: Typography trumps chrome – debunked.

Metro is fast becoming an unclear, messy mutation of modern interface design. The current execution out there is getting out of control, turning what originally started out as Microsoft’s plagiarized edition of Dieter Rams’ “Ten Principles of Good Design” into what we have before us today.

I am actually OK with that; if I ever looked back on the first year of my designs in the 90s, I’d cringe at the sight of lots of Alien Skin bevels, glows and Fire-plugin-driven pixel vomit.

The part I’m a little nervous about, though, is how fast the Microsofties of the world have somehow collectively agreed that text is in and chrome is out – as if science is wrong, and what we really need to do is get back to the basics of the ASCII-inspired typography design(s) of yesteryear.

Typography is ok, in short bursts.

Spatial visualization is the key term you need to Google a bit more. Let me save you a little Google confusion and explain what I mean.

Humans are not uniform; to assume that inside HCI we are all equal in our IQ levels is dangerous. It is quite the opposite and, to be fair, the study of human mental conditions is still quite in its infancy in medicine – we have so much more to learn about ongoing genetic deformations/mutations.

The reality is that most humans take a different approach to the way they decipher patterns in their day-to-day lives; we aren’t getting smarter, we’re just getting faster at developing a habitual comprehension of the patterns we create.

Let us, for example, assume I snapped someone from the 1960s, sat him or her in a room and handed them a mobile device. I then asked them to “turn it on” and measured the reaction time from navigating the device to switching it on.

You would most likely find a lot of accidental learning, trial and error, but eventually they’d figure it out – and now that information is recorded in their brain for two reasons. Firstly, pressure does that to humans; we record data under duress that is surprisingly accurate (thus bank robbers often discover their disguises aren’t as effective as once thought). Secondly, it’s the “we discovered fire” moment – an event gave it meaning (“this futuristic device!!”).

What is my point? Firstly, brain capacity has not increased; our ability to think and react visually is, I’d argue, the primary driver of our ability to decode what’s in front of us. (Case in point: the usage of an H1 tag breaks up the indexation of comprehending what I’ve written.)

How so?

Research in the early 80s found that we are more likely to detect misspelled words than correctly spelled ones. The research goes on to suggest that the reason for this is that we initially obtain shape information about a word via peripheral vision (we later narrow in on the word and make a true/false decision after we’ve slowed our reading down to a fixated position).
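
A crude way to see what “shape information” means is to reduce each word to its vertical envelope – ascenders, descenders and x-height letters – which is roughly what peripheral vision is thought to pick up first. This is only a toy model of that research, not an implementation of it:

```python
# Toy "word envelope": classify letters as ascender (A), descender (D)
# or x-height (x) to approximate the outline peripheral vision sees.
ASCENDERS = set("bdfhklt")
DESCENDERS = set("gjpqy")

def word_shape(word):
    return "".join(
        "A" if c in ASCENDERS else "D" if c in DESCENDERS else "x"
        for c in word.lower()
    )

# "worb" keeps the envelope of "word", so that typo can hide in
# peripheral vision; "worp" breaks the envelope and tends to jump out.
print(word_shape("word"), word_shape("worb"), word_shape("worp"))
```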

It doesn’t stop there. By now you, the reader, have probably fixated on a few mistakes in my paragraph structure or word usage as you’ve read this, yet you’ve still persisted in comprehending the information – despite the flaws.

What’s important about this packet of information is that it hints at what I’m stating: a reliance on typography is great, but for initial bursts of information only. Should the density of the data in front of you increase, your ability to decode and decipher it (scan/proof-read) becomes more a case of balancing peripheral vision and fixated selection(s).

Your CPU is maxed out, is my point.

AS I AM INFERRING, THE HUMAN BEING IS NOW JUGGLING THE BASICS IN AND AROUND GETTING SPATIAL CUES FROM TEXT, IMAGERY AND TASK MATCHING – ALL CRAMMED INSIDE A SMALL DEVICE. THE PROBLEM HOWEVER WON’T STOP THERE; IT GOES ON INTO A DEEPER CYCLE OF STUPIDITY.
INSIDE METRO THE BALANCE BETWEEN UPPER AND LOWER CASE FLUCTUATES, THAT IS TO SAY AT TIMES IT WILL BE PURE UPPERCASE, MIXED OR LOWERCASE.

Did you notice what I just did? I put all that text in uppercase, and research has also gone on to suggest that when we go full-upper, our reading speed decreases as more and more words are added. That is to say, inside Metro we now use a mixed edition of both – and somehow this is a good thing?

Apple has over-influenced Microsoft.

I’m all for new design patterns in pixel balancing, and I’m definitely still hanging in there on Metro, but what really annoys me the most is that the entire concept isn’t really about breaking away based on scientific data centred on the average human’s ability to react to computer interfaces.

It simply is a competitive reaction to Apple, primarily. Had Apple not existed, I highly doubt we would be having this kind of discussion; it would probably be a fully glyph/charm/icon, visual-thinking-friendly environment.

Instead, what we are probably doing is grabbing what appears to be a great interruption of the design status quo and declaring it “easier” – but reality kicks in fast when you go beyond the initial short burst of information or screen composition into denser territory. Even Microsoft is hard pressed to come up with a Metro-inspired edition of Office.

Metro Reality Check – Typography style.

The reality is that the current execution of Metro on Windows Phone 7 isn’t built or ready for dense information, and I would argue the rationale that typography replaces chrome is merely a case of being the opposite of the typical iPhone-like experience – users are more in love with the unique anti-pattern than with the reality of what is actually happening.

Using typography as your go-to spatial-visualization pattern simply flies in the face of what we actually do know from the small packets of research we have on HCI.

Furthermore, if you think about it, the iPhone itself, when it first came out, was a mainstream interruption to the way we interpret UI on a mobile device: icons, for example, took on more of a candy experience and the chrome itself became themed.

It became almost as if Disney had designed the user interface as their digital mobile theme park – yet here is the thing: it works (notice that when the Metro UI adds pictures to the background, it seems to fit? There’s a reason for that).

Chrome isn’t a bad thing; it taps into what we are hard-wired to do in our ability to process information – we think visually (with a minority being the exclusion).

Egyptians, Asians and Aboriginals wrote their history on walls and paper using visual glyphs/symbols, not typography. That is the most important principle to grab onto: historically speaking, we have always gravitated towards a pictorial view of the world and away from complexity in glyph patterns (text) – that’s why data visualization works better than text-based reports.

We ignore this basic principle because our technology environment has become more advanced, but we do not have extra brainpower as a human race; our genome has not mutated or evolved! We have just gotten better at collectively deciphering patterns, and in turn have built up better habitual usage of those patterns.

Software today has a lot of bad UI out there – I mean terrible experiences – yet we are still able to use and navigate it.

Metro is more marketing / anti-compete than it is the righteous path to HCI design; never forget that part. Metro’s tagline of being “digitally authentic” is probably one of Dieter Rams’ principles being mutated and broken at the same time.

Good design is honest.
It does not make a product more innovative, powerful, or valuable than it really is. It does not attempt to manipulate the consumer with promises that cannot be kept.

I should point out that these ten principles are what have inspired Apple and other brands in the industrial design space. Food for thought.

Lastly, one more thing: what if your audience was 40% autistic/dyslexic – how would your UI differ from the current designs you have before you?


What if you had to build software for undiagnosed adults that suffer from autism, Asperger’s, dyslexia, ADHD etc.

Software today is definitely getting more and more user-experience focused; that is to say, we are preoccupied with getting minimal designs into the hands of adults for task-driven operations. We pride ourselves on knowing how the human mind works through various blog-based theories on how one should design the user interface and what the likely initial reaction of the adult behind the computer will be.

The number of conferences I’ve personally attended that have a person or person(s) on stage touting the latest and greatest in cognitive-science buzzword bingo, followed by best practices in software design, is, well, too many.

On a personal level, my son has a rare chromosome disorder called Trisomy 8. It’s quite an unexplored condition, and I’ve pretty much spent the last eight years interacting with medical professionals who touch on not just the psychology of humans but zero in on the way our brains form over time.

In the last eight years of my research I have learnt quite a lot about how the human mind works, specifically how we react to information and, more importantly, our ability to cope with change – not just in our environments but also in software.

I’ve personally read research papers that explore the impact of society’s current structure on future generations and, more importantly, how our macro and micro environments play a role in how the children of tomorrow cope with change and learning at the same time. That is to say, we adults cope with the emerging technology advancements because for us “it’s about time,” but for today’s child of 5-8 this is a huge problem: they have to develop coping skills to deal with a fluid technology adoption that often doesn’t make sense.

Yesterday we didn’t have NUI; today we do. Icons with a 3.5” floppy disc that represent “saving” have no meaning to my son, etc.

The list goes on as to how rapidly we are changing our environments and, more importantly, how adults who haven’t formulated the necessary social skills to realistically control the way our children are parented often rely on technology as the de-facto teacher or leader (the number of insights I’ve read on how the Xbox 360 has become the babysitter in households is scary).

Getting back to the topic at hand: what if the people you are designing software for have an undiagnosed mental illness – or, better yet, are diagnosed? How would you design tomorrow’s user interface to cope with this dramatic new piece of evidence? To you, the minimal design works; it seems fresh and clear and has definitive boundaries established.

To an adult suffering from Type-6 ADHD (if you believe in it), with its degree of over-focus, it’s not enough; in fact it could have the opposite effect of what you are trying to do in your design composition.

Autism also plays a role: grid formations in design would obviously appeal to autistic traits, given a grid is a pattern that can be locked onto and often agreed with – Asperger’s sufferers may disagree with it, and it could annoy or irritate them in some way (colour choice, too much movement, blah blah).

Who is to say your designs work? If you ask people on the street a series of questions and observe their reactions, you are not really gaining insight into how the human mind reacts to computer interaction. You’ve automatically failed as a clinical trial: the person on the street isn’t just a “normal adult,” and there’s a whole pedigree of relevant historical information you’re not factoring into the study.

At the end of the day, the real heart of HCI here, and this is my working theory, is that we form our expectations of software design from our own personal development. That is to say, if as children we had a normal or above-average IQ, an ability to understand patterns, and an ability to cope with change, we are in turn more likely to adapt to both well-designed and poorly designed software.

That is to say, when you sit down and think about an adult and how they will react to tomorrow’s software, you really have to think about the journey that adult has taken to arrive at this point, and more to the point, how easily they are influenced.

A child who came from a broken home, abandoned by their parents and raised by other adults, and who is now a receptionist within a company, is more likely to have absolutely no confidence in making decisions. That person is now an easy mark for someone with the opposite background, who can easily sway them toward adoption and change.

Put those two people into a clinical trial of how the next piece of software you are going to roll out for a company works, run them through the various tests, and watch what happens.

Most tests in UX / HCI focus on the candidate’s ability to make their way through the digital maze to get the cheese (basic principles of reward and recognition). So, to be fair, it’s really about how the human mind can navigate a series of patterns to arrive at a result (positive or negative), and then how those humans can recall that information at a later date (memory recall meets muscle memory).

Results of this style will tell you, I guess, the amount of friction associated with your change, and give you a score for the impact it will likely have. In reality, though, what we probably need to do as an industry is zero in on how aggressively we can decrease the friction associated with change before the person ever arrives at the console.

How do you give Jenny the receptionist, who came from an abusive childhood, enough confidence to tackle a product like Autodesk Maya (which is hugely complex)? Especially when, if you were to sit down with Jenny, you’d learn she also has a creative side that’s not obvious to all: it’s her way of medicating the abuse through design.

How do you get Jack the stockbroker, who spends most of his time jacked on speed, coke, and caffeine, to focus long enough to read the information in front of him through data-visualisation metaphors and methodologies, when the decisions he makes could impact the future of the global financial systems worldwide? (OK, a bit extreme, but you get my point.)

It’s amazing how much we as a software industry separate normal from abnormal when it comes to software design. It is also amazing how we look at our personas in the design process and attach the basic “this is Mike, he likes Facebook” fluffy profiling. What you may find even more interesting is that Mike may like Facebook, but in his downtime he likes to put fireworks in cats’ butts and set them on fire, because he has this weird fascination with making things suffer, which is probably why Mike now runs Oracle’s user experience program.

The persona, in my view, is simply the design team taking a massive dose of confirmation bias. When you sit down and read research paper after research paper on how a child sees the world, and how that later defines him or her as an adult, well… in my view, my designs take on a completely new shift in thinking in how I approach them.

My son has been tested numerous times and has been given an IQ of around 135, which, depending on how you look at it, puts him at around genius level. The problem, though, is that my son can’t focus or pay full attention to things; he relies heavily on patterns to speed through tasks, yet at the same time he’s not aware of how he did it.

Imagine designing software for him. I do, daily, as I help him figure out life, and it has taught me so much in the process.

Metro UI vs Apple’s iOS 5? Pft, don’t get me started on this subject, as both offer amazing insight into their pros and cons.


Metrotastic: Palette Generator Preview.

I’ve been thinking about how to approach metro designs for the past year now. There’s a lot to the mechanics of getting metro into what I call a “golden ratio” like state; that is to say, given the simplicity of the design(s), I think you can achieve the bulk of the effort metro requires using mathematics and layout proportions that are OCD-level consistent.

Tonight, I sat down inside Adobe Photoshop and decided to take a first pass at the overall ResourceDictionary creation for some of the WPF/Silverlight and Windows Phone 7 projects I often work on. In doing this, I decided the first thing one needs to attack in a metro design is the color selection.

image

Color choice is important in the Microsoft style of metro designs (I call it ms-metro, as the word “metro” is becoming an overloaded term that departs from the core design principles outlined). You’ll note that metro designs to date are really monochrome in the way they handle the selection of colors. To be upfront, I think they rely too heavily on primary colors; by not using shades of the primary/accent colors, the designs come off as too shallow and unpolished. Shading helps provide light/dark/normal contrasts, imho.

I find that in most of my designs I typically rest on three to four color choices overall, including the chrome (dark or light). These are often the basis for my design canvas, and from there it’s really about balancing out the decals and typography and deciding how the overall screens and data flow.

More on this subject when I finish my brand reset (I’m redesigning riagenic.com and introducing metrotastic.com as well – more later).

In this post, though, I thought it would be a good idea to provide a quick overview of my thinking here to gather some feedback.

Color Choice.

image

If you look at most brands around the world, they typically rely on dark/light as a canvas base, and from there it’s really down to one or two primary colors (Google and a few others being the exception, with more than two).

I combine this with a concept I’ve noticed in most modern cars today, where what I call an “input” color exists: pop the hood of your car and notice the yellow parts? They mean it’s safe to touch; the rest, leave to the mechanics.

Looking at the below, I’ve isolated the theme into three color choices, starting with Normal as the primary color choice. Once the primary has been nominated, it’s a case of mixing in some white/black to provide your shading contrasts.

Shades of Normal.

The shading is a bit of a guesstimate at this point, but so far I’ve rested on an 80% or 30% split. That is, using these two values with a white/black shading over the top of the base, you can achieve a contrast setting that’s quite palette-friendly to the ms-metro look and feel.
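To make the mixing concrete, here is a minimal sketch of that shade-generation step: blend a base hex colour toward white (for lighter shades) or black (for darker ones). The 0.8 / 0.3 amounts mirror the 80% / 30% split guessed at above, and the function names are my own, not from any published palette tool.

```python
def _mix(base, target, amount):
    """Linearly blend each RGB channel of `base` toward `target` by `amount`."""
    return tuple(round(b + (t - b) * amount) for b, t in zip(base, target))

def hex_to_rgb(h):
    h = h.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def rgb_to_hex(rgb):
    return "#{:02X}{:02X}{:02X}".format(*rgb)

def shades(base_hex, near=0.3, far=0.8):
    """Return lighter/light/normal/dark/darker shades of a base colour."""
    base = hex_to_rgb(base_hex)
    white, black = (255, 255, 255), (0, 0, 0)
    return {
        "lighter": rgb_to_hex(_mix(base, white, far)),
        "light":   rgb_to_hex(_mix(base, white, near)),
        "normal":  base_hex.upper(),
        "dark":    rgb_to_hex(_mix(base, black, near)),
        "darker":  rgb_to_hex(_mix(base, black, far)),
    }
```

From one nominated primary this gives the full light/dark contrast set, e.g. `shades("#3366CC")`.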

The shades themselves also need slight adjustment depending on how you use them in your UI. If you use the darkest shade as your background, for example (as in the example below), then you have to account for how your foreground will look, which will differ from, say, your lightest color choice. The point is that if you have a dark/light theme switch, you need to adjust not just the base color selection but also the foreground colors to accommodate the shading contrasts.
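One way to automate that foreground adjustment when the theme flips between its darkest and lightest shades is to pick the text colour from the background’s relative luminance (the WCAG formula). This is a sketch of my own, not something from the original palette work, and the 0.5 threshold and the two foreground hex values are illustrative assumptions.

```python
def relative_luminance(hex_colour):
    """WCAG 2.x relative luminance of an sRGB hex colour (0.0 to 1.0)."""
    h = hex_colour.lstrip("#")

    def channel(c):
        c /= 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

    r, g, b = (channel(int(h[i:i + 2], 16)) for i in (0, 2, 4))
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def foreground_for(background_hex):
    """Pick a readable foreground (light text on dark shades, and vice versa)."""
    return "#FFFFFF" if relative_luminance(background_hex) < 0.5 else "#1A1A1A"
```

So a dark/light theme switch only has to swap the background shade; the foreground follows automatically.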

image

Chrome vs Brand

Inside a lot of my designs I use chrome decals. Despite what Microsoft often preaches about letting UI breathe, I still prefer to use decals to help provide separation between areas; imho, Microsoft UI is often barren and flat! We saw hints of this soon after the Windows 8 release, when a designer produced a fake Steam UI that was an example of additive decals.

My approach is to separate the chrome into its own color channel, with its own set of contrast shades (lighter, light, normal, dark, darker).

The same also goes for Input (using the car metaphor above). I’ll typically spend a lot of time at kuler.adobe.com playing around with colors before I find one that matches the branding (primary) nicely; in this case I prefer a blue/green/gray selection.

You can add a fourth palette to this, but in all honesty, once you get to around the fourth color choice things get a bit interesting in the color/contrast department; dangerous design, imho.
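Putting the channels together, the palette described above could be captured as a simple structure with a brand channel, a chrome channel with its five shades, and an input channel. The hex values here are placeholders of my own choosing; only the blue/green/gray pairing and the channel/shade names follow the text.

```python
# Hypothetical three-channel ms-metro style palette (illustrative values only).
PALETTE = {
    "brand": {            # primary/accent colour and its contrast shades
        "light":  "#5C85D6",
        "normal": "#3366CC",
        "dark":   "#244890",
    },
    "chrome": {           # structural decals: five shades of gray
        "lighter": "#E6E6E6",
        "light":   "#B3B3B3",
        "normal":  "#808080",
        "dark":    "#4D4D4D",
        "darker":  "#1A1A1A",
    },
    "input": {            # the "safe to touch" colour, per the car metaphor
        "normal": "#33AA55",
    },
}

def colour(channel, shade="normal"):
    """Look up a hex colour by channel name and shade name."""
    return PALETTE[channel][shade]
```

In a WPF/Silverlight project this maps naturally onto keyed brushes in a ResourceDictionary, one key per channel/shade pair.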

An example.

image

Using the color palette(s) here, I quickly knocked up a basic fake demo UI in the metro style.

For the example, I put in a radial gradient running from DARK-Gray to DARKER-Gray, positioned in the upper-left area; it gives the UI that dull spotlight effect.

I then used NORMAL-Blue as the accent color here; the blue’s role in this design is to act as an opposing contrast to the chrome. You’ll often see this in ms-metro, in designs like Contoso etc.

The menu and most of the text use the colors in the darker TEST palette, but the thing to note here is that I used Normal-Blue as a selection state. That is, whilst the green indicates input, the blue is used to indicate the current selection state. I’ve played around with this for a while now, and in all honesty it annoys me personally that this works, as to me the input color should be consistent. And yet it works.

I should point out that I often use this technique just to give users a spatial understanding of where they are in the user interface(s). In tests I’ve done with users over the past two to three years using this technique, they’ve never bucked the concept or idea; if anything, they’ve consistently remarked that this approach “feels right.” So despite some UX / UI colleagues advising me to avoid it, so far the data says “you’re not right and you’re not wrong either” :) (everyone becomes a UX expert overnight, mind you).

The green in this UI stands out more: it highlights that these buttons are safe to touch and are the focal point of input, and, like I said, it provides that experience similar to popping the hood of your car. In all the usability / UX tests I’ve done over the last year or so, every single time the user has found their way around with minimal eLearning or advice required. I have a theory that this links to how we humans handle perceptual organization when dealing with working memory (i.e. grab a few clip-art pics, pick two from the same category, put them into a grid with four others that aren’t, then ask the candidates to tell you which two are the same and measure their reaction time). I should discuss this more, as it’s quite fascinating to see how people’s IQ maps to UX with a fairly consistent rate of predictability.

Conclusion.

I have plans to drill much deeper into this area of design, and I think I’m really only just scratching the surface of this conversation. The more I get asked to design metro themes for various Microsoft applications, the more I question the overall strategy, given that for me this is quite simple stuff, yet it seems to be in high demand.

I enjoy working on it all now. I used to laugh at it, but for me the approach is getting simpler by the day, and I’d like to see the overall community raise the bar a bit more around this design language; that is to say, I really want to see what others do, as I’m starving for alternative inspiration in this arena of metro-tastic design school.

Here is a sneak preview of my upcoming reset:

image


Some Color Examples

image

image

image
