The core problem. Ever since around 2004, I've never been able to get a stable process in place that enables a designer and developer to share and communicate their intended ideas in a way that ends up in production. Sure, they end up with something in a higher-quality state, but it was never really what they originally set out to build; it was simply an end result of compromises, both technical and visual. Today, it's still there, lingering. I can come up with a design that works on all platforms and browsers, but unless I sit inside the developer enclosure and curate my design through their agile process in a concentrated, pixel-for-pixel way, it simply ends up getting slightly mutated or off target.
The symptoms. A common issue in the process happens soon after the design, in either static or prototype form, gets handed off to the developer or delivery team. They look at the design, dissect it in their minds back to whatever code base they are working on, and start iterating on transforming it from a piece of artwork into an actual living, interactive experience. The less prescriptive I am in the design (discovery) phase, the less likely I'll end up with a result that fits the way I had initially imagined it. Given most teams now live an Agile way of life, the time or luxury of doing a "big up front" design rarely presents itself these days. Instead, the ask is to be iterative and to design in chunks, with the hope that once I've done my part it's handed off to delivery and will come out unscathed, on time and without regression built in. Nope. I end up being the designer paying the tax bill on compromise; I'm usually the guy sacrificing design quality in lieu of "complexity" or "time" derived excuses. I can sit here, as most UXers typically do, and wave my fist at "You don't get us UI and UX people" or argue that "You need to be around the right people" all I want, but in truth this is a formula that gets repeated throughout the world. It's actually the very reason ASP.NET MVC, WPF and Silverlight exist: how do we keep the designer and developer separated in the hope they can come together more cleanly in design and development? The actual root cause of this entire issue is right back at the tooling stage. The talent is there, the optimism is there, but when you have two sets of tooling philosophies all trying to do similar or nearly similar things, it tends to breed this area of stupidity.
If, for example, I'm in Photoshop drawing a button on canvas and using a font to do so, in the back of my mind I realise that the chances of that font displaying on that button within a browser are lower than inside the tool, so I make compromises. If I'm using a grid setting that doesn't match the CSS framework I'm working with, well, guess what: one of us is about to have a bad day when it comes to the designer and developer convergence. If I'm using 8px padding for my Accordion Panel headers in WPF and the designs outside that aren't sharing the same consistency, well, again, someone's in for a bad day.
It's all about grids. Obviously when you design these days a grid is used to help figure out portion allocation(s), but the thing is, unless the tooling from design to development shares the same (or agreed) settings, you open yourself up to failure from the outset. If my grid is 32x32 and your CSS grid uses 30% and we get into the design handover, someone in that discussion has to give up some ground to make it work ("let's just stretch that control" or "nope, it's fixed, just align it left…" and so on start to arise). Using a grid even at the wireframing stage can tease out the right attitude, as you're all thinking in terms of portion and sizing weights (t-shirt size everything). The wireframes should never be 1:1 pixel ready, or whatever unit of measure you choose; they are simply there to give a "sense" of what this thing could look like, but it won't hurt to at least use a similar grid pattern.
T-shirt size it all. Once you settle on a grid setting (column width, gutters and N number of columns) you then have to reduce the complexity back to simplicity in design. Creating t-shirt sizes (small, medium, large etc.) isn't a new concept, but have you ever considered making that happen for spacing, padding, fonts, buttons, text inputs, icons and so on? Keeping things simple and being able to say to a developer "Actually, try using a medium button there when we get to that resolution" is at the very least a vocabulary that you can all converse in and understand. Having the ability to say "well, maybe use small spacing between those two controls" is not a guessing game; it's a simple instruction that empowers the designer to make an after-design adjustment whilst at the same time not causing code headaches for the developer.
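A sketch of how that shared vocabulary might look in code. The scale values (8/16/32) are assumptions for illustration; the point is only that every word the team says maps to exactly one value in the implementation:

```typescript
// Hypothetical t-shirt scale: one table is the single source of truth for
// the words "small", "medium" and "large".
const spacing = { small: 8, medium: 16, large: 32 } as const;
type SpacingSize = keyof typeof spacing; // "small" | "medium" | "large"

// "Use medium spacing between those two controls" becomes a one-word change
// in code, not a renegotiation of pixel values.
function gap(size: SpacingSize): string {
  return `${spacing[size]}px`;
}

console.log(gap("medium")); // "16px"
```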
Color palettes aren't RGB or hex. Simplicity in the language doesn't end with t-shirt sizing; it also has to happen with the way we use colors. Naming colors like ClrPrimaryNormal, ClrPrimaryDark, ClrPrimaryDarker, ClrSecondaryNormal etc. helps reduce the dependency on color specifics whilst at the same time giving the same adjustment potential as the t-shirt sizes: "try using ClrBrandingDarker instead of ClrBrandingLight". If the developer is also color blind (as in, actually colorblind), this instruction helps as well.
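The same single-source-of-truth trick works for the palette. The hex values below are made up for the sketch; only the naming convention comes from the text:

```typescript
// Hypothetical semantic palette: the team converses in role names, and only
// this one table knows the actual hex values (which are invented here).
const palette = {
  ClrPrimaryNormal: "#3b6ea5",
  ClrPrimaryDark: "#2a4f78",
  ClrPrimaryDarker: "#1b3450",
  ClrSecondaryNormal: "#c0c8d0",
} as const;
type ColorName = keyof typeof palette;

// "Try ClrPrimaryDarker instead of ClrPrimaryNormal" is a rename, not a
// repaint, and it works even if the developer is colorblind.
function clr(name: ColorName): string {
  return palette[name];
}

console.log(clr("ClrPrimaryDarker")); // "#1b3450"
```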
Tools need to be the answer. Once you sort the typography sizing, color palette and grid settings, you're now on your way to having a slight chance of coming out of this design pipeline unscathed, but the problem still hasn't been solved. All we have really done is create a "virtual" agreement about how we work and operate, but nothing really reinforces this behavior, and the tools still aren't playing as nicely with one another as they could. If I do a design in, say, Adobe tools, I can upload it to their Creative Cloud quite quickly, or maybe even Dropbox if I have it embedded into my OS. However, my developer team uses Visual Studio's way of life, so now I'm at this DMZ-style area of annoyance. On one hand I'm very keen to give the development team the assets they need, but at the same time I don't want to share my source files, in much the same way they feel about their code. We need to figure out a solution here that ticks each other's boxes. Sure, I can make them come to my front door: cloud or Dropbox. That will work, I guess, but they are using GitHub soon, so do I install some command-line terminal solution that lets me "push" artwork files into this developer world? There is no real "bridge", and yet these two sets of tools have been the dogma of a lot of teams' lives for the better part of 10 years; still no real bridge other than copying and pasting files one by one. For instance, if you were to use the aforementioned workflow and you realize at the CSS end that the padding pixels won't work, how do you ensure everyone sees the latest version of the design(s)? It relies heavily on your own backwater bridge process. My point is this: for the better part of 10 years I've been working hard to find a solution for this developer/designer workflow.
I've been in the trenches, I've been in the strategy meetings and I've even been the guy evangelizing, but I'm still baffled as to how I can see this clear linear workflow while the might of Adobe, Microsoft, Google, Apple and Sun just can't seem to get past the developer-focused approach. Developers aren't ready for design because the tools assume the developer will teach the designer how to work with them. The designer won't go to the developer tools because, simply put, they have low tolerance for solutions that have an overburden of cognitive load mixed with shitty experiences. Five years ago, had we made Blend an intuitive experience that built a bridge between us and Adobe, we'd probably be having a different discussion about day-to-day development. Instead we competed head-on and sent the entire developer/designer workflow backwards; to this day I still see no signs of recovery.
"..The only way you win an argument is if you get the other side to agree with you.." is what my dad would say when he and I used to get into the thick of it. It's a fairly simple statement: in the end, when you have two opposing ideas on the same problem, it comes down to either compromise or an impasse. If it's an impasse, then it will probably come down to the title you have on the day; in my case, Head of User Experience. A title like mine carries some weight, which means I can ignore your opinion and proceed onwards without it, but doing so means I need to qualify my arrogance more. Being the top dog in UX land isn't an excuse to just push past people's "I think" statements and supplant your own "I thinks" on top. Instead, it means we have to be more focused on establishing the "I know" statements that absorb the two opposing ideas. My way of thinking is this: when I reach a point where there isn't any data to support the opinions/ideas, it's now a case of writing multiple tests to get them fact-checked and broken down until we have the ideas transformed into behavioural facts. "I think the users will not like the start menu removed, so don't touch it." Now let's remove the start menu is my immediate thought; screw the statement, what happens when we do it? I'm assuming there will be some negative blowback, but can you imagine the data we can now capture once it's removed, and how the users react? The users will tell us so much: how they use the menu, where they like it, why they like it there, who they are, what they use and so on. That one little failure in Windows 8 is a gold mine of data, and online there are discussion forums filled with topics/messages that centre around "I think", but nobody really has "I know" except Microsoft. My point is this: if you're not in a role that has User Experience in its title, then fine, knock yourselves out with the back and forth of "I think" arguments.
If you are in UX, your job is not to settle for "I think"; instead, hunt for "I know", for you will always get rewarded.
A respondent is asked to walk on a path through a forest from A to B. The respondent is asked to count how many "blue" objects are lined along the path, and their heart rate will also be monitored (baselined/zeroed out beforehand). Before the respondent takes off, the testers place a stick with a shape similar to a coiled snake midway along the path. The respondent is then asked to proceed on the journey; they count the blue objects, and at the end of the path, when they arrive, they give an accounting of their blue-object findings. Their heart rate was normal, in line with normal physical activity. Respondents were less likely to notice the stick. The next round of respondents are asked to do the same, only this time the seed of fear is planted in their subconscious: "oh, others noticed a snake a few hours ago along the path; be careful, and if you see it, sing out. It should be gone by now, and we couldn't find it earlier, so just take note." Respondents begin the journey on the path; they notice the stick immediately, and a lot of messaging between the optics and the brain moves at lightning speed, trying to decipher the pattern(s) needed to place a confirmation on "threat or non-threat" levels. Heart rate spikes, and eventually they realize it's a stick and proceed, keeping a very close eye and a proximity buffer between the stick and themselves as they walk past. The point of that story is this: with the introduction of a new variable (fear) into the standard test, you're able to affect the experience dramatically, to the point where you've also touched on a primal instinct. In software, that "stick" moment can be anything from moving the "start button" on a menu through to changing the way a tabular amount of data has traditionally been displayed. As User Experience creators, we typically move the cheese a lot, and it's more to do with controlling change in our user(s)' behavior (for the greater good).
Persona(s) don't measure that change; all they measure is what happened before you made the change. All you can do is create markers in the experience that help you map your initial persona baseline to the new one, in the hope it provides a bounty of data in which "change" is made obvious. It doesn't… sadly… it just doesn't, and so all we can do is keep focusing on past behavioral patterns in the hope that new patterns emerge. Persona(s) aren't bad, they aren't good; they are just a representative sample of something we knew yesterday that may still be relevant today. The thing I do like about personas from marketing folks is this: they keep everyone focused on the behaviors they'd like to see reappear tomorrow, and that, in the end, is all I ever really needed. Where do you want to head tomorrow? Last example: the NBC Olympics were streamed in 2009 to the entire US, with every sport captured and made available. At the time, everyone inferred that an average viewer would likely spend 2 minutes viewing. In actuality they spent 20 minutes average viewing time, which sent massive ripples through the TV/movie industry in terms of the value of "online viewing". If we had asked candidates back then, both as content publishers and consumers, they'd probably have told us data that they asserted to be relevant at the time. In this instance the Silverlight team were able to serve up HD video online for the first time to many people, and that's what changed people's experience. Today, it's abnormal to even contemplate HD video streaming online as anything but an expected experience for "video"… 5 years ago, it didn't exist. Personas compared between then and now are dramatically different, so while change can in some parts be slow, it can easily expedite to days and months as well as years. I don't dislike personas; I just always remain skeptical of the data that fuels them, but that's my job.
Situation. Today, the ClassView inside Visual Studio is pretty much useless for the most part, in that when you sit down inside the tool and begin stubbing out your codebase (initial file-new creation) you are probably in the "creative" mode of object composition. Visual Studio, in its current and proposed form, does not really aid you in a way that makes sense to your natural approach to writing classes. That is to say, all it can really do is echo back what you've done, or, more to the point, give you an "at a glance" view only.
Improvement. The class view itself should have a more intelligent-by-design visual representation. When you are stubbing out or opening an existing class, the tool should reflect more specifics around not only what the class composition looks like (the "at a glance" view) but should also enable developers to approach their class designs in a more interactive fashion. The approach should enable developer(s) to hide and show methods, properties and so on within the class itself, meaning "get out of my way, I need to focus on this method for a minute", which in turn keeps the developer(s) focused on the task. The ClassViewer should also make it quick to comment out large blocks of code and display visual issues relating to those large blocks, whilst at the same time highlighting which parts of the code do and don't have Attributes/Annotations attached. Furthermore, the ClassViewer should allow developer(s) to integrate their source and task tracking solutions (TFS) in a fine-grained way, that is to say, enable both overall class-level commentary and "TODO" allocation(s), with similar approaches at a finer level such as "property, method, or other" areas of interest (i.e. "TODO: this method is not great code, need to come back and refactor this later").
Feature breakdown. The above is the overall fantasy user interface of what a class viewer could potentially look like. Keep in mind the UI itself doesn't accommodate every single use case; it simply hints at the direction I am talking about.
Navigation.Inside the ClassView there are the following Navigational items that represent different states of usage.
Usage by TBA.
Derived By. The "Derived By" view enables developers to gain a full understanding of how a class handles its known inheritance chain by displaying a visual representation of how it relates to other interfaces and classes.
Minimap. This inheritance hierarchy will outline specifically what the class's relationship model would look like within a given solution (obviously only indexing classes known within an opened solution).
- The end user is able to jump around inside the minimap view to get an insight into what metadata (properties, methods etc.) is associated with each class, without having to open the said class.
- The end user is able to gain a satellite view of what is inside each class via the Class Properties panel below the minimap.
- The end user is able to double-click on a minimap box (class file representation), upon which the file will open directly into the code view area.
- The end user is able to select each field, property, method etc. within the Class Properties data grid. Each time the user selects that specific area, and if the file is open, the code view will automatically position the cursor at the first character within that specific code block.
- The end user is able to double-click on the first circle to indicate that this code block should be faded back, allowing the developer to focus on other parts of the code base. When the circle turns red, the code block's foreground colour will fade back to a passive state (i.e. all grey text); whilst this code is still visible and compilable, it isn't displayed in a prominent state.
- The end user is able to click on the second circle to indicate that the code block itself should take on breakpoint behaviour (when debugging, please stop here). When the circle turns red, it indicates that a debug breakpoint is in place. The circle's right-click context menu will also take on the as-is behavior found within Visual Studio today.
- The end user is able to click on the tick icon (grey off, green on). If the tick state is grey, this indicates that the code block has been commented out and is in a disabled state (meaning, as per commented code, it will not show up at compile time).
- The end user is able to click on the eye icon to switch the code block into either a private or public state (public is considered viewable outside the class itself; i.e. internal and public are treated as one and the same, but the specifics within the code itself will be respected).
- Each row will indicate the name given to the property, its return or defined type, whether it is public or private, and the various tag elements attached to its composition.
- When a row has a known error attached within its code block, the class view will display a red indication that this area needs the end user's attention.
- The eye icon represents whether this class has been marked for public or private usage (i.e. "public" covers whether the class is viewable from outside the class itself; internal is considered "viewable" etc.).
- Tags associated with the row indicate elements of interest; as more per-code-block features are built in, they will in turn display here (e.g. has data annotations, code block is read-only, has notes attached etc.).
Tags. My thinking is that development teams can attach tags to each code block, whilst at the same time the code itself will reflect what I call "decorators" that have been attached (i.e. attributes). Example tags:
- Attribute / Annotation. This tag will enable the developer to see at a glance what attributes or annotations are attached to this specific code block. This is mainly useful from a developer's perspective to ensure the class has the right amount of attributes (oops, did I forget one?) whilst at the same time providing an at-a-glance view of what types of dependencies this class is likely to have (e.g. use case: should EntityFramework data annotations be inside each POCO class, or should it be handled in the DbContext itself? Before we answer that, let's see which code blocks have that dependency).
- Locked. This one's a bit of a tricky concept, but initially the idea is to enable development teams to lock specific code blocks against other developers' manipulation. That is to say, when a developer is working on a specific set of code chunks they don't want other developer(s) to touch, they can put code-locks in place. This still allows other developers to make minor modification(s) to the code and check it in, whilst removing resolution conflicts at the end of the overall work stream (although code resolution is pretty simplified these days, this just adds an additional layer of protecting one's sandpit).
- Notes. When documenting issues within a task or bug, it's at times helpful to leave traces behind that indicate or warn other developers to be careful of xyz issues within this code block (make sure you close out your while loop, make sure you clean up your background threading etc.). The idea here is that developer(s) can leave both class-level and code-block-specific notes of interest.
- Insert your idea here. The tag-specific features outlined so far aren't an exhaustive list; they are simply thought-provokers as to how far one can go within a specific code block. The idea is to leverage the power of Visual Studio to take on a context-specific approach to the way you interact with a class's given composition. The tags themselves can be injected into the code base itself, or they can simply reside in a database that surrounds the codebase (i.e. metadata attached outside of the physical file itself).
- The idea behind this Derived By and Class Properties view is that the way in which developer(s) code day in, day out takes on a more helpful state; that is to say, you are able to make at-a-glance decisions on what you see within the code file itself, while a minimap provides an overarching view of what the composition of your class looks like, given most complex classes can have a significant amount of code in place.
- Tagging code chunks is a way of attaching metadata to a given class without having to pollute the class's actual composition; these could be attachments that are project or solution specific, or they could be actual code manipulations as well (private flipped to public etc.). The idea is simply to enable developer(s) to communicate with one another in a more direct and specific fashion, whilst at the same time enabling them to shift their coding lens to zero in on what's important to them at the time of coding (i.e. fading the less important code to a grey state).
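As a sketch only (this feature doesn't exist, and every name below, from `TagKind` to the `"Customer.cs#GetOrders"` block id, is invented for illustration), the "metadata outside the physical file" idea could be modelled as a sidecar store keyed by a code-block identifier:

```typescript
// Hypothetical sidecar store: tags live next to the codebase, keyed by a
// block id, so the class file itself is never polluted.
type TagKind = "annotation" | "locked" | "note" | "todo";

interface BlockTag {
  kind: TagKind;
  author: string;
  text?: string; // free-form body for notes and TODOs
}

// Map from "file#member" block id to the tags attached to that block.
const tagStore = new Map<string, BlockTag[]>();

function addTag(blockId: string, tag: BlockTag): void {
  const tags = tagStore.get(blockId) ?? [];
  tags.push(tag);
  tagStore.set(blockId, tags);
}

// A TODO attached to one method, without touching the source file.
addTag("Customer.cs#GetOrders", {
  kind: "todo",
  author: "scott",
  text: "this method is not great code, need to come back and refactor later",
});
```

The point of the sidecar design is that tags can be project- or solution-specific and versioned separately, which is exactly the "database that surrounds the codebase" option described above.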
On the choice of grey. Grey is a color that I have used often in my UIs, and I have no issue with going 100% monochrome grey provided you can layer in depth. The thing about grey is that if it is kept flat and left in a minimalist state, it often will not work for situations where there is what I call "feature density". If you approach it from a pure Metro minimalist angle it can still work, but you need to calibrate your contrast(s) to accommodate the end user's ability to hunt and gather for tasks. This is where the Gestalt laws of perceptual organization come into play. The main "law" to pay attention to is the law of continuity: the mind continues visual, auditory, and kinetic patterns. This law, in its basic form, is the process by which the brain decodes a bunch of patterns in full view and begins to assign inference to what it perceives as the flow of the design. That is to say, if you designed a data grid of numeric values that are right-aligned with no borders, the fact the text is right-aligned is what the brain perceives as a column. That law hints that at face value we humans rely quite heavily on pattern recognition; we are constantly streaming in data on what we see, making snap judgment calls on where similarities occur and, more importantly, how information is grouped or placed. When you go limited shades of grey and remove the sense of depth, you're basically sending a scrambled message to the brain about where the grouping stops and starts and what form of continuity is still in place (is the UI composition unbroken, with a consistent experience in the way it tracks for information?). It's not that grey is bad, but one thing I have learnt when dealing with shallow color palettes is that when you go down the path of flat minimalist design, you need at times to rely quite heavily on a secondary offsetting or complementary color.
If you don't, then it's effectively taking your UI, changing it to greyscale and declaring it done. It is not that simple; color can often feed into the other law in Gestalt's bag of psychology 101, the law of similarity, which can often be your ally when it comes to color selection. The involvement of color can lead the user into grouping data easily, despite its density, because a pattern of similarity immediately sticks out. Subtle things like a vertical border separating menus indicate that the groupings on the left and right of the border are "these things are similar". Using the color red in a financial tabular summary also indicates this, as the red values are immediately stand-out elements that dictate "these things are similar", given red indicates a negative value; arguably this is a bit of digital skeuomorphism at work (given red pens were used in the pre-digital world by ledger accountants to indicate bad).
Ok, I will never use flat grey again. No, I'm not saying that flat grey shades are bad. What I am saying is that the way the Visual Studio team have executed this design is, to be openly honest, lazy. It's pretty much a case of taking the existing UI, cherry-picking the parts they liked about the Metro design principles and then declaring it done. Sure, they took a survey and found respondents were not affected by the choice of grey, but anyone who's been in the UX business for more than 5 minutes will tell you that initial reactions are false positives. I call this the 10-second wow effect: if you get a respondent to rate a UI within the first 10 seconds of seeing it, they will, the majority of the time, score it quite high. If you then ask the same respondents 10 days, 10 months, or a year on from the initial question, the scores will most likely decline dramatically from the initial scoring; habitual usage and prolonged use will determine success.
We do judge a book by its cover and we do have an attractiveness bias. Using flat grey in this case simply is not executed as well as it could be, because they have not added depth to the composition. I know, gradients equal non-Metro, right? Wrong. Metro design principles call for a minimalist approach, and while Microsoft has executed on those principles with a consistent flat experience (content-first marketing), they are not correct in saying that gradients are not authentically digital. Gradients are in place because they help us determine depth and color saturation levels within a digital composition; that is to say, they trick you into a digital skeuomorphism, which is a good thing. Even though the UI is technically 2D, they give off a false signal that things are in fact 3D, and if you've spent enough time using GPS UIs, you'll soon realize that we adore our inbuilt depth-perception engine. Flattening out the UI in typical Metro-style UIs works because those UIs have had their data density removed; that is to say, they take on a more minimalist design, with a high amount of energy and focus spent on putting the data on quite a strict diet. Microsoft has yet to come out with a UI that handles large amounts of data, and there is a reason they are not forthcoming with this: they themselves are still working through that problem. They have probably broken the first rule of digital design: they are bending their design visions to the principles, rather than letting the principles evolve and guide the design.
Examples of grey working. Here are some examples of a predominately grey palette being effective; that is to say, Adobe have done quite well in their latest round of product design, especially in the way they have balanced a minimalist design whilst still adhering to visual depth-perception needs (gradients). Everything inside this UI is grouped as you would arguably expect it to be, the spacing is in place, and there is no sense of crowding or abuse of colors. The gradients are not harsh; they are very subtle in their use of light and dark, and even though elements appear to have different shades of grey, they are in fact the same color throughout. Grey can be a deceiving color, which I think has to do with its natural state, but look at this brain game from National Geographic and ask yourself the question: "Are there two shades of grey here?" The answer is no; the dark and light tips give you the illusion of a difference in grey, but what is also tricking the eye is the use of colors and a consistent horizon line.
Summary. I disagree with the execution of this new look. I think they've taken a lazy approach to the design, and to be fair, they aren't really investing in improving the tool this release, as they are most likely moving all investment into keeping up with Windows 8 release schedules. The design given to us is a quick, cheap tactic to provoke the illusion of change, given I am guessing the next release of Visual Studio will not have much of an exciting set of feature(s). The next release is likely to be either a massive service pack with a price tag (the same tactic used with Windows 7 vs. Windows Vista: under the hood things got tidied up, but really you were paying for a service pack + better UI) or a radical overhaul (which I highly doubt). Grey is a fine color to go "full retard" on (Tropic Thunder quote), but only if you can balance the composition to adhere to a whole bunch of laws out there that aren't just isolated to Gestalt psychology 101; there are hours of reading in HCI circles around how humans unpick patterns. Flattening out icons to a solid color isn't a great idea either, as we tend to rely on shape outlines to give us visual cues as to what objects mean. Redesigning the shape, or flattening it out, if done poorly can only add friction or enforce a new round of learning/comprehension, and some of the choices being made are probably unnecessary (icons are always this thing of guesstimation, so I can't fault this choice too harshly, given in my years of doing this it's very hit/miss; i.e. a 3.5-inch disk represents save in UI, yet my kids today wouldn't have a clue what a floppy disk is… it's still there though!). I'm not keen to just sit on my ivory throne and kick the crap out of the Visual Studio team for trying something new; I like this team, and it actually pains me to decode their work.
Instead, I am keen to see this conversation continue with them. I want them to keep experimenting and putting UI like this out there, as to me this tool can do a lot more than it does today. Discouraging them from trying and failing is, in my view, suffocating our potential, but they also have to be open to new ideas and energy around this space as well (so I'd urge them to broker a better relationship with the community around design). Going forward, I have started to type quite a long essay on how I would re-imagine Visual Studio 2011 (I am ignoring DevDiv's efforts to rebrand it VS11; you started the 20XX, you are now going to finish it: marketing fail) and have sketched out some ideas. I'll post more on this later this week, as I really want to craft that post more carefully than this one.
Metro is fast becoming an unclear, messy, craptacular mess of modern interface design. The current execution out there is getting out of control, turning what originally started as Microsoft's plagiarized edition of Dieter Rams' "Ten Principles of Good Design" into what we have before us today.
I am actually ok with that, as if I ever looked back on the first year of my designs in the 90s I’d cringe at the sight of lots of Alienskin Bevels, Glows and Fire plugin driven pixel vomit.
The part I'm a little nervous about, though, is how fast the Microsofties of the world have somehow collectively agreed that text is in and chrome is out – as if science is wrong and what we really need is to get back to the ASCII-inspired typography designs of yesteryear.
Typography is ok, in short bursts.
Spatial visualization is the key term you need to Google a bit more. Let me save you a little Google confusion and explain what I mean.
Humans are not normal. To assume that inside HCI we are all equal in our IQ levels is dangerous; it is quite the opposite, and to be fair the human mental conditions we often suffer from are still quite in the infancy of medicine – we have so much more to learn about the genetic deformations/mutations that are ongoing.
The reality is that each of us takes a different approach to deciphering the patterns in our day-to-day lives. We aren't getting smarter; we're just getting faster at developing habitual comprehension of the patterns we often create.
Let us, for example, assume I snapped up someone from the 1960s, sat him or her in a room and handed them a mobile device. I then asked them to "turn it on" and measured the reaction time from navigating the device itself to switching it on.
You would most likely see a lot of accidental learning, trial and error, but eventually they'd figure it out, and now that information is recorded in their brain for two reasons. Firstly, pressure does that to humans – we record data under duress that is surprisingly accurate (thus bank robbers often figure out that their disguises aren't as effective as once thought). Secondly, they've just discovered fire for the first time – the event gave it meaning ("this futuristic device!!").
What is my point? Firstly, brain capacity has not increased; our ability to think and react visually is, I'd argue, the primary driver of our ability to decode what's in front of us. (Case in point: the usage of an H1 tag breaks up the indexation of comprehending what I've written.)
Research in the early '80s found that we are more likely to detect misspelled words than correctly spelled ones. The research goes on to suggest that this is because we initially obtain shape information about a word via peripheral vision (we later narrow in on the word and make a true/false decision after we've slowed our reading down to a fixated position).
It doesn't stop there. By now you, the reader, have probably fixated on a few mistakes in my paragraph structure or word usage, yet you've still persisted in comprehending the information – despite the flaws.
What's important about this packet of information is that it hints at what I'm stating: a reliance on typography is great, but for initial bursts of information only. Should the density of the data in front of you increase, your ability to decode and decipher it (scan / proof-read) becomes more a case of balancing peripheral vision and fixated selection(s).
Your CPU is maxed out is my point.
AS I AM INFERRING, THE HUMAN BEING IS NOW JUGGLING THE BASICS OF GETTING SPATIAL CUES FROM TEXT, IMAGERY AND TASK MATCHING – ALL CRAMMED INSIDE A SMALL DEVICE. THE PROBLEM, HOWEVER, WON'T STOP THERE; IT GOES ON INTO A DEEPER CYCLE OF STUPIDITY.
INSIDE METRO THE BALANCE BETWEEN UPPER AND LOWER CASE FLUCTUATES – THAT IS TO SAY, AT TIMES IT WILL BE PURE UPPERCASE, MIXED, OR LOWERCASE.
Did you notice what I just did? I put all that text in uppercase, and what research has also gone on to suggest is that when we go full-upper, our reading speed decreases as more and more words are added. Yet inside Metro we use a mixed edition of both – and somehow this is a good thing or a bad thing?
Apple has over-influenced Microsoft.
I'm all for new design patterns in pixel balancing, and I'm definitely still hanging in there on Metro, but what really annoys me the most is that the entire concept isn't really about breaking away based on scientific data centered on the average human's ability to react to computer interfaces.
It is simply a competitive reaction to Apple. Had Apple not existed, I highly doubt we would be having this kind of discussion, and it would probably still be a full glyph/charms/icons, visual-thinking-friendly environment.
Instead, what we are probably doing is grabbing what appears to be a great interruption to the design status quo and declaring it "easier", but the reality kicks in fast when you go beyond the initial short burst of information or screen composition into denser territory – even Microsoft is hard-pressed to come up with a Metro-inspired edition of Office.
Metro Reality Check – Typography style.
The reality is that the current execution of Metro on Windows Phone 7 isn't built or ready for dense information, and I would argue that the rationale that typography replaces chrome is merely a case of being the opposite of a typical iPhone-like experience – users are more in love with the unique anti-pattern than with the reality of what is actually happening.
Using typography as your go-to spatial-visualization pattern simply flies in the face of what we actually do know from the small packets of research we have on HCI.
Furthermore, if you think about it, the iPhone itself, when it first came out, was a mainstream interruption to the way we interpret UI on a mobile device: icons, for example, took on more of a candy experience and the chrome itself became themed.
It became almost as if Disney had designed the user interface as their digital mobile theme park, yet here is the thing – it works (notice that when the Metro UI adds pictures to the background it seems to fit? There's a reason for that).
Chrome isn't a bad thing; it taps into what we are hard-wired to do in our ability to process information – we think visually (with the minority being the exception).
Egyptians, Asians and Aboriginals wrote their history on walls and paper using visual glyphs/symbols, not typography. That is the most important principle to grab onto: historically speaking, we have always gravitated towards a pictorial view of the world and away from complexity in textual patterns (that's why data visualization works better than text-based reports).
We ignore this basic principle because our technology environment has gotten more advanced, but we do not have extra brainpower as a human race – our genome has not mutated or evolved! We have just gotten better at collectively deciphering the patterns and, in turn, have built up better habitual usage of them.
Software today has a lot of bad UI out there – I mean terrible experiences – yet we are still able to use and navigate it.
Metro is more about marketing / anti-competition than it is about being the righteous path to HCI design – never forget that part. Metro's tagline of being "digitally authentic" is probably one of Dieter Rams' principles being mutated and broken at the same time.
Good design is honest.
It does not make a product more innovative, powerful, or valuable than it really is. It does not attempt to manipulate the consumer with promises that cannot be kept.
I should point out that these ten principles are what have inspired Apple and other brands in the industrial design space. Food for thought.
Lastly, one more thing: what if your audience was 40% autistic/dyslexic – how would your UI differ from the current designs you have before you?
Software today is definitely getting more and more user-experience focused; that is to say, we are preoccupied with getting minimal designs into the hands of adults for task-driven operations. We pride ourselves on knowing how the human mind works through various blog-based theories on how one is to design the user interface and what the likely initial reaction of the adult behind the computer will be.
The number of conferences I've personally attended that have a person or persons on stage touting the latest and greatest in cognitive-science buzzword bingo, followed by best practices in software design, is, well, too many.
On a personal level, my son has a rare chromosome disorder called Trisomy 8. It's quite an unexplored condition, and I've pretty much spent the last eight years interacting with medical professionals who touch on not just the psychology of humans but zero in on the way our brains form over time.
In the last eight years of my research I have learnt quite a lot about how the human mind works, specifically how we react to information and, more importantly, our ability to cope with change – something that isn't just about our environments but also plays a role in software.
I've personally read research papers that explore the impacts of society's current structure on future generations and, more importantly, how our macro and micro environments play a role in how the children of tomorrow cope with change and learn at the same time. That is to say, we adults cope with the emerging technology advancements because for us "it's about time", but for today's child of 5-8 this is a huge problem: they have to develop coping skills to deal with a fluid technology adoption that often doesn't make sense.
Yesterday we didn't have NUI; today we do? Icons with a 3.5" floppy disk that represents "saving" have no meaning to my son, etc.
The list goes on as to just how rapidly we are changing our environments and, more importantly, how adults who haven't formulated the necessary social skills to realistically control the way our children are parented often rely on technology as the de-facto teacher or leader (the number of insights I've read on how the XBOX 360 has become the babysitter in households is scary).
Getting back to the topic at hand: what if the people you are designing software for have an undiagnosed mental illness – or, better yet, a diagnosed one? How would you design tomorrow's user interface to cope with this dramatic new piece of evidence? To you the minimal design works; it seems fresh and clear and has definitive boundaries established.
To an adult suffering from Type-6 ADHD (if you believe in it) with a degree of over-focus, it's not enough; in fact, it could have the opposite effect of what you are trying to do in your design composition.
Autism also plays a role: grid formations in design would obviously appeal to autistic traits, given they're a pattern that can be locked onto and often agreed with – while Asperger's sufferers may disagree, and something could annoy or irritate them in some way (colour choice, too much movement, blah blah).
Who's to say your designs work? If you ask people on the street a series of questions and observe their reactions, you are not really providing an insight into how the human mind reacts to computer interaction. You've automatically failed as a clinical trial, as the person on the street isn't just a normal adult – there's a whole pedigree of historical information you're not factoring into the study that is relevant.
At the end of the day, the real heart of HCI – and this is my working theory – is that we formulate our expectations around software design from our own personal development. That is to say, if we as children had a normal or above-average IQ level in terms of our ability to understand patterns and cope with change, we are in turn more likely to adapt to both "well designed" and "poorly designed" software.
That is to say, when you sit down and think about an adult and how they react to tomorrow's software, you really have to think about the journey that adult has taken to arrive at this point – more to the point, how easily they are influenced.
A child who came from a broken home, whose parents left and who was raised by other adults, and who is now a receptionist within a company, is more likely to have absolutely no confidence around making decisions. That person is now an easy mark for someone who has the opposite background and can easily sway this person towards adoption and change.
Put those two people into a clinical trial of how the next piece of software you're rolling out for a company works, run them through the various tests, and watch what happens.
Most tests in UX / HCI focus on the ability of the candidate to make their way through the digital maze in order to get the cheese (basic principles of reward / recognition), so to be fair it's really about how the human mind can navigate a series of patterns to arrive at a result (positive / negative) and, furthermore, how said humans can recall that information at a later date (memory recall meets muscle memory).
These sorts of results will, I guess, tell you the amount of friction associated with your change and give you a score / credit on the impact it will likely have, but in reality what you probably need to do as an industry is zero in on how aggressively you can decrease the friction levels associated with change before the person arrives at the console.
How do you give Jenny the receptionist, who came from an abusive childhood, enough confidence to tackle a product like Autodesk Maya (which is hugely complex)? Sit down with Jenny and you'll learn that she also has a creative component to her that's not obvious to all – it's her way of medicating the abuse through design.
How do you get Jack the stockbroker, who spends most of his time jacked on speed/coke and caffeine, to focus long enough to read the information in front of him through data-visualization metaphors / methodologies, when the decisions he makes could impact the future of the global financial systems worldwide? (OK, a bit extreme, but you get my point?)
It's amazing how much we as a software industry separate normal from abnormal when it comes to software design. It is also amazing how we look at our personas in the design process and attach the basic "this is Mike, he likes Facebook" fluffy profiling. What you may find even more interesting is that Mike may like Facebook, but in his downtime he likes to put fireworks in cats' butts and set them on fire because he has this weird fascination with making things suffer – which is probably why Mike now runs ORACLE's user experience program.
The persona, in my view, is simply the design team having a massive dose of confirmation bias. When you sit down and read research paper after research paper on how a child sees the world – which later helps define him or her as an adult – well... in my view, my designs take on a completely new shift in thinking in how I approach them.
My son has been tested numerous times and has been given an IQ of around 135, which, depending on how you look at it, puts him around the genius level. The problem, though, is that my son can't focus or pay full attention to things and relies heavily on patterns to speed through tasks – but at the same time he's not aware of how he did it.
Imagine designing software for him. I do, daily, and have to help him figure out life; it has just taught me so much in the process.
Metro UI vs. Apple iOS 5... pft, don't get me started on this subject, as both offer amazing insights into pros and cons.
I've been thinking about how to approach Metro designs for the past year now. There's a lot to the mechanics of getting Metro into what I call a "golden ratio"-like state – that is to say, I think due to the simplicity of the design(s) you can achieve the bulk of the effort required by Metro using mathematics and layout / proportions that are OCD-consistent.
Tonight, I sat down inside Adobe Photoshop and decided to draw a line in the sand at the overall Resource Dictionary creation for some of the WPF/Silverlight and Windows Phone 7 projects I often work on. In doing this, I decided the first thing one needs to attack with a Metro design is the color selection.
Color choice is important in the Microsoft style of Metro design (I call it ms-metro, as the word Metro is becoming an overloaded term, departing from the core design principles outlined). You'll note that Metro designs to date are really monochrome in the way they handle color selection – to be upfront, I think they rely too heavily on primary colors, and by not using shades of the primary / accent colors the designs come off too shallow / unpolished. Shading helps provide light/dark/normal contrasts, imho.
I've decided that in most of my designs I typically rest on 3-4 color choices overall – including the chrome (dark or light). These are often the basis for my design canvas, and from here it's really about balancing out the decals and typography and deciding how the overall screens and data flow.
More on this subject when I finish my brand reset (I’m redesigning riagenic.com and introducing metrotastic.com as well – more later).
In this post, though, I thought it would be a good idea to provide a quick overview of my thinking here to gather some feedback.
If you look at most brands around the world, they typically rely on dark/light as a canvas base, and from there it's really down to one or two primary colors (Google etc. being the exception, with more than two).
Combine this with a concept I've noticed in most modern cars, where what I call an "input" color exists – pop the hood of your car and notice the yellow parts? They mean it's safe to touch; the rest, leave to the mechanics.
Looking at the below, I've isolated the theme into three color choices, starting with Normal as the primary color choice. Once the primary has been nominated, it's a case of mixing in some white/black to provide your shading contrasts.
Shades of Normal.
The shading is a bit of a guesstimate at this point, but so far I've rested on an 80% or 30% split. That is, using these two values with a white/black shading over the top of the base, you can achieve a contrast setting that's quite palette-friendly to the ms-metro look and feel.
The shades themselves also need slight adjustments depending on how you use them in your UI. If you use the darkest shade as your background, for example (as in the below example), then you have to account for how your foreground is going to look, which will differ from, say, your lightest color choice. The point is, if you have a dark/light theme switch, you need to adjust not just the base color selection but also the foreground colors to accommodate the shading contrasts.
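To make the mechanics concrete, here's a minimal sketch of that white/black mixing in Python. The function names and the mapping of the 80%/30% amounts onto the five shades are my own reading of the approach, not a spec – tweak the amounts to taste:

```python
WHITE, BLACK = (255, 255, 255), (0, 0, 0)

def mix(base, overlay, amount):
    """Blend each RGB channel of `base` toward `overlay` by `amount` (0.0-1.0)."""
    return tuple(round(b + (o - b) * amount) for b, o in zip(base, overlay))

def metro_shades(normal):
    """Build a lighter/light/normal/dark/darker ramp from a base colour.

    Mapping 80% -> lighter/darker and 30% -> light/dark is my assumption.
    """
    return {
        "lighter": mix(normal, WHITE, 0.80),
        "light":   mix(normal, WHITE, 0.30),
        "normal":  normal,
        "dark":    mix(normal, BLACK, 0.30),
        "darker":  mix(normal, BLACK, 0.80),
    }

# e.g. a blue accent base
print(metro_shades((0, 120, 215)))
```

The nice side effect of deriving shades mathematically rather than eyedropping them in Photoshop is that a theme switch only has to change the base colour; the ramp regenerates itself.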
Chrome vs Brand
Inside a lot of my designs I use chrome decals. Despite what Microsoft often preaches about letting UI breathe, I still prefer to use decals to help provide separation amongst areas – imho, Microsoft UI is often barren and flat! We saw hints of this soon after the Windows 8 release, when a designer mocked up a fake Steam UI that was an example of additive decals.
My approach is to separate the chrome into its own color channel, with its own set of contrast shades (lighter, light, normal, dark, darker).
The same also goes for Input (using the car metaphor above). I'll typically spend a lot of time at kuler.adobe.com playing around with colors before I find one that matches the branding (primary) nicely – in this case I prefer a blue/green/gray selection.
You can add a fourth palette to this, but in all honesty, when you get to around the fourth color choice things get a bit interesting in the color / contrast department – dangerous design, imho.
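If you do push into a third or fourth channel, one cheap sanity check for the foreground/background pairings is a relative-luminance contrast ratio (this is the WCAG formula, which I'm borrowing here rather than anything Metro-specific; the palette values below are hypothetical stand-ins, not colours from my actual theme):

```python
def _linear(channel):
    """Convert an sRGB channel (0-255) to linear light, per the WCAG definition."""
    c = channel / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb):
    """Relative luminance of an (r, g, b) colour."""
    r, g, b = (_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast(fg, bg):
    """WCAG contrast ratio between two colours (ranges from 1:1 to 21:1)."""
    hi, lo = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (hi + 0.05) / (lo + 0.05)

# Hypothetical chrome/brand/input channels in the spirit of the post.
palette = {
    "chrome": (45, 45, 48),    # dark grey canvas
    "brand":  (0, 120, 215),   # blue accent / selection state
    "input":  (96, 169, 23),   # green "safe to touch"
}
white_text = (255, 255, 255)
for name, bg in palette.items():
    print(f"white on {name}: {contrast(white_text, bg):.1f}:1")
```

A ratio of at least 4.5:1 is the usual readable-body-text threshold, and a dark/light theme switch is exactly the place where these numbers silently fall over if you only swap the base colours.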
Using the color palette(s) here, I quickly knocked up a fake basic demo UI in the Metro style.
In the example, I put in a radial gradient running from DARK-Gray to DARKER-Gray and placed it in the upper-left area – it gives the UI that dull spotlight effect.
I then used NORMAL-Blue as the accent color, whereby the blue's role in this design is to act as an opposing contrast to the chrome – you'll often see this in ms-metro designs like Contoso etc.
The menu and most of the text use the colors in the darker TEST palette, but the thing to note here is that I used Normal-Blue as a selection state. That is, while the green indicates input, the blue indicates the current selection state. I've played around with this for a while now, and in all honesty it annoys me personally how this works – to me the input color should be consistent. And yet it works?
I should point out that I often use this technique just to give users a spatial understanding of where they are in the user interface(s). In tests I've done with users over the past 2-3 years using this technique, they've never bucked the concept or idea – if anything, they've consistently noted that this approach "feels right". So despite some UX / UI colleagues advising me to avoid it, so far the data says "you're not right and you're not wrong either" :) (everyone becomes a UX expert overnight, mind you).
The green in this UI stands out more; it highlights that these buttons are safe to touch and are the focal point of input – like I said, it provides an experience similar to popping the hood of your car. In all the usability / UX tests I've done over the last year or so, every single time the user has found their way around with minimal eLearning / advice required. I have a theory that this links to how we humans handle perceptual organization when dealing with working memory (i.e. grab a few clipart pics, pick two from the same category and put them into a grid with four others that aren't, then ask the candidates to tell you which two are the same and measure their reaction time – I should discuss this more, as it's quite fascinating to see how people's IQ maps to UX with a fairly consistent rate of predictability).
I have plans to drill deeper into this area of design, and I think I'm really only just scratching the surface of this conversation. The more I get asked to design Metro themes for various Microsoft applications, the more I question the overall strategy, given that for me this is quite simple stuff – yet it seems to be in high demand.
I enjoy working on it all now – I used to laugh at it, but the approach is getting simpler by the day, and I'd like to see the overall community raise the bar a bit more around this design language. That is to say, I really want to see what others do, as I'm starving for alternative inspiration in this arena of metro-tastic design school.
Here is a sneak preview of my upcoming reset
Some Color Examples