Metro is fast becoming an unclear, messy caricature of modern interface design. The current execution out there is getting out of control, turning what originally started as Microsoft's borrowed edition of Dieter Rams' "Ten Principles of Good Design" into what we have before us today.
I am actually ok with that; if I ever looked back on the first year of my designs in the 90s, I'd cringe at the sight of lots of Alien Skin bevel, glow and fire plugin-driven pixel vomit.
The part I'm a little nervous about, though, is how fast the Microsofties of the world have somehow collectively agreed that text is in and chrome is out – as if science is wrong, and what we really need is to get back to the basics of the ASCII-inspired typography design of yesteryear.
Typography is ok, in short bursts.
Spatial visualization is the key phrase you need to Google a bit more. Let me save you a little Google confusion and explain what I mean.
Humans are not uniform; to assume that inside HCI we are all equal in our IQ levels is dangerous – it is quite the opposite. To be fair, the human mental conditions we often suffer from are still in the infancy of medicine – we have so much more to learn about the genetic deformations/mutations that are ongoing.
The reality is that humans each take a different approach to how we decipher patterns in our day-to-day lives. We aren't getting smarter; we're just getting faster at developing habitual comprehension of the patterns we create.
Let us assume, for example, that I snapped up someone from the 1960s, sat him or her in a room and handed them a mobile device. I then asked them to "turn it on" and measured the reaction time from navigating the device itself to switching it on.
You would most likely find a lot of accidental learning – trial and error – but eventually they'd figure it out, and that information is now recorded in their brain for two reasons. Firstly, pressure does that to humans: under duress we record data that is surprisingly accurate (thus bank robbers often figure out that their disguises aren't as effective as once thought). Secondly, it's a "discovering fire for the first time" moment – the event itself gave it meaning ("this futuristic device!").
What is my point? Firstly, our brain capacity has not increased; our ability to think and react visually is, I'd argue, the primary driver of our ability to decode what's in front of us. (Case in point: the usage of an H1 tag breaks up the indexation of comprehending what I've written.)
Research in the early 80s found that we are more likely to detect misspelled words than correctly spelled ones. The research goes on to suggest that the reason for this is that we initially obtain shape information about a word via peripheral vision (we later narrow in on the said word and make a true/false decision after we've slowed our reading down to a fixated position).
It doesn't stop there: by now you, the reader, have probably fixated on a few mistakes in my paragraph structure or word usage as you've read this, yet you've still persisted in comprehending the information – despite the flaws.
What's important about this packet of information is that it hints at what I'm stating: a reliance on typography is great, but for initial bursts of information only. Should the density of the data in front of you increase, your ability to decode and decipher it (scan / proofread) becomes more a case of balancing peripheral vision and fixated selection(s).
Your CPU is maxed out is my point.
AS I AM INFERRING, THE HUMAN BEING IS NOW JUGGLING THE BASICS IN AND AROUND GETTING SPATIAL CUES FROM TEXT, IMAGERY AND TASK MATCHING – ALL CRAMMED INSIDE A SMALL DEVICE. THE PROBLEM, HOWEVER, WON'T STOP THERE; IT GOES ON INTO A DEEPER CYCLE OF STUPIDITY.
INSIDE METRO THE BALANCE BETWEEN UPPER AND LOWER CASE FLUCTUATES – THAT IS TO SAY, AT TIMES IT WILL BE PURE UPPERCASE, MIXED OR LOWERCASE.
Did you also notice what I just did? I put all that text in uppercase, and what research has also gone on to suggest is that when we go all-uppercase, our reading speed decreases as more and more words are added. That is to say, inside Metro we now use a mixture of both – and somehow this is a good thing or a bad thing?
Apple has over-influenced Microsoft.
I'm all for new design patterns in pixel balancing, and I'm definitely still hanging in there on Metro, but what really annoys me the most is that the entire concept isn't really about breaking away based on scientific data centered on an average human's ability to react to computer interfaces.
It is simply a competitive reaction to Apple, primarily. Had Apple not existed, I highly doubt we would be having this kind of discussion – it would probably be a glyph/charm/icon, visual-thinking-friendly environment.
Instead, what we are probably doing is grabbing what appears to be a great interruption in the design status quo and declaring it "easier", but the reality kicks in fast when you go beyond the initial short burst of information or screen composition into denser territory – even Microsoft is hard pressed to come up with a Metro-inspired edition of Office.
Metro Reality Check – Typography style.
The reality is that the current execution of Metro on Windows Phone 7 isn't built or ready for dense information, and I would argue that the rationale that typography replaces chrome is merely a case of being the opposite of a typical iPhone-like experience – users are more in love with the unique anti-pattern than they are with the reality of what is actually happening.
Using typography as your go-to spatial-visualization pattern of choice simply flies in the face of what we actually do know from the small packets of research we have on HCI.
Furthermore, if you think about it, when the iPhone first came out it was more of a mainstream interruption to the way in which we interpret UI via a mobile device – icons, for example, took on more of a candy experience and the chrome itself became themed.
It became almost as if Disney had designed the user interface as their digital mobile theme park, yet here is the thing – it works (notice that when the Metro UI adds pictures to the background it seems to fit? There's a reason for that).
Chrome isn't a bad thing; it taps into what we are hard-wired to do in our ability to process information – we think visually (with a minority being the exception).
Egyptians, Asians and Aboriginals wrote their history on walls/paper using visual glyphs/symbols, not typography. That is the most important principle to grapple onto: historically speaking, we have always shown evidence of gravitating towards a pictorial view of the world, and away from complexity in glyph patterns (text) (that's why data visualization works better than text-based reports).
We ignore this basic principle because our technology environment has gotten more advanced, but we do not have extra brainpower as a human race – our genome has not mutated or evolved! We have just gotten better at collectively deciphering the patterns, and in turn have built up better habitual usage of them.
Software today has a lot of bad UI out there – I mean terrible experiences – yet we are still able to use and navigate them.
Metro is mostly marketing / anti-compete rather than the righteous path to HCI design – never forget that part. Metro's tagline of being "digitally authentic" is probably one of Dieter Rams' principles being mutated and broken at the same time.
Good design is honest.
It does not make a product more innovative, powerful, or valuable than it really is. It does not attempt to manipulate the consumer with promises that cannot be kept.
I should point out that these ten principles are what have inspired Apple and other brands in the industrial design space. Food for thought.
Lastly, one more thing: what if your audience was 40% autistic/dyslexic – how would your UI differ from the current designs you have before you?
Last night I was sitting in a child psychologist's office watching my son undergo a whole heap of cognitive testing (given he has a rare condition called Trisomy 8 Mosaicism), and in that moment I had what others would call a "flash" or "epiphany" (i.e. the theory is we get ideas based on a network of ideas that pre-existed).
The flash came about from watching my son do a few Perceptual Reasoning Index tests. The idea in these tests is to present a group of images (in grid form), and the child has to assign semantic similarities between the images (ball, bat, fridge, dog, plane would translate to ball and bat being the semantically similar pair).
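To make the test concrete, here's a toy sketch in Python of that pairing task – the category map is entirely invented for illustration and isn't drawn from any real test material:

```python
# Toy sketch of a perceptual-reasoning pairing task: find the two items
# in a grid that share a semantic category.
# The category map below is invented purely for illustration.

CATEGORIES = {
    "ball": "sports", "bat": "sports",
    "fridge": "appliance", "dog": "animal", "plane": "vehicle",
}

def find_similar_pair(items):
    """Return the first pair of items sharing a category, or None."""
    seen = {}  # category -> first item observed with that category
    for item in items:
        cat = CATEGORIES.get(item)
        if cat in seen:
            return (seen[cat], item)
        if cat is not None:
            seen[cat] = item
    return None

print(find_similar_pair(["ball", "bat", "fridge", "dog", "plane"]))
# → ('ball', 'bat')
```

The point of the sketch: the test hands you a pile of items and your brain does exactly this category lookup, just far faster and without an explicit dictionary.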
This for me was one of those "aha!" moments. You see, when I first saw the Windows 8 opening screen of boxes / tiles, shown with a mixed message about letting the user interface "breathe" combined with ensuring a uniform grid / golden-ratio style rant … I just didn't like it.
There was something about this approach that I just instantly disliked. Was it because I was jaded? Was it because I wanted more? There was something I didn't get about it.
Over the past few days I've thought more about what I don't like about it, and the most obvious reaction I had was to the fact that we're going to rely on imagery to process which apps to load and not load. Think about that: you are now going to have images – some static, others animated – to help you gauge which of these elements you need to touch / mouse-click in order to load?
Re-imagining or re-engineering the problem?
This isn't re-imagining the problem; it's simply taking a broken concept from Apple and making it bigger, so instead of icons we now have bigger imagery to process.
Just like my son, you're now being attacked at the perceptual-reasoning level on which of these "items are the same or similar", and given we also have full control over how these boxes are clustered, we in turn will put our own internal taxonomy into play here as well…. Arrghh…
Now I'm starting to form the opinion that the grid-box layout approach is not only not solving the problem, but is actually probably a lurking usability issue (more testing needs to be done and proven here, I think).
Ok, I've arrived at a conscious opinion on why I don't like the front screen – now what? The more I thought about it, the more I kept coming back to the question: "Why do we have apps, and why do we cluster them on screens like this?"
The answer isn't just a prospective-memory rationale; the answer really lies in the context in which we as humans lean on software for our daily activities. Context is the thread we need to explore on this screen, not "look, I can move apps around and dock them". That's part of the equation, but in reality all you are doing is mucking around with grouping information or data once you've isolated the context to an area of comfort – that, or you're still hunting / exploring for the said data and aren't quite ready to release (in short, you're accessing information in working memory and processing the results in real time).
As the idea is beginning to brew, I think about two sources of inspiration – the user interfaces I have loved and continue to love, the ones that get my design mojo happening. User interfaces such as the one that I think captures the concept of Metro better than what Microsoft has produced today – the Microsoft Health / Productivity video(s).
Back to the Fantasy UI for Inspiration
If you analyze the attractive elements within these videos what do you notice the most? For me it’s a number of things.
I notice the fact that the UI is simple and, in a sense, "Metro paint-by-numbers", which despite its basic composition is actually quite well done.
I notice the user interface is never just one composition – the UI appears to react to the context of usage for the person, not the other way around. Each user interface has a role or approach that carries out a very simple solution to a problem, but does so in a way that feels a lot more organic.
In short, I notice context over and over.
I then think back to a user interface design I saw years ago at Adobe MAX. It's one of my favorites: in it, Adobe were showing off what they thought could be the future of entertainment UI, with simply a search box up top on screen. The default user interface is somewhat blank, providing a passive "forcing function" on the end user to give some clues as to what they want.
The user types the word "spid", as their intent is Spiderman. The user interface reacts to this word and the entire screen changes to the theme of Spiderman whilst spitting out movies, books, games etc. – basically you are overwhelmed with context.
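The mechanics of that reaction are simple to sketch. Here's a minimal, purely illustrative version in Python – the catalog, titles and structure are hypothetical stand-ins, not anything from the Adobe demo:

```python
# Minimal sketch of a context-reactive search: a partial query like
# "spid" pulls back the whole matched context, not a single result.
# The catalog below is a hypothetical stand-in for a real media index.

CATALOG = {
    "Spiderman": {"movies": ["Spider-Man 2"], "games": ["Web of Shadows"]},
    "Star Wars": {"movies": ["A New Hope"], "games": ["KOTOR"]},
}

def react_to_query(partial):
    """Return (theme, related items) for the first title matching the prefix."""
    q = partial.lower()
    for title, items in CATALOG.items():
        if title.lower().startswith(q):
            return title, items  # hand back the entire context at once
    return None, {}

theme, items = react_to_query("spid")
print(theme)  # → Spiderman
```

The design choice worth noticing: the query resolves to a *context* (theme plus every related item), and the UI re-skins itself around that, rather than returning a flat list of hits.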
I look at Zune; I type the words "The Fray" and hit search. Again, contextual relevance plays a role and the user interface is now reacting to my clues.
I look back now at the Microsoft Health videos and then back to the Windows 8 screens. The videos are one and the same with Windows 8 in a lot of ways, but the huge difference is that one doesn't have context – it has apps.
The reality is, most of the apps you have have semantic data behind them (except games?), so in short, why are we fishing around for "apps" or "hubs" when we should all be re-imagining how an operating system of tomorrow like Windows 8 accommodates a personal level of both taxonomy and contextually driven usage – one that also respects each of our own cognitive processing capabilities?
Now I know why I dislike the Windows 8 user interface: the more I explore this thread, the more I look past the design elements and "wow" effects, and the more I come to the realization that, in short, this isn't a work of innovation; it's simply a case of taking existing broken models on the market today and declaring victory over them because they're now either bigger or easier to approach from a NUI perspective.
There isn't much re-imagining going on here; it's more re-engineering instead. There is a lot of potential for smarter, more innovative and relevant improvements in the way we interact with the software of tomorrow.
I gave a talk similar to this at a local Seattle design user group once. Here are the slides – I still think it holds water today, especially in a Windows 8 futures discussion.
I did it! And I feel exposed. I sat down tonight and put together the first of what may or may not be many (depending on how badly I get critiqued) screencasts around UI / UX + Microsoft technology.
In this video, I show folks how one can take a workflow design concept and inject it into your canvas of choice, but in an isometric format. I like isometrics simply because you can get more of a spatial view than most screen angles; that, and it derives from my old pixel-art days, so… yeah… isometrics are the way!
Hope you enjoy, and feedback welcomed.
In this screencast I show how one can take an isometric workflow map and transpose it into Expression Blend 4.
The phrase "authentically digital" makes me want to barf rainbow pixels. It was a quote pulled from a Windows Phone 7 reviewer when he first got hold of the said phone. At first you could arguably rail against the concept of what "authentically digital" means and simply write it off as yet more marketing fluff to jazz up a situation in an unnecessary way.
I did, until I sat back and thought about it more.
Metro in itself has its own design language attached; they cite a bunch of commandments that the overall experience is to respect and adhere to – that is to say, someone has actually sat down and thought the concept through (rare inside Microsoft UX). I like what the story is pitching and I agree in most parts with the laws of Metro – that is to say, I am partially on board, but not completely.
I'm on board with what Metro could be, but I am not excited about where it's at right now. I state this as I think the world of software is going through what the fashion industry has done for generations – a cultural rebirth / reboot.
Looking back at retro, not Metro.
Looking at the past: back in the late 80s the world was filled with bold, flat-looking user interfaces that made use of a limited color palette, given the video capabilities back then weren't exactly the greatest on earth. EGA was all the rage, we were seeing hints of VGA, and we hated the idea that CGA was our first real cut at graphics.
EGA eventually faded out and we found ourselves in the VGA world (color TV vs. black and white, if you will). Life was grand, and with the 32-bit vs. 16-bit color wars coming to a conclusion, the world's creative space moved forward leaps and bounds. Photoshop users found themselves creating some seriously wicked UI – stuff that made you thank the UI gods at the time for plug-ins like Alien Skin etc., as they gave birth to what I now call the glow/bevel revolution in user interface design.
Chrome inside software started to take on an interesting approach; I actually think you could trace the origins of this creative new wave back to products like Winamp & Windows Media Player skins. The idea that you could take a few assets, feed them into mainstream products like these and in turn create an experience on the desktop that wasn't a typical application was interesting (not to mention Macromedia Director's influence here).
I think we all simply got on a user-interface sugar-induced high; we effectively went through our awkward 80s fashion stage, where crazy, weird-looking outfits / music etc. were pretty much served up to the world to gorge on. This feast of weird UI has probably started to wind down thanks to the evolution of web applications – more importantly, what they in turn taught us, slowly.
Web taught the desktop how to design.
The first lesson we have learnt about user interface design from the web is simple – less is more. Apple knocks this out of the park extremely well, and I'd argue Apple wasn't its creator; the Web 2.0 crowd, as they used to be known, was. The Web 2.0 crowd found ways to keep the UI basic and to the point yet visually engaging, with minimalist views in mind. It worked, and continues to work to this day – even on Apple.com.
Companies like Microsoft have seen this approach to designing user interfaces and came to a fairly swift rationale: if one were to create a platform for developers & designers to work in a fashion much like the web, desktop applications themselves could take on an entirely new approach.
History lesson is over.
I now look at Metro, thinking back on that past evolution, and can't help but think we're going back to a reboot of the EGA world, in that we are looking for an alternative approach to design in order to attract / differentiate from the past. Innovation is a scarce commodity in today's software business, so we in turn are looking at ways to re-energize our thinking around software design, but in a way that doesn't create cognitive overload – be radical, be daring, but don't be disruptive to process/task.
I like it, I like this source of inspiration, but my first instinct was simple – I hope your main source of success isn't a reliance on typography, especially in this 7-second-attention economy of today. Sure enough, there it is: the reliance, in Windows Phone 7. Large typography taking over the areas where chrome used to live, in order to do what chrome once did. The removal of color / boundary textures to create large empty space filled with 70px+ half-seen, half-hidden typography is what Microsoft's vision of tomorrow looks like.
Metro isn't WP7; Metro is Microsoft's Future Vision.
My immediate reaction to seeing the phone (before the public did) back inside Microsoft was "are you guys high? This is not what we should be doing; we are close, but keep at it, you're nearly there! Don't rush this!". This reaction was the equivalent of looking at a Category 5 tornado and demanding it turn around and seek another town to smash to bits – brave, forward-thinking, but foolish.
This phone has to ship; it's already had two code resets. "Get it done, fix it later" is pretty much the realistic vision behind Windows Phone 7 – NOT Metro.
Take a look at what the Industry Innovation Group has produced via a company called Oh, Hello. In this vision of tomorrow's software (2019 to be exact) you'll see a strong reliance on the Metro laws of design.
The Principles of Metro vs. Microsoft Future Vision.
In order to start a conversation around Metro in the near future, one has to identify with the level of thinking associated with its creation. Below are the principles of Metro – more to the point, these are the design objectives, the creative brief if you will, on what one should approach Metro with.
Clean, Light, Open, Fast
- Feels Fast and Responsive
- Focus on Primary Tasks
- Do a Lot with Very Little
- Fierce Reduction of Unnecessary Elements
- Delightful Use of Whitespace
- Full Bleed Canvas
You could essentially distill these points down to one word – minimalist. Take a minimalist approach to your user interface and the rewards are simple: a sense of responsiveness in the user interface, a reliance on less information (which in turn increases decision response in the end user) and a reduction in creative noise (distracting elements that add no value other than being cool at the time).
In Figure 1, I'd strongly argue you could adhere to these principles. This image is from the Microsoft Sustainability video, but inside it you've got a situation which respects the concept of Metro – after all, given the wide-open brief here, under any one principle you could argue either side of this.
Personally, I find the UI in question approachable. It makes use of a minimalist approach and provides the end user with a central point of focus. Chrome is in place, but it's not intrusive and isn't overbearing. A reliance on typography is there, but at the same time it approaches things in a manner that befits the task at hand.
Microsoft's vision of this principle comes out via the phone user interface above (Figure 2). I'm not convinced this is the right approach to minimalism. I state this as the iconography within the UI is inconsistent – some icons are contained, others are just glyphs indicating state. The containment within the actual message isn't as clear in terms of spacing – it feels as if the user interface is willing to sacrifice content in order to project who the message is from (Frank Miller). The subject itself has a lower visual priority, along with the attachment within – more to the point, the attachment has no apparent containment line in place to highlight that the message has an attachment.
Microsoft's original vision of the device's future has a different look to where Windows Phone 7 is today. Yet I'd state that the original vision is more in line with the principles than the actual Windows Phone 7. It struck a balance between the objectives provided.
The iconography is consistent and contained; the typography is balanced and invites the user's attention to important specifics – what happened, where, and oh by the way, more below… And lastly, it makes use of visuals such as the photo of the said person. The UI also leverages the power of peripheral vision to give the user a sense of spatial awareness – it's subtle, but it takes on the look and feel of an "airport" scenario.
Is this the best UI for a device today? No, but its approach is more in tune with the first principle than, arguably, the current Windows Phone 7 approach, which relies on fierce amounts of whitespace, a reduction in iconography to the point where icons clearly play a secondary role, and lastly an emphasis on the parts of the UI which I'd argue have the lowest importance (i.e. the screen before would have indicated who the message is from; now I'm more focused on what the message is about!).
- Type is Beautiful, Not Just Legible
- Clear, Straightforward Information Design
- Uncompromising Sensitivity to Weight, Balance and Scale
I love a good font as much as the next designer. I hoard them like my icons; in fact it's a disease, and if you're a font lover a must-see video is Helvetica. That being said, there is a balance between text and imagery – a balance struck daily in a variety of mediums, mainly advertising.
Imagery will grab your attention first as it taps into a primitive component within your brain – the part that works without you realizing it's working. The reason is that your brain is often on auto-pilot, constantly scanning for patterns in your everyday environment. It's programmed to identify with three primitive checks: fear, food and sex. Imagery can tap into these straight away: if you have an image of an attractive person looking down at a beverage, you can't help but first think "that person's cute (attractive bias), and what are they looking at? Oh, it's food!…" All this happens before your brain actually takes the time to analyse any text on the said image. To put it bluntly, we do judge a book by its cover, with an extreme amount of prejudice. We are shallow; we prefer to view attractive people over ugly ones, unless we are conveying a fear-focused point – "if you smoke, your teeth will turn into this guy's – eewwww" (notice why anti-cigarette campaigns don't use attractive people?).
Back to the point at hand: celebrating typography. The flaw in this beast, despite my passion for fonts, is that given we are living in a 7-second attention economy (we scan faster than we ever have before), reliance on typography can be a slippery slope.
In Figure 6, a typical futuristic newspaper that has multi-touch (oh, but I dream), you'll notice the various levels of usage of typography (no secret to newspapers today). The headings deliberately approach the user with different font types, font weights, uppercase vs. lowercase and, for those of you really paying attention, at times different kerning / spacing.
The point being: typography is in actuality processed first by your brain as a glyph, a pattern to decode. You've all seen that link online somewhere where the wrod is jumbled in a way that you at first are able to read, but then straight away identify the spelling / order of the said words. The fact I just did it, along with the poor grammar / spelling within this blog, indicates you agree with that point. You are forgiving the majority of the time because you've established a base understanding of the English language; combine that with your fast-paced attention span, and you are more focused on absorbing the information than picking apart how it got to you.
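That jumbling trick is easy to reproduce. A quick sketch that scrambles only the interior letters of each word, leaving the first and last in place – preserving the word-shape cue the research above describes:

```python
import random

def jumble(word, rng):
    """Shuffle a word's interior letters, keeping first and last fixed."""
    if len(word) <= 3:
        return word  # nothing interior to shuffle
    middle = list(word[1:-1])
    rng.shuffle(middle)
    return word[0] + "".join(middle) + word[-1]

# Seeded so the scramble is reproducible from run to run.
rng = random.Random(42)
sentence = "typography is processed first as a pattern"
print(" ".join(jumble(w, rng) for w in sentence.split()))
```

Read the output aloud and you'll likely parse it on the first pass – the outer letters and overall word shape carry most of the recognition load.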
Typography can work in favor of this, but it comes at a price between balancing imagery / glyphs with words.
The above image (Figure 7) is an example of Metro in the wild. The typography here is not in too bad a shape, except for a few things. The first being that the "Pictures" text makes use of a large amount of the canvas, to the point where the background image and the heading are probably duking it out for your attention. The second part is the one that irritates me the most: the size of the secondary heading and the list items is quite close in terms of scale. Aside from the font weight being a little bolder, there is no real sense of separation here compared to what it could be if one were to respect the principle of celebrating typography.
Is Segoe UI the only font allowed in this vision? I hope not. Are the font weights "light" and "regular" the only two weights attached to the UI? What relevance does the background hold to the area – pictures? OK, flimsy contextual relevance at best, but in comparison to Figure 3 above, a subtle usage of watermarks etc. to tap into your peripheral vision would give you more of a pattern to grapple onto. Take these opinions and combine them with the reality that there is no sense of containment, and I'm just not convinced this is in tune with the principle. It's like the designers of Metro on Windows Phone 7 took 5% of the objectives and just ran with it.
Comparing Figure 7 and Figure 8, the usage of typography contrasts sharply, yet both use the same one and only font – Segoe UI. The introduction of color helps you separate the elements within the user interface; the difference in scale is obvious, along with weight and transforms (uppercase / lowercase). Almost 80% of this user interface is typography-driven, yet the difference between the two is, I hope, obvious.
Don't despair, it's not all doom and gloom for the Windows Phone 7 future. Figure 9 (above) is probably one of the strongest hints of a "yes!" moment for the said phone I could find. Typography is used, but add visual elements and approach the design of the typography slightly differently, and you may just have a stake in this principle. The downside is the choice of color: orange and light gray on white is OK for situations with increased scale, but on a device where lighting can be hit or miss, you probably need to approach this with bolder colors. The picture in the background also creeps into your field of view over the text, especially in the far-right panel.
Alive in Motion
- Feels Responsive and Alive
- Creates a System
- Gives Context to Improve Usability
- Transition Between UI is as Important as the Design of the UI
- Adds Dimension and Depth
I can't really speak to these principles via text on a blog, but what I would say is that the Windows Phone attacks this relatively well. I still think the FlipToBack transition is too tacky, and the way the screens transition in and out at times isn't as attractive as, for example, the iPhone (i.e. I really dig how the iPhone zooms the UI back and to the front). The usage of kinetic scrolling is also one that gives you a sense of control – as if there are some really well-oiled ball bearings under the UI's plane, so that if you flick it up, down, right or left, the sense of velocity and friction is there.
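That ball-bearing feel is usually nothing more than exponential friction applied to a flick velocity on every frame. A rough sketch of the physics – the friction constant and thresholds here are arbitrary illustrative values, not anything from an actual phone implementation:

```python
# Rough sketch of kinetic scrolling: a flick sets a velocity, and each
# frame friction bleeds it off until the pane coasts to rest.
# FRICTION and the frame step are assumed, illustrative values.

FRICTION = 0.95   # fraction of velocity kept per frame (assumed)
DT = 1 / 60       # 60 fps frame time

def coast(velocity, min_speed=5.0):
    """Yield pane offsets from a flick until motion falls below min_speed."""
    offset = 0.0
    while abs(velocity) >= min_speed:
        offset += velocity * DT
        velocity *= FRICTION   # the "well-oiled ball bearings"
        yield offset

positions = list(coast(velocity=2000.0))  # a flick of 2000 px/sec
print(f"frames: {len(positions)}, travel: {positions[-1]:.0f}px")
```

Tuning that one friction constant is largely what separates a pane that feels heavy and sluggish from one that feels loose and slippery.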
If you zoom in and out of the UI, the sense that it will expand and contract in a fluid nature also gives you the element of discovery (progressive disclosure), but it can also give you a sense of less work attached.
Taking Figure 11 & Figure 12 (start and end), one could imagine a lot of possibilities here in terms of how the transition would work. The Reptile node expanding out to give way to types of reptiles is hopefully obvious, whilst at the same time the focus on Reptile is also kept in place (via a simple gradient / drop shadow to illustrate depth). Everything could snap together in under a second or maybe two, but it's something you approach with a degree of purpose-driven direction. The direction is "keep your eye on what I'm about to change, but make note of these other areas I'm now introducing" – you have to move with the right speed and the right transition effect, and at the same time not distract too heavily in areas that aren't important.
Content, Not Chrome
- Delight through Content Instead of Decoration
- Reduce Visuals that are Not Content
- Content is the UI
- Direct interaction with the Content
Chrome is as important as content. I dare anyone to provide any hint of scientific data highlighting the negative effects of grouping in user interface design. Chrome can be overused, but at the same time it can be a lifesaver, especially when the content becomes overbearing (most line-of-business applications today suffer from this).
Having chrome serves a purpose: to provide the end user with a boundary of content within a larger canvas. An example is below.
I could list more examples, but because I'm taking advantage of the Microsoft Sustainability video, I figure these are sufficient examples of how chrome is able to break up the user interface into contextual relevance. Chrome provides a boundary – the areas of control, if you will – to separate content into piles of semantic action(s). Specifically in Figure 15, the brown chrome is much like the dashboard in your car: your main focus is the road ahead, that's your content of focus, but at the same time having access to other pieces of information can be vital to a successful outcome. Chrome also provides access to actions with which you can carry out other principles of human interaction – e.g., adjusting window placement and separation from other areas offers the end user a chance of tucking the UI into an area for later resurrection (prospective memory).
Windows Phone 7, for example, prefers to leverage the power of typography and background imagery as its "chrome" of choice. I'm in stern disagreement with this, as the phone itself projects what I can only describe as uncontained vast piles of emptiness rather than actual content. The biggest culprit of all for me is the Outlook client on the phone.
The Outlook UI for me is like an itch I have to scratch: I want the messages to have subtle separation, and I want the typography to strike a balance between "chrome" and whitespace.
Chrome is also not just about the outer regions of a window/UI; it has to do with the internal components of the user interface – especially the input areas. The above (Figure 17) is an example of the Windows Phone 7 / Metro keyboard(s). At first glance they are simple, clean and open, but the part that captures my attention the most is the lack of chrome or, more to the point, separation. I say lack, as the purpose of chrome here would be to simulate tactile touch without actually giving you tactile touch. The keyboard to the right has ok height, but the width feels cramped, and when I type on the device it feels like I'm going to accidentally hit the other keys (so I'm now more cautious as a result).
The above (Figure 18) offers the same concept but now with "chrome", if you will. Nice even spacing, solid use of typographic principles and clearly defined separation between the actions below.
The iPhone has also found a way to strike a balance between chrome and the previously stated principles. The thing that struck me most about the two keyboards is not which is better, but how the same problem was thought about differently. Firstly, as you type, an enlarged character shows – indicating you hit that character (reward). Secondly, the actual keys have a similar scale in terms of height/width proportions, yet the key itself having a drop shadow (indicating depth) is to me more inviting to touch than a flat one (it's like asking which you prefer: a holographic keyboard, or one with tactile touch and physical embodiment?). If you also combine sound and vibration as the user types, you can help trick the end user's senses into comfortable input.
I digress from chrome, but the point I'm making is that chrome serves a purpose, so don't be quick to declare the principles of Metro the "yes!" moment, as I'd argue the jury is still out either way.
- Design for the Form Factor
- Don’t Try to be What It’s NOT
- Be Direct
I can't speak to this much other than to say this isn't a principle, it's more marketing fluff (the only one with a tenuous-at-best attachment to design principles would be "design for the form factor", meaning don't try to scale a desktop user interface down onto a device; make the user interface react to the device, not the other way around).
Metro is a concept. Microsoft has had a number of goes at this concept, and I for one am not on board with its current incarnation inside the Windows Phone 7 device. I think the team has lost sight of the principles they themselves put forward, and given the Industry Innovation Group has painted the above picture of what's possible, it's not as if the company hasn't a clue. There is a balance to be struck between what Metro could be and what it is today. There are parts of Windows Phone 7 that are attractive, and then there are parts where I feel it was either rushed or engineering overtook design as the reason things are the way they are (maybe the design team couldn't be bothered arguing to have more time/money spent on propping up the areas where it falls short).
People around the world will have mixed opinions about what Metro is or isn't, and about what makes a good design versus what doesn't. We each pass our own judgement on what is attractive and what isn't; that's nothing new to you. What is new to you is the rationale that software design is taking a step back into the past in order to propel itself into the future. That is, the industry is rebooting itself again, but this time the focus is on simplicity, and by approaching Metro through the Microsoft Futures vision rather than Windows Phone 7 today, I have high hopes for this proposed design language.
If the future is taking Zune Desktop plus Windows Phone 7 today and simply rinsing and repeating, then all this will become is a design fad, one that doesn't really offer much depth other than limited respite from the typical desktop/device UI we've become used to. If that is enough, then in reality all it takes is a newer design methodology to hit our screens and we're off chasing the next evolution without consistency in our approach (we are simply chasing shiny objects).
I've got limited time on this earth, and I'd like to live in a world where the future is about breaking down large amounts of unreadable/unattractive information into parts that propel our race forward, not stifle it into bureaucracy-filled celebrations of mediocrity.
Apple as a company has kick-started a design evolution, and say what you will about the brand, but the iPhone has dared everyone to simply approach things differently. The Windows Phone team was at times paralyzed by a sense of "not good enough" when it came to releasing the vNext phone; it went through a number of UI and code resets to get to the point it's at now. It had everything to do with the iPhone: it had to dominate its market share again and it had to attract consumers in a more direct fashion. Apple may not have the entire world locked to its device, but it's caused a strong amount of disruption in what's possible. It did not do this via the Metro design language; they simply made up their own internally (who knows what that really looks like under the covers).
Microsoft has responded and declared Metro design its alternative to the Apple culture; the question now is whether the company can maintain the discipline required to respect its proposed principles.
I'd argue that so far they haven't, but I am hopeful for Windows 8.
Lead with design, engineer second.
Which are you? A developer or a designer? How far we have come and yet how little we have learnt! As someone who worked in/with the Silverlight/Expression teams to spread the message that Microsoft had entered the UX space and that we were essentially building a mutated developer-meets-designer pixel-ninja type of person, the reality is people still need to put you into a category. I often find myself torn between which side of that fence I sit on. To be blunt, I can do both. I know every single API inside Silverlight/WPF like the back of my hand, and I can code in 9 languages outside of .NET – and they aren't script-kiddy languages either. I can do 3D and 2D design to the point where many have commented on my "eye for design" or said "you are freakishly good", but still I'm tormented by having to pigeonhole myself into either category. The reality is people aren't ready to accept the person who can do both just yet, and it takes a lot of proof to build trust that you can. I'll let you know how I go with this journey over time, but for now suffice to say it's new territory for me and yet still profitable, as I can easily pick left or right and just swim in either pool where needed.
I need a UX guy urgently. I've seen this a lot in the past 8 months. I get called in at the last sprint or towards the end of a project and find myself having to triage features vs design vs engineering constraints. It's the worst time to engage a UX/UI person, as in the end you're asking for a Hail Mary – "can you make this UI look good and functional, oh, and don't change the code base in the process?" is a common brief. The trick I've learnt is that I can do it; it just takes a lot more patience and focus, and you really need to know every single backdoor into Blend as well as the Silverlight/WPF APIs. It is a challenge, but it can be a success if there is enough time, the communication is clear and expectations are set properly. The bottom line, folks: engage early and often. Even if it's just 1 or 2 hours of their time per week or day, make sure you have someone in the room who bleeds UI/UX from the beginning of the project. Don't engage late, as the price will go up and you won't be able to salvage as much as you think – by then it's not a UX consultation, it's just a pixel polish.
I don't use Blend, just Visual Studio. You are breaking my design heart when you say this to me. Everybody reading this right now, open up Blend and pick a fight with it. If you take the time to get to know it, and get to know it well, it can, when used right, help you out enormously with both Silverlight and WPF development. If you're a person who likes to indent and keep their XAML neat, stop right now; you are trying to skate uphill. XAML is not meant to be a hands-on language. It's a common data format created to allow design and code tools to work against the same model without giving up their inherent capabilities. If you are editing it by hand, just stop, as you are not doing it right.
Pick a color, any color. The number of times I've walked into an engagement and seen a rainbow of colors in the UI has left me thinking that it's not so much a lack of willpower around design; it's more that not everyone is up to speed with color theory (there is a science to color selection). The easiest tip I give people is this. Typically a brand has one or two colors that are used the majority of the time, then white or black for the majority of the content depending on the background composition (white for dark, black for light). When you design a user interface for your next Silverlight/WPF project, pick one or two colors and create a ResourceDictionary called [ThemeName]Colors. Then take each color and break it into four shades (dark, darker, light, lighter). Next, select what I call your chrome colors – the colors you would use for the outer chrome of your UI; in Windows it's typically around four shades of gray (light, lighter, dark, darker) – and label them accordingly (i.e. chromeAccent1, chromeAccent2, etc.). Keep your color naming conventions abstract (use camel or Pascal case – whatever lights your design candle). Now don't use any more colors. Lock that in and use these. Don't deviate at all from this plan unless you have a designer in the room who is held responsible for retaining the product's/project's brand. Lastly, and this is the most important thing I can say to developers worldwide: don't use bold colors. Stick to pastel or light colors, as you're typically not ready for the hurdles that bold colors can throw at you. In saying this, I did notice that the Metro theme Microsoft has put into play makes me a little nervous, as it relies heavily on bold color scheming – a great and cheap way of avoiding depth in a UI, but one that creates a potential color-scheming hazard around highlights vs lowlights and focal areas of your GUI composition.
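As a minimal sketch of what that ResourceDictionary could look like (the theme name, hex values and the accent* keys are illustrative assumptions; only the chromeAccent naming and the shade counts come from the advice above):

```xml
<!-- BlueThemeColors.xaml: one brand accent broken into shades, plus neutral chrome grays -->
<ResourceDictionary
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">

    <!-- Brand accent and its four shades -->
    <Color x:Key="accentBase">#FF3399CC</Color>
    <Color x:Key="accentDark">#FF267399</Color>
    <Color x:Key="accentDarker">#FF194D66</Color>
    <Color x:Key="accentLight">#FF66B3D9</Color>
    <Color x:Key="accentLighter">#FF99CCE6</Color>

    <!-- Chrome grays for the outer UI, named abstractly -->
    <Color x:Key="chromeAccent1">#FFF0F0F0</Color>
    <Color x:Key="chromeAccent2">#FFD9D9D9</Color>
    <Color x:Key="chromeAccent3">#FF808080</Color>
    <Color x:Key="chromeAccent4">#FF404040</Color>

    <!-- Brushes reference the colors so every shade lives in one place -->
    <SolidColorBrush x:Key="AccentBrush" Color="{StaticResource accentBase}"/>
    <SolidColorBrush x:Key="ChromeBrush" Color="{StaticResource chromeAccent1}"/>
</ResourceDictionary>
```

Merge the dictionary into App.xaml and reference only these keys from your views; the moment a control wants a color that isn't in this file, that's your cue to push back.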
Typography is another concern of mine, as too much reliance on ye olde text can put a UI two steps back instead of forward – people don't like to read in general; visuals often handle the workload – review the many articles available on "extraneous cognitive load" for proof of this.
MVVM, that is all. I get that some of you want to get gung-ho with PRISM, MEF or your own framework. Bottom line: if you're starting out and haven't figured out the tricks and hacks of WPF/Silverlight just yet, you are better off sticking to simple MVVM. It handles 90% of your workload and doesn't require you to learn WPF/Silverlight and an extra layer of complexity at the same time. Keep it simple, and work on the assumption that the code you write in your first year of WPF/Silverlight is code you will want to throw away or refactor later on. It's natural to write bad code, or to work on something that a year later you'll look back on and say "what was I thinking?". You've got your Microsoft UX training wheels on; embrace this openly and you'll do just fine. Walk into a room pretending you have it all under control and you'll fold eventually, as you can't credibly hold that facade for long. If you can, also check out Autofac, which will complement your codebase nicely. MEF/PRISM are really for folks who have a team of engineers and are looking to build a complex, mammoth-sized system – that's the reality, even if Microsoft tries to deliver a different message – I'm an ex-Microsoft Product Manager, so I can spin with the best of them 🙂 hehe.
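A minimal sketch of the plain-MVVM shape being recommended – just INotifyPropertyChanged, no framework (the class and property names here are illustrative, not from any particular project):

```csharp
using System.ComponentModel;

// A plain view model: the XAML view binds to Name; no PRISM/MEF required.
public class CustomerViewModel : INotifyPropertyChanged
{
    private string name;

    public string Name
    {
        get { return name; }
        set
        {
            if (name == value) return;
            name = value;
            OnPropertyChanged("Name"); // tells the binding engine to refresh
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}
```

Set the view's DataContext to an instance of this class and `{Binding Name}` does the rest; that covers the "90% of your workload" case well before you need a larger framework.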
UI and UX are two different things. I need to say this out loud. If you ask someone to do UI, they will do just that: focus on designing a user interface for an existing concept. If you need someone to wireframe and help you figure out how the whole user interface should be built, that's where a UX person comes in. They are two different work streams, just as a developer and a DBA are different. You can find people who do both, but keep that in mind.
Oh, I need someone local. Yes, having someone onsite is definitely a goal a team should always be on the hunt for. SCRUM teams etc. benefit from this and it doesn't need to be evangelized further. I will say, however, that having someone working remotely can be just as effective – especially a guy like me in Australia. I say this as at the moment I'm working on a project with Microsoft and it's working out in our favor: while they sleep I work, while they work I sleep, and we're able to have a show & tell (i.e. remote stand-up) with one another where the design and development work meet in the middle pretty well. I'm able to say "ok, here's what I've done for you, it's in your inbox when you wake up", and in the afternoons they're able to say "ok, here's what I need from you to start my day tomorrow", and so the cycle is a 24-hour development run that works quite well. It's not for everyone, but so far I've found it works without any issues other than an expensive mobile/cell bill at my end lol.
Show me some of your work? There's a reason why painters and builders never work on their own house – the same goes for me. This blog, as weak as it looks, is still the front door for my company – RIAGENIC. I need to get off my ass this month and put my site up, but the problem I have is distilling what I do into a webpage that makes sense, as I'm my own worst client (picky, arrogant, and willing to agonize over every pixel and paragraph on the site). I also need to find a way to promote myself while at the same time associating myself with a brand, so that for me is a tricky marketing hurdle. I'll soon see if I can pull it off! 🙂
Find people you can trust and don't have to babysit. I've worked with a lot of developers in my time, and nothing annoys me more than babysitting incompetence. I'm fine with newbies learning the ropes; that I find far more rewarding, as you're working with someone who has passion and a determination to learn. It's the people who are lazy and expect you to spoon-feed them every 5 minutes on "how". I didn't learn Cinema 4D by sitting next to a 3D wizard and asking "ok, so how do I write an XPresso script that makes the wheels rotate per frame?"; I sat on Google with one objective – "find how to make wheels rotate in XPresso" – and eventually I found it. Along the way I learnt a lot about Cinema 4D and XPresso as I hunted for my answers. If you work with me, I will set the benchmark high for each person I meet; I will quickly assess your skill set and then raise the bar to challenge you to meet it, as I do want to work with people who get it and are smart at what they do. That being said, I love nothing more than coming into a cubicle of developers and feeling like I'm the newbie in the room, as now I'm in learning-from-others mode. At the moment I'm working with Joseph Cooney (one of WPF's first MVPs, of learnwpf.com fame). I'm learning heaps from interacting with this guy, and it's a fun project we're on at the moment. I don't have to babysit him and he doesn't have to babysit me. We just looked at the specifications, agreed on a solution structure and boom, we're off grinding pixels and code. The next job I go to where they need a WPF/Silverlight dev, Joseph is one I'd recommend – again, it's about networking, building relationships and finding people you can trust and work alongside.
Reputation is a false economy. I often hear how folks worry over their reputation. I've watched people spend way too much time either building it or recovering it from a bad project. The simple truth, from what I've learnt, is that if you know your work, approach things with openness and honesty, don't dump and run, and admit mistakes, you'll come out fine. Just focus on doing good work; reputation has a habit of following and self-regulating over time. Often the people I've heard bad things about on a project aren't the ones at fault: the recruiter / business development salesperson didn't set expectations appropriately, or the project was a train wreck well before this person arrived and they were simply the last ones holding the steering wheel as it went off the road.
Agile/SCRUM is not a religion. I've seen a lot of developers follow this concept by the book to the point where I often wonder if they are conscious of how bad it has gotten. The correct way and the natural way are two different things, and in the end communication is the core piece of this. Stop arguing over protocol and just focus on establishing a clear line of communication; work on getting estimations as close as you can while at the same time admitting to your fellow team mates the moment you can't do something or are over on your estimate – just put up your hand and say a simple word: "help". I personally work under the assumption that I'm the dumbest guy in the room; it keeps me calibrated, and if you work with me and think "geez, I thought that guy knew all of this", that's fine – I probably do, but I'll ask anyway just to make sure. I've felt the wrath of a false hero before, and I ended up having to do his work and mine at the same time, only to be burnt for it later on. I could have thrown this person under a bus and said "well, actually it was his fault", but in reality I just absorbed the blame and have avoided working with this person since. That is all.
Note: I am a UX/UI ninja for hire. Contact me at scott at this domain.
I have an icon fetish that is disturbingly wrong, in that I collect them, hoard them, and would happily spend Microsoft's good hard-earned money on as many of them as I can find – if allowed.
Yet what makes icons so special? Why do they enhance an application's user interface to the point where it is almost lost without them? Why do Microsoft and Apple spend a lot of money and time ensuring that menu navigation and icons are done in a manner that's not only attractive to the eye but enhances a user's experience?
Well, I decided to ask our UX folks, the same folks who choose icons for our operating systems, software applications and so on. I had one intent: to get to the bottom of this whole icon business and, more to the point, to see where icons can play a role in tomorrow's RIA. RIA is going to embrace the icon market, of that I have no doubt, and so with this, on to the top 10 questions with Frank Bisono & Brittnie Hervey (UX demi-gods).
Top 10 Questions for the Icon Ninjas here at Microsoft.
Q1. What is an icon? We all see them daily in software, but what does the icon represent to the end user?
Brittnie: An icon represents an action a user will take.
Frank: For our purposes, an icon would be a graphical representation (small picture or object) for a file, application or command (action). For the end user it should be an easy way to quickly identify what product they are in and what action they could take on a given object.
Q2. When you choose an icon, what is the process that you go through in selecting the right one?
Brittnie: In Vista there are set usages for every icon, which we define when it is created. We align the concept of the functionality the user is invoking with the best visual representation we can get, based on elements rather than words.
Frank: So generally you don’t just have the luxury of choosing a pre-existing icon here. For most products or features, we create a custom icon. On the server side, this means literally THOUSANDS of icons. We follow the same process as Brittnie described above. That generally means meeting with a PM and translating the description for this icon into a graphical representation. Sometimes we have existing elements that we re-use to create an icon, other times, it’s a completely custom concept and we start from scratch.
Q3. Microsoft has released some guidelines around designing icons; do you feel that the icon design community adheres to these?
Brittnie: I believe it depends on group and situation. Our current guidelines do not map 1 to 1 to what MS sets as guidelines. I think we adhere when appropriate. This is a harder question to answer.
Frank: If you mean the design community OUTSIDE of Microsoft, well – it all depends. We haven’t put out the most robust set of guidelines I’ve seen, but they are generally a pretty good start. The main problem I have seen with regards to icons is that sometimes the importance of an icon is overlooked. There are the obvious visual aspects of creating an icon, but then there are also things to consider such as geopolitical issues that can come back to haunt a developer or studio. The last thing you want to do is insult a particular culture with the use of an icon that has a detrimental meaning to them. I’ve also seen updates to products that continue to use icons developed for an older platform like XP. If you are targeting your application to run in Vista, then you need to refresh the icons to match the visual style we have set for Vista (the aero style). The last thing I’ll note is that all too often I’ve seen folks take a shortcut and use an icon designed for use at say 256x256 and they scale it down to fit a 16x16 block. Or even worse, they upscale an icon. That just doesn’t fly. There are a number of reasons why you can’t just shrink an icon in Photoshop and call it a day, and the same goes for sizing an icon up. At the end of the day, it just doesn’t look good.
Q4. I've always said that the icon market is ripe for the picking given the technology going forward. Where do you foresee this market going, and is there room for icons in formats such as XAML?
Brittnie: I foresee icons becoming less important and the UI itself becoming more self explanatory. With that being said I don’t think icons will ever go completely away, just less needed.
Frank: The icon market is definitely getting more advanced. We are now seeing icons as large as 512x512 directly in the UI and with much richer detail than ever. I totally see a future with dynamic icons that change as the application’s state changes. As the graphics engines in our OS get better, so too will the use of icons and the value they can bring to the OS or application. That’s just one example. As far as XAML, there’s definitely something to be said there as well. Right now if you take an icon created in Illustrator, you could export that as XAML and drop that right into code using Expression Blend. After all, a vector is nothing more than a mathematical computation rendered as a graphic right? But another way to drop that into XAML is by defining a brush in Blend with an icon image and then using that brush in Blend (this is for when you only have a bitmap icon for example). The “icon” does ok at scaling, but there is room for improvement using that technique. XAML is definitely going to present some interesting possibilities moving forward with WPF applications. We are still WAY early in defining that, but as we move more towards a WPF based environment, you will see more attention being given to XAML Icons.
Q5. I have an icon fetish; I just seem to store them, thousands of them. Do you also have hoards of icons tucked away on your hard drive, and what do you look for in the design styles?
Brittnie: No, I do not have many different icons stored on my hard drive, but we do have thousands tucked away on a server/share. The design style is the same for all the icons we create, as we have the Vista guidelines we follow. I only collect those icons. 🙂
Frank: Well, I’m not going to lie here, I am a total icon fanboi 🙂 I literally have TENS of THOUSANDS of them hoarded away on my drives at home. I’ve been collecting them for years. I just love customizing my desktop and folders using custom icons.
Q6. OSX and Windows Vista each have a unique design style, and lately the "glass effect" plays a role in design style(s). Why is this so, and do you have any thoughts on the next upcoming fashionable style?
Brittnie: I believe this is because it is a new visual style that you don't see in a lot of places, and it gives the icons an extra bang. They feel more like a piece of artwork than just a simple icon, and glass adds some elegance. I can't predict the next trend, but if I had to guess, I would think it would be a hybrid between the MSN style of icons and the current Vista style, giving a little less importance to the icon and more importance to the UI.
Frank: Hmmm, the glass factor. Yeah, this is all the rage and trend lately, but I think we’ll see some evolution in the coming years. The glass thing is just a little too shiny and a little too frosty in places and I think you will start seeing that get toned down a bit. The big effect there is transparency. Like anything else though, too much is a bad thing. I would totally tell you what I think the next trend in icons will be, but I’d rather keep that a secret and let you see it when we release it.
Q7. What is the biggest mistake a developer or designer can make in choosing an icon for their applications?
Brittnie: In our world they could use the icon incorrectly, which then breaks the user's understanding of what that icon does. Windows, Windows Live & IE all use the same library of icons, so using them correctly helps the user immediately identify what action is going to be taken when the icon is clicked, thus enhancing the user experience. The second thing they could do wrong is size an icon up from a smaller file; pixelation then occurs in the image.
Frank: Totally in sync with Brittnie here. An example of using an icon incorrectly would be choosing an icon that has traditionally had a different metaphor to mean something else in your UI. This is BAD…REAL BAD. It’s hard to retrain people to think about something in a different way and if your use of an icon gives the user a result other than the intended result because of a bad metaphor, well then you just hosed the usability of your product. Metaphors in general can be a bad thing and should be avoided unless it is universally known. You have to think about localization here and what the icon could potentially mean in another culture.
Q8. What advice would you give to the design market around producing a set of icons, given that most software vendors require a themed approach?
Brittnie: I guess the advice I would give depends on what style they were trying to create an icon in. If they were trying to create an icon in the Vista style, I would say the most important thing to do is work closely with the library owner so they can understand what is already built and how to visually represent something that needs to map into our icons, and to make sure the style guide is being followed.
Frank: For designers outside of MSFT, the #1 thing I'd say is that they need to know their target audience. Sounds stupid, but if none of your users are running Vista (which we all know they should, right? 🙂), then you shouldn't be using the Aero theme for your icons or your UI will look like butt. This is where proper research comes into play. Know the limitations of your product. Think about WHERE the icon will be used: platform, form factor, etc. (a mobile device or a huge honkin' projection screen in a NOC center). Think about the environment in which your icon will be seen (potential lighting situations, types of display technology). We all like to think we are designing icons that will be used on a Windows box in a home or office environment, but the reality is that your icon could end up in a place you never expected it to. You have to think about a lot of factors when choosing the right design. Think ahead, anticipate the unexpected and ask a lot of questions.
Q9. Icons typically have two states associated with them (e.g. recycle bin: full/empty). Yet some (Adium on OSX, for example) now use animation to represent status changes. What advice would you give around keeping that from getting out of hand?
Brittnie: I would say each situation needs to be addressed case by case. I avoid using animation or multiple states of icons unless there is a status that needs to be represented for the icon's functionality. I think the cost of making second/third icons and the additional cost of animating those icons will keep us from doing it too often. That is usually where I push back when an icon of this type is requested.
Frank: I would actually argue that it ISN’T typical for an icon to have 2 states. There are definitely times when this is the case however. Status change and animation are two separate things. You can have one without the other. I think that having status change is an effective way of providing feedback to a user for certain things. Animation is where things would tend to get out of control if not done correctly. In the case of an object that is synchronizing something or transferring data, I can see the value of adding animation to an icon because it’s representing that there is a task in progress. It’s live feedback letting the user know something is happening. But gratuitous animation for the sake of animation is where you start getting into the cheese factor. How long did those flaming .gifs and websites with music last back in 1995? Yeah…
Q10. Why can't we have a universal icon format that fits all platforms, devices and other digital surfaces?
Brittnie: I think it would be AMAZING to have all platforms support the same file type/format, but I don't know if this would ever be possible considering the constraints on the web that don't exist in the OS.
Frank: I also think that the idea of a universal icon format would be ideal. Unfortunately we live in a world where everyone wants to be king and nobody wants to concede to the other player. You can say that about almost any format on the market. Blue Ray vs. HD DVD / PDF vs. XPS / RAW vs. DNG, the list goes on. Then you have the issue of maintaining backwards compatibility and re-engineering existing apps to take advantage of a universal format. Then who owns it? I think people are just set in their ways and on the grand scheme of things, a universal icon format isn’t at the top of the list of priorities for most folks. It’s a shame really, but I guess that’s life in the 21st century.
I think there is going to be a very lucrative market ahead for icon designers, especially as RIA heats up more and more and technology advances. Themed icon designers – quality ones – will be in high demand alongside UI designers. In fact, one could argue that a good UI designer for applications should come in armed with icon design capabilities, as you can then complete the entire themed experience in a way that others may not be able to.
XAML is also something I think has even stronger potential here. The ability to transfer icons back and forth in the designer & developer workflow will also reduce the need to design icons at different scales (16, 32, 48 etc.).
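As a sketch of why vector XAML sidesteps the multi-scale problem: one geometry definition renders cleanly at 16, 32 or 48 pixels when wrapped in a Viewbox (the "document" glyph below is an illustrative shape, not a real icon asset):

```xml
<!-- One vector glyph reused at three icon sizes; no per-size bitmaps needed -->
<StackPanel Orientation="Horizontal"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation">
    <Viewbox Width="16" Height="16">
        <Path Fill="#FF336699" Data="M2,0 L10,0 L14,4 L14,16 L2,16 Z"/>
    </Viewbox>
    <Viewbox Width="32" Height="32">
        <Path Fill="#FF336699" Data="M2,0 L10,0 L14,4 L14,16 L2,16 Z"/>
    </Viewbox>
    <Viewbox Width="48" Height="48">
        <Path Fill="#FF336699" Data="M2,0 L10,0 L14,4 L14,16 L2,16 Z"/>
    </Viewbox>
</StackPanel>
```

In practice you'd put the Path data in a shared resource so designer and developer exchange a single definition rather than a folder of scaled bitmaps.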
This also probably doesn't get discussed enough: the Microsoft community can offer a lot of maturity in this space going forward. We have exceptionally talented, intelligent and extremely focused user experience folks in our ranks. I expect as time passes we will continue to see this thought leadership and maturity help shape the Microsoft version of the "Next Web".
We also have icon design guidelines, which others may find useful:
I’ve just finished reading Kelly’s post on CBS + Silverlight + Accessibility. It’s a great post, as the intent and motivation behind it seem to come from a healthy place.
The thing about this post, however, is the cold hard reality that its intent is unlikely to yield a positive outcome. I say this boldly, knowing full well someone out there will likely go, “Just wait a darn minute, Barnesy, what are you saying here…”
My rationale is that when you combine a somewhat complex UX issue like accessibility with Silverlight, your talent pool drops significantly below where it was before the two pieces met. As it stands today, the pool of UX specialists who can bring high-quality experiences to Silverlight isn’t as vast as one would have hoped; now combine that with “must have accessibility experience” and, well, it’s small is all.
It’s a tough problem to crack, and anyone who wishes to partake in the quest to conquer it will do so without a lot of guidance from the web. Silverlight is still at a relatively early stage of growth; whilst there is wild success in installation and developer uptake, there is a lot of uncharted ground to cover in terms of identifying best practices, guidance and techniques for solving problems that are mostly covered ten times over in spaces like HTML/JS/CSS – and not just accessibility, either.
Having said all of that, there are pieces of this puzzle that can be brought forward from HTML/JS/CSS into the Silverlight arena to push the agenda further. The question is really how does one bring these to the surface? Who are the folks leading this charge, and how can more sites like CBS take a page out of their gospels? This is a problem on which more light should be cast, along with ways to ensure Silverlight-based solutions also factor in graceful degradation for situations where there is either a technical or resource challenge in place.
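One habit that does carry across from the HTML world is naming things for assistive technology – the Silverlight equivalent of alt text. Below is a minimal sketch using Silverlight’s UI Automation support; the control names and labels are invented for illustration:

```xml
<!-- Screen readers pick up AutomationProperties.Name through the
     UI Automation tree, much as they pick up alt text in HTML.
     Control names here are hypothetical. -->
<StackPanel>
    <Button x:Name="PlayButton"
            Content="Play"
            AutomationProperties.Name="Play the selected episode" />
    <TextBox x:Name="SearchBox"
             AutomationProperties.Name="Search episodes" />
</StackPanel>
```

It isn’t the whole accessibility story by a long shot – keyboard navigation, focus order and contrast all matter too – but it’s a cheap first step a team can take today.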
Is that fair, though? If accessibility is too costly for a brand and as a result they adopt the cheap approach of marshalling folks with accessibility issues to a separate and less immersive experience, does this not hurt the equality of the web? Bloody oath it does, and every time that occurs a kitten gets punched in the face – it’s just as cruel.
In the end though, sadly, it’s a numbers game, and whether we wish to face this reality or keep hammering away at the politics surrounding it, companies will often balance quality vs quantity when it comes to issues like this.
If an intended experience is made up of 95% of folks who aren’t likely to face accessibility issues vs 5% who are, what are the risks and consequences of ignoring that 5%? Is it right to state it so coldly? No, but in today’s online environment that equation is calculated daily if not weekly. There are a lot of highly visible brands online today that aren’t 100% accessibility compliant – in fact Microsoft.com/Silverlight itself has issues there – so who or what casts light on this problem, given one-off blog posts aren’t being as effective as they could be? How can this issue in general, especially for the Silverlight community, turn a corner so we lead more by example?
I know there are a few folks inside the Silverlight engineering team who are solely devoted to the art of accessibility, so it’s not like Microsoft is ignoring the existence of this problem – absolutely not; they are attacking it the best and fastest way they know how with the resources they have. The question is, who in the Silverlight community is actively supporting them, and how many?
Where is the guidance on this problem in a more real-world focused way?
I take a lot of inspiration from the iPhone, as to me it’s this device that fits in your hand with very little real estate, yet at times it accomplishes more tasks than most computer desktops today. I can make calls, check email, look at a calendar, browse sites online, play a game, set a task, take notes, tag a song for future purchase, tag a book for future purchase and so on.
All tucked inside a small device.
Typically each morning I check email in bed when I first wake up – a habit from working at Microsoft, where email dominates your life – and it struck me this morning how the iPhone was designed. I looked at the outer frame and noticed for the first time that it was designed to be simply a “frame” for what is important: the software.
It hit me as a profound thought that despite the look and attraction of the iPhone, the device itself was firstly made to look appealing as it sits in your hand, but secondly designed to fade into the background when you decide to actually use it.
Armed with this thought, I jumped on both my iMac and my Windows machine and explored the various applications I have installed, and noticed there seems to be a lot of confusion on the Windows side of things and less on the OSX side. It struck me that the difference between Windows and OSX isn’t the brand wars; it’s the subtle way things are designed to keep people focused on the task and less on the framing of the task.
In Windows, each application becomes its own pattern, or “iPhone”, whereas on OSX most applications’ chrome looks the same – in fact it must be extremely hard for software vendors to deviate from Apple’s look and feel.
This then made me think about how I’ve designed UI in the past and the way I’ve approached the overall user interface. I have since experimented with the way one project’s design looks, and decided that my chrome should be prominent at the start of the application’s boot sequence (login, etc.). Then, once that hygiene task has taken place, its job is to blend into the background.
Look below and you will see that in isolation the design (even in its blank-canvas form) becomes a focal point – the icon, then the panel at top, etc.
Now look at the change if I simply add a rounded white rectangle: the actual chrome fades into the background, and the white overpowers your attention. You probably wouldn’t have even noticed this had I not told you about it.
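For those curious how cheaply this effect comes in XAML, here is a rough sketch of the layering I’m describing – the colours and names are mine for illustration, not the actual project’s:

```xml
<!-- The dark outer Border plays the role of the chrome; the white
     rounded rectangle becomes the focal point the eye lands on. -->
<Border Background="#FF1E1E1E" Padding="24">
    <Border Background="White" CornerRadius="8" Padding="16">
        <!-- The application's real content lives here, so attention
             falls on the work surface rather than the frame around it. -->
        <TextBlock Text="Content goes here" />
    </Border>
</Border>
```

Two nested elements and a corner radius, and the frame recedes.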
This to me is where software design – aka interaction design – can make or break you in terms of how users interact with your solution. My theory is that a user interface’s job is to take you to the heart of a problem; its job is to connect you to the most important thing you can possibly do within the context of the application. Sadly, I rarely see this in software design today; all too often I see the software become more of a Swiss army knife in terms of features and needs. The argument there is that we are good at processing multiple tasks at the same time, which is true, but I also can’t help but wonder if we’re following the same mundane pattern over and over, resulting in no evolution in GUI.
The only evolution in GUI I’ve really seen in the last 5+ years has been the introduction of gesture-based interfaces (iPhone, Microsoft Surface, etc.). This has changed the way we approach design, as now it’s about touching the glass and manipulating the design with our hands. It’s about designing around the fact that we can’t see through our hands, and traditional software GUI has to change into something that accommodates the new approach.
In doing this, we reverted back to simplicity. This to me highlights that as much as we want to argue that the Office Ribbon, for example, makes it easier to experience the plethora of features found in Microsoft Word, the reality is they (the Office team) just found a way to simplify the overall interface to the bare minimum and keep people focused on the important features.
The problem, though, is that I can’t seem to separate the framing of the software from the functionality; as I type this blog post in Microsoft Word, I keep noticing the Ribbon menu. I instead want the UI to somehow take on the life of the white document space so that is all I see; then, when I need something from the office drawer, I go to it – the same as at my desk at home, where if I need a post-it note I turn away, open a drawer and get it.
It’s a rough example, but the point is hopefully made.
You can stare at that blinking cursor inside Visual Studio all you want, it’s not going to give you an immediate insight into how you should architect your Silverlight solution so that it can be reusable and scale.
It’s not that you’re an idiot or aren’t good at programming; it’s just that you are trying to juggle learning Silverlight and building with it at the same time. You’re already stressed at making bets around adopting the product, or maybe you’re still trying to decide if it’s a good bet. Don’t add more layers of stress by trying to find a way to keep your entire code base re-usable.
Yes, your background in ASP.NET or WinForms is going to help you a lot going forward, and I bet you have a bunch of best practices – or at least ones that you’re comfortable and at peace with (screw that guy who tells you you’re doing it wrong; did you ship? Yes? Well, back off is what I’d say).
Silverlight is going to be different though; it’s going to require you to rethink a lot of things you’ve learnt in the past. Now you can blame the product for making you change your behavior – sure, that can be an easy way out, I guess – but you’re smarter than that and you adopted this for the right reasons. You’re rising to the challenge, and I’m telling you now, the code you write in the first phase of your adoption isn’t going to be poetic.
Stop wasting your time trying to build a framework that is scalable; you’ll do that in a few months. Instead, get used to the feeling of producing a solution you can throw away – yes, I said it, throw away – a few months from now.
Just ship. As time passes you’ll gain experience, just like you did with the technology you’ve spent x number of years spanking to death. Only this time you’re going to be in the early majority, and you know what? It’s going to be more fun – I guarantee you.
I’ve spent close to 15 years programming for the web, and I miss it. I enjoy it, and I’ve used nearly all the languages associated with the web (you name it, I’ve written an app in it), so as a Product Manager for Silverlight I often grow jealous of the work you do.
Do me a favor though: practice more. When I leave Product Management for Silverlight and one day jump into the hot seat with you, I expect – no, demand – that you teach me what the best practices are.
The guys inside my firewall can’t help you just yet, as we’re busy building the actual product itself, but soon, once it stabilizes, our teams over at Patterns & Practices and the framework crew are going to show you the Microsoft way. That doesn’t mean it’s the right way; it’s just our preferred way.
We hope by that stage you’ll contribute back and we’ll move forward in Rich Internet/Interactive Applications (RIA).
Throw away your code, trust me, you’ll be better next time.