There seems to be an undercurrent of contempt towards Digital Skeuomorphism – the art of taking real-world subject material and dragging it kicking and screaming into your current UI design(s) (mostly if you're an iPad designer).
I've personally sat on the fence on this subject, as I see merit in both sides of the argument: those who believe it's gotten out of hand vs. those who swear it's the right approach to helping people navigate UX complexity.
Here’s what I know.
I know personally that the human mind is much faster at decoding patterns that involve depth and mixed amounts of color (to a degree). I know that while sight is one of our sensory radars working 24/7, it is also one that often scans ahead for known patterns to then decode at sub-millisecond speeds.
I know we often think in analogies when we are trying to convey a message or point. I know designers scour the internet and use a variety of mediums (real-life subject matter and other people's designs) to help them organize their thoughts / mojo onto a blank canvas.
Finally, I know that design propositions like the monochrome existence of Metro have created an area of conflict, like vs. dislike, in comparison to the rest of the web, which opts to ignore the principles laid out by Microsoft's design team(s).
Here’s what I think.
I think the Apple design community's habit of theming applications around unrealistic-yet-realistic concepts, and applying those concepts to UI designs, is more helpful than hurtful. I say this as it seems to not only work but solves a need – despite the hordes mocking its existence.
I know I have personally gone my entire life without grabbing an envelope, a photo, and a paperclip and attaching them together prior to writing a letter to a friend.
Yet there is a User Interface out there in the iPad App Store that is probably using this exact concept to coach the user that they are in fact writing a digital letter to someone, with a visual attachment paper-clipped to the fake envelope it will get sent in.
Why is this a bad idea?
For one, it's not realistic, and it can easily turn a concept into a Fisher-Price existence quite fast. Secondly, it taps into the same ridiculous faux-UI existence commonly found in a lot of movies today (you know the ones, where a hacker worms his way into the bank's mainframe with lots of 3D visuals to illustrate how he/she is able to overcome complex security protocols).
It’s bad simply for those two reasons.
It's also good for those two reasons. Let's face it: the less friction and the more confidence we can build in end users by attaching real-life analogies or metaphors to a variety of software problems, the less they are preoccupied with building large amounts of unnecessary muscle in their ability to decode patterns via spatial cognition.
Here’s who I think is right.
Apple and Microsoft are both on their own voyages of discovery, and both are likely to create havoc in the end-user base over which of the two is the better option – digitally authentic or digitally unauthentic.
It doesn't matter in the end who wins; given both have created this path, it's fair to say that the average user out there is now tuned into both creative outputs. As such, there is no such thing as a virgin user when it comes to these design models.
I would, however, say out loud that when it comes down to cognitive load on the end user, between applications that opt for a Metro solution vs. an Apple iPad-like one, the iPad should by rights win that argument.
The reason is that our ability to scan the pattern associated with the faux design model works in the end user's favor, much the same way it does when you watch 30 seconds of a hacker busting their way into the mainframe.
The faux design approach works for deep engagement, but here's the funny and wonderful thought that I think will linger beyond this post for many.
Ever notice that the UI designs in movies opt for a flat, "metro"-like monochrome existence that at first makes you go "oh my, that's amazing CG!"? Yet if you then play with it for a long period of time, the wow factor begins to taper off fast.
I don't have the answers on either side here, and it's all based on my own opinion and second-hand research. I can tell you though: sex sells, we do judge a book by its cover, and I think what makes iPad apps appeal to many is simply attractive bias in full flight.
Before I leave with that last thought, I will say that over time I've seen quite a lot of iPad applications use wood textures throughout their designs. I'd love to explore the psychology of why that recurs, as I wonder if it has to do with some primitive design DNA of some sort.
Situation. Today, the ClassView inside Visual Studio is pretty much useless for the most part, in that when you sit down inside the tool and begin stubbing out your codebase (initial file-new creation) you are probably in the "creative" mode of object composition. Visual Studio in its current and proposed form does not really aid you in a way that makes sense to your natural approach to writing classes. That is to say, all it really can do is echo back what you've done, or more to the point, give you an "at a glance" view only.
Improvement. The ClassView itself should have a more intelligent-by-design visual representation. When you are stubbing out or opening an existing class, the tool should reflect not only what the class composition looks like (the at-a-glance view) but should also enable developers to approach their class designs in a more interactive fashion. The approach should enable developer(s) to hide and show methods, properties and so on within the class itself – meaning "get out of my way, I need to focus on this method for a minute" – which in turn keeps the developer(s) focused on the task. The ClassViewer should also make it quick to comment out large blocks of code and display visual issues relating to those blocks, whilst at the same time highlighting which parts of the code have and don't have Attributes/Annotations attached. Furthermore, the ClassViewer should allow developer(s) to integrate their source and task tracking solutions (TFS) in a fine-grained way, that is to say, enable both overall class-level commentary and "TODO" allocation(s), with similar approaches at a finer level such as "property, method, or other" areas of interest (i.e. "TODO: this method is not great code, need to come back and refactor this later").
Feature breakdown. The above is the overall fantasy user interface of what a class viewer could potentially look like. Keep in mind the UI itself doesn't accommodate every single use case; it simply hints at the direction I am talking about.
Navigation. Inside the ClassView there are the following navigational items, which represent different states of usage.
Usage by TBA.
Derived By. The "Derived By" view enables developers to gain a full understanding of a class's inheritance chain by displaying a visual representation of how it relates to other interfaces and classes.
Minimap. This inheritance hierarchy outlines specifically what the class's relationship model would look like within a given solution (obviously only indexing classes known within an opened solution).
- The end user is able to jump around inside the minimap view to get an insight into what metadata (properties, methods etc.) is associated with each class, without having to open the said class.
- The end user is able to gain a satellite view of what is inside each class via the Class Properties panel below the minimap.
- The end user is able to double-click on a minimap box (class file representation), and the file will open directly into the code view area.
- The end user is able to select each field, property, method etc. within the Class Properties data grid. Each time the user selects a specific area, if the file is open, the code view will automatically position the cursor at the first character within that specific code block.
- The end user is able to double-click on the first circle to indicate that this code block should be faded back, allowing the developer to focus on other parts of the codebase. When the circle turns red, the code block's foreground colour fades back to a passive state (i.e. all grey text); whilst this code is still visible and compilable, it isn't displayed in a prominent state.
- The end user is able to click on the second circle to indicate that the code block should take on breakpoint behaviour (when debugging, please stop here). When the circle turns red, it indicates that a debug breakpoint is in place. The circle's right-click context menu will also take on the as-is behaviour found within Visual Studio today.
- The end user is able to click on the Tick icon (grey off, green on). If the Tick state is grey, this indicates that the code block has been commented out and is in a disabled state (meaning, as with commented code, it will not show up at compile time).
- The end user is able to click on the Eye icon to switch the code block into either a private or public state (public is considered viewable outside the class itself; i.e. internal and public are one and the same here, but will respect the specifics within the code itself).
- Each row will indicate the name given to the property, its return or defined type, whether it is public or private, and the various tag elements attached to its composition.
- When a row has a known error within its code block, the class view will display a red indication that this area needs the end user's attention.
- The Eye icon represents whether this class has been marked for public or private usage (i.e. public is considered whether the class is viewable from outside the class itself; internal is considered "viewable" etc.).
- Tags associated with the row indicate elements of interest; as more per-code-block features are built in, they will in turn display here (e.g. Has Data Annotations, Code block is Read Only, Has notes attached etc.).
Tags. My thinking is that development teams can attach tags to each code block, whilst at the same time the code itself will reflect what I call "decorators" that have been attached (i.e. attributes). Example tags:
- Attribute / Annotation. This tag will enable the developer to see at a glance what attributes or annotations are attached to this specific code block. This is mainly useful from a developer's perspective to check whether the class has the right set of attributes ("oops, I forgot one?") whilst at the same time providing an at-a-glance view of what types of dependencies this class is likely to have (e.g. use case: should EntityFramework Data Annotations be inside each POCO class, or should they be handled in the DbContext itself? Before we answer that, let's see which code blocks have that dependency etc.).
- Locked. This one's a bit of a tricky concept, but initially the idea is to enable development teams to lock specific code blocks against manipulation by other developer(s). That is to say, when a developer is working on a specific set of code chunks that they don't want other developer(s) to touch, they can insert code-locks in place. This in turn allows other developers to still make minor modification(s) to the code and check it in, whilst removing resolution conflicts at the end of the overall work stream (although code resolution is pretty simple these days, this just adds an additional layer of protecting one's sandpit).
- Notes. When documenting issues within a task or bug, it's at times helpful to leave traces behind that indicate or warn other developers to be careful of xyz issues within this code block (make sure you close out your while loop, make sure you clean up your background threading etc.). The idea here is that developer(s) can leave both class- and code-block-specific notes of interest.
- Insert your idea here. The tag-specific features outlined so far aren't an exhaustive list; they are simply thought-provokers as to how far one can go within a specific code block. The idea is to leverage the power of Visual Studio to take on a context-specific approach to the way you interact with a class's given composition. The tags themselves can be injected into the codebase, or they can simply reside in a database that surrounds the codebase (i.e. metadata attached outside of the physical file itself).
- The idea behind this Derived By and Class Properties view is that the way in which developer(s) code day in, day out takes on a more helpful state; that is to say, you are able to make at-a-glance decisions on what you see within the code file itself, while the minimap provides an overarching view of what the composition of your class looks like – given most complex classes can have a significant amount of code in place.
- Tagging code chunks is a way of attaching metadata to a given class without having to pollute the class's actual composition; these could be attachments that are project- or solution-specific, or they can be actual code manipulation as well (private flipped to public etc.). The idea is simply to enable developer(s) to communicate with one another in a more direct and specific fashion, whilst at the same time enabling developer(s) to shift their coding lens to zero in on what's important to them at the time of coding (i.e. fading the less important code to a grey state).
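The fade, breakpoint, comment-out, visibility and tag behaviours described above all boil down to per-code-block state. Here is a minimal, hypothetical sketch in Python of that state model; the `CodeBlock` class and its names are my own invention for illustration, not any real Visual Studio API:

```python
from dataclasses import dataclass, field

# Hypothetical model of one row in the imagined ClassViewer.
@dataclass
class CodeBlock:
    name: str
    kind: str                       # "method", "property", ...
    public: bool = True             # Eye icon state
    faded: bool = False             # first circle: fade to passive grey
    breakpoint: bool = False        # second circle: debug stop here
    commented_out: bool = False     # Tick icon grey = disabled
    tags: set = field(default_factory=set)  # "Locked", "Notes", ...

    def toggle_eye(self):
        # Flip public/private visibility.
        self.public = not self.public

    def compiled(self) -> bool:
        # Faded code still compiles; commented-out code does not.
        return not self.commented_out

block = CodeBlock("SaveOrder", "method", tags={"Has Data Annotations"})
block.faded = True          # "get out of my way" state: visible but grey
block.commented_out = True  # Tick icon off: excluded at compile time
print(block.compiled())     # False
print(block.public)         # True until the Eye icon is toggled
```

The point of the sketch is that none of these toggles need to live inside the file itself; they are metadata sitting alongside the class, exactly as the tagging idea above suggests.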
On the choice of grey. Grey is a color I have used often in my UIs, and I have no issue with going 100% monochrome grey provided you can layer in depth. The thing about grey is that if it is too flat and left in a minimalist state, it often will not work for situations where there is what I call "feature density." If you approach it from a pure Metro minimalist angle it can still work, but you need to calibrate your contrast(s) to accommodate the end user's ability to hunt and gather for tasks.
That is to say, this is where the Gestalt Laws of Perceptual Organization come into play. The main "law" to pay attention to is the "Law of Continuity" – the mind continues visual, auditory, and kinetic patterns. This law, in its basic form, is the process by which the brain decodes a bunch of patterns in full view and begins to assign inference to what it perceives as the flow of design. That is to say, if you designed a data grid of numeric values that are right-aligned, with no borders, the fact that the text is right-aligned is what the brain perceives as a column. That law hints that, at face value, we as humans rely quite heavily on pattern recognition; we are constantly streaming in data on what we see, making snap judgment calls on where similarities occur and, more importantly, how information is grouped or placed.
When you go limited shades of grey and remove the sense of depth, you're basically sending a scrambled message to the brain about where the grouping stops and starts, and what form of continuity is still in place (is the UI composition unbroken, with a consistent experience in the way it tracks for information?). It's not that grey is bad, but one thing I have learnt when dealing with shallow color palettes is that when you go down the path of flat minimalist design, you need to rely quite heavily at times on a secondary offsetting or complementary color.
If you don't, then it's effectively a case of taking your UI, changing it to greyscale and declaring done. It is not that simple; color can often feed into another law from Gestalt's bag of psychology 101: the Law of Similarity, which can be your ally when it comes to color selection. The involvement of color can lead the user into grouping data easily, despite its density, because a pattern of similarity immediately sticks out. Subtle things like a vertical border separating menus indicate that the groupings to the left and right of the border are "these things are similar." Using the color red in a financial tabular summary does this too, as red values immediately stand out and dictate "these things are similar," given red indicates a negative value – arguably a digital skeuomorph at work (red pens were used in the pre-digital world by accountants in ledgers to indicate bad).
Ok, I will never use flat grey again. No, I'm not saying that flat grey shades are bad. What I am saying is that the way the Visual Studio team have executed this design is, to be openly honest, lazy. It's pretty much a case of taking the existing UI, cherry-picking the parts they liked about the Metro design principles, and then declaring done. Sure, they took a survey and found respondents were not affected by the choice of grey, but anyone who's been in the UX business for more than 5 minutes will tell you that initial reactions are false positives. I call this the 10-second wow effect: if you get a respondent to rate a UI within the first 10 seconds of seeing it, they will, the majority of the time, score it quite high. If you then ask the same respondents 10 days, 10 months, or a year after the initial question, the scores would most likely decline dramatically from the initial scoring – habitual usage and prolonged use will determine success.
We do judge a book by its cover, and we do have an attractive bias. Using flat grey in this case simply is not executed as well as it could be, because they have not added depth to the composition. I know, gradients equal non-Metro, right? Wrong. Metro design principles call for a minimalist approach, and while Microsoft has executed on those principles with a consistent flat experience (content-first marketing), they are not correct in saying that gradients are not authentically digital. Gradients are in place because they help us determine depth and color saturation levels within a digital composition; that is to say, they trick you into a digital skeuomorphism, which is a good thing. Even though the UI is technically 2D, they give off a false signal that things are in fact 3D, and if you've spent enough time using GPS UIs you'll soon realize that we adore our inbuilt depth-perception engine. Flattening out the UI in typical Metro-style UIs works because they are dealing with a reality where data density has been removed; that is to say, they take on a minimalist design that puts a high amount of energy and focus into placing data on quite a strict diet. Microsoft has yet to come out with a UI that handles large amounts of data, and there is a reason they are not forthcoming with this: they themselves are still working through that problem. They have probably broken the first rule of digital design – they are bending their design visions to the principles, rather than letting the principles evolve and guide the design.
Examples of grey working. Here are some examples of a predominately grey palette being effective; that is to say, Adobe have done quite well in their latest round of product design, especially in the way they have balanced a minimalist design whilst still adhering to visual depth-perception needs (gradients). Everything inside this UI is grouped as you would arguably expect it to be, the spacing is in place, and there is no sense of crowding or abuse of colors. The gradients are not harsh; they are very subtle in their use of light and dark, and even though elements appear to have different shades of grey, they are in fact the same color throughout. Grey can be a deceiving color, which I think has to do with its natural state. Looking at this brain game from National Geographic, ask yourself the question "Are there two shades of grey here?" The answer is no. The dark and light tips give you the illusion of a difference in grey, but what is also tricking the eye is the use of colors and a consistent horizon line.
Summary. I disagree with the execution of this new look. I think they've taken a lazy approach to the design, and to be fair, they aren't really investing in improving the tool this release, as they are most likely moving all investments into keeping up with Windows 8 release schedules. The design given to us is a quick, cheap tactic to provoke the illusion of change, given I am guessing the next release of Visual Studio will not have much of an exciting feature set. The next release is likely to be either a massive service pack with a price tag (the same tactic used with Windows 7 vs. Windows Vista – under the hood things got tidied up, but really you were paying for a service pack + a better UI) or a radical overhaul (which I highly doubt). Grey is a fine color to go full retard on (Tropic Thunder quote), but only if you can balance the composition to adhere to a whole bunch of laws that aren't just isolated to Gestalt psychology 101 – there are hours of reading in HCI circles on how humans unpick patterns. Flattening out icons to a solid color isn't a great idea either, as we tend to rely on shape outlines to give us visual cues as to what objects mean. Redesigning or flattening a shape, if done poorly, can only add friction or force a new round of learning / comprehension, and some of the choices being made are probably unnecessary. (Icons are always this thing of guess-to-mation, so I can't fault this choice too harshly; in my years of doing this it's very hit/miss – e.g. a 3.5" disk represents save in UIs, yet my kids today wouldn't have a clue what a floppy disk is... it's still there though!) I'm not keen to just sit on my ivory throne and kick the crap out of the Visual Studio team for trying something new; I like this team, and it actually pains me to decode their work.
I am instead keen to see this conversation continue with them. I want them to keep experimenting and putting UI like this out there, as to me this tool can do a lot more than it does today. Discouraging them from trying and failing is, in my view, suffocating our potential, but they also have to be open to new ideas and energy around this space as well (so I'd urge them to broker a better relationship with the community around design). Going forward, I have started to type quite a long essay on how I would re-imagine Visual Studio 2011 (I am ignoring DevDiv's efforts to rebrand it VS11; you started the 20XX, you are now going to finish it – marketing fail) and have sketched out some ideas. I'll post more on this later this week, as I really want to craft that post more carefully than this one.
Last night I was sitting in a child psychologist's office watching my son undergo a whole heap of cognitive testing (given he has a rare condition called Trisomy 8 Mosaicism), and in that moment I had what others would call a "flash" or "epiphany" (i.e. the theory is we get ideas based on a network of ideas that pre-existed).
The flash came about from watching my son do a few Perceptual Reasoning Index tests. The idea in these tests is to show a group of images (in grid form), and the child has to assign semantic similarities between the images (ball, bat, fridge, dog, plane would translate to ball and bat being the semantic pair).
This for me was one of those aha! moments. You see, when I first saw the Windows 8 opening screen of boxes / tiles being shown, with a mixed message around letting the User Interface "breathe" combined with ensuring a uniform grid / golden-ratio-style rant... I just didn't like it.
There was something about this approach that I just instantly took a dislike to. Was it because I was jaded? Was it because I wanted more? There was something I didn't get about it.
Over the past few days I've thought more about what I don't like about it, and the most obvious reaction I had was around the fact that we're going to rely on imagery to process which apps to load and not load. Think about that: you are now going to have images, some static whilst others animated, to help you gauge which of these elements you need to touch / mouse-click in order to load?
re-imagining or re-engineering the problem?
This isn't re-imagining the problem; it's simply taking a broken concept from Apple and making it bigger, so instead of icons we now have bigger imagery to process.
Just like my son, you're now being challenged at a Perceptual Reasoning level as to which of these "items are the same or similar", and given we also have full control over how these boxes are clustered, we in turn will put our own internal taxonomy into play here as well... Arrghh...
Now I'm starting to formulate an opinion that the grid box layout approach is not only not solving the problem, but is actually probably a lurking usability issue (more testing needs to be done and proven here, I think).
Ok, I've arrived at a conscious opinion on why I don't like the front screen; now what? The more I thought about it, the more I kept coming back to the question – "Why do we have apps, and why do we cluster them on screens like this?"
The answer isn't just a prospective-memory rationale; the answer really lies in the context in which we as humans lean on software for our daily activities. Context is the thread we need to explore on this screen, not "Look, I can move apps around and dock them." That's part of the equation, but in reality all you are doing is mucking around with grouping information or data once you've isolated the context to an area of comfort – that, or you're still hunting / exploring for the said data and aren't quite ready to release (in short, you're accessing information in working memory and processing the results in real time).
As the idea began to brew, I thought back to sources of inspiration – the user interfaces I have loved and continue to love, the ones that get my design mojo happening. User interfaces such as the one that I think captures the concept of Metro better than what Microsoft has produced today – the Microsoft Health / Productivity video(s).
Back to the Fantasy UI for Inspiration
If you analyze the attractive elements within these videos, what do you notice the most? For me it's a number of things.
I notice that the UI is simple and, in a sense, "metro paint-by-numbers", which despite its basic composition is actually quite well done.
I notice the User Interface is never just one composition; the UI appears to react to the context of usage for the person, and not the other way around. Each User Interface has a role or approach that carries out a very simplistic solution to a problem, but does so in a way that feels a lot more organic.
In short, I notice context over and over.
I then think back to a User Interface design I saw years ago at Adobe MAX. It's one of my favorites. In this UI, Adobe were showing off what they think could be the future of entertainment UI: they simply have a search box on screen up top. The default user interface is somewhat blank, providing a passive "forcing function" on the end user to provide some clues as to what they want.
The user types the word "spid", as their intent is Spiderman. The User Interface reacts to this word and the entire screen changes to the theme of Spiderman whilst spitting out movies, books, games etc. – basically you are overwhelmed with context.
I look at Zune. I type the words "the Fray" and hit search; again, contextual relevance plays a role and the user interface is now reacting to my clues.
I look back now at the Microsoft Health videos and then back to the Windows 8 screens. The videos are one and the same with Windows 8 in a lot of ways, but the huge difference is that one doesn't have context; it has apps.
The reality is, most of the apps you have carry semantic data behind them (except games?), so in short, why are we fishing around for "apps" or "hubs" when we should all be re-imagining how an operating system of tomorrow like Windows 8 accommodates a personal level of both taxonomy and context-driven usage, one that also respects each of our own cognitive processing capabilities?
Now I know why I dislike the Windows 8 User Interface. The more I explore this thread, the more I look past the design elements and "WoW" effects, and the more I come to the realization that, in short, this isn't a work of innovation; it's simply a case of taking existing broken models on the market today and declaring victory over them because it's now either bigger or easier to approach from a NUI perspective.
There isn't much re-imagination going on here; it's more re-engineering instead. There is a lot of potential here for smarter, more innovative and relevant improvements to the way in which we interact with the software of tomorrow.
I gave a talk similar to this at a local Seattle design user group once. Here are the slides; I still think it holds water today, especially in a Windows 8 futures discussion.
I am working on a secret squirrel application. Can't say much, suffice to say I had a situation where a bunch of clients connect to a network - like most apps, I guess.
In this situation, I needed to inject a listening app into the overall network, in that it needed to keep a pulse check on how the clients within the network are doing. This listening application needed to show the information in a meaningful way, but at the same time I did not want to have to inspect every single client each time they connect/disconnect.
I needed to visually show the connection states, but I wanted it to be reactive to me vs. me reactive to it.
Armed with problems like this, I now draw your attention to my biggest pet hate - DataGrids. A developer - and you know who you are - would often take a situation like this and go "Ok, got it, what we need is a datagrid where the columns show machine name, state and blah blah metadata - easy peasy!!"
If you are that developer, I want you to do me a favor, remove yourself from the screen design team as you are hereby in a time-out.
Again, the point is that I want to have a sense that all things are ok, but I want to be alerted when things are not. I want to put this UI onto a large screen and just let it sit there keeping an eye on things, and the moment something's amiss - tell me!
Here is what I came up with. It is a hexagonal grid; each tile represents a new client connection, and when a client activates, its tile pulsates (the tile also gets added at a random position in the grid). Yes, it pulsates, but slowly, so it gives the impression that the "grid" (i.e. network grid) is breathing, just like an organic machine.
When something bad happens, the tile flips to a red alert state, and if there is a massive network outage the entire screen flickers with red pulsating tiles all yelling "help me, help me".
If more than 10 tiles fail, an overall alert text flows over the top, giving you the old "Warning, Will Robinson, warning" alert.
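Stripped of the visuals, the alerting rules above are a small piece of state logic. Here is a rough sketch in Python; the `NetworkGrid` class and its method names are invented for illustration (the more-than-10-failures threshold comes straight from the description above):

```python
OK, FAILED = "ok", "failed"

# Illustrative model of the breathing-grid monitor described above.
class NetworkGrid:
    def __init__(self):
        self.tiles = {}  # client id -> state

    def connect(self, client_id):
        # New tile appears and pulses slowly ("breathing").
        self.tiles[client_id] = OK

    def fail(self, client_id):
        # Tile flips to its red alert state.
        self.tiles[client_id] = FAILED

    def banner(self) -> str:
        # More than 10 failed tiles triggers the overlay warning.
        failed = sum(1 for s in self.tiles.values() if s == FAILED)
        if failed > 10:
            return "Warning, Will Robinson, warning"
        return ""

grid = NetworkGrid()
for i in range(20):
    grid.connect(i)
for i in range(11):
    grid.fail(i)
print(grid.banner())  # 11 failures > 10, so the warning fires
```

The UI layer (pulsing, flicker, hex layout) sits on top of exactly this kind of model; the design choice is that the screen stays calm until the model says otherwise.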
The point here is simple. I could have easily just taken a data grid and a view model, bound the two together and walked away. I actively chose not to do that, as it's a case of thinking about the problem creatively and trying an approach that makes sense, but at the same time underpins an important principle - "We work with software. We shouldn't just use software."
That's why I hate datagrids.
As a Hybrid I want the ability to create user stories into clusters so that I can isolate ideas into more finite pieces.
Seeing that most are probably likely to ask what the heck I mean? A User Story by SCRUM standards is just a small two-sentence tid bit that gives one clues as to what one should develop in software vNext? The problem with this line of thinking is that it goes against the nature of human phycology in that to isolate streams of thought into abstract finite forms does not work.
Ever sat in a story planning session and, as the stories begin to flow, immediately started to cluster them in your mind into groups? You visualize them into clusters when you see them because we humans are massive fans of chunking information so that we can process it in more digestible formats. Is this a problem? No, it is just a natural, primitive instinct to encourage you, as an entity that has grown opposable thumbs, to ensure the thing in front of you is both learnable and adaptable to suit what happens next.
Today, if you were to look around on the inter-web you would see a bunch of SCRUM-friendly software, and most of it tries its best - and fails - to re-create the experience of User Story capturing. The approach often taken is to create a rounded rectangle and display it on the user's screen as:
"this is a card and therefore it's like the one you have in your cubicle in paper form. Please use this the same way in which you would that..."
Signed Software Designer.
Recently I learnt a very important snack of UX goldenness, and that is we do not use software, we work with it. Software should reflect who I am and what is contextually relevant, or at least synchronized to suit my needs, versus asking me to adapt to its needs. Handing someone a virtual card on screen does not offer anything of value; all it does is remind me of the constraints put forward - I need to cram an idea into under two sentences in abstract form so that I can break it down further into sub-forms in order to generate software feature(s).
The Flash of Genius.
I sat down today to tackle what I call the "Opening Act" in my UX magic show for an upcoming application I am making in WPF, titled: IWANT - Weaponizing Agile. As I sat staring at the blank canvas, I began to panic a little, as I did not want to just re-create a grid of cards and declare victory - to do that is to admit I have not thought about who the end user is. Instead, I went for a walk and asked myself a simple question - why does it have to be a card, and what else could it be? The more I thought about the form of media given to every SCRUM disciple out there, the more I started to question its sanity - more to the point, who the freaking hell said a card was the best solution to this problem? The group of people who conjured up this religion we call SCRUM today?
The SCRUM founding fathers, if you will, have some brilliance, but I'm not sure user experience or human psychology was at the forefront of their minds when producing this thing we call User Story management via card sorting. I would, however, put forward the theory that they were thinking of ways to force us humans to tear down our natural instinct to cluster / chunk information, in favor of forms that are more isolated / streamlined.
Armed with this moment of brilliance, I sat down and began grinding pixels. I began to think about the problem in the fashion of idea clouds, as if we were about to read these stories in comic book form. Yes, comic book form - name any child today that doesn't find reading more enjoyable in that format, and I put it to you that we adults need to recapture our lost youth as much as we can.
Like all good experimenters, I need to assign some objectives to this newfound awareness. They are along these lines:
- I want the ability to visualize clusters of user stories in ways that respect my primitive instincts but at the same time respect the existence of SCRUM.
- I want the ability to engage the experience with a sense of depth - not the flat, lifeless, typical response of visual grids.
- I want the ability to get in and get out when I need to resurrect a User Story of a specific type.
- I want the ability to just create in a fast responsive manner and I want the said creation to have dependency links throughout that are of contextual relevance.
Armed with this tall order, I bring you thine creation.
Story Clusters. A Story Cluster is a group of user stories that fall under a theme / category. The idea is to allow teams to partition their ideas into a taxonomy of their choosing, but more so to ensure they can feed the idea threads around a particular concept - Security? Customer Management? File Storage? - as examples of "theming".
User Story vs System Story. They are one and the same; the difference, however, is to give developers or engineering-minded people a free pass in terms of allocating some ideas to either a person(a) or a technical agent of their choosing. An example of this is:
"As a User I want the ability to save my blog posts so that I can share it with others!" - User Story.
"As a UI Client I want the ability to CRUD a blog post so that I can allow users to manage blog posts" - System Story.
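To make the distinction concrete, here is a minimal Python sketch of how stories and clusters might be modeled - all names here are hypothetical, not iWANT's actual object model:

```python
from dataclasses import dataclass, field

@dataclass
class Story:
    """One story; 'kind' is what gives engineering minds their free pass."""
    actor: str           # "User", or a technical agent like "UI Client"
    want: str
    so_that: str
    kind: str = "user"   # "user" | "system"

    def text(self):
        return f"As a {self.actor} I want {self.want} so that {self.so_that}"

@dataclass
class StoryCluster:
    """A themed group of stories - Security, Customer Management, etc."""
    theme: str
    stories: list = field(default_factory=list)

    def add(self, story):
        self.stories.append(story)

    def size(self):
        # The View Finder would scale a cluster's bubble by this count.
        return len(self.stories)
```

The point of the shared shape is that a System Story is just a User Story with a non-human actor - nothing more ceremonial than that.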
Some SCRUM masters may have a mental seizure at the sight of this - deal with it, you jackasses. The reality is that when someone sits down to write a User Story, the majority of the time the person is trying to force themselves out of a cycle of developer eyes and into something they are not. The purpose of this exercise, in my opinion, is to tease out the idea; we can break it down further later, but get the idea captured!
The UI Teardown.
View Finder. It's here you will find the User Story cluster(s). The clusters are grouped into a cloud-like existence, and the more stories found within a cluster the larger it becomes. I chose to enlarge these to enforce dominance for one, and secondly to attack the end user in a subliminal manner by encouraging them to break a cluster apart if it is getting too large. This is exactly like a container of any kind: if you keep cramming it with stuff it gets bigger, and eventually you start to think about the problem again and find yourself looking at ways to fork its contents.
Notice also how the clusters are blurred and have a sense of depth. This is to ensure the end user does not take what is in front of them for granted; I want them to focus and I want them to explore the data. I do not want "at a glance" viewing; I want interaction and comprehension around what is in front of them. Explore, interact and adapt.
Search Box. Given the end user for this product is someone stuck in the mental model of "As a blah I want.." style thinking, it's important for me not to interrupt that conscious thought. If they are looking for a specific task, user story or note of any kind, it's here I expect them to take a leap of faith and search. The important thing to note is that I am not relying on some massive data grid or tree control to give my users access to the data. Why not? It is important not to give them a crutch to lean on; I want them to think about what they are asking and how they ask it. Providing a DataGrid or Tree Control encourages them to lean on spatial memory way too much (i.e. they will next want the ability to pin that area of navigation, taking up valuable real estate, simply because at the heart of it they don't want to have to collapse / scroll to that sweet spot again). Instead, get them into UX Rehab: ask them to treat the software as if they were turning to a co-worker and asking "hey, where did you put that file.." - behold the power of search!
Create Only. Notice I do not give much in the way of options other than "Create" at first. I do this deliberately, as I want the end user to build first and tear down / fix next. Find the thing you want to do other stuff to later; here, all I am interested in is giving you ways to add to the web of data.
Some of you may notice I used a skull as the icon to represent the User Story. The reason I chose that icon instead of the typical silhouette of a human head is simple - we are creating digital Shakespeare. You are telling a story, so it is fitting - that, and this is the spot where good ideas may go to die.
Stats. I am a sucker for fun facts or a sense of proportion, but more importantly it is about keeping score of what is going on. Too many times software hides data to the point where you either underestimate or overestimate the complexities of a given problem because key pieces of information are hidden. I chose to keep a cycle of statistics around storytelling within iWANT so that as users are working on the feature catalogs of tomorrow, they are also getting some fun facts and may turn to one another with stuff like:
"oh shi.. We have like 300 stories added this month.. man, we are never going to get this done in time.." (invoking maybe a rethink in customer expectations?) etc.
My concept is unproven and untested. It may very well fail or it could succeed, but right now any answer I can give to feedback or questions around this approach is simply "I think" and not "I know". What I am confident about is that it will spark a different round of thinking in terms of how one approaches user storytelling. My objectives are clear enough to outline the overall intent, which is to provide a safe haven for undisciplined and disciplined thinking around what goes into the software of tomorrow. SCRUM is a little too religious in its procedures, and I find at times it goes against the grain of human psychology, forcing a practice into place that at times is unnatural.
iWANT is my solution to challenge this thinking, but at the same time allow development teams of all sizes to get the administrative overheads out of the way fast, cleanly and smoothly so that they can focus on writing code, grinding pixels and marketing their product(s).
I did it! And I feel exposed. I sat down tonight and put together the first of what may or may not be many (depending on how badly I get critiqued) screencasts around UI / UX + Microsoft technology.
In this video, I show folks how one can take a workflow design concept and inject it into your canvas of choice, but in an isometric format. I like isometrics simply because you get more of a spatial view than most screen angles give - that, and it derives from my old pixel-art days, so.. yeah.. isometrics are the way!
Hope you enjoy, and feedback welcomed.
In this screencast I show how one can take an isometric workflow map and transpose it into Expression Blend 4.
Having recently gone full METRO, I have in turn created what I'd call my "5 things you ought to know before going full metro" list. Here are five (5) things you probably want to know if you head down this path like me.
1. Color choice is critical.
The thing that stands out about most "metro"-inspired designs is that they pretty much settle on around 2-3 primary colors of choice. They then rely heavily on either black or white (maybe a shade of gray) for their canvas as well.
It's probably a good thing to start by choosing your canvas upfront. White is great; it gives you more room to play around with and does not come across as obvious in terms of empty space. Black is also great, but keep in mind your empty space may become more obvious if you do not plan your designs as carefully.
Once you have outlined your canvas comes the choice of a base color; any other accent you choose feeds off this color. Selection of your base color needs to keep one thing in mind: it will haunt you throughout your design.
The base color defines the "chrome" boundaries and gives the user the pattern that they have made a choice on something. I often use the base color as both the partial chrome and the selection color. The reason I do this is that, in my own warped mind, the selection is part of the chrome; it's essentially a choice, an active one at that, so why does it need to be constantly visible in terms of difference? I've made my choice, I've noted the choice is visibly in place - let's move on, please, and focus on the parts I haven't made a choice on, am about to make a choice on and/or need to figure out whether a choice needs to be made.
After you have defined your base color, head on over to Adobe's Kuler site (that, or use the Kuler extension found within Photoshop CS4+). Inject your base color into the "create" area, and then decide how your color gets complemented by selecting an alternative/complementary color. Kuler is pretty spot on in terms of helping you decide this; if you choose a bright green, you have many colors to choose from.
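For those wondering what a complementary color actually is: at its simplest it is a 180-degree hue rotation of the base. A rough Python sketch of that idea (an approximation for illustration - not Kuler's actual algorithm):

```python
import colorsys

def complement(hex_color):
    """Return the hue-rotated complement of a '#rrggbb' color.

    Rotating hue by 180 degrees (0.5 in colorsys's 0..1 hue scale)
    while keeping lightness and saturation gives the classic
    complementary accent.
    """
    r, g, b = (int(hex_color[i:i + 2], 16) / 255 for i in (1, 3, 5))
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    r2, g2, b2 = colorsys.hls_to_rgb((h + 0.5) % 1.0, l, s)
    return "#%02x%02x%02x" % (round(r2 * 255), round(g2 * 255), round(b2 * 255))
```

So a bright green base lands you in the magenta neighborhood - which is why Kuler's suggestions for green feel so "opposite".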
For example, one of the designs shown uses a bright blue; at first it seemed quite bright, but after a while of using it (my own opinion) it becomes a welcome relief from the green.
This alternative or secondary accent color should become the color with which you decorate your buttons or inputs.
For example, if you look under the hood of a new car, you will see areas colored yellow whereas the rest is the same color as the engine etc. Car manufacturers do this on purpose: they want you to touch the yellow parts yourself, but if it is not yellow and you are not mechanically minded - leave it alone.
I like this principle and have fused it into my designs constantly, personally using the secondary accent color to draw people's attention along the lines of "I'm ok if you touch this, you won't break anything if you do". I'm not entirely sure how popular this is amongst the UX/UI fraternity, but so far no users have made obvious complaints about the approach.
I have not seen data that contradicts my theory here, but ultimately what data I do see suggests that if the end users are given a breadcrumb trail in terms of pattern recognition, you have won them over - complexity and efficiency formulas aside.
One digresses; finally, keep your color palette to around four (4) shades in total. It forces you to keep your selections consistent and as close to the base as possible. I like this approach, as it prevents color conflicts from occurring, and again it maintains a pattern that is consistent - almost expected.
That is my interpretation of metro and colors so far - personal opinion based off my current design style (which is slowly evolving - it's a personal UI journey).
2. Typography consistency is good, focus on that.
The designs shown are an example of not keeping typography settings consistent, as they often fluctuate between 8pt and 30pt text depending on design purpose. I make these choices only while testing out layout composition from wireframes. The idea here is to see how it all stacks together inside Photoshop before you take the concept over into Expression Blend.
Once inside Expression Blend - much like HTML/CSS - I settle on consistent, semantically named theme sizes. An example would be H1FontSize at around 30pt, H2FontSize at around 24pt and so on. The key here is to keep the approach in which you attack your UI consistent in your font size settings - it goes without saying.
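As a sketch of what I mean by semantically named sizes, here is the idea expressed in Python rather than a Blend resource dictionary (the names and point values below are just my own examples, not a prescribed ramp):

```python
# Hypothetical semantic type ramp, mirroring what would be named
# resources (H1FontSize etc.) in an Expression Blend theme. Values in pt.
THEME_FONT_SIZES = {
    "H1FontSize": 30,
    "H2FontSize": 24,
    "H3FontSize": 18,
    "BodyFontSize": 12,
    "CaptionFontSize": 8,
}

def font_size(role):
    # Look sizes up by role; a missing role raises KeyError, which is the
    # point - fail loudly rather than hard-coding a stray point value.
    return THEME_FONT_SIZES[role]
```

The discipline this buys you is the same as CSS classes: change the ramp once and every screen follows.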
Font sizes are not the only issue; UPPERCASE, CamelCase, lowercase etc. are all equally important. Do try to keep these consistent relative to their function, in that navigation is kept lowercase but headings etc. are all uppercase.
The Zune team does this with the Zune Desktop: the actual navigation, nav-headings and category headings are all lowercase. The rest of the text fluctuates between all uppercase and normal case.
I'm not entirely convinced there is a killer formula here, as it's one of those areas where more experimentation could be done around what works vs. what doesn't. Typography experts will definitely have an opinion here; I have yet to see one that outlines this. (Ping me if you have any bookmarks in this area of expertise.)
Color choice per heading is also something that seems to have a bit of a formula in my designs. I'll often use the base color on labels that I think are important enough to capture the user's attention, but then use the opposite of the canvas color in order to retain a normal state.
I typically think about this as a case of highs and lows in vocal tones, in that if describing the current screen to someone, one might say "this is the USER MANAGEMENT screen within the security management area". I naturally emphasize "User Management" over "Security Management" via voice; who cares where it lives, it is not the end point, is it?
I welcome feedback on this approach, but the point is that color selection helps carry that emphasis forward.
3. Text Chrome vs. Text Content - balance it out.
This is one of those pet hates I have with the current implementation on Windows Phone 7 - the team decided to abandon chrome artifacts in favor of text, and the problem I personally have is switching gears between "Chrome" text and "Content" text. I find myself at times pausing and thinking, "ok, can I touch that.. nope.. ok, what's touchable here..".
Over time I will personally develop some muscle memory in this space and soon learn what the differences are, but given my current metro virginity, I just noticed this as a pattern which was not obvious (again, give users the scent for pattern recognition).
Would one put heading labels etc. into the "Chrome" text category? Striking a balance here is important, as too much text is a problem - and again, color choice can help you out. I rely heavily on monochrome filled shapes and text in my current metro style, so for me it's a demon I'm always trying to wrestle to the ground - keep my text minimal and refrain from putting too much on the screen, especially if there is a lot of data to view.
4. Element minimization is ok, but add some life please.
Over lunch today, a colleague and I were discussing how the metro way of life really does not allow too much in the way of gradients or watermark elements - most of the designs I have seen in this space are clean / white.
Is purity in no element additions bad or good?
I for one cannot honestly answer that, as I am the type of guy who often just does not know when to back off on the decals. I love a busy UI, and it has nothing to do with the user experience - it is more of an artistic habit.
Agreed, less is more, but sprinkle some energy into the user interface. Think about what the end user is supposed to feel or relate to in the said user interface. One of my screens has a lot of fun with the idea that a boring concept like task tracking can take on more of a Bourne Identity / CIA feel. I threw in a world map, as hey, all Fantasy User Interface(s) within movies have the map of the world behind them when hacking a large mainframe or accessing global data of some kind (it works, I love it, back off).
Point is, don't be a lemming and follow each other over the pure white cliff; sprinkle in some decals to take the edge off the seriousness of the UI and allow the end user to feel a small connection with the work you've put in front of them.
5. Fantasy User Interfaces in movies are your best friend.
I have found it difficult to get inspiration sources for metro, so I instead turned to local train stations etc., but they have a much-prescribed look and feel throughout Australia. I often use Google/Bing image searches for other transit signs etc. to get a feel for what is out there in the wild - metro style.
The main source of inspiration, however, is to watch movies that have kickass user interfaces in them and then pick them apart frame by frame. I analyze them and try to force myself to think outside the box about how these concepts could relate to real-life user interfaces.
I like what Fantasy UI brings to the table; it creates this nice illusion within a TV show or movie that convinces you in under 30 seconds that the actor is doing something high tech. It is obviously not real software, but you forgive it and go "yeah, I suppose I could buy what you're selling here" for that brief moment.
I think it's important to call that out, as it's probably at the core of our reaction to software - it gives off this pure signal of a "that works" moment.
Metro can lend itself to simplistic Fantasy UIs, as these interfaces are often very basic in their structure. Take the work of Mark Coleran (my current geek UI hero); looking at his work, he keeps the UI basic in terms of color choice, composition and decals. The actual magic comes to life when there are animations etc. on it, but in the end it is relatively simple in composition - it works!
This is the only reason I am remotely convinced metro as a style (the one I am working towards, anyway) might have some legs. It is not entirely rainbows and kisses, but at the same time, it is.
Also, look at the HTML/CSS sites within the csselite.com galleries for inspiration. These sites typically rely quite heavily on CSS for their design governance - so in a way the metro concept really hails from this, if you ask me...
I often see a lot of consistent patterns in the way applications are being built when it comes to generic create, read, update and delete (CRUD) workflows.
The usual pattern is that a screen starts off with an add/remove action, followed by a very large datagrid and probably some paging. A user then refines the datagrid’s result set and makes a selection, either inline on the datagrid or by opening a modal via an action like double-click, which presents the end user with a more detailed view of that record. This is so generic in the way it’s approached that I’d dare say nobody’s really sat down and thought about its actual practicality - it seems to be the unofficial standard for screen design (well, for the bloody apps I see day in, day out anyway).
This pattern isn’t something I’m a fan of - maybe because it’s so common now that I simply crave an alternative approach? I crave this alternative because I feel at times the workflow itself is oddly backwards.
The part that catches me out is the overall approach taken. For instance, the end user has come to the said screen to get a detailed view of a record - maybe a summary, but that’s doubtful. They wade around in various amounts of turn-keys (filter settings) until they settle on a pattern of data that they can then scan (hunt/browse), and proceed to open the modal for a detailed view. It appears that the majority of the practical usage is saved for the end of the process pipeline, in that getting a detailed snapshot of the record seems to be an extension of the UI instead of the focal point of the UI.
Armed with this style of thinking, today I set out to try an alternative approach to the way this workflow could work. I decided to simply inverse the workflow: take a typical security (add/remove users etc.) workflow and try a different approach (see below).
The idea is that when you click on “Find Users” the screen opens up to your summary view; since I’m logged in, it reflects back my entire account profile found within the system. There are then a number of actions one can take in and around deciding what to do next, but the main key piece for me is that I’ve shown you the end point up front - I’ve seeded a contract with the end user around what screens will look like once they’ve found a user of their choosing.
How do I change the user from me to someone else?
The change button in this screen kicks off what is traditionally the first screen, in that if the end user clicks on [Change..] a modal opens over the top, presenting the end user with search criteria. The user then fires off a search and can specify filters for it. Once the end user has found the right user of their choosing, the modal closes and the original security profile (you) switches to the person in question.
Ok, I’m kind of with you, but what benefits does this give then?
I personally think it shifts the user into a more focused approach to how they handle the workflow. It’s quite easy to snap in a datagrid + tree control and hit F5/Ship. This approach, in my opinion, tackles the workflow differently, in that it asks the user to be specific about what they are really after. If you’re in the User Administration area of this application, then what is it you want to do? Manage users is probably the typical response here. So let’s let them manage a user in a more focused fashion, by exposing other areas of interest in a screen that’s content-specific rather than cramped / buried in a floating modal.
The typical “list all users” with paging approach reserves quite unnecessary real estate for prime time; it’s merely a stepping stone to the end point. It’s almost throwaway in the task process, should the user want to change “John Doe’s” password or check when that user last logged in etc.
You could even approach what I’ve done differently, by simply providing a search box at the top with the label “Find User..”. Once the user types in “Scott Bar..”, an auto-complete-like experience fires, but instead of a pulldown it could go off and grab all twitter feeds, flickr photos, facebook profiles, linked profiles etc. and just start showing them on screen. This kind of approach is more helpful when you’re trying to figure out who that “Scott” fellow was last night, as now you’re met with multiple forms of media to help guide your search detective skills down to a more informed end point.
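The search-first idea above is really just fanning one query out to several profile sources and pooling the hits. A minimal Python sketch of that shape (the source names and callables are hypothetical stand-ins for real API calls):

```python
def find_person(query, sources):
    """Search-first lookup: fan the query out to several profile sources
    and pool whatever comes back, instead of paging a datagrid.

    'sources' maps a label (e.g. "twitter") to a search callable that
    returns a list of hits for the query.
    """
    results = []
    for label, search in sources.items():
        for hit in search(query):
            # Tag each hit with where it came from, so the UI can render
            # tweets, photos and profiles as distinct media types.
            results.append((label, hit))
    return results
```

Swap the callables for real twitter/flickr/facebook clients and you have the skeleton of the on-screen media wall described above.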
The point is, it’s taking the equation of CRUD and flipping it into a more interactive experience. Why invest all this time and energy into some of the new UX platforms out there only to use generic patterns like the original one mentioned in this post? How can you evolve this pattern further, and where can users gain in terms of data + contextual view beyond what they’ve typically been given?
It’s a new world, people; try and break a few things, as when you break something you are in turn rewarded with knowledge of where risk/failure can occur. A much more informative approach than “well, everyone else is doing it so I assume it works” policies :)
To be tested..