I think therefore I know.

When you're in a UX role you tend to have contested territory marked out around you. Everyone around you has an opinion on something that falls within your charter, so you in turn have to play the guarded diplomat constantly. I don't mind a heated exchange of ideas; when people get passionate about something they stand their ground and make sure their voice is heard clearly and loudly (often without politeness attached). In these situations, the question echoing at the back of my brain is: "do they think, or do they know?"

"I think" something, as opposed to "I know" something, takes on a whole new set of discussion points, because if you think something then it's just an idea or assumption. If you know something, chances are you have data points with confidence attached. This is good; it tells me straight away there are more clues to be found.

"The only way you win an argument is if you get the other side to agree with you."

That is what my dad would say when he and I used to get into the thick of it. It's a fairly simple statement: in the end, when you have two opposing ideas on the same problem, it comes down to either compromise or an impasse. If it's an impasse, then it probably comes down to the title you hold on the day, in my case Head of User Experience. A title like mine carries some weight, meaning I can ignore your opinion and proceed onwards without it, but doing so means I need to qualify my arrogance more.

Being the top dog in UX land isn't an excuse to just push past people on their "I think" statements and supplant your own "I thinks" on top. Instead, it means we have to be more focused on establishing the "I know" statements that absorb the two opposing ideas. My way of thinking is this: when I reach a point where there isn't any data to support the opinions/ideas, it becomes a case of writing multiple tests to get them fact-checked and broken down until we have transformed the ideas into behavioural facts.

"I think the users will not like the start menu removed, so don't touch it."

"Now let's remove the start menu" is my immediate thought. Screw the statement; what happens when we do it? I'm assuming there will be some negative blowback, but can you imagine the data we can capture once it's removed and how the users react? The users will tell us so much: how they use the menu, where they like it, why they like it there, who they are, what they use, and so on.

That one little failure in Windows 8 is a gold mine of data, and online there are discussion forums filled with topics and messages that centre around "I think", but nobody really has "I know" except Microsoft.

My point is this: if you're not in a role that has User Experience in its title, then fine, knock yourselves out with the back and forth of "I think" arguments. If you are in UX, your job is not to settle for "I think" but to hunt for "I know", for you will always be rewarded.


How UX & Ethnography mix.

Inside most organisations you'll likely see a marketing team distill their customer base into a cluster of persona(s), which in their view is a core representative of a segment of their audience in a meaningful and believable form. These persona(s) are likely to be accurate, or moreover a confirmation of a series of instincts that may or may not have supportive data to underpin their factoids. The issue with these personas is that they are likely to be representative of the past; that is to say, using them isn't really about transplanting their behaviors into the future. Instead, they are a snapshot in time of what happened at the moment they were documented.

The definition of Ethnography basically distills to what I'd class as happening in the persona research space, especially when you commission design agencies to do the research. They are usually quite thorough and often don't miss a step in cataloging the series of data points needed to build a picture of whom they are looking at and what behavioral traits the persona(s) in question are likely to have, in a range or clustered form.

The downside for UX people like myself is that there's no real jumping-off point for this type of data. For me, it's not really about whether or not "Max" is prone to water sports or is in the 25-35 age bracket; I really have no need for excessive metadata. The challenge for me is to map these personas back onto a timeline of graduation, both in simplicity vs. complexity and around how their confidence levels are organised, in a way that outlines the cold/hot spots within a feature's experience needs.

If you were to take a feature and break it down into its intended audience, the complexity required to use it, and lastly the overall metrics that help define its success/failure, you'd likely end up with a lot of moving parts that don't offer any tangible qualitative value to help you at the very least sniff out "what just happened". What if you instead took the marketing personas, took a guesstimate around who you're targeting and the feature's likely markers that trigger the metric, and inferred the outcome based on this data? That would in turn be called confirmation bias.

There's the uppercut with persona(s): you can easily set out to build on a solid foundation of healthy data, but it's only when you transfer or map these data points onto the actual set of features and content within an experience that it starts to unravel, and the threads of its truisms get caught up in a lot of inferred guesstimates.

The root cause of this failure in qualitative data is simply the past being used to dictate the future. Remember that at the time you interviewed and inspected your persona(s), it was based either on "what if" questions or on questions that point to competitors or existing experiences that are already set in stone. Today and tomorrow you're not keeping those experiences locked like that; in fact, you're probably looking to move the needle or innovate in a different direction, which means you have a small to large impact on user behavior, and thus the experiences can involve dramatic or not-so-dramatic change(s). The only way to test or baseline the change is continuous sampling that keeps checking and rechecking the data points in the hope that change makes itself prominent.

Problem: change isn't always obvious. It can be subtle; the slightest introduction of a new variable or experience can often lead to adjustments that go unnoticed. I'll cite an example in abstract form.

A respondent is asked to walk along a path through a forest from A to B. The respondent is asked to count how many "blue" objects are lined along the path, and the respondent's heart rate is also monitored (base-lined / zeroed out beforehand). Before the respondent sets off, the testers place a stick shaped like a coiled snake midway along the path.

The respondent is then asked to proceed on the journey. They count the blue objects, and when they arrive at the end of the path they give an accounting of their blue-object findings. Their heart rate stayed in line with normal physical activity.

Respondents in this round were less likely to notice the stick.

The next round of respondents is asked to do the same, only this time the seed of fear is planted in their subconscious: "oh, others noticed a snake a few hours ago along the path; be careful, and if you see it sing out. It should be gone by now, but we couldn't find it earlier, so just take note".

These respondents begin the journey along the path and notice the stick almost immediately. A lot of messaging between the optics and the brain moves at lightning speed, trying to decipher the pattern(s) needed to confirm "threat or non-threat" levels. The heart rate spikes; eventually they realize it's a stick and proceed, still keeping a very close eye on it and a proximity buffer between the stick and themselves.

The point of that story is this: by introducing a new variable (fear) into the standard test, you're able to affect the experience dramatically, to the point where you've also touched on a primal instinct. In software, that "stick" moment can be anything from moving the "start button" on a menu through to changing the way a tabular amount of data has traditionally been displayed.

As User Experience creators, we typically move the cheese a lot, and it's more to do with controlling change in our users' behavior (for the greater good). Persona(s) don't measure that change; all they measure is what happened before you made the change. All you can do is create markers in the experience that help you map your initial persona baseline onto the new one, in the hope that it provides a bounty of data in which "change" is made obvious.

It doesn't… sadly, it just doesn't, and so all we can do is keep focusing on past behavioral patterns in the hope that new patterns emerge.

Persona(s) aren't bad and they aren't good; they are just a representative sample of something we knew yesterday that may still be relevant today. The thing I do like about personas from marketing folks is this: they keep everyone focused on the behaviors they'd like to see reappear tomorrow, and that in the end is all I ever really needed.

Where do you want to head tomorrow?

Last example: the NBC Olympics were streamed in 2008 to the entire US, with every sport captured and made available. At the time, everyone inferred that an average viewer would likely spend 2 minutes viewing. In actuality they spent 20 minutes average viewing time, and it sent massive ripples through the TV/movie industry in terms of the value of "online viewing". If we had asked candidates back then, both as content publishers and consumers, they'd probably have told us data that they asserted to be relevant at the time. In this instance the Silverlight team was able to serve up HD video online to many people for the first time, and that's what changed people's experience. Today it's abnormal to even contemplate HD video streaming online as anything but an expected experience for "video"… five years ago, it didn't exist. Personas from then and now are dramatically different, so while change can in some parts be slow, it can easily expedite to days and months as well as years.

I don't dislike personas; I just always remain skeptical of the data that fuels them – but that's my job.


Being Playful with Industrial Software

I've been sitting in the Enterprise space as a UX mercenary for probably around 5+ years. In every team, sales meeting and brainstorming session I've encountered resistance around "maturity" in terms of design. The more money being spent on the software, the more "serious" the design should be. This line of thinking typically comes from the concern that if the design is not serious, trust in its ability to do the various task(s) will be eroded.

The thing is, the more sales meetings I've been in or helped prepare for, the more I've come to the conclusion that "design" isn't even a bullet point in the overall sales pipeline. Sure, the design makes an appearance at the brochure/demo level, but overall nobody really sits down and discusses how the software should look or feel during the process. Furthermore, the client(s) typically have invited the sales team(s) onto the selection panel(s) based on their existing brand, known or rumoured capabilities, and/or because they are legally required to.

To my way of thinking, being "playful" with your design is a very unnerving discussion to have in such a scenario. The moment you say the word "playful", most people respond with some word association, positive or negative (usually negative), as the word may take you back to your childhood (playing with Lego or dolls… I didn't play with dolls, they were called G.I. JOEs!). It's that hint of immaturity in the word that makes it more appealing to me, as it forces you to think about maturity but within the constraints of immaturity (cognitive dissonance).

Playful, however, doesn't have to be immature; there are very subtle ways to invoke the feeling of playfulness without being obvious about it. For example, Google+ and most of Google's new branding is what I'd consider "playful", but at the same time the product(s) or body of work that goes into their solutions is quite serious.

Playful Mood Board

Why be playful? My working theory is that the reason users find software "unusable" has to do with confidence and incentive. If these two entities don't fuel their usage furnace, the overall behaviour around their usage decays; that is, they begin to taper off and reduce it to an absolute "use at minimum" behavioural pattern. This theory is what I would class as being at the heart of invoking "emotion" or "feeling" in how software is made, and often why a lot of UX practitioners will preach that these two should be taken quite seriously in the design process.

The art of being playful in a way regresses adults back to their childhood, where they were encouraged to draw, build and decorate inanimate object(s) without consequences attached. As a child, you were encouraged to fail; you were given a blank piece of paper and asked to express your ideas without being reprimanded. You, in short, "designed" without the fear of getting it wrong, or for that matter right (although right was usually rewarded with your art piece being put on the fridge at home or something along those lines). A playful design composition can be both serious and inviting, as a good design will make you feel as if you're "home" again. A great design will make the temporary break away into other software and back again an obvious confidence switch, as if you're saying out loud, "gah! that was a horrible experience, but I'm back to this app… man… it feels good to be home, and why can't other software be like this?"





Inserting the UX into an existing Agile Project.

It is a Wednesday afternoon in North Ryde, Sydney; the humidity is oppressive, and I am walking up a fairly steep hill, panting and cursing to myself about getting to the gym sooner rather than later. I glance over to my left and I see a person riding a unicycle up the hill whilst listening to music and moving at a pace faster than my walking (yes, that is how unfit I am).

I simply stopped in my tracks and chuckled at how amazingly insightful that was to witness, as I immediately thought, "that is basically the visual for my role as a UX Architect" – which was to say, "Poor me, how hard is my job, as only an idiot would ride a unicycle up a hill".

Sometimes life is like riding a unicycle up a hill.

As I continued to walk, I started to think about how much effort that person was putting into attacking the hill before him. Firstly, he has to balance whilst at the same time maintaining a steady forward momentum (too fast he falls, too slow he falls). Secondly, he is listening to music while he attacks the hill, which I can only assume helps him focus on the mission ahead by blocking out distraction(s).

That encounter inspired me; it gave me a renewed sense of energy for facing down the biggest problem I have today: "how do you integrate UX into existing Agile projects cleanly?" The task before me is not easy. It is filled with many uphill battles, such as balancing function and form whilst not spooking stakeholders into cost-blowout panic attacks. I am also required to have a concentration level that at times simply feels inhuman, given the uncharted territory ahead.

The difference between that role and the actual guy on the unicycle is that at least he gets to see what's ahead of him, whereas a UX practitioner in the Agile world is typically doing the same thing blindfolded.

Discovery vs Delivery.

I spent over three years travelling around Australia, visiting "developer" teams in every capital city for all types of companies (enterprise, startups, government etc.). I have seen the same thing happen over and over, whereby each company swears by their agile manifesto and how important it is to maintain "agile" discipline. I also notice they cherry-pick agile each time to make sure it fits in with their culture and, more importantly, treat it not as an absolute but as a relative approach to designing software.

The part that often sticks in my craw is not the sprint cycle(s) or the sprint backlog creation. Nope, the part that I immediately notice as being the fatal flaw, the reason User Experience is often the sacrificial lamb in the development process, is the discovery of the said feature(s).

For instance, a team will often sit down in a room with a whiteboard and begin coming up with some stories around what they are hoping to achieve with the software. They will then likely document these stories with the usual "As a User I want the ability to do X so that I can do Y" style sentences. After that process, they would likely unpack these at a later time into developer task(s) along with success/fail criteria (tests, definition of done, blah blah).

I would guesstimate that 90% of the hundreds of developer teams I have visited do pretty much the above. Furthermore, I would often be invited into some of these teams around the last few sprints to help "make it look UXy", as if there were some way I could just "integrate" into the team's development process, fix the UX problems, keep the impact to the code low, and do all of the aforementioned in a timely manner. (Usually I say: "You don't need a UX Architect, you need a priest, as this thing is dead and all you're after is a lot of prayers to reanimate the corpse.")

Let me simply say this: as a contractor I would love nothing more than to have you do the above, especially if I am charging you per hour. I could simply tread water and rack up a large number of useless work hours, knowing either outcome still does not result in a successful delivery.

Ethics 101 aside, today I am not that contractor. I am a UX Architect, and I have a queue of other products waiting in line for the same attention. I do not have time to drag the timelines out, so I have to get the above optimized instead.

The flaw in this aforementioned process of whiteboarding features is that this is where you make your biggest and ultimately costliest mistakes in making a software product (a general argument, I know, but hear me out). When you sat down with the feature, you did not document whom you are making the product for. In addition, how much time did you spend refining the process beyond the first bunch of fragmented ideas? Lastly, how do the various stories influence one another?

Agile is not an excuse to just deliver a project without planning, and planning doesn't mean you have to spend the bulk of your time in "waterfall" delivery purgatory either.

Great quote from Jeff

It is probably a good point to say that if you are focused on delivery and have spent little time on discovery, you are basically in for a lot of turbulence and "UX Tetris" in the coming sprint cycles.

Unpacking the Discovery phase.

Take an existing project you are on today and look at the User Stories you have (ignore your tasks). Grab these stories along with some pens and notepads. Then take some of these stories (it does not matter where you start) and begin drawing a comic book of your product; that is, "how would you draw an activity from the moment the user clicks on an icon, to the splash screen, and then to what they see first?"

Example – If I were building Outlook I'd say scene (1) is the user clicks on the icon, scene (2) the splash screen loads, scene (3) Outlook opens in the default folder view, scene (4) the user creates a new email, and so on.
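If it helps, here's that decomposition as data – a minimal sketch (TypeScript, all names hypothetical) of the story as an ordered list of scenes you can redraw and refine:

```typescript
// The Outlook comic story as an ordered list of scenes; purely illustrative.
interface Scene {
  order: number;
  description: string;
}

const newEmailStory: Scene[] = [
  { order: 1, description: "User clicks on the Outlook icon" },
  { order: 2, description: "Splash screen loads" },
  { order: 3, description: "Outlook opens in the default folder view" },
  { order: 4, description: "User creates a new email" },
];
```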


The objective in this first part of the discovery phase is to illustrate that your story, whilst at first it sounds perfectly fine, is not really "done". How many UX Personas did you draw in the comic? I would wager maybe one? Moreover, how many times did you redraw this process before you exhausted the steps into what you would consider "simplicity"?

  • How much influence did iPhone, iPad, Microsoft Office, Outlook, Windows 8, Adobe Photoshop, Adobe Illustrator, Visual Studio etc. all have on the mental model you had for the software's design?
  • How many DataGrids and Tree controls did you "assume" existed in the UI as you drew the comic story?
  • Did you draw the UI as a wireframe or just abstract shapes? (If you did wireframe, stop, rub it out and keep it abstract.)

A definition of a story is "an account of imaginary or real people and events told for entertainment"; the key words in that definition are "an account". That is to say, it is your own personal bias coming through on how you foresee the event taking place, based on existing ideas or experiences from the past.

Now, to bake your noodle, grab three or four of your colleagues and get them to do the same process on the said story, but be very careful not to lead them down the same path you just took (e.g.: "draw a comic book for how our customer can create a new email. STOP. No more information").

I have done this a few times, and usually what happens if it is done correctly (i.e. everyone works in isolation) is that the stories have similar patterns, but the ordering and approach taken often have mixed result(s) (especially if it's domain-specific to your company's problems and not generic like checking email).

The first lesson learnt here is that we all approach the design with a bias in mind, and yes, we feel that if we all share the same pattern in design it will in turn invoke less agitation in the end user(s). An assumption like this is still good, and most of the time the behavior of the said end users will likely follow the approach defined.

The problem is you are not in the business of "good enough" anymore. Software today is expected to rise above mediocrity, and everyone is under pressure to deliver products that are "simpler" and less "dense" in terms of feature(s) and/or layout design(s).

With that, it is your job to put this entire story design on a very strict diet, which should take more time than you probably anticipated. Typically you will want to time-box this process, as it can drag out, and I highly recommend you grab a healthy mix of developers, customers (trusted), sales, marketing and, if possible, receptionists (i.e. people who aren't your target users) for the creation process. The more diverse the backgrounds, the more likely you can feed off each other's ideas of "simplicity" without having blinders on. Lastly, make sure it's in a room where everyone can draw their ideas, and do not break these sessions into hourly mini-meetings. Make them days; spend five days in a room fighting, crying, swearing, hugging and so on (it will take two days just to get everyone relaxed enough that failure won't be seen as embarrassing – humans are funny like that).

ProTip: Simplicity can be measured.

A comic-style slice of all your user stories can feel like a complete waste of time, especially if you are under pressure to deliver. To defend it, you have to justify how it can improve the cost of delivery, whilst at the same time it looks dangerously close to "Waterfall".

Truth be told, in under a week or maybe two you could potentially visualize your entire current story catalog in a way that would likely prevent countless hours of communication issue(s) around design, which in turn would without a doubt reduce communication costs. That being said, that is a very loose "return on investment" pitch to make.

  1. For giggles, take the existing slice you have designed and unpack it into tasks with forecasts attached. Record that number and cost it out in terms of effort.
  2. Now, for giggles again, take your team and tell them to "make it simpler"; that is, refine the story further, squeeze it to the point where you automatically feel it is not as "powerful" in terms of feature design.
  3. Once that occurs, run your costs against that.

If this is done the way I am assuming, you should see a fluctuation between the two costs; it can be either higher or lower.
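A trivial sketch of that comparison (TypeScript; the tasks, hours and rates are entirely invented):

```typescript
// Hypothetical sketch: compare the cost of the original story slice
// against the "made simpler" refinement of the same slice.
interface TaskForecast {
  name: string;
  effortHours: number;
  hourlyRate: number;
}

function costOf(tasks: TaskForecast[]): number {
  return tasks.reduce((sum, t) => sum + t.effortHours * t.hourlyRate, 0);
}

const original: TaskForecast[] = [
  { name: "folder tree", effortHours: 40, hourlyRate: 150 },
  { name: "new-email dialog", effortHours: 24, hourlyRate: 150 },
];
const simplified: TaskForecast[] = [
  { name: "new-mail mode", effortHours: 48, hourlyRate: 150 },
];

// Positive delta = simplicity costs more up front (short-term loss,
// long-term repeatable win); negative delta = excess coding time removed.
const delta = costOf(simplified) - costOf(original);
console.log(`Simplicity delta: ${delta > 0 ? "+" : ""}${delta} dollars`);
```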

You now have the first measurement to add to the "simpler" cost center. If it is lesser in terms of cost – awesome, see, we just removed a lot of excess coding time!

If it is higher, then: how badly do you want a better user experience for the product you are making? (Sneaking this past people usually ends badly, so be honest, and if the executives do not subscribe then you have an answer on how they feel regarding "experience matters".)

Often when I do it, the cost initially increases (i.e. short-term loss, long-term repeatable win), because as I refine the stories I am looking to make the user's behavior require less effort. That is to say, I am thinking that the software's job is to make their life easier, that it should do 90% of the work for them, and that means at times decluttering the task from the usual user interface design and being context-specific (i.e. isolating the user to carry out a task that is stripped of distractions).

An example.

If you mapped out the Outlook comic story, you would probably have imagined Outlook as it is today, whereby you have the tree of folders, you click "New" and you are prompted the same way it prompts today.

I think of it differently. I think that when whatever behavior triggers "new" is initiated, the entire user interface goes into "new mail mode"; that is, the existing chrome/HUD is screened back and all the specific requirements I have for "new email" are in front of me. To my left there is a smart way to access my contacts, especially given I'm in a large enterprise and often have issues with first/last names and aliases. To my right I have a different way to create an email: am I creating just a text email, or am I creating something I want to insert media into so everyone can visualize my point?

My point here is that I could easily increase the development cost in order to assemble the user interface in a way that puts all the necessary pieces the end user will need to carry out the said task in front of them. Simpler doesn't mean minimal (that can often be misleading); it means asking how one makes the life of the persona simple in carrying out the task. Software is made to make our lives easier, right?

UX Personas are not what you may think they are.

Having redesigned "new email", some may declare "…hang on, you didn't specify the feature criteria fairly…", which is true, I did not. The next thing one needs to learn is that my idea of a user and your idea of a user are highly likely two different types.

Example of a useless "UX Persona"

A UX Persona discovery will fix the assumption failure(s), whereby you sit down and unpack the word "user" to the point where you exhaust your collective knowledge of what it means. A UX Persona is not a story about "Jim, who is 22, likes fishing and blah blah", as bottom line, who gives a shit who he is or what he likes. A persona designed like that is used for marketing purposes, to help sales teams position the messaging and the roles the said product will likely excite.

A UX Persona typically needs to focus on two simple areas: what behaviors they are currently exhibiting (AS-IS) and what behaviors they should be exhibiting (TO-BE). The word "user" needs to absorb the fact that, sure, a user is doing xyz today, but you are in the business of innovation, so you in turn need to move them to a new set of behaviors!

E.g.: when Apple sat down to design the iPod touch, they were not pandering to existing behaviors user(s) were exhibiting on mp3 players. They moved us all over to the touch interface, and it was initially confusing, but today I see five-year-olds queuing music and playing games.

Defining a UX Persona for me is mainly about breaking their behaviors into four categories (a sketch of this as data follows the list):


  • Influence (low to very high). Take training, mentoring, buying power, optimization etc. as categories that can help shape the low-to-very-high score. Basically: how much influence does this persona have over the adoption of your new product, the training burden required in order to use it, and lastly the output of the product (i.e. are they the end customer for your customer's customer)?
  • Usage (low to high). Similar to influence, but now: how much of the actual product are they going to be using? Specifically, which modes of the product are they using (e.g. Visual Studio: build time, debug and runtime)? If you are writing software for both an executive assistant and their boss, then it is likely the assistant is going to have a higher rating than the boss, depending on the scenario (or vice versa).
  • Form Factor. What are they using to access the product? Given that tablets, smartphones, laptops etc. are all evolving technology, what is the likely input of choice? Do not just isolate this to devices/platforms; also consider whether they are using stylus pens, modified keyboards etc.
  • Environment. What type of environment are they using the product in? Is it inside a coal mine where it is dark (i.e. white vs. black colors are a safety issue)? Are there many hazards nearby? Is it noisy (distracting, and they cannot hear sounds)? Is it inside an office? Is it inside an operator building where your product is one of sixteen screens?

    Environment is a really important piece of information that gets lost in "Story" creation, as we need to pay attention to how much duress the user is under in order to make their life simpler.
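Here's the sketch mentioned above: a minimal TypeScript shape for a UX Persona built around those four categories, plus the AS-IS / TO-BE behaviours discussed earlier (every field name here is hypothetical, not a prescribed schema):

```typescript
// A minimal sketch of a UX Persona record, capturing behaviours
// rather than marketing metadata. All names here are hypothetical.
type Score = "low" | "medium" | "high" | "very-high";

interface UXPersona {
  name: string;        // e.g. "Control-room operator"
  influence: Score;    // adoption, training burden, output impact
  usage: Score;        // how much of the product, and which modes
  formFactor: string[]; // e.g. ["tablet", "stylus", "modified keyboard"]
  environment: string[]; // e.g. ["dark", "noisy", "one of sixteen screens"]
  asIs: string[];      // behaviours exhibited today
  toBe: string[];      // behaviours you want them to exhibit tomorrow
}

const operator: UXPersona = {
  name: "Control-room operator",
  influence: "high",
  usage: "very-high",
  formFactor: ["desktop", "modified keyboard"],
  environment: ["noisy", "one of sixteen screens"],
  asIs: ["scans a dense grid for anomalies"],
  toBe: ["responds to surfaced alerts instead of scanning"],
};
```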

Notice I never discussed usability issues such as age, sight quality, gloves vs. no gloves, color-blind vs. non-color-blind and so on? Well, if you did not, now you have. Usability is a completely new chapter on its own; suffice to say I typically design with the extreme in mind, that is, I assume the worst and hope for the best. Make the process accessible and it should, in theory, put you in a position to refine for specific usability and accessibility scenarios (i.e. design garden shears for people with arthritis and in theory you will design for people both with and without it in mind).

Keep breaking the UX Personas you design down until you simply cannot come up with new ones. Then go grab some customers you have today or want to have tomorrow and play a game of “Guess Who” with the existing ones you have defined. If you cannot line them up with what you have or you end up with orphan UX Persona(s) then consider how to merge or separate until you reach a “best guess” group of personas to attack with your new product.

The trick here is also to focus more on “TO-BE” not “AS-IS” as the moment you release your product to market you are changing the rules of usage. You are invoking change in an area where existing mental models are either hard wired into the users or have no concept of said feature(s) even existing.

Once you have the list of persona(s) grouped in a way you feel makes sense (make tribes if it helps with the grouping), I want you to divide them into two piles: the first being 80% and the second being 20%. Dividing these personas into the 80% and 20% piles gives you two options going forward.

The 20% pile could be your first target users; these are the ones you want to launch version (1) of your new product with. It means you have a much simpler feature set to attack, but it also means you can iron out the kinks in this new process whilst illustrating the value of "simplicity".

The 80% pile could be the same as the 20%, or it could be the persona(s) that are distracting you from simplicity, that is, the ones that aren't as important for the first round of delivery. Either way you choose to approach this, just settle on one of these piles as your "target" user base.
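A rough sketch of that split (TypeScript; the idea of a single numeric weight per persona is my own illustrative shortcut, not a prescribed method):

```typescript
// Hypothetical sketch: split a weighted persona list into a 20% target
// pile and an 80% "later" pile. The weighting heuristic is up to you.
interface WeightedPersona {
  name: string;
  weight: number; // e.g. influence x usage, however you choose to score it
}

function splitPersonas(personas: WeightedPersona[]): {
  target: WeightedPersona[]; // the 20% you launch v1 with
  later: WeightedPersona[];  // the 80% you chip away at in later releases
} {
  const sorted = [...personas].sort((a, b) => b.weight - a.weight);
  const cut = Math.max(1, Math.ceil(sorted.length * 0.2));
  return { target: sorted.slice(0, cut), later: sorted.slice(cut) };
}
```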

The truth is you will never hit 100% of your personas' needs in your ongoing deliveries, and once you make peace with that, the pressure of being everything to everyone is reduced. It means that over time you will have to work harder to win the lost personas back, but that is fine, provided you stick to that mission and remain calm.

Example – when Silverlight was being built, the first version pretty much took the 20% path, with a focus on video persona(s). As more versions of the product were released, it would then take the missing 80% pile, subdivide that into 80/20 and, again, take that 20% and chip away at the personas that wanted more than video.

ProTip: Consider putting these personas into a deck of "cards" and handing them out to all members of your team. When you are discussing problems in your day-to-day development, get into the habit of keeping them in view whenever you say the words "customer" or "users".

You are in the patterns business.

Do not fool yourself into thinking the software you are working on is unique and has never been done before. There are elements of the software you are making that are fresh in terms of features here and there, but ultimately you are highly likely re-using existing UI patterns found in software today.

The question is: what UI patterns are you using and why have you changed them? That is to say, which Color Modal are you going to use, and if it should be different, why don't you like the existing patterns out there?


It's at this micro level that you isolate your users' actual behavior, and it can, the majority of the time, be field-tested with customer(s) to establish what is "usable" and what isn't. It is also at this point that you can attach "who" the UI pattern is designed for.

If you catalog these UI patterns, you can also begin the "visual" treatment process before any code is even written (in parallel), as it's really about which assets are missing, which you already own, and when they can be queued for delivery.
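A minimal sketch of what such a catalog entry might look like (TypeScript, all fields hypothetical):

```typescript
// Hypothetical sketch of a UI pattern catalog entry: who it's for,
// how it deviates from the common pattern, and asset status.
interface UIPatternEntry {
  pattern: string;          // e.g. "master-detail list"
  personas: string[];       // who this pattern is designed for
  deviation: string | null; // how/why it differs from the common pattern
  fresh: boolean;           // new enough to consider for patent review?
  assets: "missing" | "owned" | "queued"; // visual treatment status
}

const catalog: UIPatternEntry[] = [
  {
    pattern: "contextual new-mail mode",
    personas: ["executive assistant"],
    deviation: "chrome screened back; contacts surfaced on the left",
    fresh: true,
    assets: "missing",
  },
];

// Which patterns still need visual treatment before coding starts?
const todo = catalog.filter((e) => e.assets === "missing");
```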

Lastly, and the most important point of all: you can identify which patterns are "fresh" and begin your patent application(s) for retaining your intellectual property rights, whilst at the same time ensuring that you're not infringing on other existing patents out there (i.e. have you read the terms and conditions of the Office Ribbon, or followed the Adobe vs. Microsoft legal case around panel snapping?).

Discovery integrates with delivery.

To recap: you have taken the simple "user stories" and mapped them into visual stories that help illustrate the "before" and "after" of refining them down into "simplicity". You have also identified who the actors or "users" actually are, in finite detail that even speaks to what, where and how.

So, the discovery phase is done, right?

No. This was the easy part, as now you have information and a sense of the possibilities. What comes next is the part where you will often lose the executive owners of your team: the prototyping phase of discovery, that is, taking your ideas and coming up with some wireframes or small, time-boxed interactive prototypes of how the comic stories can be achieved.

I say you will lose them because if you do not handle this carefully, this phase will be positioned as "wasteful" or "not as important – function vs. form". The trick is to factor this phase in as being part of your "delivery" whilst knowing it is actually more to do with "discovery" (it is as if you Jedi-mind-tricked the project management and executive fears).

Prototypes, wireframes and comic stories are deliverables that work the same as actually writing the code itself in a normal agile scenario (i.e. they too can follow sprint cycles). You can do it in parallel (if you are in v2 of this process) or you can do it sequentially, but the key is to deliver something that shows the investment is not wasted. You also iron out the unknowns and can often deliver drops to candidates for the software's release, to get a sense of what success/failure will look like without having spent months coding it for real.

Remember Windows Longhorn? Yeah, most of that was Macromedia Director. Remember Silverlight on the Nokia N9 back in the MIX keynotes? Yeah, that was kind of a mini PowerPoint-style animation! The point is, when you demo something, 99% of the time nobody asks to see how many lines of code it took to make.

This approach builds confidence that your development has balanced the feature(s) required with market readiness/attractiveness, whilst at the same time maintaining a disciplined delivery schedule. It also allows you to really validate your UX personas against the features they are attached to, as you can then start to formulate a fairly accurate understanding of which feature(s) are going to make the release and who they are being targeted at. Having just this alone can improve your "feature" cutting when the time for reducing scope occurs; again, you know more about the impact than a general "user" statement gives you.

It also helps when forecasting the costs of a new product's development: how much this will cost to produce, and who the first, second and third rounds of releases will likely excite. It also helps training/documentation teams begin preparing their work streams for managing change-management issues. This also helps testing teams understand how to attack their tests to ensure the said quality gates are kept intact and aren't being approached from "generic" scenarios (i.e. play the role of the UX persona, not roles like "let's see how much of this I can break").

Finally, it helps marketing/sales teams actually get ready for the "launch" in a way that hits your target market squarely in the places it should (i.e. they know whom to avoid making eye contact with at launch time, should that UX Persona group not make the cut).

Sunlight is the best disinfectant.

Before I close out this poorly written tome, let me say that no matter what choices you decide to make, ensure you keep it all open and transparent. There is nothing worse than having all the discovery and delivery processes locked up in the hands of a select few, or worse, making it available but displayed in such an abstract format that it simply holds no clarity around what just happened.

The combination of UX and code delivery needs to happen clearly, there need to be KPIs set, and lastly you need to ensure everyone in the room has a simple visual display of what is being worked on.


An executive does not care about your "Done" board in agile, and they do not care how many stories you have to write code against either. They also do not care that you have unpacked a generic user persona into five sub-personas, and lastly they do not care about how you have improved the "AS-IS" comic-style user stories into the "TO-BE". They assume you do this normally, so don't expect an "attaboy/girl" pat on the back; it's like asking for a high five for knowing how to check email.

They mostly care about progress reports: where is their money being spent and why. They will care at times when visiting peers or their power brokers come by for a visit, in which case bring all of the above out for a full show and tell (factory visits, is what I call them).

If you can justify the costs in a meaningful way that does not involve reading text, then you are miles ahead of other teams who assume that just because a story is written down, everyone "gets" where they are heading. Visualizing your products early often gives everyone in the room clear communication around what the vision will end up looking like. There is less pressure for demos to be at a fairly high standard, given that comic stories, prototypes and wireframes will paint that end point in a much more cleanly digestible way (which ultimately means memory recall, which is what you all want).

Lastly, before I close out, can I just say aloud: everybody relax. The agile movement is simply about taking a lot of big, lumpy problems and breaking them down into really small, bite-sized pieces that are easier to manage. This strategy is not unique and exclusive to software development; other industries do this daily and without as many issues as we often create.

An example of this process: grab 3-4 unique small "beads" and drop them into a bucket of sand. Mix the sand up; then, using just a small scoop, plastic bags and a scale, come up with a strategy for sampling the said mix evenly.

Solve that problem and you could have a future in geology, but at the same time you will also be in a better position to understand how Agile + forecasting actually work.
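For fun, here's a toy simulation of that exercise (TypeScript; every number is invented): mix a few beads into a bucket of sand grains, take a handful of scoops, and scale the sample up to an estimate, the same way a few sprints' worth of data forecasts a backlog.

```typescript
// Toy simulation: estimate how many beads are in the bucket by
// sampling scoops, the way sprint samples forecast a backlog.
const SAND_GRAINS = 100_000;
const BEADS = 40;
const SCOOP_SIZE = 2_000;
const SCOOPS = 10;

// Build the bucket: true = bead, false = sand.
const bucket: boolean[] = Array.from(
  { length: SAND_GRAINS + BEADS },
  (_, i) => i < BEADS,
);

// Mix the sand up (Fisher-Yates shuffle).
for (let i = bucket.length - 1; i > 0; i--) {
  const j = Math.floor(Math.random() * (i + 1));
  [bucket[i], bucket[j]] = [bucket[j], bucket[i]];
}

// Take scoops and count the beads found in the sample.
let found = 0;
for (let s = 0; s < SCOOPS; s++) {
  const scoop = bucket.slice(s * SCOOP_SIZE, (s + 1) * SCOOP_SIZE);
  found += scoop.filter(Boolean).length;
}

// Scale the sample up to an estimate for the whole bucket.
const sampledFraction = (SCOOPS * SCOOP_SIZE) / bucket.length;
console.log(`Estimated beads: ${Math.round(found / sampledFraction)} (actual ${BEADS})`);
```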




Digital Skeuomorphism decoded.

There seems to be an undercurrent of contempt towards Digital Skeuomorphism – the art of taking real-world subject material and dragging it kicking and screaming into your current UI design(s) (mostly if you're an iPad designer).

I've personally sat on the fence with regards to this subject, as I see merit in both sides of the argument: those who believe it's gotten out of hand vs. those who swear it's the right mix for helping people navigate UX complexity.




Here’s what I know.

I know personally that the human mind is much faster at decoding patterns that involve depth and mixed amounts of color (to a degree). I know that while sight is one of our sensory radars working 24/7, it is also one that often scans ahead for known pattern(s) to decode at sub-millisecond speeds.

I know we often think in terms of analogies when we are trying to convey a message or point. I know designers scour the internet and use a variety of mediums (real-life subject matter and other people's designs) to help them organize their thoughts / mojo onto a blank canvas.

Finally, I know that design propositions like the monochrome-like existence of Metro have created an area of conflict around like vs. dislike, in comparison to the rest of the web, which opts to ignore the principles laid out by the Microsoft design team(s).

Here’s what I think.

I think the Apple design community's habit of theming applications around these unrealistic-yet-realistic concepts and applying them to their UI designs is more helpful than hurtful. I say this as it seems to not only work but also solve a need, despite the hordes mocking its existence.

I know I have personally gone my entire life without grabbing an envelope, a photo and a paperclip and attaching them together prior to writing a letter to a friend.

Yet, there is a User Interface out there in the iPad AppStore that is probably using this exact concept to help coach the user that they are in fact writing a digital letter to someone with a visual attachment paper clipped to the fake envelope it will get sent in.


Why is this a bad idea?

For one, it's not realistic, and it can easily turn a concept into a Fisher-Price existence quite fast. Secondly, it taps into the same ridiculous faux-UI existence commonly found in a lot of movies today (you know the ones, where a hacker worms his way into the bank's mainframe, with lots of 3D visuals to illustrate how he/she is able to overcome complex security protocols).

It’s bad simply for those two reasons.

It's also good for those two reasons. Let's face it: the less friction and the more confidence we can build in end users around attaching real-life analogies or metaphors to a variety of software problems, the less they are preoccupied with building large amounts of unnecessary muscle in their ability to decode patterns via spatial cognition.

Here’s who I think is right.

Apple and Microsoft are both on different voyages of discovery, and both are likely to create havoc in the end user base around which of the two is the better option: digitally authentic or digitally inauthentic.

It doesn't matter in the end who wins; given both have created this path, it's fair to say that the average user out there is now going to be tuned into both creative outputs. As such, there is no such thing as a virgin user when it comes to these design models.

I would however say out loud that when it comes down to the cognitive load on the end user, between application(s) out there that opt for a Metro-like vs. an Apple iPad-like solution, the iPad should by rights win that argument.

The reason being, our ability to scan the pattern associated with the faux design model works in the end user's favor, much the same way it does when you watch 30 seconds of a hacker busting their way into the mainframe.

The faux design approach will work for depth engagement but here’s the funny and wonderful thought that I think will fester beyond this post for many.

Ever notice that the UI designs in movies opt for a flat, "metro"-like monochrome existence where at first you go "oh my, that's amazing CG!", yet if you then play with it for a long period of time the wow factor begins to taper off fast?


I don't have the answers on either side here, and it's all based off my own opinion and second-hand research. I can tell you though: sex sells, we do judge a book by its cover, and I think what makes the iPad apps appeal to many is simply attractiveness bias in full flight.

Before I leave you with that last thought, I will say that over time I've seen quite a lot of iPad applications use wood textures throughout their designs. I'd love to explore the psychology of why that recurs, as I wonder if it has to do with some primitive design DNA of some sort.


Here’s some research that hints at this space [Click here].


Metro: Typography trumps chrome – debunked.

Metro is fast becoming an unclear, messy, craptacular retardation of modern interface design. The current execution out there is getting out of control, turning what originally started out as Microsoft's plagiarized edition of Dieter Rams' "Ten Principles of Good Design" into what we have before us today.

I am actually ok with that; if I ever looked back on the first year of my designs in the 90s, I'd cringe at the sight of lots of AlienSkin bevel, glow and fire plugin-driven pixel vomit.

The part I'm a little nervous about, though, is how fast the microsoftees of the world have somehow collectively agreed that text is in and chrome is out – like somehow science is wrong, and what we really need to do is get back to the basics of the ASCII-inspired typography design(s) of yesteryear.

Typography is ok, in short bursts.

Spatial visualization is the key term you need to Google a bit more. Let me save you a little Google confusion and explain what I mean.

Humans are not uniform; to assume that inside HCI we are all equal in our IQ levels is dangerous. It is quite the opposite, and to be fair, the human mental conditions we often suffer from are still in the infancy of medicine; we have so much more to learn about the genetic deformations/mutations that are ongoing.

The reality is that humans take different approaches to deciphering the patterns within our day-to-day lives. We aren't getting smarter; we're just getting faster at developing habitual comprehension of the patterns we often create.

Let us, for example, assume I snapped someone out of the 1960s, sat him or her in a room and handed them a mobile device. I then asked them to "turn it on" and measured the time taken to navigate the device and switch it on.

You would most likely find a lot of accidental learning, trial and error, but eventually they'd figure it out, and now that information is recorded in their brain for two reasons. Firstly, pressure does that to humans: we record data under duress that is surprisingly accurate (thus bank robbers often figure out that their disguises aren't as effective as once thought). Secondly, it is like discovering fire for the first time: the event gives it meaning ("this futuristic device!!").

What is my point? Firstly, brain capacity has not increased; our ability to think and react visually is what I'd argue is the primary driver of our ability to decode what's in front of us (case in point: the usage of an H1 tag breaks up the indexation and comprehension of what I've written).

How so?

Research in the early '80s found that we are more likely to detect misspelled words than correctly spelled ones. The research goes on to suggest that the reason for this is that we initially obtain shape information about a word via peripheral vision (we later narrow in on the said word and make a true/false decision after we've slowed the reading down to a fixated position).

It doesn't stop there. By now you, the reader, have probably fixated on a few mistakes in my paragraph structure or word usage as you've read this, yet you've still persisted in comprehending the information, despite the flaws.

What's important about this packet of information is that it hints at what I'm stating: a reliance on typography is great, but for initial bursts of information only. Should the density of the data in front of you increase, your ability to decode and decipher it (scan / proof-read) becomes more a case of balancing peripheral vision and fixated selection(s).

Your CPU is maxed out is my point.


Did you also notice what I just did? I put all that text in uppercase, and what research has also gone on to suggest is that when we go full-upper, our reading speed decreases as more and more words are added. That is to say, inside Metro we now use a mixed edition of both, and somehow this is a good thing or a bad thing?

Apple has over-influenced Microsoft.

I'm all for new design patterns in pixel balancing, and I'm definitely still hanging in there on Metro, but what really annoys me the most is that the entire concept isn't really about breaking away based on scientific data centered around the average human's ability to react to computer interfaces.

It is primarily a competitive reaction to Apple. Had Apple not existed, I highly doubt we would be having this kind of discussion, and it would probably be a fully glyph/charm/icon visual-thinking-friendly environment.

Instead, what we are probably doing is grabbing what appears to be a great interruption to the design status quo and declaring it "easier", but reality kicks in fast when you go beyond the initial short burst of information or screen composition into denser territory – even Microsoft is hard pressed to come up with a Metro-inspired edition of Office.

Metro Reality Check – Typography style.

The reality is that the current execution of Metro on Windows Phone 7 isn't built or ready for dense information, and I would argue that the rationale that typography replaces chrome is merely a case of being the opposite of a typical iPhone-like experience – users are more in love with the unique anti-pattern than they are with the reality of what is actually happening.

Using typography as your go-to spatial visualization pattern of choice simply flies in the face of what we actually do know from the small packets of research we have on HCI.

Furthermore, if you think about it, the iPhone itself, when it first came out, was more of a mainstream interruption to the way in which we interpret UI via a mobile device; icons, for example, took on more of a candy experience and the chrome itself became themed.

It became almost as if Disney had designed the user interface as their digital mobile theme park, yet here is the thing – it works (notice that when the Metro UI adds pictures to the background it seems to fit? …there's a reason for that).

Chrome isn't a bad thing. It taps into what we are hard-wired to do in our ability to process information: we think visually (with a minority being the exception).

Egyptians, Asians and Aboriginals wrote their history on walls/paper using visual glyphs/symbols, not typography. That is the most important principle to grab onto; historically speaking, we have always shown a tendency to gravitate towards a pictorial view of the world and away from complexity in glyph patterns (text) (that's why data visualization works better than text-based reports).

We ignore this basic principle because our technology environment has gotten more advanced, but we do not have extra brainpower as a human race; our genome has not mutated or evolved! We have just gotten better at collectively deciphering the patterns and in turn have built up better habitual usage of these patterns.

Software today has a lot of bad UI out there, I mean terrible experiences, yet we are still able to use and navigate them.

Metro is more about marketing / anti-compete than it is about being the righteous path to HCI design; never forget that part. Metro's tagline of being "digitally authentic" is probably one of Dieter Rams' principles being mutated and broken at the same time.

Good design is honest.
It does not make a product more innovative, powerful, or valuable than it really is. It does not attempt to manipulate the consumer with promises that cannot be kept.

Should point out, these ten principles are what have inspired Apple and other brands in the industrial design space. Food for thought.

Lastly, one more thing: what if your audience was 40% autistic/dyslexic? How would your UI react differently to the current designs you have before you?


Decoding Windows 8 UX Principles – Let Context breathe instead of the UI!

Last night I was sitting in a child psychologist's office watching my son undergo a whole heap of cognitive testing (he has a rare condition called Trisomy 8 Mosaicism), and in that moment I had what others would call a "flash" or "epiphany" (i.e. the theory is we get ideas based on a network of ideas that pre-existed).

The flash came about from watching my son do a few Perceptual Reasoning Index tests. The idea in these tests is to present a group of images (in grid form), and he has to basically assign semantic similarities between the images (ball, bat, fridge, dog, plane would translate to ball and bat being the semantic similarity).

This for me was one of those aha! moments. You see, when I first saw the Windows 8 opening screen of boxes/tiles being shown, with a mixed message around letting the User Interface "breathe" combined with ensuring a uniform grid / golden-ratio-style rant… I just didn't like it.

There was something about this approach that I just instantly took a dislike to. Was it because I was jaded? Was it because I wanted more? There was something I didn't get about it.


Over the past few days I've thought more about what I don't like about it, and the most obvious reaction I had was around the fact that we're going to rely on imagery to process which apps to load and not load. Think about that: you are now going to have images, some static whilst others animated, to help you gauge which one of these elements you need to touch/mouse-click in order to load?

re-imagining or re-engineering the problem?

This isn't re-imagining the problem; it's simply taking a broken concept from Apple and making it bigger, so instead of icons we now have bigger imagery to process.

Just like my son, you're now being attacked at the Perceptual Reasoning level on which of these items "are the same or similar", and given we also have full control over how these boxes are to be clustered, we in turn will put our own internal taxonomy into play here as well… Arrghh…

Now I'm starting to formulate an opinion that the grid-box layout approach is not only not solving the problem, but is probably a lurking usability issue (more testing needs to happen and be proven here, I think).

Ok, I've arrived at a conscious opinion on why I don't like the front screen; now what? The more I thought about it, the more I kept coming back to the question: "Why do we have apps, and why do we cluster them on screens like this?"

The answer isn't just a prospective-memory rationale; the answer really lies in the context in which we as humans lean on software for our daily activities. Context is the thread we need to explore on this screen, not "look, I can move apps around and dock them". That's part of the equation, but in reality all you are doing is mucking around with grouping information or data once you've isolated the context to an area of comfort – that, or you're still hunting / exploring for the said data and aren't quite ready to release (in short, you're accessing information in working memory and processing the results in real time).

As the idea began to brew, I thought back to my sources of inspiration – the user interfaces I have loved and continue to love, the ones that get my design mojo happening. User interfaces such as the one that I think captures the concept of Metro better than what Microsoft has produced today: the Microsoft Health / Productivity video(s).


Back to the Fantasy UI for Inspiration

If you analyze the attractive elements within these videos what do you notice the most? For me it’s a number of things.


I notice that the UI is simple, in a sense "Metro paint-by-numbers", which despite its basic composition is actually quite well done.


I notice that the User Interface is never just one composition; the UI appears to react to the person's context of usage and not the other way around. Each User Interface has a role or approach that applies a very simplistic solution to a problem, but does so in a way that feels a lot more organic.

In short, I notice context over and over.

I then think back to a User Interface design I saw years ago at Adobe MAX. It's one of my favorites: in this UI, Adobe were showing off what they thought could be the future of entertainment UI, in that there is simply a search box up top on screen. The default user interface is somewhat blank, providing a passive "forcing function" that pushes the end user to provide some clues as to what they want.

The user types the word "spid", as their intent is Spiderman. The User Interface reacts to this word and the entire screen changes to the theme of Spiderman whilst spitting out movies, books, games etc. – basically you are overwhelmed with context.

Crazy huh?
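To make that reaction concrete, here is a minimal sketch of the idea in C#. To be clear, this is my own illustration, not the Adobe demo's code; the catalog, the MediaItem record and the OnKeystroke handler are all hypothetical names. The point it demonstrates is simple: every keystroke narrows the matches, and once the matches cluster around one franchise, the whole screen can re-theme around it.

```csharp
// A hypothetical sketch of a context-reactive search; the catalog and all
// names here are my own illustration, not the actual Adobe demo code.
using System;
using System.Collections.Generic;
using System.Linq;

record MediaItem(string Title, string Kind); // Kind: "movie", "book", "game"...

class ContextSearch
{
    static readonly List<MediaItem> Catalog = new()
    {
        new("Spider-Man", "movie"), new("Spider-Man 2", "game"),
        new("The Amazing Spider-Man", "book"), new("Superman", "movie"),
    };

    // Every keystroke narrows the matches; once one franchise dominates,
    // the shell can re-theme the entire screen around it.
    static void OnKeystroke(string partialQuery)
    {
        var matches = Catalog
            .Where(m => m.Title.Contains(partialQuery, StringComparison.OrdinalIgnoreCase))
            .ToList();

        // Group by kind so the screen floods with movies, books and games at once.
        foreach (var group in matches.GroupBy(m => m.Kind))
            Console.WriteLine($"{group.Key}: {string.Join(", ", group.Select(m => m.Title))}");
    }

    static void Main() => OnKeystroke("spid"); // "spid" alone is enough context
}
```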

I look at Zune, I type the words "The Fray" and hit search; again, contextual relevance plays a role and the user interface is now reacting to my clues.


I look back now at the Microsoft Health videos and then back at the Windows 8 screens. The videos are one and the same with Windows 8 in a lot of ways, but the huge difference is that one doesn't have context – it has apps.

The reality is, most of the apps you have carry semantic data behind them (except games?), so in short why are we fishing around for "apps" or "hubs" when we should all be reimagineering the concept of how an operating system of tomorrow, like Windows 8, accommodates a personal level of both taxonomy- and context-driven usage that also respects each of our own cognitive processing capabilities?

Now I know why I dislike the Windows 8 User Interface. The more I explore this thread, the more I look past the design elements and "wow" effects, and the more I come to the realization that, in short, this isn't a work of innovation; it's simply a case of taking existing broken models on the market today and declaring victory over them because it's now either bigger or easier to approach from a NUI perspective.

There isn't much reimagination going on here; it's more reengineering instead. There is a lot of potential for smarter, more innovative and more relevant improvements in the way we will interact with the software of tomorrow.

I gave a talk along these lines at a local Seattle Design User Group once. Here are the slides; I still think it holds water today, especially in a Windows 8 futures discussion.

Related Posts:

UXCAST: DataGrid or should it be Data Visualization.


I am working on a secret squirrel application. I can't say much; suffice to say I had a situation where a bunch of clients connect to a network – like most apps, I guess.

In this situation, I needed to inject a listening app into the overall network, in that it needed to keep a pulse check on how the clients within the network are doing. This listening application needed to show the information in a meaningful way, but at the same time I did not want to have to inspect every single client each time one connects or disconnects.

I needed to visually show the connection states, but I wanted the UI to be reactive to me rather than me reactive to it.

Armed with problems like this, I now draw your attention to my biggest pet hate – DataGrids. A developer – and you know who you are – would often take a situation like this and go, "Ok, got it, what we need is a datagrid where the columns show machine name, state and blah blah metadata – easy peasy!!"

If you are that developer, I want you to do me a favor, remove yourself from the screen design team as you are hereby in a time-out.

Again, the problem is that I want to have a sense that all things are ok, but I want to be alerted when things are not. I want to put this UI onto a large screen and just let it sit there keeping an eye on things, and the moment something's amiss – tell me!

Here is what I came up with. It is a hexagonal grid; each tile represents a new client connection (added at a random position in the grid), and when a client activates, its tile pulsates. Yes, it pulsates, but slowly, so it gives the impression that the "grid" (i.e. the network grid) is breathing just like an organic machine.

When something bad happens, the tile flips to a red alert state, and if there is a massive network outage the entire screen flickers with red pulsating tiles all yelling "help me, help me".

If more than 10 tiles fail, an overall alert text flows over the top, giving you the old "Warning, Will Robinson, warning" alert.
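For the curious, here is a minimal sketch of that tile state model in C#. The names (ClientTile, NetworkGrid) and the threshold constant are my own stand-ins for illustration – the real app is still secret squirrel – but the behaviour mirrors what I described: tiles join at random positions, flip to an alert state on failure, and a banner fires once enough of them go red.

```csharp
// Hypothetical sketch of the tile state model; class names and the
// alert threshold are my own stand-ins, not the secret squirrel app's code.
using System;
using System.Collections.Generic;
using System.Linq;

enum TileState { Breathing, Alert }

class ClientTile
{
    public string MachineName { get; init; } = "";
    public TileState State { get; set; } = TileState.Breathing;
}

class NetworkGrid
{
    const int AlertThreshold = 10; // more than 10 red tiles -> overall banner
    readonly List<ClientTile> _tiles = new();
    readonly Random _rng = new();

    // New connections land at a random slot in the grid, slowly pulsating.
    public void OnClientConnected(string machineName) =>
        _tiles.Insert(_rng.Next(_tiles.Count + 1),
                      new ClientTile { MachineName = machineName });

    // A failure flips that client's tile to red; enough failures trigger
    // the "Warning, Will Robinson" text over the whole screen.
    public void OnClientFailed(string machineName)
    {
        var tile = _tiles.FirstOrDefault(t => t.MachineName == machineName);
        if (tile != null) tile.State = TileState.Alert;

        if (_tiles.Count(t => t.State == TileState.Alert) > AlertThreshold)
            Console.WriteLine("Warning, Will Robinson, warning!");
    }
}
```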

The point here is simple. I could have easily just taken a data grid and a view model, bound the two together and walked away. I actively chose not to, as it's a case of thinking about the problem creatively and trying an approach that makes sense, but at the same time underpins an important principle: we work with software; we shouldn't just use software.

That’s why I hate datagrids.

Related Posts:

How much would you invest in a pixel?

I am a massive fan of World of Warcraft again; yes, it is sad, isn't it? Last night I was playing my usual allotment of time, watching pixels update on a screen vs. interacting with real humans, and something I witnessed struck a chord with me.

The 'flash of genius' for me came when I was playing a typical Player vs. Player (PvP) round of Battlegrounds. This is the part of the game that randomly aggregates a group of people spread throughout the entire WoW realm(s) into a 10v10 etc. match.

It's basically unbridled chaos, and it really highlights some components I find fascinating, as watching the herding mentality of us humans in an avatar-driven game is kind of predictable. For instance, I was the first out of the gate when the match started; I rode my horse towards the second-closest spot, and because I was the party leader, many other players followed me. I arrived at the spot and just waited. Of the six other players, three stayed after 20 seconds or so of decision making, whilst the other three grew (what I can only imagine was) bored and rode off in search of a fight.

What is so profound about this is how easily I convinced others to wait beside me in a place with no real plan other than "well, if the sh*t goes down, we defend with our lives". We had no vision of what was about to happen beyond that, and we had no clue as to how we would all work together, as we had just met each other in game only five minutes beforehand. Yet here we were, armed, ready to fight and hoping we could figure it out as we went.

We died.

This is much like most teams I've been on over the past few years. I keep hearing that a good team is one that is in sync with one another, but in the end that only lasts until the first flag/waypoint; beyond that, a lot of variables occur that in turn cause a de-synchronization.

In the above example, had I been paired with a healer and another tank/DPS (tank and DPS are basically characters whose sole job is to hit hard and often, while a healer's job is to keep everyone alive while they do so), we may have stood a chance of survival. We would each have had a role to play, and whilst the plan was distilled into a core class structure, we would still have had a series of objectives that must be upheld.

A healer must be protected at all costs, as that character is your tipping point between living and dying, but at the same time a healer must keep back from the fight – as much as possible. A tank/DPS's job is to draw fire and get deep into the melee as much as possible; the more you can tie up your enemy's focus, the greater the chance of a win.

In software, this concept is not entirely lost, as the UX/UI person's job is to figure out how to keep the software from dying a bad usability death. The coder's job is to underpin it with large amounts of code to keep its structural integrity intact; if the coders do not do their jobs right, they can in turn create more work for the UX/UI person to go fix. If the UI/UX person does not do their job right, they can in turn suffocate the work of the coder – so it is a partnership.

A great software release respects this partnership to the end. Good UI/UX and Good Code = Good software.

If you randomly put together a team of mixed classes and pin your hopes on agile as a way of life, then you are no different from my WoW example: an assumed leader leads a group of you into a spot with no agenda or plan other than "don't die please".

How you live or die is based purely on how fast you can communicate with one another about what tactics you can deploy to uphold this basic principle of preserving one's life.

All it takes, however, is one person to break ranks, to be the Leeroy Jenkins (see video below), and it all comes unstuck, and fast. We all die a horrible, humiliating death (aka miss our deadlines etc.).


Agile is not enough is my overall point. I think agile works if you are solely focused on being a tank/DPS class (coder). If you mix in UX/UI, then it keeps coming back with mixed results; there appears to be no right or wrong formula here.

The one concept I have – and it's only a theory – is that you need, at times, to stop fighting the code and give the healer (the UX/UI person) enough time to catch their breath, to drink some mana potions if you will, and figure out how to navigate the next fight.

Lost in my metaphor?

What I am trying to say is that UX/UI in a sprint equation needs to occur every other sprint, meaning at some point in the process you will arrive at a point where the coders have to refactor the UI/UX to accommodate the new direction in the design.

It sux.

It is, however, the realistic way to accommodate the reactive design you have put in place, and to be clear, it has little return on investment other than user efficiency and satisfaction levels.

Now comes the question – how much do you invest in a pixel?

Answer that and you will have a better understanding of Agile, UI/UX + code than I currently have, as you now need to think in terms of how it all comes together and what value you place on the UI/UX component. Agile won't necessarily work the way you think when it comes to integrating your healer (UX/UI person) into your battle group. At times you may not need them – that is, until you hit a wall and realize it would have been better to have them at the start of the fight rather than the end.

I can think of some rebuttals here – 'well, you are doing agile wrong' or 'your team sounds like it wasn't assembled correctly' – to which I simply respond: welcome to reality. Sometimes you have to play the game with a randomly aggregated team, and it is not always a case of greenfield project management.

Now, your move: how do you accommodate these variables?

Related Posts:

UXCAST – Making Isometric Workflows inside Expression Blend – Part 1


I did it! And I feel exposed. I sat down tonight and put together the first of what may or may not be many (depending on how badly I get crit) screencasts around UI/UX + Microsoft technology.

In this video, I show folks how to take a workflow design concept and inject it into your canvas of choice, but in an isometric format. I like isometrics simply because you get more of a spatial view than most screen angles give; that, and it derives from my old pixel-art days, so… yeah… isometrics are the way!

Hope you enjoy, and feedback welcomed.



RIGANEIC – UXCAST – Isometrics in Expression Blend from Scott Barnes on Vimeo.

In this screencast I show how one can take an isometric workflow map and transpose it into Expression Blend 4.
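If you want the math behind the trick before watching: the classic 2:1 isometric look is (up to a uniform scale factor) what you get from a 45° rotation plus a 50% vertical squash – in Blend terms, a RotateTransform stacked with a ScaleTransform. Here is a tiny C# sketch of that projection; the helper names are mine, not anything Blend ships with.

```csharp
// A sketch of the classic 2:1 isometric projection. Equivalent (up to a
// uniform scale factor) to rotating 45 degrees then halving the height.
using System;

static class Isometric
{
    // Map a flat (x, y) workflow coordinate onto the isometric plane.
    public static (double X, double Y) Project(double x, double y) =>
        (x - y, (x + y) / 2);
}

class Demo
{
    static void Main()
    {
        // A 100x100 square in flat space...
        var corners = new[] { (0.0, 0.0), (100.0, 0.0), (100.0, 100.0), (0.0, 100.0) };

        // ...projects to a 200-wide, 100-tall diamond: the 2:1 isometric shape.
        foreach (var (x, y) in corners)
            Console.WriteLine(Isometric.Project(x, y));
    }
}
```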

Related Posts: