What I have learnt being a designer, developer and product manager.

It's been roughly 19 years since I first earned payment for my ability to use a computer. In those 19 years I've held a vast number of different roles; some were mentally draining, while most were built around growth (i.e. learning from failures). In that time I've come across a few things we seem destined to repeat over and over, no matter where I work or how experienced my fellow computer horde members are. Before I outline these, let me start by saying I have programmed more in my life than I've been allowed to design, and when I can't get the designs I want from the design team, well, I end up doing the designs anyway. I often find the people I work with can't accept the notion that someone can do both equally well, so you're always defending your position in the developer vs designer discussion. In reality you're nearly always asked to "pick a side and stick with it" or face career penalties should you try to sit in the middle. That was the case until I discovered the role of Product Management, and what better place to learn that role than Microsoft. I had some wins for sure, but I spent a lot of my time watching absolutely brilliant minds at work. Working with others in this field, I learnt that being a developer and designer had benefits, as I was now able to look at both sides of the equation and get a sense of what my audience (.NET developers and designers) needed most from us. So what have I learnt overall in those 19 years?

Humans and Chunking.

Some would call this Agile, but at the end of the day we humans tend to break things down into small pieces as a way of coping with scale (kind of like neurons in our body: they aren't complete end-to-end circuits, just iterative linkages of energy). Sure, we build ceremonies around the problem and often swear "this is the winning formula", but in truth all we are doing is taking a really big, lumpy problem and breaking it into manageable pieces. There is strength in doing this, but there are also problems that often outweigh the positives. For instance, in software I've noticed teams almost always get caught up in the idea that once you break a problem into small pieces it becomes more manageable. In reality this often leads to context, or peripheral awareness of the problem, being lost to fragmentation. That's the issue: it's not the idea of breaking things down, it's that when the breakdown gets obfuscated or leaves people out, it hands the developer or designer only a small amount of information to work from. Without context on the problem, the intended audience, or even why the problem should be worked on at all... well... more problems emerge. To put it bluntly, you almost always end up doing keyhole surgery; some can make it work, while for others it ends in a painful, career-stunting failure.

A Product Manager is like a Military General.

We in the delivery teams (developer/designer) often roll our eyes at the role of Product Manager. We sometimes distrust anyone who can't develop or write code being in charge of the product's direction. Then, on the flipside, I've seen teams punish a Product Manager for having a development background, because that person can't resist checking in on the code base and outlining solutions to problems before they arise (by the way, Scott Guthrie used to check namespaces on code before the devs could release... so don't fault people for that!). A good or bad product manager is like a general in an army: should they give you bad orders or send you down the wrong path, you can easily take a winning, high-performance team and run it into the ground in under a month. It's very easy to kill the morale of a development team under the vision or guidance of bad product management (including release management).

Release Management is the same as horoscope reading.

I've seen way too many teams held hostage to a deadline they have little or no control over. Sure, Agile processes get placed on a pedestal; that is, until the business throws a tantrum and says "I need it now", in which case Agile will not be a safe haven to hide behind if it even hints at being a reason for delays. Agile is often used to manage this, and it can work to hold back the release-date demons, but I've also seen Agile become this lumpy carpet that bad product design or strategy hides under. It's easy to bury a team with no thinking or strategy behind feature development: all you have to say is "we'll iterate our way out of this", and unless someone in the room is sharp enough (and empowered enough) to catch them in the act, guess what... that beloved Agile just became the noose around your career's neck. Agile is also a funny and interesting thing. In probably eight years of travelling around Australia and the world, I've honestly never seen a team do it right to the letter of the law. I always see them cherry-pick it to death, and I've lost count of the number of times I've seen teams argue "well, we like to keep it simple" as the rationale for their adoption hackery. I've also seen teachers of Agile rant "you're doing it wrong!", which makes me wonder whether the builder blaming the tool is the right course of action here. Suffice to say, Agile + Release Management is always an amusing sociological experiment to witness. In many ways it's like watching "Survivor" + "The Amazing Race" inside the cubicle.

Design is on a pedestal until pressure builds.

As a UX person now (also studying psychology), I've come to learn that we humans are actually quite adaptable to change and new experiences. We often place ourselves in compartments and use personas as a shield, hiding behind the various matrices of behavior we assume or intend users to uphold. It's as if we assume out loud that our users will self-divide into sub-tribes that fit our mental models of usage and expectations. HCI is an ongoing science without any hint of complete understanding, but in the meantime we'll continue to evolve design in a way that hopefully proves our probabilities and internal monologues about what users like or don't like. Design is still a craft, though: at the end of the day, despite all the research, it comes down to the hand of a designer using either a mouse or a Wacom pen (most designers I know don't use a mouse). We can craft the ideas or belief system, but it's not until these folks grind the pixels out that we have a well-formed output that users can appreciate and be drawn into. Marketing also plays a role, and they'll often want more influence over what the design composition upholds; in fact, everybody wants input, because it's visual, and visual means everyone gets a say! Yet nobody volunteers input on that line of code you wrote, or that decision you made around a campaign. A designer is queen/king, that is, until he or she accidentally and naively shows the rest of the business what they made, and then you watch a positive or negative feeding frenzy take place. The feeding frenzy is often used by developers too, as now they also have a safe haven to hide behind: all they have to say out loud is "I can't do design, so I can't finish this until the designer finishes". Hiding behind that means they take no risks and never fail in the execution of an idea, and, worse, keep their efficiency returns high (i.e. why bother trying to do a design ahead of the designer when all it would mean is wasted time; time... in Agile... time... you say).

What have I really learnt?

That despite all the knowledge and experience I've acquired over the years, it's really rare that I see the business, technical and design equation balanced. Almost every company I've consulted with, worked in, contracted for or observed has managed to have an imbalance in these three areas. If the balance tips in, say, technical's favor, it usually means business and design are at a loss, and likewise for the other two. You may find one or two places where the balance stays true or looks balanced, but it's usually a false positive, and seemingly it's usually "design" that's the one bluffing (i.e. crap design experiences being palmed off as "good enough"). My theory, or rather something I'm going to devote the rest of my life to, is finding a way or rhythm to balance the three without cubicle combat. Today I'd simply say this: if all three parties aren't sharing the risk of change or failure, then that's the starting point. In all my years I've rarely seen all three take that on willingly, accepting that failure has rewards as well as losses. Giving a deadline to a developer is like yelling at a tornado to turn around; it may feel good, but you will almost certainly get creamed anyway. A designer is the user's advocate, with well-honed instincts for how people deal with vast amounts of information and cognitive load. An engineer works better in literal form than lateral, while a designer has only lateral, so a balance has to be struck (cue the form vs function wars). Lastly, a Product Manager without a two-plus-year roadmap isn't a product manager; they're just a business development suit running around pretending to be in charge of an empire whose enormous opportunity continues to go wasted.
If you haven't got a forward-thinking general, then maybe your competitor does, and that's why you seemingly keep looking at what they did for visual cues on success vs failure (at Microsoft we agonized over Apple, Google and Oracle's growth; I doubt it was a two-way process, hence the huge leads they have gained in eight years).

Creating a designer & developer workflow end to end.

When I was at Microsoft we really had one mission with the whole Silverlight and WPF platform(s): create a developer and designer workflow that keeps both parties working productively. It's a mission I look back on even to this day with annoyance, because we never came close to scratching the surface of that potential. Today the problem still hasn't been solved, even with the Adobe and Microsoft competitive battles in peace-time; if anything it has been compounded or fragmented further. Absorbing that failure, and having spent the past five years taking a defensive posture around how one injects design into a code base in the most minimal way possible, I've settled into a design pipeline that may hint at a solution (I say hint...).

The core problem.

Ever since about 2004, I've never been able to get a stable process in place that enables a designer and developer to share and communicate their intended ideas in a way that ends up in production. Sure, they end up with something in a higher-quality state, but it was never really what they originally set out to build; it was simply an end result of compromises, both technical and visual. Today the problem is still there, lingering. I can come up with a design that works on all platforms and browsers, but unless I sit inside the developer enclosure and curate my design through their Agile process in a concentrated, pixel-for-pixel way, it simply ends up slightly mutated or off target.

The symptoms.

A common issue in the process happens soon after the design, in either static or prototype form, gets handed off to the developer or delivery team. They look at the design, dissect it in their minds back to whatever code base they are working on, and start transforming it from a piece of artwork into an actual living, interactive experience. The less prescriptive I am in the design (discovery phase), the less likely I'll end up with a result that fits what I had initially imagined. Given most teams live an Agile way of life, the time or luxury of doing a "big up front" design rarely presents itself these days. Instead the ask is to be iterative, to design in chunked formations, with the hope that once I've done my part it's handed off to delivery and will come out unscathed, on time, and without regression built in. Nope. I, the designer, end up paying the tax bill on compromise; I'm usually the one sacrificing design quality in lieu of "complexity" or "time" derived excuses. I can sit here, as most UXers typically do, and wave my fist at "you don't get us UI and UX people", or argue "you need to be around the right people" all I want, but in truth this is a formula that gets repeated throughout the world. It's actually the very reason ASP.NET MVC, WPF and Silverlight exist: how do we keep the designer and developer separated in the hope they can come together more cleanly in design and development? The actual root cause of this entire issue sits right back at the tooling stage. The talent is there, the optimism is there, but when you have two sets of tooling philosophies all trying to do similar or nearly similar things, it tends to breed this area of stupidity.
If, for example, I'm in Photoshop drawing a button on canvas using a particular font, at the back of my mind I realize the chances of that font displaying on that button within a browser are lower than inside the tool, so I make compromises. If I'm using a grid setting that doesn't match the CSS framework I'm working with, well, guess what: one of us is about to have a bad day when it comes to the designer and developer convergence. If I'm using 8px padding for my Accordion panel headers in WPF and the designs outside that aren't sharing the same consistency, well, again, someone's in for a bad day.

It’s all about grids.

Obviously when you design these days a grid is used to help figure out proportion allocation(s), but unless the tooling from design through to development shares the same (or agreed) settings, you open yourself up to failure from the outset. If my grid is 32x32 and your CSS grid uses 30% columns, then when we get into the design handover someone in that discussion has to give up ground to make it work ("let's just stretch that control", or "nope, it's fixed, just align it left..." and so on). Using a grid even at the wireframing stage can tease out the right attitude, as you're all thinking in terms of proportion and sizing weights (t-shirt size everything). The wireframes should never be 1:1 pixel-ready in whatever unit of measure you choose; they are simply there to give a "sense" of what this thing could look like, but it won't hurt to at least use a similar grid pattern.
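To make the idea concrete, here's a minimal sketch of what a shared grid agreement might look like in CSS. The class names and the numbers are hypothetical; the point is that the same column count and gutter width get configured in the design tool as well.

```css
/* Hypothetical shared agreement: 12 columns, 20px gutters.
   The design tool (Photoshop grid settings, etc.) should be
   configured with the same column count and gutter width. */
.page {
  display: grid;
  grid-template-columns: repeat(12, 1fr);
  column-gap: 20px;
}

/* "The sidebar spans 4 columns" now means the same thing
   in the mock-up and in the browser. */
.sidebar {
  grid-column: span 4;
}
```

Once both sides speak in columns rather than pixels, the handover argument shifts from "stretch it" vs "fix it" to a shared unit of measure.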

T-shirt size it all.

Once you settle on a grid setting (column width, gutters and N number of columns), you then have to reduce the complexity back to simplicity in design. Creating t-shirt sizes (small, medium, large, etc.) isn't a new concept, but have you considered doing it for spacing, padding, fonts, buttons, text inputs, icons and so on? Keeping things simple and being able to say to a developer "actually, try using a medium button there when we get to that resolution" is, at the very least, a vocabulary you can all converse in and understand. Having the ability to say "well, maybe use small spacing between those two controls" is not a guessing game; it's a simple instruction that empowers the designer to make an after-design adjustment while not causing code headaches for the developer.
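As a rough illustration (the names and pixel values below are placeholders, not a recommendation), t-shirt sizes map naturally onto CSS custom properties:

```css
/* Hypothetical t-shirt scale. The shared vocabulary is the names;
   the pixel values can change later without renegotiating the language. */
:root {
  --space-small:  4px;
  --space-medium: 8px;
  --space-large:  16px;
  --font-small:   12px;
  --font-medium:  16px;
  --font-large:   24px;
}

/* "Use a medium button there" becomes a one-class swap,
   not a redesign. */
.button-medium {
  padding: var(--space-small) var(--space-medium);
  font-size: var(--font-medium);
}
```

The same trick works in WPF resource dictionaries or any other styling system: the tokens, not the raw values, are what designer and developer agree on.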

Color Palettes aren't RGB or Hex.

Simplicity in the language doesn't end with t-shirt sizing; it also has to happen with the way we use colors. Naming colors ClrPrimaryNormal, ClrPrimaryDark, ClrPrimaryDarker, ClrSecondaryNormal and so on reduces the dependency on color specifics while giving the same adjustment potential the t-shirt sizes had: "try using ClrBrandingDarker instead of ClrBrandingLight". And if the developer happens to be colorblind (as in actually colorblind), this instruction helps as well.
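In CSS terms, that naming scheme might look something like this (the hex values are placeholders; only the role-based names matter):

```css
/* Hypothetical palette named by role rather than by hue.
   "Try ClrBrandingDarker instead of ClrBrandingLight" becomes
   a one-line change, and nobody has to discuss hex codes. */
:root {
  --clr-primary-normal:   #2a6fb0;
  --clr-primary-dark:     #1d4f7e;
  --clr-primary-darker:   #123250;
  --clr-secondary-normal: #e8a33d;
}

.masthead {
  background: var(--clr-primary-dark);
  color: var(--clr-secondary-normal);
}
```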

Tools need to be the answer.

Once you sort out the typography sizing, color palette and grid settings, you're on your way to having a slight chance of coming out of this design pipeline unscathed, but the problem still hasn't been solved. All we have really done is create a "virtual" agreement on how we work and operate, but nothing reinforces this behavior, and the tools still aren't being as nice to one another as they could be. If I do a design in, say, Adobe's tools, I can upload it to their Creative Cloud quite quickly, or maybe even Dropbox if I have it embedded in my OS. My developer team, however, lives the Visual Studio way of life, so now I'm at this DMZ-style area of annoyance. On one hand I'm very keen to give the development team the assets they need, but at the same time I don't want to share my source files, much the same way they feel about their code. We need to figure out a solution here that ticks each other's boxes. Sure, I can make them come to my front door, cloud or Dropbox; that will work, I guess, but they are moving to GitHub soon, so do I install some command-line solution that lets me "push" artwork files into this developer world? There is no real bridge, and yet these two sets of tools have been the dogma of a lot of teams' lives for the better part of ten years; still no real bridge other than copying and pasting files one by one. For instance, if you were to use the aforementioned workflow and realized at the CSS end that the padding pixels won't work, how do you ensure everyone sees the latest version of the design(s)? It relies heavily on your own backwater bridge process. My point is this: for the better part of ten years I've been working hard to find a solution for this developer/designer workflow.
I've been in the trenches, I've been in the strategy meetings, and I've even been the guy evangelizing, but I'm still baffled as to how I can see this clear linear workflow while the might of Adobe, Microsoft, Google, Apple and Sun just can't seem to get past the developer-focused approach. Developers aren't ready for design, because the tools assume the developer will teach the designer how to work with them. The designer won't go to the developer tools because, simply put, they have a low tolerance for solutions that carry an overburden of cognitive load mixed with shitty experiences. Five years ago, had we made Blend an intuitive experience that built a bridge between us and Adobe, we'd probably be having a different discussion about day-to-day development. Instead we competed head-on and sent the entire developer/designer workflow backwards; to this day I still see no signs of recovery.

I think therefore I know.

When you're in a UX role you tend to have contested territory marked out around you. Everyone has an opinion on something within your charter, so you in turn have to be the guarded diplomat, constantly. I don't mind a heated exchange of ideas; when people get passionate about something they stand their ground and make sure their voice is heard clearly and loudly (often without politeness attached). In these situations, the question echoing at the back of my brain is "do they think, or do they know?" "I think" something, as opposed to "I know" something, takes on a whole different set of discussion points, because if you think something, it's just an idea or assumption. If you know something, chances are you have data points with confidence attached. This is good; this tells me straight away there are more clues to be found.

"..The only way you win an argument is if you get the other side to agree with you.."

That's what my dad would say when he and I used to get into the thick of it. It's a fairly simple statement: in the end, when you have two opposing ideas on the same problem, it comes down to either compromise or an impasse. If it's an impasse, it will probably come down to the title you hold on the day, in my case Head of User Experience. A title like mine carries some weight, meaning I can ignore your opinion and proceed onwards without it, but doing so means I need to justify my arrogance more. Being the top dog in UX land isn't an excuse to push past people's "I think" statements and supplant your own "I thinks" on top. Instead it means being more focused on establishing the "I know" statements that absorb the two opposing ideas. My way of thinking is this: when I reach a point where there isn't any data to support the opinions or ideas, it's a case of writing multiple tests to get them fact-checked and broken down until the ideas have been transformed into behavioral facts. "I think the users will not like the start menu removed, so don't touch it." Then let's remove the start menu is my immediate thought; screw the statement, what happens when we do it? I'm assuming there will be some negative blowback, but can you imagine the data we can capture once it's removed, and how the users react? The users will tell us so much: how they use the menu, where they like it, why they like it there, who they are, what they use, and so on. That one little failure in Windows 8 is a gold mine of data, and online there are discussion forums filled with topics and messages that center around "I think", but nobody really has "I know" except Microsoft. My point is this: if you're not in a role with User Experience in its title, then fine, knock yourself out with the back and forth of "I think" arguments. If you are in UX, your job is not to settle for "I think" but to hunt for "I know", for you will always be rewarded.

How UX & Ethnography mix.

Inside most organisations you'll likely see a marketing team distill their customer base into a cluster of persona(s), which in their view is a core representative of a segment of their audience in a meaningful and believable form. These persona(s) are likely to be accurate, or more a confirmation of a series of instincts that may or may not have supportive data underpinning their factoids. The issue with these personas is that they are likely representative of the past; that is to say, using them isn't really about transplanting their behaviors into the future, it's a snapshot in time of what happened when they were documented. The definition of ethnography basically distills to what I'd say is happening in the persona research space, especially when you commission design agencies to do the research. They are usually quite thorough and rarely miss a step in cataloging the data points needed to build a picture of who they are looking at and what behavioral traits the persona(s) in question are likely to have, in range or clustered form. The downside for UX people like me is that there's no real jump-off point for this type of data. For me, it's not really about whether "Max" is prone to watersports or is in the 25-35 age bracket; I have no real need for excessive metadata. The challenge is to map this series of personas back onto a timeline of graduation, both in simplicity vs complexity, and in how their confidence levels are organised, in a way that outlines the cold/hot spots within a feature's experience needs. If you were to take a feature and break it down into its intended audience, the complexity required to use it, and the overall metrics that define its success or failure, you'd likely end up with a lot of moving parts that don't offer any tangible qualitative value to help you at the very least sniff out "what just happened".
What if you instead took the marketing personas, made a guesstimate about who you're targeting and the feature's likely markers that trigger the metric, and inferred the outcome from this data? That would be called confirmation bias. There's the uppercut with persona(s): you can easily set out to build on a solid foundation of healthy data, but it's only when you transfer or map these data points onto the actual set of features and content within an experience that it starts to unravel, and threads of its truisms get caught up in a lot of inferred guesstimates. The root cause of this failure in qualitative data is simply the past being used to dictate the future; remember that at the time you interviewed and inspected your persona(s), it was based on either "what if" questions or questions that point to competitors or existing experiences already set in stone. Today and tomorrow you're not keeping those experiences locked like that; in fact you're probably looking to move the needle or innovate in a different direction, which means you have a small to large impact on user behavior, and thus the experiences can involve dramatic or not-so-dramatic change(s). The only way to test or baseline the change is continuous sampling that keeps checking and rechecking the data points in the hope that change makes itself prominent. Problem: change isn't always obvious. It can be subtle; the slightest introduction of a new variable or experience can lead to adjustments that go unnoticed. I'll cite an example in abstract form.
A respondent is asked to walk a path through a forest from A to B, counting how many "blue" objects line the path; the respondent's heart rate is also monitored (base-lined/zeroed out beforehand). Before the respondent sets off, the testers place a stick shaped roughly like a coiled snake midway along the path. The respondent proceeds on the journey, counts the blue objects, and at the end of the path gives an accounting of their findings. Their heart rate stays in line with normal physical activity, and respondents are unlikely to notice the stick. The next round of respondents is asked to do the same, only this time a seed of fear is planted in their subconscious: "oh, others noticed a snake a few hours ago along the path; be careful, and if you see it sing out. It should be gone by now, but we couldn't find it earlier, so just take note." These respondents begin the journey, notice the stick immediately, and a lot of messaging between the optics and the brain moves at lightning speed trying to decipher the pattern(s) needed to confirm "threat or non-threat". Heart rate spikes, and eventually they realize it's a stick and proceed, still keeping a very close eye, and a proximity buffer, between the stick and themselves.
The point of that story is this: by introducing a new variable (fear) into the standard test, you're able to affect the experience dramatically, to the point where you've touched on a primal instinct. In software, that "stick" moment can be anything from moving the start button on a menu to changing the way a tabular set of data has traditionally been displayed. As User Experience creators, we move the cheese a lot, and it's mostly about controlling change in our users' behavior (for the greater good). Persona(s) don't measure that change; all they measure is what happened before you made the change. All you can do is create markers in the experience that help you map your initial persona baseline to the new one, in the hope it provides a bounty of data in which "change" is made obvious. It doesn't... sadly... it just doesn't, and so all we can do is keep focusing on past behavioral patterns in the hope that new patterns emerge. Persona(s) aren't bad, they aren't good; they are just a representative sample of something we knew yesterday that may still be relevant today. The thing I do like about personas from marketing folks is this: they keep everyone focused on the behaviors they'd like to see re-appear tomorrow, and that, in the end, is all I ever really needed. Where do you want to head tomorrow? One last example: the NBC Olympics were streamed in 2009 to the entire US, with every sport captured and made available. At the time everyone inferred that an average viewer would spend about two minutes watching. In actuality they spent an average of twenty minutes, which sent massive ripples through the TV and movie industry in terms of the value of "online viewing". If we had asked candidates back then, both as content publishers and consumers, they'd probably have given us data they asserted to be relevant at the time.
In this instance the Silverlight team was able to serve up HD video online, for the first time, to many people, and that's what changed people's experience. Today it's abnormal to contemplate HD video streaming online as anything but an expected experience for "video"; five years ago it didn't exist. Personas then and now are dramatically different, so while change can in some parts be slow, it can easily expedite to days, months or years. I don't dislike personas; I just remain forever skeptical of the data that fuels them. But that's my job.

When a Product Manager of legacy confronts UX.

UX meets Product Management...

I've been sitting on this blog post for quite some time, trying to articulate how I see these two roles interacting and co-existing. I've been in both roles, separately and collectively, and it's a really puzzling, uncomfortable experience. The problem today is simple: UX has traction, and before we all start high-fiving each other on this victory, we still need to reconcile how this whole thing is going to work. Today UX is typically a huge challenge for product teams to invest in, especially if they are deeply entrenched in legacy solution(s) of some age (5-10 years old). The challenge isn't about providing "reasons" to go full UX, as anyone who's picked up a tablet or smartphone will get the reasons immediately. The real challenge lies in the "how": how do we move away from the old way of doing things to the new, a move often met with "excuses" or nostalgia-fuelled ranting? Moving with the times is hard enough, but even more so when the reaction to UX is to just "hire" some UX person to dig everyone out of the hole they're in, without a strategy being in place to begin with. As a UX Architect it's somewhat more of a challenge for me than it was being a Product Manager, as in this role my job is to provoke teams into taking an introspective look at how they arrived at where they are today; chances are the very people on the team haven't been there through the whole journey either. If you ask a team three simple questions, "what features do you have right now?", "what's working and what's not working?" and lastly "what do you want to do tomorrow?", well, you get some pretty awkward, frustrated looks (greenfield projects excepted, obviously).
Really, what I'm asking is "do you even know what you're doing right now?", and it's a pretty intimidating question, as it forces people to admit "well... not really...". As a Product Manager you're there to help steer a ship that usually has momentum attached, and having some upstart UX guy make you sit down and account for what you have usually isn't an appealing process, given "that's the past, let's talk about the future!" is the vibe at the time. We don't ask these status quo questions to weaken the position of the team or detour them from the future; they're there to give us all a platform to build up from. If they can answer the questions, it means they're ready to start attaching features to personas, so we then have a better understanding of who owns what feature and why, which then leads into how you can distribute the said features around various flavours of devices, desktops and so on. Product Management and UX go hand in hand, and the danger of a UX role is that it can very easily become a product manager, especially if the current status quo has no strategy for where they're going or how they arrived where they are today. In summary: a UX role has to sit down, unpack its tool kit, break the product back down to its grass roots, and force the team to focus on what they have right now and what they want to carry from the past into the future while at the same time creating the "new". It's not always attractive, but it's the reality that must be faced; otherwise you end up like Microsoft did in the past, "oooh, look, shiny object, let's go make it and screw what we have right now" (aka the Silverlight/WPF strategies to date! 🙂)

Related Posts:

The Apple-Microsoft Energizer Bunny.

In 1989 Energizer hijacked the Duracell Bunny, and for most parts of the world the iconic toy is now associated with Energizer rather than Duracell. Today I saw a Microsoftie still praising the company for its efforts in the Metro design style on Windows Phone, asserting that Apple has basically copied them and that they (Microsoft) will soon be rewarded for such greatness. The cautionary tale here for me is this. Apple are great… no… they are surgically brilliant at design; it's not just in their marketing, it embodies everything they do and beyond. Microsoft… well… aren't. They have moments, but often you can sense the hesitation in how they execute their design (as if to say they have design sugar rushes). Case in point: Windows 8. The start screen is interesting, parts of the way you design applications are different, but then… nothing. The app store was a sad existence to stare at, and the whole strategy around cultivating, nurturing and evangelising the design itself simply fell silent. My point today is simple: whether Apple is copying Microsoft or not – who cares – isn't a point for the company to celebrate. It's a pretty loud warning shot across the design bow that simply says: step up and lead, or step aside, but make a choice.


Don't be a clone, be different.

It’s been roughly a week or so now since I got my Windows Phone 8 iPhone clone – I mean, Nokia Lumia 920 (it was a joke, relax). The phone itself is quite large, and that for me isn’t an issue, except I find my thumbs don’t get as much surface coverage on either side of the phone. The battery life on the phone is nice, but the overall user experience within the phone drives me mad. The camera, for instance, was annoying: when it came time to take a photo I had forgotten I had the setting on close-up, so my shot of choice came out blurry. It took a while for me to remember that the setting had been changed, as there was no visual indication that the phone was in a particular mode – as if having an icon on display all the time was a failure Nokia wouldn’t tolerate (you failed me, Nokia). There are a lot of other settings that also drive me crazy, and I could list the positives & negatives all day (still trying to sort through my emotions on whether this phone will last or go). However, the one most crucial thing of all that I dislike about the experience is the app store clones. What I mean to say is, despite the various ups & downs that come with the actual phone – which I can live with – the one piece of this equation that stands out is just how immature and terrible the applications on offer within the Microsoft store are. It's like all the other kids (iPhone/Android) are riding dirtbikes but your parents give you a new BMX bike (Windows Phone 8) with a fake muffler attached. I’m struggling, even as I type this, to come up with some examples of great apps – the ones you cannot live without. The only application I found actually useful and fairly well designed was Skype. I found Twitter apps to be half-done, broken, prone to “an error has occurred” status messages, or the worst offender of all – the official Facebook app (which feels like it was written by a first-year programming intern).
These are really two applications that a smartphone today must own in terms of unique experience, as these, I'd argue, are probably the most frequently used outside email (would it kill the design team to use a bold font to indicate unread emails, by the way? And text messaging + threads… really… threads? What is this, a texting forum?). There is much I’d tolerate about owning this phone, but looking at my iPhone apps sitting idle and then staring at my Windows Phone, I can’t help but develop buyer’s remorse at the moment. I miss my Instagram, Twitter, Flipboard, Facebook (yes, even the iPhone Facebook app), games, XBMC remote, ANZ Bank, and the list goes on and on. There are really only two applications within the Windows Phone 8 marketplace that stand out for me – Qantas and ZARA. The Qantas app is still a bit flat, but it looks different enough to give it a pass, whilst the ZARA app (fashion) looks elegant and tastefully done – even though I have zero use for it, I can appreciate its design. My underlying point is this: I want to keep using this phone, I want to get off the iPhone crack and try new things, but if you keep rinsing & repeating the same stupid template-driven applications whilst touting “I’m being authentically digital”, then you are killing yourselves more than my experience. If this phone has a chance of success, it’s going to come down to development teams engaging a designer and throwing out the Windows Phone 8 “Design Guidelines” by Microsoft. Microsoft haven't a consistent, coherent clue as to what good design is, and have repeatedly shown they can’t lock onto the concept themselves. They rely heavily on design agencies, contractors and partners to do the majority of the actual design for solutions they “make”. There are currently 90+ designers on the Windows Phone 8 “team”, and I ask a simple question – what the f**k are you all doing? You’re not helping the community & marketplace, that’s for sure.
So please hire a designer today.


It's time to get off the shoulders of UI giants.

In my usual daily grind, I am constantly called into a variety of different projects to help out with some of the UX puzzles the teams face. All too often, it is a case of a project already underway, and the person doing the asking has this panicked look of “please help us unlock this perceived usability issue”. I take a deep breath, I think about the problem in front of me, I tap into all the years of experience I have around how one could solve this, and last but most important of all – I wait for an idea to kick in (science, experience and luck). That system proved quite beneficial for me and others I’ve worked with for years, until recently. The change occurred the day I really noticed a tablet for the first time. I’ve seen tablets for years, but recently I sat down and focused my energy on one single thought – “what if tablets replaced 100% of all PCs/laptops?” I am now obsessed with this thought, as while it is not going to come true in the next few years, it does force my skills into an area of uncharted and uncomfortable thinking. Today most user interfaces have tree controls and datagrids far more often than I am comfortable with. They also have menus that are typically driven via the mouse and not touch, as with a mouse you have more precision (perceived, at least) and with a finger you don’t (along with visual blind spots due to the hand being in the way). This is all fine if you keep the two inputs apart and design for each individually, as in the end you are solving two problems, right? Well… I do not know if that’s a fair call to make (especially given how the desktop-vs-tablet transition period could play out). I mean, why can’t you build the same UI for both? The datagrid and tree control, for example, are holding you back, but in the end, if you can build a UI for touch, why can’t that hold true for mouse? (Ergonomics and form factors aside – just shut up and work with me here on this stream of thought.)
I am thinking that we should probably start tackling the same UX issues we face when wanting to present users with a visual hierarchy and large data sets. I do not think the datagrid and tree controls ever solved this problem; rather, we declared a truce on it via their creation. Tablets, in my view, create a unique opportunity for us all to start asking more questions like “Why do you use a datagrid? Why do you use a ribbon menu? Why does Blend work on a 2D top-down design surface instead of a vanishing-point perspective? Why… why… why…” Start challenging the stuff you assume works, as I don’t recall ever seeing a whitepaper where someone outlined “we tried 115 different ways to present data and datagrids came out with the highest score”. Frankly, they suck, and they don’t really help the user as much as, say, an infographic would. Imho it’s time to get off the shoulders of our UI/UX elder giants and start doing this differently, as with tablets our canvas has been somewhat wiped clean – my fear is we’ll see datagrids/trees/ribbons making an appearance on these devices (Metro be damned, you’ll eventually revert once the Metro boredom kicks in). P.S. If you find the grammar/spelling annoying, use the Fix-It. I’ve typed this on a plane and right now motion sickness is settling in from staring at the computer – must figure out why that happens.


Windows 8 Search could be better. My theory.

Today I wanted to search for Word 2013 in Windows 8. At first I hit START+W and began typing “Word”, and of course nothing came up. Confused, I closed it down, went START+Z and found it the hard way via ALL APPS. My immediate thought was “hmm, my search experience is broken, this is stupid. I must be doing it wrong.” Sure enough, I realized after some rinse/repeat frustrations that each icon you click on represents the context in which you are searching. To me, that was far from obvious; it took someone else showing me that workflow before I realized what it was. I have made a point of not watching videos and tutorials before I use Windows, as I am keen to see how a Windows 7 user approaches Windows 8 without a crash course in the upgrades. OK, now I get how Search works, but what made me a little irritated was that I had assumed Search would act globally – that just like Bing or Google, you type in your search and are presented with results that are global in nature. Google, for instance, not only lets you narrow your search context via Web, Video, Images etc., but its initial search screen also brings results from each of those into the feed (an aggregate function). My thinking here is that Search should act globally, but to do so it has to be quite smart in its formula on Windows 8. That is to say, if you have 20+ apps installed and each has internet connectivity attached, does that mean it makes 20+ outbound internet connections per keystroke? No. My thinking is that as you type in Search, you send out a broadcast to all apps (and of course Windows 8 itself) with your current keystroke/search criteria. Then each app has a small agent, with a quite strict footprint, that it uses to begin its contextually relevant search.
The moment these agents begin the search, they show a state of “I’m finding your answers” (whatever that may look like) whilst they head off to find the said answers. Once the answers come back, each agent reports a “total results” count, letting the user know “I have something here, you can now look at me should you find me relevant”. This then invites the user to decide whether the “Twitter” app, say, may have the answers they need, and so on. The formula for search could be refined based on both frequency of use of applications (popularity stack-ranking) and chunking with timeouts – that is, you run the search one batch at a time, so that if the search has to trigger internet connections per app, you allow 30–50 at a time with a one-minute timeout. The architecture of Windows 8 right now wouldn’t allow this or scale very nicely, but there’s this small little team in Redmond called “Bing”, with a driving need to compete with another small startup called “Google” (you may have heard of both). I am sure if you grab these guys and their collective intelligence, this is a problem that could be solved in a way that shifts people towards thinking about Search differently when it comes to Windows 8. I see this problem with Microsoft now: they are not paying attention, per se, to the bigger picture. If you want to start setting the scene for the platform of the future, think beyond “compete with Apple” scenarios. Think of search as being a Windows problem, not a web-browser problem, and more to the point, if you want me to embrace the cloud in a fashion that’s elegant, start creating endpoint packages whose sole purpose is empowering developers to write their own search results for agents like Windows 8 Search and so on. If I were a developer, I’d pay for an Azure search-result service that connects inbound API calls to a data repository of my choice, which then gets used by a plethora of different solutions out there (apps, Windows 8, etc.).
This, to me, is abstracting away the psychology of the cloud, whilst at the same time giving me, a content provider, a sense of control over how my data gets prepared for searching. That’s not to say search engines cannot access this data and then reformat/index it in their own way to prevent me from hijacking the results. My underlying point is that the face of the Internet has always been this text-input box with a button next to it called “Search”. The screens around it will change as we move forward, but in reality, more and more users of the internet and computing are keen to see just those two control elements on screen first. Why make me click? You click. I gave you what I was interested in; you go find it and do not come back until you have solved it. I don’t care about your architecture limitations – solve it, patent it, sue others once you have patented it, but just give it to me. Search could be better!
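To make the broadcast-and-batch idea above concrete, here is a minimal sketch of the dispatch loop. Everything in it is hypothetical: `SearchDispatcher`, `register` and the toy agents are invented names, not any real Windows 8 or Bing API, and a real implementation would run agents in separate app processes and surface results per-app in the Search UI rather than return a dict.

```python
import concurrent.futures

class SearchDispatcher:
    """Hypothetical sketch: broadcast a query to per-app search agents,
    in popularity-ranked batches, each batch bounded by a timeout."""

    def __init__(self, batch_size=50, timeout_seconds=60):
        self.agents = {}          # app name -> search callable
        self.usage = {}           # app name -> frequency of use (popularity)
        self.batch_size = batch_size
        self.timeout = timeout_seconds

    def register(self, app_name, agent, usage_count=0):
        """An app registers a small agent that answers search queries."""
        self.agents[app_name] = agent
        self.usage[app_name] = usage_count

    def search(self, query):
        """Return {app_name: total_results}. Agents that time out or fail
        simply report nothing, so one slow app can't block the rest."""
        # Popularity stack-ranking: most-used apps get searched first.
        ranked = sorted(self.agents, key=lambda a: self.usage[a], reverse=True)
        results = {}
        # Chunking with timeouts: one batch of agents at a time.
        for start in range(0, len(ranked), self.batch_size):
            batch = ranked[start:start + self.batch_size]
            with concurrent.futures.ThreadPoolExecutor(len(batch)) as pool:
                futures = {pool.submit(self.agents[a], query): a for a in batch}
                done, _ = concurrent.futures.wait(futures, timeout=self.timeout)
                for fut in done:
                    try:
                        # Each agent reports only a "total results" count.
                        results[futures[fut]] = len(fut.result())
                    except Exception:
                        pass  # a failed agent just reports no results
        return results

# Usage: two toy agents standing in for app-local search.
dispatcher = SearchDispatcher(batch_size=2, timeout_seconds=5)
dispatcher.register("Mail",
                    lambda q: [m for m in ["Word doc", "Weekly notes"]
                               if q.lower() in m.lower()],
                    usage_count=10)
dispatcher.register("Apps",
                    lambda q: [a for a in ["Word 2013", "Excel 2013"]
                               if q.lower() in a.lower()],
                    usage_count=5)
print(dispatcher.search("word"))
```

The point of the sketch is the shape of the contract: apps expose a tiny agent, the shell owns ranking, batching and timeouts, and the user sees counts first rather than twenty apps hammering the network per keystroke.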


The Cost of Design.

It was the HTML generation that first gave mainstream hints that with a good designer in the cubicle you could have a positive effect on the business. That is to say, by opening a simple browser, navigating to an address and interacting with “forms”, you could begin taking consumerism to a new level of shop-less inputs. Fast forward to today: the cost of design has not only increased, but so has the demand placed on it in terms of automating the shop-less facade. Users – or rather consumers – are targeted across a variety of device-ridden delivery channels, and the cost of having a traditional developer-and-designer team has not been reduced.

Specialised Teams

The cubicles of tomorrow aren’t going to be housed with team members who can write and design for multiple devices at the same time. They will be broken into specialised teams, and the more businesses begin to consolidate their branding into the device markets, the more they will look to simplifying their product portfolios and brands. A team of .NET developers who write HTML and WPF clients will most likely need to include iPhone/iPad and Android/BlackBerry teams who mirror their offering. Companies will aggressively recruit and look for people who are agnostic in one or more mediums, but realistically, given the complexity involved in all the current UX platforms, it just isn’t feasible to find that many people on the market waiting for a job call.

Browser Forking

Having a specialised team isn’t restricted to proprietary solutions; it will also factor into the more traditional medium of HTML development and design. As strange as this may sound, the idea that HTML5 will bring the industry to a global position of unanimous parity in its implementation amongst all browsers is simply not correct. The only browser that I’d argue has a vested interest in remaining pure would be Google Chrome, and that’s simply because having the entire globe of online consumers still accessing HTML works to the search-engine and advertising model of Google. Having a browser fork its API and extend beyond HTML/JS works in Apple, Google and Microsoft’s favor as well. In fact, I’d argue Microsoft are banking on the ye olde embrace/extend model they’ve used in the past (with great success).

A Diverse Product Portfolio

Once companies have audited and forecast what their internal development-team models will look like for the next 2–3 fiscal years, they in turn will need to reflect on which bets to place in which markets, as dictated by their choice of development. It will depend on their choice, but even then they still need to figure out how they can leverage an iPhone more than they can an Android (or substitute your own technology bet here). Every device you target brings a variety of constraints and expectations that you need to meet prior to even beginning development. Diversity in choice will ultimately have an impact on a company’s brand and consistency model – on how it wants to broadcast its personality to its respective users. If you target an iPhone, for example, you have a pretty prescriptive UI design to leverage, so it pays to run with it and not against it, given it will reduce your time-to-market costs. The same applies with Android and especially with Windows Phone. The problem with prescribed design isn’t its ability to convey a uniform user experience to an end user; its core issue is that it reduces your ability to stand out amongst your consumers. If you spend more, you can overcome the prescribed approach, but in doing so you also need to ensure you can leap beyond the baseline of expected behavior.

Metro-style could win out

Prescribed user-interface design will in turn slowly become more and more weaponised as a way to, once again, have a single designer rule many devices. If you can invest in a smaller group of design professionals who have custodianship of a brand and the personality that comes with it, you can reduce your costs by investing less in design and more in engineering. A company may prefer the model of a centralised design team that works with 3–4 device teams as a way of offsetting the costs associated with multi-targeting. Metro-style design in turn plays a comfortable role here, as Android and Windows Phone 7 pretty much lend themselves well to this vision of the way design should be. iPhone/iPad, however, goes the opposite way; that is to say, the composition found within these devices is much more detailed and focused around theming the experience as much as it is about enabling an input-driven experience. Designs to date using Metro oppose the idea of having real-world objects embedded in the 2D design composition (fewer turning knobs, wooden textures etc.). The cost of design here is hugely decreased as a result, meaning in reality a design team need only wireframe the composition of what they want a particular screen to look like, layer in color, and ensure it adheres to some basic principles relating to consistency, minimalism and, lastly, shape-driven pictography/typography (pattern recognition 101). A metro-style solution going forward can work on all devices, and whilst it may go against the iPhone/iPad design grain, it can still sit within it, and more to the point, it would reduce the chance of inconsistency in a brand’s personality (closer to the one-design-on-all belief).

Designers did this to themselves

Companies like Microsoft, Adobe, Google and even Apple have reacted to a problem that was created by designers. The problem started the day designers went against the developer grain; that is, they forked off onto their own technology stack, one designed for them (Apple). In forking their workflow away from mainstream development, they created a workflow issue and in turn fueled companies like Microsoft to invest in a lot of tactical decisions around how to solve this problem (WPF, Silverlight etc.). The forking also gave way to a different approach, whereby companies like Adobe began to invest not just in x-platform tooling but also, for a while there, in x-platform delivery that worked with the tools designers had grown to love (Flex, Flash). Once Apple moved to an Intel chipset, this in turn gave way to what I would call the “Apple developer” generation, where over time more and more of a developer-centric foundation was being built, which a series of tools could now also target at designers (amongst other creative professionals). Design, for the better part, still remained in its own cut-off-from-the-rest area, and as more and more developer communities without a dependency on Windows began to emerge, they crossed the divide and began to work on Apple alongside their designer sisters & brothers. The developer defection to Apple created a huge amount of issues and problems within Microsoft, as they now face a massive problem around having developers target Windows while also ensuring there are the right number of designers ready to support such developers. The more applications being built on Windows, the more they sell Windows – that is the simplified formula. Google now posed a different issue, in that they have no real dog in the fight; Apple or Windows, it didn’t matter, provided you targeted HTML and helped fuel their advertising & search revenue streams of tomorrow.
I won’t go further into the various competitive back-and-forth that has gone on; suffice to say, at the heart of this entire issue around choice lies the designer. The elusive designer who often costs a lot and produces what will soon become the main differentiator in a company’s offering – user experience.

Function is no longer important, form is

As the industry reacts to the competitive changes that are ongoing – much like a teenage boy does around the time of puberty – design will start to become the focal point of such change. Designers will suddenly feel more wanted and more targeted, and will be tempted with some quite lucrative offers. Developers will also see this and will start (or have already begun) to shift career gears and look into ways of becoming a designer. Some will fail, whilst others will discover a suppressed design gene lurking within, waiting to be unleashed. The designer, however, will become the leader of the pack: so used to being the one at the back, or considered replaceable, they will now become the most sought after, as what is now replaceable is engineering (the market optimised for function instead of form for over 20 years). Herein lies the issue: the designer isn’t really equipped to decide the outcome of a generation of computing; they will always prefer to take the right amount of time to do the right job in a way that adheres to their internal principles around how the world should look.

Winter's not coming, a Fork is

A designer today hasn’t gone down in price or time to market; producing a design still takes just as much time and effort as it did last year and the year before that. It has, however, gotten a little more complicated, as the canvas no longer comes only in a 1024x768 screen (or thereabouts) – it now comes in a whole host of screen sizes and operating-system-level imposed limitations. Once the corporations that fork their design teams figure this out, styles like Metro will begin to emerge, as they can bypass the designer if placed in some fairly competent hands. In reality, the importance of user-experience principles has become weaponised, with the more specialised teams making their work public for all to borrow/steal. Don’t be surprised if, in the near future, a team that once had a designer no longer does, yet still produces solutions that look like Microsoft Metro. Google and Microsoft have begun down this path, and in parts Adobe’s tooling adheres to it as well. Having real-world objects in 2D design isn’t a bad or limiting thing; it’s an ongoing design evolution around trial-and-erroring deeper design beyond flat monochrome wireframes that have or haven’t been colored in. Don’t knock it, as the alternative isn’t as deep in its composition.
