The Xamarin & Microsoft merger may yet prove useful to designers.

The .NET community has been fractured for quite some time when it comes to mobile development, and a large amount of hate debt has been banked as a result. Products like Xamarin have earned their adoption because they have a more agnostic vision of how .NET could work in a truly x-platform / x-device arena.

However, the approach to date hasn’t been an easy stroll down success lane; to develop a mobile app even with Xamarin you’re faced with a choice right from the start: Xamarin “native” or Xamarin “Forms”, each with its own set of pros and cons from a purely “developer-centric” perspective.

The next decision after that is how you design for three platforms (*maybe two*) and still retain constancy – yes, I said constancy, not consistency. On one hand, designing apps for iPhone is different from designing them for Android – but only up to a specific context (as tradeoffs and split thinking naturally occur).

In order to achieve this, you essentially have to begin the same set of compromises you would make with the web, forking your feature design/development vision to accommodate and absorb the various limitations imposed by each platform along with the restraints Xamarin imposes on top (i.e. there’s an element of decay implied).

To compound issues further, you then have Xamarin not really adhering to the previous iterations of XAML (aka Avalon), and whilst it looks kind of like XAML, it’s really in many ways just XML with limitations (i.e. you can’t really animate with it using the same Storyboard composition you once had with Silverlight/WPF and so on). Xamarin’s XAML looks like the panacea we want but isn’t the same thing.
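
As a rough illustration of that gap, here’s the kind of fade you’d declare as a Storyboard in WPF XAML having to move into code-behind under Xamarin.Forms – a minimal sketch, with the page and control names being my own assumptions rather than anything from a real project:

```csharp
using System;
using Xamarin.Forms;

public partial class SavePage : ContentPage
{
    // In a real app this would be an x:Name pulled from the page's XAML.
    readonly Button saveButton = new Button { Text = "Save" };

    async void OnSaveClicked(object sender, EventArgs e)
    {
        // WPF/Silverlight would express this as a <Storyboard> with a
        // DoubleAnimation on Opacity, entirely in XAML; Xamarin.Forms
        // pushes the same intent into code via the ViewExtensions helpers.
        await saveButton.FadeTo(0.3, 250);
        await saveButton.FadeTo(1.0, 250);
    }
}
```

It works, but the designer who used to own that Storyboard in Blend now has to file a request with a developer instead.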

Now you have to programmatically design your composition with either a designer’s comps on your second monitor as a guide or, worse, the designer over your shoulder offering feedback-loop hell.

Xamarin failed, so abandon it?

Hell no. Xamarin has all the ingredients one would need to really get the .NET x-platform / x-device story going; in fact, I’m more frustrated at the execution since the platform shipped than at the original foundation itself. The secondary parts above can easily be fixed provided there’s some stronger thinking about how “creative influence” applies to the composition of design – that is to say, at what point does the designer get free control over composition without haggling with a developer over limitations artificially imposed due to what I can only guess are resource-allocation issues on Xamarin’s part.

This, in turn, means that one would need to approach the composition of a Xamarin vNext with the intent of using the XAML/C# marriage the way the .NET gods intended. That is to say, if you took the same conceptual develop/design pipeline that .appx or .xap has today and applied it to mobile development, you’d unite the developer & designer workflow under the one constancy-based banner, which in return reduces feature editing / design cut-aways.
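
For what I mean by that marriage, a minimal sketch (all names assumed): the designer owns the XAML view, the developer owns a view model like the one below, and data binding is the only contract between the two.

```csharp
using System.ComponentModel;

// The developer's half of the contract; the designer's XAML binds to
// CustomerName with {Binding CustomerName} and never opens this file.
public class InvoiceViewModel : INotifyPropertyChanged
{
    string _customerName;

    public string CustomerName
    {
        get => _customerName;
        set
        {
            _customerName = value;
            PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(CustomerName)));
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;
}
```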

Why is this important?

In 2007, we were faced with a mission to get designers more engaged with developers, and that’s why Silverlight/WPF was born. We had a small amount of success, but in truth we were side-tracked by conflicting priorities and poor management to really dig in on that same set of problems. Today the various technical platforms have shifted, but the core fundamental issue hasn’t gone away; in fact, the conversation about how the two worlds collide has only gotten smarter – sadly, Microsoft has never really gotten an invite to that discussion due to its retreat positioning.

Microsoft’s answer, in general, has been to remove the designer from the equation given its complexity; instead, they gave developers a cookie-cutter template titled “metro/modern UI design” (aka paint-by-numbers developer art), thinking that if you reduce the composition of design to basic minimal aesthetics, you in turn reduce the burden or need to have a designer influence the creative process.

That strategy has been an utter failure, and I’d promote the theory that the reason Windows Phone has failed as a product is solely down to the UI (the phone hardware is perfect, the development SDK is the easiest by far, but the design integration… too boring, too hard).

The Xamarin merger with Microsoft now has the potential to reboot the company’s mobile strategy in a way it needs more than ever before. However, if the two worlds continue to double down solely on “developers, developers, developers” without factoring in “designers, designers, designers”, all we will really have achieved is a licensing-model reduction, better Visual Studio support and a stronger echo chamber – but still a designer stalemate, resulting in continued “developer-only” circle jerk sessions.


Creating a designer & developer workflow end to end.


When I was at Microsoft we really had one mission with the whole Silverlight & WPF platform(s) – create a developer & designer workflow that keeps both parties working productively. It’s a mission that I look back on even to this day with annoyance, because we never even came close to scratching the surface of that potential. Today, even with the Adobe and Microsoft competitive battles in peace-time, the problem still hasn’t actually been solved – if anything it has become slightly more compounded and fragmented.

Absorbing that failure and having to take a defensive posture over the past five years around how one manages to inject design into a code-base in the most minimal way possible, I’ve sort of settled into a pipeline of design that may hint at the potential of a solution (I say hint…).

The core problem.

Ever since, I think, 2004, I’ve never been able to get a stable process in place that enables a designer and developer to share & communicate their intended ideas in a way that ends up in production. Sure, they end up with something in a higher-quality state, but it was never really what they originally set out to build; it was simply an end result of compromises, both technical and visual.

Today, it’s kind of still there, lingering. I can come up with a design that works on all platforms and browsers, but unless I sit inside the developer enclosure and curate my design through their agile process in a concentrated, pixel-for-pixel way, it simply ends up getting slightly mutated or off target.

The symptoms.

A common issue in the process happens soon after the design, in either static or prototype form, gets handed off to the developer or delivery team. They look at the design, dissect it in their minds back to whatever code base they are working on and start to iterate on transforming it from a piece of artwork into an actual living, interactive experience.

The less prescriptive I am in the design (discovery phase), the less likely I’ll end up with a result that fits the way I had initially imagined it to begin with. Given most teams are also in an Agile way of life, the idea that I have the time or luxury of doing a “big up front” design rarely presents itself these days. Instead the ask is to be iterative and to design in chunking formations, with the hope that once I’ve done my part it’s handed off to delivery and will come out unscathed, on time and without regression built in.

Nope. I end up being the designer paying the tax bill on compromise; I’m the guy usually sacrificing design quality in lieu of “complexity”- or “time”-derived excuses.

I can sit here, as most UXers typically do, and wave my fist at “You don’t get us UI and UX people” or argue about “You need to be around the right people” all I want, but in truth this is a formula that gets repeated throughout the world. It’s actually the very reason why ASP.NET MVC, WPF and Silverlight exist – how do we keep the designer and developer separated in the hope they can come together more cleanly in design & development?

The actual root cause of this entire issue is right back at the tooling stage. The talent is there, the optimism is there, but when you have two sets of tooling philosophies all trying to do similar or close to similar things, it tends to breed this area of stupidity. If, for example, I’m in Photoshop drawing a button on canvas and using a font to do so, well, at the back of my mind I realise that the chances of that font displaying on that button within a browser are lower than inside the tool – so I make compromises.

If I’m using a grid setting that doesn’t match the CSS framework I’m working with, well, guess what: one of us is about to have a bad day when it comes to the designer & developer convergence.

If I’m using 8px padding for my Accordion Panel headers in WPF and the designs outside that aren’t sharing the same constancy – well, again, someone’s in for a bad day.

It’s all about grids.

Obviously, when you design these days a grid is used to help figure out portion allocation(s), but the thing is, unless the tooling from design to development all shares the same or agreed settings, you open yourself up to failure from the outset. If my grid is 32×32 and your CSS grid uses 30% and we get into the design handover, well, someone in that discussion has to give up some ground to make it work (“let’s just stretch that control” or “nope, it’s fixed, just align it left…” etc. start to arise).

Using a grid even at the wireframing stage can tease out the right attitude, as you’re all thinking in terms of portion and sizing weights (t-shirt size everything). The wireframes should never be 1:1 pixel ready, or whatever unit of measure you choose; they are simply there to give a “sense” of what this thing could look like, but it won’t hurt to at least use a similar grid pattern.
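
What “agreed settings” could look like in practice is nothing fancier than one shared definition that both the design template and the code base point at – a sketch, with every number here being an assumption for illustration rather than a recommendation:

```csharp
// One shared source of truth for the grid; the Photoshop/Sketch template and
// the layout code both derive from these numbers instead of guessing.
public static class GridSettings
{
    public const int Columns = 12;          // columns in the layout grid
    public const double ColumnWidth = 64;   // device-independent units
    public const double Gutter = 16;        // space between columns
    public const double BaseUnit = 8;       // all spacing snaps to multiples of this
}
```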

T-shirt size it all.

Once you settle on a grid setting (column width, gutters and N number of columns) you then have to really reduce the complexity back to simplicity in design. Creating t-shirt sizes (small, medium, large etc.) isn’t a new concept, but have you even considered making that happen for spacing, padding, fonts, buttons, text inputs, icons and so on?

Keeping things simple and being able to say to a developer “Actually, try using a medium button there when we get to that resolution” is, at the very least, a vocabulary that you can all converse in and understand. Having the ability to say “well, maybe use small spacing between those two controls” is not a guessing game; it’s a simple instruction that empowers the designer to make an after-design adjustment whilst not causing code headaches for the developer.
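
As a sketch of that shared vocabulary living in code (values assumed, picked only to line up with the 8-unit base from the grid example above):

```csharp
// T-shirt sizes as named constants: "medium spacing" means the same thing
// in the comps, in the conversation and in the code.
public static class Spacing
{
    public const double Small = 8;
    public const double Medium = 16;
    public const double Large = 32;
}

public static class FontSizes
{
    public const double Small = 12;
    public const double Medium = 16;
    public const double Large = 24;
}
```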

Color Palettes aren’t RGB or Hex.

Simplicity in the language doesn’t end with t-shirt sizing; it also has to happen with the way we use colors. Naming colors ClrPrimaryNormal, ClrPrimaryDark, ClrPrimaryDarker, ClrSecondaryNormal etc. helps reduce the dependency on getting bogged down in color specifics whilst giving the same adjustment potential as the t-shirt sizes – “try using ClrBrandingDarker instead of ClrBrandingLight”. If the developer is also color blind – as in, no, they are actually colorblind – this instruction helps as well.
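
A sketch of the same idea in code – the role names come from above, the values are placeholders of mine, and the one rule is that raw RGB only ever lives in this one file:

```csharp
using System.Windows.Media;   // WPF; Xamarin.Forms has its own Color type

// Conversations happen in role names; the actual values live in exactly one place.
public static class Palette
{
    public static readonly Color ClrPrimaryNormal   = Color.FromRgb(0x2D, 0x6C, 0xDF);
    public static readonly Color ClrPrimaryDark     = Color.FromRgb(0x1E, 0x4D, 0xA3);
    public static readonly Color ClrPrimaryDarker   = Color.FromRgb(0x12, 0x30, 0x6B);
    public static readonly Color ClrSecondaryNormal = Color.FromRgb(0xF4, 0xA6, 0x23);
}
```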

Tools need to be the answer.

Once you sort out the typography sizing, color palette and grid settings, you’re on your way to having a slight chance of coming out of this design pipeline unscathed, but the problem still hasn’t been solved. All we have really done is create a “virtual” agreement about how we work and operate, but nothing really reinforces this behavior, and the tools still aren’t being as nice with one another as they could be.

If I do a design in, say, Adobe tools, I can upload it to their Creative Cloud quite quickly, or maybe even Dropbox if I have it embedded into my OS. However, my developer team lives in Visual Studio’s way of life, so now I’m at this DMZ-style area of annoyance. On one hand I’m very keen to give the development team the assets they need, but at the same time I don’t want to share my source files, in much the same way they feel about their code. We need to figure out a solution here that ticks each other’s boxes; sure, I can make them come to my front door – cloud or Dropbox. That will work, I guess, but they are using GitHub soon, so do I install some command-line terminal solution that lets me “push” artwork files into this developer world?

There is no real “bridge”, and yet these two sets of tools have been the dogma of a lot of teams’ lives for the better part of 10 years – still no real bridge other than copying & pasting files one by one.

For instance, if you were to use the aforementioned workflow and you realize at the CSS end that the padding pixels won’t work, then how do you ensure everyone sees the latest version of the design(s)? It relies heavily on your own backwater bridge process.

My point is this – for the better part of 10 years I’ve been working hard to find a solution for this developer / designer workflow. I’ve been in the trenches, I’ve been in the strategy meetings and I’ve even been the guy evangelizing, but I’m still baffled as to how I can see this clear linear workflow while the might of Adobe, Microsoft, Google, Apple and Sun just can’t seem to get past the developer-focused approach.

Developers aren’t ready for design because the tools assume the developer will teach the designer how to work with them. The designer won’t go to the developer tools because, simply put, they have a low tolerance for solutions that carry an overburden of cognitive load mixed with shitty experiences.

Five years ago, had we made Blend an intuitive experience that built a bridge between us and Adobe, we’d probably be having a different discussion about day-to-day development. Instead we competed head-on and sent the entire developer/designer workflow backwards; to this day I still see no signs of recovery.


Being Playful with Industrial Software

I’ve been sitting in the enterprise space as a UX mercenary for probably around 5+ years. In every team, sales meeting and brainstorming session I’ve encountered resistance around “maturity” in terms of design: the more money being spent on the software, the more “serious” the design should be. This line of thinking, I think, typically comes from the concern that if the design is not serious, the trust around its ability to do the various task(s) will be eroded.

The thing is, the more sales meetings I’ve been in or helped prepare for, the more I’ve come to the conclusion that “design” isn’t even a bullet point in the overall sales pipeline. Sure, the design makes an appearance at the brochure / demo level, but overall nobody really sits down and discusses how the software should look or feel during the process. Furthermore, the client(s) typically have invited the sales team(s) onto the selection panel(s) based on their existing brand, known or rumoured capabilities and/or because they are legally required to.

To my way of thinking, being “playful” with your design is a very unnerving discussion to have in such a scenario. The moment you say the word “playful”, most people respond with some word association, positive or negative (usually negative), as the word may take you back to your childhood (playing with Lego or dolls… I didn’t play with dolls… they were called G.I. JOES!). It’s that hint of immaturity in the word that makes it more appealing to me, as it forces you to think about maturity within the constraints of immaturity (cognitive dissonance).

Playful, however, doesn’t have to be immature; there are very subtle ways to invoke the feeling of something playful without being obvious about it. For example, Google+ and most of Google’s new branding is what I’d consider “playful”, but at the same time the product(s) or body of work that goes into their solutions is quite serious.

Playful Mood Board

Why be playful? My working theory is that the reason users find software “unusable” has to do with confidence and incentive. If these two entities don’t fuel their usage furnace, the overall behaviour around usage decays; that is, they begin to taper off and reduce it to an absolute “use at minimum” behavioural pattern. This theory is what I would class as being at the heart of invoking “emotion” or “feeling” in how software is made, and it’s often why a lot of UX practitioners preach that these two should be taken quite seriously in the design process.

The art of being playful in a way regresses adults back to their childhood, where they were encouraged to draw, build and decorate inanimate object(s) without consequences attached. As an early teenage child, you were encouraged to fail; you were given a blank piece of paper and asked to express your ideas without being reprimanded. You, in short, “designed” without the fear of getting it wrong or, for that matter, right (although right was usually rewarded with your art piece being put on the fridge at home or something along those lines). A playful design composition can be both serious and inviting, as a good design will make you feel as if you’re “home” again. A great design will make that temporary break away into other software and back again an obvious confidence switch – as if you’re saying out loud, “gah! that was a horrible experience, but I’m back to this app… man… it feels good to be home, and why can’t other software be like this?”



What if you had to build software for undiagnosed adults that suffer from autism, Asperger’s, dyslexia, ADHD etc.

Software today is definitely getting more and more user-experience focused; that is to say, we are preoccupied with getting minimal designs into the hands of adults for task-driven operations. We pride ourselves on knowing how the human mind works through various blog-based theories on how one is to design the user interface and what the likely initial reaction of the adult behind the computer will be.

The number of conferences I’ve personally attended that have a person or person(s) on stage touting the latest and greatest in cognitive-science buzzword bingo followed by best practices in software design is, well, too many.

On a personal level, my son has a rare chromosome disorder called Trisomy 8. It’s quite an unexplored condition, and I’ve pretty much spent the last eight years interacting with medical professionals who touch on not just the psychology of humans but zero in on the way in which our brains form over time.

In the last eight years of my research I have learnt quite a lot about how the human mind works, specifically how we react to information and, more importantly, our ability to cope with change – which isn’t just about our environments but also plays a role in software.

I’ve personally read research papers that explore the impacts of society’s current structure on future generations and, more importantly, how our macro and micro environments play a role in the children of tomorrow coping with change and learning at the same time – that is to say, we adults cope with the emerging technology advancements because for us “it’s about time”, but for today’s child of 5-8 this is a huge problem of having to manifest coping skills to deal with a fluid technology adoption that often doesn’t make sense.

Yesterday we didn’t have NUI; today we do. Icons with a 3.5” floppy disc that represent “saving” have no meaning to my son, and so on.

The list goes on as to just how rapidly we are changing our environments and, more importantly, how adults who haven’t formulated the necessary social skills to realistically control the way in which our children are parented often rely on technology as, at times, the de facto teacher or leader (the amount of insight I’ve read on how the Xbox 360 has become the babysitter in households is scary).

Getting back to the topic at hand: what if the people you are designing software for have an undiagnosed mental illness – or, better yet, a diagnosed one? How would you design tomorrow’s user interface to cope with this dramatic new piece of evidence? To you the minimal design works; it seems fresh and clear and has definitive boundaries established.

To an adult suffering from Type-6 ADHD (if you believe in it) who has a degree of over-focus, it’s not enough; in fact it could have the opposite effect of what you are trying to do in your design composition.

Autism also has a role: grid formation in design would obviously appeal to autistic traits, given it’s a pattern that they can lock onto and often agree with – Asperger’s sufferers may disagree with it, and it could annoy or irritate them in some way (colour choice, too much movement, blah blah).

Who is to say your designs work? If you ask people on the street a series of questions and observe their reactions, you are not really gaining an insight into how the human mind reacts to computer interaction. You’ve automatically failed as a clinical trial, as the person on the street isn’t just a “normal adult”; there’s a whole pedigree of historical information you’re not factoring into the study that is relevant.

At the end of the day, the real heart of HCI here – and this is my working theory – is that we formulate our expectations around software design from our own personal development. That is to say, if we as children had a normal or above-average IQ, the ability to understand patterns and the ability to cope with change, we in turn are more likely to adapt to both “well designed” and “poorly designed” software.

That is to say, when you sit down and think about an adult and how they react to tomorrow’s software, one has to really think about the journey the adult has taken to arrive at this point and, more to the point, how easily they are influenced.

A child who came from a broken home – parents gone, raised by other adults – and who is now a receptionist within a company is more likely to have absolutely no confidence around making decisions. That person is now an easy mark for someone who has the opposite background and can easily sway this person to adoption and change.

Put those two people into a clinical trial around how the next piece of software you are going to roll out for a company works, run them through the various tests, and watch what happens.

Most tests in UX / HCI focus on the ability of the candidate to make their way through the digital maze in order to get the cheese (basic principles around reward / recognition), so to be fair it’s really about how the human mind can navigate a series of patterns to arrive at a result (positive / negative) and, furthermore, how the said humans can recall that information at a later date (memory recall meets muscle memory).

This style of result will, I guess, tell you the amount of friction associated with your change and give you a score / credit around the impact it will likely have, but in reality what you probably need to do as an industry is zero in on how aggressively you can decrease the friction levels associated with change before the person arrives at the console.

How do you give Jenny the receptionist, who came from an abusive childhood, enough confidence to tackle a product like Autodesk Maya (which is hugely complex)? If you were to sit down with Jenny, you’d learn that she also has a creative component to her that’s not obvious to all – it’s her way of medicating the abuse through design.

How do you get Jack the stockbroker, who spends most of his time jacked on speed/coke and caffeine, to focus long enough to read the information in front of him through data-visualisation metaphors / methodologies, when the decisions he makes could impact the future of the global financial system(s) worldwide? (OK, a bit extreme, but you get my point.)

It’s amazing how much we as a software industry separate normal from abnormal when it comes to software design. It is also amazing how we look at our personas in the design process and attach the basic “this is Mike, he likes Facebook” fluffy profiling. What you may find even more interesting is that Mike may like Facebook, but in his downtime he likes to put fireworks in cats’ butts and set them on fire because he has this weird fascination with making things suffer – which is probably why Mike now runs ORACLE’s user experience program.

The persona, in my view, is simply the design team having a massive dose of confirmation bias. When you sit down and read research paper after research paper on how a child sees the world – and how that later helps define him/her as an adult – well… my designs take on a completely new shift in thinking in how I approach them.

My son has been tested numerous times and has been given an IQ of around 135, which, depending on how you look at it, puts him at around genius level. The problem, though, is my son can’t focus or pay full attention to things and relies heavily on patterns to speed through tasks, but at the same time he’s not aware of how he did it.

Imagine designing software for him. I do, daily, as I help him figure out life, and it has taught me so much in the process.

Metro UI vs Apple iOS 5… pft, don’t get me started on this subject, as both offer amazing insights into pros and cons.


The 6 things that annoy me when you design my software.

1. Stop making bottleneck software


Technically you could write most software today as one big mega-class with loads of switch / if-else statements. If you did that, not only would every other developer you come across immediately punch you in the nose, but it would also become hard to maintain over time.

We agree that would be stupid, right? I mean, one large file for all code! Yet why do I always see software designed in such a way that it becomes the Swiss army knife of all tasks associated with the user, in that it becomes feature-heavy based around feeble arguments of "but the user wanted…"?

The user is 80% of the time a jackass.

You are armed with a plethora of programming models today; stop crowding the user interface (thereby creating UX bottlenecks) with every single role known to man. Figure out the "personas" attached to your software and, if need be, make smaller, contextually relevant versions of the software per persona (whether that be modular or separate, specific installations).
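
A hedged sketch of what "modular per persona" could look like in code – every interface and name here is mine, invented purely for illustration:

```csharp
using System;
using System.Collections.Generic;

// Each persona gets its own slim module instead of one UI crammed
// with every role's features.
public interface IPersonaModule
{
    string Persona { get; }   // e.g. "Scheduler", "Approver"
    void ComposeUi();         // register only the screens this persona needs
}

public class SchedulerModule : IPersonaModule
{
    public string Persona => "Scheduler";
    public void ComposeUi() => Console.WriteLine("Loading scheduling screens only.");
}

public static class Shell
{
    // Compose the UI from whichever modules match the signed-in persona,
    // rather than shipping one bottleneck interface to everybody.
    public static void Compose(string persona, IEnumerable<IPersonaModule> modules)
    {
        foreach (var module in modules)
            if (module.Persona == persona)
                module.ComposeUi();
    }
}
```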

2. Third Party Controls do not negate the need for a designer.


When I first left Microsoft and joined the working class (mwahaha), I was often thrown into the deep end of projects that needed some UX makeovers. Given I have both a programming and design background it seemed a natural fit, so sure, go with the flow, I say. I’d walk into a typical gig and, sure enough, I’d see 3rd-party controls lurking about (i.e. Telerik, ComponentOne etc.).

Nothing against these brands, but if you are dealing with WPF or Silverlight then let me give you a heads-up on why this is a bad idea. Firstly, the 3rd-party controls are just a quick, dirty fix to get around bad UI design – I get it, budgets are non-existent, so you do the best you can. Secondly, these controls are made for multiple developers around the world, so there are many keys to turn on and off for them to snap together – which means your controls are not on a diet. Thirdly, you need to walk a mile in the shoes of, say, a C++ programmer, or some language that used to have to play a game of memory Tetris, to really grasp the concept of the second point.

Diet is the keyword. If you are dealing in the Silverlight space, the leaner and smaller the footprint your code has, the snappier things are going to get. I am not talking about pure, no-holds-barred CPU processing time; I am talking about rendering-pipeline time. I am yet to see an example of 3rd-party controls improving performance rather than subtracting from it.

Stop outsourcing design to third-party controls – and I am looking at you, graph boy/girl.

3. Every screen has a soul.


In the UI principles space there’s this little concept called false affordance. It means something that looks like it was supposed to do xyz but does nothing (i.e. "push the button and all negative energy will disappear" scams).

If you have some software that has a hierarchy of navigational elements, and you click on the first node and it does nothing but expand to the second node while showing a view with some "weak" summary (i.e. description etc.) – stop, you are doing it wrong.

Every click has a purpose of existence. If you have a dashboard, what is its purpose? Think about its relevance in the grand scheme of things. Should it be fresh content daily/weekly/monthly? Is a holding-pattern screen necessary? You know the one – the screen that merely buffers between two major waypoints, the one screen in the app that really has no purpose other than to get you from A to C, yet somehow you felt the need to keep B in place.

If you have a screen that is filled with, say, two input controls and that is it – that is a freakin’ dialog box, it is not a screen. Stop being lazy and think about the problem, not how easy it is for you to just whack up an app. It’s not about you, it is about them *points to the end user*.

4. You are not a magician so quit giving me the constant "surprise" moments.


Ever used an application where you click on something random inside a screen and suddenly a piece of user interface appears somewhere else on the screen? Maybe hidden inside a secondary tree node somewhere?

This is not a magic show and you are not a magician. Progressive disclosure is great when done in a way that leads the user on a journey – no more "I’ve just modified the screen; if you guess what I just did you get a fluffy kitten" moments.

5. Humans are smarter than you think


I have covered this quite a lot, but let me reiterate in the theme of this post. Over 90% of the world’s computer population right now has some overly complicated, unnecessary piece of crap software installed on their hard drive whose inner workings they have somehow managed to partially figure out.

The benchmark for success right now in this space is so low you could trip over it and still succeed. My point is the end users are actually smarter than you give them credit for. If you are in a team and someone says, "Yeah, our users aren’t smart enough to…", challenge that jackass upfront. Did he conduct a survey where one in five housewives came back dumber than he anticipated?

If an average worker-bee can sit through SAP ERP or any piece of software that Oracle/Microsoft throws at them, they can sit through your software as well.

The trick is to make it enjoyable for them, to be the software that does not feel like the others – the standout. Rather than holding them hostage to complexity because of your own arrogance, try to think less about the complexity levels and more about the enjoyment levels. Software should be enjoyable, as we work WITH software – we do not USE it.

6. I did not buy a cat so it could be my master.


My kids wanted a kitten, and so, me being the "fun" dad, I bought one. Today that cat rules the house most of the time because we react to it, not the other way around.

Something similar often happens in software. We buy software thinking it will save us time and money, as it will improve the master/slave relationship in our daily lives. Instead, we become more enslaved in its processes.

An example. Today I went to my bank ANZ (which I am ditching – F*K you ANZ). I said, "I’d like a copy of my home mortgage statement to give to your competitor so I can leave your dumbasses – i.e. YOU ARE FIRED"

I watched the teller pound away at a keyboard for like five minutes before she arrived at some point where she then needed her co-worker to give her instructions on generating a printable report.

I am sitting there thinking the following things:

  • Why are you typing so much?
  • Why can’t I do this online myself? You give me access to every other account function, yet why not this?
  • Why am I giving you everything but a DNA sample to authenticate I am who I say I am still to this day?

My point here is that aside from a crappy online service from ANZ Bank, the teller herself should have a simple input control that has a button next to it. Inside that input control, she types, "Print <AccountNumberXYZ> Mortgage Statement as of Today"

The input box then does the following:

  • Looks up my account number and verifies it is still active.
  • Takes the verb Print to mean "fetch", the words Mortgage Statement as what should be fetched, and the word "Today" as Now(), then spits out a piece of paper with that information. In other words, “PrintMortgageStatementWorkflow(custId, date);”
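
A rough sketch of what that input box could dispatch to – the parsing is deliberately naive, and every name here (PrintMortgageStatementWorkflow included) is borrowed from or invented around the example above, not any real banking system:

```csharp
using System;

public static class TellerCommandBox
{
    public static void Execute(string command)
    {
        // Example input: "Print <AccountNumberXYZ> Mortgage Statement as of Today"
        var parts = command.Split(' ');
        string verb = parts[0];                    // "Print" -> fetch
        string custId = parts[1].Trim('<', '>');   // the account number
        bool wantsMortgageStatement = command.Contains("Mortgage Statement");
        DateTime date = command.EndsWith("Today") ? DateTime.Now : DateTime.MinValue;

        if (verb == "Print" && wantsMortgageStatement)
            PrintMortgageStatementWorkflow(custId, date);
    }

    static void PrintMortgageStatementWorkflow(string custId, DateTime date) =>
        Console.WriteLine($"Printing mortgage statement for {custId} as of {date:d}.");
}
```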

I think I’ve made my point(s): why are we jumping through hoops to make the software do the work, when it feels like we are a separate background thread in the software’s world?
