Data. Big Data. Data-driven. These have been trending buzzwords in recent years, and as a designer I’ve heard, read, and witnessed how important data is in making design decisions. Meanwhile, in reality a lot of companies are still making decisions based on what the CEO or client likes. Based on my experience working with a couple of start-ups, as part of small design agencies, and from freelancing, “data-driven design” seemed to be an ideal concept: nice to have, but not always possible. Small teams or clients just won’t have the budget. But if I know that data is important in influencing design, then it must start somewhere, even if that means testing the waters myself.

Armed with this determination, I found that reading about data and analysis online didn’t push me forward. I still felt stuck: I can add Google Analytics to a site, for example, but what do I do with all that data? Week after week, sprint after sprint, I had a vague idea of what we should measure, but the concepts were still a bit abstract in my head.

Last week, IxDA Singapore invited Cyrille Rentier to give a talk on Data Driven UX Design. His talk was so concise that it gave me a better idea of how to start using data to iterate on or influence design decisions.

What it means to be ‘data-driven’

  • go live quickly, so you can start optimizing. The real work only starts after you go live
  • set concrete goals
  • use user-centered and hypothesis driven design and development
  • measure to create actionable insights through analysis

There are 4 steps to the optimization cycle:

  1. Know what to measure
  2. Analyze
  3. Involve users
  4. Test

1. Know what to measure

I got used to designing like I was running a marathon. It was a habit formed in order to ship fast and fail fast. But it came at the cost of thinking about analysis only at the end of the design process, when it should have been the other way around. Before starting, I should know:

  • what my goals are
  • when my goals are successful
  • how to measure
  • my segmented target

KPIs within a team may be ambiguous, especially since KPIs can vary across companies and projects. How would you define a KPI for UX, for example?

2. Analyze your data

The second step, after figuring out what to measure, is to gather and analyze data. This includes:

  • implementing tools to collect data
  • automating as much reporting as possible (or having someone else automate it for me)

An example is using funnel reports to figure out where users drop off, and then optimize those drop-off points.
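The talk didn’t prescribe any tooling for this, but the funnel idea is easy to sketch. Here is a minimal Python example with hypothetical step counts (the page names and numbers are mine, purely for illustration): it computes the drop-off rate between each pair of consecutive funnel steps, which is exactly the number you’d look at to decide where to optimize.

```python
# Hypothetical funnel counts per step (illustrative numbers, not from the talk)
funnel = [
    ("Landing page", 10000),
    ("Sign-up form", 4200),
    ("Form submitted", 2100),
    ("First purchase", 630),
]

# Drop-off rate between consecutive steps shows where users abandon the flow
for (step, users), (next_step, next_users) in zip(funnel, funnel[1:]):
    drop_off = 1 - next_users / users
    print(f"{step} -> {next_step}: {drop_off:.0%} drop off")
```

In practice a tool like Google Analytics produces these counts for you; the point is that the biggest percentage drop, not the biggest raw number, tells you which step to redesign first.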

3. Involve Users

As a designer, it’s easy to forget that I’m not the target user. I’m not designing the app for myself. Easy to remember in theory, but not as easy to practice.

  • it’s best to observe users, because they are not designers
  • 70% of projects fail due to poor user acceptance
  • testing with 5 users will uncover 75% of usability problems
  • write tasks and questions, focusing on the areas identified in the funnel analytics
  • keep it small
  • quantify the results
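“Quantify the results” can be as simple as tallying task completion per participant. As a sketch, with made-up tasks and outcomes from a hypothetical 5-user session (none of this data is from the talk):

```python
# Hypothetical 5-user test results: True = participant completed the task
results = {
    "Find pricing page": [True, True, False, True, True],
    "Complete checkout": [True, False, False, True, False],
}

# A per-task success rate turns observations into a comparable number
for task, outcomes in results.items():
    success_rate = sum(outcomes) / len(outcomes)
    print(f"{task}: {success_rate:.0%} success rate")
```

Even with only 5 users, a 40% success rate on checkout versus 80% on finding pricing points clearly at which funnel area needs rework.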

I was talking to one of the attendees after the talk, and he shared that he does guerrilla testing to validate user needs or to run quick interviews, while doing in-house testing for the actual user tests (due to NDAs with the client).

Testing internally may not be best practice either, but it can still be useful for finding usability problems.

By involving users, you should know:

  • what the goals are and how to measure
  • where things go wrong
  • why things go wrong

4. Test it

If you’re designing to optimize, the last step is to test the design.

  • choose a method to validate, e.g. weekly A/B testing
  • see if the rework has any effect
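The talk didn’t go into the statistics, but one common way to check whether a rework “has any effect” in an A/B test is a two-proportion z-test on conversion rates. Here is a minimal sketch using only the standard library; the function name and the weekly numbers are my own hypothetical example, not from the talk:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test: did variant B's conversion rate differ from A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that A and B convert equally
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical weekly numbers: control checkout vs. redesigned checkout
z, p = two_proportion_z_test(conv_a=120, n_a=2400, conv_b=165, n_b=2400)
print(f"z = {z:.2f}, p = {p:.4f}")
```

If p falls below the usual 0.05 threshold, the rework likely had a real effect; if not, the hypothesis failed and the cycle continues.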

Hypothesis-driven design combines the user story (as a ROLE, I want FEATURE, so I can BUSINESS GOAL) with a hypothesis (as a DESIGN TEAM, we believe X will result in Y; we know we have succeeded when Z). Was the hypothesis proven? If it failed, the cycle continues with the next iteration or rework.

To summarize

Data-driven design is a cycle: define the KPI -> analyze data -> involve users -> conversion optimization

  1. Decide on what to measure
  2. Make it measurable
  3. Research and analyze
  4. Create optimizations
  5. Validate the solutions