As we specialise in user research and user experience, we’ve always done a lot of work with other agencies and in-house teams. Occasionally this calls for a full-scale redesign of a website or application, but more often it involves making (and testing) incremental changes over time.
The rise of user experience
Over the last 12 months we’ve noticed a substantial spike in the demand for these services from direct clients as well as third-party digital agencies. This isn’t down to anything we’ve done from a marketing or sales perspective, although referrals have certainly played their part.
This indicates that there’s increasing buy-in from senior decision makers, which isn’t surprising given the rise of user experience (or UX as it’s more commonly known). After all, stats like these used to startle people:
“A 5% increase in customer retention can increase a company’s profitability by 75%”
Bain & Co
“70% of software projects fail due to lack of adoption.”
Forrester Research
“Every dollar invested in user experience generates $100 in return.”
Forrester Research
“UX leaders outperform their peers.”
Six-year stock performance of customer experience leaders vs. laggards based on the S&P 500
“UX helps define requirements up-front, which results in 50% less wastage and between 33% and 55% less development time.”
MIT eXchange
Now they just nod in agreement.
The net effect of this is that prospective customers are coming to us with a solid understanding of what we do, how we can help them and what they want us to achieve, which is great. In some cases, this will extend to how they want us to achieve it too.
The latter point can prove difficult. As with any research and consultancy project, there’s an unpalatable truth that only experienced practitioners understand: if we’re engaged to conduct research, we should be learning things that inform our next steps. Some clients understand this; others find it disconcerting, to say the least.
So, is a little knowledge a dangerous thing?
We’re often asked if we can help improve a website. The simple answer is yes, this is exactly what we do. However, when people ask how we’re going to do this, the reality is that each project is different.
In large part, the methods and techniques we use are dictated by three things:
- The specific business context of the website
- The specific outcome (and not output or deliverable) we’ve been engaged to deliver
- The time and budget available.
This is why we spend so much time at the outset of a project identifying what the client really wants from engaging us, and what dependencies they have in terms of implementing any recommendations we make. It allows us to establish whether the potential client wants to engage us for a one-off report (e.g. an expert review) or is looking to invest in a process (conversion rate optimisation).
It’s also why my next post will explain the differences between an expert (or website) review and a conversion rate optimisation process; the lines between the two have become increasingly blurred, even though they’re very different things.
Over the years we’ve learnt it’s our duty to drill into the outcomes clients want during the sales process. It’s certainly not easy, and sometimes it precludes a sale we would otherwise have made. But if we didn’t, we’d run the risk of a disconnect between what clients think they’re buying and what they actually receive.
If you’re interested in learning more about how we improve websites and the specific techniques we use, sign up for our newsletter or get in touch to discuss things in greater detail.