Data-Triggered UX Reviews
I’m currently helping a client with the UX of their data and analytics reporting. In a recent report, an analyst noted a very low, persistent click-through rate from a certain type of product page to an associated store that sells add-ons for those products. The analyst recommended further investigation, suggesting it might be because the store promotion usually fell below the fold.
Unable to resist, I took a quick look at one of the pages, and within about five minutes surmised that the problem wasn’t so much that the store promotion was hard to notice, but that a visitor had to work quite a bit to find a way to get there. The low CTR we were seeing made perfect sense.
The store was actually mentioned quite prominently on the product page as one of a few text highlights, but unlike the others, the store highlight had no link. Further down the page, below the large product image (which ran from just above to just below the fold), were a few graphic features, including one for the store. Clicking on it didn’t take you to the store. Instead, it changed the product image above (all of this was one big Flash object), replaced the highlights with a slightly longer marketing description of the store, and offered a link to “Learn more” about the store. This was the one and only link on the page that took you to the store; it was the click-through the client was looking for; and it was buried in Flash transitions behind a possibly less-than-optimal label. Oh, and the link opened a new window that in IE8 (but not Firefox 3.5) triggered the pop-up blocker, which required me to allow pop-ups, watch the page refresh, re-navigate through the Flash to the store link, and click again.
I’m writing up my findings for the client, but it was pure coincidence that someone with a UX perspective read that report and investigated. And while I’m providing this insight and a few next-step recommendations, I haven’t worked with the client long enough to know where they will go or whether anything will come of them.
This type of situation has happened several times before as I’ve worked with, or just alongside, analytics teams. It reinforces what many (most?) of us in the UX field already know: combining our research, testing, and design with live-site data and analytics improves the overall result. And while I’ve seen progress in the form of multivariate testing of competing designs, where the people and resources needed to make change happen are already paying close attention, I haven’t seen the same follow-through when analytics on an existing site turn up something puzzling.
I’ve suggested in the past, and will work on again this time, that a process be put in place wherein data anomalies trigger a UX review that feeds into ongoing site work. The trigger could be a data analyst raising a flag, or it could be an algorithm that looks for anomalies exceeding a certain threshold, along the lines of the sketch below.
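To make the automated version a bit more concrete, here is a minimal sketch of what such a trigger might look like in Python, assuming a daily click-through-rate series for the page-to-store path. The trailing window, the z-score cutoff, and the example numbers are illustrative placeholders, not figures from the client project.

```python
# Minimal sketch of a threshold-based trigger for a daily CTR series.
# Window length and z-score cutoff are illustrative, not recommended values.
from statistics import mean, stdev

def flag_ctr_anomalies(daily_ctr, window=14, z_cutoff=2.0):
    """Return indices of days whose CTR deviates from the trailing-window
    baseline by more than z_cutoff standard deviations."""
    flagged = []
    for i in range(window, len(daily_ctr)):
        baseline = daily_ctr[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(daily_ctr[i] - mu) > z_cutoff * sigma:
            flagged.append(i)  # queue these days for a UX review
    return flagged

# Example: CTR hovering around 0.4% and then dropping persistently.
history = [0.0040, 0.0042, 0.0038, 0.0041, 0.0039] * 4 + [0.0010] * 5
print(flag_ctr_anomalies(history))  # indices of days where the drop crosses the threshold
```

In practice the interesting part isn’t the statistics, it’s the hand-off: whatever raises the flag, human or script, the output needs to land in the queue of whoever does ongoing site work, or it goes nowhere.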
What do you think? Have you had similar experiences or been involved with a process that worked like this? I’d love to hear your comments.