Lou Rosenfeld has been a well-known information architect for many years, with a strong and varied consulting background. Back in the '90s he wrote what is still likely considered THE bible on IA (and a must-read for anyone who does this stuff). Organizing content for a web build, designing navigation, labeling, application and user flows, and page layouts are the things I've been doing longest, going back well over a decade, so IA is still what I think of as my bread and butter.
While I have used both open and closed card sorts, and other tools such as personas and basic web analytics to good effect, I've always felt my IA toolkit to be a little weak. This workshop focused on the topic of Lou's latest book, Search Analytics for Your Site, and gave me some great insights into things I can do almost immediately, not only to improve my practice, but hopefully to have a positive impact on our main web property in a more rapid and iterative fashion.
As with Steve's workshop the day before, the main suggestion here was to eschew major redesigns in favour of finding ways to make smaller improvements more frequently to net bigger rewards. As with the book, Lou's workshop used Michigan State University as a major case study for the day. Using a higher ed site was a real plus for me, since I experience many of the same issues in my work. And, as with user testing, my main issue is that I spend so much time doing project-specific work that finding cycles to actually do iterative, regular, operational-type design improvements to our main site is a major challenge.
The main arguments against major redesigns are that the moving-target variables of users, content, and organizational context make it impossible to get redesigns right; that the word "redesign" itself is meaningless, because it means something different to everyone; and that silos and internal competition (a major problem in higher education) make it almost impossible to do a good redesign. Instead of a redesign, then, we should consider the following:
- Prioritize: identify important problems regularly and cyclically.
- Tune: address those problems regularly.
- Be opportunistic: look for low-hanging fruit (again with the fruit).
I don’t want to rewrite tons from the workshop/book here, as that would be pretty unreadable for anyone who might bother looking at this. I write this after also reading Lou’s book on the plane home and I’d strongly recommend it to anyone who has any hand in designing, and particularly maintaining, large websites. The following presentation is a good overview of the main points from Lou’s workshop and book, suggesting you use Site Search Analytics (SSA) to improve your site’s user experience.
In addition to covering much of what the slide deck above contains, we did a couple of group exercises focused on content modelling and analyzing search queries to determine content types (which would be modelled into, for example, new or improved paths through your site). There are many different aspects to using site search data to improve your site’s user experience. While the workshop had an interesting flow, I found the book laid things out in a little more linear fashion (and I’m kind of a linear guy at times).
How to put SSA to work in real terms
Your search data can show you all sorts of patterns around tone, time, questions & answers, and other elements of user intention. Intention is an important concept, and one which makes SSA so powerful. While matching queries with results and session data can tell you what’s happening on your site, it can also show you where your site is failing. Which queries are returning zero, or poor, results? What about searches that lead to users immediately exiting from your site? It can often tell you enough to help you segment your audience by geographic location, visit frequency, conversion frequency, visit timing, and in other ways. I say ‘often’ because conversion measurement is a big sticking point for my main work site. I won’t go into detail here, but getting metrics for course purchases (a black hole of a system) and then tying them to our session data that leads to those conversions is pretty much an impossibility.
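As a rough illustration of the kind of analysis described above, here's a minimal Python sketch. It assumes a hypothetical search log exported as CSV with `query`, `num_results`, and `clicked_result` columns (your search engine's actual export format will differ), and pulls out the frequent queries, the zero-result queries, and the queries users never clicked through on.

```python
import csv
from collections import Counter


def load_queries(path):
    """Read a search log exported as CSV with columns:
    query, num_results, clicked_result (1/0)."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))


def frequent_queries(rows, top=20):
    """Most common queries, normalized to lowercase."""
    counts = Counter(r["query"].strip().lower() for r in rows)
    return counts.most_common(top)


def zero_result_queries(rows):
    """Queries that returned nothing -- candidate content gaps."""
    return sorted({r["query"].strip().lower()
                   for r in rows if int(r["num_results"]) == 0})


def abandoned_queries(rows, top=20):
    """Queries that returned results but were never clicked --
    possible relevance or labeling problems."""
    counts = Counter(r["query"].strip().lower()
                     for r in rows
                     if int(r["num_results"]) > 0
                     and int(r["clicked_result"]) == 0)
    return counts.most_common(top)
```

Even something this crude, run weekly, would surface the "prioritize and tune" candidates Lou talks about.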
These types of analyses can then be used to fix all sorts of things. You can plug content gaps for popular queries, design best bets or suggestions of likely best content based on query analysis, while search results can quite easily be improved based on both the specialized content you have on your site and the specialized queries your users are inputting. Site index (A to Z and similar) pages can actually be used experimentally to test new site hierarchies. As well, you can tune your metadata (particularly if you don't have to manage it manually, page by page), purge never-accessed content, and improve the relevance of your more valuable content. Most of these fixes come down to designing and connecting content types and fairly simple changes you can make at the page and navigation level of your site. All good stuff.
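The best-bets idea in particular is almost embarrassingly simple to prototype. A sketch, with entirely made-up queries and URLs: a hand-curated table maps the common query variants your analysis surfaced onto the one page users actually want, and a lookup pins that page above the regular search results.

```python
# Hypothetical best-bets table: map common query variants
# (found via query analysis) onto the page users actually want.
BEST_BETS = {
    "tuition": "/admissions/tuition-and-fees/",
    "fees": "/admissions/tuition-and-fees/",
    "course catalog": "/academics/courses/",
    "course catalogue": "/academics/courses/",
    "parking": "/campus-services/parking/",
}


def best_bet_for(query):
    """Return the curated 'best bet' URL for a query, if one exists,
    so it can be displayed above the regular search results."""
    return BEST_BETS.get(query.strip().lower())
```

The table stays small and human-maintained; the query analysis just tells you which entries are worth adding next.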
The truth is, between the book and the workshop, there's just so much to dig into that I can't possibly do it justice here. As with more frequent user testing, my goal will be to begin playing with a few simple concepts in my work. The problem, though, will be getting some collective attention on it with so many other things on our plates.