Lessons learned from the user evaluation perspective (or can we define the ‘long tail’?)

The key lesson we have learned during this project is that the assumptions behind its hypothesis need to be reconsidered: in this context the ‘long tail’ is complex and difficult to measure. Firstly, how do we evaluate what is ‘long tail’ from a user perspective? We may draw a line in the sand in terms of the number of times an item has been borrowed, but that doesn’t necessarily translate into individual or community contexts. Most of this project was taken up with processing the data and creating the API and UI; with more time we could have devoted more resources to these questions as they arose during testing.
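To make that ‘line in the sand’ concrete, one crude approach is to classify an item as long tail whenever its loan count falls below a fixed cut-off. The sketch below is purely illustrative: the loan counts, identifiers and threshold are invented for the example, not taken from the project’s data.

    from collections import Counter

    # Invented loan counts, one entry per item identifier.
    loans = Counter({
        "isbn:9780000000001": 120,  # frequently borrowed
        "isbn:9780000000002": 3,    # rarely borrowed
        "isbn:9780000000003": 0,    # never borrowed
    })

    # An arbitrary 'line in the sand': items borrowed fewer than
    # this many times count as long tail.
    LONG_TAIL_THRESHOLD = 5

    def is_long_tail(item_id: str) -> bool:
        return loans[item_id] < LONG_TAIL_THRESHOLD

    long_tail_items = [item for item in loans if is_long_tail(item)]
    print(long_tail_items)  # ['isbn:9780000000002', 'isbn:9780000000003']

Even in this toy form, the difficulty the testing raised remains: whatever value the cut-off takes, it says nothing about whether the material is actually useful to an individual researcher or community.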

The focus groups highlighted how diverse each researcher, and each research topic, is. We chose humanities postgraduates, at both master’s and PhD level, but even within this group there was a huge range of topic areas, from the incredibly niche to the rather more popular. Some respondents therefore found the niche searches fruitful, while others found nothing, because their research area is so specialised that there is hardly any material they don’t already know about. In addition, when long-tail material is surfaced, some researchers find it outdated or irrelevant, which is precisely why it isn’t borrowed very often. So is there any merit in bringing it to the attention of the research community?

Further, more in-depth testing is needed to find answers to some of these problems. The testing for this project asked respondents to rate their searches and pick out some of the more interesting texts, but we need to sit down with fewer researchers and broaden the discussions. What is relevant? How do you gauge relevance? Some respondents said the books were not relevant, yet more said they would borrow them, so where does this discrepancy come from? Perhaps ‘relevant’ is not the correct term: can the long tail of discovery produce new perspectives and interesting associations perhaps not previously thought of? Only one-to-one in-depth testing can provide the data to indicate where the threshold should be set.

After all, is there any point in a recommender that only gives you recommendations you expect or already know about? Yet some participants wanted or expected exactly that, and were disappointed when they got results they could not predict. If I search for a CD I’m familiar with on Amazon, I sometimes get recommendations I already know about or own. So the recommender means different things to different people: one group is satisfied that they know all the recommended texts and can sleep soundly knowing they have completely saturated their research topic, while another group needs new material.

The long-tail hypothesis is a difficult one to prove in a short-term project of six months; as its name suggests, the long tail needs to be explored over a long period. Monitoring borrowing patterns in the library, click-through rates, and feedback from the user community and from librarians will help refine the recommender tool for maximum effectiveness.
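As a rough illustration of the kind of feedback loop this implies, the sketch below uses an invented event log and a hypothetical adjustment rule: if users rarely click through to long-tail recommendations, the cut-off is tightened; if they click often, it can be loosened.

    # Invented monitoring data: did users click and borrow what the
    # recommender surfaced? Field names are assumptions for the example.
    events = [
        {"item": "isbn:9780000000002", "clicked": True,  "borrowed": True},
        {"item": "isbn:9780000000003", "clicked": False, "borrowed": False},
        {"item": "isbn:9780000000004", "clicked": True,  "borrowed": False},
    ]

    def click_through_rate(events):
        return sum(e["clicked"] for e in events) / len(events) if events else 0.0

    def adjust_threshold(threshold, ctr, target=0.5, step=1):
        # Hypothetical rule: tighten the long-tail definition when
        # click-through is poor, loosen it when users engage.
        return max(1, threshold - step) if ctr < target else threshold + step

    threshold = 5
    ctr = click_through_rate(events)
    print(adjust_threshold(threshold, ctr))  # 6, since 2/3 of items were clicked

Only sustained monitoring of this kind, over a much longer period than six months, would show where the threshold genuinely belongs.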
