Monday, February 14, 2005

The Magic of User Testing

"It's a miracle that user testing works," says Jakob Nielsen in today's edition of Alertbox.

He's right, of course. Usability testing puts people in an extraordinarily unfamiliar environment—and then asks them to perform tasks "just like you would at work or home." Cameras watch their every move. The facilitator sits close by. They're asked to "think aloud," to give a play-by-play of their thinking process. There may be a big one-way mirror with who-knows-who sitting behind it. Yet despite it all, most participants do tune out the distractions and work through the requested tasks effectively.

In this particularly practical issue, Nielsen explains that the miracle is due to the human tendencies to engage and suspend disbelief. He also suggests some good ways to help the relatively few users who do have difficulty with the lab setting.

The problems I've experienced with such users usually arise from less-than-optimal user selection and task creation. I'm infamous with a recruiting service for my meticulous screeners; I'm fastidious about finding users who accurately represent my Clients' user base. Even so, a screener is only as good as the information my Client gives me and the subsequent research the budget allows. Worse, some participants see an opportunity to make a few easy bucks and misrepresent themselves during phone screening. In both cases, my solution is the same: build the screener on the best information available and schedule more test participants than needed in case a "ringer" slips through.
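To put rough numbers on the over-recruiting habit, here's a minimal back-of-the-envelope sketch in Python. It's my own illustration, not anything from Nielsen's article or my actual screening documents, and the 20% dropout/ringer rate is simply an assumed figure:

    import math

    def participants_to_recruit(sessions_needed, dropout_rate):
        # Pad the schedule so that even if some recruits no-show or turn out
        # to be "ringers," enough genuine participants remain.
        # dropout_rate is a rough guess, e.g. 0.2 for 20%.
        return math.ceil(sessions_needed / (1.0 - dropout_rate))

    # e.g. 8 usable sessions with an assumed 20% dropout/ringer rate -> recruit 10
    print(participants_to_recruit(8, 0.2))

The exact padding is a judgment call; the point is simply to build in the slack before a ringer shows up, not after.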

Task creation is equally important. I begin by asking my project team to complete a task prioritization exercise based upon Mike Kuniavsky's example in Observing the User Experience. This helps us zero in on what's most important and realistic. After defining the tasks, I complete a "task template" as suggested by Carolyn Snyder in Paper Prototyping. (A PDF template is available here.) The template gathers all of the needed information about each task, sparks questions about it, and aids in the creation of user instructions. Then we pilot the tasks repeatedly to shake out any problems before testing begins.
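For the curious, a prioritization exercise of this kind might boil down to something like the sketch below. This is my own simplification for illustration; the example tasks and the importance-times-frequency scoring are assumptions, not Kuniavsky's actual worksheet:

    # Hypothetical simplification of a task-prioritization exercise:
    # score each candidate task by how important and how frequent it is,
    # then write test tasks for the highest scorers first.
    candidate_tasks = [
        {"task": "Find a product and add it to the cart", "importance": 5, "frequency": 5},
        {"task": "Check the status of an existing order", "importance": 4, "frequency": 3},
        {"task": "Update a saved billing address",        "importance": 3, "frequency": 1},
    ]

    for t in sorted(candidate_tasks, key=lambda t: t["importance"] * t["frequency"], reverse=True):
        print(t["importance"] * t["frequency"], t["task"])

However you score it, the value is in forcing the team to agree on which handful of tasks the test can realistically cover.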

In short, getting your usability-test ducks lined up in a neat row makes it that much easier for participants to engage and suspend disbelief.
