Real-life website testing

AUSTIN, Texas - The woman, a test subject, sits at a computer listening to a set of scripted instructions.

"Tell me what you think, not what I want you to think. You can leave at any time. I'm here to learn about how travelers obtain traffic and road construction information through a website."

Conducting the test is a University of Texas master's in information science student, Donna Habersaat. She watches and answers questions as the test subject clicks and scrolls through drivetexas.org, a Texas Department of Transportation website for travelers.

In the next room at UT's School of Information eXperience Lab (IX Lab for short), Habersaat's test partner Tom Reavley is watching the woman's online actions through one-way glass and on a computer monitor that shows what she's doing. He listens in on headphones and checks off items on a list of ways a person might find information on the website.

A few minutes later, he says, "She's doing pretty good. She actually got through a lot of things people made mistakes on, but now she's hitting a mistake that nobody else has made so far."

Habersaat and Reavley will collect and analyze data from test subjects, turn it into recommendations on how the website might be improved, and hand that information over to TxDOT, where Habersaat works. Habersaat suggested the project to her supervisors, turning a class project into a real-world usability study.

Margo Richards, director of TxDOT's travel information division, sat in on some of the testing. The site, which launched in May, is evolving, and testing like this could help rid it of problems. "It really needs to be reliable for people to continue to use it," she said. "If we are going to put the effort and time we have into the site, it needs to be user-friendly and accurate."

The TxDOT study is an example of "usability testing" - the broad term for improving products based on how people really use them. If you've ever been stymied by a badly designed website, frustrated by a cellphone with byzantine menus, or struggled to set up a DVD player with bad instructions, you've been the victim of bad design.

The usability guru at UT's School of Information is associate professor Randolph Bias, who, as director of the lab, has been teaching students how to test for a decade. He previously worked as a usability expert at Bell Labs, IBM, and BMC Software.

When he talks to people about usability, he tends to refer to it as a religion (as in, "When did you get the usability religion?"). While the methodology of studying ease of use has existed for decades, it's only in the last 15 years that companies have gotten the message, Bias says.

"All this information, if human beings can't gain access to it, it's of no value. The last time you went to a website and you tried to do something and you couldn't, that's because it's bad usability. It's not because you're stupid, it's because they didn't design for us, the target audience," Bias said.

If that sounds painfully familiar and if there are proven methods for usability testing, why are there still so many badly designed products and websites? Bias says many companies test as an afterthought, just before a product release, rather than throughout the design process. Others don't test well, or simply refuse to believe they haven't invented the greatest thing since the Post-it. That's why good usability testers are important.

"Usability people are in the business of telling people their baby's ugly," Bias said. But, he added, "we don't just say, 'Your baby's ugly,' we say, 'Here's how we make this baby pretty' " - i.e., usable.

In the mid-'90s, as usability was starting to get a seat at the tech-design table, Bias co-wrote a book, Cost-Justifying Usability, arguing that usability testing isn't just about making customers happy but also about increasing profits. "You'll sell more, you'll have more customers and less customer support. At the same time, you'll get good press," he said.

But how do you teach students to be good product testers? In class, Bias uses humor, Skype calls to experts such as Don't Make Me Think author Steve Krug, and role-playing. In a recent class, Bias and a student demonstrated what not to do in a usability test. The student played a frustrated subject ("I'm lost"), and Bias played an unethical tester who wouldn't let his subject leave and undermined his confidence ("Well, nobody's ever clicked that").

The lesson: How you test is as important as what you test and the data you get back. Testing must be done ethically, without the tester getting defensive in the face of negative feedback or slipping into "teacher mode" to influence the subject. In the IX Lab, students learn by conducting their own tests, then turning their observations into suggestions for improvements, often for nonprofits or government agencies that can use the help.

It's easy to see where in the tech world usability is an issue. For the last decade, Apple has had a run of computers, music players, and phones considered more intuitive and user-friendly than those of competitors. And Microsoft's Windows 8, a new version of its operating system, has been slammed by some usability experts for being confusing.

Testing thoroughly and properly doesn't necessarily mean improvement - designers can ignore usability data due to time constraints or cost - but designing without it can be a recipe for disaster. As Bias says, "It's impossible to have good intuitions about what other people's experiences are until you watch them."