Notes on Krug, Chapters 8 and 9

Chapter 8

Web teams often aren't good at making decisions about usability questions
- "religious debates" = people expressing strongly-held personal beliefs about things that cannot be proven, rarely result in anyone changing their point of view
- arguments create tension, erode respect among teammates, prevent team from making critical decisions
- several forces at work

Everyone likes?
- Web site workers are also Web users and have strong likes and dislikes about Web sites
- hard to check these feelings at the door when working on a Web team
- natural tendency to project individual likes and dislikes onto users in general and to assume most users like the same things the workers do

Professional passion
- workers have different perspectives on what constitutes good Web design
- base this on what they do for a living
- farmers versus cowmen, designers versus developers
- reactions happen at a brain-chemistry level, so it's hard to imagine that everyone doesn't feel the same way
- differences in perspective lead to conflict and hard feelings when establishing design priorities
- hype culture = upper management, marketing, and business development; they make whatever promises are necessary to attract capital, revenue-generators, and users to the site

Average User myths
- the idea that most users are like anything = false
- most users are like workers = conflict-causing belief
- there is no Average User
- "All Web users are unique and all Web use is basically idiosyncratic" - Krug, chapter 8, page 108
- a user's individual reactions to Web pages are based on many variables
- attempts to describe users in terms of one-dimensional likes and dislikes are futile and counterproductive
- Average User myth reinforces the idea that good Web design is figuring out what people like
- there are no simple "right" answers for most Web design questions

"What works is good, integrated design that fills a need--carefully thought out, well executed, and tested." - Krug, chapter 8, page 108

Antidote
- only way to answer questions is through user testing
- build some version of the site and then watch people try to figure out what it is and how to use it
- "There is no substitute for it." - Krug, chapter 8, page 109
- defuses most arguments and breaks impasses by moving the discussion away from what's right or wrong and what people like or dislike, toward what works and what doesn't work
- testing makes it hard to keep thinking that all users are like the workers

Chapter 9

Usability testing
- testing can help settle design arguments
- usually reveals that argument points weren't all that important
- most testing gets done too little, too late, and for all the wrong reasons

focus group = a small group of people sit around a table and talk about things; good for getting a sampling of users' feelings and opinions
usability testing = watching one person at a time try to use something to do typical tasks; lets creators detect and fix the things that confuse or frustrate users

Focus group testing
- Marketing origins
- great for determining what audiences want, need, and like in the abstract; whether the idea behind the site makes sense and its value proposition is attractive; how people currently solve the problems the site will help with; and how users feel about the creator and its competitors
- not good for learning whether site works or how to improve it
- things learned from focus groups should be things creators know before they start designing or building; best used in planning stages of a project

"Usability tests should be used through the entire process" - Krug, chapter 9, page 113

Usability test truths
- test if you want a great site; testing reminds creators that not everyone sees things the way they do and gives them a fresh perspective
- testing one user is better than testing none; testing always works
- testing early in the project is better than testing near the end; keep the test simple and don't make a big deal of it; it's easier to change a site before it's in use, and mistakes corrected early save creators trouble later

DIY (do-it-yourself) usability testing
- in the beginning, usability testing was elaborate and very expensive to perform, so it didn't happen very often
- Jakob Nielsen introduced discount usability testing in 1989; less expensive, but it still didn't happen often enough
- if you can hire a professional to test, do it; otherwise do it yourself so that it is tested

How often to test?
- do it a morning a month, according to Krug
- keeps it simple so creators will keep doing it; a morning a month is about as much time as most Web development teams can afford to spend on testing
- gives creators what they need: enough problems to fix for the next month
- frees creators from deciding when to test; pick one day of the month as the designated test day, and since schedules often slip there's always something ready to test each month
- makes it more likely that people will attend; doing it in the morning on a predictable schedule increases the odds that teammates will come and watch some of the sessions, which is desirable

How many users are needed?
- three is the ideal number of participants, according to Krug
- the purpose of DIY testing isn't to prove anything; it's a qualitative method meant to improve what the creator is building by identifying and fixing usability problems; not a rigorous process, and the result is actionable insights
- creators don't need to find all the problems, and they never will; fix the most serious problems first; it's more important to do more rounds of testing than to wring everything out of each round

"You can find more problems in half a day than you can fix in a month." - Krug, chapter 9, page 119

Choosing participants
- recruiting people from a target audience isn't as important as it may seem
- your site probably has a number of usability flaws that will cause real problems for almost anyone recruited
- recruiting people in a target field who fit a narrow profile = more work + more money spent
- recruit loosely and grade on a curve
- find users who reflect the target audience, but don't be strict about it; loosen the requirements, make allowances for differences between participants and the audience, and note those differences when interpreting the results
- if the site requires specific domain knowledge, some participants should have that knowledge, but not all of them need to

Adding participants not from target audience
- usually not a good idea to design a site with only the target audience in mind, need to support novices and experts
- everyone is a beginner under the skin; people just muddle through at a higher level
- experts are rarely insulted by something made clear enough for beginners, everyone appreciates clarity, don't dumb it down though

How to find participants?
- there are many places and ways to recruit test users
- include monetary incentives; Krug offers more than the going rate, which makes it clear that creators value the participants' time and improves the chances that they will show up for testing

Where to test?
- need a quiet space with no interruptions, a table or desk, and two chairs
- need a computer with Internet access, a mouse, a keyboard, and a microphone
- use screen-sharing software so observers can watch the tests from another room
- run screen-recording software to capture a record of what happens on-screen and what the facilitator and user say, good for checking something or using brief clips as part of a presentation

Who should do testing?
- facilitator = person who sits with participant and leads them through the test
- anyone can facilitate a usability test
- should be someone patient, calm, empathetic, and a good listener; no "not a people person" or "office crank" facilitators
- encourage participants to think out loud as much as possible, make users comfortable and focused

Who should observe?
- as many people as possible
- transforming experience, changes the way one thinks about users, suddenly "get it" that users aren't all like them
- lure people in with snacks
- need an observation room, a computer with Internet access and screen-sharing software, a large screen monitor or projector, and a pair of external speakers
- during the breaks between test sessions, have observers write down the three most serious usability problems they noticed in that session and share them at the debriefing; this identifies the most serious problems so they get fixed first

What to test and when to test?
- start testing as early as possible, keep testing through the entire development process
- it's never too early to start testing; before anything is built, creators can test competitive sites
- test a site before redesigning it so creators know what's working, what needs changing, and what isn't working
- throughout the project, continue testing everything the team produces

How to choose the tasks to test?
- tasks = things the participant will try to do
- depends on what creators have available to test
- start by making a list of the tasks people need to be able to do with whatever the creators are testing
- choose enough tasks to fill the available time, 35 minutes to an hour typically
- word each task carefully so users understand exactly what is being asked of them; include any info they'll need but won't have
- creators often get more revealing results if participants are allowed to choose some details of the task, increases the emotional investment and allows them to use their personal knowledge of the content

What happens during the test?
- use a script, read "lines" exactly as written since the wording has been carefully chosen
- start with a welcome and explain how the test will work so the participant knows what to expect, 4 minutes roughly
- ask the participant questions about themselves; this puts them at ease and gives creators an idea of how computer-savvy and Web-savvy they are, 2 minutes roughly
- open the Home page of the site and ask the participant to look around and tell the facilitator what they make of it; gives creators an idea of how easy the Home page is to understand and how much the participant already knows about the domain, 3 minutes roughly
- watch the participant perform the tasks; make sure the participant stays focused and thinks aloud, prompting them to say what they're thinking whenever they go quiet; it is crucial to let users work on their own: do not influence them, and do not help even if asked, 35 minutes roughly
- after the tasks are done, probe the participant about what happened during the test and ask any questions the people in the observation room want asked, 5 minutes roughly
- thank the participant for their help, pay them, and show them out, 5 minutes roughly

Typical problems
- users are unclear on the concept, they don't get it
- the words users are looking for aren't there; creators failed to anticipate what users would look for, or the words used to describe something aren't the ones users would use
- there's too much going on, creators need to reduce overall page noise or turn up the volume on things users need to see so they "pop" out of the visual hierarchy more

Debriefing and deciding what to fix
- debrief over lunch right after tests are done, while things are fresh in the observers' minds
- focus ruthlessly on fixing the most serious problems first
- make a collective list: what were the three most serious problems each observer noted? write them down and add a check-mark for every "me too"; no discussion at this point, and items must be observed problems that actually happened during one of the test sessions
- choose the ten most serious problems by informal voting, starting with the ones that got the most check-marks
- rank the chosen ten from 1 to 10, with 1 being the worst; copy the new list in order from worst to best, leaving room under each item (a rough sketch of this tally-and-rank step follows this list)
- working down the ordered list, write rough ideas of how to fix each problem in the next month, who will do it, and what resources it will need; the fix doesn't have to be perfect or complete, at minimum do something to move the problem out of the "serious problem" category; stop once the available time and resources are allocated, since at that point the team has what it came for and has made a commitment
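A minimal sketch of the tally-and-rank step above, written in Python purely for illustration. The observer lists and problem wording are hypothetical examples, not from Krug; the actual debriefing is a group discussion, and this only mirrors the counting and ordering described in the notes.

from collections import Counter

# Hypothetical input: each observer's list of the three most serious
# problems they saw during the sessions, copied from their notes.
observer_lists = [
    ["home page doesn't say what the site is", "search box is hard to find", "sign-up form is too long"],
    ["search box is hard to find", "home page doesn't say what the site is", "menu labels use internal jargon"],
    ["home page doesn't say what the site is", "sign-up form is too long", "search box is hard to find"],
]

# Tally check-marks: one mark for every observer list a problem appears in.
checkmarks = Counter(problem for problems in observer_lists for problem in problems)

# Take the ten problems with the most check-marks and print them worst-first,
# leaving room under each for a rough fix, an owner, and the resources needed.
for rank, (problem, marks) in enumerate(checkmarks.most_common(10), start=1):
    print(f"{rank}. {problem} ({marks} check-marks)")
    print("   fix idea / owner / resources: ________________")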

What to fix and what not to do
- keep a separate list of not-so-serious problems that are easy to fix, one person can fix these in less than an hour without getting permission from the debriefing members
- resist the impulse to add things, do not include an explanation or instructions, take away whatever may be obscuring the meaning, don't add more distractions in an attempt to help
- take "new feature" requests with a grain of salt, participants aren't designers, may occasionally come up with a great idea
- ignore "kayak" problems, people go astray but will realize it and get back on track quickly, if the user's second guess about where the find things is always right then that's good enough

Alternative testing
- Remote testing = participants do the test in their own home using screen-sharing; makes it easier to recruit busy people and expands the recruiting pool to "almost anyone"; participants need high-speed Internet access and a microphone
- Unmoderated remote testing = services provide people who record themselves doing a usability test; creators send in tasks and a link to their site/prototype/mobile app, and the results come back within an hour; creators cannot interact with the participant in real time; relatively inexpensive and requires almost no effort on the creator's part beyond watching the video