3 lessons learned (the hard way) from a user test expert
I think Danish physicist Niels Bohr said it best: “An expert is a person who has made all the mistakes that can be made in a very narrow field.” Having planned, analyzed and reported on over 700 user test sessions, and made lots of mistakes in the process, I certainly meet this definition.
But by reading and heeding my advice here, you won’t have to repeat the same mistakes I made, and you’ll hop a few steps up the user test (UT) learning curve.

Lesson 1: Invest in a well-written test script

Ever watched a movie that had lots of great actors in it but turned out to be terrible? (I certainly have!) Then you know the importance of a well-written and edited script. That’s especially true for user tests, and doubly true for unmoderated user tests, which comprise over 80% of the UTs run these days.
So, after first firming up your test objectives and scope, invest some time in writing a solid script, which includes:
For more complex research, like “path to purchase” journey studies, these scripts get much more involved. But those are the “core” parts.

When writing your script, make sure that:
If you’re a “problem child” and need consequences to take action, writing a bad test script can result in:
You even risk lowering your professional credibility, which no one wants to do.
If you’re not a good writer, hire one!
One thing I’ve learned in life: it’s best to focus on what you’re best at and let others handle the rest.
So, if you’re not a strong writer, admit that early on and hire someone who is. If you’re running the test yourself, network around to find a writer within your organization. Or hire a freelance copywriter with a strong reputation.
If you’re running your test on a platform (UserTesting, Userlytics, etc.), leverage the writers and analysts on their professional services team. By doing so, it’s 95% likely the first draft of your script will be good.
Lastly, make sure your script focuses on a specific point in your customer journey. A very common mistake I see: clients trying to assess too much in the same test. This makes the script, and the test, too long. You can always run another test next week or month (and it’s usually better to use this test-iterate-retest approach anyway).

Lesson 2: Recruit a representative set of participants

I sometimes hear comments like, “I just tested it with my friends and family,” or “I tested it with my team members.” That’s fine if you’re testing a minimum viable product (MVP), or doing a formative study on a website or app with a smaller user base.
But if you’re testing a site or app that you will launch to a larger market, you need to bring in a representative set of participants — that is, participants who have the demographics, experience, mindset, and motivations of your target users. If you don’t, you may get a lot of feedback, but it won’t be valid.
Another common screening mistake I see: testing with participants who know too much. Remember, real-world users typically know nothing about the product or service you offer and have not previously experienced your website or app. So don’t test with subject matter experts or software developers; they know much more, and usually try harder, than non-technical people.
List out the “must-have” characteristics your testers need, followed by the “nice-to-haves.” In your recruitment screener, eliminate prospective testers who don’t match all the must-have criteria. Of course, if you’re testing with two or more user segments, you’ll have multiple sets of these criteria and may need to include question “branching” logic.
Below I show some sample screening questions, in this case for a “configure your new minivan” user test.

Except for some special cases (like when you need to recruit professionals or other people with specialized skills or experience), it doesn’t cost more to ask more screening questions. So build them in from the start to minimize the chances of recruiting “miscast” participants.
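To make the must-have filtering idea concrete, here’s a minimal sketch in Python. The criteria, field names, and scoring are hypothetical illustrations for a minivan-style test, not my actual screener or any platform’s API:

```python
# Hypothetical screener sketch: must-haves disqualify, nice-to-haves rank.

MUST_HAVES = {
    # Plans to buy a vehicle within the next 12 months
    "in_market": lambda r: r.get("purchase_timeline_months", 99) <= 12,
    # Screen out industry insiders and technical experts
    "not_expert": lambda r: r.get("works_in_auto_or_ux", True) is False,
}

NICE_TO_HAVES = {
    "has_children": lambda r: r.get("children_at_home", 0) > 0,
}

def screen(respondent: dict) -> tuple[bool, int]:
    """Return (qualified, nice_to_have_score) for one screener response."""
    if not all(check(respondent) for check in MUST_HAVES.values()):
        return False, 0  # fails any must-have: screen out
    score = sum(check(respondent) for check in NICE_TO_HAVES.values())
    return True, score

# Example: a parent shopping for a minivan within 6 months
print(screen({"purchase_timeline_months": 6,
              "works_in_auto_or_ux": False,
              "children_at_home": 2}))  # -> (True, 1)
```

If you’re testing two or more segments, you’d keep one criteria set per segment and route respondents to the right one with an early “branching” question.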
Recruit some extra testers
Recruit 10-20% more testers than your requirements dictate. Why? Because no matter how good your screener is, some respondents will either a) just plain lie when answering the screener, or b) not be good testers (they won’t share enough of their thoughts or follow your script instructions). Or, after running the first couple of sessions, you may discover that something’s wrong with your script.
Let’s say you’re running a qualitative UT with 10 participants. To make sure you get 10 “good completes”, recruit 11 or 12 testers.
If you’re recruiting more participants — for example, 40 for a quantitative test — you should recruit 4-8 extras. Four should be enough if you “test your test” before fully launching it. More on that lesson later.
True, you won’t always get bad testers. But it happens often enough that I’m willing to pay 20% more up front to avoid having to do extra recruiting or suffering a late-in-the-analysis-game “data deficit.”
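The arithmetic is simple enough to sanity-check in a few lines. This is just the rule of thumb above expressed as code, with a 15% buffer assumed as the default:

```python
import math

def recruits_needed(good_completes: int, buffer: float = 0.15) -> int:
    """Recruit a 10-20% buffer (15% by default) over the target good completes."""
    return math.ceil(good_completes * (1 + buffer))

print(recruits_needed(10))        # 12 -> matches "recruit 11 or 12 testers"
print(recruits_needed(40, 0.10))  # 44 -> 4 extras, enough if you pilot first
```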

Lesson 3: Test your test

Think about it: “quality assurance” (QA) and “beta” testing are standard parts of the website and software development process. So why shouldn’t you also QA your user tests? It’s a proven way to mitigate test deployment risks.
First, before you run your first “pilot” or “soft launch” test, make sure you’ve done your own due diligence by:
Once you’ve done this, “soft launch” your test. That is, launch it with the first participant only. Note issues that arise — with the script, screening criteria, or otherwise — and tweak things accordingly. When you’re sure the task wording’s clear and the question flow’s solid, fully launch the test to all testers.
A more agile (though somewhat riskier) alternative is to:
With this approach, if everything looks good, you don’t have to spend time relaunching the test. But you’ll need to keep a close eye on these tests, especially if they’re unmoderated, because with testers quickly “accepting” new tests, a few sessions may complete in less than an hour.
So either “launch one and pause” or “launch and monitor” based on your risk tolerance and testing workload.
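If your platform exposes launch controls, or you script your own process, the two approaches might look like the sketch below. Everything here, including the Platform class and its methods, is a hypothetical stand-in for whatever tooling you actually use:

```python
class Platform:
    """Stand-in for a testing platform's controls; hypothetical, not a real vendor API."""
    def launch(self, test_id: str, slots: int) -> None:
        print(f"{test_id}: opened {slots} participant slot(s)")

    def pause(self, test_id: str) -> None:
        print(f"{test_id}: paused for script review")

def launch_one_and_pause(p: Platform, test_id: str, total: int) -> None:
    """Safer flow: pilot with a single participant, review, then release the rest."""
    p.launch(test_id, slots=1)   # soft launch: first participant only
    p.pause(test_id)             # note script/screener issues, tweak accordingly
    p.launch(test_id, slots=total - 1)

def launch_and_monitor(p: Platform, test_id: str, total: int) -> None:
    """Riskier flow: open all slots, but review sessions as soon as they finish."""
    p.launch(test_id, slots=total)  # unmoderated sessions can complete within the hour

launch_one_and_pause(Platform(), "minivan-config-test", total=10)
```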
Apply these learnings and avoid anguish
There’s no teacher like experience, or like the emotional anguish that results from making mistakes in public. The good news is, by heeding this advice, you won’t have to make the same mistakes I’ve made.
Most importantly, you’ll spend more of your time collecting and sharing great user insights and building your “UX research rockstar” reputation.