
Think-Aloud Computing

Supporting Rich and Low-Effort Knowledge Capture

This paper presents a new way to capture the knowledge behind what people do on a computer: simply by speaking while they work.


Abstract

Think-Aloud Computing: Supporting Rich and Low-Effort Knowledge Capture

Rebecca Krosnick, Fraser Anderson, Justin Matejka, Steve Oney, Walter S. Lasecki, Tovi Grossman, George Fitzmaurice

ACM SIGCHI Conference on Human Factors in Computing Systems 2021

When users complete tasks on the computer, the knowledge they leverage and their intent are often lost because they are tedious or challenging to capture. This makes it harder to understand why a colleague designed a component a certain way or to remember the requirements for software you wrote a year ago. We introduce think-aloud computing, a novel application of the think-aloud protocol in which computer users are encouraged to speak while working, capturing rich knowledge with relatively low effort. Through a formative study, we found that people shared information about design intent, work processes, problems encountered, to-do items, and other useful details. We developed a prototype that supports think-aloud computing by prompting users to speak and contextualizing their speech with labels and application context. Our evaluation shows that more subtle design decisions and process explanations were captured with think-aloud than with traditional documentation, and participants reported that think-aloud required effort similar to traditional documentation.
