The Basics

The End-User Elicitation Methodology

Terminology, Process, and Scientific Papers

Use Case

This is a hypothetical example of a researcher designing gesture controls for a smartphone file-explorer app that requires an interaction for each of the following functions: open a file, close a file, and delete a file.

Why Run an Elicitation Study?

The researcher’s goal is to design touch-screen gestures for each of her file-explorer app’s functions that feel natural and intuitive to users. She hopes that running an elicitation study will allow her to elicit intuitive gesture designs from her target population. Users may also propose gesture designs that the researcher herself had not imagined, and such a study allows her to identify synonyms and variations for gesture designs.

How to Run an Elicitation Study?

There are two parts to every elicitation study: data gathering and data analysis. In the first part, the researcher invites end users to participate in the study. She shows the participants the results of an action, known as a referent. In this example, the referents shown are: opening a file, closing a file, and deleting a file. The researcher asks each participant to take an action that would cause each referent to occur. The proposed interaction designs in these studies are typically referred to as symbols. Symbols can be any form of interaction between a user and the technology they are using; typically, the researcher will specify to participants what forms of symbols the target technology can recognize. For example, the researcher might instruct participants to only propose symbols that are text strings for command line interfaces, audio clips for voice user interfaces, or sketches of icons for graphical user interfaces.

Once the researcher has collected enough symbols for her referents, she moves on to the second part of the study: analyzing the symbols. For each referent, the researcher has a set of symbols—touch gestures in this example—collected from the participants. The researcher compares all of the symbols for a given referent to each other and groups them based on their similarity. After grouping the symbols, the researcher calculates an agreement score, which quantifies the consensus among participants. The formula for calculating agreement is

A = (1 / |R|) · Σ_{r ∈ R} Σ_{Pi ⊆ Pr} ( |Pi| / |Pr| )²

In the equation, r is a referent in the set of all referents R, Pr is the set of all symbols proposed for referent r, and Pi is a subset of similar symbols in Pr.
The researcher uses agreement calculations to determine whether the largest set of similar symbols has enough members to be the single best choice, or whether multiple symbols should be used synonymously for the same referent. Agreement calculations also resolve conflicts in which the same symbol is proposed for multiple referents: in that case, the symbol is usually assigned to the referent for which it has the highest agreement score.
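The agreement computation can be sketched in Python. The group-size representation below is illustrative (a real study would first cluster the raw symbols by similarity); the formula itself follows the definition above, where each referent's score sums the squared proportions of its similarity groups:

```python
def agreement(group_sizes):
    """Agreement score for a single referent.

    group_sizes: sizes of the similarity groups among the symbols
    proposed for this referent, e.g. [6, 3, 1] means 10 proposals
    split into groups of 6, 3, and 1 similar symbols.
    """
    total = sum(group_sizes)  # |Pr|, all symbols proposed for referent r
    return sum((size / total) ** 2 for size in group_sizes)  # Σ (|Pi|/|Pr|)²

def overall_agreement(groups_per_referent):
    """Mean of per-referent agreement scores over all referents R."""
    return sum(agreement(g) for g in groups_per_referent.values()) / len(groups_per_referent)

# Hypothetical data for the file-explorer example: for "open a file",
# 6 of 10 participants proposed a double tap, 3 a swipe right, 1 a long press.
print(agreement([6, 3, 1]))  # 0.36 + 0.09 + 0.01 = 0.46
```

A score of 1.0 means every participant proposed a similar symbol; scores approach 1/|Pr| as proposals fragment into singleton groups.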

Read Wobbrock et al.’s paper >

Terminology
Referent: The result of an action on an interactive system.

Symbol: User-proposed human input. Symbols are actions meant to invoke referents—i.e., functions—in an interactive system.

The End-User Identification Methodology

Evaluating Interaction Designs

An end-user identification study is an evaluation method for the symbols that could or do appear in a user interface, including those generated by elicitation studies. Conceptually, identification studies are the reverse of elicitation studies. In identification studies, researchers present end users with symbols (actions for invoking effects on a computing system, e.g., mid-air or stroke gestures, command-line or voice commands, button icons or labels). Researchers then ask users to propose the referent (the effect on the computing system, i.e., what the symbol would do), usually without revealing the commands available in the target system. Researchers aggregate the user-generated referents into groups based on similarity, then proceed by either confirming the appropriateness of the symbol-referent pairing or assigning new referents to symbols that had low referent-identifiability.
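The aggregation step above can be sketched as follows. The metric and function name here are assumptions for illustration (the source does not define a formula for identifiability); the sketch simply groups participants' proposed referents for one symbol and reports the share that converged on the most common interpretation:

```python
from collections import Counter

def identifiability(proposed_referents):
    """Fraction of participants who converged on the most common
    referent interpretation for a single symbol (illustrative metric)."""
    counts = Counter(proposed_referents)
    top_referent, top_count = counts.most_common(1)[0]
    return top_referent, top_count / len(proposed_referents)

# Hypothetical data: 8 participants were shown a pinch gesture and
# asked what effect it would have in the file-explorer app.
print(identifiability(["close file", "close file", "zoom out",
                       "close file", "delete file", "close file",
                       "close file", "zoom out"]))
```

A symbol whose top interpretation captures only a small fraction of participants would be a candidate for reassignment to a new referent.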

Read Ali et al.’s paper >

Reading List

Methods and Best Practices

  1. Ali, A.X., Morris, M.R. and Wobbrock, J.O. (2019). Crowdlicit: A system for conducting distributed end-user elicitation and identification studies. Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '19). Glasgow, Scotland (May 4-9, 2019). New York: ACM Press. To appear.
  2. Ali, A.X., Morris, M.R. and Wobbrock, J.O. (2018). Crowdsourcing similarity judgments for agreement analysis in end-user elicitation studies. Proceedings of the ACM Symposium on User Interface Software and Technology (UIST '18). Berlin, Germany (October 14-17, 2018). New York: ACM Press, pp. 177-188.
  3. Vatavu, R.-D. and Wobbrock, J.O. (2016). Between-subjects elicitation studies: Formalization and tool support. Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '16). San Jose, California (May 7-12, 2016). New York: ACM Press, pp. 3390-3402.
  4. Vatavu, R.-D. and Wobbrock, J.O. (2015). Formalizing agreement analysis for elicitation studies: New measures, significance test, and toolkit. Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '15). Seoul, Korea (April 18-23, 2015). New York: ACM Press, pp. 1325-1334. Honorable Mention Paper.
  5. Morris, M.R., Danielescu, A., Drucker, S., Fisher, D., Lee, B., schraefel, m.c. and Wobbrock, J.O. (2014). Reducing legacy bias in gesture elicitation studies. ACM Interactions 21 (3), May + June 2014, pp. 40-45.
  6. Morris, M.R. (2012). Web on the wall: insights from a multimodal interaction elicitation study. Proceedings of the ACM Conference on Interactive Tabletops and Surfaces (ITS '12). Cambridge, Massachusetts (November 11-14, 2012). New York: ACM Press, pp. 95-104.
  7. Morris, M.R., Wobbrock, J.O. and Wilson, A.D. (2010). Understanding users' preferences for surface gestures. Proceedings of Graphics Interface (GI '10). Ottawa, Ontario, Canada (May 31-June 2, 2010). Toronto, Ontario: Canadian Information Processing Society, pp. 261-268.
  8. Wobbrock, J.O., Morris, M.R. and Wilson, A.D. (2009). User-defined gestures for surface computing. Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '09). Boston, Massachusetts (April 4-9, 2009). New York: ACM Press, pp. 1083-1092. Best Paper Nominee.
  9. Wobbrock, J.O., Aung, H.H., Rothrock, B. and Myers, B.A. (2005). Maximizing the guessability of symbolic input. Extended Abstracts of the ACM Conference on Human Factors in Computing Systems (CHI '05). Portland, Oregon (April 2-7, 2005). New York: ACM Press, pp. 1869-1872.

Read next

The Process

An overview of the D.X.D. process

The System

Get familiar with the Crowdlicit and Crowdsensus tools