Authentic vs. Synthetic: A Comparison of Different Methods for Studying Users’ Online Behaviors and Experiences (Jan 2018 – present)

In recent years, researchers have devoted considerable effort to investigating the relationships between users’ online behaviors and their tasks, and to personalizing information or support to suit the user’s task at hand. Various methods have been used to collect user behavior data, such as laboratory experiments and field studies, and participants may work on their own tasks or on tasks assigned by the researcher. However, the impact of these methods on user behavior remains unclear, and the reliability of the collected data is directly influenced by the methods used. This research aims to understand how study setting and task authenticity affect users’ searching and browsing behaviors and experiences, such as their engagement and the barriers they encounter.

Data Collection

I conducted a 2×2 repeated-measures experiment to examine the influences of study setting (laboratory vs. remote) and task authenticity (authentic vs. simulated) on user behavior and user experience.
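For concreteness, the short Python sketch below enumerates the four conditions each participant completed under this design; the labels are illustrative, not taken from the study’s materials.

    from itertools import product

    # The two within-subjects factors and their levels (labels are illustrative).
    settings = ["lab", "remote"]
    task_types = ["authentic", "simulated"]

    # In a repeated-measures design, each participant completes every
    # setting x task-type combination.
    conditions = list(product(settings, task_types))
    print(conditions)
    # [('lab', 'authentic'), ('lab', 'simulated'),
    #  ('remote', 'authentic'), ('remote', 'simulated')]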

Procedure

Participant Recruitment

Thirty-six Rutgers University students from 21 different disciplines were recruited to participate in this study. I posted recruitment messages and flyers in various places, such as Facebook groups organized by Rutgers University students, academic buildings, dining halls, and student centers. Participants followed a link in the recruitment message to sign up for the study. On the registration form, they provided basic demographic information and signed an informed consent form electronically.

Browser Extension

Before starting the main study, each participant installed a Chrome browser extension, whose icon appeared at the top right corner of the browser. Each participant was provided with a unique username and password to log into the study system. I worked with a developer in our lab to design the extension, which gave participants access to all the study materials, such as instructions, tasks, and a page to schedule their lab session. Participants also needed to submit two of their own search tasks (i.e., tasks in which they needed to search for information online); these were used as the authentic tasks later in the study.

Laboratory Session

Each participant completed a lab session, in which they came individually to an interaction lab at Rutgers University and worked on two search tasks. Before and after each task, they filled out a questionnaire to report their expectations (e.g., task difficulty) and experiences (e.g., engagement, barriers).

Remote Session

The remote session followed the same procedure, except that participants could complete the search tasks anywhere they chose. They had up to three days to finish the tasks and questionnaires, and they could work on them at any time during those three days. This set-up was designed to mimic a real-life situation, with a much less controlled environment than the lab.

To avoid ordering effects, the order of the two sessions was counterbalanced: half of the participants started in the lab, while the other half started remotely.

Semi-structured Interview

While quantitative data collection and analysis address the “what” questions, they do not explain why things occur. To answer the “why” questions, I interviewed each participant after they finished the study. Before each interview, I reviewed the participant’s log data and questionnaire responses and prepared questions in advance. The interview questions focused primarily on the differences between the two types of tasks and settings as the participants perceived them. Participants also picked up their $40 compensation when they came in for the interview.

Data Analysis

Data Pre-processing

Participants’ online activities, such as the queries they issued and the pages they viewed, were logged by the browser extension with timestamps. I first cleaned the raw data by removing irrelevant pages, because participants occasionally forgot to turn off the extension when they were not working on the study. To prepare for quantitative analysis, a variety of behavioral measures (e.g., page dwell time, query time, number of queries issued per task, task completion time) were extracted in Python.
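As an illustration, the Python sketch below derives a few such measures from a timestamped event log; the log schema, column names, and values are hypothetical, not the study’s actual format.

    import pandas as pd

    # Hypothetical log: one row per event, with participant, task,
    # timestamp, event type, and (for page views) the URL.
    log = pd.DataFrame({
        "participant": [1, 1, 1, 1],
        "task": ["T1", "T1", "T1", "T1"],
        "timestamp": pd.to_datetime([
            "2018-03-01 10:00:00",  # query issued
            "2018-03-01 10:00:05",  # page viewed
            "2018-03-01 10:01:40",  # query issued
            "2018-03-01 10:02:00",  # page viewed
        ]),
        "event": ["query", "page", "query", "page"],
        "url": [None, "https://example.com/a", None, "https://example.com/b"],
    }).sort_values("timestamp")

    # Page dwell time: time from a page view to the next logged event
    # (the last event of a task has no successor, so its dwell is NaT).
    log["dwell"] = log["timestamp"].shift(-1) - log["timestamp"]
    page_dwell = log.loc[log["event"] == "page", "dwell"]

    # Per-task measures: number of queries issued and task completion time.
    per_task = log.groupby(["participant", "task"]).agg(
        n_queries=("event", lambda e: (e == "query").sum()),
        completion_time=("timestamp", lambda t: t.max() - t.min()),
    )
    print(per_task)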

Quantitative Data Analysis

To examine the differences in questionnaire responses (5-point scales) between conditions, I used Wilcoxon signed-rank tests, which are suitable for ordinal data. To examine the influences of study setting and task authenticity on the behavioral measures, I used multilevel modeling. All statistical tests were performed in R.
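The analyses themselves were run in R; purely for illustration, and to keep the examples in one language, the Python sketch below shows equivalent versions of the two tests on fabricated data with hypothetical variable names.

    import numpy as np
    import pandas as pd
    from scipy.stats import wilcoxon
    import statsmodels.formula.api as smf

    # Paired Wilcoxon signed-rank test on fabricated 5-point ratings,
    # one pair (lab vs. remote) per participant.
    lab = [4, 3, 5, 4, 2, 4, 3, 5]
    remote = [3, 3, 4, 5, 2, 3, 2, 4]
    stat, p = wilcoxon(lab, remote)
    print(f"Wilcoxon signed-rank: W={stat}, p={p:.3f}")

    # Multilevel model of a behavioral measure with a random intercept
    # per participant (all data below are fabricated).
    rng = np.random.default_rng(0)
    rows = [
        {"participant": pid, "setting": s, "task_type": t,
         "dwell_time": rng.normal(40, 10)}
        for pid in range(1, 13)            # 12 hypothetical participants
        for s in ["lab", "remote"]
        for t in ["authentic", "simulated"]
    ]
    df = pd.DataFrame(rows)
    model = smf.mixedlm("dwell_time ~ setting * task_type",
                        data=df, groups=df["participant"]).fit()
    print(model.summary())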

Qualitative Data Analysis

To analyze the interview data, I first familiarized myself with the data and “pre-coded” it by highlighting interesting quotes. The main analysis was guided by an overarching question: whether, why, and how the study setting or the authenticity of tasks affected users’ search experiences. The primary focus was how participants’ experiences differed between settings and tasks, as perceived by the participants themselves. I coded the interview transcripts line by line. Lines were categorized into groups based on their connections, and each group was assigned a code that summarized its content. An initial set of codes was drawn from the interview questions, and additional codes were added based on the data. A codebook was compiled and revised throughout the coding process. Codes were later grouped into larger sub-themes and themes.

Primary Findings

  1. Some aspects of user behavior, such as page dwell time and the number of pages visited, were significantly affected by study setting and task authenticity.
  2. Users’ behaviors at the beginning of a session were less affected than their behaviors over the whole session.
  3. Users were more motivated to work on their own tasks in the lab than in the field; no such difference was observed for the simulated tasks.
  4. Distraction, multi-tasking, and dividing tasks into parts were prevalent in the field but could rarely be observed in the lab.

Publication and Manuscript:

Wang, Y. (2019). Study setting and task configuration for task-based information seeking research. Presented at the Jean Tague-Sutcliffe Doctoral Student Research Poster Competition at the ALISE 2019 Annual Conference, Knoxville, TN. [Award Honorable Mention][HCI][HIB][IIR]

Wang, Y., & Shah, C. (under review). Authentic vs. synthetic: An investigation of the influences of study settings and task configurations on search behaviors. Information Processing & Management. [HCI][HIB][IIR]
