Successes and Failures in Information Seeking Episodes (Project Lead)

Context is an essential aspect of user experience. Users constantly face obstacles and failures when they search for information online. These are often attributed either to the information seeker or to the system being used to find information. What such investigations often ignore is the context of the tasks that trigger the search and the strategies the user employs.

In this research, I led a team of three and took a holistic approach to studying users’ information seeking experiences. Specifically, we investigated how and why users failed to find the information they needed, and how those failures related to the greater context of users’ jobs to be done, their search strategies (both web and non-web), and the barriers and challenges they encountered.

The data collection was carried out in two phases. Phase One involved an online survey and semi-structured interviews. Phase Two included a diary study and additional interviews.

Phase One

Methods

Online Survey

In Phase One, we conducted an online survey on Amazon’s Mechanical Turk, a crowdsourcing site where requesters can post tasks that require human intelligence (e.g., surveys, usability testing) for participants to complete in exchange for compensation. We adopted the critical incident technique (CIT), instructing participants to describe four examples of times when they failed to find the information they needed to finish a task. We designed a set of open-ended and multiple-choice questions to guide participants’ narratives.

Semi-structured Interview

Based on the survey findings, we conducted ten in-depth individual interviews. Most interview questions were similar to those in the survey, with changes tailored to examine the survey findings further. We asked each participant to describe two to three recent unsuccessful information seeking experiences. Interviewing as a secondary method provided a richness beyond what the survey data alone could have provided.

Data Analysis

Survey data containing 208 real-life examples of information seeking failures was exported and coded in NVivo. To look for connections between users’ information seeking failures and their context, we focused primarily on classifying and synthesizing the types of failures and barriers encountered, the types of tasks users worked on, and their search strategies. We modified and applied several existing classification schemes while also drawing new codes inductively from the data.

All interviews were audio recorded and transcribed. We then analyzed the transcripts using the same classification schemes applied to the survey coding, paying special attention to points that complemented or contradicted the survey findings.

A few major findings are summarized as follows:

  • Users’ information seeking failures (ISFs) were usually not caused by a single reason but were triggered by various interconnected contextual factors, starting with the tasks themselves.
  • Tasks that involved decision making were remarkably difficult because they typically required information or opinions from more than one side.
  • Time constraints were among the most predominant factors leading to ISFs. Users frequently believed that the information existed and that they would have retrieved it given more time.
  • Users’ information needs were often too specific and situational to be fulfilled by existing information. Users often wished to consult other people directly for personalized answers. Systems are still needed that can accurately match information seekers to people who have both the knowledge and the willingness to help in a timely manner; such systems may also need to offer reasonable rewards to encourage users to contribute.

Phase Two

Methods

Online Diary

In the second phase of the research, we conducted a diary study to further understand users’ information seeking experiences. We recruited fifty-three participants from across the US on Amazon’s Mechanical Turk. They came from a variety of professional and educational backgrounds, including software engineering, biology, retail, painting, and firefighting. We assigned each participant four search tasks for which they needed to find information using whatever sources they preferred. In a structured online diary, they documented their information seeking strategies, any barriers they encountered, what they found, and whether they considered each strategy successful.

Semi-structured Interview

We followed up with an interview request after each participant completed the diary. Twenty-three of them volunteered to be interviewed individually via GoToMeeting. Diaries were limited in capturing the full picture of some situations, since filling out the diary was not an iterative process and participants sometimes omitted important details. Follow-up interviews solicited in-depth explanations of participants’ experiences and helped us verify whether our understanding of their narratives was correct.

Data Analysis

We used mixed methods to examine the connections between users’ information seeking outcomes and their information seeking strategies, quality judgments, and the barriers they encountered. Another coder and I first qualitatively coded the diary and interview data in NVivo to extract the information sources participants used, the methods used to access those sources, and the barriers they faced. Next, we conducted statistical tests, including Pearson’s chi-squared test and ordered logistic regression, to investigate the relationships between participants’ information seeking outcomes and their sources, methods, and barriers.
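As a concrete illustration, a minimal sketch of these two tests in Python is shown below. This is not the code we actually used; the data frame, column names, and values are hypothetical stand-ins for the variables extracted during coding.

    import pandas as pd
    from scipy.stats import chi2_contingency
    from statsmodels.miscmodels.ordinal_model import OrderedModel

    # Hypothetical coded data: one row per information seeking episode.
    df = pd.DataFrame({
        "source": ["web", "web", "interpersonal", "interpersonal",
                   "web", "interpersonal", "web", "interpersonal"],
        "barrier": ["none", "quality", "access", "none",
                    "none", "access", "quality", "none"],
        # Ordinal outcome: 1 = failed, 2 = partially successful, 3 = successful.
        "outcome": [3, 2, 1, 3, 1, 2, 2, 3],
    })

    # Pearson's chi-squared test of independence between the type of
    # information source and the kind of barrier encountered.
    table = pd.crosstab(df["source"], df["barrier"])
    chi2, p, dof, _ = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, p = {p:.3f}, dof = {dof}")

    # Ordered logistic regression: the ordinal outcome predicted by
    # source type (dummy-coded, with one level dropped as the baseline).
    outcome = df["outcome"].astype(pd.CategoricalDtype([1, 2, 3], ordered=True))
    exog = pd.get_dummies(df["source"], drop_first=True, dtype=float)
    model = OrderedModel(outcome, exog, distr="logit")
    result = model.fit(method="bfgs", disp=False)
    print(result.summary())

In our actual analysis, the outcome, source, method, and barrier variables came from the NVivo coding described above rather than from a toy data frame.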

Primary Findings

This work further revealed the importance of non-web sources, such as friends and family, in information seeking. Users’ social networks were significantly related to their information seeking outcomes. However, users were more likely to encounter difficulties when interacting with other people than when interacting with information sources on the web.

Find out more about this research in the following publications:

Wang, Y., & Shah, C. (2017). Investigating failures in information seeking episodes. Aslib Journal of Information Management, 69(4), 441-459. [Journal article][HIB] 

Wang, Y., Sarkar, S., & Shah, C. (2018). Juggling with information sources, task type, and information quality. In Proceedings of the 2018 SIGIR Conference on Human Information Interaction and Retrieval (CHIIR), 3, 82-91. [Conference full paper][HIB][IIR] 

Sarkar, S., Wang, Y., & Shah, C. (2017). Investigating relations of information seeking outcomes to the selection and use of information sources. In Proceedings of the Association for Information Science and Technology Annual Meeting, 54(1), 347-356. [Conference full paper][HIB][IIR] 

Wang, Y., Sarkar, S., & Shah, C. (2017). Investigating information seekers’ selection of interpersonal and impersonal sources. In Proceedings of the 2017 SIGIR Conference on Human Information Interaction and Retrieval (CHIIR), 353-356. [Conference short paper][HIB][IIR] 

Wang, Y., & Shah, C. (2016). Exploring support for the unconquerable barriers in information seeking. In Proceedings of the Association for Information Science and Technology Annual Meeting, 53(1). [Conference short paper][HIB] 
