Context is an essential aspect of user experience. Users constantly face obstacles and failures when they search for information online, and these are often attributed either to the information seeker or to the system being used. What is often ignored in such investigations is the context of the tasks that trigger the search, and the strategies being used. Moreover, information search does not happen solely online: users may seek information from other sources (e.g., people, physical books), and their use of non-web sources can influence their online search behaviors and strategies. In this research, I took a holistic approach to examine where and why information seeking episodes succeed or fail, and how those successes and failures relate to the greater context of users’ tasks, their search strategies (both web and non-web), and the barriers and challenges they encounter.
Data collection was carried out in two phases. Phase One involved an online survey and semi-structured interviews; Phase Two included a diary study and follow-up interviews.
In Phase One, I conducted an online survey on Amazon’s Mechanical Turk, a crowdsourcing site where requesters post tasks that require human intelligence (e.g., surveys, usability testing) and participants complete them for compensation. The survey adopted the critical incident technique (CIT), instructing participants to describe four examples of when they failed to find the information they needed to finish a task at work, at school, or in their everyday lives (e.g., health, travel). A set of open-ended and multiple-choice questions was designed to guide participants’ narratives.
Based on the survey findings, I conducted ten semi-structured individual interviews with a separate group of participants to complement and further investigate those findings. Two undergraduate students and eight graduate students from various departments were interviewed in person. Most interview questions were similar to those in the survey, with changes tailored to further examine the survey findings. I asked each participant to describe two to three recent unsuccessful information seeking experiences. Interviewing as a secondary method provided a richness that the survey data alone could not.
Survey data containing 208 real-life examples of information seeking failures was exported and coded in NVivo. To look for connections between users’ information seeking failures and their context, I primarily focused on classifying and synthesizing the types of failures and barriers encountered, the types of tasks that users worked on, and their search strategies. I modified and applied several existing classification schemes while also drawing new codes inductively from the data. All interviews were audio recorded and transcribed. I then coded the transcripts using the same classification schemes used for the survey coding, with special attention to points that complemented or contradicted the survey findings.
A few major findings are summarized as follows:
- Users’ information seeking failures (ISFs) usually did not have a single cause, but were triggered by various interconnected contextual factors, starting with the tasks themselves.
- Tasks that involved decision making were remarkably difficult because they typically required information or opinions from more than one side.
- Time constraints were among the most predominant factors leading to ISFs. Users frequently believed that the information existed and that they would have retrieved it given more time.
- Users’ information needs were often too specific and situational to be fulfilled by existing information. They often wished to directly consult other people who could provide personalized answers. We still need systems that can accurately match information seekers with people who have both the knowledge and the willingness to help in a timely manner. Such systems might also provide reasonable rewards to motivate users to contribute.
In the second phase of the research, I conducted a diary study to further understand users’ information seeking experiences. Fifty-three participants from across the US were recruited on Amazon’s Mechanical Turk. They came from a variety of professional and educational backgrounds, including software engineering, biology, retail, painting, and firefighting. I designed and assigned each participant four search tasks for which they needed to find information using whatever sources they preferred. In a structured online diary, they documented their experiences, including their information seeking strategies, the barriers they encountered (if any), their findings, and whether they considered each strategy successful.
I followed up with an interview request after each participant completed the diary. Twenty-three of them volunteered to be interviewed individually via GoToMeeting. Diaries were limited in capturing the full picture of some situations, since filling out the diary was not an iterative process and participants sometimes missed important details. The follow-up interviews solicited in-depth explanations of participants’ experiences and helped me verify whether my understanding of their narratives was correct.
Mixed methods were used to examine the connections between users’ information seeking outcomes and their information seeking strategies, quality judgments, and barriers encountered. Another coder and I first qualitatively coded the diary and interview data in NVivo to extract the information sources used by participants, the methods used to access those sources, and the barriers they faced. Next, we conducted statistical tests, such as Pearson’s chi-squared test and ordered logistic regression, to investigate the relationships between participants’ information seeking outcomes and their sources, methods, and barriers.
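The chi-squared step described above can be sketched as follows. The source categories and counts here are hypothetical illustrations, not the study’s data, and scipy’s `chi2_contingency` stands in for the actual analysis pipeline:

```python
# Sketch of a chi-squared test of independence between information source
# type and information seeking outcome. All counts are hypothetical.
from scipy.stats import chi2_contingency

# Rows: source type; columns: [failure, success] counts (made-up numbers).
observed = [
    [10, 20],  # interpersonal sources (e.g., friends, family)
    [20, 10],  # web sources
]

# correction=False disables the Yates continuity correction for clarity.
chi2, p, dof, expected = chi2_contingency(observed, correction=False)
print(f"chi2={chi2:.3f}, p={p:.4f}, dof={dof}")
```

An ordered logistic regression on graded outcome ratings could follow a similar pattern, for instance with statsmodels’ `OrderedModel`, though the study’s exact model specification is not shown here.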
This work further revealed the importance of non-web sources, such as friends and family, in information seeking. Users’ social networks were significantly related to their information seeking outcomes. However, users were more likely to encounter difficulties when interacting with other people than when interacting with information sources on the web.
Find out more about this research in the following publications:
Wang, Y., & Shah, C. (2017). Investigating failures in information seeking episodes. Aslib Journal of Information Management, 69(4), 441-459. [Journal article][HIB]
Wang, Y., Sarkar, S., & Shah, C. (2018). Juggling with information sources, task type, and information quality. In Proceedings of the 2018 SIGIR Conference on Human Information Interaction and Retrieval (CHIIR), 3, 82-91. [Conference full paper][HIB][IIR]
Sarkar, S., Wang, Y., & Shah, C. (2017). Investigating relations of information seeking outcomes to the selection and use of information sources. In Proceedings of the Association for Information Science and Technology Annual Meeting, 54(1), 347-356. [Conference full paper][HIB][IIR]
Wang, Y., Sarkar, S., & Shah, C. (2017). Investigating information seekers’ selection of interpersonal and impersonal sources. In Proceedings of the 2017 SIGIR Conference on Human Information Interaction and Retrieval (CHIIR), 353-356. [Conference short paper][HIB][IIR]
Wang, Y., & Shah, C. (2016). Exploring support for the unconquerable barriers in information seeking. In Proceedings of the Association for Information Science and Technology Annual Meeting, 53(1). [Conference short paper][HIB]