Interviews are a qualitative research method involving one-on-one conversations with users. Interviews can be structured or unstructured, and can be conducted face-to-face, by phone, or by video conference. 

When assessing digital object use/reuse, practitioners would likely conduct interviews with people who have used or reused the organization’s digital content. Interviews may also be undertaken with users who have attempted digital object use/reuse but were stymied, or with users who have contemplated digital object use/reuse but have not yet acted.

Applications for assessing digital content use/reuse

Similar to focus groups, interviews are a time-intensive qualitative data collection method, typically conducted with small numbers of users. Interviews allow practitioners to gain a deep understanding of user experiences, beliefs, and attitudes, and to discuss topics in detail. Ideally, an interviewer is trained in interview techniques and knows how to help participants feel comfortable, ask non-leading questions, follow up with additional questions when needed, and take measures to counteract interviewer bias. Ideally, the interviewer should not “represent” the organization or system that is the topic of discussion, so that participants feel free to express a range of reactions. If a cultural heritage institution does not have access to trained interviewers, it is recommended that those conducting interviews familiarize themselves with best practices for leading interviews (see resources below). Interviews can be held in person or virtually.

Interviews come in many forms; three common types are structured, semi-structured, and unstructured. Structured interviews follow an interview protocol, or script, that is written ahead of time and contains the questions the interviewee will be asked, as well as predetermined probes for follow-up questions. Structured interviews are essentially questionnaires read aloud by an interviewer. Questions are set, follow-ups are set, and interviewers are “neutral”; these parameters keep conditions similar across interviews conducted with different interviewees at different times, sometimes by different interviewers. Semi-structured interviews are based on broader question prompts, and the interviewer has greater autonomy to rephrase questions, explain what questions mean, or add or omit questions. Semi-structured interviews allow for some flexibility, and interviewers must document any deviations from, additions to, or omissions from the original script. Unstructured interviews are more like conversations: initial questions may be predetermined, but follow-up and additional questions depend on the interviewees’ responses to opening questions. Unstructured interviews can be the most challenging to conduct, as interviewers must “think on their feet” and, without a pre-agreed script to follow, notetakers are harder pressed to capture all interviewee input.

Interviews are often recorded. Recording may include video and audio, audio only, or notes taken by an observer. The interviewee’s consent must be acquired before doing so. The purposes of recording an interview may include: allowing the interviewer to give their full attention to the interviewee instead of taking notes when a notetaker is not available; capturing verbal and non-verbal cues; transcribing the discussion verbatim to avoid omissions; validating notes taken by an observer; enabling multiple listeners to check reliability; and decreasing the likelihood of interviewer bias.

If interviews are used to assess digital object use/reuse, the interviewer would ask questions aimed at better understanding the frequency and context of use/reuse, motivations for use/reuse, and the impact of use/reuse. Interviews typically last between 30 and 60 minutes.

Interviews focused on the assessment of use/reuse of digital objects for a cultural heritage institution might include users who have previously used/reused digital objects from that institution; users who have attempted digital object use/reuse, but were stymied; or users who have contemplated digital object use/reuse, but have not yet acted. As in any assessment, the stories of users’ experiences are valuable, but the stories of non-users or not-yet-users also have value for understanding the topic. In general, interviewees should be invited with an eye toward including as many perspectives as possible, unless a particular past experience or other attribute is a necessary characteristic of the interviewee pool.

Supplemental materials

The D-CRAFT project team has compiled these additional supplemental materials to assist practitioners in conducting this method.

Ethical guidelines

Practitioners should follow the practices laid out in the “Ethical considerations and guidelines for the assessment of use and reuse of digital content.” The Guidelines are meant both to inform practitioners in their decision-making, and to model for users what they can expect from those who steward digital collections.

Additional guidelines for responsible practice

If audio or video of the session is being recorded, consent should be procured from each participant via a signed consent form. The form should explain the purpose of the recording, how privacy will be protected, the extent to which the recording will be shared, and plans for retention and disposal of the recording.

When taking notes or transcribing a session, include only the minimal personally identifying information necessary, particularly if files will be shared with other staff or are accessible via shared computer drives. It is not necessary to include a participant’s name on a transcript. Instead, each participant can be assigned an identifier that is used in notes, and if necessary the identifiers can be associated with other information about the participants in a separate, secure document.
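This separate-key approach can be carried out with a few lines of scripting. Below is a minimal, hypothetical Python sketch (the function names, ID format, and CSV layout are illustrative assumptions, not part of these guidelines) that assigns each participant a random identifier and writes the name-to-ID key to its own file:

```python
import csv
import secrets


def pseudonymize(names):
    """Assign each participant a random identifier such as 'P-4f3a2b'.

    Use the IDs in notes and transcripts; keep the name/ID key in a
    separate, access-restricted location.
    """
    id_by_name = {}
    used = set()
    for name in names:
        # A random hex suffix, unlike a simple counter, does not
        # reveal the order in which participants were recruited.
        pid = f"P-{secrets.token_hex(3)}"
        while pid in used:  # regenerate on the (rare) collision
            pid = f"P-{secrets.token_hex(3)}"
        used.add(pid)
        id_by_name[name] = pid
    return id_by_name


def write_key_file(id_by_name, path):
    """Write the ID-to-name key to a separate CSV for secure storage."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["participant_id", "name"])
        for name, pid in id_by_name.items():
            writer.writerow([pid, name])
```

The key file would then be stored apart from the transcripts (for example, on an access-restricted drive), so that notes can circulate among staff without exposing participant names.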

Practitioners who are not trained in qualitative data collection methods should become acquainted with the concept of interviewer bias and how to avoid it before conducting interviews.

If working at an institution of higher education, practitioners will need to contact the Institutional Review Board (IRB) to determine whether an IRB application is needed for research with human subjects. An IRB application may not be needed if the goal of the human subject research is for program improvements and findings are not intended to be generalizable or published.


Benefits

  • Unlike most quantitative data collection methods, interviews and focus groups offer the opportunity to seek clarification and ask follow-up questions to better understand users’ behavior around use/reuse of digital objects.

  • Interviews and focus groups can contribute rich anecdotal data and a human aspect in contrast to quantitative numbers. This may help an organization in storytelling around the impact of use/reuse of digital collections.

  • Conducting interviews and focus groups does not require software or specialized technology.

  • In contrast to focus groups, individual interviews avoid potential issues that may arise with group dynamics and allow the practitioner to adjust the interview style to a single individual.

  • Interviews and focus groups often produce unexpected information, provided that the script/questions are flexible, open, and invite candor.


Drawbacks

  • Interviews and focus groups are not useful for gathering comprehensive data on the number of instances of use/reuse of digital objects or on how the majority of such users reuse materials. Interviews gather detailed experiential data from a few individual users.

  • While no software is needed, practitioners should become as well-versed as possible in best practices for conducting one-on-one interviews (see supplemental materials).

  • Analysis of qualitative data can be time intensive, particularly if practitioners plan to create a full transcript of a recorded session or code the qualitative data.

  • It can be difficult to identify people to invite to one-on-one interviews on the assessment of use/reuse of digital objects because digital object use is often anonymous. 

  • Individual interviews are vulnerable to bias. The interviewee may want to please the interviewer (e.g., discomfort saying anything negative about the institution’s digital collections), or may pick up on subconscious cues indicating the interviewer’s own opinion on a topic (interviewer bias).

  • Asking users to describe their own behavior can provide less accurate data than methods that actually track user behavior (web analytics, citation analysis). Users may not remember or may not accurately report their own behavior. This can be mitigated by using critical incident framing (“Tell me about a time when X. What did you do?”); however, interviews and focus groups inherently focus on what participants say they did/felt/thought, which may not be what they actually did/felt/thought. 

Learn how practitioners have used this method

  • Interviews with advanced users of Library of Congress digital photographs
    A qualitative investigation of how people who have extensive experience using the photographic archives preserved by the Library of Congress choose digitized photographs for inclusion in a given project. The researchers conducted two-stage, semi-structured interviews with experienced users, supplemented by an independent analysis of the content and context of the source materials users consulted for specific projects with defined outcomes. The interview protocol is included as an appendix.

    Conway, P. (2010). “Modes of Seeing: Digitized Photographic Archives and the Experienced User.” The American Archivist 73(2), 425–62.


  • Survey and semi-structured interviews with arts faculty about use of digital collections
    The authors surveyed and interviewed humanities faculty from twelve research universities about their research practices with digital collections. The responses to the survey informed the development of the interview protocol, which asked how faculty use materials from digital collections (including reuse) in their research and scholarly practices. Interviews were conducted by phone and by email. The interview protocol is included in an appendix.

    Green, H., & Courtney, A. (2015). Beyond the Scanned Image: A Needs Assessment of Scholarly Users of Digital Collections. College & Research Libraries, 76(5), 690-707.

  • Interviews to understand image use by journalists and historians
    Thirty journalists and historians from academic and professional work settings were interviewed using a series of semi-structured questions regarding how they use images (for information or for illustration) and the types of image attributes used to describe an appropriate image for their work. This was done within the context of a work task model. The interview protocol is not included as an appendix.

    McCay‐Peet, L., & Toms, E. (2009). Image Use within the Work Task Model: Images as Information and Illustration. Journal of the American Society for Information Science and Technology, 60(12), 2416–2429.

Additional resources

Audenaert, N., and Furuta R. (2010). “What Humanists Want: How Scholars Use Source Materials.” In Proceedings of the 10th Annual Joint Conference on Digital Libraries – JCDL ’10, 283. Gold Coast, Queensland, Australia: ACM Press. 

Beaudoin, J., Brady, J. (2011). “Finding Visual Information: A Study of Image Resources Used by Archaeologists, Architects, Art Historians, and Artists.” Art Documentation: Journal of the Art Libraries Society of North America 30 (2): 24–36. 

Beaudoin, J. (2014). “A Framework of Image Use among Archaeologists, Architects, Art Historians and Artists.” Journal of Documentation; Bradford 70 (1): 119–47. 

Borgman, C., Darch, P., Sands, A., Pasquetto, I., Golshan, M., Wallis, J., Traweek, S. (2015). “Knowledge Infrastructures in Science: Data, Diversity, and Digital Libraries.” International Journal on Digital Libraries 16 (3): 207–27.

Brophy, P. (2008).  Measuring library performance: Principles and techniques. London: Facet Publishing. 

Chew. (2010). “Understanding the Everyday Use of Images on the Web.” 

Hughes, L., Ell, P., Knight, G., Dobreva, M. (2015). “Assessing and Measuring Impact of a Digital Collection in the Humanities: An Analysis of the SPHERE (Stormont Parliamentary Hansards: Embedded in Research and Education) Project.” Literary and Linguistic Computing 30 (2): 183–98.

“Interviews.” U.S. Department of Health and Human Services. 2006. Accessed 6/13/2020.

Marsh, D., Punzalan, R., Leopold, R.,  Butler, B.,  Petrozzi, M. (2016). “Stories of Impact: The Role of Narrative in Understanding the Value and Impact of Digital Collections.” Archival Science 16 (4): 327–72. 

Meyer, E. (2013). “Interviews.” Toolkit for the impact of digitised scholarly resources (TIDSR). Joint Information Systems Committee (JISC). Accessed 6/14/2020.

Rieger, O. (2009). “Search Engine Use Behavior of Students and Faculty: User Perceptions and Implications for Future Research.” First Monday 14 (12). 

Rowley, J. (2012). “Conducting Research Interviews.” Management Research Review.

Saracevic, T. (2004). “Evaluation of Digital Libraries: An Overview.”

Tanner, S. (n.d.). “Measuring the Impact of Digital Resources: The Balanced Value Impact Model.” King’s College London.



Contributors to this page include Joyce Chapman and Megan Oakleaf.
