Interviews are a qualitative research method involving one-on-one conversations with users. Interviews can range from structured to unstructured, and can be conducted face-to-face, by phone, or by video conference.
When assessing digital object use/reuse, practitioners would likely interview people who have used or reused the organization’s digital content, but interviews may also be conducted with users who attempted digital object use/reuse but were stymied, or with users who have contemplated digital object use/reuse but have not yet acted.
Similar to focus groups, interviews are a time-intensive qualitative data collection method, typically conducted with small numbers of users. Interviews allow practitioners to gain a deep understanding of user experiences, beliefs, and attitudes, and to discuss topics in detail. Ideally, an interviewer is trained in interview techniques and knows how to help participants feel comfortable, ask non-leading questions, follow up with additional questions when needed, and take measures to counteract interviewer bias. Ideally, the interviewer should not “represent” the organization or system that is the topic of discussion, so that participants feel free to express a range of reactions. If a cultural heritage institution does not have access to trained interviewers, it is recommended that those conducting interviews familiarize themselves with best practices for leading interviews (see resources below).
Interviews come in many forms; three common types are structured, semi-structured, and unstructured. Structured interviews follow an interview protocol, or script, written ahead of time, which contains the questions the interviewee will be asked as well as predetermined probes for follow-up questions. Structured interviews are essentially questionnaires read aloud by an interviewer. Questions are set, follow-ups are set, and interviewers are “neutral”; these parameters keep conditions similar across interviews conducted with different interviewees at different times, sometimes by different interviewers. Semi-structured interviews are based on broader question prompts, and the interviewer has greater autonomy to rephrase questions, explain what questions mean, add questions, or omit questions. Semi-structured interviews allow for some flexibility, and interviewers must document any deviations from, additions to, or omissions from the original script. Unstructured interviews are more like conversations. Initial questions may be predetermined, but follow-up and additional questions depend on the interviewee’s responses to opening questions. Unstructured interviews can be the most challenging to conduct, as interviewers must “think on their feet,” and, without a pre-agreed script to follow, notetakers find it harder to capture all interviewee input.
Interviews are often recorded. Recording may include video and audio, audio only, or notes taken by an observer, and the interviewee’s consent must be acquired beforehand. The purposes of recording an interview may include: allowing the interviewer to give their full attention to the interviewee rather than taking notes when a notetaker is not available; capturing verbal and non-verbal cues; transcribing the discussion verbatim to avoid omissions; validating notes taken by an observer; enabling multiple listeners to check reliability; and decreasing the likelihood of interviewer bias.
If interviews are used as part of assessment methods for digital object use/reuse, the interviewer would ask questions aimed at better understanding topics such as the frequency and context of use/reuse, motivations for use/reuse, and the impact of use/reuse. Interviews typically last 30 to 60 minutes.
Interviews focused on the assessment of use/reuse of digital objects for a cultural heritage institution might include users who have previously used/reused digital objects from that institution; users who have attempted digital object use/reuse, but were stymied; or users who have contemplated digital object use/reuse, but have not yet acted. As in any assessment, the stories of users’ experiences are valuable, but the stories of non-users or not-yet-users also have value for understanding the topic. In general, interviewees should be invited with an eye toward including as many perspectives as possible, unless a particular past experience or other attribute is a necessary characteristic of the interviewee pool.
The D-CRAFT project team has compiled these additional supplemental materials to assist practitioners in conducting this method.
Practitioners should follow the practices laid out in the “Ethical considerations and guidelines for the assessment of use and reuse of digital content.” The Guidelines are meant both to inform practitioners in their decision-making, and to model for users what they can expect from those who steward digital collections.
If audio or video of the session is being recorded, consent should be procured from each participant via a signed consent form. The form should explain the purpose of the recording, how privacy will be protected, the extent to which the recording will be shared, and plans for retention and disposal of the recording.
When taking notes or transcribing a session, include only the minimal personally identifying information necessary, particularly if files will be shared with other staff or are accessible via shared computer drives. It is not necessary to include a participant’s name on a transcript. Instead, each participant can be assigned an identifier that is used in notes, and if necessary the identifiers can be associated with other information about the participants in a separate, secure document.
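For practitioners who keep notes and transcripts as digital files, the following is a minimal sketch of this practice in Python. It is illustrative only and not part of the D-CRAFT toolkit; the identifier format (“P-” plus four random characters) and the file name participant_key.csv are assumptions chosen for the example.

```python
import csv
import secrets

def assign_identifier(existing_ids):
    """Generate a short random identifier such as 'P-4f2a', avoiding collisions."""
    while True:
        candidate = f"P-{secrets.token_hex(2)}"  # hypothetical format for this sketch
        if candidate not in existing_ids:
            return candidate

# Names are collected on consent forms; they never appear in notes or transcripts.
participants = ["(name from consent form)", "(name from consent form)"]
ids = set()
key_rows = []  # the name-to-identifier key, kept separate from transcripts

for name in participants:
    pid = assign_identifier(ids)
    ids.add(pid)
    key_rows.append({"identifier": pid, "name": name})

# Write the key to its own file, to be stored on secure, access-restricted storage;
# transcripts and any shared notes reference only the identifier.
with open("participant_key.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["identifier", "name"])
    writer.writeheader()
    writer.writerows(key_rows)
```

Because the key file is the only place identifiers and names appear together, it can be disposed of on its own schedule without touching the transcripts themselves.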
Practitioners who are not trained in qualitative data collection methods should become acquainted with the concept of interviewer bias and how to avoid it before conducting interviews.
If working at an institution of higher education, practitioners will need to contact the Institutional Review Board (IRB) to determine whether an IRB application is needed for research with human subjects. An IRB application may not be needed if the goal of the human subject research is for program improvements and findings are not intended to be generalizable or published.
Chassanoff, A. (2018). Historians’ Experiences Using Digitized Archival Photographs as Evidence. The American Archivist, 81(1), 135–164.
Green, H., & Courtney, A. (2015). Beyond the Scanned Image: A Needs Assessment of Scholarly Users of Digital Collections. College & Research Libraries, 76(5), 690–707.
McCay‑Peet, L., & Toms, E. (2009). Image Use within the Work Task Model: Images as Information and Illustration. Journal of the American Society for Information Science and Technology, 60(12), 2416–2429.
Contributors to this page include Joyce Chapman and Megan Oakleaf.
Chapman, J., & Oakleaf, M. (2023). Interviews. Digital Content Reuse Assessment Framework Toolkit (D-CRAFT); Council on Library & Information Resources. https://reuse.diglib.org/toolkit/interview/