
How Many Interviews Are Enough?

When planning an evaluation that features interviews, it is difficult to know in advance how many interviews should be conducted.  The usual approach is to continue to interview until you no longer hear new information.  (Olney & Barnes, Data Collection and Analysis, p. 23)  However, there are times when a numeric guideline would be very useful.  For example, such a guideline would help in creating the budget for the evaluation section of a proposal when that evaluation features qualitative data collection.  Guest, et al., conducted a research project in which they analyzed transcripts of sixty interviews and found that 94% of the coded topics that appeared were identified within six interviews.  Analysis of six additional interviews revealed only another 3% of all the coded topics that were eventually found in the sixty interviews.  They concluded that data saturation (the point at which no additional data are being found) occurred by the time they had analyzed twelve interviews.  They point out that similar evidence has been presented by Nielsen and Landauer, who found that technology usability studies uncover 80% of problems after six evaluations, and 90% after twelve.  In a later publication, Nielsen showed that usability testing with fifteen participants will uncover 100% of problems, but he recommends using a smaller number (“Why You Only Need to Test with Five Users” from Jakob Nielsen’s Alertbox, March 19, 2000).
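
To make the idea of data saturation concrete, here is a minimal sketch in Python.  The coded topics are invented for illustration (they are not Guest, et al.'s data); the script simply tracks how many new codes each successive interview contributes and applies one possible stopping rule.

    # Invented example: coded topics identified in each interview, in analysis order
    interview_codes = [
        {"access", "training", "cost"},   # interview 1
        {"access", "staffing", "cost"},   # interview 2
        {"training", "outreach"},         # interview 3
        {"cost", "outreach"},             # interview 4
        {"access", "staffing"},           # interview 5
        {"training", "cost"},             # interview 6
    ]

    seen = set()
    new_counts = []
    for i, codes in enumerate(interview_codes, start=1):
        new = codes - seen
        new_counts.append(len(new))
        seen |= codes
        print(f"Interview {i}: {len(new)} new code(s), {len(seen)} codes so far")

    # One possible stopping rule: treat the data as saturated once the last
    # k interviews have added no new codes.
    k = 2
    saturated = len(new_counts) >= k and all(n == 0 for n in new_counts[-k:])
    print("Saturation reached:", saturated)

Where to set that threshold, and how many interviews to budget for before you reach it, is exactly the judgment call discussed in the next paragraph.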

Is twelve the magic number for interviews, then?  Not necessarily.  Guest, et al. caution that their study involved highly structured interviews with members of a relatively homogeneous group.  In addition, the interview topics were familiar to the participants.  With a heterogeneous group or a diffuse (vague) topic, more interviews will probably be needed.  The bottom line is to select a purposeful group of interview candidates carefully.  Olney and Barnes provide more details about purposeful sampling (Olney & Barnes, Data Collection and Analysis, p. 23).

The article by Guest, et al. is not open access, but the authors’ abstract is available from the publisher:  Guest, G., et al.  “How many interviews are enough? An experiment with data saturation and variability.”  Field Methods, February 2006.  18(1):59-82.

The Value of Academic Libraries

Last year the Association of College and Research Libraries issued a substantial and thorough review of the research that has been done on how to measure library value:  “The Value of Academic Libraries: A Comprehensive Research Review and Report” by Megan Oakleaf.  Although its focus is academia, there are sections reviewing work in public libraries, school libraries, and special libraries.  I recommend the section on special libraries, which gives considerable attention to medical libraries, including references to past work regarding clinical impacts.

For those who are interested in approaches that libraries have taken to establishing their value, there is potential benefit in reading the entire report cover-to-cover.  For those who want a quick overview, these are sections that I recommend:

  • Executive Summary
  • Defining “Value”
  • Special Libraries

For more information about this report, see “A tool kit to help academic librarians demonstrate their value” from the 9/14/2010 issue of the Chronicle of Higher Education.

The full report is available at http://www.acrl.ala.org/value/.

The Critical Incident Technique and Service Evaluation

In their systematic review of clinical library (CL) service evaluation, Brettle et al. summarize evidence showing that CLs contribute to patient care through saving time and providing effective results.  Pointing out the wisdom of using evaluation measures that can be linked to organizational objectives, they advocate for using the Critical Incident Technique to collect data on specific outcomes and demonstrate where library contributions make a difference.  In the Critical Incident Technique, respondents are asked about an “individual case of specific and recent library use/information provision rather than library use in general.”  In addition, the authors point to Weightman, et al.’s suggested approaches for conducting a practical and valid study of library services:

  • Researchers are independent of the library service.
  • Respondents are anonymous.
  • Participants are selected either by random sample or by choosing all members of specific user groups.
  • Questions are developed with input from library users.
  • Questionnaires and interviews are both used.

Brettle, et al., “Evaluating clinical librarian services: a systematic review.”  Health Information and Libraries Journal, March 2011.  28(1):3-22.

Weightman, et al., “The value and impact of information provided through library services for patient care: developing guidance for best practice.”  Health Information and Libraries Journal, March 2009.  26(1):63-71.

Research Design and Research Methods

Evidence Based Library and Information Practice has published an open-access article in its EBL 101 series, titled “Research Methods: Design, Methods, Case Study…oh my!”  The author describes research design as the process of research from research question through “data collection, analysis, interpretation, and report writing,” and as the logic that connects questions and data to conclusions; it can be thought of as “a particular style, employing different methods.”  In the way evaluators love to discuss terminology, she then differentiates “research design” from “research method,” pointing out that research methods are ways to “systematize observation” and include the approaches, tools, and techniques used during data collection.  Asking whether a “case study” is a research design or a research method, she concludes that it is a kind of research design that can employ various methods.

Wilson, V. “Research Methods: Design, Methods, Case Study…oh my!”  Evidence Based Library and Information Practice 2011, 6(3):90-91.

IdeaScale – Quantified Qualitative Data

Colleagues at the National Library of Medicine Training Center notified the OERC about interesting new software that builds on social media to collect comments from communities of interest. It’s called IdeaScale.

The software is designed to support “crowdsourcing,” in which an open call is sent to a targeted group (a “community”) to participate in solving a problem or developing an innovation.  With IdeaScale, people can post their ideas about an issue, and then others can cast a “like/dislike” vote and add comments. This tool provides an interesting approach to evaluation because you can get both qualitative responses and a quantified measure of interest in the community.  The software can give evaluators a jump on thematic analysis of comments.
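
As a rough illustration of what “quantified qualitative data” might look like once exported, here is a short Python sketch.  The field names and sample records are invented; they are not IdeaScale’s actual export format or API.

    # Invented export: one record per posted idea, with vote counts and comments
    ideas = [
        {"idea": "Extend evening hours", "likes": 42, "dislikes": 5,
         "comments": ["Helps commuter students", "Staffing costs?"]},
        {"idea": "Add a chat reference service", "likes": 30, "dislikes": 2,
         "comments": ["Great for distance users"]},
        {"idea": "Drop print journals", "likes": 12, "dislikes": 28,
         "comments": ["Some faculty still rely on print"]},
    ]

    # Quantified side: net votes, sorted from most to least support
    for idea in sorted(ideas, key=lambda d: d["likes"] - d["dislikes"], reverse=True):
        net = idea["likes"] - idea["dislikes"]
        print(f"{idea['idea']}: net {net:+d} ({idea['likes']} likes, {idea['dislikes']} dislikes)")

    # Qualitative side: pool the free-text comments as a starting point for coding
    for idea in ideas:
        for comment in idea["comments"]:
            print(f"[{idea['idea']}] {comment}")

The vote tallies give a quick read on relative interest, while the pooled comments are what you would actually code for themes.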

The Institute of Museum and Library Services and the Executive Office of the President of the United States have used IdeaScale to collect feedback from communities of interest.  Here are links to their sites (now closed for comment):

http://imls.ideascale.com/

http://opengov.ideascale.com/

One drawback, from an evaluation standpoint, is that the targeted group is largely undefined, so respondents are likely to be those who are particularly drawn to the topic and may not be representative of the community (or larger population). Comments from the more-engaged members of a community can be particularly helpful for needs and process assessment, but interpretation of the quantified “votes,” particularly for outcomes assessment, would require caution, such as checking findings against another source of data.

The other drawback is that people must sign up for an IdeaScale account to contribute ideas. A two-step process (setting up an account, then contributing) can be a barrier to participation.  IdeaScale does allow participants to open accounts through Facebook, Twitter, or other social media accounts.  This might ease the participation barrier, but mostly for those who are comfortable with social media, which may further bias the data collected through IdeaScale. Still, it is an intriguing tool for the right audiences and evaluation questions.

Here’s a link to the company website, which offers a demonstration video and a free subscription:

http://ideascale.com/ 

Can I Hear You Now?

“We are losing our listening.” This is the way sound specialist Julian Treasure begins his TEDtalk called “5 Ways to Listen Better.” 

If he’s correct, that’s bad news for those of us who conduct interviews and focus groups.  With quantitative methods, we use data collection tools.  With qualitative methods, we are the data collection tools, and if we can’t listen, we aren’t valid or reliable.

If you aren’t familiar with TEDtalks, TED stands for Technology, Entertainment, and Design; and TEDtalks are a series of 1000+ short video podcasts where highly creative people share “ideas worth spreading.”  I heard Treasure’s TEDtalk on a public radio station the other day and realized it had ideas worth spreading to the OERC blog audience.

His talk includes some short, fun exercises for improving our listening skills, but his RASA acronym, about interpersonal listening, is particularly pertinent to evaluators using qualitative methods:

  • Receive or pay attention to the speaker.
  • Appreciate by using verbal cues such as “uh-huh” and “I see.”
  • Summarize periodically. (Sentences that start with “So…” work well.)
  • Ask questions afterwards.

TEDtalks are available on the web. Here’s the link to Treasure’s presentation, which is less than 8 minutes long:

http://www.ted.com/talks/lang/eng/julian_treasure_5_ways_to_listen_better.html

New Human Subjects Review Regulations On the Horizon?

Did it seem a bit much when you had to wait for weeks (months?) for your IRB’s “exempt status” approval for that 5-item questionnaire assessing hospital librarians’ attitudes toward web-based learning? Well, take heart. The Office of Management and Budget convened a working group to review – for the first time in 20 years – the current federal human subjects review regulations. The Department of Health and Human Services has posted proposed revisions for public comment at http://www.hhs.gov/ohrp/humansubjects/anprm2011page.html. A recent article in the New England Journal of Medicine summarized proposed changes (source is listed below). Here are some of the highlights:

  • Review processes would be eliminated for “exempt” projects. Such projects really are no riskier for participants than everyday activities like laundry or housework. Under proposed guidelines, researchers would be permitted to begin low-risk projects immediately after registering them, which would involve submitting a brief description to their IRBs. (In other words, no waiting to see if your IRB agrees that your project is “exempt.”) The group also proposed, for consideration, allowing competent adults to provide oral consent to participation in focus groups, interviews, and surveys. The term “exempt” would be replaced by “excused,” because low-risk studies would be excused from the review process but not exempt from the oversight described in the next bullet.
  • Uniform data-security measures would be developed and enforced. While participants in “excused” studies face minimal risk from interventions, they can be harmed through inappropriate release of data. One huge blind spot in current human subjects regulations is a lack of uniform standards for data security. The committee proposes creation of uniform standards required for all projects, including low-risk ones. Institutions would oversee compliance with processes such as random audits of excused programs.
  • Studies using secondary sources of data would automatically be classified as “excused.”  Many publications and presentations that report evaluation data fit into this category. When we evaluate programs, our primary purpose is for program improvement and enhancement of services for our users. If we take evaluation data and analyze it for publication, it becomes a secondary source, meaning it was collected for program improvement and “recycled” for scholarship.
  • Multi-site research projects would have one IRB record. When libraries from different institutions collaborate, their projects often have to undergo separate review in each participating institution. Multiple reviews sometimes force variation in assessment practices that can compromise studies without enhancing human subjects protection, so the working group recommends “one project, one record.”

Please note: No changes to federal policy have been made yet, so don’t stop following your institution’s IRB procedures. If you would like a more detailed, but readable, summary of proposed changes, please check out the following article:

Source: Emanuel EJ, Menikoff J. Reforming the Regulations Governing Research with Human Subjects. New England Journal of Medicine, 2011 Jul 25. Available online at http://www.nejm.org/doi/full/10.1056/NEJMsb1106942

Seven Practical Steps to Survey Creation

You can find a useful list of Seven Practical Steps to Create an Effective SurveyMonkey Survey at the SurveyMonkey Blog.  The post provides more detail for each step, and, in general, the steps apply to creating any survey, regardless of how it is created and distributed.

Book review: Focus Groups: A Practical Guide for Applied Research (4th edition)

I recently purchased a copy of “Focus Groups: A Practical Guide for Applied Research” by Richard Krueger and Mary Anne Casey. Krueger, professor emeritus at the University of Minnesota, has written some of the classic books on focus group research, and his co-author has conducted focus groups for government agencies and nonprofits. The experience of these two authors shines through in this well-organized, thorough text, which has a lot to recommend it:

  • The operative term in the title is “applied research.” The authors talk about the purpose of the study being the “guiding star” for selecting participants, writing the question guide, deciding on moderators, and analyzing and reporting findings.
  • The content is full of nuts-and-bolts suggestions, including a very practical chapter about Internet and telephone interviews.
  • There is an interesting chapter presenting four different approaches to focus group research: marketing research; academic research; public/nonprofit; and participatory. The chapter summarizes the evolution of the approaches and compares them in a table that will allow the readers to choose the approach that best fits the circumstances of their studies.  This chapter explains why evaluators have different takes on how to conduct focus groups.
  • There is a nice chapter on analyzing focus group data. It can be difficult to find step-by-step descriptions of how to analyze qualitative data, so this chapter alone is a reason to read this book. (You could generalize the process to analyzing other forms of qualitative evaluation data; a bare-bones illustration of one early analysis step appears after this list.)
  • The final chapter provides you with responses to challenging questions about the quality of your focus group research. For example, what do you say if someone asks “Is this scientific research?” and “how do you know your findings aren’t just your subjective opinions?” Along with suggesting responses, the authors provide their own analysis of why such questions are often posed and the assumptions lurking behind them. This section will help you defend your project and your conclusions. (It would be most helpful to read this chapter before you design your project because it helps you understand the standards for a defensible project.)
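
To give a flavor of what an early analysis step can look like, here is a bare-bones Python sketch.  It is my own illustration, not Krueger and Casey’s procedure, and the coded segments are invented: the idea is simply to tally how often each code appears under each question so that patterns across groups start to surface.

    from collections import Counter, defaultdict

    # Invented coded segments from hypothetical focus group transcripts
    coded_segments = [
        {"question": "Q1", "group": "Group A", "code": "time constraints"},
        {"question": "Q1", "group": "Group B", "code": "time constraints"},
        {"question": "Q1", "group": "Group B", "code": "training needs"},
        {"question": "Q2", "group": "Group A", "code": "training needs"},
        {"question": "Q2", "group": "Group B", "code": "funding"},
    ]

    # Tally codes within each question to see which themes dominate where
    by_question = defaultdict(Counter)
    for seg in coded_segments:
        by_question[seg["question"]][seg["code"]] += 1

    for question, counts in by_question.items():
        print(question, dict(counts))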

I recommend this book to anyone planning to run focus groups. I have conducted my fair share of discussions, but I learned new tips to use in my next project.

Reference: Krueger RA, Casey MA. Focus groups: a practical guide for applied research. 4th ed. Thousand Oaks, CA: Sage, 2009.

An Evaluation of the Use of Smartphones to Communicate Between Clinicians: A Mixed-Methods Study

This study is a good example of mixed methods: it combined quantitative measures of the frequency of smartphone use and email messaging with interviews and ethnographic observations. The semistructured interviews explored clinicians’ perceptions of their smartphone experiences; participants were selected using a purposive sampling strategy that drew from different groups of health care professionals with differing views on the use of smartphones for clinical communication. The observational methods included nonparticipatory “work-shadowing,” in which a researcher followed medical residents during day and evening shifts, plus observations at the general internal medicine nursing stations. Analysis of the qualitative data resulted in five major themes: efficiency, interruptions, interprofessional relations, gaps in perceived urgency, and professionalism. The full article is available open access:

Wu, R, et al. An Evaluation of the Use of Smartphones to Communicate Between Clinicians: A Mixed-Methods Study. Journal of Medical Internet Research, 2011. 13(3).
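
As an aside, the purposive, stratified selection described above might look something like the following Python sketch.  The participants and attributes are invented, and this is not the authors’ code: the point is simply that each professional group contributes people already known to hold contrasting views on clinical smartphone use.

    # Invented candidate pool with the attributes used for purposive selection
    candidates = [
        {"id": "MD-01",  "group": "staff physician",  "view": "favors smartphones"},
        {"id": "MD-02",  "group": "staff physician",  "view": "prefers paging"},
        {"id": "RN-01",  "group": "nurse",            "view": "favors smartphones"},
        {"id": "RN-02",  "group": "nurse",            "view": "prefers paging"},
        {"id": "RES-01", "group": "medical resident", "view": "favors smartphones"},
        {"id": "RES-02", "group": "medical resident", "view": "prefers paging"},
    ]

    # Keep one participant per (group, view) combination so that every group
    # contributes both perspectives to the interviews
    selected = {}
    for person in candidates:
        selected.setdefault((person["group"], person["view"]), person)

    for (group, view), person in sorted(selected.items()):
        print(f"{group} / {view}: {person['id']}")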