“We are losing our listening.” This is the way sound specialist Julian Treasure begins his TEDtalk called “5 Ways to Listen Better.”
If he’s correct, that’s bad news for those of us who conduct interviews and focus groups. With quantitative methods, we use data collection tools. With qualitative methods, we *are* the data collection tools, and if we can’t listen, we aren’t valid or reliable.
If you aren’t familiar with TEDtalks, TED stands for Technology, Entertainment, and Design; and TEDtalks are a series of 1000+ short video podcasts where highly creative people share “ideas worth spreading.” I heard Treasure’s TEDtalk on a public radio station the other day and realized it had ideas worth spreading to the OERC blog audience.
His talk includes some short, fun exercises for improving our listening skills, but his RASA acronym, about interpersonal listening, is particularly pertinent to evaluators using qualitative methods:
- Receive or pay attention to the speaker.
- Appreciate by using verbal cues such as “uh-huh” and “I see.”
- Summarize periodically. (Sentences that start with “So…” work well.)
- Ask questions afterwards.
TEDtalks are available on the web. Here’s the link to Treasure’s presentation, which is less than 8 minutes long:
Did it seem a bit much when you had to wait for weeks (months?) for your IRB’s “exempt status” approval for that 5-item questionnaire assessing hospital librarians’ attitudes toward web-based learning? Well, take heart. The Office of Management and Budget convened a working group to review – for the first time in 20 years – the current federal human subjects review regulations. The Department of Health and Human Services has posted proposed revisions for public comment at http://www.hhs.gov/ohrp/humansubjects/anprm2011page.html. A recent article in the New England Journal of Medicine summarized proposed changes (source is listed below). Here are some of the highlights:
- Review processes would be eliminated for “exempt” projects. Such projects really are no riskier for participants than everyday activities like laundry or housework. Under proposed guidelines, researchers would be permitted to begin low-risk projects immediately after registering them, which would involve submitting a brief description to their IRBs. (In other words, no waiting to see if your IRB agrees that your project is “exempt.”) The group also proposed, for consideration, allowing competent adults to provide oral consent to participation in focus groups, interviews, and surveys. The term “exempt” would be replaced by “excused,” because low-risk studies would be excused from the review process but not exempt from the oversight described in the next bullet.
- Uniform data-security measures would be developed and enforced. While participants in “excused” studies face minimal risk from interventions, they can be harmed through inappropriate release of data. One huge blind spot in current human subjects regulations is a lack of uniform standards for data security. The committee proposes creation of uniform standards required for all projects, including low-risk ones. Institutions would oversee compliance with processes such as random audits of excused programs.
- Studies using secondary sources of data would automatically be classified as “excused.” Many publications and presentations that report evaluation data fit into this category. When we evaluate programs, our primary purpose is for program improvement and enhancement of services for our users. If we take evaluation data and analyze it for publication, it becomes a secondary source, meaning it was collected for program improvement and “recycled” for scholarship.
- Multi-site research projects would have one IRB record. When libraries from different institutions collaborate, their projects often must undergo separate review in each participating institution. Multiple reviews sometimes force variations in assessment practices that can compromise studies without enhancing human subjects protection, so the working group recommends “one project, one record.”
Please note: No changes to federal policy have been made yet, so don’t stop following your institution’s IRB procedures. If you would like a more detailed, but readable, summary of proposed changes, please check out the following article:
Source: Emanuel EJ, Menikoff J. Reforming the Regulations Governing Research with Human Subjects. New England Journal of Medicine, 2011 Jul 25. Available online at http://www.nejm.org/doi/full/10.1056/NEJMsb1106942
You can find a useful list of Seven Practical Steps to Create an Effective SurveyMonkey Survey at the SurveyMonkey Blog. More detail is provided for each of these steps, which, in general, apply to creating any survey (regardless of mode of creation and distribution). Briefly, here are the steps:
I recently purchased a copy of “Focus Groups: A Practical Guide for Applied Research” by Richard Krueger and Mary Anne Casey. Krueger, professor emeritus at the University of Minnesota, has written some of the classic books on focus group research, and his co-author has conducted focus groups for government agencies and nonprofits. The experience of these two authors shines through in the pages of this well-organized, thorough text, which has a lot to recommend it:
- The operative term in the title is “applied research.” The authors talk about the purpose of the study being the “guiding star” for selecting participants, writing the question guide, deciding on moderators, and analyzing and reporting findings.
- The content is full of nuts-and-bolts suggestions, including a very practical chapter about Internet and telephone interviews.
- There is an interesting chapter presenting four different approaches to focus group research: marketing research; academic research; public/nonprofit; and participatory. The chapter summarizes the evolution of the approaches and compares them in a table that will allow readers to choose the approach that best fits the circumstances of their studies. This chapter explains why evaluators have different takes on how to conduct focus groups.
- There is a nice chapter on analyzing focus group data. It can be difficult to find step-by-step descriptions of how to analyze qualitative data, so this chapter alone is a reason to read this book. (You could generalize the process to analyzing other forms of qualitative evaluation data.)
- The final chapter provides you with responses to challenging questions about the quality of your focus group research. For example, what do you say if someone asks, “Is this scientific research?” or “How do you know your findings aren’t just your subjective opinions?” Along with suggesting responses, the authors provide their own analysis of why such questions are often posed and the assumptions lurking behind them. This section will help you defend your project and your conclusions. (It would be most helpful to read this chapter before you design your project because it helps you understand the standards for a defensible project.)
I recommend this book to anyone planning to run focus groups. I have conducted my fair share of discussions, but I learned new tips to use in my next project.
Reference: Krueger RA, Casey MA. Focus Groups: A Practical Guide for Applied Research. 4th ed. Thousand Oaks, CA: Sage, 2009.
This study is a good example of mixed methods: it combined quantitative measures of the frequency of smartphone use and email messages with interviews and ethnographic observations. The semistructured interviews explored clinicians’ perceptions of their smartphone experiences; participants were selected using a purposive sampling strategy that drew from groups of health care professionals with differing views on the use of smartphones for clinical communication. The observational methods included nonparticipatory “work-shadowing,” in which a researcher followed medical residents during day and evening shifts, plus observations at the general internal medicine nursing stations. Analysis of the qualitative data resulted in five major themes: efficiency, interruptions, interprofessional relations, gaps in perceived urgency, and professionalism. The full article is available open access:
Wu R, et al. An Evaluation of the Use of Smartphones to Communicate Between Clinicians: A Mixed-Methods Study. Journal of Medical Internet Research, 2011;13(3).
The Middle Atlantic Region’s focus group project, led last winter by Sue Hunter, recently received attention through two venues of the American Evaluation Association: at the 2010 AEA annual conference in San Antonio and on the aea365 blog. On November 12, Sue presented “Using Appreciative Inquiry Focus Groups to Engage Members in Planning for the National Network of Libraries of Medicine Middle Atlantic Region.” (Cindy Olney was a co-contributor to this presentation.) MAR used the focus group project, designed using appreciative inquiry methods, to collect network member feedback in preparation for its 2011-2016 proposal. The presentation highlighted the evaluation design and lessons learned about using the appreciative inquiry approach. The presentation abstract and PowerPoint slides are available here at the AEA Public Library.
In the run-up to the conference, AEA staff asked some presenters to submit blog entries about their conference presentations for AEA365, a tip-of-the-day blog by and for evaluators. At AEA’s invitation, Sue and Cindy wrote a blog entry that was posted here on November 5.
The Institute of Museum and Library Services has awarded a grant that will test and implement methodologies for measuring the return on investment (ROI) in academic libraries. The goals are to provide evidence and a set of tested methodologies that academic libraries will be able to use in demonstrating their value. The University of Tennessee, Knoxville, will be conducting this study in collaboration with the University of Illinois at Urbana-Champaign and the Association of Research Libraries. Dr. Carol Tenopir, professor in the School of Information Sciences, is the project’s lead investigator. This news item from UT Knoxville’s Tennessee Today provides more details: “UT Shares in Grant to Study Value of Academic Libraries.”
In a complementary project, the Association of College and Research Libraries has selected Dr. Megan Oakleaf to conduct a review of the “quantitative and qualitative literature, methodologies and best practices currently in place for demonstrating the value of academic libraries.” The Association plans to issue a completed report later this year.
The American Evaluation Association “Coffee Break” webinar on June 10 featured a comparison of Survey Monkey and Zoomerang, two well-known and respected web survey tools. Both offer free accounts; you can sign up and test drive them as part of deciding whether to move to the paid options. In both cases, the free accounts include most of the system functionality but with limits on the numbers of questions and responses. The prices for paid accounts are similar for both. The presenters, Lois Ritter and Tessa Robinette, highlighted some differences between the two systems.
Survey Monkey was, as of June 10, the only online survey application that is Section 508 compliant, and, because one subscription can share multiple surveys, it is well suited to work conducted by different groups in multiple locations. Survey Monkey is available in a variety of languages and can be used with the iPhone.
Zoomerang can be used with multiple mobile devices and offers a fee-based survey translation service. It is designed for one account per user and project, and it can provide rented lists for sampling frames, built on 500 attributes, that purport to be representative of the “general population.” Zoomerang also has a nice analytic feature: “tag clouds” for thematic grouping.
For a thorough overview of how to conduct online surveys, consult the Autumn 2007 issue of New Directions for Evaluation, number 115.
The American Evaluation Association “Coffee Break” webinar series is a benefit of membership in the association. Recordings of these 20-minute sessions are archived in the AEA’s Webinar Archive E-Library (a members-only site).
Some news from Survey Monkey’s Newsletter:
Professional (paid) subscribers can now create customized links in lieu of the long automatically generated ones. You can also analyze data based on respondents’ answers by using the Filter by Response tool within the Analyze section. Professional subscribers can also create Custom Reports within the Analyze section.
A belated note about an interesting item at the Medical Library Association meeting in DC this past May. Christine Chastain-Warheit gave a fascinating 5-minute “Lightning Round” presentation, “Can Hospital Librarians Demonstrate Internal Revenue Service-mandated Community Benefit for Their Nonprofit Organizations? Reflecting on Value Provided and Connecting the Hospital Library to Community Benefit.” She pointed out that the IRS Community Benefit standard for not-for-profit hospitals includes activities that promote health in response to community needs. Community Benefit is the basis of the tax-exemption of not-for-profit hospitals. Her institution has agreed that the library’s outreach activities can be included in calculating hospital community benefit efforts for IRS reporting (Poster presented at Medical Library Association Annual Meeting, May 23, 2010). This approach could have good potential for libraries in not-for-profit hospitals demonstrating their value to their institutions and their institutions’ communities, so it’s definitely something to watch. Ms. Chastain-Warheit is Director of Medical Libraries at Christiana Hospital in Newark, DE.