
Archive for the ‘Practical Evaluation’ Category

Strategies for Improving Response Rate

The open access December 2011 issue of Evaluation and the Health Professions includes several articles about strategies for improving survey response rates among health professionals. Each explores variations on Dillman’s Tailored Design Method, also known as TDM (see this Issue Brief from the University of Massachusetts Medical School’s Center for Mental Health Services Research for a summary of TDM).

“Surveying Nurses: Identifying Strategies to Improve Participation” by J. VanGeest and T.P. Johnson (Evaluation and the Health Professions, 34(4):487-511)

The authors conducted a systematic review of efforts to improve response rates to nurse surveys and found that small financial incentives were effective while nonmonetary incentives were not. They also found that postal and telephone surveys were more successful than web-based approaches.

“Surveying Ourselves: Examining the Use of a Web-Based Approach for a Physician Survey” by K.A. Matteson; B.L. Anderson; S.B. Pinto; V. Lopes; J. Schulkin; and M.A. Clark (Eval Health Prof 34(4):448-463)

The authors distributed a survey via paper and the web to a national sample of obstetrician-gynecologists and found little systematic difference between responses using the two modes, except that university physicians were more likely to complete the web-based version than private practice physicians.  Data quality was also better for the web survey: fewer missing and inappropriate responses.  The authors speculate that university-based physicians may spend more time at computers than do private physicians.  However, given that response rate was good for both groups, the authors conclude that using web-based surveys is appropriate for physician populations and suggest controlling for practice type.

“Effects of Incentives and Prenotification on Response Rates and Costs in a National Web Survey of Physicians” by J. Dykema; J. Stevenson; B. Day; S.L. Sellers; and V.L. Bonham (Eval Health Prof 34(4):434-447, 2011)

The authors found that response rates were highest in the groups entered into a $50 or $100 lottery. They also found that postal prenotification letters increased response rates, although the small $2 token preincentive included with the letter had no additional effect and was not cost-effective. The authors conclude that larger promised incentives are more effective than nominal preincentives.

“A Randomized Trial of the Impact of Survey Design Characteristics on Response Rates among Nursing Home Providers” by M. Clark et al. (Eval Health Prof 34(4):464-486)

This article describes an experiment in maximizing participation by both the Director of Nursing and the Administrator of long-term care facilities.  One of the variables was incentive structure, in which the amount of incentive increased if both participated, and decreased if only one participated.  The authors found that there were no differences in the likelihood of both respondents participating by mode, questionnaire length, or incentive structure.

Evaluating Information Skills Training in Health Libraries

In her 2007 systematic review of how health libraries evaluate their training activities, Alison Brettle points out that these evaluations are designed to address various questions: Are class participants learning? Are resources being used in the best way? Are more resources needed? What changes should be made to improve materials and methods? This review focuses on measures that examine changes in class participants’ knowledge, skills, or behavior. Most of these measures were used in the following study designs:

  • Pre-experimental (one group, post-test only; one group with pre- and post-test; two groups post-test only)
  • Quasi-experimental (control group, pre- and post-testing without randomization)
  • Randomized controlled experiments

A few of the studies in the review were qualitative, and some were descriptive.  Methods of measuring outcomes of information skills classes included:

  • A score sheet or checklist listing features of a search
  • Surveys covering perceptions of the training, participants’ confidence, and ability to use the knowledge gained
  • The Objective Structured Clinical Exam (OSCE), in which medical students perform clinical tasks in a short period of time, with literature searching included as one of the tasks

Appendix 2 of the article lists the studies that Brettle reviewed, describes methodologies and tools, and indicates how (or whether) instruments were deemed reliable and valid. (Quick review: reliable instruments produce the same results if used again in the same situation; valid instruments actually measure what they claim to measure, and might produce generalizable results.)
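To make that quick review concrete, here is a minimal Python sketch of one common way to check reliability: administer the same instrument twice and correlate the scores (test-retest reliability). The checklist scores below are invented for illustration and are not data from any study in Brettle’s review.

```python
# Illustrative only: a simple test-retest reliability check using Pearson's r.
# The scores are invented; they are not data from Brettle's review.

from statistics import correlation  # requires Python 3.10+

# Search-skills checklist scores for ten participants, administered twice
# under the same conditions (e.g., two weeks apart).
first_administration = [12, 15, 9, 18, 14, 11, 16, 13, 10, 17]
second_administration = [13, 14, 9, 17, 15, 10, 16, 12, 11, 18]

# A reliable instrument should yield closely agreeing scores, i.e., a high
# positive correlation between the two administrations.
r = correlation(first_administration, second_administration)
print(f"Test-retest reliability (Pearson's r): {r:.2f}")
```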

This systematic review is open access and you can find the full text here:
Brettle, A. “Evaluating information skills training in health libraries: a systematic review.” Health Information & Libraries Journal, December 2007; 24(Supplement s1):18–37.

Comparison of a Postal Survey and Mixed-Mode Survey

This open access article from the Journal of Medical Internet Research compares two modes of questionnaire delivery: postal only, and mixed-mode (email followed up by postal mail). The authors looked at the following; a hypothetical illustration of two of these dimensions appears after the list:

  • Respondent characteristics
  • Response rate
  • Response time
  • Rating scale responses
  • Data quality
  • Total costs
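As a purely hypothetical illustration of how two of these dimensions, response rate and total cost, might be tabulated side by side, here is a short Python sketch. None of the numbers come from the article.

```python
# Hypothetical numbers only; the article's actual figures are not reproduced here.

def summarize(mode, invited, completed, total_cost):
    """Print response rate and cost per completed questionnaire for one survey mode."""
    response_rate = completed / invited
    cost_per_response = total_cost / completed
    print(f"{mode:12s} response rate: {response_rate:5.1%}  "
          f"cost per completed questionnaire: ${cost_per_response:.2f}")

summarize("Postal",     invited=1000, completed=420, total_cost=3800.00)
summarize("Mixed-mode", invited=1000, completed=410, total_cost=2600.00)
```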

Here are the conclusions:

“Mixed-mode surveys are an alternative method to postal surveys that yield comparable response rates and groups of respondents, at lower costs. Moreover, quality of health care was not rated differently by respondents to the mixed-mode or postal survey. Researchers should consider using mixed-mode surveys instead of postal surveys, especially when investigating younger or more highly educated populations.”

Zuidgeest, M., et al. “A Comparison of a Postal Survey and Mixed-Mode Survey Using a Questionnaire on Patients’ Experiences With Breast Care.” Journal of Medical Internet Research (JMIR) 2011;13(3):e68.

How Many Interviews Are Enough?

When planning an evaluation that features interviews, it is difficult to know in advance how many interviews should be conducted. The usual approach is to continue interviewing until you no longer hear new information (Olney & Barnes, Data Collection and Analysis, p. 23). However, there are times when a numeric guideline would be very useful. For example, such a guideline would help in creating the budget for the evaluation section of a proposal when that evaluation features qualitative data collection. Guest et al. conducted a research project in which they analyzed transcripts of sixty interviews and found that 94% of the coded topics were identified within the first six interviews. Analysis of six additional interviews revealed only another 3% of all the coded topics that were eventually found in the sixty interviews. They concluded that data saturation (the point at which no additional data are being found) occurred by the time they had analyzed twelve interviews. They point out that similar evidence has been presented by Nielsen and Landauer, who found that technology usability studies uncover 80% of problems after six evaluations and 90% after twelve. In a later publication, Nielsen showed that usability testing with fifteen participants will uncover 100% of problems, but he recommends using a smaller number (“Why You Only Need to Test with Five Users” from Jakob Nielsen’s Alertbox, March 19, 2000).
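Here is a rough Python sketch of the bookkeeping behind this kind of saturation analysis: track how many new coded topics each successive interview contributes and watch the cumulative share level off. The per-interview counts below are invented and only mimic the front-loaded pattern Guest et al. describe.

```python
# Invented counts of *new* coded topics contributed by each successive interview;
# not Guest et al.'s data, just a similarly front-loaded pattern.
new_codes_per_interview = [40, 25, 12, 8, 6, 3, 2, 1, 1, 0, 1, 0]
total_codes = sum(new_codes_per_interview)

cumulative = 0
saturation_point = None
for i, new_codes in enumerate(new_codes_per_interview, start=1):
    cumulative += new_codes
    share = cumulative / total_codes
    print(f"After interview {i:2d}: {share:5.1%} of all coded topics identified")
    # A simple (and admittedly arbitrary) saturation rule: at least 95% of codes
    # have appeared and the latest interview added nothing new.
    if saturation_point is None and share >= 0.95 and new_codes == 0:
        saturation_point = i

print(f"Approximate saturation reached by interview {saturation_point}")
```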

Is twelve the magic number for interviews, then?  Not necessarily.  Guest, et al. caution that their study involved highly structured interviews with members of a relatively homogeneous group.  In addition, the interview topics were familiar to the participants.  With a heterogeneous group or a diffuse (vague) topic, more interviews will probably be needed.  The bottom line is to select a purposeful group of interview candidates carefully.  Olney and Barnes provide more details about purposeful sampling (Olney & Barnes, Data Collection and Analysis, p. 23).

The article by Guest et al. is not open access, but this is a link to the authors’ abstract: Guest, G., et al. “How many interviews are enough? An experiment with data saturation and variability.” Field Methods, February 2006; 18(1):59-82.

Seven Practical Steps to Survey Creation

You can find a useful list of Seven Practical Steps to Create an Effective SurveyMonkey Survey at the SurveyMonkey Blog. The post provides more detail for each of the steps, which, in general, apply to creating any survey, regardless of the mode of creation and distribution.

Book Review: Focus Groups: A Practical Guide for Applied Research (4th edition)

I recently purchased a copy of “Focus Groups: A Practical Guide for Applied Research” by Richard Krueger and Mary Anne Casey. Krueger, professor emeritus at the University of Minnesota, has written some of the classic books on focus group research, and his co-author has conducted focus groups for government agencies and nonprofits. The experience of these two authors shines through in the pages of this well-organized, thorough text, which has a lot to recommend it:

  • The operative term in the title is “applied research.” The authors talk about the purpose of the study being the “guiding star” for selecting participants, writing the question guide, deciding on moderators, and analyzing and reporting findings.
  • The content is full of nuts-and-bolts suggestions, including a very practical chapter about Internet and telephone interviews.
  • There is an interesting chapter presenting four different approaches to focus group research: marketing research; academic research; public/nonprofit; and participatory. The chapter summarizes the evolution of the approaches and compares them in a table that will help readers choose the approach that best fits the circumstances of their studies. This chapter explains why evaluators have different takes on how to conduct focus groups.
  • There is a nice chapter on analyzing focus group data. It can be difficult to find step-by-step descriptions of how to analyze qualitative data, so this chapter alone is a reason to read this book. (You could generalize the process to analyzing other forms of qualitative evaluation data.)
  • The final chapter provides you with responses to challenging questions about the quality of your focus group research. For example, what do you say if someone asks “Is this scientific research?” or “How do you know your findings aren’t just your subjective opinions?” Along with suggesting responses, the authors provide their own analysis of why such questions are often posed and the assumptions lurking behind them. This section will help you defend your project and your conclusions. (It would be most helpful to read this chapter before you design your project because it helps you understand the standards for a defensible project.)

I recommend this book to anyone planning to run focus groups. I have conducted my fair share of discussions, but I learned new tips to use in my next project.

Reference: Krueger RA, Casey MA. Focus Groups: A Practical Guide for Applied Research. 4th ed. Thousand Oaks, CA: Sage, 2009.

Survey Monkey or Zoomerang: How to Choose?

The American Evaluation Association “Coffee Break” webinar on June 10 featured a comparison of Survey Monkey and Zoomerang, two well-known and respected web survey tools. Both feature free accounts; you can sign up and test-drive them as part of deciding whether to move to the paid options. In both cases, the free accounts feature most of the system functionality but with limits on the numbers of questions and responses. The prices for paid accounts are similar for both. The presenters, Lois Ritter and Tessa Robinette, highlighted some differences between the two systems.

Survey Monkey was, as of June 10, the only online survey application that was Section 508 compliant and, because one subscription can share multiple surveys, it is a good fit for work being conducted by different groups in multiple locations. Survey Monkey is available in a variety of languages and can be used with the iPhone.

Zoomerang can be used with multiple mobile devices and offers a fee-based survey translation service. Zoomerang is designed for one account per user and project, and can provide rented lists for sampling frames on 500 attributes that purport to be representative of the “general population.” Zoomerang also has a nice analytic feature: “tag clouds” for thematic grouping.

For a thorough overview of how to conduct online surveys, consult the Autumn 2007 issue of New Directions for Evaluation, number 115.

The American Evaluation Association “Coffee Break” webinar series is a benefit of membership in the association. Recordings of these 20-minute sessions are archived in the AEA’s Webinar Archive E-Library (a members-only site).

Survey Monkey News

Some news from Survey Monkey’s Newsletter:

Professional (paid) subscribers can now make customized links in lieu of those long automatically-generated ones. And you can analyze data based on respondents’ answers by using the Filter by Response tool within the Analyze section. Professional subscribers can also create Custom Reports within the Analyze section.

Empowerment Evaluation in Academic Medicine: Example

The OERC promotes collaborative evaluation approaches in our training and consultations, so it is always nice to have published examples of collaborative evaluation projects. A recent issue of Academic Medicine features an article showing how a collaborative evaluation approach called Empowerment Evaluation was applied to improve the medical curriculum at Stanford University School of Medicine. David Fetterman, originator of Empowerment Evaluation, is the primary author of the article. A key contribution of this evaluation approach, which is exemplified in the article, is the ongoing involvement of stakeholders in the analysis and use of program data. This article provides strong evidence that the approach can lead to positive outcomes for programs.

Citation: Fetterman DM, Deitz J, Gesundheit N. Empowerment Evaluation: A Collaborative Approach to Evaluating and Transforming a Medical School Curriculum. Academic Medicine 2010 May; 85(5):813-820.  A link to the abstract is available at http://journals.lww.com/academicmedicine/Abstract/2010/05000/Empowerment_Evaluation__A_Collaborative_Approach.25.aspx

Are Focus Group Transcripts Necessary?

How important is it to transcribe focus group discussions? Dr. Rita O’Sullivan from the UNC-Chapel Hill School of Education sought an objective answer to that question. She and colleagues ran an experiment in which two co-facilitators ran seven focus groups and created summary reports of the discussions. Each co-facilitator produced a report for each focus group: one wrote a summary based on memory, handwritten notes and a transcript of the audio tape; the other wrote a summary using memory, notes and the audiotape. (Each facilitator prepared seven summaries, some using the first method and some using the second.)  Then, 18 educational professionals who were enrolled in a graduate-level educational research class compared the pairs of summaries.  Sixteen of the 18 reviewers found no substantive differences between the two versions of the summaries.

What does this mean for evaluators? The authors concluded that their findings, although preliminary, suggest that, for the typical program evaluation setting, transcripts are not necessary to produce useful focus group discussion summaries. The findings also make it hard to justify the transcription costs for focus groups in evaluation settings, because every dollar spent on evaluation is a dollar not spent on the program.
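To put the cost argument in concrete terms, here is a back-of-the-envelope Python sketch; the number of groups, discussion length, and per-minute transcription rate are all assumptions for illustration, not figures from the study.

```python
# Back-of-the-envelope estimate; all inputs are assumed values, not study data.
focus_groups = 7                 # e.g., the number of groups in a modest evaluation
minutes_per_group = 90           # assumed average length of each discussion
rate_per_audio_minute = 1.75     # assumed commercial transcription rate, USD

transcription_cost = focus_groups * minutes_per_group * rate_per_audio_minute
print(f"Estimated transcription cost for {focus_groups} focus groups: "
      f"${transcription_cost:,.2f}")
```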

Source: O’Sullivan R, et al. “Transcribing focus group interviews: Is there a viable alternative?” Paper presented at the joint international meeting of the American Evaluation Association and the Canadian Evaluation Society, Toronto, Canada, November 2004.