Archive for July, 2009

Are Focus Group Transcripts Necessary?

How important is it to transcribe focus group discussions? Dr. Rita O’Sullivan of the UNC-Chapel Hill School of Education sought an objective answer to that question. She and her colleagues ran an experiment in which two co-facilitators conducted seven focus groups and created summary reports of the discussions. Each co-facilitator produced a report for every focus group: one wrote a summary based on memory, handwritten notes, and a transcript of the audiotape; the other wrote a summary using memory, notes, and the audiotape itself. (Each facilitator prepared seven summaries, some using the first method and some using the second.) Then 18 educational professionals enrolled in a graduate-level educational research class compared the pairs of summaries. Sixteen of the 18 reviewers found no substantive differences between the two versions of the summaries.

What does this mean for evaluators? The authors concluded that their findings, although preliminary, suggest that transcripts are not necessary to produce useful summaries of focus group discussions in typical program evaluation settings. The findings also make transcription costs hard to justify in evaluation work, because every dollar spent on evaluation is a dollar not spent on the program.

Source: O’Sullivan, R., et al. (2004, November). Transcribing focus group articles: Is there a viable alternative? Paper presented at the joint international meeting of the American Evaluation Association and the Canadian Evaluation Society, Toronto, Canada.

SurveyMonkey software application meets federal accessibility guidelines

Someone recently asked me whether SurveyMonkey forms are accessible to people with functional limitations and disabilities. In fact, SurveyMonkey received Section 508 certification in June 2008, and according to the company’s Web site, it is the only commercial online survey application with this certification.

While SurveyMonkey automatically formats surveys to be accessible, there are a few practices we need to follow to make sure our questionnaires work well with screen readers and other assistive technologies. For instance, don’t add extra HTML coding to your questionnaire (e.g., to bold-face or italicize words), because screen readers may read parts of the HTML coding aloud as if it were text. Also, SurveyMonkey’s default color schemes are configured for maximum contrast to help low-vision users, so creating your own color schemes may make your forms less readable for this population. You can find more of SurveyMonkey’s tips for creating screen-reader-friendly forms on the company’s Web site.


AEA/CDC Training session: Utilization-Focused Evaluation

The first training session I took at the AEA/CDC Institute was Michael Patton’s Utilization-Focused Evaluation. This workshop was aimed primarily at evaluators who are sick of producing time-consuming evaluation report tombs that sit on shelves. (You’re thinking I should have written “evaluation report tomes,” but actually, those reports are where evaluation results go to die.) Patton commented that you could probably attach an executive summary to 500 sheets of blank paper, or to 500 pages from a phone book pulled from your recycling bin, and no one would ever notice, because readers never get past the executive summary.

Here’s some interesting food for thought: Patton said that the order of the evaluation standards (Utility, Feasibility, Propriety, and Accuracy) is deliberate. Utility, or usefulness to intended users, is listed first because it is deemed the most important. So, in evaluation design, the evaluation’s usefulness should be considered ahead of its feasibility (practicality and cost-effectiveness), propriety (legality, ethics, and concern for the welfare of others), and accuracy (technically adequate information about the features that determine a program’s merit or worth). All four are important standards, but utility gets top ranking. (Definitions of the four evaluation standards are available on the American Evaluation Association’s Web site.)

To enhance the utility of evaluation findings, Patton said, it is important to identify the intended users and uses of the evaluation information at the beginning of the evaluation and to create an action plan for use of the results that takes the following into account:

• The decisions the evaluation findings are meant to inform
• The timing of those decisions
• The stakeholders who will see and respond to the data

The responsibility for facilitating use of the findings falls to the evaluation consultant (or whoever is in charge of conducting the evaluation).

If you are interested in learning how to conduct more useful evaluations, I recommend Patton’s Utilization-Focused Evaluation (2008, Sage), which is now in its 4th edition.