Archive for the ‘Practical Evaluation’ Category
How important is it to transcribe focus group discussions? Dr. Rita O’Sullivan of the UNC-Chapel Hill School of Education sought an objective answer to that question. She and her colleagues ran an experiment in which two co-facilitators conducted seven focus groups and created summary reports of the discussions. Each co-facilitator produced a report for each focus group: one wrote a summary based on memory, handwritten notes, and a transcript of the audiotape; the other wrote a summary using memory, notes, and the audiotape itself. (Each facilitator prepared seven summaries, some using the first method and some using the second.) Then 18 educational professionals enrolled in a graduate-level educational research class compared the pairs of summaries. Sixteen of the 18 reviewers found no substantive differences between the two versions of the summaries.
What does this mean for evaluators? The authors concluded that their findings, although preliminary, suggest that transcripts are not necessary to produce useful focus group summaries in the typical program evaluation setting. The findings also make it hard to justify transcription costs for focus groups in evaluation settings, because every dollar spent on evaluation is a dollar not spent on the program.
Source: O’Sullivan, R., et al. “Transcribing focus group articles: Is there a viable alternative?” Paper presented at the joint international meeting of the American Evaluation Association and the Canadian Evaluation Society, Toronto, Canada, November 2004.
Someone recently asked me if SurveyMonkey forms are accessible to those with functional limitations and disabilities. In fact, SurveyMonkey received Section 508 certification in June 2008. According to the company’s Web site, they are the only commercial online survey application that has this certification.
While SurveyMonkey automatically formats surveys to be accessible, there are a few practices we need to follow to make sure SurveyMonkey questionnaires work well with screen readers and other assistive technologies. For instance, don’t add extra HTML coding to your questionnaire (e.g., to bold-face or italicize words), because screen readers may read parts of the HTML coding aloud as text. Also, SurveyMonkey’s default color schemes are configured for maximum contrast to help low-vision users; creating your own color schemes may make your forms less readable for this population. You can find more tips from SurveyMonkey for creating screen-reader-friendly forms at this link.
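To make the first tip concrete, here is a minimal sketch, in plain Python with invented question text, of a pre-flight check that flags stray HTML tags in question wording before it goes into SurveyMonkey. The function name and sample questions are my own illustrations, not part of SurveyMonkey’s tools.

```python
import re

# Matches anything that looks like an HTML tag, e.g. <b>, </i>, <span style="...">.
HTML_TAG = re.compile(r"</?[a-zA-Z][^>]*>")

def flag_html_markup(questions):
    """Return (question number, tags found) for any question containing HTML tags."""
    return [(i, HTML_TAG.findall(q))
            for i, q in enumerate(questions, start=1)
            if HTML_TAG.search(q)]

# Hypothetical questionnaire items; a screen reader may announce the tags
# in the second item as literal text.
questions = [
    "How often do you visit the library?",
    "How <b>satisfied</b> are you with our services?",
]

for number, tags in flag_html_markup(questions):
    print(f"Question {number} contains HTML markup: {tags}")
```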
AEA/CDC Training session: Utilization-focused evaluation
The first training session I took at the AEA/CDC Institute was Michael Patton’s Utilization-Focused Evaluation. The workshop was pitched primarily at evaluators who are sick of producing time-consuming evaluation report tombs that sit on shelves. (You’re thinking I should have written “evaluation report tomes,” but actually, those reports are where evaluation results go to die.) Patton commented that you could probably attach an executive summary to 500 sheets of blank paper (or 500 pages from a phone book pulled from your recycling bin) and no one would ever notice, because they never read past the executive summary.
Here’s some interesting food for thought: Patton said that the order of the evaluation standards (Utility, Feasibility, Propriety, and Accuracy) is deliberate: Utility, or usefulness to intended users, is listed first because it’s deemed the most important. So, in evaluation design, the evaluation’s usefulness should be considered ahead of its feasibility (practicality and cost effectiveness), propriety (legality, ethics, and concern for the welfare of others), and accuracy (technically adequate information about features that determine merit or worth of a program). All are important standards, but utility gets top ranking. (Definitions for the four evaluation standards are listed here at the American Evaluation Association web site.)
To enhance the utility of evaluation findings, Patton said it is important to identify the intended users and uses of the evaluation information at the beginning of the evaluation and create an action plan for use of evaluation results that takes the following into account:
· The decisions the evaluation findings are meant to inform
· Timing of those decisions
· The stakeholders who will see and respond to the data
The responsibility for facilitating use of the findings falls on the evaluation consultant (or whoever is in charge of conducting the evaluation).
If you are interested in learning how to conduct more useful evaluations, I recommend Patton’s Utilization-Focused Evaluation (2008, Sage), which is now in its 4th edition.
SurveyMonkey’s newsletter reports that its surveys are now optimized for use on iPhones. The June 2009 newsletter states:
“Because it is a device with a modern, standards-compliant browser, any respondent can receive a link to your survey and access it directly on their iPhone.”
Furthermore, SurveyMonkey is currently working to optimize its surveys for other mobile and hand-held devices.
In addition, you now have the ability to do the following:
- Create and download custom charts to enhance the presentation of your survey data.
- Import these graphics into your own presentation software, such as PowerPoint or Word.
To learn more about the updates, you can visit the following topic in the help center: Creating Custom Charts
An article entitled “Demystifying Survey Research: Practical Suggestions for Effective Question Design” was published in the journal Evidence Based Library and Information Practice (2007). The article aims to provide practical suggestions for writing effective questions for written surveys. Sample survey questions in the article illustrate how basic techniques, such as choosing appropriate question forms and incorporating scales, can improve survey questions.
Since this is a peer reviewed, open-access journal, those interested may access the full-text article online at: http://ejournals.library.ualberta.ca/index.php/EBLIP/article/view/516/668.
In addition, for those interested in exploring survey research more, I have found the following print resources to be very helpful in this learning process:
Converse, J.M., and S. Presser. Survey Questions: Handcrafting the Standardized Questionnaire. Thousand Oaks, CA: Sage, 1986.
Fink, A. How to Ask Survey Questions. Thousand Oaks, CA: Sage, 2003.
Fowler, F.J. Improving Survey Questions: Design and Evaluation. Thousand Oaks, CA: Sage, 1995.
SurveyMonkey’s April newsletter announces its new Bounce Report feature:
“Sometimes when sending survey invitations through our collector, the email addresses may bounce the message back to you because the email is invalid, the receiving server is too busy, the receiving email inbox is full, and so on.
Now when sending your survey invitations through our Email Invitation collector, the messages are delivered by our email server. If the message is undeliverable, the email will be considered a Hard Bounced email in the Edit Recipients portion of the collector.
You now have the ability to do the following:
- View the Bounced emails.
- Export them from the list.
- Remove them from the list.
This will help to ensure that your lists are current and contain valid emails for future survey response collections.
To learn more about the new Bounce Report feature, please refer to the following Help Topics:
Check Bounced Emails
Hard Bounce Tutorial”
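If you also keep a master mailing list outside of SurveyMonkey, the same hygiene step is easy to reproduce once you export the bounced addresses from the collector. Here is a minimal sketch in plain Python; the addresses are invented for illustration.

```python
# Hypothetical master list and a hard-bounce export from the collector.
recipients = ["alice@example.org", "Bob@Example.org", "carol@example.org"]
hard_bounces = ["bob@example.org"]

# Normalize case and whitespace so "Bob@Example.org" matches "bob@example.org".
bounced = {address.strip().lower() for address in hard_bounces}
clean_list = [a for a in recipients if a.strip().lower() not in bounced]

print(clean_list)  # ['alice@example.org', 'carol@example.org']
```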
Perley CM, Gentry GA, Fleming S, Sen KM. Conducting a user-centered information needs assessment: the Via Christi Libraries’ experience. J Med Libr Assoc 2007 Apr; 95(2):173-181.
This article provides a good example of a needs assessment that used multiple evaluation methods. Librarians at the Via Christi Libraries in Wichita, Kansas, provide information services to all employees of the Via Christi Regional Medical Center (VCRMC) and needed to develop a strategic plan to address the expanding use of their services and the increasing cost of providing access. The article gives detailed descriptions of how the researchers used a self-administered survey, a telephone survey, and focus groups to gather progressively deeper information from users, and it includes appendices with the survey and focus group questions. The samples used in the project were not random, but the researchers used many venues to capture a solid cross-section of their user population, and the multi-method approach allowed them to corroborate findings across different perspectives. They also describe how they used the findings to develop a strategic plan and list their “lessons learned” about doing needs assessments. This is not a “how to conduct a needs assessment” article; the findings are the main point of the piece. But the concrete description of the methods adds value to the article.
The North Suburban Library System (north suburban Chicago, IL) has created a Return on Investment (ROI) Calculator on its website, in keeping with its theme for July 2007: Dollars and Sense: Why Libraries are a Good Investment. For more information about this tool, visit: http://www.nsls.info/articles/detail.aspx?articleID=143.
Imholz S, Arns JW. Worth Their Weight: An Assessment of the Evolving Field of Library Valuation. New York: Americans for Libraries Council, 2007.
“Worth Their Weight” takes stock of the field of library valuation, defined as the process of assessing the value of a library to its community in actual dollars and cents. The report was issued by the Americans for Libraries Council (ALC), “a nonprofit organization dedicated to increasing innovation and investment in the nation’s libraries.” The report describes different valuation models adapted from business and the nonprofit sector, such as social return on investment, triple-bottom-line accounting, corporate social responsibility reports, and the balanced scorecard. To provide an overview of the valuation field, the report includes summaries of 17 public library valuation and impact studies (with links to the full reports). These summaries include detailed descriptions of the methods used, including the actual surveys employed by the libraries. Finally, the report suggests ways to build the field of valuation and to apply the findings in library advocacy. The report relates specifically to public libraries, but the information is applicable to hospital and health sciences library valuation.
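To make the dollars-and-cents logic of these valuation models concrete, here is a minimal sketch of the simple benefit-cost arithmetic that underlies most library ROI studies. All figures are invented for illustration and are not taken from the report.

```python
# Hypothetical annual usage counts and estimated market value per use,
# in dollars (real studies derive these from local data and user surveys).
services = {
    "books borrowed":      (120_000, 17.00),
    "reference questions": (15_000, 7.00),
    "database searches":   (40_000, 2.50),
    "program attendance":  (8_000, 10.00),
}

annual_budget = 1_500_000  # hypothetical operating cost, in dollars

# Total benefit is the sum of (uses x value per use) across services.
total_benefit = sum(uses * value for uses, value in services.values())
roi = total_benefit / annual_budget

print(f"Estimated annual benefit: ${total_benefit:,.0f}")
print(f"ROI: ${roi:.2f} in benefits per $1.00 spent")
```

With these invented figures, the sketch reports about $2.3 million in annual benefits, or roughly $1.55 returned per dollar spent.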
Donald Dillman is probably the most recognized scholar of social science survey research. He has conducted numerous experiments on how item formats affect the way people respond to survey items. For instance, in “check all that apply” survey questions, the items at the top of the list get the most responses because respondents tend to stop reading partway through the list. Changing the list to a series of “Yes/No” questions forces respondents to consider every item and produces better survey data.
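The two formats also differ in the data they yield. Here is a minimal sketch, in plain Python with invented responses, of recoding a “check all that apply” item into the explicit Yes/No indicators that the forced-choice format collects directly; note that a blank in the check-all format is ambiguous in a way the forced-choice format avoids.

```python
# Hypothetical "check all that apply" responses: each respondent's
# selections come back as a single multi-select field.
options = ["email", "phone", "mail", "fax"]
responses = [
    {"email", "phone"},  # respondent 1
    {"email"},           # respondent 2
    set(),               # respondent 3: "No" to everything, or stopped reading?
]

# The forced-choice format records an explicit Yes or No for every option,
# so a skipped item cannot be mistaken for a considered "No".
for i, checked in enumerate(responses, start=1):
    recoded = {option: ("Yes" if option in checked else "No") for option in options}
    print(f"Respondent {i}: {recoded}")
```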
The link below takes you to an article authored by Dillman and Christian that reports results from a series of experiments on item formats that can guide us in the best design of survey questions:
The Influence of Words, Symbols, Numbers, and Graphics on Answers to Self-Administered Questionnaires: Results from 18 Experiments
You can find many other articles about surveys at Dillman’s website: