Want to build your repertoire of evaluation skills? Check out this library of evaluation-related podcasts and webinars from the CDC’s Division of Heart Disease and Stroke Prevention. These are archived materials from 20-minute presentations about evaluation. The usual basic topics are represented, such as “Making Logic Models Work for You” and “How Do I Develop a Survey?” But a number of the presentations cover topics that are not standard fare. Here are just a few titles that caught my eye:
Most presentations consist of PDFs of PowerPoint slides and talking points, but there are a few podcasts as well. All presentations seem to be bird’s-eye overviews, but the final slides offer transcripts of Q&A discussion and a list of resources for more in-depth exploration of the topic. It’s a great way to check out a new evaluation interest!
The illustration above is from Stephanie Evergreen‘s excellent blog post cautioning that line graphs showing change over time for multiple organizations can quickly turn into a brightly colored bowl of spaghetti. The solution to passing on this pasta effect? Small multiples: a separate graph for each region in this example, each drawn one at a time on the same scale as the original graph, then stitched together with alignment tools such as a ruler and the Align > Align Top command in your graphing software. Be sure to see the end result and step-by-step guidance on how to create these at http://stephanieevergreen.com/declutter-dataviz-with-small-multiples/
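If you chart in code rather than in office software, the same idea can be automated. Here is a minimal sketch in Python with matplotlib (my tool choice, not Evergreen’s — her post describes doing this manually in graphing software), using made-up regional data; the `sharex`/`sharey` options do the “same scale” and alignment work that the ruler and Align commands do by hand:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# Hypothetical change-over-time data, one series per region
data = {
    "North": [12, 15, 14, 18],
    "South": [9, 11, 13, 12],
    "East":  [20, 18, 17, 19],
    "West":  [7, 9, 8, 11],
}
years = [2011, 2012, 2013, 2014]

# One small panel per region; sharex/sharey force every panel onto the
# same scale, which is what makes small multiples comparable at a glance
fig, axes = plt.subplots(1, len(data), sharex=True, sharey=True,
                         figsize=(10, 2.5))
for ax, (region, values) in zip(axes, data.items()):
    ax.plot(years, values)
    ax.set_title(region)

fig.savefig("small_multiples.png")
```

Each panel stays readable because it carries only one line, yet the shared scale lets readers compare regions directly.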
Showing change over time as a line graph instead of a bar graph is one of the quantitative data focus areas in our Outreach Evaluation Resource Center (OERC) webinar Data Burger: A ‘Good’ Questionnaire Response Rate Plus Basic Quantitative Analysis. You can listen to a recording of the Data Burger presentation for the Mid Atlantic Region at https://webmeeting.nih.gov/p2mn6k7tkv6/, and please contact us if you’d like to hear more about this or one of our other webinars.
In September, we blogged about a way to create qualitative data visualizations by chunking a long narrative into paragraphs with descriptive illustrations.
Ann Emery has shown six additional ways to create qualitative data visualizations: 1) strategic word cloud use (one word or before/after comparisons), 2) quantitative + qualitative combined (a graph of percentages paired with a quote from an open-ended text comment), 3) photos alongside participant responses (only appropriate for non-anonymized data), 4) icon images beside text narratives, 5) diagrams explaining processes or concepts (the illustration of a health worker’s protective gear from the Washington Post’s Ebola coverage is a great example), and 6) graphic timelines. See these examples and overviews on how to make your own at http://annkemery.com/qual-dataviz/
Do you need more information about reporting and visualizing your data? We at the Outreach Evaluation Resource Center (OERC) have more resources available for you on the Reporting and Visualizing tab of our Tools and Resources for Evaluation Guide at http://guides.nnlm.gov/oerc/tools. We welcome your comments and your suggestions for additional resources to include.
If you think you might want to do a photovoice evaluation study, then you definitely should consult Practical Guidance and Ethical Considerations for Studies Using Photo-Elicitation Interviews by Bugos et al. The authors reviewed articles describing research projects that employed photovoice and photo-elicitation. Then, they skillfully synthesized the information into practical and ethical guidelines for doing this type of work.
Photo-elicitation refers specifically to the interviewing methods used to get participants to talk about their photographs and videos. The key contribution of this article is its focus on how to interview. Effective interviewing technique is essential because the photographs are meaningless unless you understand the participants’ stories behind them. The practical guidelines help you elicit usable, trustworthy story data after the photographs have been taken.
While interviewing is the main focus of the article, you will find some advice on the photo collection phase as well. This article includes guidance on how to train your participants to protect their own safety and the dignity of their subjects when taking photographs. All of the research projects reviewed for this article received institutional review board approval. If you follow their guidelines, you can have confidence that you are protecting the safety, privacy and confidentiality of all involved.
Here is the full citation for this very pragmatic article:
Bugos E, Frasso R, FitzGerald E, True G, Adachi-Mejia AM, Cannuscio C. Practical Guidance and Ethical Considerations for Studies Using Photo-Elicitation Interviews. Prev Chronic Dis 2014;11:140216. DOI: http://dx.doi.org/10.5888/pcd11.140216
Rural and medically underserved areas often face increased health disparities and population health challenges, combined with limited resources and healthcare providers to meet them. Appropriate program evaluation measures can help assess what actually works in rural health settings, since many evidence-based strategies were developed with urban populations.
The Rural Assistance Center (raconline.org) has recently issued a freely available online guide at http://www.raconline.org/topics/rural-health-research-assessment-evaluation. The guide:
- Identifies the similarities and differences among rural health research, assessment, and evaluation
- Discusses common methods, such as surveys and focus groups
- Provides contacts within the field of rural health research
- Addresses the importance of community-based participatory research to rural communities
- Looks at the community health needs assessment (CHNA) requirements for non-profit hospitals and public health
- Examines the importance of building the evidence-base so interventions conducted in rural areas have the maximum possible impact
Thanks to National Network of Libraries of Medicine (NN/LM) Network member Gail Kouame from HEALWA for sharing this great resource with us at the Outreach Evaluation Resource Center (OERC)! Do you have an evaluation-related resource to share? We would be happy to consider featuring it in our blog or including it in our Tools and Resources guide at guides.nnlm.gov/oerc/tools.
Coming soon to a computer near you! Chris Lysy of FreshSpectrum is offering a free seven-part data visualization workshop. Chris has provided data viz training for the American Evaluation Association. (His followers also love his cartoon-illustrated evaluation blog.) He calls himself the Rachael Ray of data visualization, which makes his course description a nice feature for the OERC’s Thanksgiving blog post.
The workshop date is still TBA, but you can join his mailing list now to get full details when they are released.
Also, Thanksgiving activities often include movie-viewing. So here are some fun data visualizations of famous movie quotes by Flowingdata to help you through the last afternoon before the holiday weekend.
Looking for a single at-a-glance page to help you choose the type of data visualization chart that will most clearly communicate your results?
The PDF flowchart at http://betterevaluation.org/plan/describe/visualise_data is a very handy reference! It guides you to appropriate chart options based on your answer to the question “What would you like to show?”: a comparison, a distribution, a composition, or a relationship. The Better Evaluation data visualization page also offers brief descriptions of each chart (such as the deviation bar graph) that you can click through for synonyms, a base definition, examples of how the chart is used, advice about its use, and links to resources for creating it.
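The flowchart’s first branch boils down to a lookup from your answer to a short list of candidate charts. A toy sketch in Python (the chart pairings below are common conventions, not a transcription of the Better Evaluation flowchart itself):

```python
# Illustrative mapping from "What would you like to show?" to common
# chart choices -- an assumption, not the flowchart's exact contents.
CHART_OPTIONS = {
    "comparison":   ["bar graph", "line graph"],
    "distribution": ["histogram", "box plot", "scatter plot"],
    "composition":  ["stacked bar graph", "pie chart", "treemap"],
    "relationship": ["scatter plot", "bubble chart"],
}

def suggest_charts(goal: str) -> list[str]:
    """Return candidate chart types for a stated visualization goal."""
    key = goal.strip().lower()
    if key not in CHART_OPTIONS:
        raise ValueError(f"expected one of {sorted(CHART_OPTIONS)}, got {goal!r}")
    return CHART_OPTIONS[key]

# Example: suggest_charts("comparison") returns ["bar graph", "line graph"]
```

The real flowchart branches further (over time vs. among items, few vs. many categories, and so on), which is why the PDF remains the better desk reference.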
Nothing beats qualitative (non-numerical) data collection methods for getting a high volume of rich, interesting information from project participants and stakeholders. The downside is that these methods are resource intensive, so you usually are limited to involving a relatively small number of participants in conversation.
But what if you want to collect a lot of qualitative responses from a lot of people?
If you do, check out the Liberating Structures website. It provides step-by-step instructions for activities to engage large groups in conversations for planning and evaluation. The website offers a menu of 33 activities with extensive planning details, plus ideas for combining activities into an almost unlimited number of group discussion formats.
I participated in a Liberating Structures activity in Denver last month when I attended the Quint*Essential Conference, hosted by five Medical Library Association chapters. Staff from National Network of Libraries of Medicine (NN/LM) regional offices invited all conference attendees to generate and evaluate ideas for future network initiatives. It was a high-energy activity that engaged more than 100 people in providing bold ideas for future activities.
The beauty of Liberating Structures activities is that the guidelines include how to document conversations so meeting facilitators will end their exercises with actual data. In some cases, the data can be quickly analyzed. NN/LM facilitators were able to compile and report results from the Quint discussion in the exhibit hall later that day.
I want to thank Claire Hamasu, the Associate Director of the NN/LM MidContinental Region, for pointing me to the Liberating Structures web site and including me in the Quint Conference activity. I personally look forward to trying more of these activities and hope other readers are inspired to do so as well.
We at the Outreach Evaluation Resource Center (OERC) have previously covered the American Evaluation Association’s (AEA) tip-a-day blog at http://aea365.org/blog as a helpful resource. This week, Network member librarians from the Lamar Soutter Library at the University of Massachusetts Medical School shared posts about literature search strategies on the AEA blog. Have you been involved in a similar collaboration? Please let us know; we’d love to feature your work in a future OERC blog post!
Literature Search Strategy Week
- Best Databases – learn the most effective starting points for biomedical, interdisciplinary, and specialized literature databases, plus a handy Top Ten list.
- Constructing a Literature Search – learn the value of a vocabulary roadmap, and the difference between keyword and controlled vocabulary searching.
- Grey Literature – strategies for understanding these non-traditional but highly valuable information resources and starting points on where to find them.
- Using MyNCBI – learn how to sign up for your free account, save your PubMed search strategies, receive email updates, customize your display and more.
- Citation Management – featuring both freely available tools and other options you may have access to through your academic organization.
For the past couple of months, the OERC has engaged in an Appreciative Inquiry (AI) interview project to get feedback and advice from users on our services. Appreciative Inquiry was developed in the 1980s by David Cooperrider and Suresh Srivastva as an approach to bring “collaborative and strength-based change” to organizations. The methods are designed to collect information emphasizing positive aspects of an organization and a vision for a better future. Probably the best known AI tool is the interview, which covers three basic areas:
- A peak experience of the interviewee.
- Why the interviewee found that experience so valuable.
- What the interviewee wished could happen to bring about more exceptional experiences.
(You can find the OERC’s adaptation of these basic questions here.)
When people are first introduced to AI evaluation processes, they skeptically ask whether this approach leads to positively biased data. I would say no, because we are asking for descriptive rather than evaluative comments. I call the interviews “constructive conversations without criticism.” You come away from the experience thinking “what could be?” rather than “what’s wrong?” The feedback was painless to me because our users made recommendations in wishful, rather than judgmental, terms.
I also think AI is a superior way to get frank advice from users if they generally like your organization. When asked for feedback, particularly in interpersonal situations, interviewees may not want to offend the organization’s staff or, worse, cause negative repercussions. When you ask people to talk about dreams and wishes, their imaginations are engaged and fear of being critical falls away. They are free to give you great ideas for moving forward.
If your organization is about to embark on strategic planning of any kind, I highly recommend the AI approach. You can get more information about AI methods at the Appreciative Inquiry Commons or the Center for Appreciative Inquiry websites. For an excellent book on applying AI to evaluation practice, check out Reframing Evaluation through Appreciative Inquiry by Preskill and Catsambas (Sage, 2006).
Note: The OERC will post results of its AI project in a future blog post, when we have completed our analysis.