In September, we blogged about a way to create qualitative data visualizations by chunking a long narrative into paragraphs with descriptive illustrations.
Ann Emery has shown six additional ways to create qualitative data visualizations: 1) strategic word cloud use (a single word or before/after comparisons); 2) quantitative and qualitative data combined (a graph of percentages paired with a quote from an open-ended text comment); 3) photos alongside participant responses (only appropriate for non-anonymized data); 4) icon images beside text narratives; 5) diagrams explaining processes or concepts (the Washington Post's illustration of a health worker's protective gear during the Ebola outbreak is a great example); and 6) graphic timelines. See these examples and overviews on how to make your own at http://annkemery.com/qual-dataviz/
Do you need more information about reporting and visualizing your data? We at the Outreach Evaluation Resource Center (OERC) have more resources for you on the Reporting and Visualizing tab of our Tools and Resources for Evaluation Guide at http://guides.nnlm.gov/oerc/tools and we welcome your comments and suggestions for additional resources to include.
If you think you might want to do a photovoice evaluation study, then you definitely should consult Practical Guidance and Ethical Considerations for Studies Using Photo-Elicitation Interviews by Bugos et al. The authors reviewed articles describing research projects that employed photovoice and photo-elicitation. Then, they skillfully synthesized the information into practical and ethical guidelines for doing this type of work.
Photo-elicitation refers specifically to the interviewing methods used to get participants to talk about their photographs and videos. The key contribution of this article is its focus on interviewing technique. Effective interviewing is essential because the photographs are meaningless unless you understand the participants' stories behind them. The practical guidelines help you elicit usable, trustworthy story data after the photographs have been taken.
While interviewing is the main focus of the article, you will find some advice on the photo collection phase as well. The article includes guidance on how to train your participants to protect their own safety and the dignity of their subjects when taking photographs. All of the research projects reviewed for this article received institutional review board approval. If you follow the authors' guidelines, you can have confidence that you are protecting the safety, privacy, and confidentiality of all involved.
Here is the full citation for this very pragmatic article:
Bugos E, Frasso R, FitzGerald E, True G, Adachi-Mejia AM, Cannuscio C. Practical Guidance and Ethical Considerations for Studies Using Photo-Elicitation Interviews. Prev Chronic Dis 2014;11:140216. DOI: http://dx.doi.org/10.5888/pcd11.140216
Rural and medically underserved areas often face increased health disparities and population health challenges, combined with limited resources and healthcare providers to help meet them. The use of appropriate program evaluation measures can help assess what actually works in rural health settings, since many evidence-based strategies are based on urban populations.
The Rural Assistance Center (raconline.org) has recently issued a freely available online guide at http://www.raconline.org/topics/rural-health-research-assessment-evaluation. The guide:
- Identifies the similarities and differences among rural health research, assessment, and evaluation
- Discusses common methods, such as surveys and focus groups
- Provides contacts within the field of rural health research
- Addresses the importance of community-based participatory research to rural communities
- Looks at the community health needs assessment (CHNA) requirements for non-profit hospitals and public health
- Examines the importance of building the evidence-base so interventions conducted in rural areas have the maximum possible impact
Thanks to National Network of Libraries of Medicine (NN/LM) Network member Gail Kouame from HEALWA for sharing this great resource with us at the Outreach Evaluation Resource Center (OERC)! Do you have an evaluation-related resource to share? We would be happy to consider featuring it in our blog or including it in our Tools and Resources guide at guides.nnlm.gov/oerc/tools.
Coming soon to a computer near you! Chris Lysy of FreshSpectrum is offering a free seven-part data visualization workshop. Chris has provided data viz training for the American Evaluation Association. (His followers also love his cartoon-illustrated evaluation blog.) He calls himself the Rachael Ray of data visualization, which makes his course description a nice feature for the OERC's Thanksgiving blog post.
The workshop date is still TBA, but you can join his mailing list now to get full details when they are released.
Also, Thanksgiving activities often include movie-viewing. So here are some fun data visualizations of famous movie quotes by FlowingData to help you through the last afternoon before the holiday weekend.
Looking for a single 'at a glance' page to help you determine which type of data visualization chart will communicate your results most clearly?
This PDF flowchart at http://betterevaluation.org/plan/describe/visualise_data is a very handy reference! The flowchart starts with the question "What would you like to show?" and, depending on your answer (comparison, distribution, composition, or relationship), guides you toward appropriate chart options. The Better Evaluation data visualization page also includes brief descriptions of the charts, such as the deviation bar graph, that you can click through for additional information: synonyms, a base definition, examples of how the chart is used, advice about its use, and links to resources for creating it.
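The flowchart's first branching question can be sketched as a simple lookup. The Python sketch below is an illustrative simplification: the function name and the specific chart lists are my own shorthand for the kind of mapping the flowchart encodes, not a reproduction of its full contents.

```python
# Illustrative simplification of a chart-chooser flowchart:
# map the answer to "What would you like to show?" to common chart options.
# The chart lists here are examples, not the flowchart's complete set.
CHART_OPTIONS = {
    "comparison": ["bar chart", "column chart", "line chart"],
    "distribution": ["histogram", "scatter plot", "box plot"],
    "composition": ["pie chart", "stacked bar chart", "tree map"],
    "relationship": ["scatter plot", "bubble chart"],
}

def suggest_charts(purpose: str) -> list:
    """Return candidate chart types for a stated purpose, or [] if unknown."""
    return CHART_OPTIONS.get(purpose.strip().lower(), [])

print(suggest_charts("Comparison"))  # → ['bar chart', 'column chart', 'line chart']
```

In practice the flowchart adds further branches under each purpose (for example, comparisons over time versus among items), which is why the PDF remains the better reference; the sketch only captures the first decision.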
Nothing beats qualitative (non-numerical) data collection methods for getting a high volume of rich, interesting information from project participants and stakeholders. The downside is that these methods are resource intensive, so you usually are limited to involving a relatively small number of participants in conversation.
But what if you want to collect a lot of qualitative responses from a lot of people?
If you do, check out the Liberating Structures website. It provides step-by-step instructions for activities to engage large groups in conversations for planning and evaluation. The website offers a menu of 33 activities with extensive planning details, plus ideas for combining activities into an almost unlimited number of group discussion formats.
I participated in a Liberating Structures activity in Denver last month when I attended the Quint*Essential Conference, hosted by five Medical Library Association chapters. Staff from National Network of Libraries of Medicine (NN/LM) regional offices invited all conference attendees to generate and evaluate ideas for future network initiatives. It was a high-energy activity that engaged more than 100 people in providing bold ideas for future activities.
The beauty of Liberating Structures activities is that the guidelines include how to document conversations so meeting facilitators will end their exercises with actual data. In some cases, the data can be quickly analyzed. NN/LM facilitators were able to compile and report results from the Quint discussion in the exhibit hall later that day.
I want to thank Claire Hamasu, the Associate Director of the NN/LM MidContinental Region, for pointing me to the Liberating Structures website and including me in the Quint Conference activity. I personally look forward to trying more of these activities and hope other readers are inspired to do so as well.
We at the Outreach Evaluation Resource Center (OERC) have previously covered the American Evaluation Association's (AEA) tip-a-day blog at http://aea365.org/blog as a helpful resource. This week, posts about literature search strategies were shared on the AEA blog by Network member librarians from the Lamar Soutter Library at the University of Massachusetts Medical School. Have you been involved in a similar collaboration? Please let us know; we'd love to feature your work in a future OERC blog post!
Literature Search Strategy Week
- Best Databases – learn the most effective starting points for biomedical, interdisciplinary, and specialized literature databases, plus a handy Top Ten list.
- Constructing a Literature Search – learn the value of a vocabulary roadmap, and the difference between keyword and controlled vocabulary searching.
- Grey Literature – strategies for understanding these non-traditional but highly valuable information resources and starting points on where to find them.
- Using MyNCBI – learn how to sign up for your free account, save your PubMed search strategies, receive email updates, customize your display and more.
- Citation Management – featuring both freely available options and others you may have access to through your academic organization.
For the past couple of months, the OERC has engaged in an Appreciative Inquiry (AI) interview project to get feedback and advice from users on our services. Appreciative Inquiry was developed in the 1980s by David Cooperrider and Suresh Srivastva as an approach to bring "collaborative and strength-based change" to organizations. The methods are designed to collect information emphasizing positive aspects of an organization and a vision for a better future. Probably the best known AI tool is the interview, which covers three basic areas:
- A peak experience of the interviewee.
- Why the interviewee found that experience so valuable.
- What the interviewee wished could happen to bring about more exceptional experiences.
(You can find the OERC's adaptation of these basic questions here.)
When people are first introduced to AI evaluation processes, they often skeptically ask whether this approach leads to positively biased data. I would say no, because we are asking for descriptive rather than evaluative comments. I call the interviews "constructive conversations without criticism." You come away from the experience thinking "what could be?" rather than "what's wrong?" The feedback process was painless for me, because our users made recommendations in wishful, rather than judgmental, terms.
I also think AI is a superior way to get frank advice from users if they generally like your organization. When asked for feedback, particularly in interpersonal situations, interviewees may not want to offend the organization’s staff or, worse, cause negative repercussions. When you ask people to talk about dreams and wishes, their imaginations are engaged and fear of being critical falls away. They are free to give you great ideas for moving forward.
If your organization is about to embark on strategic planning of any kind, I highly recommend the AI approach. You can get more information about AI methods at the Appreciative Inquiry Commons or the Center for Appreciative Inquiry websites. For an excellent book on applying AI to evaluation practice, check out Reframing Evaluation through Appreciative Inquiry by Preskill and Catsambas (Sage, 2006).
Note: The OERC will post results of its AI project in a future blog post, when we have completed our analysis.
Recently, AEA365 (the Evaluation Tip-a-Day resource the Outreach Evaluation Resource Center (OERC) previously blogged about) featured a review and several hot tips for Padlet, a freely available web-based bulletin board system. The hot tips included using Padlet for anonymous brainstorming in response to a question or idea, and as a backchannel for students or conference attendees to share resources and raise questions for future discussion.
I took a closer look at Padlet's bulletin board configuration settings and found them intuitive and easy to use, with various backgrounds and a choice of freeform, tabular, or grid note arrangements. You can create a free Padlet account by signing up directly or by linking an existing Google or Facebook account. Privacy is a key concern, and Padlet delivers many clearly explained options: Private (only you and others you invite to participate via email), password-protected access, or Public (to view, write, or moderate). A new update adds a variety of ways to share Padlet data, ranging from one-click icons for six different social media channels to downloading data as a PDF or Excel/CSV file for analysis.
Please check out a Padlet about the OERC Evaluation Series and leave your input! Posts will be moderated on the Padlet before they display publicly.
Are you new to evaluation, or do you need help planning and implementing program evaluation for your public or community health projects?
The Centers for Disease Control and Prevention (CDC) has a freely available 'how to' resource for you: Introduction to Program Evaluation for Public Health Programs: A Self-Study Guide. Examples of public and community health programs that can be considered for program evaluation include direct service interventions, community-based mobilization efforts, research initiatives into issues such as health disparities, advocacy work, and training programs. The guide, available online or as a PDF download, walks through a six-step process (from Engaging Stakeholders to Ensuring Use of Evaluation Findings) and includes a helpful Glossary of program evaluation terminology and a Resources section with additional publications, toolkits, and more to support public and community health program evaluation work.
A related CDC guide (A Framework for Program Evaluation) is one of several resources we at the Outreach Evaluation Resource Center (OERC) feature in the Evaluation Planning section of our Tools and Resources for Evaluation page at http://guides.nnlm.gov/oerc/tools