
Maximize your response rate

Did you know that the American Medical Association has a specific recommendation for its authors about questionnaire response rate? Here it is, from the JAMA Instructions for Authors:

Survey Research
Manuscripts reporting survey data, such as studies involving patients, clinicians, the public, or others, should report data collected as recently as possible, ideally within the past 2 years. Survey studies should have sufficient response rates (generally at least 60%) and appropriate characterization of nonresponders to ensure that nonresponse bias does not threaten the validity of the findings. For most surveys, such as those conducted by telephone, personal interviews (eg, drawn from a sample of households), mail, e-mail, or via the web, authors are encouraged to report the survey outcome rates using standard definitions and metrics, such as those proposed by the American Association for Public Opinion Research.

Meanwhile, response rates to questionnaires have been declining over the past 20 years, as reported by the Pew Research Center in “The Problem of Declining Response Rates.” Why should we care about the AMA’s recommendation regarding questionnaire response rates? Many of us will send questionnaires to health care professionals who, like physicians, are very busy and might not pay attention to our efforts to learn about them. Even JAMA authors such as Johnson and Wislar have pointed out that “60% is only a ‘rule of thumb’ that masks a more complex issue.” (Johnson TP, Wislar JS. “Response Rates and Nonresponse Errors in Surveys.” JAMA, May 2, 2012, Vol 307, No. 17, p. 1805) These authors recommend evaluating nonresponse bias in order to characterize differences between those who respond and those who don’t. Standard techniques for doing so include the following (a short code sketch follows the list):

  • Conduct a follow-up survey with nonrespondents
  • Use data about your sampling frame and study population to compare respondents to nonrespondents
  • Compare the sample with other data sources
  • Compare early and late respondents
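
For a rough sense of how the last two checks might look in practice, here is a minimal Python sketch. It is my own illustration, not a method from Johnson and Wislar, and the file and column names (“survey_sample.csv”, “responded”, “response_week”, “is_physician”) are hypothetical stand-ins for whatever you record about the people you invite.

```python
# Minimal sketch (assumed column names): compute a simple response rate and
# compare early vs. late respondents on one known characteristic, a common
# rough proxy for nonresponse bias.
import pandas as pd
from scipy.stats import chi2_contingency

sample = pd.read_csv("survey_sample.csv")   # one row per person invited

# Simple response rate: completed questionnaires / everyone invited
response_rate = sample["responded"].mean()
print(f"Response rate: {response_rate:.1%}")

# Wave analysis: treat people who answered after a reminder as "late"
# respondents and compare them with early respondents.
respondents = sample[sample["responded"] == 1].copy()
respondents["wave"] = (respondents["response_week"] > 2).map(
    {True: "late", False: "early"}
)

table = pd.crosstab(respondents["wave"], respondents["is_physician"])
chi2, p, dof, expected = chi2_contingency(table)
print(table)
print(f"Do early and late respondents differ? p = {p:.3f}")
```

If late respondents look noticeably different from early ones, that is a warning sign that nonrespondents may differ from respondents as well.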

Johnson and Wislar’s article is not open access, unfortunately, but you can find more suggestions about increasing response rates to your questionnaires in two recent AEA365 blog posts that are open access:

Find more useful advice (e.g., make questionnaires short, personalize your mailings, send full reminder packs to nonrespondents) at this open access article: Sahlqvist S, et al., “Effect of questionnaire length, personalisation and reminder type on response rate to a complex postal survey: randomised controlled trial.” BMC Medical Research Methodology 2011, 11:62

Free Images for Your Evaluation Reports

The current trend in evaluation reporting is toward fewer words and more images. A number of companies offer high-quality, royalty-free photographs at minimal cost. (Stockfresh, for example, charges as little as $1 per image.) However, no cost is even better than low cost. Freelancers Union, a nonprofit organization dedicated to assisting freelance workers, recently published a list of the best websites for no-cost images. If you are looking for free images for your presentations or reports, check out their article:

https://www.freelancersunion.org/blog/2014/02/07/best-free-image-resources-online/

(The article also describes the difference between public domain, royalty-free and Creative Commons-licensed images.)

Evaluation Tips: Recipe of Evaluation Techniques

Last week I attended a webinar presentation from Stanley Capela entitled Recipe of Evaluation Techniques for the Real World. This was one of the American Evaluation Association’s (AEA) ongoing 20-minute Coffee Break webinars. The webinars, offered Thursdays at 2pm Eastern time, often present tools and tips similar to those covered in the Tip a Day blog, but they allow for audience questions and answers and networking with the presenters.

Capela’s recipe focused primarily on internal evaluation in non-profit or government settings where people are seeking realistic answers in response to your assessment efforts. His tips include:

  • Value People’s Time – all time is valuable, regardless of who you are working with, and clear communication about the intent of the evaluation helps to make the best use of everyone’s time.
  • Ethical Conduct – working within the parameters of organizational and/or professional association codes of conduct, in addition to having the established support of upper-level administration, will help to minimize the potential for ethical dilemmas.
  • Know Your Enemies – be aware of those who are resistant to program evaluation and may try to undermine these efforts, and also know that you as an evaluator may be perceived as an enemy by others. Again, clear communication helps!
  • Culture of Accountability – take the time to know the story of those you are working with – where are they coming from? What is their history with previous assessments? Were their needs met, or were there issues that had negative effects on relationships and outcomes?
  • Do Something – avoid cycles in which you conduct reviews and identify deficiencies but the only outcome is a correction plan. Also important to note: program evaluation does not solve management problems.
  • A Picture is Worth 1,000 Words – find ways to integrate charts that direct the reader to the most important information clearly and concisely.
  • Let Go of Your Ego – accept that the people conducting the program itself will most likely ‘get the credit’, and that your measure of success is doing your job to the best of your ability, knowing you made a difference.
  • Give Back – develop a network of trusted colleagues, such as through personal and organizational connections on LinkedIn and other platforms, share ideas, and ask questions, since others have probably encountered a similar situation or can connect you with those who have.

We hope you find the information that we at the Outreach Evaluation Resource Center (OERC) make freely available in our updated Evaluation Guides helpful as an additional source of ideas, strategies, and worksheets for your evaluation recipe collection!

Webinars and Workshops about Evaluating Outreach

The National Network of Libraries of Medicine Outreach Evaluation Resource Center (OERC) offers a range of webinars and workshops upon request by network members and coordinators from the various regions. Take a look at the list and see if one of the options appeals to you. To request a workshop or webinar, contact Susan Barnes.

The workshops were designed as face-to-face learning opportunities, but we can tailor them to meet distance learning needs by distilling them into briefer webinars or offering them as a series of one-hour webinars.

Don’t see what you’re looking for on this list? Then please contact Susan and let her know!

We’re looking forward to hearing from you.

Exhibit evaluations with QuickTapSurvey

Exhibiting is a popular strategy for health information resource promotion, but exhibits can be challenging events to evaluate. Survey platforms for tablets and mobile phones can make it a little bit easier to collect feedback at your booths. At the OERC, we have explored QuickTapSurvey, which seems well-suited to getting point-of-contact responses from visitors. The application allows you to create short, touch-screen questionnaires on Apple or Android tablets. You simply hand the tablet to visitors for their quick replies. The same questionnaire can be put on multiple tablets, so you and your colleagues can collect responses simultaneously during an exhibit.

When you have an Internet connection, responses are automatically uploaded into your online QuickTapSurvey account. When no connection is available, data are stored on the tablet and uploaded later. You can use QuickTapSurvey’s analytics to summarize responses with statistics and graphs.  You also can download the data into a spreadsheet to analyze in Excel.
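
If you prefer to analyze the exported spreadsheet outside of Excel, a few lines of pandas will handle the basics. This is a sketch under assumed file and column names (“exhibit_responses.csv”, “booth_location”, “found_useful”, “rating”), not part of QuickTapSurvey itself.

```python
# Sketch (assumed column names): quick summaries of an exported response file.
import pandas as pd

responses = pd.read_csv("exhibit_responses.csv")

# How many visitors answered at each booth location
print(responses["booth_location"].value_counts())

# Share of visitors at each location who said the exhibit was useful
print(responses.groupby("booth_location")["found_useful"].mean().round(2))

# Overall volume and average rating
print(f"{len(responses)} responses, mean rating {responses['rating'].mean():.1f}")
```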

QuickTapSurvey is a commercial product, but there is a limited free version. The application is fairly user friendly, but we recommend experimenting with it before you take it on the road. Information about QuickTapSurvey, including the different pricing packages that are available, can be found here: http://quicktapsurvey.com/

Evaluation Tip a Day

Do you want to know more about great assessment resources, tools, and lessons learned from others who share your interest in evaluation?

Do you not want to add another professional journal to the existing TBR (to be read) stack in your office?

Check out the American Evaluation Association (AEA) 365 blog at http://aea365.org, where anyone (not only AEA members) can subscribe via email or Really Simple Syndication (RSS) feed. The established blog guidelines cap contributions at a maximum of 450 words per entry. Headers within each entry (Hot Tips, Cool Tricks, Rad Resources, or Lessons Learned) tell you the subject at a glance, and the blog avoids assuming prior knowledge of evaluation or of the organizations involved: all acronyms are clarified and no jargon is allowed.

A handy tip – scroll down the right sidebar of the website to locate subjects arranged by the AEA Topical Interest Groups (TIGs). Those likely to be of interest to National Network of Libraries of Medicine (NN/LM) members include Data Visualization and Reporting, Disabilities and Other Vulnerable Populations, Health Evaluation, Integrating Technology into Evaluation, and Nonprofits and Foundations Evaluation, to name only a few.

To give a brief review of a recent entry of interest to NN/LM members: Conducting a Health Needs Assessment of People With Disabilities shared lessons learned from needs assessment work done in Massachusetts, and it highlighted a rad resource, the Disability and Health Data System (DHDS), which offers state-level disability health data from the Centers for Disease Control and Prevention (CDC).

Interview tips: Talking with participants during a usability test

The Nielsen Norman Group (NNG) conducts research and publishes information about user experience with interfaces. NNG was an early critic of the troubled “healthcare.gov” web site: “Healthcare.gov’s Account Setup: 10 Broken Usability Guidelines.” A recent post (“Talking with participants during a usability test”) provides tips for facilitating usability tests that could be very useful whenever you’re facilitating a discussion or conducting an observation. When in doubt about whether to speak to a participant, count to 10 before deciding whether to say something. Consider using the “Echo,” “Boomerang,” or “Columbo” approaches:

  • Echo – repeat the last words or phrase, using an interrogatory tone.
  • Boomerang – formulate a nonthreatening question that “pushes” a user’s comment back and causes them to think of a response for you, such as “What would you do if you were on your own?”
  • Columbo – be smart but don’t act that way, as in the “Columbo” TV series from the 1960s and 1970s starring Peter Falk.

The full article “Talking with participants during a usability test” provides audio examples of these techniques. You can find a wealth of additional information about usability testing on the Nielsen Norman Group’s web site, such as “How to Conduct Usability Studies” and “Usability 101: Introduction to Usability.”

Cleaning Up Your Charts

So how are those New Year’s resolutions going?

Many of us like to start the year resolving to clean up some part of our lives. Our diet. Our spending habits. The five years of magazine subscriptions sitting by our recliner.

Here’s another suggestion: Resolve to clean up “chart junk” in the charts you add to PowerPoint presentations or written reports.

Now I can pack information into a bar chart with the best of them. But it is no longer in vogue to clutter charts with data labels, gridlines, and detailed legends. This is not just a fashion statement, either. Design experts point out that charts should make their point without the inclusion of a bunch of distracting details. If the main point of your chart is not visually obvious, you either have not designed it correctly or you are presenting a finding that is not particularly significant.

So the next time you create a chart, consider these suggestions (a short plotting sketch follows the list):

  • Use your title to communicate the main point of the chart. Take a tip from newspaper headlines and make your title a complete sentence.
  • Don’t use three-dimensional displays. They interfere with people’s comprehension of charts.
  • Ditch the gridlines or make them faint so they don’t clutter the view.
  • Use contrast to make your point. Add a bright color to the bar or line that carries the main point and use gray or another faint color for the comparison bars or lines.
  • Be careful in picking colors. Use contrasting colors that are distinguishable to people with colorblindness. If your report is going to be printed, be sure the contrast still shows up when presented in black-and-white.
  • Consider not using data labels, or just label the bar or line associated with your main point.
  • Remove legends and place the labels directly inside the bars or at the ends of the lines.
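
To make the list concrete, here is a minimal matplotlib sketch (my own illustration, with invented data) that applies several of these suggestions: a sentence-style title, no 3-D effects, no gridlines, gray comparison bars with one high-contrast colorblind-safe bar, no legend, and a data label only on the bar that carries the main point.

```python
# Sketch with invented data: a "de-junked" bar chart.
import matplotlib.pyplot as plt

programs = ["Program A", "Program B", "Program C", "Program D"]
attendance = [42, 38, 71, 45]

# Gray comparison bars; one colorblind-safe blue bar for the main point
colors = ["#bbbbbb", "#bbbbbb", "#0072B2", "#bbbbbb"]

fig, ax = plt.subplots()
ax.bar(programs, attendance, color=colors)

# Title as a complete sentence that states the takeaway
ax.set_title("Program C drew the most attendees this quarter", loc="left")

# Remove clutter: no gridlines, lighter frame, no legend
ax.grid(False)
for spine in ("top", "right"):
    ax.spines[spine].set_visible(False)

# Label only the bar that carries the main point
ax.text(2, attendance[2] + 1, str(attendance[2]), ha="center")

plt.tight_layout()
plt.show()
```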

For more comprehensive information on eliminating chart junk, check out this article:

Evergreen S, Metzner C. Design principles for data visualization in evaluation. In Azzam T, Evergreen S (eds). Data visualization, part 2. New Directions for Evaluation, Winter 2013: 5-20.

Evaluation Terminology: An Overview and Free Resources

Last week I attended an excellent webinar session presented by Kylie Hutchinson of Community Solutions Planning & Evaluation about the vast and often jargony world of evaluation terminology. As part of her research, Hutchinson consulted three online evaluation glossaries* and counted thirty-six different definitions of evaluation methods within them. What accounts for so much variation? Common reasons include the perspectives and language used by different sectors and funders, such as education, government, and non-profit organizations.

A helpful tip when working with organizations on evaluation projects is to ask to see copies of documents such as annual reports, mission and vision statements, strategic plans, and promotional materials to learn more about the language they use to communicate about themselves. This will help you determine whether you need to adjust your assessment terminology and can guide discussions to clarify the organization’s purpose for the evaluation.

Hutchinson identified several common themes within the plethora of evaluation methods and created color-coded clusters of them within her Evaluation Terminology Map, which uses the bubbl.us online mind mapping program. She also created a freely available Evaluation Glossary app for use on both iPhone and Android mobile devices and has a web-based version under development. For additional resources to better understand health information outreach evaluation, be sure to visit our tools website at http://guides.nnlm.gov/oerc/tools.

* Two of the three online evaluation glossaries referenced are still available online

“Evidence” — what does that mean?

In our health information outreach work we are expected to provide evidence of the value of our work, but there are varying definitions of the word “evidence.” The classical evidence-based medicine approach (featuring results from randomized controlled clinical trials) is a model that is not always relevant to our work. At the 2013 EBLIP7 meeting in Saskatoon, Saskatchewan, Canada, Denise Koufogiannakis presented a keynote address that is now available as an open-access article on the web:

Koufogiannakis D. “What We Talk About When We Talk About Evidence.” Evidence Based Library and Information Practice, 2013, 8(4).

This article looks at various interpretations of what it means to provide “evidence,” such as:

  • theoretical (ideas, concepts, and models to explain how and why something works),
  • empirical (measuring outcomes and effectiveness via empirical research), and
  • experiential (people’s experiences with an intervention).

Koufogiannakis points out that academic librarians’ decisions are usually made by groups of people working together, and she proposes a new model for evidence-based library and information practice:

1) Articulate – come to an understanding of the problem and articulate it. Set boundaries and clearly articulate a problem that requires a decision.

2) Assemble – assemble evidence from multiple sources that are most appropriate to the problem at hand. Gather evidence from appropriate sources.

3) Assess – place the evidence against all components of the wider overarching problem. Assess the evidence for its quantity and quality. Evaluate and weigh evidence sources. Determine what the evidence says as a whole.

4) Agree – determine the best way forward and if working with a group, try to achieve consensus based on the evidence and organizational goals. Determine a course of action and begin implementation of the decision.

5) Adapt – revisit goals and needs. Reflect on the success of the implementation. Evaluate the decision and how it has worked in practice. Reflect on your role and actions. Discuss the situation with others and determine any changes required.

Koufogiannakis concludes by reminding us that “Ultimately, evidence, in its many forms, helps us find answers. However, we can’t just accept evidence at face value. We need to better understand evidence – otherwise we don’t really know what ‘proof’ the various pieces of evidence provide.”