
Archive for the ‘News’ Category

Brush Up On Your Excel Skills

If you do evaluation, you likely use Excel, so I recommend bookmarking this web page of short videos with excellent instructions and practical tips for analyzing data with Excel. Ann Emery produced these videos, most of which are 2-5 minutes long. Some cover the basics, such as how to calculate descriptive statistics. If you find pivot tables difficult to grasp, there’s a short video describing their components. For the experienced user, there are videos about more advanced topics, such as how to automate dashboards using Excel and Word.
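
If you ever need to reproduce the same analyses outside Excel, here is a minimal pandas sketch of two tasks the videos cover: descriptive statistics and a simple pivot table. The dataset and column names below are invented for illustration.

```python
# A rough pandas parallel to two tasks covered in the videos:
# descriptive statistics and a simple pivot table.
# The data and column names are made up for illustration.
import pandas as pd

survey = pd.DataFrame({
    "site":     ["Library A", "Library A", "Library B", "Library B", "Library C"],
    "workshop": ["Basics", "Advanced", "Basics", "Basics", "Advanced"],
    "rating":   [4, 5, 3, 4, 5],
})

# Descriptive statistics: count, mean, standard deviation, min, quartiles, max.
print(survey["rating"].describe())

# Pivot table: average rating by site and workshop, analogous to
# dragging fields into rows, columns, and values in an Excel PivotTable.
pivot = survey.pivot_table(values="rating", index="site",
                           columns="workshop", aggfunc="mean")
print(pivot)
```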

Emery is an evaluator and data analyst with Innovation Network in Washington, DC.  She has a special interest in data visualization and covers that and other topics in her blog, also available at her website.

How to Use Hashtags to Increase Social Media Presence

[Image: Hashtag #like; “like 2” by misspixels on Flickr, Creative Commons license]

Following our coverage of evaluating social media activities last week, have you determined that social media channels are appropriate for your organization?

If so, you will quickly encounter hashtags, which are user-created categories prefaced with a pound sign. Hashtags were once limited to Twitter but are now used on most social media sites, including Facebook and Google+. Conversational, concise, and consistent use of up to two hashtags per social media message can result in roughly double the user engagement of messages without them. For more statistics on Twitter and user engagement, Buffer’s coverage at http://blog.bufferapp.com/10-new-twitter-stats-twitter-statistics-to-help-you-reach-your-followers is an excellent overview.

What are some of the ways to show that hashtags increase user engagement with your organization’s message? Look for performance indicators of reposts (the use of ‘Share’ on Facebook or retweets on Twitter), replies (comments under the message from Facebook followers, replies to the tweet from Twitter users), the number of clicks to any links included in your message (ideally to your organization’s website and resources), and hashtag usage frequency.
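
As a rough illustration of how those indicators can be compared, here is a minimal Python sketch that tallies engagement for posts with and without hashtags. The post records and field names are hypothetical; in practice the numbers would come from each platform’s analytics export.

```python
# Minimal sketch: compare engagement for posts with and without hashtags.
# The records and field names below are hypothetical examples.
posts = [
    {"text": "New #health resources available", "reposts": 12, "replies": 4, "link_clicks": 30},
    {"text": "New resources available",          "reposts": 5,  "replies": 1, "link_clicks": 11},
]

def engagement(post):
    # One simple indicator: reposts + replies + clicks on included links.
    return post["reposts"] + post["replies"] + post["link_clicks"]

with_tags = [engagement(p) for p in posts if "#" in p["text"]]
without_tags = [engagement(p) for p in posts if "#" not in p["text"]]

print("Average engagement, with hashtags:   ", sum(with_tags) / len(with_tags))
print("Average engagement, without hashtags:", sum(without_tags) / len(without_tags))
```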

For tips on how to track these performance indicators, plus additional statistics on hashtag creation and use, check out the helpful infographic at http://www.digitalinformationworld.com/2014/04/using-hashtags-to-boost-your-social-presence-infographic.html.

Healthy People 2020 and Infographics

HealthyPeople.gov provides science-based, 10-year national objectives to help improve the health of all Americans. These objectives focus on encouraging collaboration across communities, empowering people to make individual health choices, and measuring the impact of prevention activities. We are currently in the time frame known as Healthy People 2020, with goals and objectives to achieve by the year 2020 listed at http://www.healthypeople.gov/2020/about.

There are 42 topic areas with over 1,200 objectives for Healthy People 2020, so a smaller subset of objectives has been identified as Leading Health Indicators (LHI) to communicate high-priority health issues and actions for addressing them. These LHI objectives fall within 12 areas: access to health services; clinical preventive services; environmental quality; injury and violence; maternal, infant, and child health; mental health; nutrition, physical activity, and obesity; oral health; reproductive and sexual health; social determinants; substance abuse; and tobacco.

[Infographic: children’s exposure to secondhand smoke, by health insurance status]

Monthly Leading Health Indicators (LHI) infographics help visually communicate Healthy People 2020 data, such as the one above about children’s exposure to secondhand smoke by health insurance status. They are freely available for download and use by the public at http://www.healthypeople.gov/2020/LHI/infographicGallery.aspx. The infographics are arranged in chronological order, with the most recent month at the top, and each is tagged with its LHI area so you can quickly identify and click the health topics of interest to you and your programs. They are also a great source of data, both as raw downloads and as visualizations, for presentations and publications related to your health information outreach programs in the LHI areas. Be sure to sign up for the monthly newsletter at the infographic site so you won’t miss the latest one each month!


New Journal: Systematic Reviews

Librarians’ expert searching skills provide some great opportunities for collaboration with researchers. BioMed Central’s new open access journal Systematic Reviews is devoted to a type of research that relies on the specialized expert searching librarians can provide for their communities. More than a source of protocols and a record of others’ work, this journal has great potential for those of us in academia who want to publish articles sharing with our colleagues what we have done.

Here’s more information from the Aims and Scope:

Systematic Reviews encompasses all aspects of the design, conduct and reporting of systematic reviews. The journal aims to publish high quality systematic review products including systematic review protocols, systematic reviews related to a very broad definition of health, rapid reviews, updates of already completed systematic reviews, and methods research related to the science of systematic reviews, such as decision modeling. The journal also aims to ensure that the results of all well-conducted systematic reviews are published, regardless of their outcome.

It is a long-term goal of the journal to ensure all systematic reviews are prospectively registered in an appropriate database, such as PROSPERO, as these resources for registration become available and are endorsed by the scientific community.

Article types include:

  • Research Articles
  • Commentaries
  • Letters
  • Methodologies
  • Protocols
  • Review Updates

The editors-in-chief comprise an international group hailing from the University of Ottawa; the RAND Corporation and UCLA; and the University of York.

Take a look at this journal! It could be a source of inspiration for any librarian whose emphasis is on expert searching.

New SurveyMonkey mobile app

Attention iPad and iPhone users: SurveyMonkey recently launched a mobile app so you can create, send, and monitor your surveys from your phone or tablet. The app is free, although you need a SurveyMonkey account to use it.

With the SurveyMonkey app, you no longer have to rely on your computer to design and manage a survey. The app also allows you to conveniently view your data from any location with Internet access. I think the most notable benefit is that the analytic reports are optimized for mobile devices and are easy to read on small screens.

I have been asked how this app compares to QuickTapSurvey (see my previous blog entry). In my opinion, the app does not make SurveyMonkey comparable to QuickTapSurvey, which is designed specifically to collect onsite visitor feedback in informal settings such as exhibits and museums. SurveyMonkey, by comparison, is designed to collect data through email, web sites, or social media. Both apps work best in their respective settings. I think you could adapt SurveyMonkey to collect data at face-to-face events (if there is onsite Internet access), but it probably won’t work as smoothly as QuickTapSurvey.

For more information about the SurveyMonkey mobile app, visit the SurveyMonkey website.

Maximize your response rate

Did you know that the American Medical Association has a specific recommendation for its authors about questionnaire response rate? Here it is, from the JAMA Instructions for Authors:

Survey Research
Manuscripts reporting survey data, such as studies involving patients, clinicians, the public, or others, should report data collected as recently as possible, ideally within the past 2 years. Survey studies should have sufficient response rates (generally at least 60%) and appropriate characterization of nonresponders to ensure that nonresponse bias does not threaten the validity of the findings. For most surveys, such as those conducted by telephone, personal interviews (eg, drawn from a sample of households), mail, e-mail, or via the web, authors are encouraged to report the survey outcome rates using standard definitions and metrics, such as those proposed by the American Association for Public Opinion Research.

Meanwhile, response rates to questionnaires have been declining over the past 20 years, as reported by the Pew Research Center in “The Problem of Declining Response Rates.” Why should we care about the AMA’s recommendation regarding questionnaire response rates? Many of us send questionnaires to health care professionals who, like physicians, are very busy and might not pay attention to our efforts to learn about them. Even JAMA authors such as Johnson and Wislar have pointed out that “60% is only a ‘rule of thumb’ that masks a more complex issue” (Johnson TP, Wislar JS. “Response Rates and Nonresponse Errors in Surveys.” JAMA, May 2, 2012, Vol 307, No. 17, p. 1805). These authors recommend evaluating nonresponse bias in order to characterize differences between those who respond and those who don’t. Standard techniques include the following (a small worked sketch appears after the list):

  • Conduct a follow-up survey with nonrespondents
  • Use data about your sampling frame and study population to compare respondents to nonrespondents
  • Compare the sample with other data sources
  • Compare early and late respondents
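
To make the arithmetic concrete, here is a minimal Python sketch that checks a response rate against the roughly 60% rule of thumb and compares early and late respondents, one of the techniques listed above (late respondents are sometimes used as a rough proxy for nonrespondents). All numbers are invented for illustration.

```python
# Minimal sketch: response rate and a crude early-vs-late respondent comparison.
# All numbers below are invented for illustration.
sample_size = 150          # questionnaires sent
responses = [              # (days to respond, satisfaction score) per respondent
    (2, 4), (3, 5), (4, 4), (5, 3), (6, 4),
    (12, 3), (14, 2), (15, 3), (18, 2), (21, 3),
]

response_rate = len(responses) / sample_size
print(f"Response rate: {response_rate:.0%} (rule of thumb: at least 60%)")

# Compare early and late respondents on the outcome of interest;
# a large gap between the groups is a warning sign of nonresponse bias.
early = [score for days, score in responses if days <= 7]
late = [score for days, score in responses if days > 7]
print("Early respondents, mean score:", sum(early) / len(early))
print("Late respondents, mean score: ", sum(late) / len(late))
```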

Johnson and Wislar’s article is not open access, unfortunately, but you can find more suggestions about increasing response rates to your questionnaires in two recent AEA365 blog posts that are open access:

Find more useful advice (e.g., make questionnaires short, personalize your mailings, send full reminder packs to nonrespondents) in this open access article: Sahlqvist S, et al. “Effect of questionnaire length, personalisation and reminder type on response rate to a complex postal survey: randomised controlled trial.” BMC Medical Research Methodology 2011, 11:62.

Webinars and Workshops about Evaluating Outreach

The National Network of Libraries of Medicine Outreach Evaluation Resource Center (OERC) offers a range of webinars and workshops upon request by network members and coordinators from the various regions. Take a look at the list and see if one of the options appeals to you. To request a workshop or webinar, contact Susan Barnes.

The workshops were designed as face-to-face learning opportunities, but we can tailor them to meet distance learning needs by distilling them into briefer webinars or offering them as a series of 1-hour webinars.

Don’t see what you’re looking for on this list? Then please contact Susan and let her know!

We’re looking forward to hearing from you.

Interview tips: Talking with participants during a usability test

The Nielsen Norman Group (NNG) conducts research and publishes information about user experience with interfaces. NNG was an early critic of the troubled healthcare.gov web site (“Healthcare.gov’s Account Setup: 10 Broken Usability Guidelines”). A recent post, “Talking with participants during a usability test,” provides tips for facilitating usability tests that can be useful whenever you’re facilitating a discussion or conducting an observation. When in doubt about whether to speak to a participant, count to 10 before deciding whether to say something. Consider using the “Echo,” “Boomerang,” or “Columbo” approaches:

  • Echo–repeat the last words or phrase, using an interrogatory tone.
  • Boomerang–formulate a nonthreatening question that “pushes” a user’s comment back and causes them to think of a response for you, such as “What would you do if you were on your own?”
  • Columbo–be smart but don’t act that way, as in the “Columbo” TV series from the 1960s and 1970s starring Peter Falk.

The full article “Talking with participants during a usability test” provides audio examples of each technique. You can find much more information about usability testing on the Nielsen Norman Group’s web site, such as “How to Conduct Usability Studies” and “Usability 101: Introduction to Usability.”

Cleaning Up Your Charts

So how are those New Year’s resolutions going?

Many of us like to start the year resolving to clean up some part of our lives. Our diet. Our spending habits. The five years of magazine subscriptions sitting by our recliner.

Here’s another suggestion: Resolve to clean up “chart junk” in the charts you add to PowerPoint presentations or written reports.

Now I can pack information into a bar chart with the best of them. But it is no longer in vogue to clutter charts with data labels, gridlines, and detailed legends. This is not just a fashion statement, either. Design experts point out that charts should make their point without the inclusion of a bunch of distracting details. If the main point of your chart is not visually obvious, you either have not designed it correctly or you are presenting a finding that is not particularly significant.

So the next time you create a chart, consider these suggestions (a minimal charting sketch follows the list):

  • Use your title to communicate the main point of the chart. Take a tip from newspaper headlines and make your title a complete sentence.
  • Don’t use three-dimensional displays. They interfere with people’s comprehension of charts.
  • Ditch the gridlines or make them faint so they don’t clutter the view.
  • Use contrast to make your point. Add a bright color to the bar or line that carries the main point and use gray or another faint color for the comparison bars or lines.
  • Be careful in picking colors. Use contrasting colors that are distinguishable to people with colorblindness. If your report is going to be printed, be sure the contrast still shows up when presented in black-and-white.
  • Consider not using data labels, or just label the bar or line associated with your main point.
  • Remove legends and apply legend labels inside the bars or at the end of lines.

For more comprehensive information on eliminating chart junk, check out this article:

Evergreen S, Metzner C. Design principles for data visualization in evaluation. In Azzam T, Evergreen S (eds). Data Visualization, Part 2. New Directions for Evaluation. Winter 2013, 5-20.

“Evidence” — what does that mean?

In our health information outreach work we are expected to provide evidence of the value of our work, but there are varying definitions of the word “evidence.” The classical evidence-based medicine approach (featuring results from randomized controlled clinical trials) is a model that is not always relevant to our work. At the 2013 EBLIP7 meeting in Saskatoon, Saskatchewan, Canada, Denise Koufogiannakis presented a keynote address that is now available as an open-access article on the web:

“What We Talk About When We Talk About Evidence.” Evidence Based Library and Information Practice 2013;8(4).

This article looks at various interpretations of what it means to provide “evidence,” such as:

  • theoretical (ideas, concepts, and models to explain how and why something works),
  • empirical (measuring outcomes and effectiveness via empirical research), and
  • experiential (people’s experiences with an intervention).

Koufogiannakis points out that academic librarians’ decisions are usually made by groups of people working together, and she proposes a new model for evidence-based library and information practice:

1) Articulate – come to an understanding of the problem and articulate it. Set boundaries and clearly articulate a problem that requires a decision.

2) Assemble – assemble evidence from multiple sources that are most appropriate to the problem at hand. Gather evidence from appropriate sources.

3) Assess – place the evidence against all components of the wider overarching problem. Assess the evidence for its quantity and quality. Evaluate and weigh evidence sources. Determine what the evidence says as a whole.

4) Agree – determine the best way forward and if working with a group, try to achieve consensus based on the evidence and organizational goals. Determine a course of action and begin implementation of the decision.

5) Adapt – revisit goals and needs. Reflect on the success of the implementation. Evaluate the decision and how it has worked in practice. Reflect on your role and actions. Discuss the situation with others and determine any changes required.

Koufogiannakis concludes by reminding us that “Ultimately, evidence, in its many forms, helps us find answers. However, we can’t just accept evidence at face value. We need to better understand evidence – otherwise we don’t really know what ‘proof’ the various pieces of evidence provide.”