
NEO Shop Talk

The blog of the National Network of Libraries of Medicine Evaluation Office

Archive for the ‘News’ Category

Take The Pie, Leave The Pie Chart

Wednesday, November 25th, 2015

Evaluation and data visualization folks may disagree on the best pie to serve at Thanksgiving dinner.  Pumpkin?  Pecan?  A nice silk pie made with chocolate liqueur and tofu? (Thank you, Alton Brown.)

You see, the whole point of charts is to give people an instantaneous understanding of your findings.  Your readers can easily discern differences in bars and lines.  In wedges of pie, not so much. Data visualization expert Stephen Few explained the problem during this interview with the New York Times: “When looking at parts of a whole, the primary task is to rank them to see the relative performance of the parts. That can’t be done easily when relying on angles formed by a slice.”

(Note:  This parts-to-whole angle problem may also explain why most of us can’t understand how our Whole Foods pumpkin pie could possibly have eight servings. Eight? Are you kidding me?)
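
If you would like to see the difference for yourself, below is a minimal sketch in Python using the matplotlib library. It is my own illustration, and the pie varieties and percentages are made up, but it plots the same parts-to-whole data twice: once as a pie chart and once as a sorted bar chart.

```python
import matplotlib.pyplot as plt

# Hypothetical parts-to-whole data (my own example numbers)
labels = ["Pumpkin", "Pecan", "Apple", "Chocolate silk", "Cherry"]
shares = [28, 24, 21, 15, 12]  # percent of pies served

fig, (ax_pie, ax_bar) = plt.subplots(1, 2, figsize=(10, 4))

# The pie: readers must compare angles to rank the slices
ax_pie.pie(shares, labels=labels)
ax_pie.set_title("Pie chart: which slice is third largest?")

# The sorted bar chart: ranking is read directly from bar length
ax_bar.barh(labels[::-1], shares[::-1])  # reverse so the largest bar sits on top
ax_bar.set_title("Sorted bars: ranking at a glance")
ax_bar.set_xlabel("Percent of pies served")

plt.tight_layout()
plt.show()
```

Try asking a colleague to rank the categories from each panel; the bars are read faster and more accurately, which is exactly Few's point about angles.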

So, for today’s pre-Thanksgiving holiday post, I thought I would point you to some online articles about the much used and much vilified pie chart.

First, here’s an article by the American Evaluation Association’s president-elect John Gargani, arguing for the retirement of the venerable pie chart.  He makes points that are repeated in many anti-pie chart blog posts.  But in the interest of objectivity, you should know that agreement to send pie charts to the cosmic dust bin is not universal. Here’s a post by Bruce Gabrielle of Speaking PowerPoint that describes situations where pie charts can shine.

In general, most experts believe that the times and places to use pie charts are few and far between. If you have found one of those rare times, then here’s a post at Better Evaluation with the design tips to follow.

But for heaven’s sake, turn off that three-dimensional feature in your pie chart, or in any chart, for that matter. Nobody wants to see that!

And for humorous examples of what not to do, check out Michael Friendly’s Evil Pies blog.

Data Party Like it’s 2099! How to Throw a Data Party

Friday, November 20th, 2015

What’s a “data party?” We attended a program by evaluator Kylie Hutchinson entitled “It’s a Data Party!” at the AEA 2015 conference last week in Chicago.  A data party is another name for a kind of participatory data analysis, where you gather stakeholders together, show them some of the data that you have gathered and ask them to help analyze it.

Isn’t analyzing the data part of your job?  Here are some reasons you might want to include stakeholders in the data analysis stage:

  • It allows stakeholders to get to know and engage with the data
  • Stakeholders may bring context to the data that will help explain some of the results
  • When stakeholders participate in analyzing the data, they are more likely to understand it and use it
  • Watching their interactions, you can often find out who is the person with the power to act on your recommendations

So how do you throw a data party? First of all, you need to know what you hope to get from the attendees, since you may only be able to hold an event like this once. There are a number of different ways to organize the event.  You might want to consider using a World Cafe format, where everyone works together to explore a set of questions, or you could use an Open Space system in which attendees create their own agenda of questions they want to discuss.  Recently the AEA held a very successful online unconference using MIT’s Unhangout, a format that could also work for an online data party with people in multiple locations.

The kinds of questions Kylie Hutchinson suggested asking at a data party include:

  • What does this data tell you?
  • How does this align with your expectations?
  • What do you think is occurring here and why?
  • What other information do you need to make this actionable?

At the end of the party, it may be time to present some of your own findings and recommendations.  Given the work the attendees have just done, they may be more willing to listen.  As Kylie Hutchinson said, “People support what they helped create.”

 

The OERC on the Road at the American Evaluation Association Conference

Friday, November 13th, 2015

As you read this, the OERC’s Karen Vargas and Cindy Olney are attending the American Evaluation Association’s annual conference in Chicago, hearing about the latest and greatest trends in evaluation.  The AEA conference is always excellent, with evaluators from all disciplines sharing their skills, lessons learned, and new approaches to the art and science of evaluation. Look for our favorite topics from this year’s conference in our future blog posts.

In the meantime, you can get your Friday afternoon evaluation fix from the publicly available resources offered at the AEA web site. The AEA 365 blog has daily posts from AEA members that feature hot topics and rad resources on almost every evaluation topic imaginable.  Conference and other materials are archived in the AEA Public Library.  And don’t worry. The OERC will be back to regular weekly postings next Friday.

"Chicago sunrise 1" by Daniel Schwen - Own work. Licensed under CC BY-SA 4.0 via Commons - https://commons.wikimedia.org/wiki/File:Chicago_sunrise_1.jpg#/media/File:Chicago_sunrise_1.jpg
“Chicago sunrise 1” by Daniel Schwen – Own work. Licensed under CC BY-SA 4.0 via Commons – https://commons.wikimedia.org/wiki/File:Chicago_sunrise_1.jpg#/media/File:Chicago_sunrise_1.jpg

 

Infographics – Guide from NIH Library Informationists

Friday, November 6th, 2015

The Medical Library Association’s October 28 webinar was on Data Visualization, presented by Lisa Federer, NIH Library’s Research Data Informationist.  The webinar was a tour of different aspects of data visualization, including information about elements of design, like color, line, contrast and proximity, as well as loads and loads of specific resources for more information.

For those of you who were not able to attend or would like to know more, Lisa Federer has a LibGuide called Creating Infographics with Inkscape, which contains the resources for a class she taught with NIH Informationist Chris Belter.  The LibGuide includes a PowerPoint from the lecture part of their class. The slides cover design principles and design elements.  Many of the slides have links to resources that you can use to learn more about the topic.  For example:

Vischeck – a cool tool for finding out what the colors in your chart look like to someone who is color blind

10 Commandments of Typography – suggestions for making font combinations that work

The second part of the class is a hands-on section on using Inkscape, a free, open-source graphics program, to make infographics.  Inkscape allows you to use “vector graphics” to design infographics.  What are vector graphics and why use them? You know images that work when they’re small but get all blurry when they get big? Those images are based on pixels. Vector graphics are based on pathways defined by mathematical expressions like lines, curves, and triangles, so they can get larger and smaller without losing any quality. Sounds hard to do, right? Luckily there are tutorials on Inkscape and it’s easier than you might think (you don’t need to know the math…): https://inkscape.org/en/doc/tutorials/basic/tutorial-basic.en.html
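
To get a feel for what “drawing with math” means, here is a tiny sketch of my own (not from the LibGuide) that writes a minimal SVG file, the vector format Inkscape works with, using nothing but the Python standard library:

```python
# A minimal sketch: a tiny SVG image written by hand. The image is stored as
# drawing instructions (a line, a curve, a triangle), not as pixels.
svg = """<svg xmlns="http://www.w3.org/2000/svg" width="200" height="120">
  <line x1="10" y1="110" x2="190" y2="110" stroke="black"/>
  <path d="M 10 110 Q 100 10 190 110" fill="none" stroke="blue"/>
  <polygon points="60,110 100,40 140,110" fill="none" stroke="red"/>
</svg>
"""

with open("example.svg", "w") as f:
    f.write(svg)

# Open example.svg in a browser or in Inkscape and zoom in as far as you like:
# the shapes never blur, because they are redrawn at whatever size you ask for.
```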

If you want to take a look at other vector graphics editors, there are other free ones, like Apache OpenOffice Draw, or ones you may already own, like Adobe Illustrator.  Comparisons with links to detailed information can be found in Wikipedia’s “Comparison of Vector Graphics Editors.”

Boosting Response Rates with Invitation Letters

Friday, October 30th, 2015

"You've got mail" graphicwith mail spelled m@il

Today’s topic: The humble survey invitation letter.

I used to think of the invitation letter (or email) as a “questionnaire delivery device.”  You needed some way to get the URL to your prospective respondents, and the letter (or, more specifically, the email) was how you distributed the link. The invitation email was always an afterthought, hastily composed after the arduous process of developing the questionnaire itself.

Then I was introduced to Donald Dillman’s “Tailored Design Method” and learned that I needed to take as much care with the letter as I did the questionnaire. A carefully crafted invitation has been proven to boost response rates. And response rate is a key concern when conducting surveys, for reasons clearly articulated in this quote from the American Association of Public Opinion Research:

“A low cooperation or response rate does more damage in rendering a survey’s results questionable than a small sample, because there may be no valid way scientifically of inferring the characteristics of the population represented by the non-respondents.” (AAPOR, Best Practices for Research)

With response rate at stake, we need to pay attention to how we write and send out our invitation emails.

This blog post features my most-used tips for writing invitation emails, all of which are included in Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method by Dillman, Smyth, and Christian (2014). Now in its fourth edition, this book is the go-to resource for how to conduct all aspects of the survey process. It is evidence-based, drawing on an extensive body of research literature on survey practice.

Plan for Multiple Contacts

Don’t think “invitation email.”  Think “communication plan,” because Dillman et al. emphasized a need for multiple contacts with participants to elicit good response rates. The book outlines various mailing schedules, but you should plan for a minimum of four contacts:

  • A preliminary email message to let your participants know you will be sending them a questionnaire. (Do not include the questionnaire link)
  • An invitation email with a link to your questionnaire (2-3 days after preliminary letter)
  • A reminder notice, preferably only to those who have not responded (one week after the invitation email)
  • A final reminder notice, also specifically to those who have not responded (one week after the first reminder).

 Tell Them Why Their Feedback Matters

Emphasize how the participants’ feedback will help your organization improve services or programs. This simple request appeals to a common desire among humans to help others. If applicable, emphasize that you need their advice specifically because of their special experience or expertise. It is best to use mail merge to personalize your email messages, so that each participant is personally invited by name to submit their feedback.
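
As a concrete illustration of both ideas, the multiple contacts and the personalized merge, here is a minimal sketch in Python. It is my own example rather than anything from Dillman et al., and the template wording, names, and survey link are hypothetical placeholders:

```python
# A minimal mail-merge sketch (my own illustration): each invitation greets the
# participant by name, and reminders go only to people who have not responded.
INVITE = """Dear {name},

Because of your experience with {program}, your feedback will help us
improve our services. The questionnaire takes about 10 minutes: {link}

Questions? Contact {contact_name} at {contact_email}.
"""

participants = [  # hypothetical contact list
    {"name": "Dana", "email": "dana@example.org", "responded": True},
    {"name": "Lee", "email": "lee@example.org", "responded": False},
]

for p in participants:
    if p["responded"]:
        continue  # skip anyone who has already answered
    body = INVITE.format(
        name=p["name"],
        program="the outreach program",
        link="https://example.org/survey",
        contact_name="Pat Evaluator",
        contact_email="evaluator@example.org",
    )
    print(f"To: {p['email']}\n{body}")  # swap print() for your email system
```

The `responded` flag is what lets the later contacts in your communication plan, the reminders, go only to the people who have not yet answered.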

If you are contacting people who have a relationship with your organization, such as your library users or members of your organization, play up that relationship. Also, make a commitment to share results with them at a later date. (And be sure to keep that commitment.)

Make Sure They Know Who’s Asking

With phishing and email scams abounding, people are leery about clicking on URLs if an email message seems “off” in any way. Make sure they know they can trust your invitation email and survey link. Take opportunities to publicize your institutional affiliation. Incorporate logos or letterhead into your emails, when possible.

Provide names, email addresses and phone numbers of one or two members of your evaluation team, so participants know who to contact with questions or to authenticate the source of the email request. You may never get a call, but they will feel better about answering questions if you give them convenient access to a member of the project team.

It is also helpful to get a public endorsement of your survey project from someone who is known and trusted by your participants.  You can ask someone influential in your organization to send out your preliminary letter on your behalf. Also, you or your champion can publicize your project over social media channels or through organizational newsletters or blogs.

And How You Will Protect Their Information

Be explicit about who will have access to individual-level data and who will know how participants answered specific questions. Be sure you know the difference between anonymity (where no one knows what any given participant specifically said) and confidentiality (where identifiable comments are seen by only a few specific people). You can also let them know how you will protect their identity, but don’t go overboard. Long explanations can also cast doubt on the trustworthiness of your invitation.

Provide Status Updates

While this may seem “so high school,” most of us want to act in a manner consistent with our peer group. So if you casually mention in reminder emails that you are getting great feedback from other respondents, you may motivate the late responders who want to match the behavior of their peers.

Gifts Work Better Than Promises

The research consistently shows that sending a small gift to everyone, with your preliminary or invitation letter, is more effective than promising an incentive to those who complete your questionnaire. If you are bothered by the thought of rewarding those who may never follow through, keep in mind that small tokens (worth $2-3) sent to all participants are the most cost-effective practice involving incentives. More expensive gifts are generally no more influential than small gifts when it comes to response rates. Also, cash works better than gift cards or other nonmonetary incentives, even if the cash is of lesser value.

Beyond Invitation Letters

The emails in your survey projects are good tools for enhancing response rate, but questionnaire design also matters. Visual layout, item order, and wording influence response rate as well. While questionnaire design is beyond the scope of today’s post, I recommend The Tailored Design Method to anyone who plans to conduct survey-based evaluation in the near future. The complete source is provided below.

Source: Dillman DA, Smyth JD, and Christian LM. Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method, 4th edition. Hoboken, NJ: Wiley; 2014.

 

 

 

Group Learning about Evaluation

Friday, October 23rd, 2015


I recently got to participate in a very successful roundtable at a library conference.  I co-moderated an evaluation roundtable entitled “Library assessment: You’ve measured your success – now how do you get people to listen?” with OU-Tulsa Schusterman Library Associate Director Katie Prentice at the South Central Chapter of the Medical Library Association Annual Meeting in Little Rock, AR.

What makes roundtables unique among educational opportunities at library conferences is that, unlike presentations or papers where attendees sit and listen, in a roundtable everyone can participate. It is a moderated discussion on a given topic among the people who attend, and since anyone can chime in, learning is active instead of passive.

About 25 people attended this roundtable and enthusiastically participated in a discussion about library assessment and evaluation data. Katie and I led the discussion with questions starting from what kind of data you collect at your library and leading to what libraries do with the data and how to make it work better for them. Our goal was to use our questions to allow the issues and solutions to come from the attendees themselves.

As an example, when we asked the question “what would you really like to know about your library and what do you dream of finding out about your users?” one hospital librarian said that she wanted to know how doctors were using the information and how it impacted the patients. Katie Prentice asked “can anyone help her with this?” and another hospital librarian responded that she sends emails to some of her doctors to ask for a sentence or two describing how the information was used.  These sentences, when collected and analyzed, could be a powerful tool to show hospital administration the importance of the library to patient outcomes.

Other kinds of evaluation ideas that were generated from attendees at this roundtable were:

  • using heat map software to determine where people go most often on your website
  • having student workers note what pieces of furniture are being used to improve furniture types and placement in the library
  • using a product like Constant Contact or MailChimp to send library newsletters with assessment data to the doctors and employees at hospitals.

While not all roundtables at conferences are this successful, this roundtable demonstrated the ability of librarians brought together in a group to learn from each other and solve problems.

OERC Travels: The Michigan Health Sciences Libraries Association

Friday, October 16th, 2015


I love to talk about evaluation strategies that can make our organizational successes more visible.  So I was thrilled for the opportunity to team-teach a continuing education workshop with Beth Layton,  Associate Director of the NN/LM Greater Midwest Region, at the annual Michigan Health Sciences Libraries Association (MHSLA) meeting. We taught Measuring What Matters to Stakeholders on September 25 in Flint, MI.

Our primary objective was to help participants discover how they can use evaluation to collect evidence of their libraries’ contributions to their larger organizations. Participants first identified their library activities that directly support the mission of their organizations and goals of key decision-makers. They then focused on developing evaluation plans for activities with high impact potential. The plans emphasized metrics related to organizational mission. We talked about analytic strategies, including monetary-based approaches such as cost-benefit analysis.  The workshop concluded with tips for communicating with stakeholders.
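
If cost-benefit analysis is new to you, here is a minimal sketch of the basic arithmetic. The service counts, per-use values, and cost are entirely hypothetical numbers of my own invention; this is not the NN/LM calculators themselves:

```python
# A minimal, illustrative cost-benefit sketch for library services.
services = {
    # service: (annual count, estimated market value per use in dollars)
    "mediated literature searches": (250, 90.0),
    "interlibrary loans": (1200, 35.0),
    "reference questions": (3000, 15.0),
}

total_benefit = sum(count * value for count, value in services.values())
total_cost = 60_000.0  # hypothetical annual cost of providing these services

ratio = total_benefit / total_cost
print(f"Estimated annual benefit: ${total_benefit:,.0f}")
print(f"Cost-benefit ratio: {ratio:.2f} (each $1 spent returns about ${ratio:.2f} in value)")
```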

There is a 10-year history behind this workshop. The concept of linking evaluation and library advocacy was introduced to the NN/LM by Maryanne Blake from the NN/LM Pacific Northwest Region (now retired), and Betsy Kelly, Assessment and Evaluation Coordinator at the NN/LM MidContinental Region. They developed the first NN/LM evaluation-for-library-advocacy workshop, Measuring Your Impact (MYI), which exploded in popularity and was taught by regional medical library staff throughout the country. Betsy made additional contributions in this area, including her value and cost-benefit analysis calculators. These calculators are publicly available and have been featured in various NN/LM evaluation classes. Several regions have adapted MYI into shorter workshops to better fit the meeting schedules of associations that host NN/LM workshops.

Beth and I merged the GMR and OERC short versions for our MHSLA workshop. We stuck with the basic goals of the original MYI class, but introduced some new exercises and new approaches to communicating evaluation results.  I developed a new worksheet to help participants practice matching library activities to organizational mission and goals.  For the communication part of the workshop, we practiced messaging with Nancy Duarte’s “sparkline” presentation structure (presented here in her TEDTalk) and Tim David’s version of elevator speeches.

The class was well-received, with 92% of participants giving us an “A” rating (on a scale of “A” to “F”). Eighty-two percent of participants said the class improved their confidence about using evaluation to demonstrate their library’s value.  Asked how likely they were to use the information from the class, 73% said “very likely” and 27% said “somewhat likely.”  A quick qualitative analysis of their written comments on the evaluation form indicated that the strategies they were most interested in pursuing were Appreciative Inquiry interviews, logic models, and elevator pitches (as described in OERC blog posts here and here).

I want to express my appreciation to Jacqueline Leskovec at the NN/LM GMR, who bravely piloted some of the new content in advance of our MHSLA workshop. Jacqueline is one of the GMR coordinators who adapted MYI for her region. She was on the North Dakota Library Association’s meeting program to teach Measuring What Matters to Stakeholders on September 17, more than a week before Beth and I traveled to Flint. Jacqueline tried out our new material and reported back.

I also want to thank our MHSLA participants, who made the class so engaging and enjoyable.  If any of our attendees have successes or lessons learned from using the strategies covered in the class, please contact me at olneyc@uw.edu. I would love to feature your experience in a blog post.

Cindy roleplays elevator pitches with Oren Duda from Windsor Regional Hospital Library.

Creative Annual Reports

Friday, October 9th, 2015

Ah, the annual report – at its best we expect to see a glossy booklet with pie charts, short paragraphs and some quotes. At its worst it can be pages of dry text. Our main hope with annual reports is that our stakeholders and others will read them and be impressed with the successes of our organizations.

Last month I ran across the annual report from the Nowra Public Library in New South Wales, Australia, which was so compelling and understandable that over 100,000 people have viewed it on YouTube:

(Video: the Nowra Public Library annual report on YouTube.)

Since most organizations don’t have the resources to do their own music video (e.g. singers, writers, silly costumes), I thought I would look at a few other examples to consider when it’s time to do your annual report.

One of my all-time favorites is the annual report from the Schusterman Library of The University of Oklahoma-Tulsa. Their annual report is an infographic that shows the data that is collected, but also describes the data in such a way that 1) you have a better feel for what is going on in the library; and also 2) you might think “I didn’t know they would help me with that!”  For example: “7,274 Reference questions answered in person, by phone, by email, and instant message or text on everything from ADHD and child welfare to decision trees, LEED homes, and census reporting.” It is available on their website, and the librarians at the Schusterman Library say they frequently find students looking at it.

The Michigan State University College of Education won a gold ADDY and a Best in Show Award for their 2012 Annual Report (an ADDY is the advertising industry’s largest competition).  Their report featured a tri-fold, die-cut skyline that presented the college’s missions and strengths with an emphasis on “college as community.” The annual report also included a video and a website that gives detailed narratives that show institutional successes in terms of personal stories.

Of course, not all institutions want an unusual annual report.  But it is important to consider the target audience.  Annual reports reach the upper administration, potential funders, and patrons of the library. The success of this year’s annual report might shape library users’ view of the library for years to come.

The OERC is on the Road Again

Friday, September 25th, 2015

The OERC is on the road again.  Today, Cindy and Beth Layton, Associate Director of the NN/LM Greater Midwest Region, are team-teaching Measuring What Matters to Stakeholders at the Michigan Health Sciences Libraries Association’s annual conference in Flint, MI.


This workshop covers strategies for using evaluation to enhance and communicate a library’s value to organizational decision-makers and stakeholders who influence decision makers. The workshop combines updated information with material from the NN/LM MidContinental Region’s Measuring Your Impact and the OERC’s Valuing Your Library workshops that have been taught by a number of regional medical library staff members over the past decade.

On Saturday, Karen is presenting a brand-new workshop for the Texas Library Association’s District 8 Conference called Adding Meaning to Planning: A Step-by-Step Method for Involving Your Community in Meaningful Library Planning.


The workshop is a method of involving community members in creating pain-free logic models to ensure that the long-term vision is always in sight when planning.  Karen wrote a blog entry about creating “tearless” logic models here: http://nnlm.gov/evaluation/blog/2015/04/10/tearless-logic-models/  This is Karen’s first experience creating and delivering a workshop that is purely about library evaluation.

The NN/LM travel season is about to go into full swing.  We know we aren’t the only ones out and about with presentations, trainings, and exhibits.  So safe travels. And we will see you in a week with another OERC blog post.

Which Online Survey Tool Should I Use? A Review of Reviews

Friday, September 4th, 2015

Recently we faced the realization that we would have to reevaluate the online survey tool that we have been using. We thought that we would share some of the things that we learn along the way.

First of all, finding a place that evaluates survey products (like SurveyMonkey or SurveyGizmo) is not as easy as going to Consumer Reports or Amazon (or CNET, Epinions, or Buzzillions).  A number of websites provide reviews of survey tools, but their quality is highly varied.   So for this week, our project has been to compare review websites to see what we can learn from and about them.

Here are the best ones I could find that compare online survey tools:

Zapier.com’s Ultimate Guide to Forms and Surveys, Chapter 7 “The 20 Best Online Survey Builder Tools”

This resource compares 20 different online survey tools. There is a chart with a brief statement of what each survey tool is best for, what you get for free, and the lowest plan cost. Additionally, there is a paragraph description of each tool and what it does best.  Note: this is part of an eBook published in 2015 which includes chapters like “The Best Online Form Builders for Every Task.”

Appstorm.net’s “18 Awesome Survey & Poll Apps”

This review was posted on May 27, 2015, which reassures us that the information is most likely up to date.  While the descriptions are very brief, the list is good for a quick comparison of the survey products. Each review notes whether or not there is a free account, whether the surveys can be customized, and whether there are ready-made templates.

Capterra.com’s “Top Survey Software Products”

This resource appears to be almost too good to be true. Alas, no date is shown, which means the specifics in the comparisons might not be accurate.  Nevertheless, this website lists over 200 survey software products, has separate profile pages on each product (with varying amounts of detail), and lists the features that each product offers.  You can even narrow down the surveys you are looking for by filtering by feature.  Hopefully the features in Capterra’s database are kept updated for each product.  One thing to point out is that at least two fairly well-known survey products (that I know of) are not in their list.

AppAppeal.com’s “Top 31 Free Survey Apps”

Another review site with no date listed. This one compares 31 apps by popularity, presumably in the year the article was written. One thing that is unique about this review site is that each in-depth review includes the history and popularity of the app, how it differs from other apps, and who they would recommend the app to.  Many of the reviews include videos showing how to use the app.  Pretty cool.

TopTenReviews.com’s 2015 Best Survey Software Reviews and Comparisons

This website has the feel of Consumer Reports. It has a long article explaining why you would use survey software, how and what the reviewers tested, and the kinds of things that are important when selecting survey software. Also like Consumer Reports, it has ratings of each product (including the experiences of the business, the respondents, and the quality of the support), and individual reviews of each product showing pros and cons. Because the date is included in the review name, the information is fairly current.

This is a starting point. There are individual reviews of online survey products on a variety of websites and blogs, which are not included here.  Stay tuned for more information on online survey tools as we move forward.

 


Funded by the National Library of Medicine under contract # HHS-N-276-2011-00008-C.