
NEO Shop Talk

The blog of the National Network of Libraries of Medicine Evaluation Office

Archive for the ‘News’ Category

New Outcome Measurement Resource for Public Libraries

Friday, January 22nd, 2016


About six months ago, the Public Library Association (PLA) launched a service called Project Outcome, which I have been following with interest. An article entitled “Project Outcome – Looking Back, Looking Forward” by Carolyn Anthony, director of the Skokie Public Library (IL), recently published in Public Libraries Online, describes the successes of libraries using this service over those six months.

Project Outcome is an online resource that provides evaluation tools designed to measure the impact of library programs and services, such as summer reading programs or career development programming. It also provides ready-made reports and data dashboards that give libraries and stakeholders immediate data on their programs’ outcomes. In addition, Project Outcome provides support and peer sharing opportunities to address common challenges and increase capacity for outcomes evaluation.

Here are some of the things that make me the most excited about this service:

  1. Project Outcome has managed to create a structured approach for program outcome evaluation that can be used online by public libraries of all shapes and sizes by people who have not done outcome evaluation before.  Along with tools for collecting data, the resource has tutorials and support for libraries doing outcomes evaluation for the first time.
  2. Continued support and peer sharing as an integral part of the service means that PLA is building a community of librarians who use outcome evaluation.
  3. The stories that are shared by the peers as described in the article will increase the understanding that evaluation isn’t something forced on you from outside, but can be something that helps you to create a better library and enhance the meaning of your library’s programs.
  4. This process teaches librarians to start with the evaluation question (“decide what you want to learn about outcomes in your community”) and a plan for what to do with the findings. And the process ends with successfully communicating your findings to stakeholders and implementing next steps.
  5. Lastly, I love that Project Outcome and the PLA Performance Measurement Task Force are planning the next iteration of their project, which will measure whether program participants followed through with their intended outcomes.  It will be very interesting to find out how this long-term outcome evaluation turns out.

I’ll end with this statement from Carolyn Anthony: “the opportunity to spark internal conversations and shift the way libraries think about their programs and services is what outcome measurement is all about.”

Simply Elegant Evaluation: GMR’s Pilot Assessment of a Chapter Exhibit

Friday, January 15th, 2016

If you spend any time with librarians who work for the National Network of Libraries of Medicine (NN/LM), you’ll likely hear about their adventures with conference exhibits. Exhibiting is a typical outreach activity for the NN/LM Regional Medical Libraries (RMLs), the eight health sciences libraries that lead other libraries and organizations in their regions in promoting the fine health information resources of the National Library of Medicine (NLM) and the National Institutes of Health.  The partnering organizations are called “network members” and, together with the RMLs, make up the NN/LM.

Jacqueline Leskovec

Exhibiting is quite an endeavor. It requires muscles for hauling equipment and supplies. You have to be friendly and outgoing when your feet hurt and you’re fighting jet lag. You need creative problem-solving skills when you’re in one state and your materials are stuck in another.

More than one RML outreach librarian has asked the question: Is exhibiting worth it?

Jacqueline Leskovec, the Outreach, Planning, and Evaluation Coordinator at the NN/LM Greater Midwest Regional Medical Library (GMR), decided to investigate this question last October. She specifically chose to assess a particular type of NN/LM exhibit: those held at Medical Library Association chapter meetings.  The question had been raised at a GMR staff meeting: what is the value of exhibiting at a conference where most attendees are medical librarians, many of whom already know about NLM and NIH resources?

Jacqueline decided to look at the question from a different angle. Could they consider, instead, the networking potential of their exhibit? The NN/LM runs on relationships between regional medical library staff and other librarians in their respective regions. Possibly the booth’s value was that it provided an opportunity for the GMR staff to meet with librarians from both long-standing and potential member organizations of the GMR.

Collecting Feedback

Jacqueline decided to ask two simple evaluation questions.  First, did existing GMR users stop by the exhibit booth to visit with the GMR staff at the chapter meeting booth?  Second, did the booth provide the GMR staff with opportunities to meet librarians who were unaware of the NN/LM? In a nutshell, the questions focused on the booth’s potential to promote active participation in the NN/LM. This was a valid goal for an exhibit targeting this particular audience, where the GMR could find partners to support the network’s mission of promoting NLM resources.

She worked with the OERC to develop a point-of-contact questionnaire that she administered to visitors using an iPad. Her questionnaire had five items that people responded to via touch screen.  She chose the app Quick Tap Survey because it produced an attractive questionnaire, data could be collected without an Internet connection, and she could purchase a one-month subscription for the software.  The app also has a feature that allows the administrator to randomly pull a name for a door prize. Jacqueline used this feature to give away an NLM portfolio that was prominently displayed on the exhibit table. (Participation was voluntary, and the personally identifiable information was deleted after the drawing.)

Jacqueline stood in front of the booth to attract visitors, a practice she uses at all exhibits. She did not find that the questionnaire created any barriers to holding conversations with visitors. Quite the contrary, many were intrigued with the technology. Almost no one turned down her request to complete the form. Of the 120 conference attendees (the count reported by the Midwest MLA chapter), 38 (32%) visited the GMR booth and virtually all agreed to complete the questionnaire.

What Did GMR Learn?

Jacqueline learned that 50% of the visitors came to the booth specifically to visit with GMR staff, while 26% came to get NLM resources.  This confirmed that the visits were more about networking than information-seeking about NLM or NIH resources. She also learned that more than half were return visitors who had stopped by at past conferences, while 46% had never visited the booth before.  It appeared that the booth served equally as a way for GMR staff to talk with existing users and to meet potential new ones. Return visitors were also the more likely users of the GMR: 68% said that the GMR was the first place they would seek answers to questions they had about NLM or NIH resources. (Although one added that she would first look online, then contact the GMR if she couldn’t find the answer on her own.)  In contrast, 56% of new booth visitors said they usually sought help from a friend or colleague; only 26% would contact the GMR. These findings do not indicate that exhibits cause librarians to become more involved with the GMR. However, when the GMR offers opportunities for face-to-face interaction, its users take advantage of them.

Visitors also got an opportunity to voice their opinion about the continuation of GMR exhibits at chapter meetings. There was fairly universal agreement: 92% said they thought the GMR should continue. The other 8% said they weren’t sure, but no one said GMR should stop.

Lessons learned

Jacqueline found it was easy to get people to take her questionnaire, particularly with a smooth application like Quick Tap Survey. She also learned that, regardless of the care she took in developing her questions, she still had at least one item that could have been worded better. However, tweaks can easily be implemented for future exhibits.

Overall, she said this assessment project added depth to the booth assessments that the GMR typically conducts. Previous assessments focused on describing booth traffic, such as the number of visitors, staff hours in the booth, or the number of promotional materials distributed. This project described the actual visitors and what they got out of the exhibit.

Epilogue: Why the OERC Loves This Project

We love this project because Jacqueline thought carefully about the outcomes of exhibiting to this particular audience and designed her questionnaire accordingly.  She recognizes that exhibits at chapter meetings are a specific type of event. The goals of NN/LM exhibits at other types of conferences are different, so the questionnaires would have to be adapted for those goals.

We also love this project because it shows that you can assess exhibits. Back in the day, point-of-contact assessment required paper-and-pencil methods.  It was a data collection approach that seemed likely to be self-defeating. Visitors would cut a wide path to avoid requests to fill out a form. Now that we have the technology (tablets and easy-to-use apps) that makes the task less daunting, the OERC has been promoting the idea of exhibit assessment.  Jacqueline’s project is proof that it can be done!

The OERC Blog – Moving Forward

Friday, January 8th, 2016

turtle climbing up staircase

Since last week’s post, the OERC has been looking at some additional data about the blog in order to update our online communications plan going forward. The earlier OERC strategy had been to use social media to increase the use of evaluation resources, the OERC’s educational services, and the OERC’s coaching services. These continue to be the goals of the OERC’s plan. However, based on the following pieces of information, a new strategy has emerged.

  • The OERC Blog is increasing in popularity. As reported last week, more people find it, share it with their regions, and engage with it by clicking on the links than ever before.
  • The blog always has new content and is time-intensive to create: it takes approximately 6 person-hours each week to write and publish new content.
  • Although the OERC does not have a Facebook page, and the OERC Twitter account @nnlmOERC has been used primarily to promote the blog, Facebook still refers more people to the blog than Twitter does (this was kind of a shocker for us!)

We feel that the OERC Blog, based on the results described in last week’s post, has become one of the most successful products of the OERC. The blog has become a source of educational content, and is itself an evaluation resource in need of promotion. Because of this, our Online Communications Plan going forward has a special focus on promoting the blog. Here are some of the new process goals for this purpose.

  • Facebook: The OERC will create a Facebook page that will promote the blog, link to other online evaluation resources, and show photos of what the OERC team is up to.
  • Twitter: Karen and Cindy will publish at least one additional tweet per week to increase the OERC’s Twitter presence. These will include retweets to build social capital, which may lead to more retweets of our blog tweets (here is an interesting dissertation by Thomas Plotkowiak explaining this).
  • Training: We will make a point of promoting the blog during our in-person classes and webinars. For example, we may refer people to articles in our blog that supplement the content in the training sessions.
  • Email: The blog URL will be added to Karen and Cindy’s email signatures.

So, what kinds of things will we measure? Naturally we want process measurements (showing that things are working the way they should along the way) and outcome measurements (showing that we are meeting our goals).

Here are our process goals, which are the activities we are committing to this year:

  • 52 blog posts a year
  • 3 tweets a week
  • Minimum of 1 Facebook post a week
  • Blog information added to class resource information and email signatures

In the short-term, we hope to see people interacting with our social media posts. So we are hoping to see increases on the following measures of our short-term outcomes:

  • # of Twitter retweets, likes and messages
  • # of Facebook likes, comments, and shares
  • # of new followers on Twitter and Facebook

We hope that the increased interaction with our Facebook and Twitter posts will lead more readers to our blog. So we will be monitoring increases on the following long-term outcome measures:

  • # of blog visitors per month
  • average # of blog views per day
  • # of blog link “click-throughs” to measure engagement with the blog articles
  • # of people who register for weekly blog updates and add the OERC Blog to their blog feeds
  • # of times blog posts are re-posted in Regional Medical Library blogs and newsletters.
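Several of these measures reduce to simple before-and-after comparisons. Here is a minimal sketch of that arithmetic; the function and variable names are our own, and the 2014 and 2015 figures are the ones reported in last week’s annual-report post (peak monthly views of 241 vs. 508, and average daily views of 7 vs. 12):

```python
def percent_change(old, new):
    """Percent change from an earlier count to a later one, rounded to one decimal."""
    return round((new - old) / old * 100, 1)

# Figures reported in the blog's annual-report post.
peak_monthly_views = {"2014": 241, "2015": 508}
avg_daily_views = {"2014": 7, "2015": 12}

print(percent_change(peak_monthly_views["2014"], peak_monthly_views["2015"]))  # 110.8
print(percent_change(avg_daily_views["2014"], avg_daily_views["2015"]))        # 71.4
```

The same function works for any of the counts listed above, as long as you compare the same measure over the same length of time.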

This is our strategy for increasing use of our blog content. We will keep you updated and share tools that we develop to track these outcomes.

References

Plotkowiak, Thomas. “The Influence of Social Capital on Information Diffusion in Twitter’s Interest-Based Social Networks.” Diss. University of St. Gallen, 2014. Web. 8 Jan. 2016.

An OERC Resolution Realized

Friday, January 1st, 2016


Happy New Year, readers.  We decided to start 2016 on a high note with the OERC Blog’s Annual Report.

Wait, don’t leave! 

I promise, there’s something for everyone at the end of this post. There are links to the most popular evaluation tools and resources from our blog entries this year.  We judged popularity using our “clickthrough” statistics, showing the links that were most likely to be clicked by our readers to further investigate the resources featured in our blog posts.

(If you want, you can skip to the bottom now. We’ll never know.)

But first, we are going to take a moment to describe how far our blog has come since its inception in 2006.  That’s right, August 2016 is the 10-year anniversary of the OERC blog. I’ve been a contributing blogger for almost a decade. Karen, the newcomer, started contributing in February 2015, the month she joined the OERC. Her entries are quite popular: Most of the top 10 clickthroughs below were presented in Karen’s posts.

For most of the blog’s history, OERC staff posted 12-16 times per year (about 1 per month).  However, in January 2014, the OERC staff committed to increasing our blog activity. That year, we managed a little better than three posts per month.  This year, we finally met our goal of once-per-week posts.  We had 52 entries between January 1 and December 31, 2015.

Of course, writing blog posts is one thing. Writing blog posts that people read is another. Our first indication that our blog was gaining readership came through the OERC’s appreciative inquiry interviews conducted in late 2014. (See our blog post from October 31, 2014, for a description of this project.) Many of the interviewees mentioned the blog as one of their favorite OERC services.

Now, we have quantitative evidence of a growing readership: our end-of-year site statistics. Our stats only go back to June 2014. Even with that limited timeline, you can see substantial growth. Our peak month in 2014 had 241 total views.  In 2015, our peak month had more than double the traffic, with 508 views.  In 2014, we had an average of 7 views per day.  In 2015, our average was 12 views per day.

Two charts showing increased readership for the OERC Blog: monthly views rose from 41 in June 2014 to 452 in December 2015, and average daily views rose from 7 to 12 over the same period.

So thank you, readers.  You are behind those numbers. In the coming year, we resolve to continue our weekly posts on a variety of evaluation topics.

And now, as promised, here are the top 10 “clickthrough” URLs from last year.  If you missed any of the trending evaluation resources from our blog, here’s your chance to catch up.


Fun for Data Lovers: Two Interactive Data Visualizations

Wednesday, December 23rd, 2015

‘Tis the season of gift-giving, and who doesn’t love getting toys during the holidays? So we want to give our readers links to two fun data visualizations to play with over the holidays. Both were designed by David McCandless at Information is Beautiful. Snake Oil Superfoods summarizes the evidence (or lack thereof) for health claims about foods popularly believed to have healing properties. Snake Oil Supplements gives the same scrutiny to dietary supplements.

You can readily check the science behind the infographics. Both link to abstracts of scientific studies indexed in reputable sources such as PubMed or the Cochrane Library. The pretty-colored bubbles and the filters give you an enjoyable way to check some of those food-related miracles being proclaimed in popular magazines and your Facebook feed.

Meanwhile,  Happy Holidays to you and yours from OERC bloggers Cindy Olney and Karen Vargas.

Cindy (left) and Karen at the American Evaluation Association’s 2015 Conference


Dashboards for Your Library? Here Are Some Examples

Friday, December 18th, 2015

Last week’s blog post was about using Excel to make data dashboards. As Cindy pointed out, a dashboard is “a reporting format that allows stakeholders to view and interact with program or organizational data, exploring their own questions and interests.”

What can that mean for your library? What does a library data dashboard look like?

In the OERC Tools and Resources for Evaluation, we have a Libguide for Reporting and Visualizing, which includes a section on data dashboards.  In it are some examples of libraries using data dashboards.  In their dashboards, libraries are sharing data on some of the following things:

  • How much time is spent helping students and faculty with research
  • What databases are used most often
  • How e-books are changing the library picture
  • What librarians have been learning at their professional development conferences
  • What is the use of study rooms over time
  • What month is the busiest for library instruction
  • What department does the most inter-library loan
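Behind each of these dashboard panels is a small aggregation of the underlying statistics. As a sketch of the “busiest month” question above, with entirely made-up monthly counts (the variable names are our own):

```python
# Hypothetical monthly counts of library instruction sessions.
instruction_sessions = {
    "Jan": 14, "Feb": 18, "Mar": 22, "Apr": 16, "May": 8, "Jun": 5,
    "Jul": 4, "Aug": 11, "Sep": 31, "Oct": 27, "Nov": 19, "Dec": 7,
}

# The aggregation a "busiest month" dashboard panel would display.
busiest = max(instruction_sessions, key=instruction_sessions.get)
print(busiest, instruction_sessions[busiest])  # Sep 31
```

Whether the panel is built in Excel, Tableau, or anything else, it is ultimately showing the result of a summary like this one.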

Can you create a dashboard to tell a story? While libraries can keep (and post) statistics on all kinds of things, consider who the dashboard is for, and what story you want to tell them about your library.  Maybe it’s the story of how the library is using its resources wisely.  Or maybe it’s the story of why the library decided it needed more study rooms.  Or the story of whether or not the library should eliminate its book collection and increase e-books and databases.

Consider what data you want to share and what people are interested in knowing.  Happy dashboarding!

Excel Dashboards at Chandoo.org

Friday, December 11th, 2015

Chandoo.org is a website that excels at Excel.  More specifically, it provides an extensive collection of resources to help the rest of us use Excel effectively.  There’s something for everyone at this website, whether you’re a basic or advanced user. Today, however, I want to talk specifically about Chandoo.org’s resources for building data dashboards with Excel.

Data dashboards are THE cool new data tools. A dashboard is a reporting format that allows stakeholders to view and interact with program or organizational data, exploring their own questions and interests. When the OERC offered a basic data dashboard webinar several years ago, we hit our class limit within hours of opening registration. If you are unfamiliar with data dashboards, here are slides from a presentation by Buhler, Lewellen, and Murphy that describe and provide samples of data dashboards.

Tableau seems to have grabbed the limelight as the go-to software for data dashboard development. Yet it may not be accessible to many of our blog readers.  It’s expensive and, unless you are a data-analyst savant, Tableau may require a fair amount of training.

The good news is that Excel software is a perfectly fine tool for creating data dashboards. Some of the best known data visualization folks in the American Evaluation Association (AEA) are primarily Excel users. Stephanie Evergreen of Evergreen Data  and Ann Emery write popular blogs about data visualizations built from Excel. At the AEA’s annual conference in November, I attended a presentation by Miranda Lee of EvaluATE on creating dashboards with Excel.  She has some how-to dashboarding videos in the works that will be available to the public in the near future. (We’ll let our blog readers know when they become available.)


There are free resources all over the Internet if you are good at do-it-yourself training.  However, for a modest fee, Chandoo.org offers a more systematic class on how to design a data dashboard with Excel. Depending on how many resources you want to take away from the class, the cost is between $97 (online viewing only) and $247 (downloads and extra modules). I have not taken the class yet, but I have heard positive feedback about Chandoo.org’s other courses and have plans to take this class in the near future.

If you are an Excel user but don’t see dashboard-building in your future, you still may find a wealth of useful tips and resources about Excel at Chandoo.org. My favorite is this list of 100+ Excel tips. I attended several data dashboard sessions at the AEA conference last month. The word on the street is that Microsoft is rising to the challenge to develop its data visualization capabilities.  Apparently, each new release is better than the last.  It may be getting easier to work dashboard magic with Excel.

Measuring What Matters in Your Social Media Strategy

Thursday, December 3rd, 2015


We’re all trying to find ways to improve evaluation of our social media efforts. It’s fun to count the number of retweets, and the number of “likes” warms our hearts.  But there’s a nagging concern for evaluators: are these numbers meaningful?

Your intrepid OERC Team, Cindy and Karen, attended a program at the American Evaluation Association conference in Chicago called “Do Likes Save Lives? Measuring What Really Matters in Social Media and Digital Advocacy Efforts,” presented by Lisa Hilt and Rebecca Perlmutter of Oxfam.  The purpose of their presentation was to build knowledge and skills in planning and measuring social media strategies, setting digital objectives, selecting meaningful indicators and choosing the right tools and approaches for analyzing social media data.

What was interesting about this presentation is that the presenters did not want to rely solely on what they called “vanity metrics,” such as the number of “impressions” or “likes.”  Alone, these metrics show very little actual engagement with the information.  Instead, they chose to focus on specific social media objectives based on their overall digital strategy.

Develop a digital strategy

  • Connect the overall digital strategy to campaign objectives (for example, to influence a concrete change in policy, or to change the debate on a particular issue).

Develop social media objectives

  • You want people to be exposed to your message
  • Then you want people to engage with it somehow (for example, by sharing your message) or to act on it (for example, by signing an online petition after reading it).

Collect specific information based on objectives

  • Collect measurable data about the social media engagement supporting your objectives (for example, “the Oxfam Twitter campaign drove 15% of readers to sign its petition” vs. “we got 1,500 likes”).

The presenters suggested some types of more meaningful metrics:

  • On Twitter, you can look at the number of profiles that take the action you want them to take, and then the number of tweets or retweets about your topic.
  • For Facebook, the number of likes, shares, and comments means that your audience was definitely exposed to your message.
  • Changes in the rate of likes or follows (for example if you normally get 5 new followers to your fan page a week, but due to a particular campaign strategy, you suddenly started getting 50 new followers a week)
  • Number of “influential” supporters (for example, being retweeted by Karen Vargas is not the same as being retweeted by Wil Wheaton).
  • Qualitative analysis: Consider analyzing comments on Facebook posts, or conversation around a hashtag in Twitter.
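The difference between a vanity metric and a meaningful one often comes down to what you divide by. Here is a minimal sketch of the two calculations implied above; the function names and the specific counts are hypothetical, chosen to echo the petition and follower-rate examples:

```python
def conversion_rate(actions, people_reached):
    """Share of people reached who took the desired action, as a percent."""
    return round(actions / people_reached * 100, 1)

def rate_change(baseline_per_week, current_per_week):
    """How many times faster a count is growing compared to its baseline."""
    return current_per_week / baseline_per_week

# 150 of 1,000 readers signed the petition -> the meaningful 15%.
print(conversion_rate(150, 1000))  # 15.0

# New followers jumped from 5 per week to 50 per week during the campaign.
print(rate_change(5, 50))  # 10.0
```

“We got 1,500 likes” is just a numerator; dividing by reach or by a baseline rate is what turns a raw count into evidence that the strategy worked.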

Overall, your goal is to have a plan for how you would like to see people interact with your messages in relation to your overall organizational and digital strategies, and find metrics to see if your plan worked.


Take The Pie, Leave The Pie Chart

Wednesday, November 25th, 2015

Evaluation and data visualization folks may disagree on the best pie to serve at Thanksgiving dinner.  Pumpkin?  Pecan?  A nice silk pie made with chocolate liqueur and tofu? (Thank you, Alton Brown.)

You see, the whole point of charts is to give people an instantaneous understanding of your findings.  Your readers can easily discern differences in bars and lines.  In wedges of pie, not so much. Data visualization expert Stephen Few explained the problem during this interview with the New York Times: “When looking at parts of a whole, the primary task is to rank them to see the relative performance of the parts. That can’t be done easily when relying on angles formed by a slice.”

(Note:  This parts-to-whole angle problem may also explain why most of us can’t understand how our Whole Foods pumpkin pie could possibly have eight servings. Eight? Are you kidding me?)
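Few’s point is that the reader’s real task is ranking the parts, and a bar chart can do that ranking up front. Here is a sketch of the idea with invented budget-share numbers (the categories and values are purely illustrative):

```python
# Hypothetical parts-of-a-whole data: shares of a library budget, in percent.
budget_shares = {"Books": 34, "Databases": 27, "Staffing": 22, "Programs": 17}

# Sorting is the whole trick: a bar chart drawn in this order hands the
# reader the ranking that pie-slice angles make them work out by eye.
ranked = sorted(budget_shares.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)
# [('Books', 34), ('Databases', 27), ('Staffing', 22), ('Programs', 17)]
```

Draw the sorted categories as bars and the comparison is instant; put the same numbers in a pie and the reader has to estimate angles.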

So, for today’s pre-Thanksgiving holiday post, I thought I would point you to some online articles about the much used and much vilified pie chart.

First, here’s an article by American Evaluation Association president-elect John Gargani, arguing for retirement of the venerable pie chart.  He makes points that are repeated in many anti-pie-chart blog posts.  But in the interest of objectivity, you should know that agreement to send pie charts to the cosmic dust bin is not universal. Here’s a post by Bruce Gabrielle of Speaking PowerPoint that describes situations where pie charts can shine.

In general, most experts believe that the times and places to use pie charts are few and far between. If you have found one of those rare times, then here’s a post at Better Evaluation with the design tips to follow.

But for heaven’s sake, turn off that three-dimensional feature in your pie chart, or in any chart, for that matter. Nobody wants to see that!

And for humorous examples of what not to do, check out Michael Friendly’s Evil Pies blog.

Data Party Like it’s 2099! How to Throw a Data Party

Friday, November 20th, 2015

What’s a “data party”? We attended a program by evaluator Kylie Hutchinson entitled “It’s a Data Party!” at the AEA 2015 conference last week in Chicago.  A data party is another name for a kind of participatory data analysis, where you gather stakeholders together, show them some of the data that you have gathered, and ask them to help analyze it.

Isn’t analyzing the data part of your job?  Here are some reasons you might want to include stakeholders in the data analysis stage:

  • It allows stakeholders to get to know and engage with the data
  • Stakeholders may bring context to the data that will help explain some of the results
  • When stakeholders participate in analyzing the data, they are more likely to understand it and use it
  • Watching their interactions, you can often find out who is the person with the power to act on your recommendations

So how do you throw a data party? First of all, you need to know what you hope to get from the attendees, since you may only be able to hold an event like this once. There are a number of different ways to organize the event.  You might consider a World Cafe format, where everyone works together to explore a set of questions, or an Open Space system, in which attendees create their own agenda of questions they want to discuss.  Recently the AEA held a very successful online unconference using MIT’s Unhangout, which could be used for an online data party with people in multiple locations.

The kinds of questions Kylie Hutchinson suggested asking at a data party include:

  • What does this data tell you?
  • How does this align with your expectations?
  • What do you think is occurring here and why?
  • What other information do you need to make this actionable?

At the end of the party, it might be time to present some of your own findings and recommendations.  Having done the work of analysis themselves, your stakeholders may be more willing to listen.  As Kylie Hutchinson said, “People support what they helped create.”


Last updated on Saturday, 23 November, 2013

Funded by the National Library of Medicine under contract # HHS-N-276-2011-00008-C.