
NEO Shop Talk

The blog of the National Network of Libraries of Medicine Evaluation Office

Archive for the ‘News’ Category

A Most Pragmatic Theory: Diffusion of Innovation and User Assessment (Part 1)

Friday, March 4th, 2016

Seven tomatoes in a row, increasing in maturity from left to right

If your work includes teaching or providing products or services to people, you are in the business of influencing behavior change. In that case, behavior change theories should be one of the tools in your evaluation toolbox. These theories are evidence-based descriptions of how people change and the factors that affect the change process. If you have a handle on these influences, you will be much more effective in gathering data and planning programs or services.

Today and next week, I’m going to talk about my go-to behavioral change theory: Diffusion of Innovations. It was introduced in the 1960s by communication professor Everett Rogers to explain how innovations spread (diffuse) through a population over time. The term innovation is broadly defined as anything new: activities, technologies, resources, or beliefs. There are a number of behavioral change theories that guide work in health and human services, but I particularly like Diffusion of Innovations because it emphasizes how social networks and interpersonal relationships may affect your success in getting people to try something new.

I use Diffusion of Innovations for most user or community assessment studies I design. Next week, we’ll talk about using these concepts to frame community or user assessment studies. This week, I want to cover the basic principles I have found most helpful.

People change in phases

The heart of behavior change is need. People adopt an innovation if it solves a problem or improves quality of life. Adoption is not automatic, however. People change in phases. They first become aware of an innovation and gather information about it. If it is appealing, they decide to try it and assess its usefulness. Adoption occurs if the innovation lives up to or exceeds their expectations.

Product characteristics influence phase of adoption

Five criteria affect the rate and success of adoption within a group. First, the innovation must be better than the product or idea it is designed to replace. Second, it must fit well with people’s values, needs, and experiences. Third, innovations that are easy to use will catch on faster. Fourth, so will technologies or resources that allow experimentation before the user must commit to them. Finally, if people can easily see that the innovation leads to positive results, they are more likely to use it.

Peers’ opinions matter greatly when it comes to innovation adoption. Marketers will tell you that mass media spreads information, but people choose to adopt innovations based on recommendations from others who are “just like them.” Conversations and social networks are key channels for spreading information about new products and ideas. If you are going to influence change, you have to identify how members of your audience communicate with one another and work through those channels.

Migration of flock of birds flying in V-formation at dusk

Riding the Wave

Segments of a population adopt innovations at different rates. In any given target population, there will be people who will try an innovation immediately just for the pleasure of using something new. They are called innovators. The second speediest are the early adopters, who like to be the trendsetters. They will use an innovation if they perceive it will give them a social edge. They value being the “opinion leaders” of their communities.

Sixty-eight percent of a population comprises the majority. The first half (early majority) will adopt an innovation once its reliability and usefulness have been established. (For example, these are the folks who wait to update software until the “bugs” have been worked out.) The other half (late majority) are more risk averse and tend to succumb to peer pressure, which builds as an innovation gathers momentum. The last adopters are called the laggards, who are the most fearful of change. They prefer to stick with what they know. Laggards may have a curmudgeonly name, but Les Robinson of Enabling Change pointed out that they also may be prophetic, so ignore them at your own risk.
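If you like to see the numbers, here is a minimal sketch of the adopter categories in Python. The percentages are the standard breakdown from Rogers’ model (the post itself only cites the 68% majority figure), so treat them as textbook values rather than anything specific to your own audience.

```python
# Standard adopter-category percentages from Rogers' Diffusion of Innovations.
# The post only cites the 68% "majority" figure; the rest are textbook values.
adopter_segments = {
    "innovators": 2.5,
    "early adopters": 13.5,
    "early majority": 34.0,
    "late majority": 34.0,
    "laggards": 16.0,
}

majority = adopter_segments["early majority"] + adopter_segments["late majority"]
print(f"Early + late majority = {majority}% of the population")  # 68.0%
```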

Next Step: Diffusion of Innovations and User/Community Assessment

Next week, I will show you how I develop my needs assessment methods around Diffusion of Innovations concepts. In the meantime, here are some sources that might interest you. Everett Rogers and Karyn Scott wrote an article specifically for the NN/LM Pacific Northwest Region that you can read here. Les Robinson’s article has an interesting discussion of the specific needs of the different population segments. Finally, if you want the classic text by Ev Rogers himself, here is the full citation.

Rogers EM.  Diffusion of innovations (5th ed). New York, NY: The Free Press, 2003.

Appreciative Inquiry of Oz: Building on the Best in the Emerald City

Friday, February 19th, 2016

Cartoon image of an Emerald City

“One day not very long ago, librarians came to the Emerald City from their libraries in all of the countries of Oz. They came to visit the Great Library of the Emerald City, and to petition the Wizard to allow them to borrow books and other items at the Great Library. Their hope was to transport items from one library to another using the Winged Monkeys, who offered their skills for this task after they were set free and got bored.”

Thus begins the latest OERC project: an online class in Appreciative Inquiry (AI), offered through the MidContinental Region’s Librarians in the Wonderful Land of Oz Moodle ‘game’ (i.e., a series of online classes worth game points and CE credits from the Medical Library Association). The game is made up of several ‘challenges’ (online classes) for librarians, offered by NN/LM instructors.

In OERC’s challenge, Building on the Best at the Great Library of the Emerald City: Using Appreciative Inquiry to Enhance Services and Programs, the Wizard of Oz makes a deal with the librarians. He will allow interlibrary loan of the Great Library’s resources if the librarians assess customer satisfaction with the Great Library’s services and find things to improve. Students in the class will learn to use a qualitative data collection technique called Appreciative Inquiry to do this assessment.

Sometimes people avoid customer service assessment because they find the methods complicated and time-consuming. Negative feedback can be uncomfortable for both the listener and the speaker. Appreciative Inquiry, with its focus on identifying and building on organizational strengths, removes that discomfort. A number of OERC workshops touch on Appreciative Inquiry, but this Librarians of Oz challenge allows you to practice the technique, something the OERC has not been able to provide in the traditional webinar or workshop context. Completing the class is worth 14 MLA CE credits.

The class is free, but in order to take it you will need to register for the game Librarians in the Wonderful Land of Oz. If you don’t want to take the class but would still like to learn more about Appreciative Inquiry, I recommend these earlier blog posts:

From Cindy and Karen’s perspective, one of the best parts of this experience is that we finally get the official title of Wizard. Special thanks to John “Game Wizard” Bramble of the NN/LM MCR, who made all this happen.

 

Logic Model for a Birthday Party

Thursday, February 4th, 2016

Cindy and I feel that logic models are wonderful planning tools that can be used for many life events to stay focused on what’s meaningful. This blog post is an example of such a logic model.

My daughter’s birthday is coming up this week and we are having a party for her. My husband and I have quite a few friends with children about the same age as our daughter (who is turning 3). This means that we go to birthday parties and we host birthday parties, and we are looking forward to another 15 years or so of birthday parties. Even though we live in the 4th largest city in the country, it’s a bit of a project to come up with a place for the party. I could see this problem stretching out into future years of Chuck E. Cheese’s and trampoline parks. Not that there’s anything wrong with those places, but we realized that for us it was time to stop the train before we went off the rails. My own birthday parties growing up were all at my house. So we decided to see if we could have a party at our house and just have fun.

To make sure we had a great event and kept our heads on straight (and had something to blog about this week), I created a logic model for my daughter’s birthday party. We needed an evaluation question, which is: “Is it possible to have a party of preschoolers at our tiny, not-that-childproofed house without going crazy?”

So here is the event we have planned.

Birthday Party Logic Model

If you’re new to logic models, they are planning tools that you use from right to left, starting with long-term outcomes (what you hope to see out in the future), then intermediate outcomes and short-term outcomes. Then you think of the activities that would lead to those outcomes, and then the inputs, the things you need in order to do the activities. (For more information on logic models, take a look at the OERC Blog category “Logic Models.”)

What I’ve learned from this process is that every time I came up with an idea about what we could do at the party, it needed to pass the test of whether or not it leads to the long-term outcome of being willing to throw parties in the house in the future. In other words, if the party takes too much work or money (or isn’t fun), we won’t remember it as an event we are likely to repeat. For example, the person we are inviting to entertain the kids is our daughter’s music teacher from preschool, so it should be fun for the kids who know her, and everyone will know the music and can sing along. Another activity with high enjoyment and low effort is the dance party with bubbles. All toddlers love to dance, and we can make a playlist of our daughter’s favorite dance songs. Adding bubbles to the mix is the frosting.

The short-term goals are our immediate party goals. We would like the party to be fun for our daughter and for most of her friends (can we really hope for 100%? Probably not, so we put 90%). My husband and I may be a little stressed, but we’re setting our goal fairly low at being relaxed 60% of the time (you’ll have to imagine maniacal laughter here). Our intermediate goals are simply that we all feel comfortable having our daughter’s friends over to our house in the near future. And the long-term goal is to think this is a good idea to do again and again.
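For readers who think better in outlines than in diagrams, here is a rough sketch of the party logic model as a plain Python data structure. The entries are paraphrased from the description above, since the actual model appears only as an image, so treat them as illustrative rather than a faithful transcription.

```python
# A rough sketch of the birthday-party logic model as a plain data structure,
# paraphrased from the post (the actual model is shown only as an image above).
birthday_party_logic_model = {
    "inputs": ["our house", "music teacher from preschool", "dance playlist", "bubbles"],
    "activities": ["sing-along led by the music teacher", "dance party with bubbles"],
    "short_term_outcomes": [
        "party is fun for our daughter and ~90% of her friends",
        "parents feel relaxed ~60% of the time",
    ],
    "intermediate_outcomes": ["we feel comfortable having her friends over soon"],
    "long_term_outcomes": ["we are willing to throw at-home parties again and again"],
}

# Logic models are planned right to left: outcomes first, inputs last.
for component in reversed(list(birthday_party_logic_model)):
    print(component, "->", birthday_party_logic_model[component])
```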

Wish us luck!

The Zen Trend in Data Visualization

Friday, January 29th, 2016

You may have noticed there is a data visualization bandwagon out there and the OERC bloggers are on it. We like to write about tools and resources that can help you communicate visually.  There’s a bit of self-interest here. We want to fuel this bandwagon because we personally prefer to review a one-page visual synopsis of data over a table-dense written report.

Unless, that is, we have to grapple with a poorly designed data visualization. These are displays with too many details, unnecessary icons, and too many variables piled into one chart. They can make your head spin.

That isn’t to say you have to be an artist to do good visual displays. Quite the contrary.  Most information designers adamantly state that data visualization is about communication, not art. However, you have to know how to design with a purpose.

To understand the basics of good design, you need to understand why humans respond so well to visual displays of data. That is the topic of an excellent blog article by Stephen Few, a thought leader in the data visualization field. He advocates that data visualizations aid users in performing three primary functions: exploring, making sense of, and communicating data. Evaluators would add that our ultimate goal is to help our users apply data in planning and decision-making. To that end, Few argues that data visualizations should be designed to support readers’ ability to perform these four cognitive tasks:

  1. See the big picture in the data
  2. Compare values
  3. See patterns among values
  4. Compare patterns

To demonstrate Few’s point, I am sharing a sample of the OERC Blog’s monthly site statistics.  The blog dashboard uses a table format to display total views per month. The table shows monthly view counts starting in June 2014, when we started tracking site statistics.

Table showing total page views per month for the OERC blog from June 2014 to January 2016. The numbers generally get higher over time, but there is fluctuation month to month. It is difficult to see trends.

 

Line graph of the data shown in the previous table (June 2014 to January 2016). It shows total page views per month. The general upward trend is much more apparent than in the table; 2015 has more page views than 2014 in every month. At some points during the year there is a zigzag pattern, with high viewership one month followed by lower readership the next, which suggests a need to look at the topics or posting schedule of the blog.

Now here’s the same information presented in a line graph created in Excel. You quickly see the big picture: our page views are trending upward. You can easily compare the direction of month-to-month traffic. The chart allows comparison of monthly patterns in 2014 and 2015. It’s the same data, but it’s almost impossible to see any of these findings in the table.
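If you prefer to script your charts rather than build them in Excel, the same table-to-line-graph step is only a few lines of code. Here is a minimal sketch using Python and matplotlib; the view counts are placeholders, not our actual statistics.

```python
# Turning a table of monthly page views into a line graph with matplotlib.
# The months and counts below are hypothetical, not the OERC's real numbers.
import matplotlib.pyplot as plt

months = ["2015-08", "2015-09", "2015-10", "2015-11", "2015-12", "2016-01"]
views = [310, 350, 420, 400, 470, 430]  # hypothetical monthly totals

plt.figure(figsize=(6, 3))
plt.plot(months, views, marker="o")
plt.title("Total page views per month")
plt.ylabel("Page views")
plt.xticks(rotation=45)
plt.tight_layout()
plt.show()
```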

Dataviz design experts like Stephen Few are avowed minimalists. They hate chart junk, such as gridlines and data labels. They have an affinity for small multiples, which are series of small graphs displaying different slices of the data. (If you have never seen small multiples, here’s a post from Juice Analytics with good examples.) In general, they do not include any element that will hinder users’ ability to make comparisons, find patterns, and identify pattern abnormalities that may be indicators of important events. Needless to say, most data visualization luminaries are not big on decorative features like gas gauges and paper-doll icons. These, too, are viewed as unnecessary distractions.
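Small multiples are also easy to mock up in code. Here is a hedged sketch, again in Python and matplotlib, that puts one small line chart per year side by side on a shared axis; the numbers are invented for illustration.

```python
# "Small multiples": one small chart per slice of the data (here, per year),
# sharing a y-axis so the panels are directly comparable. Data are invented.
import matplotlib.pyplot as plt

monthly_views = {
    "2014": [150, 160, 175, 170, 200, 210, 195, 220, 230, 240, 225, 235],
    "2015": [260, 280, 300, 320, 350, 340, 370, 400, 430, 470, 440, 450],
}
months = range(1, 13)

fig, axes = plt.subplots(1, len(monthly_views), sharey=True, figsize=(8, 3))
for ax, (year, views) in zip(axes, monthly_views.items()):
    ax.plot(months, views, marker=".")
    ax.set_title(year)
    ax.set_xlabel("Month")
axes[0].set_ylabel("Page views")
fig.suptitle("Monthly page views, one panel per year")
fig.tight_layout()
plt.show()
```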

Yet, there is a lot of chart art out there and you may wonder why. There is a distinction between data visualizations and infographics.  Alberto Cairo, Knight Chair in Visual Journalism at the University of Miami’s School of Communication,  wrote that data visualizations are tools for interactive data exploration while infographics are visual displays that make a specific point.  I tend to think of data visualizations as having users, while infographics have readers. Chart art may be more legitimate in infographics because it supports the primary message or story.

Yet, Cairo admits that the boundary between infographic and data visualizations is fuzzy. He noted a trend toward infographics with two layers: a presentation layer, and an exploration one.  The infographics have an obvious primary message, but readers are also presented with opportunities to explore their own questions. One of my favorites from the Washington Post is an example of an infographic with both layers.

That said, Cairo still argues that data design principles hold true for both data visualizations and infographics.  There is no excuse to drown your readers in images or to distract them with bling.

Zen is in; bells and whistles are out. The good news is that simple data visualizations do not require sophisticated software or design skills.  That’s not to say that simple is the same as easy. Good data visualizations and infographics take a lot of thought. For the more interactive data visualizations, you must identify how your users will use your data and design accordingly. For infographics, you need first to clearly identify your central message and then be sure that every element has a supporting role.

Want to start developing good visual design habits? I recommend Presenting Data Effectively by Stephanie Evergreen (Sage, 2013).

 

 

New Outcome Measurement Resource for Public Libraries

Friday, January 22nd, 2016

Librarian and children looking at globe in library

About six months ago, the Public Library Association (PLA) initiated a service called Project Outcome, which I have been following with interest. An article by Carolyn Anthony, director of the Skokie Public Library (IL), entitled “Project Outcome – Looking Back, Looking Forward,” was recently published in Public Libraries Online; it describes the successes of libraries using this service over the past six months.

Project Outcome is an online resource that provides evaluation tools designed to measure the impact of library programs and services, such as summer reading programs or career development programming. It also provides ready-made reports and data dashboards that give libraries and stakeholders immediate data on their programs’ outcomes. And Project Outcome provides support and peer-sharing opportunities to address common challenges and increase capacity for outcomes evaluation.

Here are some of the things that make me the most excited about this service:

  1. Project Outcome has managed to create a structured approach for program outcome evaluation that can be used online by public libraries of all shapes and sizes by people who have not done outcome evaluation before.  Along with tools for collecting data, the resource has tutorials and support for libraries doing outcomes evaluation for the first time.
  2. Continued support and peer sharing as an integral part of the service means that PLA is building a community of librarians who use outcome evaluation.
  3. The stories that are shared by the peers as described in the article will increase the understanding that evaluation isn’t something forced on you from outside, but can be something that helps you to create a better library and enhance the meaning of your library’s programs.
  4. This process teaches librarians to start with the evaluation question (“decide what you want to learn about outcomes in your community”) and a plan for what to do with the findings. And the process ends with successfully communicating your findings to stakeholders and implementing next steps.
  5. Lastly, I love that Project Outcome and the PLA Performance Measurement Task Force are planning the next iteration of their project, which will measure whether program participants followed through with their intended outcomes. It will be very interesting to see how this long-term outcome evaluation turns out.

I’ll end with this statement from Carolyn Anthony, who said “the opportunity to spark internal conversations and shift the way libraries think about their programs and services is what outcome measurement is all about.”

Simply Elegant Evaluation: GMR’s Pilot Assessment of a Chapter Exhibit

Friday, January 15th, 2016

If you spend any time with librarians who work for the National Network of Libraries of Medicine (NN/LM), you’ll likely hear about their adventures with conference exhibits. Exhibiting is a typical outreach activity for the NN/LM Regional Medical Libraries, which are eight health sciences libraries that lead other libraries and organizations in their region in promoting the fine health information resources of the National Library of Medicine (NLM) and National Institutes of Health.  The partnering organizations are called “network members” and, together with RMLs, are the NN/LM.

Jacqueline Leskovec

Exhibiting is quite an endeavor. It requires muscles for hauling equipment and supplies. You have to be friendly and outgoing when your feet hurt and you’re fighting jet lag. You need creative problem-solving skills when you’re in one state and your materials are stuck in another.

More than one RML outreach librarian has asked the question: Is exhibiting worth it?

Jacqueline Leskovec, at the NN/LM Greater Midwest Regional Medical Library (GMR), decided to investigate this question last October. As the Outreach, Planning, and Evaluation Coordinator for the NN/LM GMR, Jacqueline chose to assess a particular type of NN/LM exhibit: those held at Medical Library Association chapter meetings. The question had been raised at a GMR staff meeting about the value of exhibiting at a conference where most attendees were medical librarians, many of whom already knew about NLM and NIH resources.

Jacqueline decided to look at the question from a different angle. Could they consider, instead, the networking potential of their exhibit? The NN/LM runs on relationships between regional medical library staff and other librarians in their respective regions. Possibly the booth’s value was that it provided an opportunity for the GMR staff to meet with librarians from both long-standing and potential member organizations of the GMR.

Collecting Feedback

Jacqueline decided to ask two simple evaluation questions. First, did existing GMR users stop by the booth to visit with the GMR staff at the chapter meeting? Second, did the booth provide the GMR staff with opportunities to meet librarians who were unaware of the NN/LM? In a nutshell, the questions focused on the booth’s potential to promote active participation in the NN/LM. This was a valid goal for an exhibit targeting this particular audience, where the GMR could find partners to support the network’s mission of promoting NLM resources.

She worked with the OERC to develop a point-of-contact questionnaire that she administered to visitors using an iPad. Her questionnaire had five items that people responded to via touch screen.  She chose the app Quick Tap Survey because it produced an attractive questionnaire, data could be collected without an Internet connection, and she could purchase a one-month subscription for the software.  The app also has a feature that allows the administrator to randomly pull a name for a door prize. Jacqueline used this feature to give away an NLM portfolio that was prominently displayed on the exhibit table. (Participation was voluntary, and the personally identifiable information was deleted after the drawing.)

Jacqueline stood in front of the booth to attract visitors, a practice she uses at all exhibits. She did not find that the questionnaire created any barriers to holding conversations with visitors. Quite the contrary, many were intrigued with the technology. Almost no one turned down her request to complete the form. Of the 120 conference attendees (the count reported by the Midwest MLA chapter), 38 (32%) visited the GMR booth and virtually all agreed to complete the questionnaire.

What Did GMR Learn?

 Jacqueline learned that 50% of the visitors came to the booth specifically to visit with GMR staff, while 26% came to get NLM resources.  This confirmed that the visits were more related to networking than information-seeking about NLM or NIH resources. She also learned that more than half were return visitors who had visited at past conferences, while 46% had never stopped by the booth before.  It appeared that the booth served equally as a way for GMR staff to talk with existing users and to meet potential new ones. Those who were return visitors also were the more likely users of the GMR: 68% said that the GMR was the first place they would seek answers to questions they had about NLM or NIH resources. (Although one added that she would first look online, then contact them if she couldn’t find the answer on her own.)  In contrast, 56% of new booth visitors said they usually sought help from a friend or colleague. Only 26% would contact the GMR. Findings do not indicate that exhibits cause librarians to become more involved with GMR. However, when GMR offers opportunities for face-to-face interactions, their users take advantage of it.

Visitors also got an opportunity to voice their opinion about the continuation of GMR exhibits at chapter meetings. There was fairly universal agreement: 92% said they thought the GMR should continue. The other 8% said they weren’t sure, but no one said GMR should stop.

Lessons learned

Jacqueline found it was easy to get people to take her questionnaire, particularly with a smooth application like Quick Tap Survey. She also learned that, regardless of the care she took in developing her questions, she still had at least one item that could have been worded better. However, tweaks can easily be implemented for future exhibits.

Overall, she said this assessment project added depth to the booth assessments that GMR typically conducts. Previous assessments focused on describing booth traffic, such as the number of visitors, staff hours in the booth, or the number of promotional materials distributed. This project described the actual visitors and what they got out of the exhibit.

Epilogue: Why the OERC Loves This Project

We love this project because Jacqueline thought carefully about the outcomes of exhibiting to this particular audience and designed her questionnaire accordingly.  She recognizes that exhibits at chapter meetings are a specific type of event. The goals of NN/LM exhibits at other types of conferences are different, so the questionnaires would have to be adapted for those goals.

We also love this project because it shows that you can assess exhibits. Back in the day, point-of-contact assessment required paper-and-pencil methods.  It was a data collection approach that seemed likely to be self-defeating. Visitors would cut a wide path to avoid requests to fill out a form. Now that we have the technology (tablets and easy-to-use apps) that makes the task less daunting, the OERC has been promoting the idea of exhibit assessment.  Jacqueline’s project is proof that it can be done!

The OERC Blog – Moving Forward

Friday, January 8th, 2016

turtle climbing up staircase

Since last week’s message, the OERC has been looking at some additional data about the blog in order to update our online communications plan going forward. The earlier OERC strategy had been to use social media to increase the use of evaluation resources, the OERC’s educational services, and the OERC’s coaching services. These continue to be the goals of the OERC’s plan. However, based on the following pieces of information, a new strategy has emerged.

  • The OERC Blog is increasing in popularity. As reported last week, more people find it, share it with their regions, and engage with it by clicking on the links than ever before.
  • The blog always has new content, which is time-intensive to create: it takes approximately 6 person-hours each week to write and publish each post.
  • Although the OERC does not have a Facebook page, and the OERC Twitter account @nnlmOERC has been used primarily to promote the blog, Facebook still refers more people to the blog than Twitter does (this was kind of a shocker for us!).

We feel that the OERC Blog, based on the results described in last week’s post, has become one of the most successful products of the OERC. The blog has become a source of educational content, and is itself an evaluation resource in need of promotion. Because of this, our Online Communications Plan going forward has a special focus on promoting the blog. Here are some of the new process goals for this purpose.

  • Facebook: The OERC will create a Facebook page that will promote the blog, link to other online evaluation resources, and show photos of what the OERC team is up to.
  • Twitter: Karen and Cindy will post at least one additional tweet per week to increase our Twitter presence. These will include retweets to build social capital, which may lead to more retweets of our blog tweets (here is an interesting dissertation by Thomas Plotkowiak explaining this).
  • Training: We will make a point of promoting the blog during our in-person classes and webinars. For example, we may refer people to articles in our blog that supplement the content in the training sessions.
  • Email: The blog URL will be added to Karen and Cindy’s email signatures.

So, what kinds of things will we measure? Naturally we want process measurements (showing that things are working the way they should along the way) and outcome measurements (showing that we are meeting our goals).

Here are our process goals, which are the activities we are committing to this year:

  • 52 blog posts a year
  • 3 tweets a week
  • Minimum of 1 Facebook post a week
  • Blog information added to class resource information and email signatures

In the short term, we hope to see people interacting with our social media posts. So we are hoping to see increases in the following measures of our short-term outcomes:

  • # of Twitter retweets, likes and messages
  • # of Facebook likes, comments, and shares
  • # of new followers on Twitter and Facebook

We hope that the increased interaction with our Facebook and Twitter posts will lead more readers to our blog. So we will be monitoring increases in the following long-term outcome measures:

  • # of blog visitors per month
  • Average # of blog views per day
  • # of blog link “click-throughs” to measure engagement with the blog articles
  • # of people who register for weekly blog updates and add the OERC Blog to their blog feeds
  • # of times blog posts are re-posted in Regional Medical Library blogs and newsletters.

This is our strategy for increasing use of our blog content. We will keep you updated and share tools that we develop to track these outcomes.
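As a first, very rough tool in that direction, here is a small sketch of how daily view counts might be rolled up into the monthly totals and per-day averages listed above. It is illustrative only (with made-up numbers), not the tracking system the OERC actually uses.

```python
# Rolling up hypothetical daily view counts into monthly totals and per-day
# averages. Illustrative only; not the OERC's actual tracking tool.
from collections import defaultdict
from datetime import date

daily_views = [            # (date, views) -- made-up numbers
    (date(2016, 1, 1), 9),
    (date(2016, 1, 2), 14),
    (date(2016, 1, 3), 11),
    (date(2016, 2, 1), 13),
    (date(2016, 2, 2), 17),
]

totals = defaultdict(int)
days = defaultdict(int)
for day, views in daily_views:
    key = (day.year, day.month)
    totals[key] += views
    days[key] += 1

for (year, month) in sorted(totals):
    avg = totals[(year, month)] / days[(year, month)]
    print(f"{year}-{month:02d}: {totals[(year, month)]} views, {avg:.1f} per tracked day")
```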

References

Plotkowiak, Thomas. “The Influence of Social Capital on Information Diffusion in Twitter’s Interest-Based Social Networks.” Diss. University of St. Gallen, 2014. Web. 8 Jan. 2016.

An OERC Resolution Realized

Friday, January 1st, 2016

New Year greeting card illustration

Happy New Year, readers.  We decided to start 2016 on a high note with the OERC Blog’s Annual Report.

Wait, don’t leave! 

I promise, there’s something for everyone at the end of this post. There are links to the most popular evaluation tools and resources from our blog entries this year.  We judged popularity using our “clickthrough” statistics, showing the links that were most likely to be clicked by our readers to further investigate the resources featured in our blog posts.

(If you want, you can skip to the bottom now. We’ll never know.)

But first, we are going to take a moment to describe how far our blog has come since its inception in 2006.  That’s right, August 2016 is the 10-year anniversary of the OERC blog. I’ve been a contributing blogger for almost a decade. Karen, the newcomer, started contributing in February 2015, the month she joined the OERC. Her entries are quite popular: Most of the top 10 clickthroughs below were presented in Karen’s posts.

For most of the blog’s history, OERC staff posted 12-16 times per year (about 1 per month).  However, in January 2014, the OERC staff committed to increasing our blog activity. That year, we managed a little better than three posts per month.  This year, we finally met our goal of once-per-week posts.  We had 52 entries between January 1 and December 31, 2015.

Of course, writing blog posts is one thing. Writing blog posts that people read is another. Our first indication that our blog was gaining readership was through the OERC’s appreciative inquiry interviews conducted in late 2014. (See our blog post on October 31, 2014 for a description of this project)  Many of the interviewees mentioned the blog as one of their favorite OERC services.

Now, we have quantitative evidence of a growing readership: our end-of-year site statistics. Our stats only go back to June 2014. Even with that limited timeline, you can see substantial growth. Our peak month in 2014 had 241 total views.  In 2015, our peak month had more than double the traffic, with 508 views.  In 2014, we had an average of 7 views per day.  In 2015, our average was 12 views per day.

Two line graphs showing increased readership for the OERC Blog. The first shows increasing monthly views (June 2014 = 41; December 2015 = 452). The second shows increasing average daily views (June 2014 = 7; December 2015 = 12).

So thank you, readers.  You are behind those numbers. In the coming year, we resolve to continue our weekly posts on a variety of evaluation topics.

And now, as promised, here are the top 10 “clickthrough” URLs from last year. If you missed any of the trending evaluation resources from our blog, here’s your chance to catch up.

 

 

 

 

Fun for Data Lovers: Two Interactive Data Visualizations

Wednesday, December 23rd, 2015

‘Tis the season of gift-giving, and who doesn’t love getting toys during the holidays? So we want to give our readers links to two fun data visualizations to play with over the holidays. Both were designed by David McCandless at Information is Beautiful. Snake Oil Superfoods summarizes the evidence (or lack thereof) for health claims about foods popularly believed to have healing properties. Snake Oil Supplements gives the same scrutiny to dietary supplements.

You can readily check the science behind the infographics. Both link to abstracts of scientific studies indexed in reputable sources such as PubMed or the Cochrane Library. The pretty-colored bubbles and the filters give you an enjoyable way to check some of those food-related miracles being proclaimed in popular magazines and your Facebook feed.

Meanwhile,  Happy Holidays to you and yours from OERC bloggers Cindy Olney and Karen Vargas.

Cindy (left) and Karen at the American Evaluation Association’s 2015 Conference

 

 

 

Dashboards for Your Library? Here Are Some Examples

Friday, December 18th, 2015

Illustration of different business graphs on white background

Last week’s blog post was about using Excel to make data dashboards. As Cindy pointed out, a dashboard is “a reporting format that allows stakeholders to view and interact with program or organizational data, exploring their own questions and interests.”

What can that mean for your library? What does a library data dashboard look like?

In the OERC Tools and Resources for Evaluation, we have a LibGuide for Reporting and Visualizing, which includes a section on data dashboards. It links to some examples of libraries using data dashboards. In their dashboards, libraries are sharing data on some of the following things:

  • How much time is spent helping students and faculty with research
  • What databases are used most often
  • How e-books are changing the library picture
  • What librarians have been learning at their professional development conferences
  • How study room use changes over time
  • What month is the busiest for library instruction
  • What department does the most inter-library loan

Can you create a dashboard to tell a story? While libraries can keep (and post) statistics on all kinds of things, consider who the dashboard is for and what story you want to tell them about your library. Maybe it’s the story of how the library is using its resources wisely. Or maybe it’s the story of why the library decided it needed more study rooms. Or the story of whether or not the library should eliminate its book collection and increase e-books and databases.

Consider what data you want to share and what people are interested in knowing. Happy dashboarding!
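If you want to experiment before committing to a full-blown dashboard tool, here is a minimal sketch of a two-panel dashboard in Python with matplotlib. The statistics are invented for illustration; the point is simply that a dashboard is a few related charts arranged so a stakeholder can scan them on one page.

```python
# A two-panel "dashboard": a few related charts on one page for stakeholders
# to scan. The library statistics below are invented for illustration.
import matplotlib.pyplot as plt

months = ["Sep", "Oct", "Nov", "Dec"]
study_room_hours = [120, 180, 210, 90]     # hypothetical bookings
instruction_sessions = [8, 14, 11, 3]      # hypothetical class counts

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.bar(months, study_room_hours)
ax1.set_title("Study room hours booked")
ax2.plot(months, instruction_sessions, marker="o")
ax2.set_title("Library instruction sessions")
fig.suptitle("Sample library dashboard")
fig.tight_layout()
plt.show()
```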


Funded by the National Library of Medicine under Contract No. UG4LM012343 with the University of Washington.