
NEO Shop Talk

The blog of the National Network of Libraries of Medicine Evaluation Office

Archive for the ‘Practical Evaluation’ Category

Logic Model for a Birthday Party

Thursday, February 4th, 2016

Cindy and I feel that logic models are wonderful planning tools that can be used in many life events to stay focused on what’s meaningful. This blog post is an example of such a logic model.

My daughter’s birthday is coming up this week and we are having a party for her. My husband and I have quite a few friends with children about the same age as our daughter (who is turning 3).  This means that we go to birthday parties and we have birthday parties, and we are looking forward to another 15 years or so of birthday parties.  Even though we live in the 4th largest city in the country, it’s a bit of a project to come up with a place for the party.  I could see this problem stretching out into future years of Chuck E. Cheese’s and trampoline parks. Not that there’s anything wrong with those places, but we realized that for us it was time to stop the train before we went off the rails.  My own birthday parties growing up were all at home, so we decided to see if we could have a party at our house and just have fun.

To make sure we had a great event and kept our heads on straight (and had something to blog about this week), I created a logic model for my daughter’s birthday party. We needed an evaluation question, which is: “Is it possible to have a party of preschoolers at our tiny, not-that-childproofed house without going crazy?”

So here is the event we have planned.

Birthday Party Logic Model

If you’re new to logic models, they are planning tools that you use from right to left, starting with long-term outcomes (what you hope to see out in the future), then intermediate outcomes and short-term outcomes. Then you think of the activities that would lead to those outcomes, and then the inputs, the things you need in order to do the activities. (For more information on logic models, take a look at the OERC Blog category “Logic Models”.)
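If it helps to see the right-to-left reading order spelled out, here is an illustrative sketch in Python. The party details are paraphrased from this post; the structure itself is just a sketch of the generic logic-model components, not any standard tool:

```python
# A logic model as a simple data structure. The finished model reads
# left to right (inputs -> activities -> outcomes), but you *plan* it
# right to left, starting from the long-term outcomes.
logic_model = {
    "inputs": ["our house", "music playlist", "bubbles"],
    "activities": ["music teacher sing-along", "dance party with bubbles"],
    "short_term_outcomes": ["fun for 90% of the kids",
                            "parents relaxed 60% of the time"],
    "intermediate_outcomes": ["comfortable having friends over soon"],
    "long_term_outcomes": ["willing to throw home parties again"],
}

# Planning order: reverse the reading order, so the long-term outcome
# comes first and every earlier column must justify itself against it.
planning_order = list(reversed(list(logic_model)))
print(planning_order)
```

Running this prints the columns in the order you would fill them in while planning, ending with the inputs you need to gather.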

What I’ve learned from this process is that every idea about what we could do at the party had to pass the test of whether it leads to the long-term outcome of being willing to throw parties in the house in the future – in other words, if the party takes too much work or money (or it isn’t fun), we won’t remember it as an event we are likely to repeat. For example, while we are inviting a person to our house to entertain the kids, that person is our daughter’s music teacher from her preschool, so the kids will already know her, know the music, and be able to sing along.  Another activity with high enjoyment and low effort is the dance party with bubbles. All toddlers love to dance, and we can make a playlist of our daughter’s favorite dance songs.  Adding bubbles to the mix is the frosting on the cake.

The short-term goals are our immediate party goals.  We would like the party to be fun for our daughter and for most of her friends (can we really hope for 100%?  Probably not, so we put 90%).  My husband and I may be a little stressed, but we’re setting our goal fairly low at being relaxed 60% of the time (you’ll have to imagine maniacal laughter here).  Our intermediate goals are simply that we all feel comfortable having our daughter’s friends over to our house in the near future. And the long-term goal is to think this is a good idea to do again and again.

Wish us luck!

New Outcome Measurement Resource for Public Libraries

Friday, January 22nd, 2016

Librarian and children looking at globe in library

About six months ago the Public Library Association (PLA) initiated a service called Project Outcome, which I have been following with interest. An article entitled “Project Outcome – Looking Back, Looking Forward” by Carolyn Anthony, director of the Skokie Public Library, IL, was recently published in Public Libraries Online; it describes the successes of libraries using this service over the past six months.

Project Outcome is an online resource that provides evaluation tools designed to measure the impact of library programs and services, such as summer reading programs or career development programming. It also provides ready-made reports and data dashboards that give libraries and stakeholders immediate data on their programs’ outcomes.  And Project Outcome provides support and peer-sharing opportunities to address common challenges and increase capacity for outcomes evaluation.

Here are some of the things that make me the most excited about this service:

  1. Project Outcome has managed to create a structured approach for program outcome evaluation that can be used online by public libraries of all shapes and sizes by people who have not done outcome evaluation before.  Along with tools for collecting data, the resource has tutorials and support for libraries doing outcomes evaluation for the first time.
  2. Continued support and peer sharing as an integral part of the service means that PLA is building a community of librarians who use outcome evaluation.
  3. The stories shared by peers, as described in the article, will increase the understanding that evaluation isn’t something forced on you from outside, but can be something that helps you create a better library and enhance the meaning of your library’s programs.
  4. This process teaches librarians to start with the evaluation question (“decide what you want to learn about outcomes in your community”) and a plan for what to do with the findings. And the process ends with successfully communicating your findings to stakeholders and implementing next steps.
  5. Lastly, I love that Project Outcome and the PLA Performance Measurement Task Force are planning the next iteration of their project that will measure whether program participants followed through with their intended outcomes.  It will be very interesting to find out how this long term outcome evaluation comes out.

I’ll end with this statement from Carolyn Anthony, who said “the opportunity to spark internal conversations and shift the way libraries think about their programs and services is what outcome measurement is all about.”

Data Party Like it’s 2099! How to Throw a Data Party

Friday, November 20th, 2015

What’s a “data party?” We attended a program by evaluator Kylie Hutchinson entitled “It’s a Data Party!” at the AEA 2015 conference last week in Chicago.  A data party is another name for a kind of participatory data analysis, where you gather stakeholders together, show them some of the data that you have gathered and ask them to help analyze it.

Isn’t analyzing the data part of your job?  Here are some reasons you might want to include stakeholders in the data analysis stage:

  • It allows stakeholders to get to know and engage with the data
  • Stakeholders may bring context to the data that will help explain some of the results
  • When stakeholders participate in analyzing the data, they are more likely to understand it and use it
  • Watching their interactions, you can often find out who is the person with the power to act on your recommendations

So how do you throw a data party? First of all, you need to know what you hope to get from the attendees, since you may only be able to hold an event like this once. There are a number of different ways to organize the event.  You might consider using a World Cafe format, where everyone works together to explore a set of questions, or an Open Space approach, in which attendees create their own agenda of questions they want to discuss.  Recently the AEA held a very successful online unconference using MIT’s Unhangout, which could also work for an online data party with people in multiple locations.

The kinds of questions Kylie Hutchinson suggested asking at a data party include:

  • What does this data tell you?
  • How does this align with your expectations?
  • What do you think is occurring here and why?
  • What other information do you need to make this actionable?

At the end of the party it might be time to present some findings and recommendations of your own.  Considering the work the attendees have just done, they may be more willing to listen.  As Kylie Hutchinson said, “People support what they helped create.”

 

Group Learning about Evaluation

Friday, October 23rd, 2015

Group of young business people talking on business meeting at office.

I recently got to participate in a very successful roundtable at a library conference.  I co-moderated an evaluation roundtable entitled “Library assessment: You’ve measured your success – now how do you get people to listen?” with OU-Tulsa Schusterman Library Associate Director Katie Prentice at the South Central Chapter of the Medical Library Association Annual Meeting in Little Rock, AR.

What makes roundtables unique among educational opportunities at library conferences is that unlike presentations or papers where attendees sit and listen, in a roundtable everyone can participate. It is a moderated discussion on a given topic among the people who attend, and since anyone can chime in, learning is active instead of passive.

About 25 people attended this roundtable and enthusiastically participated in a discussion about library assessment and evaluation data. Katie and I led the discussion with questions, starting from what kind of data you collect at your library and leading to what libraries do with the data and how to make it work better for them. Our goal was to use our questions to allow the issues and solutions to come from the attendees themselves.

As an example, when we asked the question “what would you really like to know about your library and what do you dream of finding out about your users?” one hospital librarian said that she wanted to know how doctors were using the information and how it impacted the patients. Katie Prentice asked “can anyone help her with this?” and another hospital librarian responded that she sends emails to some of her doctors to ask for a sentence or two describing how the information was used.  These sentences, when collected and analyzed, could be a powerful tool to show hospital administration the importance of the library to patient outcomes.

Other kinds of evaluation ideas that were generated from attendees at this roundtable were:

  • using heat map software to determine where people go most often on your website
  • having student workers note what pieces of furniture are being used to improve furniture types and placement in the library
  • using a product like Constant Contact or Mail Chimp to send library newsletters containing assessment data to the doctors and employees at hospitals.

While not all roundtables at conferences are this successful, this roundtable demonstrated the ability of librarians brought together in a group to learn from each other and solve problems.

Creative Annual Reports

Friday, October 9th, 2015

Ah, the annual report – at its best we expect to see a glossy booklet with pie charts, short paragraphs and some quotes. At its worst it can be pages of dry text. Our main hope with annual reports is that our stakeholders and others will read them and be impressed with the successes of our organizations.

Last month I ran across the annual report from the Nowra Public Library in New South Wales, Australia, which was so compelling and understandable that over 100,000 people have viewed it on YouTube:

Photo of librarian links to Nowra library video

Since most organizations don’t have the resources to do their own music video (e.g. singers, writers, silly costumes), I thought I would look at a few other examples to consider when it’s time to do your annual report.

One of my all-time favorites is the annual report from the Schusterman Library of The University of Oklahoma-Tulsa. Their annual report is an infographic that shows the data that is collected, but also describes the data in such a way that 1) you have a better feel for what is going on in the library; and also 2) you might think “I didn’t know they would help me with that!”  For example: “7,274 Reference questions answered in person, by phone, by email, and instant message or text on everything from ADHD and child welfare to decision trees, LEED homes, and census reporting.” It is available on their website, and the librarians at the Schusterman Library say they frequently find students looking at it.

The Michigan State University College of Education won a gold ADDY and a Best in Show Award for their 2012 Annual Report (the ADDY Awards are the advertising industry’s largest competition).  Their report featured a tri-fold, die-cut skyline that presented the college’s missions and strengths with an emphasis on “college as community.” The annual report also included a video and a website that give detailed narratives showing institutional successes in terms of personal stories.

Of course, not all institutions want an unusual annual report.  But it is important to consider the target audience.  Annual reports reach the upper administration, potential funders, and patrons of the library. The success of this year’s annual report might shape library users’ view of the library for years to come.

The OERC is on the Road Again

Friday, September 25th, 2015

The OERC is on the road again.  Today, Cindy and Beth Layton, Associate Director of the NN/LM Greater Midwest Region, are team-teaching Measuring What Matters to Stakeholders at the Michigan Health Sciences Library Association’s annual conference in Flint, MI.

Logo for Michigan Health Sciences Library Association

This workshop covers strategies for using evaluation to enhance and communicate a library’s value to organizational decision-makers and stakeholders who influence decision makers. The workshop combines updated information with material from the NN/LM MidContinental Region’s Measuring Your Impact and the OERC’s Valuing Your Library workshops that have been taught by a number of regional medical library staff members over the past decade.

On Saturday, Karen is presenting a brand-new workshop for the Texas Library Association’s District 8 Conference called Adding Meaning to Planning: A Step-by-Step Method for Involving Your Community in Meaningful Library Planning.

TLA District 8 Logo

The workshop presents a method of involving community members in creating pain-free logic models to ensure that the long-term vision is always in sight when planning.  Karen wrote a blog entry about creating “tearless” logic models here: http://nnlm.gov/evaluation/blog/2015/04/10/tearless-logic-models/  This is Karen’s first experience creating and delivering a workshop that is purely about library evaluation.

The NN/LM travel season is about to go into full swing.  We know we aren’t the only ones out and about with presentations, trainings, and exhibits.  So safe travels. And we will see you in a week with another OERC blog post.

Which Online Survey Tool Should I Use? A Review of Reviews

Friday, September 4th, 2015

Recently we faced the realization that we would have to reevaluate the online survey tool that we have been using. We thought that we would share some of the things that we learn along the way.

First of all, finding a place that evaluates survey products (like Survey Monkey or Survey Gizmo) is not as easy as going to Consumer Reports or Amazon (or CNET, Epinions, or Buzzillions).  A number of websites provide reviews of survey tools, but their quality varies widely.  So for this week our project has been to compare review websites to see what we can learn from and about them.

Here are the best ones I could find that compare online survey tools:

Zapier.com’s Ultimate Guide to Forms and Surveys, Chapter 7 “The 20 Best Online Survey Builder Tools”

This resource compares 20 different online survey tools. There is a chart with a brief statement of what each survey tool is best for, what you get for free, and the lowest plan cost. Additionally, there is a paragraph description of each tool and what it does best.  Note: this is part of an eBook published in 2015 which includes chapters like “The Best Online Form Builders for Every Task.”

Appstorm.net’s “18 Awesome Survey & Poll Apps”

This review was posted on May 27, 2015, which reassures us that the information is most likely up to date.  While the descriptions are very brief, it is good for a quick comparison of the survey products. Each review notes whether or not there is a free account, whether the surveys can be customized, and whether or not there are ready-made templates.

Capterra.com’s “Top Survey Software Products”

This resource appears to be almost too good to be true. Alas, with no date shown, the specifics in the comparisons might no longer be accurate.  Nevertheless, this website lists over 200 survey software products, has separate profile pages for each product (with varying amounts of detail), and lists the features each product offers.  You can even narrow down the options by filtering by feature.  Hopefully the features in Capterra’s database are kept updated for each product.  One thing to point out is that at least two fairly well-known survey products (that I know of) are not in their list.

AppAppeal.com’s “Top 31 Free Survey Apps”

Another review site with no date listed. This one compares 31 apps by popularity, presumably in the year the article was written. One thing unique about this review site is that each in-depth review includes the history and popularity of the app, how it differs from other apps, and who they would recommend the app to.  Many of the reviews include videos showing how to use the app.  Pretty cool.

TopTenReviews.com’s 2015 Best Survey Software Reviews and Comparisons

This website has the feel of Consumer Reports. It has a long article explaining why you would use survey software, how and what the reviewers tested, and the kinds of things that are important when selecting survey software. Also like Consumer Reports, it has ratings of each product (including the experiences of the business, the respondents, and the quality of the support), and individual reviews of each product showing pros and cons. Because the date is included in the review name, the information is fairly current.

This is a starting point. There are individual reviews of online survey products on a variety of websites and blogs, which are not included here.  Stay tuned for more information on online survey tools as we move forward.

 

Improving Your Data Storytelling in 30 Days

Friday, August 21st, 2015

Here are some more great techniques to help with telling a story to report your evaluation data so it will get the attention it deserves.

Juice Analytics has this truly wonderful collection of resources in a guide called “30 Days to Data Storytelling.” With assignments of less than 30 minutes a day, this guide links to data visualization and storytelling resources from sources as varied as Pixar, Harvard Business Review, Ira Glass, the New York Times, and Bono (yes, that Bono).

The document is a checklist of daily activities lasting no longer than 30 minutes per day. Each activity is either an article to read, a video to watch, or a small project to do.

The resources answer valuable questions like:

  • What do you do when you’re stuck?
  • How do I decide between visual narrative techniques?
  • Where can I find some examples of using data visualization to tell a story?

A Rainbow Connection? The Evaluation Rainbow Framework

Friday, August 7th, 2015

Once you start looking online for help with your evaluation project, you will find a veritable storm of evaluation resources out there – so many that it can be confusing to choose the best ones for your needs.  But don’t worry: once you’ve looked at this online tool you will find the rainbow of hope that follows the storm (okay, that was pretty cheesy – stay with me, it gets better).

A group of evaluators from all over the world created a website called BetterEvaluation.org for the purpose of organizing and sharing useful online evaluation resources. The framework they created to organize resources is called the Rainbow Framework because it divides the world of evaluation into seven different “clusters” which are delineated by rainbow colors.  Each cluster is then broken down into a number of tasks, and each task broken down into options and resources.

Here is an example of the Rainbow Framework in action.  By clicking on the yellow “Describe” category, the image opens a window on the right that lists seven tasks: 1) Sample; 2) Use measures, indicators, or metrics; 3) Collect and/or retrieve data; 4) Manage data; 5) Combine qualitative and quantitative data; 6) Analyze data; and 7) Visualize data.

When you click on a specific task, a page listing a variety of Options and Resources will open, like this:

Rainbow Framework Options

 

BetterEvaluation made eight 20-minute “coffee break” webinars in conjunction with AEA that you can watch for free on the BetterEvaluation website. Each webinar describes a cluster, and there is one overview webinar.  The webinars are two years old, so the actual image of the rainbow looks a little different from the one in the webinar video, but the content is still relevant.  Here is a link to the webinar series: http://betterevaluation.org/events/coffee_break_webinars_2013

The Rainbow Framework does more than just organize resources. Here are some reasons you might want to use this Framework.

1) Help designing and planning an evaluation

2) Check the quality of an ongoing evaluation

3) Commission an evaluation – will help formulate what’s important to include when you commission an evaluator and then when you assess the quality of the proposals

4) Embed stakeholder participation thoughtfully throughout the evaluation

5) Develop your evaluation capacity – lifelong learning – to fill in gaps of knowledge.

So, somewhere over the rainbow, your evaluation skies may be blue…

How to Write a Mission Statement Without Losing Your Mind

Friday, July 31st, 2015

Mission statements are important. Organizations use them to declare to the world how their work matters. They are the North Star for employees, guiding their efforts toward supporting organizational priorities.  And mission statements are important to evaluators, because evaluation methods are ultimately designed to assess an organization’s value.  Having those values explicitly stated is very helpful.

Yet most of us would rather clean out the office refrigerator than participate in a mission-writing process. Now imagine involving 30 people in the writing process. Make that the refrigerator and the microwave, right?

That’s why I am so enthusiastic about the Nonprofit Hub’s document A Step-By-Step Exercise for Creating a Mission Statement, which the authors promise is a tool “for those who want to skip the nitpicking, word choice arguments or needing to create the elusive ‘perfect mission statement.’”

I won’t go into details about how their process works, because the guide lays it out elegantly and concisely. You can read through the process in five minutes, it is so succinct.   I’ll just tell you what I like most:

  • The exercise reportedly takes 1-2 hours, even though you are engaging up to 30 stakeholders in the process.
  • Stories comprise the foundation of the mission statement: people start by sharing stories about the organization’s best work.
  • The individuals do group qualitative analysis on the stories to begin to understand the organization’s cause, activities, and impact.
  • Small groups draft mission statements, with instruction to write short, simple sentences. In fact, 10-word sentences are held up as an ideal. The small groups share back with the large group, where big ideas are identified and discussed.
  • The actual final wording is assigned to a small task force to create after the meeting, which prevents wordsmithing from dampening the momentum (and the mood).
  • In the end, everyone understands and endorses the mission statement because they helped develop it.

This exercise has potential that reaches beyond development of mission statements.  It would be a great exercise for advisory groups to contribute their ideas about future activities. Their advice will be based on your organization’s past successes.  The stories generated are data that can be analyzed for organizational impact.  If you are familiar with Appreciative Inquiry, you’ll recognize the AI influence in this exercise.

The group qualitative analysis process, alone, could be adapted to other situations (see steps 1 and 2).  For example, a small project team could use the process to analyze stories from interviews, focus groups, or even written comments to open-ended survey questions.

Even if mission statements are not on your horizon, check out the Nonprofit Hub’s document. There might be something you can adapt for future planning and evaluation projects.

Cover sheet for the Nonprofit Hub's "A Step-by-Step Exercise for Creating a Mission Statement" exercise instructions

Last updated on Thursday, May 19, 2016

Funded by the National Library of Medicine under contract # HHS-N-276-2011-00008-C.