
NEO Shop Talk

The blog of the National Network of Libraries of Medicine Evaluation Office

Archive for May, 2016

Worksheets and Checklists to Help with Evaluation

Friday, May 27th, 2016

You may already know that the NEO offers a series of booklets that walk through some basic evaluation processes, called Planning and Evaluating Health Information Outreach Projects. You can view them in PDF or HTML format, or you can order a print version (there are limited copies of these left, so no promises). But I think the under-marketed gems in these booklets are the checklists and worksheets at the end of each one. The ones in the HTML version of the booklets are Word documents that you can download, modify if you want, and use.

Covers of the three Evaluation Booklets

For example, let’s say you’d like to create a survey to find out if you’ve reached your project’s outcomes. A process for this is explained in Booklet 3: Collecting and Analyzing Evaluation Data. At the end of this booklet is a blank worksheet called "Planning a Survey" that you can download and use to work through writing your survey questions. There’s also an example of a filled-out worksheet, based on a realistic scenario, that demonstrates how the worksheet can be used.

The importance of checklists in improving outcomes is underscored in Dr. Atul Gawande’s book The Checklist Manifesto. While he’s mostly talking about medical scenarios, the same holds true for evaluation. Let’s face it: even if you feel fairly confident in evaluation, there are a lot of little things to remember, especially if you don’t do it all the time.

In Booklet 1: Getting Started with Community-Based Outreach, the checklist items are sorted into the same three categories as the chapters of the booklet: "Step 1 – Get Organized," "Step 2 – Gather Information," and "Step 3 – Assemble, Interpret and Act." So, for example, one of the items is "Gather primary data to complete the picture of your target community." If you’d like a reminder or some suggestions on how to go about this, go to the chapter "Step 2 – Gather Information," where you’ll find a list of ways to gather primary data. These checklists can also be downloaded as a Word document and adapted to your own needs.

I hope this isn’t too meta, but while you’re using evaluation to help you reach your project’s outcomes, you also have a personal outcome of doing a good job with your evaluation plan!  So when you head into your evaluation projects, don’t forget your checklists to make sure all of your evaluation outcomes are successful.

 

Steering by Outcomes: Begin with the End in Mind

Friday, May 20th, 2016

"If you don’t know where you’re going, you might not get there." – Yogi Berra

Toy car sitting on a road map

Next week, Karen and I will be facilitating an online version of one of NEO’s oldest workshops, Planning Outcomes-Based Outreach Projects, for the Health Science Information Section of the North Dakota Library Association. The main tool we teach in this workshop is the program logic model, but our key takeaway message is this: Figure out where you’re going before you start driving.

When you drive to a new place, your navigation app insists on a destination, right? Well, I’m like an evaluation consulting app: those who work with me on evaluation planning have to define what they hope to accomplish before we start designing anything.

In fact, I get positively antsy until we nail down the desired end results.  If I’m helping a colleague develop a needs assessment, I want to know how he or she plans to use the data.  To design a program evaluation process, I have to know how the project team defines success. When consulting with others on survey design, I help them determine how each question will provide them with actionable information.

My obsession with outcomes crept into my personal life years ago. Before I sign up for continuing education or personal development workshops, I consider how they will change my life.  When my husband and I plan vacations, we talk about what we hope to gain on our trip. Do we want to connect with friends? See a new landscape? Catch up on some excellent Chicago comedy? Outcomes-thinking may be an occupational hazard for evaluation professionals. Case in point: Have you seen Karen Vargas’s birthday party logic model?

Top 5 Reasons to Love Outcomes

So how did I become an outcomes geek? Here are the top five reasons:

  • Outcomes are motivating: Activities describe work, and who among us needs more work? Outcomes, on the other hand, are visionary. They allow you to imagine and bask in a job well done. Group discussions about outcomes are almost always more uplifting and enthusiastic than discussions about project implementation. Plus, you will attract more key supporters by talking about the positive benefits you hope to attain.
  • Outcomes help you focus: Once you have determined what success looks like, you’ll think more carefully about how to accomplish it.
  • Outcomes provide a reality check: Once you know what you want to accomplish, you’ll think more critically about your project plans. If the logical connection doesn’t hold, you can course-correct before you even start.
  • Planned outcomes set the final scene for your project story: Ultimately, most of us want or have to report our efforts to stakeholders, who, by definition, have a vested interest in our program. Project stories, like fairy tales, unfold in three acts: (Act 1) This is where we started; (Act 2) This is what we did; (Act 3) This is what happened in the end.  Program teams notoriously focus on collecting evaluation data to tell Act 2, while stakeholders are more interested in Act 3.  However, if you articulate your outcomes clearly from the start, you are more likely to collect good data to produce a compelling final act.
  • Identifying expected outcomes helps you notice the unexpected ones. Once you start monitoring for planned outcomes, you’ll pick up on the unplanned ones as well. In my experience, most unplanned outcomes are sweet surprises: results that no one on the team ever imagined in the planning phase.  However, you also may catch the not-so-great outcomes early and address them before any real damage is done.

How to Steer by Outcomes 

When I work with individuals or small project teams, here are the questions we address when trying to identify program outcomes:

  • What will project success look like?
  • What will you observe that will convince you that this project was worth your effort?
  • What story do you want to tell at the end of this project?
  • Who needs to hear your story and what will they want to hear?

These questions help small project teams identify outcomes and figure out how to measure them. If you want a larger group to participate in your outcomes-planning discussion, consider adapting the Nine Whys exercise from Liberating Structures.

Once the outcomes are identified, you’re ready to check the logical connection between your program strategies and your planned results. The logic model is a great tool for this stage of planning. The NEO’s booklet Planning Outcomes-Based Programs provides detailed guidance for how to create project logic models.

Yogi Berra famously said “When you come to a fork in the road, take it.”  I would paraphrase that to say “When you come to a fork in the road, check your outcomes and proceed.”

Summer Evaluation Institute Registration is Open!

Friday, May 13th, 2016

AEA Summer Institute Logo

Whether you call what you do evaluation or assessment, the American Evaluation Association’s Summer Evaluation Institute is an amazing event, sure to teach you something and give you a different perspective on the job you do.

The institute, held June 26-29, 2016, is made up of half-day, hands-on training sessions taught by some of the best professionals in the field of evaluation. It’s attended by people from all over the world who want to improve their skills in different aspects of the evaluation process.

Why would you as a librarian want to attend the AEA Summer Evaluation Institute?   Here are some ideas:

Let’s say you’re in charge of eliminating much of your print journal collection and expanding your online journals, and you want to figure out what data you should collect to show that your users are still getting what they need. There’s a great program called "Development and Use of Indicators for Program Evaluation" by Goldie MacDonald that covers criteria for selecting indicators. Goldie MacDonald is a Health Scientist in the Center for Global Health at the U.S. Centers for Disease Control and Prevention (CDC) and a dynamic speaker and trainer.

Are you taking on the planning of an important project, like finding ways to ensure that your hospital administration values the contributions of your library? Logic models are great planning tools, but they are also useful for integrating evaluation plans and strategic plans. How would you like to take a four-hour class on logic models taught by the Chief Evaluation Officer at the CDC, Tom Chapel?

What if you’re a liaison librarian to your university’s biology department and you’re looking for ways to improve collaboration with the faculty? There’s a program called "Evaluating and Improving Organizational Collaboration" that gives participants the opportunity to increase their capacity to quantitatively and qualitatively examine the development of inter- and intra-organizational partnerships. It’s taught by Rebecca Woodland, Chair of the Department of Educational Policy and Administration at the University of Massachusetts Amherst (and recognized as one of the AEA’s most effective presenters).

Maybe you’ve been responsible for a program at your public library training the community to use MedlinePlus for their health information needs. You’ve collected a lot of data showing the success of your programs, and you want to make sure your stakeholders take notice of it. How about giving them an opportunity to work with the data themselves? There’s a program called "A Participatory Method for Engaging Stakeholders with Evaluation Findings," taught by Adrienne Adams of Michigan State University.

This is only a small sampling of the great workshops at the Summer Evaluation Institute.

For those of you who don’t know much about the American Evaluation Association: The AEA is an international professional association devoted to the application and exploration of program evaluation, personnel evaluation, technology, and many other forms of evaluation. Evaluation involves assessing the strengths and weaknesses of programs, policies, personnel, products, and organizations to improve their effectiveness. AEA has approximately 7000 members representing all 50 states in the United States as well as over 60 foreign countries.

Cindy and I will be there – we hope to see you there too!

 

Data Party for Public Librarians

Friday, May 6th, 2016

The Engage for Health project team from left to right: Lydia Collins, Kathy Silks, Susan Jeffery, Cindy Olney

Last week, I threw my first data party. I served descriptive statistics and graphs; my co-hosts brought chocolate.

I first learned about data parties from evaluation consultant Kylie Hutchinson’s presentation It’s A Data Party that she gave at the 2016 American Evaluation Association Conference. Also known as data briefings or sense-making sessions, data parties actively engage stakeholders with evaluation findings.

Guest List

My guests were librarians from a cohort of public libraries that participated in the Engage for Health project, a statewide collaboration led by the NN/LM Middle Atlantic Region (MAR) and the Pennsylvania Library Association (PaLA). The NN/LM MAR is one of PaLA’s partners in PA Forward, a statewide literacy initiative that engages libraries in activities addressing five types of literacy. The project team was composed of Lydia Collins of NN/LM MAR (which also funded the project), Kathy Silks of PaLA, and Susan Jeffery of the North Pocono Public Library. I joined the team to help them evaluate the project and develop reports to bring visibility to the initiative. Specifically, my charge was to use this project to provide experiential evaluation training to the participating librarians.

Librarians from our 18 cohort libraries participated in all phases of the planning and evaluation process. Kathy and Susan managed participant recruitment and communication. Lydia provided training on how to promote and deliver the program, as well as assistance with finding health care partners to team-teach with the librarians. I involved the librarians in every phase of program planning and evaluation: we met to create the project logic model, develop the evaluation forms, and establish a standard process for printing, distributing, and returning the forms to the project team. In the end, librarians delivered completed evaluation forms from 77% of the adult participants in their Engage for Health training sessions.

What We Evaluated

The objectives of PA Forward include improving health literacy, so the group’s outcome for Engage for Health was to empower people to better manage their health. Specifically, we wanted them to learn strategies that would lead to more effective conversations with their health care providers. Librarians and their health care partners emphasized strategies such as researching health issues using quality online health resources, making a list of medications, and writing down questions to discuss at appointments. We also wanted participants to know how to use two trustworthy online health information sources from the National Library of Medicine: MedlinePlus and NIHSeniorHealth.

Party Activities

Sharing with Appreciative Inquiry: The data party kicked off with Appreciative Inquiry interviews. Participants interviewed each other, sharing their peak experiences and what they valued about those experiences. Everyone then shared their peak experiences with the large group. (See our blog entries here and here for detailed examples of using Appreciative Inquiry.)

Data sense-making: Participants then worked with a fact sheet of graphs and summary statistics compiled from the session evaluation data. As a group, we reviewed our logic model and discussed whether our data showed that we achieved our anticipated outcomes. The group also drew on both the fact sheet and the stories from the Appreciative Inquiry interviews to identify unanticipated outcomes. Finally, they identified metrics they wish we had collected: what was missing?

Consulting Circles: After a morning of sharing successes, the group got together to help each other with challenges. There were three challenge areas the group wanted to address: integrating technology into the classes, finding partners from local health organizations, and promotional strategies. No area was a problem for all librarians: some were quite successful in a given area, while others struggled. The consulting groups were a chance to brainstorm effective practices in each area.

Next steps: As with most funded projects, both host organizations hoped that the libraries would continue providing health literacy activities beyond the funding period. To get the group thinking about program continuity, we ran a 1-2-4-All discussion about next steps. Participants first identified the next steps they would take at their libraries, then provided suggestions to NN/LM MAR and PaLA on how to support their continued efforts.

Post-Party Activities

For each of the four party activities, a recorder from each group took discussion notes on a worksheet developed for that activity, then turned it in to the project team. We will incorporate the group feedback into written reports that are currently in progress.

If you are curious about our findings, I will say generally that our data supports the success of this project.  We have plans to publish our findings in a number of venues, once we have a chance to synthesize everything.  So watch this blog space and I’ll let you know when a report of our findings becomes available.

Meanwhile, if you are interested in reading more about data parties, check out this article in the Journal of Extension.

 

Diversity, Texas Libraries and Participatory Data Collection

Monday, May 2nd, 2016

On April 20, Cindy Olney and I facilitated a program called Open Libraries! Making Your Library Welcome to All at the Texas Library Association (TLA) Annual Conference. The program was sponsored by TLA’s Diversity and Inclusion Committee, and the plan was for attendees to work cooperatively to discover ways to ensure that people of diverse cultures, languages, ages, religions, sexual orientations, physical abilities, and more feel welcome at the library. The committee wanted to draw on the wealth of TLA librarians’ experience, so Cindy was invited to gather as much information from the attendees as possible. As co-chair of the TLA Diversity and Inclusion Committee, I co-facilitated the event.

We used a modified 1-2-4-All process, which you can find on the Liberating Structures website. Our primary question was "What can my library do to become more welcoming to all people?" To get things started, we asked everyone in the room to brainstorm together all the different parts of a library that could be modified to make it more welcoming (e.g., reference services, the facility). We wanted to be sure that everyone thought as broadly and creatively as possible.

TLA Diversity Data Collection Program 2016

The discussion process had two parts. In part one, we gave everyone two minutes to write as many ideas as they could on index cards (one idea per card). We then asked people to take two minutes to share their ideas with a partner, and then to share them with the entire table (up to 10 participants). Each table then chose its three best ideas, wrote them down, and turned them in to the moderators. Participants were instructed to leave their index cards piled in the middle of their tables.

Here are some of the ideas generated through this discussion:

  • Welcome signs in different languages
  • Signage
  • Physical spaces – access to mobility

As you can see, the responses were fairly non-specific, and we wanted richer descriptions of how programs or services might be modified. So part two of the process involved asking participants to develop more detailed plans for making their libraries more welcoming. Using a method involving dog, cat, and sea creature stickers, we moved participants randomly to new tables so that they ended up with a new group of colleagues. Each participant then chose a partner from their new table and, as a pair, randomly drew one idea card from the piles generated in part one. They worked on a plan for that idea for eight minutes. When the moderator called time, they pulled another card and worked on a plan for a second idea. In the final eight minutes of the session, we asked each table to share its ideas with the entire group.

The plans from part two were more detailed and better articulated than those we got in part one. Here are some examples of the kind of results we got from that exercise:

  • Signage: Make it clearer and more colorful; offer signage in different languages or use digital signage.
  • Language materials specific to the community and programming in the various languages spoken in the community; ESL classes offered in partnership with community colleges.
  • Invite representatives from ADA/disability advocacy groups to give suggestions on making library desks and service areas more accessible.

The whole process was completed in a 50-minute conference program session. The other Diversity and Inclusion co-chair, Sharon Amastae of El Paso, TX, and I were both impressed with the energy and enthusiasm in the room.

The results of this data gathering event will be communicated to the TLA membership.  When that project is completed, we’ll let you know here on the NEO Shop Talk blog!

Photo credit: Esther Garcia

 


Funded by the National Library of Medicine under Contract No. UG4LM012343 with the University of Washington.