
NEO Shop Talk

The blog of the National Network of Libraries of Medicine Evaluation Office

Archive for the ‘News’ Category

Meet the NEO’s New Program Assistant Kalyna Durbak

Friday, October 14th, 2016

Kalyna Durbak

I am pleased to introduce the NEO’s new program assistant, Kalyna Durbak, MLIS, who joined our staff on October 3.  Kalyna will be our go-to person for managing the NEO website, providing technical support with webinars, and helping with the “roll-up-your-sleeves” work involved in carrying out evaluation projects.

Kalyna began working for the UW Health Sciences Library in May 2016. Prior to joining the NEO, Kalyna was the Web Content Assistant on the team that created and promotes the Response & Recovery App in Washington (RRAIN), designed to provide emergency responders with quick access to disaster-management resources. It also provides local information such as weather alerts and traffic reports. Kalyna also provided web content and social media assistance for the Health Evidence Resource for Washington State (HEALWA), a portal that provides affordable online access to clinical information and health education resources. The portal is available to health professionals who are licensed through 23 state organizations. A 2015 evaluation study conducted by HEALWA showed that many health professionals eligible to use the portal are not aware of it.  Kalyna helped promote HEALWA through social media and exhibits.

Kalyna earned her MLIS degree from the University of Illinois at Urbana–Champaign and a BA in History from the University of Illinois at Chicago. She was an intern at the Smithsonian Ralph Rinzler Folklife Archives and Collections and the Rochester Institute of Technology Archives. A Midwestern native, she recently moved to Seattle with her husband, drawn by the city’s mix of urban and outdoor opportunities. To introduce herself to our readers, Kalyna agreed to answer a few questions about herself.

What made you want to pursue an MLS?

I always considered myself a “jack of all trades.” At school I did not excel in one subject, but rather did fairly well in most areas of study. I also fell in love with researching, and doing “deep dives” into different subjects. I figured that with an MLS, I could end up working in vastly different environments, help others with research, and pursue my dream of being a lifelong learner.

What made you want to join the NN/LM Evaluation Office?

I recently realized that I needed to strengthen my evaluation skills. Whether I am working or volunteering, I am constantly trying to solve issues concerning outreach and training. For most of my career, I just created solutions without ever thinking “How can I measure my success in solving this issue?” and “Are these solutions working the way I intended?” These questions are key in determining whether the solution is actually solving any problems, or just wasting time and energy.

What experience have you had with evaluation?

My experiences with evaluation come from managing social media accounts. Once I realized I had a whole dashboard of statistics at my disposal, I used them to set optimization goals in terms of posting times and the types of content that resonate with my audiences.

What evaluation skills do you particularly hope to develop?

I am very interested in developing my outcome assessment skills. I am usually the big idea person of a group, and enjoy setting lofty goals. In the past, I have measured the success of an initiative based on the number of tasks my group completed for the project. What I want to do going forward is measure success by the initiative’s impact on the intended audience and community.

What other interests do you have?

I am very active in a Ukrainian Scouting Organization called Plast. Through scouting I found my love for the outdoors, and made countless friends all over the United States and around the world. When I’m not working on scouting activities, I find myself crafting. My favorite crafts include quilling, card making, and traditional Ukrainian embroidery.

When I am crafting or commuting to work, I listen to various nerdy podcasts. Some of my favorites include 99% Invisible, LibUX, and Reply All.

What is the bravest thing you’ve ever done?

I took a hike with my husband down the Grand Canyon. I didn’t make it that far, because I’m afraid of heights. There was one foot between my body and a drop into the canyon, and that was not where I wanted to be. I had to turn back partway down. My husband said, “I love you. Do you mind if I keep going?”  So I had to walk back up the trail alone. Looking back, I’m glad I went through it.  Once I climbed up, I felt so proud of myself.

What’s in a Name? Convey Your Chart’s Meaning with a Great Title

Friday, October 7th, 2016

Some of you may be working on conference posters and paper presentations for fall conferences.  And some of those will probably include charts presenting data that represent a lot of hard work on your part.  In most cases you have only minutes to use that chart to get your audience to understand the data.

Stephanie Evergreen has great advice for displaying chart data.  She literally wrote the books on it: Presenting Data Effectively and Effective Data Visualization.  Her recent blog post is about one of the simplest and most powerful changes you can make to effectively present your chart data: “Strong Titles Are The Biggest Bang for Your Buck.”

What many of us do is present the data with a generic title, like “Attendance rates.” Then the viewer has to spend time working through the data, and you hope they see what you wanted them to.  What Stephanie Evergreen proposes (backed by persuasive research) is to give your charts a clear title that explains what the data show. Your poster or paper is almost certainly making a point.  Determine how your chart supports that point and state it in the title.  Here are some reasons why:

  • It respects your viewers’ time
  • It forces you to be clear about the point you want your data to make
  • It makes the data more memorable

Stephanie Evergreen’s post has some great examples of how a good title can really improve the impact of the chart.  In addition, here is an example from the NEO webinar Make Your Point: Good Design for Data Visualization.

Looking at this original chart, you might notice that in each activity the follow-up showed an increase over the baseline.  If you, the viewer, didn’t have a lot of time, that might be all you noticed.

Chart with title: Comparison of emergency preparedness activities from baseline to follow-up

With a simple change of title, you can see that the author of this presentation is highlighting the increased number of continuity-of-services plans.  This enhances the point of the presentation without wasting the viewers’ time. Also, note that the title is left-justified instead of centered.  Because the title is a full sentence, a left-justified format is easier to read.

Chart with title: The biggest improvement in emergency preparedness from baseline to follow-up was the number of network member organizations reporting that they had or were working on a service continuity plan.
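If you build your charts in code, the same technique carries over directly. Here is a minimal matplotlib sketch of a full-sentence, left-aligned title; the activity names and counts are made up for illustration, not the webinar’s actual data:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# Hypothetical baseline/follow-up counts for three preparedness activities
activities = ["Emergency plan", "Staff training", "Service continuity plan"]
baseline = [12, 15, 5]
followup = [14, 18, 16]

fig, ax = plt.subplots(figsize=(7, 4))
x = range(len(activities))
ax.bar([i - 0.2 for i in x], baseline, width=0.4, label="Baseline")
ax.bar([i + 0.2 for i in x], followup, width=0.4, label="Follow-up")
ax.set_xticks(list(x))
ax.set_xticklabels(activities)
ax.legend()

# A full-sentence title that states the chart's point,
# left-justified (loc="left") because it reads like a sentence
ax.set_title(
    "The biggest improvement was the number of organizations\n"
    "with a service continuity plan",
    loc="left",
)
fig.savefig("preparedness.png")
```

The `loc="left"` argument to `set_title` is all it takes to switch from the default centered title to the easier-to-read left-justified layout.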

So, while Shakespeare might have been correct when he wrote “What’s in a name? that which we call a rose / By any other name would smell as sweet,” what if the presenter was trying to show the fortitude of Texas antique roses to survive in harsh weather conditions, and the viewer only noticed how sweet the rose smelled?  Maybe the heading “A Rose” sometimes isn’t enough information.


Update Your Evaluation Toolbox: Two Great Conferences

Friday, September 23rd, 2016

It’s fall, also known as the beginning of conference season. It’s a very exciting time if you like evaluation/assessment.  If you want to improve your evaluation skills, two great conferences are coming up back-to-back.  Take a look at some of these highlights and pick one to attend!

Oct. 24-29, 2016 Evaluation 2016, Atlanta GA

Evaluation 2016 October 24-29, Atlanta, GA

This is the annual conference of the American Evaluation Association, an international organization with over 7000 members, and interest groups that cover topics like Assessment in Higher Education; Collaborative, Participatory & Empowerment Evaluation; and Data Visualization and Reporting.  The theme of this year’s conference is Evaluation + Design.

The conference has 40 workshops and 850 sessions.  Here are some example programs:

  • From crap to oh snap: Using DIY templates to (easily) improve information design across an organization
  • Developing Evaluation Tools to Measure MOOC Learner Outcomes in Higher Education
  • Evaluation Design for Innovation/Pilot Projects

There’s still time for Early Bird Registration (ends October 3)!

Oct. 31-Nov. 2, 2016 Library Assessment Conference, Arlington VA

Library Assessment Conference 2016

This conference only happens every other year and is co-sponsored by the Association of Research Libraries (ARL) and the University of Washington (UW) Libraries (disclosure: the NEO is part of the UW Libraries, something we’re quite proud of).  The theme for this conference is Building Effective, Sustainable, Practical Assessment.

This conference is bookended by workshops like Getting the Message Out: Creating a Multi-Directional Approach to Communicating Assessment and Learning Analytics, Academic Libraries, and Institutional Context: Getting Started, Gaining Traction, Going Forward.

Scholarly papers and posters with titles like “How Well Do We Collaborate? Using Social Network Analysis (SNA) to Evaluate Engagement in Assessment Program” and “Consulting Detectives: How One Library Deduced the Effectiveness of Its Consultation Area & Services” are organized around a variety of topics, such as Organizational Issues; Ithaka S+R; and Analytics/Value.

 

This is an exciting time to be in the assessment and evaluation business.  Take this amazing opportunity to go to one of these conferences.


From Logic Model to Proposal Evaluation – Part 2: The Evaluation Plan

Friday, September 2nd, 2016

Photo of black and white cat with fangs

Last week we wrote some basic goals and objectives for a proposal about teaching health literacy skills to vampires in Sunnydale.  Here’s what the goals and objectives look like, taken from the Executive Summary statement in last week’s post:

Goal: The goal of our From Dusk to Dawn project is to improve the health and well-being of vampires in the Sunnydale community.

Objective 1: We will teach 4 hands-on evening classes on the use of MedlinePlus and PubMed to improve Sunnydale vampires’ ability to find consumer health information and up-to-date research about health conditions.

Objective 2: We will open a 12-hour “Dusk-to-Dawn” health reference hotline to help the vampires with their reference questions.

There are also three outcomes that we have identified:

  1. Short-term: Increased ability of the Internet-using Sunnydale vampires to research needed health information.
  2. Intermediate: These vampires will use their increased skills to research health information for their brood.
  3. Long-term: Overall, the Sunnydale vampires will have improved health and as a result form better relationships with the human community of Sunnydale.

To get to an evaluation plan from here you have to know that there are basically two kinds of things you’ll want to measure: process and outcomes.

Process assessment measures whether you did what you said you would do, and in the way you said you would do it. For example, you can count the number of classes you taught, how many people attended, and whether their survey responses showed that they thought you did a good job teaching.

You might also want to show that you were willing to make changes to the plan if a review of your process assessment showed you weren’t getting the results you wanted.  For example, if you scheduled all your classes for early evening but few vampires attended, you might interview some vampires, learn that early evening is mealtime for most vampires, and move your classes to a different time to increase attendance.  Your evaluation plan could show that you are collecting that information and will be responsive to what you see happening.

Outcome assessment measures the extent to which your project had the impact you hoped it would on the recipients of the project, or, more broadly, on their organizations or communities. We showed the first step of outcome assessment in last week’s assignment, but I’m going to break it down a little more here.  Put in basic terms, to do an outcome assessment you state your outcome; add an indicator, a target, and a time frame to come up with a measurable objective; and then write out the source of your data, your data collection method, and your data collection timing to complete the picture.  Let’s talk about each item:

Indicator: This is the evidence you can gather that shows whether or not you met your outcomes.  If one of your outcomes is that the vampires have increased ability to research health information, how would you know if that had happened? The indicator could be their increased confidence level in finding health information, or it could be improvement in skills test scores given before and after a training session.

Target: The target is the goal that makes this project look like a success to you.  For example, if the vampires improve their test scores by 50% over a baseline test, is that enough to say you have successfully reached that outcome?  And how many of the vampires need to reach that 50% goal?  All of them? One of them?  Targets can be hard to identify: you don’t want them to be too hard to reach, but if they’re too easy, your funder may not be impressed with your ambition.  Sometimes you can work with the funder or other stakeholders to set targets that are credible.

Time frame: This is the point in time when the threshold for success should be achieved.  So if you want to make sure the vampires increased their ability by the end of your training, your time frame is the end of the training.

Data Source: This is the location where your information is found. Often, data sources are people (such as participants or observers) but they also may be records, pictures, or meeting notes. Here are some examples of data sources.

Data Collection Methods: Evaluation methods are the tools you use to collect data, such as a survey, observation, or quiz.  Here are more examples of data collection methods.

Data Collection Timing: This describes exactly when you will collect the data.
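Put together, one row of an evaluation plan is just these components filled in and joined into a sentence. Here is an illustrative Python sketch, loosely based on the vampire example; every field value is hypothetical:

```python
# Hypothetical components of one measurable outcome objective
# (all values are illustrative, not from a real evaluation plan)
objective = {
    "outcome": "Sunnydale vampires increase their ability to research health information",
    "indicator": "improvement in skills test scores over a pre-class baseline",
    "target": "at least 75% of attendees improve their scores by 50% or more",
    "time_frame": "by the end of the training",
    "data_source": "class participants",
    "collection_method": "pre- and post-class skills tests",
    "collection_timing": "immediately before and after each class",
}

# Outcome + indicator + target + time frame, written as one sentence
sentence = (
    f"{objective['time_frame'].capitalize()}, {objective['target']}, "
    f"as shown by {objective['indicator']}."
)
print(sentence)
```

Listing the data source, method, and timing alongside the sentence keeps the “how will you know?” half of the plan attached to the “what will success look like?” half.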

What does your final evaluation plan look like? 

Here is a sample piece of an evaluation plan for the Dusk to Dawn proposal.

Objective 1: Teach 4 hands-on evening classes on the use of MedlinePlus and PubMed to improve Sunnydale vampires’ ability to research consumer health information and up-to-date research about health conditions.

Process Assessment: The PI will collect the following information to ensure that classes are being taught, expected attendance figures are being reached, and teachers are doing a good job teaching the classes (including surviving them).  Data will be reviewed after each class, and changes will be made to the program as needed to reach target goals:

◊ Participant roster to measure attendance figures
◊ Class evaluations to measure teacher performance
◊ Count of number of teachers at the beginning and ending of each class to measure survival of instructors
◊ Project team will meet after the second class to review success and lessons learned and to consider course corrections to ensure objectives are met

Outcome Assessment:
Measurable Objective: In a post-test given immediately after each class, a minimum of 75% of Sunnydale vampire attendees will demonstrate that they learned how to find needed resources in PubMed and MedlinePlus by showing at least a 50% improvement over the pre-test.

Based on Level 2 (Learning) in the Kirkpatrick Model, a test will be created with some basic health questions to be researched. Class participants will be given these questions as a pre-test before the class, and then will be given the same questions after the class as a post-test.  This learning outcome will be considered successful if a minimum of 75% of Sunnydale vampire participants demonstrate that their scores improved by at least 50%.
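Once the pre- and post-test scores are in hand, the success criterion is simple arithmetic to check. Here is a minimal Python sketch using made-up scores for eight attendees:

```python
# Hypothetical pre/post test scores (0-100) for eight class attendees
pre_scores = [40, 50, 30, 60, 20, 45, 55, 35]
post_scores = [70, 80, 50, 95, 25, 70, 85, 40]

# Success criterion from the measurable objective:
# at least 75% of attendees improve by at least 50% over their pre-test
improved = [post >= pre * 1.5 for pre, post in zip(pre_scores, post_scores)]
share_improved = sum(improved) / len(improved)
target_met = share_improved >= 0.75

print(f"{share_improved:.0%} of attendees improved by 50%+; target met: {target_met}")
```

With these illustrative scores, 6 of 8 attendees (75%) hit the 50% improvement threshold, so the target is just met.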

Last wishes, I mean, thoughts

This is not a complete evaluation plan, but the purpose of these two posts has been to show how you can go from a logic model to the evaluation plan of a proposal.  Don’t worry if all your outcomes cannot be measured within the scope of your project.  For example, in this Dusk to Dawn project, it might have been dangerous to find out whether the vampires had passed on needed health information to their brood, and even harder to find out whether the vampires had become healthier as a result of the information.  This doesn’t mean you should leave these outcomes out, but you may want to acknowledge that measuring some outcomes is beyond the scope of the project’s resources.

As Grandpa Munster once said, “Don’t let time or space detain ya, here you go, to Transylvania.”

Photo credit: Photo of 365::79 – Vampire Cat by Sarah Reid on Flickr under Creative Commons license CC BY 2.0.  No changes were made.


From Logic Model to Proposal Evaluation – Part 1: Goals and Objectives

Friday, August 26th, 2016

Vocabulary. Jargon. Semantics.  Sometimes I think it’s the death of us all.  Seriously, it’s really hard to have a conversation about anything when you use the same words in the same context to mean completely different things.

Take Goals and Objectives.  I can’t tell you how many different ways this has been taught to me.  But in general all the explanations agree that a goal is a big concept, and an objective is more specific.

Things get complicated when we use words like Activities, Outcomes, and Measurable Objectives when teaching you about logic models as a way of planning a project.  Which of those words correlate with Goals and Objectives when writing a proposal for the project you just planned?

Bela Lugosi as Dracula

I’m going to walk through an example of how we can connect the dots between the logic model we use to plan projects and the terminology used in proposal writing.  There isn’t necessarily going to be a one-to-one relationship, and it might depend on the number of goals you have.

As has been stated in previous posts, we’ve never actually done any work with the fictional community of Sunnydale, a place where there was, in the past, a large number of vampires and other assorted demons.  But in order to work through this problem, let’s go back to this hypothetical post where we used the Kirkpatrick Model to determine outcomes that we would like to see with any remaining vampires who want to live healthy long lives, and get along with their human neighbors.  For this post, I’m going to pretend I’m writing a proposal to do a training project for them based on those outcomes, and then show how they lead to an evaluation plan.

Goals

The goal can be your long-term outcome, or it can be somewhat separate from the outcomes. Either way, your goal needs to be logically connected to the work you’re planning to do.  For example, if you’re going to train vampires to use MedlinePlus, goals like “making the world a better place” or “achieving world peace” are not as connected to your project as something like “improving the health and well-being of vampires” or “improving the health literacy of vampires so they can make good decisions about their health.”

Here is a logic model showing how this could be laid out, using the outcomes established in the earlier post:

Dusk to Dawn Logic Model

Keep in mind that the purpose of a proposal is to persuade someone to fund your project.  So for the sake of my proposal, I’m going to combine the long-term outcomes into one goal statement.

The goal of this project is to improve the health and well-being of vampires in the Sunnydale community.

Objectives

The objectives can be taken from the logic model’s Activities column. But keep something in mind: logic models are small – one page at most – so you can’t use a lot of words to describe activities.  Objectives, on the other hand, are activities with some detail filled in. So in the logic model the activity might be “Evening hands-on training on MedlinePlus and PubMed,” while the objective in my proposal might be “Objective 1: We will teach 4 hands-on evening classes on the use of MedlinePlus and PubMed to improve Sunnydale vampires’ ability to find consumer health information and up-to-date research.”

Objectives in Context

Here’s a sample of my Executive Summary of the project, showing goals, objectives, and outcomes in a narrative format:

Executive Summary: The goal of our From Dusk to Dawn project is to improve the health and well-being of vampires in the Sunnydale community. In order to reach this goal, we will 1) teach 4 hands-on evening classes on the use of MedlinePlus and PubMed to improve Sunnydale vampires’ ability to find consumer health information and up-to-date research about health conditions; and 2) open a 12-hour “Dusk-to-Dawn” health reference hotline to help the vampires with their reference questions.  With these activities, we hope to see a) increased ability of the Internet-using Sunnydale vampires to research needed health information; b) that those vampires will use their increased skills to research health information for their brood; and c) that these vampires will use this information to make good health decisions, leading to improved health and, as a result, better relationships with the human community of Sunnydale.

Please note that in this executive summary, I do not use the word “objectives” to identify the phrases numbered 1 and 2, and I do not use the word “outcomes” to identify the phrases lettered a, b, and c (because I like the way it reads better without them). However, in the detailed narrative of my proposal I would use those terms with those exact phrases.

So then, what are Measurable Objectives?

The key to the evaluation plan is creating another kind of objective: what we call a measurable outcome objective. When you create your evaluation plan, along with showing how you plan to measure that you did what you said you would do (process assessment), you will also want to plan how to collect data showing the degree to which you have reached your outcomes (outcome assessment).  These statements are what we call measurable outcome objectives.

Using the “Book 2 Worksheet: Outcome Objectives” found on our Evaluation Resources web page, you start with your outcome, add an indicator, a target, and a time frame to get a measurable objective, and write it as a single sentence.  Here’s an example of what that would look like using the first outcome listed in the Executive Summary:

Dusk to Dawn Measurable Objective

We’ve gotten through some terminology and some steps for going from your logic model to measuring your outcomes.

Stay tuned for next week when we turn all of this into an Evaluation Plan!

Dare I say it? Same bat time, same bat channel…

 

 


Evaluation Planning for Proposals: a New NEO Resource

Friday, August 12th, 2016

Angry crazy Business woman with a laptop

Have you ever found yourself in this situation?  You’re well along in your proposal writing when you get to the section that says “how will you evaluate your project?”  Do you think:

  1. “Oh #%$*! It’s that section again.”
  2. “Why do they make us do this?”
  3. “Yay! Here’s where I get to describe how I will collect evidence that my project is really working!”

We at the NEO suggest thinking about evaluation from the get-go, so you’ll be prepared when you get to that section.  And we have some great booklets that show how to do that.  But sometimes people aren’t happy when we say “here are some booklets to read to get started,” even though they are awesome booklets.

So the NEO has made a new web page to make it easier to incorporate evaluation into the project planning process and end up with an evaluation plan that develops naturally.

1. Do a Community Assessment; 2. Make a Logic Model; 3. Develop Measurable Objectives; 4. Create an Evaluation Plan

We group the process into 4 steps: 1) Do a Community Assessment; 2) Make a Logic Model; 3) Develop Measurable Objectives for Your Outcomes; and 4) Create an Evaluation Plan.   Rather than explain what everything is and how to use it (for that you can read the booklets), this page links to the worksheets and samples (and some how-to sections) from the booklets so that you can jump right into planning.  And you can skip the things you don’t need or that you’ve already done.

In addition, we have included links to posts in this blog that show examples of the areas covered so people can put them in context.

We hope this helps with your entire project planning and proposal writing experience, as well as provides support for that pesky evaluation section of the proposal.

Please let Cindy (olneyc@uw.edu) or me (kjvargas@uw.edu) know how it works for you, and feel free to make suggestions.  Cheers!


Happy Fourth of July in Numbers!

Friday, July 1st, 2016

4th of July graphic image

Before holidays we sometimes do a post on the value of putting your data in a visually understandable format, perhaps some kind of infographic.

As I write this, some of you may be sitting at your desks pondering how you will celebrate U.S. Independence Day. To help turn your ponderings into a work-related activity, here are some examples of Fourth of July Infographics.  Since some of them have numbers but no dates (for example the number of fireworks purchased in the US “this year”) you might use them as templates for the next holiday-based infographic you create yourself.

If you like history, the History Channel has a fun infographic called 4th of July by the Numbers.  It includes useful information such as:

  • the oldest signer of the Declaration of Independence was Benjamin Franklin at 70,
  • the record for the hot dog eating contest on Coney Island is 68 hotdogs in 10 minutes, and
  • 80% of Americans attend a barbecue, picnic or cookout on the 4th of July

Thinking about the food for your picnic (if you’re one of the 80% having one)?

From the perspective of work (remember work?) here is an infographic from Unmetric on how and why you should create your own infographics for the 4th of July: How to Create Engaging 4th of July Content.

Have a great 4th of July weekend!


Design Principles in Evaluation Design

Friday, June 17th, 2016

Robot and human hands almost touching

“Sometimes… it seems to me that… all the works of the human brain and hand are either design itself or a branch of that art.” Michelangelo

Michelangelo is not the only one who thinks design is important in all human activities.  In his book A Whole New Mind, Dan Pink considers design to be one of the six senses we need to develop to thrive in this world. As Mauro Porcini, PepsiCo’s Chief Design Officer, points out, “There is brand design. There is industrial design. There is interior design. There is UX and experience design. And there is innovation in strategy.” ¹

There is also evaluation design. Whether we’re talking about designing evaluation for an entire project or just one section, like the needs assessment or presenting evaluation results, evaluators are still actively involved in design.

Most of us don’t think of ourselves as designers, however.  Juice Analytics has a clever tool called “Design Tips for Non-Designers” to teach basic design skills and concepts.  Some of these are very specific design tips for charts and PowerPoint slides (which, by the way, are very important and useful, like “avoiding chart junk” and “whitespace matters”).  But other tips can be jumping-off points for thinking about bigger-picture design skills, such as:

  • Using Hick’s Law and Occam’s Razor to explain the importance of simplicity
  • Learning how to keep your audience in mind by thinking of how to persuade them, balancing Aristotle’s suggested methods of ethical appeal (ethos), emotional appeal (pathos), and logical appeal (logos)
  • Learning how Gestalt theory applies to the mind’s ability to acquire and maintain meaningful perceptions in an apparently chaotic world
  • Considering the psychology of what motivates users to take action

The September 2015 issue of Harvard Business Review highlighted design thinking as corporate strategy in their spotlighted articles (which are freely available online, as long as you don’t open more than 4 a month).  Here are some cool things you can read about in these articles:

  • Using design thinking changed the way PepsiCo designed products to fit their users’ needs (my favorite line is how they used to design products for women by taking currently existing products and then applying the strategy of “shrink it or pink it.”)
  • Design is about deeply understanding people.
  • Principles of design can be applied to the way people work: empathy with users, a discipline of prototyping and tolerance for failure.
  • Create models to explain complex problems, and then use prototypes to explore potential solutions.
  • If it is likely that a new program or strategy may not be readily accepted, use design principles to plan the program implementation.

Some people are seen as being born with design skills.  But it’s clear that a lot can be learned with study and practice.  Even Michelangelo said, “If people knew how hard I worked to get my mastery, it wouldn’t seem so wonderful after all.”


¹ James De Vries. “PepsiCo’s Chief Design Officer on Creating an Organization Where Design Can Thrive.” Harvard Business Review. 11 Aug 2015.  Web. 17 June 2016.


Summer Evaluation Institute Registration is Open!

Friday, May 13th, 2016

AEA Summer Institute Logo

Whether you call what you do evaluation or assessment, the American Evaluation Association’s Summer Evaluation Institute is an amazing event, sure to teach you something and give you a different perspective on the job you do.

The institute, held June 26-29, 2016, is made up of half-day hands-on training sessions taught by some of the best professionals in the field of evaluation.  It’s attended by people from all over the world who want to improve their skills in different aspects of the evaluation process.

Why would you as a librarian want to attend the AEA Summer Evaluation Institute?   Here are some ideas:

Let’s say you were in charge of eliminating much of your print journal collection and increasing your online journals, and you want to figure out what data you should collect that will show that your users are still getting what they need. There’s a great program called “Development and Use of Indicators for Program Evaluation” by Goldie MacDonald that covers criteria for selection of indicators. Goldie MacDonald is a Health Scientist in the Center for Global Health at the U.S. Centers for Disease Control and Prevention (CDC) and a dynamic speaker and trainer.

Are you taking on the planning of an important project, like finding ways to ensure that your hospital administration values the contributions of your library?  Logic models are great planning tools, and they are also useful for integrating evaluation plans and strategic plans.  How would you like to take a four-hour class on logic models taught by the Chief Evaluation Officer at the CDC, Tom Chapel?

What if you’re a liaison librarian to your university’s biology department and you’re looking for ways to improve collaboration with the faculty? There’s a program called Evaluating and Improving Organizational Collaboration that gives participants the opportunity to increase their capacity to quantitatively and qualitatively examine the development of inter- and intra-organizational partnerships. It’s taught by Rebecca Woodland, Chair of the Department of Educational Policy and Administration at University of Massachusetts Amherst (and recognized as one of the AEA’s most effective presenters).

Maybe you’ve been responsible for a program at your public library training the community in using MedlinePlus for their health information needs. You’ve collected a lot of data showing the success of your programs, and want to make sure your stakeholders take notice of it. How about giving them an opportunity to work with the data themselves? There’s a program called A Participatory Method for Engaging Stakeholders with Evaluation Findings, taught by Adrienne Adams of Michigan State University.

This is only a small sampling of the great workshops at the Summer Evaluation Institute.

For those of you who don’t know much about the American Evaluation Association: The AEA is an international professional association devoted to the application and exploration of program evaluation, personnel evaluation, technology, and many other forms of evaluation. Evaluation involves assessing the strengths and weaknesses of programs, policies, personnel, products, and organizations to improve their effectiveness. AEA has approximately 7,000 members representing all 50 U.S. states and more than 60 other countries.

Cindy and I will be there – we hope to see you there too!


Data Party for Public Librarians

Friday, May 6th, 2016

The Engage for Health project team from left to right: Lydia Collins, Kathy Silks, Susan Jeffery, Cindy Olney

Last week, I threw my first data party. I served descriptive statistics and graphs; my co-hosts brought chocolate.

I first learned about data parties from evaluation consultant Kylie Hutchinson’s presentation It’s A Data Party that she gave at the 2016 American Evaluation Association Conference. Also known as data briefings or sense-making sessions, data parties actively engage stakeholders with evaluation findings.

Guest List

My guests were librarians from a cohort of public libraries that participated in the Engage for Health project, a statewide collaboration led by the NN/LM Middle Atlantic Region (MAR) and the Pennsylvania Library Association (PaLA). The NN/LM MAR is one of PaLA’s partners in PA Forward, a statewide initiative that engages libraries in activities addressing five types of literacy. The project team was composed of Lydia Collins of NN/LM MAR (which also funded the project), Kathy Silks of the PaLA, and Susan Jeffery of the North Pocono Public Library. I joined the team to help them evaluate the project and develop reports to bring visibility to the initiative. Specifically, my charge was to use this project to provide experiential evaluation training to the participating librarians.

Librarians from our 18 cohort libraries participated in all phases of the planning and evaluation process.  Kathy and Susan managed our participant recruitment and communication. Lydia provided training on how to promote and deliver the program, as well as assistance with finding health care partners to team-teach with the librarians. I involved the librarians in every phase of the program planning and evaluation process. We met to create the project logic model, develop the evaluation forms, and establish a standard process for printing, distributing, and returning the forms to the project team. In the end, librarians delivered completed evaluation forms from 77% of their adult participants from Engage for Health training sessions.

What We Evaluated

PA Forward’s objectives include improving health literacy, so the group’s outcome for Engage for Health was to empower people to better manage their health. Specifically, we wanted them to learn strategies that would lead to more effective conversations with their health care providers. Librarians and their health care partners emphasized strategies such as researching health issues using quality online health resources, making a list of medications, and writing down questions to discuss at their appointments. We also wanted them to know how to use two trustworthy online health information sources from the National Library of Medicine: MedlinePlus and NIHSeniorHealth.

Party Activities

Sharing with Appreciative Inquiry. The data party kicked off with Appreciative Inquiry interviews. Participants interviewed each other, sharing their peak experiences and what they valued about those experiences. Everyone then shared their peak experiences in a large group. (See our blog entries here and here for detailed examples of using Appreciative Inquiry.)

Data sense-making: Participants then worked with a fact sheet of graphs and summary statistics compiled from the session evaluation data. As a group, we reviewed our logic model and discussed whether our data showed that we had achieved our anticipated outcomes. The group also drew on both the fact sheet and the stories from the Appreciative Inquiry interviews to identify unanticipated outcomes. Finally, they identified metrics they wished we had collected. What was missing?

Consulting Circles: After a morning of sharing successes, the group got together to help each other with challenges. We had three challenge areas that the group wanted to address: integration of technology into the classes; finding partners from local health organizations; and promotional strategies. No area was a problem for all librarians: some were quite successful in a given area, while others struggled. The consulting groups were a chance to brainstorm effective practices in each area.

Next steps: As with most funded projects, both host organizations hoped that the libraries would continue providing health literacy activities beyond the funding period. To get the group thinking about program continuity, we ran a 1-2-4-All discussion about next steps. Participants first identified the next steps they would take at their libraries, then provided suggestions to NN/LM MAR and PaLA on how to support their continued efforts.

Post Party Activities

For each of the four party activities, a recorder from each group took discussion notes on a worksheet developed for the activity, then turned it in to the project team. We will incorporate their group feedback into the written reports that are currently in progress.

If you are curious about our findings, I can say that, in general, our data support the success of this project. We plan to publish our findings in a number of venues once we have had a chance to synthesize everything. So watch this blog, and I’ll let you know when a report of our findings becomes available.

Meanwhile, if you are interested in reading more about data parties, check out this article in the Journal of Extension.


Last updated on Monday, June 27, 2016

Funded by the National Library of Medicine under Contract No. UG4LM012343 with the University of Washington.