
NEO Shop Talk

The blog of the National Network of Libraries of Medicine Evaluation Office

Archive for the ‘Practical Evaluation’ Category

Evaluation Planning for Proposals: a New NEO Resource

Friday, August 12th, 2016


Have you ever found yourself in this situation?  You’re well along in your proposal writing when you get to the section that says “how will you evaluate your project?”  Do you think:

  1. “Oh #%$*! It’s that section again.”
  2. “Why do they make us do this?”
  3. “Yay! Here’s where I get to describe how I will collect evidence that my project is really working!”

We at the NEO suggest thinking about evaluation from the get-go, so you’ll be prepared when you get to that section.  And we have some great booklets that show how to do that.  But sometimes people aren’t happy when we say “here are some booklets to read to get started,” even though they are awesome booklets.

So the NEO has made a new web page to make it easier to incorporate evaluation into the project planning process and end up with an evaluation plan that develops naturally.


We group the process into 4 steps: 1) Do a Community Assessment; 2) Make a Logic Model; 3) Develop Measurable Objectives for Your Outcomes; and 4) Create an Evaluation Plan.   Rather than explain what everything is and how to use it (for that you can read the booklets), this page links to the worksheets and samples (and some how-to sections) from the booklets so that you can jump right into planning.  And you can skip the things you don’t need or that you’ve already done.

In addition, we have included links to posts in this blog that show examples of the areas covered so people can put them in context.

We hope this helps with your entire project planning and proposal writing experience, as well as provides support for that pesky evaluation section of the proposal.

Please let Cindy (olneyc@uw.edu) or me (kjvargas@uw.edu) know how it works for you, and feel free to make suggestions.  Cheers!


Developing Program Outcomes using the Kirkpatrick Model – with Vampires

Thursday, July 28th, 2016

Are you in the first stages of planning an outreach program, but not sure how to get started?  The NEO generally recommends starting your program planning by thinking about your outcomes.  Outcomes are the results that you would like to see happen because of your planned activities.  They are the “why we do it” of your program.  However, sometimes it can be difficult to develop your outcome statements in ways that lead to measurable evaluation plans. One way to do it is to use the Kirkpatrick Model, a framework for evaluating training that Donald Kirkpatrick developed about 60 years ago.

Let’s start with an example. Let’s say that your needs assessment of the local vampire community found that they have a lot of health concerns: some preventative (how to avoid garlic, identifying holy water); some about the latest research (blood-borne diseases, wooden stake survival); and some about mental health (depression from having trouble keeping human friends). Further discussion revealed that while they have internet access in their homes, they don’t get much help finding health information because most of their public libraries are closed when the vampires are out and about.

So you’ve decided to do some training on both MedlinePlus and PubMed (in the evenings), and you want to come up with outcomes for your training program.

The Kirkpatrick Model breaks training outcomes down into four levels, each building on the previous one:

[Image: the Kirkpatrick Model’s four levels shown as a pyramid]

Level 1: Reaction – This level of outcome is about satisfaction. People find your training satisfying, engaging, and relevant. While satisfaction is not a very strong impact on its own, Kirkpatrick believed it is essential for motivating people to learn.

Level 2: Learning – This outcome is that by the end of your training, people have learned something, whether knowledge, skills, attitude, or confidence.

Level 3: Behavior – This outcome is related to the degree to which people take what they learned in your training and apply it in their jobs or their lives.

Level 4: Results – This level of outcome is “the greater good” – changes in the community or organization that occur as a result of changes in individuals’ knowledge or behavior.

Back to our training example.  Here are some outcome statements that are based on the Kirkpatrick Model.  I am listing them here as Short-term, Intermediate and Long-term, like you might find in a Logic Model (for more on logic models, see Logic Model blog entries).

Short-term Outcomes:

  • Sunnydale vampires found their MedlinePlus/PubMed classes to be engaging and relevant to their “lives.” (Reaction)
  • Sunnydale vampires demonstrate that they learned how to find needed resources in PubMed and MedlinePlus. (Learning)

Intermediate Outcomes:

  • Sunnydale vampires use MedlinePlus and PubMed to research current and new health issues for themselves and their brood. (Behavior)

Long-term Outcomes

  • Healthier vampires form stronger bonds with their human community and there is less friction between the two groups. (Results)


Once you’ve stated your outcomes, you can use them in a number of ways. You can think through your training activities to ensure that they are working towards those outcomes. And you can assign indicators, target criteria, and time frames to each outcome to come up with measurable objectives for your evaluation plan.
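If it helps to make that last step concrete, here is a minimal sketch in Python of how one of the Learning outcomes above might pick up an indicator, a target criterion, and a time frame to become a measurable objective. The indicator, target, and time frame below are invented for illustration, not taken from a real NEO plan:

```python
# Hypothetical sketch: turning an outcome statement into a measurable
# objective by attaching an indicator, a target, and a time frame.
# All specifics below are invented for illustration.

objective = {
    "outcome": ("Sunnydale vampires demonstrate that they learned how to "
                "find needed resources in PubMed and MedlinePlus."),
    "indicator": "a hands-on searching exercise",                  # what we observe
    "target": "80% of attendees complete all five search tasks",   # what counts as success
    "timeframe": "by the end of each evening class",               # when we measure
}

# Assemble the pieces into a single measurable objective statement.
print(f"{objective['target']} in {objective['indicator']}, "
      f"{objective['timeframe']}.")
```

Swap in your own indicator, target, and time frame, and you have the raw material for the evaluation plan section of a proposal.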

Happy Hunting Outcomes!


Rubber Duck Evaluation Planning

Friday, July 15th, 2016


Programmers have a process for solving coding problems called “Rubber Duck Debugging.” It emerged from the realization that when a programmer explained a coding problem to a non-programmer, the solution would suddenly come to them. Then they realized they could get the same result by explaining the problem to a rubber duck (or some other inanimate object), so they wouldn’t have to bother anyone.  What they do is explain each line of code to the rubber duck until they hit on the solution to their problem.

How does this apply to evaluation planning?  Cindy and I kind of did this yesterday (for full disclosure, I will admit that I was the rubber duck).  We were walking through a complicated timeline for an evaluation process. It had a lot of steps. It was easy to leave one out. Some of them overlapped. We really had to explain to each other how it was going to happen.

Rubber Duck Debugging can be employed at almost any stage of the evaluation planning process. Here are some examples:

Logic Models

When creating a logic model, you usually start from the right side (the outcomes you want to see), then work your way left to the activities that will bring about those outcomes, then further left to the things you need to have in place to do the activities (here’s a sample logic model from the NEO’s Booklet 2, Planning Outcomes-Based Outreach Projects).  Once you’ve got your first draft of the logic model, get your rubber duck and carefully describe your logic model to it from left to right, saying “If we have these things in place, then we will be able to do these activities. If these activities are done correctly, they will lead to these results we want to see.  If those things happen, over time it is logical that these other long-term outcomes may happen.”  Explain thoroughly so the duck understands how it all works, and you know you haven’t missed anything.
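If you keep your draft logic model in electronic form, you can even script the duck narration. Here is a minimal, hypothetical sketch in Python; the model contents are invented, and real logic models usually also include outputs and short-, intermediate-, and long-term outcomes, but three columns are enough to show the left-to-right “if-then” reading:

```python
# A deliberately simplified, hypothetical logic model with three columns.
# Walking it left to right produces the "if-then" story you read to the duck.

logic_model = {
    "inputs":     ["evening access to the training room", "laptops", "an instructor"],
    "activities": ["evening MedlinePlus classes", "evening PubMed classes"],
    "outcomes":   ["attendees find health information on their own"],
}

def narrate(model: dict) -> str:
    """Turn the columns into the left-to-right if-then narration."""
    return (
        f"If we have {', '.join(model['inputs'])} in place, "
        f"then we can do these activities: {', '.join(model['activities'])}. "
        f"If the activities are done correctly, they should lead to: "
        f"{', '.join(model['outcomes'])}."
    )

print(narrate(logic_model))  # read this aloud; pause wherever the duck looks skeptical
```

If any if-then sentence sounds like a leap when you read it aloud, that is the gap the duck just found.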

Indicators

Process Indicators: In the logic model section, you explained to your duck, “If these activities are done correctly, they will lead to these results.”  What does “correctly” look like? Explain to your duck how things need to be done or they won’t lead to the results you want to see. Be as specific as you can.  Then think about what you can measure to see how well you’re doing the activities.  Explain those things to the duck so you can be sure that you are measuring the things you want to see happen.

Outcome Indicators: Looking at your logic model, you know what results you’d like to see.  Think about what would indicate that those results had happened. Then think about how and when you would measure those indicators. Talk it out with the duck. In some cases you may not have the time, money, or staff needed to measure an indicator you would really like to measure.  In other cases, the data that you can easily collect with your money, staff, and time will not be acceptable to your funders or stakeholders.  You will need indicators that you can measure successfully and that are credible to your stakeholders. The rubber duck’s masterful silence will help you work this out.

Data collection

I think this is where the duck will really come in handy.  To collect the data described above, you will need data collection tools, like questionnaires or forms. Once you’ve put together the tools, explain to the duck what data each question is intended to gather. When you explain it out loud, you might catch some basic mistakes, like asking questions you don’t really need the answers to, or asking a question that is really two questions in one.
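That last mistake, the two-questions-in-one (“double-barreled”) item, is simple enough that you can even screen for it mechanically before the duck session. Here is a rough, hypothetical sketch in Python; a keyword heuristic like this produces false positives (plenty of fine questions contain “and”), so treat anything it flags as a conversation starter for the duck, not a verdict:

```python
# Rough heuristic: "and"/"or" inside a survey question often (not always)
# signals a double-barreled item that asks two things at once.

DOUBLE_BARREL_HINTS = (" and ", " or ")

def flag_double_barreled(questions: list[str]) -> list[str]:
    """Return the questions worth reading aloud to the duck."""
    return [q for q in questions
            if any(hint in q.lower() for hint in DOUBLE_BARREL_HINTS)]

questions = [
    "Was the class relevant to your work?",
    "Was the instructor clear and well organized?",  # really two questions
]

for q in flag_double_barreled(questions):
    print("Check with the duck:", q)
```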

Then you need a system for collecting the data using your tools.  If it’s a big project, a number of people may be collecting the data and you will have to write instructions to make sure they are all doing it the same way.  Read each instruction to the duck and explain why it’s important to the success of the project. Did the duck’s ominous silence suggest areas where someone might misunderstand the instructions?

I hope this is helpful to you in your evaluation planning, and maybe other areas of your life.  Why use a rubber duck instead of something else? Well, they are awfully cute. And they come with a great song that you can sing when you’ve completed your plan: https://www.youtube.com/watch?v=Mh85R-S-dh8


Designing Surveys: Does The Order of Response Options Matter?

Friday, July 8th, 2016

I recently reviewed a questionnaire for a colleague, who used one of the most common question formats around: the Likert question. Here is an example:

[Image: a sample Likert question about Justin Bieber, with response options ordered from most positive on the left to most negative on the right]

This is not a question from my colleague’s survey.  (I thought I should point that out in case you were wondering about my professional network.)  However, her response options were laid out similarly, with the most positive ones to the left.  So I shared with her the best practice I learned from my own training in survey design: Reverse the order of the response options so the most negative ones are at the left.

[Image: the same question with the response options reversed, so the most negative ones are on the left]

This “negative first” response order tends to be accepted as a best practice because it is thought to reduce positive response bias (that is, people overstating how much they agree with a given statement).  Because I find myself repeating this advice often, I thought the topic of “response option order” would make a good blog post. After all, who doesn’t like simple little rules to follow?  To write a credible blog post, I decided to track down the empirical evidence that underpins this recommended practice.

And I learned something new: that evidence is pretty flimsy.

This article by Jeff Sauro, from Measuring U, provides a nice summary and references on the evidence for our “left-side bias.”  “Left-side bias” refers to the tendency for survey-takers to choose the left-most choices in a horizontal list of response options. No one really knows why we do this.  Some speculate it’s because we read from left to right and the left options are the first ones we see.  I suppose we’re lazy: we stop reading mid-page and make a choice, picking one of the few options we laid eyes on. This speculation comes from findings that the left-side bias is more pronounced when the question is vague or confusing, or when the survey-takers flat-out don’t care about the question topic.

Here’s how left-side bias is studied. Let’s say you really care what people think about Justin Bieber. (I know it’s a stretch, but work with me here.)  Let’s say you asked 50 people their opinion of the pop star using the sample question from above. You randomly assign the first version to 25 of the respondents (group 1) and the second version to the other 25 respondents (group 2).  The research findings would predict that group 1 will seem to have a more favorable opinion of Justin Bieber, purely because the positive options are on the left.
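To make that design concrete, here is a small simulation sketch in Python. Everything in it is invented for illustration: 50 simulated respondents answer on a 1-5 scale (5 = Strongly Agree), most from a mildly positive “true” opinion, while a few satisficers simply grab the left-most option in whichever version they received:

```python
import random

random.seed(42)  # reproducible illustration; all numbers are invented

def respond(positive_on_left: bool) -> int:
    """One simulated answer on a 1-5 agreement scale (5 = Strongly Agree)."""
    if random.random() < 0.15:                # a satisficer grabs the left-most option
        return 5 if positive_on_left else 1
    return random.choice([2, 3, 3, 4, 4, 5])  # everyone else: mildly positive opinion

group1 = [respond(positive_on_left=True) for _ in range(25)]   # version 1
group2 = [respond(positive_on_left=False) for _ in range(25)]  # version 2

print(f"Group 1 (positive options on left): mean = {sum(group1) / 25:.2f}")
print(f"Group 2 (negative options on left): mean = {sum(group2) / 25:.2f}")
```

Run it a few times with different seeds and group 1’s mean tends to sit higher, even though both groups share the same underlying opinions; that gap is the order effect researchers measure.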

Sauro’s references for his article do provide evidence of “left-side bias.” However, after reviewing his references, I drew the same conclusion that he did: the effect of response option order is small, to the point of being insignificant. I became more convinced that this was the case when I looked for guidance in the work of Donald Dillman, who has either conducted or synthesized studies on almost every imaginable characteristic of surveys. Yet I could not find any Dillman source that addressed how to order response options for Likert questions.  In his examples, Dillman follows the “negative option to the left” principle, but I couldn’t find his explicit recommendation for the format. Response option order does not seem to be on Dillman’s radar.

So, how does all this information change my own practice going forward?


For new surveys, I’ll still format Likert-type survey questions with the most negative response options to the left.  You may be saying to yourself, “But if the negative options are on the left, doesn’t that introduce negative bias?” Maybe, but I would argue that using the “negative to the left” format will give me the most conservative estimate of my respondents’ level of endorsement on a given topic.

However, if I’m helping to modify an existing survey, particularly one that has been used several times, I won’t suggest changing the order of the response options. If people are used to seeing your questions with positive responses to the left, keep doing that.  You’ll introduce a different and potentially worse type of error by switching the order. More importantly, if you are planning to compare findings from a survey given at two points in time, you want to keep the order of response options consistent.  That way, you’ll have a comparable amount of error caused by bias at time 1 and time 2.

Finally, I would pay much closer attention to the survey question itself.  Extreme language seems to be a stronger source of bias than order of the response options. Edit out those extreme words like “very,”  “extremely,” “best” and “worst.”   For example, I would rewrite our sample question to “Justin Bieber is a good singer.”   Dillman would suggest neutralizing the question further with wording like this: “Do you agree or disagree that Justin Bieber is a good singer?”

Of course, this is one of many nuanced considerations you have to make when writing survey questions.  The most comprehensive resource I know for survey design is Internet, Phone, Mail and Mixed-Mode Surveys: The Tailored Design Method (4th ed.), by Dillman, Smyth, and Christian (Hoboken, NJ: John Wiley and Sons, Inc, 2014).

 

Party Guides for Data Dives

Friday, June 24th, 2016


Children who grow vegetables are more likely to eat them. Likewise, stakeholders who have a hand in interpreting evaluation data are more likely to use the findings.

Traditionally, the data analysis phase of evaluation has been relegated to technical experts with research skills. However, as the field sharpens its focus on evaluation use, more evaluators are working on developing methods to engage groups of stakeholders in data analysis. While evaluation use is one objective of this approach, evaluators also are compelled to use participatory data analysis because it

  • Provides context for findings and interpretations that include multiple viewpoints
  • Generates stakeholder interest in the evaluation process
  • Determines which findings are most important to those impacted by a program

Last week, Karen and I attended Participatory Sense-making for Evaluators: Data Parties and Dabbling in Data, a webinar offered by Kylie Hutchinson of Community Solutions and Corey Newhouse of Public Profit. They shared their strategies for getting groups of stakeholders to roll up their sleeves and dive elbow-deep in data. Such events are referred to as data parties, sense-making sessions, results briefings, and data-driven reviews. Hutchinson also shared her data party resource page.

Design Principles in Evaluation Design

Friday, June 17th, 2016


“Sometimes… it seems to me that… all the works of the human brain and hand are either design itself or a branch of that art.” – Michelangelo

Michelangelo is not the only one who thinks design is important in all human activities.  In his book A Whole New Mind, Dan Pink considers design to be one of the six senses that we need to develop to thrive in this world. As Mauro Porcini, PepsiCo’s Chief Design Officer, points out: “There is brand design. There is industrial design. There is interior design. There is UX and experience design. And there is innovation in strategy.”¹

There is also evaluation design. Whether we’re talking about designing evaluation for an entire project or just one section, like the needs assessment or presenting evaluation results, evaluators are still actively involved in design.

Most of us don’t think of ourselves as designers, however.  Juice Analytics has a clever tool called “Design Tips for Non-Designers” to teach basic design skills and concepts.  Some of these are very specific design tips for charts and slide decks (which, by the way, are very important and useful, like “avoiding chart junk” and “whitespace matters”).  But some of the other tips can be jumping-off points for thinking about bigger-picture design skills, such as:

  • Using Hick’s Law and Occam’s Razor to explain the importance of simplicity
  • Learning how to keep your audience in mind by thinking of how to persuade them, balancing Aristotle’s suggested methods of ethical appeal (ethos), emotional appeal (pathos), and logical appeal (logos)
  • Learning how Gestalt theory applies to the mind’s ability to acquire and maintain meaningful perceptions in an apparently chaotic world
  • Considering the psychology of what motivates users to take action

The September 2015 issue of Harvard Business Review highlighted design thinking as corporate strategy in their spotlighted articles (which are freely available online, as long as you don’t open more than 4 a month).  Here are some cool things you can read about in these articles:

  • Using design thinking changed the way PepsiCo designed products to fit their users’ needs (my favorite line is how they used to design products for women by taking currently existing products and then applying the strategy of “shrink it or pink it.”)
  • Design is about deeply understanding people.
  • Principles of design can be applied to the way people work: empathy with users, a discipline of prototyping and tolerance for failure.
  • Create models to explain complex problems, and then use prototypes to explore potential solutions.
  • If it is likely that a new program or strategy may not be readily accepted, use design principles to plan the program implementation.

Some people are seen as being born with design skills.  But it’s clear that a lot can be learned with study and practice.  Even Michelangelo said, “If people knew how hard I worked to get my mastery, it wouldn’t seem so wonderful after all.”


¹ James De Vries. “PepsiCo’s Chief Design Officer on Creating an Organization Where Design Can Thrive.” Harvard Business Review. 11 Aug 2015. Web. 17 June 2016.


The Accidental Objective

Friday, June 10th, 2016

A couple of months ago, I accidentally set an objective for the NEO Shop Talk blog.

First, let me give you my definition of an objective. I think of objectives as observable outcomes. An outcome defines the effect you want to make through a project or initiative. (You can read more about outcomes here.)  Then, you add a measure (something observable), a target (what amount of change constitutes success), and a timeframe for achieving that target.  For example, say your doctor tells you to lower your blood pressure.  She likely will suggest you make some lifestyle changes, then return in six months (timeframe) so she can take a BP reading (measure) to see if it’s below 120/80, the commonly accepted target for “normal” blood pressure.
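Because that anatomy (an outcome plus a measure, a target, and a timeframe) is so regular, it can help to see it as a structure. Here is a minimal sketch in Python using the blood-pressure example from the paragraph above; the class and field names are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Objective:
    """An outcome made observable: a measure, a target, and a timeframe."""
    outcome: str    # the effect you want to make
    measure: str    # something observable
    target: str     # what amount of change constitutes success
    timeframe: str  # when the target should be reached

    def statement(self) -> str:
        return (f"{self.outcome}: {self.measure} below "
                f"{self.target} within {self.timeframe}.")

bp = Objective(
    outcome="Lower blood pressure",
    measure="BP reading at the follow-up visit",
    target="120/80",
    timeframe="six months",
)
print(bp.statement())
```

Leave any of the last three fields blank and you have an outcome, not an objective; that distinction is exactly what the rest of this post turns on.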

In February 2015, Karen and I had set an outcome to increase our blog readership.  We monitored our site statistics, which we considered a measure of readership. However, we never wrote an objective or set a target. Like most of the world, we only write objectives when a VIP (such as an administrator or funder) insists. Otherwise, we are as uncommitted as the next person.

But then this happened.  I was preparing slides for a webinar on data visualization design principles and wanted to show how a benchmark line improves the meaning of data displayed in a line graph. A benchmark line basically represents your target and allows readers to compare actual performance against that target.  The best “real” data I had for creating a nice line graph was our blog’s site statistics.  But I needed a target to create the benchmark line.

[Image: line graph of NEO Blog monthly site visits, 2014-2016, starting near 100 and rising steadily toward a straight benchmark line drawn across the graph at the 500-visit target; the visits cross the target once in the period shown]

So I made one up: 500 views per month by March 1. I did check our historical site statistics to see what looked reasonable. However, I mostly chose 500 because it was a nice, simple number to use during a presentation. I didn’t even consult Karen. She learned about it when she reviewed my webinar slides.

After all, it was a “pretend” objective.

But a funny thing happened.  As luck would have it, the NEO had nine presentations scheduled for the month of February, the month after I prepared the graph. Our new target motivated us to promote our blog in every webinar. By the end of February, we exceeded our goal, with 892 site visits.

It was game on! We started monitoring our site statistics the way cats watch a gecko. Whenever we feared we might not squeak across that monthly target line, we began strategizing about how to bump up readership. At first, we focused on promotion. We worked on building our following on Twitter, where we promoted our blog posts each week. Karen created a Facebook page so we had another social media outlet to promote our blog.

Eventually, though, we shifted our focus toward strategies for creating better content. Here were some of our ideas:

  • Show our readers how to apply evaluation in their work settings. Most of our readers are librarians, so we make a point of using examples in our articles that demonstrate how evaluation is used in library programs.
  • Demonstrate how the NEO evaluates its own program. We do posts about our own evaluation activities so that we can model the techniques we write and teach about.
  • Allow our readers to learn about assessment from each other. We work for a national network and our readers like to read about and learn from their colleagues. We now seek interviews with readers who have evaluation success stories to share.
  • Supplement our trainings. We create blog posts to supplement our training sessions. We then list relevant blog posts (with links) in our workshop and webinar resource lists.
  • Improve our consulting services. We offer evaluation consultations to members of the National Network of Libraries of Medicine. We now send them URLs to blog posts that we think will help them with particular projects.
  • Introduce new evaluation trends and tools: Both Karen and I participate in the American Evaluation Association, which has many creative practitioners who are always introducing new approaches to evaluation. We use NEO Shop Talk to pass along innovations and methods to our readers.

 

In the end, this accidental objective has improved our service.  It nudged us toward thinking about how our blog contributes to the mission of the NEO and the NN/LM.

So I challenge you to set some secret objectives, telling only those who are helping you achieve that target.  See if it changes how you work.

If it does, email us. We’ll write about you in our blog.

By the way, if you want to learn how to add a benchmark line to your graphs, check out this post from Evergreen Data.
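And if you build your graphs in code rather than a spreadsheet tool, a benchmark line is a one-liner. Here is a minimal matplotlib sketch; the monthly counts are invented, and only the 500-visit target comes from this post:

```python
# Minimal sketch: a line graph with a benchmark (target) line at 500.
# The visit counts below are invented for illustration.
import matplotlib.pyplot as plt

months = ["Oct", "Nov", "Dec", "Jan", "Feb", "Mar"]
visits = [310, 340, 295, 420, 892, 540]  # hypothetical monthly site visits

plt.plot(months, visits, marker="o", label="Monthly site visits")
plt.axhline(y=500, linestyle="--", color="gray", label="Target: 500")  # the benchmark line
plt.title("Blog site visits vs. target")
plt.ylabel("Visits")
plt.legend()
plt.show()
```

The dashed line is what turns a plain trend line into a progress report: every point above it is a month you beat your objective.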

What is a Need?

Friday, June 3rd, 2016


Requests for Proposals often have a “Needs” section that usually says something like “Describe the need for your project.”  In my last position, I spent more than a decade reading hundreds of funding proposals, and it often seemed like applicants didn’t quite understand what that question meant.

As you probably know from our posts (in particular Cindy’s most recent post Steering by Outcomes: Begin with the End in Mind), we think it’s very important to plan the outcomes you want to see first. Outcomes grow out of the changes you want to see happen, which you discover when doing a community or needs assessment.  So, for example, say your needs assessment finds that older adults in your hospital have worse health outcomes than those in other hospitals, and that they often do not follow their doctors’ advice. One outcome you might put in your project plan would be “increased compliance with doctors’ advice,” along with a longer-term outcome like “older adults have better health outcomes.”

This is often where the needs description ends in proposals I have read.  People clearly demonstrate with data that there is a need for the outcomes they want to see (in this case better compliance and better health outcomes).  But as a reviewer, how do I know that there is a need for their solution to the problem?  When I think of “need for the program,” I also want to know what is lacking (or needed) in the community that their program will provide that will lead to those outcomes.

The only way for you, the project planner, to uncover that missing thing is by asking more questions of your community members and stakeholders.  In Liberating Structures’ Nine Whys exercise, you are encouraged to ask the question “Why is that important to you?” nine times to help you determine the fundamental purpose of what you’re doing.  This can help you clarify your outcomes.  I suggest that once you have your outcomes, you then ask your community members the question “Why isn’t this already happening?” nine times to find out the core reasons the outcomes aren’t already being met. This is what will provide the “need” for your project plan, and help you design the right activities.

Here’s an example.  Way back in 2004 I did a site visit with a group called Healthcare for the Homeless – Houston, who were recipients of NLM funding for computer and internet access.  While I was there, the director told me a story about a needs assessment they did to find out what the barriers were for homeless people to get healthcare services.  In their needs assessment they worked together with other organizations that serve homeless people to find out how their services could be used better as well.  In their interviews with homeless people, what they found out was that people wanted to use the services but could not get to them – most of the services were spread out around the city, and without transportation it was a big deal to just get to one location, not to mention all of them. This realization spawned an innovative program called Project Access, which is a free bus service for Houston’s homeless residents that travels around to 21 agencies that provide essential services such as health care, meals, shelter, and social services.

What made this project especially impressive to me was that they did an in-depth study to find out why the need for better healthcare wasn’t already being met. If Healthcare for the Homeless had only used data to determine that homeless Houstonians needed better healthcare, what would have driven the choice of activities to solve that problem?  In many projects and proposals that I’ve seen, the activities chosen were potentially good ideas, but not informed by discussions with members of the community, or in some cases had already been decided before the needs assessment was done. My definition of “need” is the thing that is missing that, once you provide it, will logically bring about the outcomes that you want to see.

When you’re writing a proposal, or planning a project, it always comes back to the story you want to tell.  You want to tell a logical story that connects all of the dots. Whether you’re talking to a funder, an administrator, a city manager, or whoever decides whether or not you get to do your project, the story you want to tell is: 1) there is a serious problem you would like to address with a project you have designed; 2) there are several specific outcomes associated with that problem that your project will accomplish; 3) you’ve learned there is a thing that is missing that is preventing those outcomes from happening; and 4) your project is going to provide that thing.  So from the program planning perspective, you need to go out and find out what those needs are before planning your activities.

Or, to paraphrase Cindy, paraphrasing Yogi Berra, “When you come to a fork in the road, check your outcomes, then figure out why they aren’t being met, and then proceed.”

To refresh your knowledge of community assessment, take a look at NEO’s booklet Getting Started with Community-Based Outreach.

 

Worksheets and Checklists to Help with Evaluation

Friday, May 27th, 2016

You may already know that the NEO offers some booklets that work through basic evaluation processes, called Planning and Evaluating Health Information Outreach Projects. You can view them as PDFs or in HTML, or you can order a print version (there are limited copies left, so no promises). But I think the under-marketed gems in these booklets are the checklists and worksheets at the end of each one. The ones in the HTML version of the booklets are Word documents that you can download, modify if you want, and use.

[Image: covers of the three evaluation booklets]

For example, let’s say you’d like to create a survey to find out if you’ve reached your project’s outcomes. A process for this is explained in Booklet 3: Collecting and Analyzing Evaluation Data. At the end of this booklet is a blank worksheet called “Planning a Survey” that you can download and use to work through writing your survey questions.  Along with that, there’s also an example of a filled-out worksheet based on a realistic scenario that demonstrates how the worksheet can be used.

The importance of checklists in improving outcomes is underscored in Dr. Atul Gawande’s book The Checklist Manifesto.  While he’s mostly talking about medical scenarios, the same can be true with evaluation.  Let’s face it, even if you feel fairly confident in evaluation, there are a lot of little things to remember, especially if you don’t do it all the time.

In Booklet 1: Getting Started with Community-Based Outreach, the checklist items are sorted into the three categories “Step 1 – Get Organized,” “Step 2 – Gather Information,” and “Step 3 – Assemble, Interpret and Act.”  These are the same categories as the chapters in the book.  So for example, one of the items under “Get Organized” is “Gather primary data to complete the picture of your target community.”  If you’d like a reminder or some suggestions of how to go about this, go to the chapter heading “Step 2 – Gather Information” where you can find a list of ways to gather primary data.  These checklists can also be downloaded as a Word document and adapted to your own needs.

I hope this isn’t too meta, but while you’re using evaluation to help you reach your project’s outcomes, you also have a personal outcome of doing a good job with your evaluation plan!  So when you head into your evaluation projects, don’t forget your checklists to make sure all of your evaluation outcomes are successful.

 

Steering by Outcomes: Begin with the End in Mind

Friday, May 20th, 2016

If you don’t know where you’re going, you might not get there – Yogi Berra


Next week, Karen and I will be facilitating an online version of one of NEO’s oldest workshops, Planning Outcomes-Based Outreach Projects, for the Health Science Information Section of the North Dakota Library Association. The main tool we teach in this workshop is the program logic model, but our key takeaway message is this: Figure out where you’re going before you start driving.

If you drive to a new place, your navigator app will insist on a destination, right?  Well, I’m like an evaluation consulting app: those who work with me on evaluation planning have to define what they hope to accomplish before we start designing anything.

In fact, I get positively antsy until we nail down the desired end results.  If I’m helping a colleague develop a needs assessment, I want to know how he or she plans to use the data.  To design a program evaluation process, I have to know how the project team defines success. When consulting with others on survey design, I help them determine how each question will provide them with actionable information.

My obsession with outcomes crept into my personal life years ago. Before I sign up for continuing education or personal development workshops, I consider how they will change my life.  When my husband and I plan vacations, we talk about what we hope to gain on our trip. Do we want to connect with friends? See a new landscape? Catch up on some excellent Chicago comedy? Outcomes-thinking may be an occupational hazard for evaluation professionals. Case in point: Have you seen Karen Vargas’s birthday party logic model?

Top 5 Reasons to Love Outcomes

So how did I become an outcomes geek? Here are the top five reasons:

  • Outcomes are motivating: Activities describe work and who among us needs more work? Outcomes, on the other hand, are visionary. They allow you to imagine and bask in a job well done. Group discussions about outcomes are almost always more uplifting and enthusiastic than discussions about project implementation. Plus, you will attract more key supporters by talking about the positive benefits you hope to attain.
  • Outcomes help you focus: Once you have determined what success looks like, you’ll think more carefully about how to accomplish it.
  • Outcomes provide a reality check: Once you know what you want to accomplish, you’ll think more critically about your project plans. If the logical connection doesn’t hold, you can course-correct before you even start.
  • Planned outcomes set the final scene for your project story: Ultimately, most of us want or have to report our efforts to stakeholders, who, by definition, have a vested interest in our program. Project stories, like fairy tales, unfold in three acts: (Act 1) This is where we started; (Act 2) This is what we did; (Act 3) This is what happened in the end.  Program teams notoriously focus on collecting evaluation data to tell Act 2, while stakeholders are more interested in Act 3.  However, if you articulate your outcomes clearly from the start, you are more likely to collect good data to produce a compelling final act.
  • Identifying expected outcomes helps you notice the unexpected ones. Once you start monitoring for planned outcomes, you’ll pick up on the unplanned ones as well. In my experience, most unplanned outcomes are sweet surprises: results that no one on the team ever imagined in the planning phase.  However, you also may catch the not-so-great outcomes early and address them before any real damage is done.

How to Steer by Outcomes 

When I work with individuals or small project teams, here are the questions we address when trying to identify program outcomes:

  • What will project success look like?
  • What will you observe that will convince you that this project was worth your effort?
  • What story do you want to tell at the end of this project?
  • Who needs to hear your story and what will they want to hear?

These questions help small project teams identify outcomes and figure out how to measure them. If you want a larger group to participate in your outcomes-planning discussion, consider adapting the Nine Whys exercise from Liberating Structures.

Once the outcomes are identified, you’re ready to check the logical connection between your program strategies and your planned results. The logic model is a great tool for this stage of planning. The NEO’s booklet Planning Outcomes-Based Outreach Projects provides detailed guidance on how to create project logic models.

Yogi Berra famously said “When you come to a fork in the road, take it.”  I would paraphrase that to say “When you come to a fork in the road, check your outcomes and proceed.”
