
NEO Shop Talk

The blog of the National Network of Libraries of Medicine Evaluation Office

From Logic Model to Proposal Evaluation – Part 1

Vocabulary. Jargon. Semantics. Sometimes I think they’ll be the death of us all. Seriously, it’s really hard to have a conversation about anything when people use the same words in the same context to mean completely different things.

Take Goals and Objectives.  I can’t tell you how many different ways this has been taught to me.  But in general all the explanations agree that a goal is a big concept, and an objective is more specific.

Things get complicated when we use words like Activities, Outcomes, and Measurable Objectives to teach you about logic models as a way of planning a project. Which of those words correspond to Goals and Objectives when you write a proposal for the project you just planned?

[Image: Bela Lugosi as Dracula]

I’m going to walk through an example of how we can connect the dots between the logic model that we use to plan projects and the terminology used in proposal writing. There isn’t necessarily going to be a one-to-one relationship, and it might depend on the number of goals you have.

As has been stated in previous posts, we’ve never actually done any work with the fictional community of Sunnydale, a place where there was, in the past, a large number of vampires and other assorted demons. But in order to work through this problem, let’s go back to the earlier post where we used the Kirkpatrick Model to determine the outcomes we would like to see for any remaining vampires who want to live long, healthy lives and get along with their human neighbors. For this post, I’m going to pretend I’m writing a proposal to do a training project for them based on those outcomes, and then show how those outcomes lead to an evaluation plan.

Goals

The goal can be your long-term outcome, or it can be somewhat separate from the outcomes. Either way, your goal needs to be logically connected to the work you’re planning to do. For example, if you’re going to train vampires to use MedlinePlus, goals like “making the world a better place” or “achieving world peace” are not as connected to your project as something like “improving the health and well-being of vampires” or “improving the health literacy of vampires so they can make good decisions about their health.”

Here is a logic model showing how this could be laid out, using the outcomes established in the earlier post:

[Figure: Dusk to Dawn logic model]

Keep in mind that the purpose of a proposal is to persuade someone to fund your project.  So for the sake of my proposal, I’m going to combine the long-term outcomes into one goal statement.

The goal of this project is to improve the health and well-being of vampires in the Sunnydale community.

Objectives

The objectives can be taken from the logic model Activities column. But keep something in mind: logic models are small (one page at most), so you can’t use a lot of words to describe activities. Objectives, on the other hand, are activities with some detail filled in. So in the logic model the activity might be “Evening hands-on training on MedlinePlus and PubMed,” while the objective I put in my proposal might be “Objective 1: We will teach 4 hands-on evening classes on the use of MedlinePlus and PubMed to improve Sunnydale vampires’ ability to find consumer health information and up-to-date research.”

Objectives in Context

Here’s a sample of my Executive Summary of the project, showing goals, objectives, and outcomes in a narrative format:

Executive Summary: The goal of our From Dusk to Dawn project is to improve the health and well-being of vampires in the Sunnydale community. In order to reach this goal, we will 1) teach 4 hands-on evening classes on the use of MedlinePlus and PubMed to improve Sunnydale vampires’ ability to find consumer health information and up-to-date research about health conditions; and 2) open a 12-hour “Dusk-to-Dawn” health reference hotline to help the vampires with their reference questions. With these activities, we hope to see a) increased ability of the Internet-using Sunnydale vampires to research needed health information; b) that those vampires will use their increased skills to research health information for their brood; and c) that these vampires will use this information to make good health decisions leading to improved health, and as a result form better relationships with the human community of Sunnydale.

Please note that in this executive summary, I do not use the word “objectives” to identify the phrases numbered 1 and 2, and I also do not use the word “outcomes” to identify the phrases lettered a, b, and c (because I like the way it reads better without them). However, in the detailed narrative of my proposal I would use those terms to go with those exact phrases.

So then, what are Measurable Objectives?

The key to the evaluation plan is creating another kind of objective: what we call a measurable outcome objective. When you create your evaluation plan, along with showing how you plan to measure that you did what you said you would do (process assessment), you will also want to plan how to collect data showing the degree to which you have reached your outcomes (outcome assessment).  These statements are what we call measurable outcome objectives.

Using the “Book 2 Worksheet: Outcome Objectives” found on our Evaluation Resources web page, you start with your outcomes, add an indicator, a target, and a time frame, and write each one as a single sentence to get a measurable objective. Here’s an example of what that would look like using the first outcome listed in the Executive Summary:

[Figure: Dusk to Dawn measurable objective example]
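If it helps to see those pieces another way, here is a minimal Python sketch of how an outcome, an indicator, a target, and a time frame combine into a single measurable-objective sentence. The 75% target and the exact wording are hypothetical, not values from the worksheet:

```python
# Sketch: assembling a measurable outcome objective from its worksheet pieces.
# All values below are hypothetical examples, not the NEO's actual objective.

def measurable_objective(who, indicator, target, time_frame):
    """Combine the worksheet pieces into one measurable-objective sentence."""
    return f"{time_frame}, {target} of {who} will {indicator}."

print(measurable_objective(
    who="Sunnydale vampires who attend a class",
    indicator="demonstrate that they can find health information in MedlinePlus or PubMed",
    target="75%",
    time_frame="By the end of each training session",
))
```

The point is simply that every measurable objective names who changes, what observable indicator shows the change, how much change counts as success, and by when.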

We’ve gotten through some terminology and some steps for going from your logic model to measuring your outcomes.

Stay tuned for next week when we turn all of this into an Evaluation Plan!

Dare I say it? Same bat time, same bat channel…

 

 


Shop Talk SWOT Hack for Proposal Writers

[Image: SWOT (strengths, weaknesses, opportunities, and threats) analysis diagram on a blackboard with white chalk and sticky notes]

Every self-respecting workshop has its share of hacks. Today’s post is about the NEO Shop Talk’s SWOT hack.

Most of our readers have heard of SWOT analysis because of its widespread use in strategic planning. The NEO developed its own special version of SWOT analysis to help our readers and training participants prepare funding proposals. Our version of SWOT analysis is one of a number of methods on the NEO’s new resource page for proposal planning, featured in last week’s post.

“SWOT” stands for Strengths, Weaknesses, Opportunities, and Threats.  Businesses use SWOT analysis to examine their organizations’ internal strengths and weaknesses, and to identify external opportunities and threats that may impact future success. Strategic plans are then designed to exploit the positive factors and manage the negative factors identified in the analysis.

SWOT analysis can be a great proposal-planning tool. After all, funding proposals are essentially strategic plans. The analysis will prepare you to write a plan that describes the following:

  • Your organization’s unique ability to meet the needs of your primary project beneficiaries (Strengths)
  • The weaknesses in your organization that you hope to address through the funding requested in your proposal (Weaknesses)
  • Resources external to your organization that you have discovered and can leverage for project success, such as experts, partners, or technology (Opportunities)
  • Potential challenges you have identified and your contingency plan for addressing them, should they arise (Threats)

Funding proposals do differ in one key way from organizational strategic plans: they are persuasive in nature. Your proposal must argue convincingly that an initiative is needed. It must also demonstrate your organization’s readiness to address that need. To make your arguments credible, you will need data, and you get that data from a community assessment. (I use the word “community” for any group that you want to serve through your project.) The NEO has tweaked the SWOT analysis process so that it can serve as the first step in the community assessment process.

Every SWOT analysis uses a chart.  We altered the traditional SWOT chart a bit, adding a third column.  In that column, you can record questions that arise during your SWOT discussion to be explored in your community assessment. Our chart looks like this:

[Figure: NEO’s version of the SWOT chart, with a third column in gray for the internal and external unknowns]
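If your team would rather capture the discussion electronically than on a poster, the six squares can also live in a simple data structure. The sketch below is just an illustration with made-up entries, not a NEO tool:

```python
# Sketch: a NEO-style SWOT chart with a third "unknowns" column (entries made up).
swot_chart = {
    "internal": {
        "strengths":  ["Library staff are trusted by the community"],
        "weaknesses": ["No staff member is trained on PubMed"],
        "unknowns":   ["Do staff have time to teach evening classes?"],
    },
    "external": {
        "opportunities": ["Local clinic is willing to co-host training"],
        "threats":       ["Competing programs aimed at the same audience"],
        "unknowns":      ["How many community members have home internet access?"],
    },
}

# The two "unknowns" squares become the question list for the community assessment.
assessment_questions = (swot_chart["internal"]["unknowns"]
                        + swot_chart["external"]["unknowns"])
print(assessment_questions)
```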

Here are the basic steps we suggest for facilitating a SWOT discussion:

  1. Convene a SWOT team. Ideally, representatives’ expertise and experience will lead to a thorough understanding of the internal and external factors that can impact your project. You want team members who know your organization well and those who know the beneficiary community well.  It’s great if you can find people who know both, such as key informants who belong to the beneficiary group and also use the services of your library or organization.
  2. Ask the group to brainstorm ideas for each of the six squares in the chart above. To record group input, facilitators favor poster-size SWOT charts pinned to the wall and stacks of sticky pads that allow team members to add their ideas to each square.
  3. Once you have exhausted the discussion about the six squares, see whether you have evidence to support the facts and ideas. Examine each idea on the chart, asking the following questions: (a) What source of information exists to support our claims about the identified strengths, weaknesses, opportunities, and threats? If you have no real evidence for an idea, it may need to be moved to an “unknown” square. (b) How important is it that we include this claim in our proposal? (c) If we do need to include it, is our data credible enough to support our claim? If it’s weak, how can we get more persuasive data or additional corroborating information?
  4. Now, work with your “unknowns.” How can you educate yourself about those gray areas? What data sources and methods can you use?
  5. At this point, you now know where to focus your community assessment efforts. Your last step is to make a community assessment plan. Assign tasks to team members and determine a data collection timeline.

Once you have collected your data, your core project team can revisit the SWOT chart. Your community assessment findings should fit neatly into the four SWOT squares and, hopefully, you will have far fewer “unknowns.” Some of your community assessment findings will help you build your rationale for your project. Other information will help you refine your project strategies, which you will work out using another great planning tool from our proposal-planning page: the logic model.  For a group project-planning process, check out the NEO post on tearless logic models.


Evaluation Planning for Proposals: a New NEO Resource

[Image: Frustrated businesswoman with a laptop]

Have you ever found yourself in this situation?  You’re well along in your proposal writing when you get to the section that says “how will you evaluate your project?”  Do you think:

  1. “Oh #%$*! It’s that section again.”
  2. “Why do they make us do this?”
  3. “Yay! Here’s where I get to describe how I will collect evidence that my project is really working!”

We at the NEO suggest thinking about evaluation from the get-go, so you’ll be prepared when you get to that section.  And we have some great booklets that show how to do that.  But sometimes people aren’t happy when we say “here are some booklets to read to get started,” even though they are awesome booklets.

So the NEO has made a new web page to make it easier to incorporate evaluation into the project planning process and end up with an evaluation plan that develops naturally.

[Figure: the four steps: 1) Do a Community Assessment; 2) Make a Logic Model; 3) Develop Measurable Objectives; 4) Create an Evaluation Plan]

We group the process into 4 steps: 1) Do a Community Assessment; 2) Make a Logic Model; 3) Develop Measurable Objectives for Your Outcomes; and 4) Create an Evaluation Plan.   Rather than explain what everything is and how to use it (for that you can read the booklets), this page links to the worksheets and samples (and some how-to sections) from the booklets so that you can jump right into planning.  And you can skip the things you don’t need or that you’ve already done.

In addition, we have included links to posts in this blog that show examples of the areas covered so people can put them in context.

We hope this helps with your entire project planning and proposal writing experience, as well as provides support for that pesky evaluation section of the proposal.

Please let Cindy (olneyc@uw.edu) or me (kjvargas@uw.edu) know how it works for you, and feel free to make suggestions.  Cheers!


The Kirkpatrick Model (Part 2) — With Humans

Disclaimer: Karen’s blog post last week on the Kirkpatrick Model used an example that was hypothetical.  We want to be clear that the NEO has never evaluated any programs directed toward improving health outcomes for vampires.

However, we can claim success in applying the Kirkpatrick Model for National Network of Libraries of Medicine (NN/LM) training programs.

The NN/LM’s mission is to promote the biomedical and consumer health resources of the National Library of Medicine.  One strategy that is popular with NN/LM’s Regional Medical Libraries, which lead and manage the network, is the “train-the-trainer” program. These programs teach librarians and others about NLM resources so that they, in turn, will teach their peers, patients, or clients. When the NEO provides evaluation consulting for train-the-trainer programs, we rely heavily on the Kirkpatrick Model.

Kirkpatrick Outcome Levels and Logic Models

For example, the NN/LM’s initiative to reach out to community college librarians incorporated “train-the-trainer” as one of several strategies to promote use of NLM resources in community college health professions programs. While the initiative was multi-pronged, the train-the-trainer program for community college librarians was a major strategy of the project. The Kirkpatrick Model helped our task force define outcomes and develop measures for this activity.

Our logic model led us to the following program theory:

If we train community college librarians to use National Library of Medicine Resources

  • They will respond favorably to our message (Reaction)
  • And discover new, useful health information resources that (Learning)
  • They will use when working with faculty and students (Behavior)
  • Which will lead to increased use of NLM resources among community college faculty and staff (Results)


Measuring Outcomes

We developed two simple measurement tools to assess the four outcome levels.  To measure Reaction, RML instructors administered a standard one-page session evaluation form that has been used for years by instructors who provide NN/LM training sessions. The form collects participants’ feedback, including the grade (A through F) they would assign to the class. This form was our measure of participant reaction.

The other three levels were assessed using a follow-up questionnaire sent to the training participants several months after their training. On this questionnaire, we asked them a series of yes/no questions:

Learning: At this training session, did you learn about health information resources that were NEW to you?

Behavior: Regarding the NEW resources you learned at the training, have you done any of the following?

  • Shown these resources to students?
  • Used the resources when preparing lesson plans?
  • Shown the resources to community college faculty or staff?
  • Used the resources to answer reference questions?

Results: Do you know if the resources are being used by

  • Students?
  • Faculty, administration, or staff at your organization?
  • The librarians at your institution?
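For anyone tabulating this kind of yes/no follow-up data, here is a minimal sketch of how the Behavior items might be tallied. The responses shown are invented for illustration and are not the actual NN/LM results:

```python
# Sketch: tallying yes/no follow-up responses by item (the data below are invented).
behavior_items = [
    "Shown these resources to students?",
    "Used the resources when preparing lesson plans?",
    "Shown the resources to community college faculty or staff?",
    "Used the resources to answer reference questions?",
]

# Each respondent is a dict mapping item -> True (yes) or False (no).
responses = [
    {behavior_items[0]: True,  behavior_items[1]: False,
     behavior_items[2]: True,  behavior_items[3]: True},
    {behavior_items[0]: False, behavior_items[1]: False,
     behavior_items[2]: True,  behavior_items[3]: False},
    {behavior_items[0]: True,  behavior_items[1]: True,
     behavior_items[2]: False, behavior_items[3]: True},
]

for item in behavior_items:
    yes = sum(1 for r in responses if r.get(item))
    print(f"{yes}/{len(responses)} ({100 * yes / len(responses):.0f}%) yes - {item}")
```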

We knew our Results questions were weak. They obviously were very subjective. Most of the respondents said they didn’t know about use beyond their library staff members. Unfortunately, we did not have resources for a more objective measure of our anticipated results (e.g., surveying faculty and students at participating schools). Our dilemma was not unusual. Many practitioners of the Kirkpatrick Model agree that assessing Results-level outcomes can be costly and challenging.

However, in anticipation that this Results-level measure might not work, we had a back-up plan inspired by Robert Brinkerhoff’s Success Case Method (which we posted about here). In this approach, evaluators ask training participants to describe how their training benefited the organization.  We ended the questionnaire with the following open-ended question: Please describe how the training you received on National Library of Medicine resources has made a difference for you or your organization.

This question worked well, with 57% of respondents providing examples of how the training improved their customer services. They reported using the NLM resources to provide reference services and incorporating NLM resources into their information literacy classes for health professions students.  Some also were talking to faculty about the importance of teaching health professions students about RML resources that students could use after graduation.

In the end, the Kirkpatrick Model helped us gather metrics and qualitative information to assess the effectiveness of our train-the-trainer activities. Most of the training participants who responded to our follow-up questionnaire had learned about new resources and were promoting them to students and faculty. Their stories showed that the NN/LM training improved the services they were delivering to their users.

The NEO has drawn on the Kirkpatrick Model to design evaluation methods for similar projects, including our own evaluation training programs.  It is a great tool for helping program planners to define concrete objectives and create measures that are closely linked to desired outcomes.

 

Developing Program Outcomes using the Kirkpatrick Model – with Vampires

Are you in the first stages of planning an outreach program, but not sure how to get started?  The NEO generally recommends starting your program planning by thinking about your outcomes.  Outcomes are the results that you would like to see happen because of your planned activities.  They are the “why we do it” of your program.  However, sometimes it can be difficult to develop your outcome statements in ways that lead to measurable evaluation plans. One way to do it is to use the Kirkpatrick Model, a system that was developed 60 years ago.

Let’s start with an example. Let’s say that your needs assessment of the local vampire community found that they have a lot of health concerns: some preventive (how to avoid garlic, identifying holy water); some about the latest research (blood-borne diseases, wooden-stake survival); and some about mental health (depression from having trouble keeping human friends). Further discussion revealed that while they have access to the internet from their homes, they don’t have much help with finding health information, because most of their public libraries are closed when the vampires are out and about.

So you’ve decided to do some training on both MedlinePlus and PubMed (in the evenings), and you want to come up with outcomes for your training program.

The Kirkpatrick Model breaks training outcomes down into 4 levels, each building on the previous one:

[Figure: the Kirkpatrick Model shown as a pyramid]

Level 1: Reaction – This level of outcome is about satisfaction. People find your training satisfying, engaging, and relevant. While satisfaction is not a very strong impact, Kirkpatrick believes it is essential to motivate people to learn.

Level 2: Learning – This outcome is that by the end of your training, people have learned something, whether knowledge, skills, attitude, or confidence.

Level 3: Behavior – This outcome is related to the degree to which people take what they learned in your training and apply it in their jobs or their lives.

Level 4: Results – This level of outcome is “the greater good” – changes in the community or organization that occur as a result of changes in individuals’ knowledge or behavior.

Back to our training example.  Here are some outcome statements that are based on the Kirkpatrick Model.  I am listing them here as Short-term, Intermediate and Long-term, like you might find in a Logic Model (for more on logic models, see Logic Model blog entries).

Short-term Outcomes:

  • Sunnydale vampires found their MedlinePlus/PubMed classes to be engaging and relevant to their “lives.” (Reaction)
  • Sunnydale vampires demonstrate that they learned how to find needed resources in PubMed and MedlinePlus. (Learning)

Intermediate Outcomes:

  • Sunnydale vampires use MedlinePlus and PubMed to research current and new health issues for themselves and their brood. (Behavior)

Long-term Outcomes:

  • Healthier vampires form stronger bonds with their human community and there is less friction between the two groups. (Results)

[Image: Dracula leaning over a woman]

Once you’ve stated your outcomes, you can use them in a number of ways. You can think through your training activities to ensure that they are working towards those outcomes. And you can assign indicators, target criteria, and time frames to each outcome to come up with measurable objectives for your evaluation plan.

Happy Hunting Outcomes!


From QWERTY to Quality Responses: How To Make Survey Comment Boxes More Inviting

The ubiquitous comment box.  It’s usually stuck at the end of a survey with a simple label such as “Suggestions,” “Comments:” or “Please add additional comments here.”

Those of us who write surveys have an over-idealistic faith in the potential of comment boxes, also known as open-ended survey items or questions. These items will unleash our respondents’ desire to provide creative, useful suggestions! Their comments will shed light on the difficult-to-interpret quantitative findings from closed-ended questions!

In reality, responses in comment boxes tend to be sparse and incoherent. You get a smattering of “high-five” comments from your fans. A few longer responses may come from those with an ax to grind, although their feedback may be completely off topic. More often, comment boxes are left blank, unless you make the mistake of requiring an answer before the respondent can move on to the next item. Then you’ll probably get a lot of QWERTYs in your blank space.

Let’s face it. Comment boxes are the vacant lots of Survey City. Survey writers don’t put much effort into cultivating them. Survey respondents don’t even notice them.

Can we do better than that?  Yes, we can, say the survey methods experts.

First, you have to appreciate this fact: open-ended questions ask a lot of respondents.  They have to create a response. That’s much harder than registering their level of agreement to a statement you wrote for them. So you need strategies that make open-ended questions easier and more motivating for the survey taker.

In his online class Don’t (Survey)Monkey Around: Learn to Make Your Surveys Work,  Matthew Champagne provides the following tips for making comment boxes more inviting to respondents:

  • Focus your question. Get specific and give guidance on how you want respondents to answer. For example, “Please tell us what you think about our new web site. Tell us both what you like and what you think we can do better.” I try to make the question even easier by putting boundaries on how much I expect from them.  So, when requesting feedback on a training session, I might ask my respondents to “Please describe one action step you will take based on what you learned in this class.”
  • Place the open-ended question near related closed-ended questions. For example, if you are asking users to rate the programming at your library, ask for suggestions for future programs right after they rate the current program. The closed-ended questions have primed them to write their response.
  • Give them a good reason to respond. A motivational statement tells respondents how their answers will be used. Champagne says that this technique is particularly effective if you can explain how their responses will be used for their personal benefit. For example, “Please give us one or two suggestions for improving our reference services. Your feedback will help our reference librarians know how to provide better service to users like you.”
  • Give them room to write. You need a sizable blank space that encourages your respondents to be generous with their comments. Personally, when I’m responding to an open-ended comment on a survey, I want my entire response to be in view while I’m writing. As a survey developer, I tend to use boxes that are about three lines deep and half the width of the survey page.

Do we know that Champagne’s techniques work? In Dillman et al.’s classic book on survey methods, the authors present research findings to support Champagne’s advice. Adding motivational words to open-ended survey questions produced a 5-15 word increase in response length and a 12-20% increase in the number of respondents who submitted answers. The authors caution, though, that you need to use open-ended questions sparingly for the motivational statements to work well. When four open-ended questions were added to a survey, the motivational statements worked better for questions placed earlier in the survey.

I should add, however, to never make your first survey question an open-ended one.  The format itself seems to make people close their browsers and run for the hills.  I always warm up the respondents with some easy closed-ended questions before they see an open-ended item.

Dillman et al. gave an additional technique for getting better responses to open-ended items: Asking follow-up questions.  Many online software packages now allow you to take a respondent’s verbatim answer and repeat it in a follow-up question.  For example, a follow-up question about a respondent’s suggestions for improving the library facility might look like this:

“You made this suggestion about how to improve the library facility: ‘The library should add more group study rooms.’ Do you have any other suggestions for improving the library facility?” [The bolded statement is the respondent’s verbatim written comment.]

Follow-up questions like this have been shown to increase the detail of respondents’ answers to open-ended questions.  If you are interested in testing out this format, search your survey software system for instructions on “piping.”
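Your survey platform handles piping for you, but the idea is easy to see in a few lines of Python. The template below is a hypothetical illustration, not any vendor’s actual piping syntax:

```python
# Sketch: "piping" a verbatim answer into a follow-up question.
# The template is a hypothetical illustration, not a real survey platform's syntax.
followup_template = (
    "You made this suggestion about how to improve the library facility: "
    "'{verbatim}' Do you have any other suggestions for improving "
    "the library facility?"
)

verbatim_answer = "The library should add more group study rooms."
print(followup_template.format(verbatim=verbatim_answer))
```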

When possible, I like to use an Appreciative Inquiry approach for open-ended questions. The typical Appreciative Inquiry approach requires two boxes, for example:

  • Please tell us what you liked most about the proposal-writing workshop.
  • What could the instructors do to make this the best workshop possible on proposal writing?

People find it easier to give you an example rooted in experience. We are storytellers at heart, and you are asking for a mini-story. Once they tell their story, they are better prepared to give you advice on how to improve that experience. The Appreciative Inquiry structure also gives specific guidance on how you want them to structure their responses. The format used for the second question is more likely to gather actionable suggestions.

So if you really want to hear from your respondents, put some thought into your comment box questions.  It lets them know that you want their thoughtful answers in return.

Source: The research findings reported in this post are from Internet, Phone, Mail and Mixed-Mode Surveys: The Tailored Design Method (4th ed.), by Dillman, Smyth, and Christian (Hoboken, NJ: John Wiley and Sons, Inc., 2014), pp. 128-134.

Rubber Duck Evaluation Planning

[Image: Yellow rubber duck with reflection]

Programmers have a process for solving coding problems called “Rubber Duck Debugging.” It emerged from the realization that when they explained a coding problem to a non-programmer, suddenly the solution would come to them. Then they realized that they could get the same results by explaining the problem to a rubber duck (or some other inanimate object), so they wouldn’t have to bother someone. What they do is explain each line of code to a rubber duck until they hit on the solution to their problem.

How does this apply to evaluation planning?  Cindy and I kind of did this yesterday (for full disclosure, I will admit that I was the rubber duck).  We were walking through a complicated timeline for an evaluation process. It had a lot of steps. It was easy to leave one out. Some of them overlapped. We really had to explain to each other how it was going to happen.

Rubber Duck Debugging can be employed at almost any stage of the evaluation planning process. Here are some examples:

Logic Models

When creating a logic model, you usually work from the right side (the outcomes you want to see), and work your way left to the activities that you want to do that will bring about the outcomes, then further left to the things you need to have in place to do the activities (here’s a sample logic model from the NEO’s Booklet 2 Planning Outcomes Based Outreach Projects).  Once you’ve got your first draft of the logic model, get your rubber duck and carefully describe your logic model to it from left to right, saying “If we have these things in place, then we will be able to do these activities. If these activities are done correctly, they will lead to these results we want to see.  If those things happen, over time it is logical that these other long term outcomes may happen.”  Explain thoroughly so the duck understands how it all works, and you know you haven’t missed anything.
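If it helps to have the chain written down before you start talking, a logic model can be reduced to a simple left-to-right structure and narrated mechanically. This is only a sketch with placeholder entries, not the NEO’s logic model format:

```python
# Sketch: narrating a logic model left to right, rubber-duck style (placeholder entries).
logic_model = {
    "inputs":     ["trained staff", "an evening meeting room", "laptops"],
    "activities": ["evening hands-on MedlinePlus/PubMed classes"],
    "outputs":    ["4 classes taught", "40 vampires trained"],
    "outcomes":   ["vampires find and use reliable health information"],
}

print(f"If we have {', '.join(logic_model['inputs'])} in place,")
print(f"then we can carry out {', '.join(logic_model['activities'])};")
print(f"if those are done well, they produce {', '.join(logic_model['outputs'])},")
print(f"which over time should lead to {', '.join(logic_model['outcomes'])}.")
```

Reading the printed chain aloud to the duck is a quick way to notice a missing input or an activity that doesn’t actually connect to any outcome.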

Indicators

Process Indicators: in the logic model section, you explained to your duck “If these activities are done correctly they will lead to these results.”  What does “correctly” look like? Explain to your duck how things need to be done or they won’t lead to the results you want to see. Be as specific as you can.  Then think about what you can measure to see how well you’re doing the activities.  Explain those things to the duck so you can be sure that you are measuring the things you want to see happen.

Outcome Indicators: Looking at your logic model, you know what results you’d like to see. Think about what would indicate that those results have happened, then think about how and when you would measure those indicators. Talk it out with the duck. In some cases you may not have the time, money, or staff needed to measure an indicator you would really like to measure. In some cases the data that you can easily collect with your money, staff, and time will not be acceptable to your funders or stakeholders. You will need to make sure you have indicators that you can measure successfully and that are credible to your stakeholders. The rubber duck’s masterful silence will help you work this out.
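One way to keep track of that feasibility-and-credibility check is to jot your candidate indicators down in a simple structure and filter them. This is a rough sketch with hypothetical entries, not a NEO worksheet:

```python
# Sketch: screening candidate outcome indicators (all entries are hypothetical).
candidate_indicators = [
    {"outcome": "participants use MedlinePlus after training",
     "indicator": "% of participants reporting use at 3-month follow-up",
     "feasible_with_our_resources": True,
     "credible_to_stakeholders": True},
    {"outcome": "community health improves",
     "indicator": "county-level health statistics",
     "feasible_with_our_resources": False,   # too costly for this project
     "credible_to_stakeholders": True},
]

keep = [c for c in candidate_indicators
        if c["feasible_with_our_resources"] and c["credible_to_stakeholders"]]
for c in keep:
    print("Keep:", c["indicator"])
```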

Data collection

I think this is where the duck will really come in handy. To collect the data described above, you will need some data collection tools, like questionnaires or forms. Once you’ve put together the tools, you should explain to the duck what data each question is intended to gather. When you explain it out loud, you might catch some basic mistakes, like asking questions you don’t really need the answers to, or asking a question that is really two questions in one.
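That last check can even be roughed out in code. Here is a deliberately crude heuristic (just an illustration, not a substitute for explaining each question to the duck) that flags questions containing “and,” which are sometimes two questions in one:

```python
# Sketch: a crude heuristic for spotting possibly double-barreled questions.
# The questions below are made-up examples; the check is intentionally simplistic.
questions = [
    "Was the class engaging and relevant to your work?",   # likely two questions in one
    "Please rate the instructor's knowledge of MedlinePlus.",
]

for q in questions:
    if " and " in q.lower():
        print("Possibly double-barreled:", q)
```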

Then you need a system for collecting the data using your tools.  If it’s a big project, a number of people may be collecting the data and you will have to write instructions to make sure they are all doing it the same way.  Read each instruction to the duck and explain why it’s important to the success of the project. Did the duck’s ominous silence suggest areas where someone might misunderstand the instructions?

I hope this is helpful to you in your evaluation planning, and maybe other areas of your life.  Why use a rubber duck instead of something else? Well, they are awfully cute. And they come with a great song that you can sing when you’ve completed your plan: https://www.youtube.com/watch?v=Mh85R-S-dh8


Designing Surveys: Does The Order of Response Options Matter?

I recently reviewed a questionnaire for a colleague, who used one of the most common question formats around: the Likert question. Here is an example:

[Figure: sample Likert question about Justin Bieber, with the most positive response options on the left]

This is not a question from my colleague’s survey.  (I thought I should point that out in case you were wondering about my professional network.)  However, her response options were laid out similarly, with the most positive ones to the left.  So I shared with her the best practice I learned from my own training in survey design: Reverse the order of the response options so the most negative ones are at the left.

 

[Figure: the same question with the response options reversed, so the most negative options are on the left]

 

This “negative first” response order tends to be accepted as a best practice because it is thought to reduce positive response bias (that is, people overstating how much they agree with a given statement).  Because I find myself repeating this advice often, I thought the topic of “response option order” would make a good blog post. After all, who doesn’t like simple little rules to follow?  To write a credible blog post, I decided to track down the empirical evidence that underpins this recommended practice.

And I learned something new: that evidence is pretty flimsy.

This article by Jeff Sauro, from Measuring U, provides a nice summary and references about the evidence for our “left-side bias.” “Left-side bias” refers to the tendency for survey-takers to choose the left-most choices in a horizontal list of response options. No one really knows why we do this. Some speculate it’s because we read from left to right and the left options are the first ones we see. I suppose we’re lazy: we stop reading mid-page and make a choice, picking one of the few options we laid eyes on. This speculation comes from findings that show the left-side bias is more pronounced if the question is vague or confusing, or if the survey takers flat-out don’t care about the question topic.

Here’s how left-side bias is studied. Let’s say you really care what people think about Justin Bieber. (I know it’s a stretch, but work with me here.) Let’s say you asked 50 people their opinion of the pop star using the sample question from above. You randomly assign the first version to 25 of the respondents (group 1) and the second version to the other 25 respondents (group 2). The research findings would predict that group 1 will seem to have a more favorable opinion of Justin Bieber, purely because the positive options are on the left.
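To make that design concrete, here is a small simulation sketch of the split test: 25 respondents see each version, and the mean ratings are compared. All of the data, including the size of the assumed left-side bias, are made up for illustration:

```python
# Sketch: simulating the split test described above (all data are simulated).
import random

random.seed(42)

def simulated_ratings(n, left_side_boost=0.0):
    """Agreement ratings on a 1-5 scale, with an optional bias toward higher scores."""
    return [min(5, max(1, round(random.gauss(3.5 + left_side_boost, 1.0))))
            for _ in range(n)]

group1 = simulated_ratings(25, left_side_boost=0.3)  # version 1: positive options on the left
group2 = simulated_ratings(25)                       # version 2: negative options on the left

print("Mean rating, version 1 (positive on left):", sum(group1) / len(group1))
print("Mean rating, version 2 (negative on left):", sum(group2) / len(group2))
```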

Sauro’s references for his article do provide evidence of “left-side bias.” However, after reviewing his references, I drew the same conclusion that he did: the effect of response option order is small, to the point of being insignificant. I became more convinced that this was the case when I looked for guidance in the work of Donald Dillman, who has either conducted or synthesized studies on almost every imaginable characteristic of surveys. Yet I could not find any Dillman source that addressed how to order response options for Likert questions.  In his examples, Dillman follows the “negative option to the left” principle, but I couldn’t find his explicit recommendation for the format. Response option order does not seem to be on Dillman’s radar.

So, how does all this information change my own practice going forward?

[Image: three smiley faces, from frowning on the left to smiling on the right, with a finger touching the smiling one]

For new surveys, I’ll still format Likert-type survey questions with the most negative response options to the left.  You may be saying to yourself, “But if the negative options are on the left, doesn’t that introduce negative bias?” Maybe, but I would argue that using the “negative to the left” format will give me the most conservative estimate of my respondents’ level of endorsement on a given topic.

However, if I’m helping to modify an existing survey, particularly one that has been used several times, I won’t suggest changing the order of the response options. If people are used to seeing your questions with positive responses to the left, keep doing that. You’ll introduce a different and potentially worse type of error by switching the order. More importantly, if you are planning to compare findings from a survey given at two points in time, you want to keep the order of response options consistent. That way, you’ll have a comparable amount of error caused by bias at time 1 and time 2.

Finally, I would pay much closer attention to the survey question itself.  Extreme language seems to be a stronger source of bias than order of the response options. Edit out those extreme words like “very,”  “extremely,” “best” and “worst.”   For example, I would rewrite our sample question to “Justin Bieber is a good singer.”   Dillman would suggest neutralizing the question further with wording like this: “Do you agree or disagree that Justin Bieber is a good singer?”

Of course, this is one of many nuanced considerations you have to make when writing survey questions.  The most comprehensive resource I know for survey design is Internet, Phone, Mail and Mixed-Mode Surveys: The Tailored Design Method (4th ed.), by Dillman, Smyth, and Christian (Hoboken, NJ: John Wiley and Sons, Inc, 2014).

 

Happy Fourth of July in Numbers!

[Image: Fourth of July graphic]

Before holidays we sometimes do a post on the value of putting your data in a visually understandable format, perhaps some kind of infographic.

As I write this, some of you may be sitting at your desks pondering how you will celebrate U.S. Independence Day. To help turn your ponderings into a work-related activity, here are some examples of Fourth of July Infographics.  Since some of them have numbers but no dates (for example the number of fireworks purchased in the US “this year”) you might use them as templates for the next holiday-based infographic you create yourself.

If you like History, the History Channel has a fun infographic called 4th of July by the Numbers.  It includes useful information such as:

  • the oldest signer of the Declaration of Independence was Benjamin Franklin at 70,
  • the record for the hot dog eating contest on Coney Island is 68 hotdogs in 10 minutes, and
  • 80% of Americans attend a barbecue, picnic or cookout on the 4th of July

Thinking about the food for your picnic (if you’re one of the 80% having one)?

From the perspective of work (remember work?) here is an infographic from Unmetric on how and why you should create your own infographics for the 4th of July: How to Create Engaging 4th of July Content.

Have a great 4th of July weekend!


Party Guides for Data Dives

[Image: children’s hands holding multicolored numbers]

Children who grow vegetables are more likely to eat them. Likewise, stakeholders who have a hand in interpreting evaluation data are more likely to use the findings.

Traditionally, the data analysis phase of evaluation has been relegated to technical experts with research skills. However, as the field sharpens its focus on evaluation use, more evaluators are working on developing methods to engage groups of stakeholders in data analysis. While evaluation use is one objective of this approach, evaluators also are compelled to use participatory data analysis because it

  • Provides context for findings and interpretations that include multiple viewpoints
  • Generates stakeholder interest in the evaluation process
  • Determines which findings are most important to those impacted by a program

Last week, Karen and I attended Participatory Sense-making for Evaluators: Data Parties and Dabbling in Data, a webinar offered by Kylie Hutchinson of Community Solutions and Corey Newhouse of Public Profit. They shared their strategies for getting groups of stakeholders to roll up their sleeves and dive elbow-deep into data. Such events are referred to as data parties, sense-making sessions, results briefings, and data-driven reviews. Hutchinson also shared her data party resource page.
