
NEO Shop Talk

The blog of the National Network of Libraries of Medicine Evaluation Office

Archive for July, 2016

Developing Program Outcomes using the Kirkpatrick Model – with Vampires

Thursday, July 28th, 2016

Are you in the first stages of planning an outreach program, but not sure how to get started?  The NEO generally recommends starting your program planning by thinking about your outcomes.  Outcomes are the results that you would like to see happen because of your planned activities.  They are the “why we do it” of your program.  However, sometimes it can be difficult to develop your outcome statements in ways that lead to measurable evaluation plans. One way to do it is to use the Kirkpatrick Model, a system that was developed 60 years ago.

Let’s start with an example. Let’s say that your needs assessment of the local vampire community found that they have a lot of health concerns: some preventive (how to avoid garlic, identifying holy water); some about the latest research (blood-borne diseases, wooden stake survival); and some about mental health (depression from having trouble keeping human friends). Further discussion revealed that while they have internet access from their homes, they don’t have much help with finding health information because most of their public libraries are closed when the vampires are out and about.

So you’ve decided to do some training on both MedlinePlus and PubMed (in the evenings), and you want to come up with outcomes for your training program.

The Kirkpatrick Model breaks down training outcomes to 4 levels, each building on the previous one:

[Image: The Kirkpatrick Model shown as a pyramid]

Level 1: Reaction – This level of outcome is about satisfaction. People find your training satisfying, engaging, and relevant. While satisfaction is not a very strong impact, Kirkpatrick believes it is essential to motivate people to learn.

Level 2: Learning – This outcome is that by the end of your training, people have learned something, whether knowledge, skills, attitude, or confidence.

Level 3: Behavior – This outcome is related to the degree to which people take what they learned in your training and apply it in their jobs or their lives.

Level 4: Results – This level of outcome is “the greater good” – changes in the community or organization that occur as a result of changes in individuals’ knowledge or behavior.

Back to our training example.  Here are some outcome statements that are based on the Kirkpatrick Model.  I am listing them here as Short-term, Intermediate and Long-term, like you might find in a Logic Model (for more on logic models, see Logic Model blog entries).

Short-term Outcomes:

  • Sunnydale vampires found their MedlinePlus/PubMed classes to be engaging and relevant to their “lives.” (Reaction)
  • Sunnydale vampires demonstrate that they learned how to find needed resources in PubMed and MedlinePlus. (Learning)

Intermediate Outcomes:

  • Sunnydale vampires use MedlinePlus and PubMed to research current and new health issues for themselves and their brood. (Behavior)

Long-term Outcomes:

  • Healthier vampires form stronger bonds with their human community and there is less friction between the two groups. (Results)

[Image: Dracula leaning over a woman]

Once you’ve stated your outcomes, you can use them in a number of ways. You can think through your training activities to ensure that they are working towards those outcomes. And you can assign indicators, target criteria, and time frames to each outcome to come up with measurable objectives for your evaluation plan.
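
To make those objectives concrete, here is a minimal sketch in Python of how you might pair each outcome with an indicator, a target criterion, and a time frame. The indicators and targets below are made-up illustrations for the vampire example, not figures from any actual evaluation plan.

```python
# Sketch of turning Kirkpatrick-level outcomes into measurable objectives.
# The indicators, targets, and time frames here are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Objective:
    level: str       # Kirkpatrick level: Reaction, Learning, Behavior, or Results
    outcome: str     # the outcome statement from the logic model
    indicator: str   # what you will measure
    target: str      # the criterion that counts as success
    time_frame: str  # when you expect to see it

objectives = [
    Objective(
        level="Reaction",
        outcome="Vampires find the MedlinePlus/PubMed classes engaging and relevant.",
        indicator="Average rating on the end-of-class satisfaction item",
        target="At least 4 out of 5",
        time_frame="End of each class",
    ),
    Objective(
        level="Learning",
        outcome="Vampires demonstrate they can find needed resources in PubMed and MedlinePlus.",
        indicator="Percentage of attendees completing the hands-on search exercise",
        target="At least 80%",
        time_frame="End of the training series",
    ),
]

for obj in objectives:
    # Print each objective as a one-line measurable statement.
    print(f"[{obj.level}] {obj.indicator}: {obj.target} ({obj.time_frame})")
```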

Happy Hunting Outcomes!


From QWERTY to Quality Responses: How To Make Survey Comment Boxes More Inviting

Friday, July 22nd, 2016

The ubiquitous comment box.  It’s usually stuck at the end of a survey with a simple label such as “Suggestions,” “Comments:” or “Please add additional comments here.”

Those of us who write surveys have an over-idealistic faith in the potential of comment boxes, also known as open-ended survey items or questions.  These items will unleash our respondents’ desire to provide creative, useful suggestions! Their comments will shed light on the difficult-to-interpret quantitative findings from closed-ended questions!

In reality, responses in comment boxes tend to be sparse and incoherent. You get a smattering of “high-five” comments from your fans. A few longer responses may come from those with an ax to grind, although their feedback may be completely off topic.  More often, comment boxes are left blank, unless you make the mistake of requiring an answer before the respondent can move on to the next item. Then you’ll probably get a lot of QWERTYs in your blank space.

Let’s face it.  Comment boxes are the vacant lots of Survey City.  Survey writers don’t put much effort into cultivating them. Survey respondents don’t even notice them.

Can we do better than that?  Yes, we can, say the survey methods experts.

First, you have to appreciate this fact: open-ended questions ask a lot of respondents.  They have to create a response. That’s much harder than registering their level of agreement to a statement you wrote for them. So you need strategies that make open-ended questions easier and more motivating for the survey taker.

In his online class Don’t (Survey)Monkey Around: Learn to Make Your Surveys Work,  Matthew Champagne provides the following tips for making comment boxes more inviting to respondents:

  • Focus your question. Get specific and give guidance on how you want respondents to answer. For example, “Please tell us what you think about our new web site. Tell us both what you like and what you think we can do better.” I try to make the question even easier by putting boundaries on how much I expect from them.  So, when requesting feedback on a training session, I might ask my respondents to “Please describe one action step you will take based on what you learned in this class.”
  • Place the open-ended question near related closed-ended questions. For example, if you are asking users to rate the programming at your library, ask for suggestions for future programs right after they rate the current program. The closed-ended questions have primed them to write their response.
  • Give them a good reason to respond. A motivational statement tells respondents how their answers will be used. Champagne says that this technique is particularly effective if you can explain how their responses will be used for their personal benefit. For example, “Please give us one or two suggestions for improving our reference services.  Your feedback will help our reference librarians know how to provide better service to users like you.”
  • Give them room to write. You need a sizable blank space that encourages your respondents to be generous with their comments. Personally, when I’m responding to an open-ended comment on a survey, I want my entire response to be in view while I’m writing.  As a survey developer, I tend to use boxes that are about three lines deep and half the width of the survey page.

Do we know that Champagne’s techniques work?  In Dillman et al.’s classic book on survey methods, the authors present research findings to support Champagne’s advice. Adding motivational words to open-ended survey questions produced a 5-15 word increase in response length and a 12-20% increase in how many respondents submitted answers.  The authors caution, though, that you need to use open-ended questions sparingly for the motivational statements to work well. When four open-ended questions were added to a survey, the motivational statements worked better for questions placed earlier in the survey.

I should add, however, that you should never make your first survey question an open-ended one.  The format itself seems to make people close their browsers and run for the hills.  I always warm up the respondents with some easy closed-ended questions before they see an open-ended item.

Dillman et al. offer an additional technique for getting better responses to open-ended items: asking follow-up questions.  Many online software packages now allow you to take a respondent’s verbatim answer and repeat it in a follow-up question.  For example, a follow-up question about a respondent’s suggestions for improving the library facility might look like this:

“You made this suggestion about how to improve the library facility: ‘The library should add more group study rooms.’ Do you have any other suggestions for improving the library facility?” [Bolded statement is the respondent’s verbatim written comment.]

Follow-up questions like this have been shown to increase the detail of respondents’ answers to open-ended questions.  If you are interested in testing out this format, search your survey software system for instructions on “piping.”
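
The mechanics differ from one survey package to another, but the idea behind piping is simple text substitution. Here is a minimal Python sketch of that idea; the function name and wording are illustrative only, not any vendor’s actual piping syntax.

```python
# Illustrative sketch of "piping": repeating a respondent's verbatim answer
# inside a follow-up question. Real survey software handles this for you;
# the helper below is hypothetical.
def follow_up_question(topic: str, verbatim_answer: str) -> str:
    return (
        f"You made this suggestion about how to improve the {topic}: "
        f"'{verbatim_answer}' "
        f"Do you have any other suggestions for improving the {topic}?"
    )

print(follow_up_question(
    topic="library facility",
    verbatim_answer="The library should add more group study rooms.",
))
```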

When possible, I like to use an Appreciative Inquiry approach for open-ended questions. The typical Appreciative Inquiry approach requires two boxes, for example:

  • Please tell us what you liked most about the proposal-writing workshop.
  • What could the instructors do to make this the best workshop possible on proposal writing?

People find it easier to give you an example rooted in experience.  We are storytellers at heart, and you are asking for a mini-story. Once they tell their story, they are better prepared to give you advice on how to improve that experience. The Appreciative Inquiry structure also gives specific guidance on how you want them to structure their responses.  The format used for the second question is more likely to gather actionable suggestions.

So if you really want to hear from your respondents, put some thought into your comment box questions.  It lets them know that you want their thoughtful answers in return.

Source: The research findings reported in this post are from Internet, Phone, Mail and Mixed-Mode Surveys: The Tailored Design Method (4th ed.), by Dillman, Smyth, and Christian (Hoboken, NJ: John Wiley and Sons, Inc., 2014), pp. 128-134.

Rubber Duck Evaluation Planning

Friday, July 15th, 2016

[Image: Yellow rubber duck with reflection]

Programmers have a process for solving coding problems called “Rubber Duck Debugging.” It emerged from the realization that when they explained a coding problem to a non-programmer, the solution would suddenly come to them. Then they realized that they could get the same results by explaining the problem to a rubber duck (or some other inanimate object) and wouldn’t have to bother anyone.  The practice: explain each line of code to the rubber duck until you hit on the solution to your problem.

How does this apply to evaluation planning?  Cindy and I kind of did this yesterday (for full disclosure, I will admit that I was the rubber duck).  We were walking through a complicated timeline for an evaluation process. It had a lot of steps. It was easy to leave one out. Some of them overlapped. We really had to explain to each other how it was going to happen.

Rubber Duck Debugging can be employed at almost any stage of the evaluation planning process. Here are some examples:

Logic Models

When creating a logic model, you usually work from the right side (the outcomes you want to see), and work your way left to the activities that you want to do that will bring about the outcomes, then further left to the things you need to have in place to do the activities (here’s a sample logic model from the NEO’s Booklet 2 Planning Outcomes Based Outreach Projects).  Once you’ve got your first draft of the logic model, get your rubber duck and carefully describe your logic model to it from left to right, saying “If we have these things in place, then we will be able to do these activities. If these activities are done correctly, they will lead to these results we want to see.  If those things happen, over time it is logical that these other long term outcomes may happen.”  Explain thoroughly so the duck understands how it all works, and you know you haven’t missed anything.

Indicators

Process Indicators: in the logic model section, you explained to your duck “If these activities are done correctly they will lead to these results.”  What does “correctly” look like? Explain to your duck how things need to be done or they won’t lead to the results you want to see. Be as specific as you can.  Then think about what you can measure to see how well you’re doing the activities.  Explain those things to the duck so you can be sure that you are measuring the things you want to see happen.

Outcome Indicators: Looking at your logic model, you know what results you’d like to see.  Think about what would indicate that those results had happened. Then think about how and when you would measure those indicators. Talk it out with the duck. In some cases you may not have the time, money or staff needed to measure an indicator you would really like to measure.  In some cases the data that you can easily collect with your money, staff and time will not be acceptable to your funders or stakeholders.  You will need to make sure you have indicators that you can measure successfully that are credible to your stakeholders. The rubber duck’s masterful silence will help you work this out.

Data collection

I think this is where the duck will really come in handy.  To collect the data that you have described above, you will need some data collection tools, like questionnaires or forms. Once you’ve put together the tools, you should explain to the duck what data each question is intended to gather. When you explain it out loud, you might catch some basic mistakes, like asking questions you don’t really need the answers to, or asking a question that is really two questions in one.

Then you need a system for collecting the data using your tools.  If it’s a big project, a number of people may be collecting the data and you will have to write instructions to make sure they are all doing it the same way.  Read each instruction to the duck and explain why it’s important to the success of the project. Did the duck’s ominous silence suggest areas where someone might misunderstand the instructions?

I hope this is helpful to you in your evaluation planning, and maybe other areas of your life.  Why use a rubber duck instead of something else? Well, they are awfully cute. And they come with a great song that you can sing when you’ve completed your plan: https://www.youtube.com/watch?v=Mh85R-S-dh8


Designing Surveys: Does The Order of Response Options Matter?

Friday, July 8th, 2016

I recently reviewed a questionnaire for a colleague, who used one of the most common question formats around: the Likert question. Here is an example:

[Image: sample Likert question about Justin Bieber, version 1 (positive options on the left)]

This is not a question from my colleague’s survey.  (I thought I should point that out in case you were wondering about my professional network.)  However, her response options were laid out similarly, with the most positive ones to the left.  So I shared with her the best practice I learned from my own training in survey design: Reverse the order of the response options so the most negative ones are at the left.


[Image: sample Likert question about Justin Bieber, version 2 (negative options on the left)]


This “negative first” response order tends to be accepted as a best practice because it is thought to reduce positive response bias (that is, people overstating how much they agree with a given statement).  Because I find myself repeating this advice often, I thought the topic of “response option order” would make a good blog post. After all, who doesn’t like simple little rules to follow?  To write a credible blog post, I decided to track down the empirical evidence that underpins this recommended practice.

And I learned something new: that evidence is pretty flimsy.

This article by Jeff Sauro, from Measuring U, provides a nice summary of, and references for, the evidence behind our “left-side bias.”  “Left-side bias” refers to the tendency for survey-takers to choose the left-most choices in a horizontal list of response options. No one really knows why we do this.  Some speculate it’s because we read from left to right and the left options are the first ones we see.  I suppose we’re lazy: we stop reading mid-page and make a choice, picking one of the few options we have laid eyes on. This speculation comes from findings that show the left-side bias is more pronounced if the question is vague or confusing, or if the survey takers flat-out don’t care about the question topic.

Here’s how left-side bias is studied. Let’s say you really care what people think about Justin Bieber. (I know it’s a stretch, but work with me here.)  Let’s say you asked 50 people their opinion of the pop star using the sample question from above. You randomly assign the first version to 25 of the respondents (group 1) and the second version to the other 25 respondents (group 2).  Findings would predict that group 1 will seem to have a more favorable opinion of Justin Bieber, purely because the positive options are on the left.
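
If you want to see the bare bones of that design, here is a minimal Python sketch of the random assignment and group comparison. The ratings in it are randomly generated placeholders, not real findings.

```python
# Sketch of the randomized design described above: 50 simulated respondents,
# half assigned to version 1 (positive options on the left), half to version 2.
# The rating values are random placeholders for illustration only.
import random

random.seed(42)
respondents = list(range(50))
random.shuffle(respondents)
group_1, group_2 = respondents[:25], respondents[25:]

# Pretend each respondent gives a 1-5 agreement rating to the Bieber item.
ratings = {r: random.randint(1, 5) for r in respondents}

mean_1 = sum(ratings[r] for r in group_1) / len(group_1)
mean_2 = sum(ratings[r] for r in group_2) / len(group_2)

# With real respondents, a left-side bias would show up as a higher mean
# in group 1 (positive options on the left) than in group 2.
print(f"Version 1 mean: {mean_1:.2f}, Version 2 mean: {mean_2:.2f}")
```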

Sauro’s references for his article do provide evidence of “left-side bias.” However, after reviewing his references, I drew the same conclusion that he did: the effect of response option order is small, to the point of being insignificant. I became more convinced that this was the case when I looked for guidance in the work of Donald Dillman, who has either conducted or synthesized studies on almost every imaginable characteristic of surveys. Yet I could not find any Dillman source that addressed how to order response options for Likert questions.  In his examples, Dillman follows the “negative option to the left” principle, but I couldn’t find his explicit recommendation for the format. Response option order does not seem to be on Dillman’s radar.

So, how does all this information change my own practice going forward?

[Image: three smiley faces (frowning, neutral, smiling), with a finger touching the smiling one]

For new surveys, I’ll still format Likert-type survey questions with the most negative response options to the left.  You may be saying to yourself, “But if the negative options are on the left, doesn’t that introduce negative bias?” Maybe, but I would argue that using the “negative to the left” format will give me the most conservative estimate of my respondents’ level of endorsement on a given topic.

However, if I’m helping to modify an existing survey, particularly one that has been used several times, I won’t suggest changing the order of the response options. If people are used to seeing your questions with positive responses to the left, keep doing that.  You’ll introduce a different and potentially worse type of error by switching the order. More importantly, if you are planning to compare findings from a survey given at two points in time, you want to keep the order of response options consistent.  That way, you’ll have a comparable amount of error caused by bias at time 1 and time 2.

Finally, I would pay much closer attention to the survey question itself.  Extreme language seems to be a stronger source of bias than order of the response options. Edit out those extreme words like “very,”  “extremely,” “best” and “worst.”   For example, I would rewrite our sample question to “Justin Bieber is a good singer.”   Dillman would suggest neutralizing the question further with wording like this: “Do you agree or disagree that Justin Bieber is a good singer?”

Of course, this is one of many nuanced considerations you have to make when writing survey questions.  The most comprehensive resource I know for survey design is Internet, Phone, Mail and Mixed-Mode Surveys: The Tailored Design Method (4th ed.), by Dillman, Smyth, and Christian (Hoboken, NJ: John Wiley and Sons, Inc, 2014).


Happy Fourth of July in Numbers!

Friday, July 1st, 2016

[Image: 4th of July graphic]

Before holidays we sometimes do a post on the value of putting your data in a visually understandable format, perhaps some kind of infographic.

As I write this, some of you may be sitting at your desks pondering how you will celebrate U.S. Independence Day. To help turn your ponderings into a work-related activity, here are some examples of Fourth of July Infographics.  Since some of them have numbers but no dates (for example the number of fireworks purchased in the US “this year”) you might use them as templates for the next holiday-based infographic you create yourself.

If you like history, the History Channel has a fun infographic called 4th of July by the Numbers.  It includes useful information such as:

  • the oldest signer of the Declaration of Independence was Benjamin Franklin at 70,
  • the record for the hot dog eating contest on Coney Island is 68 hotdogs in 10 minutes, and
  • 80% of Americans attend a barbecue, picnic or cookout on the 4th of July

Thinking about the food for your picnic (if you’re one of the 80% having one)?

From the perspective of work (remember work?) here is an infographic from Unmetric on how and why you should create your own infographics for the 4th of July: How to Create Engaging 4th of July Content.

Have a great 4th of July weekend!

