
NEO Shop Talk

The blog of the National Network of Libraries of Medicine Evaluation Office

Archive for March, 2017

“Five Things I’ve Learned,” from an Evaluation Veteran

Friday, March 24th, 2017

Cindy Olney in her home office

Kalyna Durbak, the NEO’s novice evaluator, recently posted the five things she learned about evaluation since joining our staff. I thought I would steal, er, borrow Kalyna’s “five things” topic and write about the most important lessons I’ve learned after 25+ years in the evaluation field.

My first experience with program evaluation was in the 1980s, as a graduate assistant in the University of Arizona School of Medicine’s evaluation office.  Excel was just emerging as the cool new tool for data crunching. SPSS ran on room-sized mainframes, and punch cards were fading fast from the computing world. Social security numbers were routinely collected and stored along with other information about our research or evaluation participants. Our desktop computers ran on DOS. The Internet had not even begun wreaking havoc.

Yep, I’m old. The field has evolved over time and the work is more meaningful than ever. Here are five things I know now that I wish I had known when I started.

#1 Evaluation is different from research: Evaluation and research have distinctly different end goals. The aim of research is to add to general knowledge and understanding. Evaluation, on the other hand, is designed to improve the value of something specific (programs, products, personnel, services) and to guide decision-making. Evaluation borrows many techniques from research methodology because those methods are a means to accurate, credible information. In evaluation, however, technical accuracy of data means nothing if it cannot be applied to program improvement or decision-making.

#2 Evaluation is not the most important kid in the room. Evaluation, unchecked, can be resource-intensive in both money and time. Every dollar and hour spent on evaluation is a dollar and hour not spent producing or enhancing the program or service itself. Project plans should focus first on service or program design and delivery, with proportional funding allocated to evaluation. Evaluation studies should not be judged by the same criteria used for research; rather, the goal is to collect usable information in the most cost-effective manner possible.

#3 What gets measured gets done: Evaluation is a management tool that’s worth the investment. Project teams are most successful when they begin with the end in mind, and evaluation plans force discussion about desired results (outcomes) early on. (Thank you, Stephen Covey, for helping evaluators advocate for their early involvement in projects.) You must articulate what you want to accomplish before you can measure it. You need a good action plan, logically linked to desired outcomes, before you can design a process assessment. Even if your resources limit you to the most rudimentary of evaluation methods, the mere process of committing to outcomes, activities, and measures on paper (in a logic model, please!) allows a team to take one giant step toward program success.

#4 Value is in the eyes of the stakeholders: While research asks “What happened?”, evaluation asks “What happened, how important is it, and, knowing what we know, what do we do?” That’s why an evaluation report that merely collects dust on a shelf is a travesty. The evaluation process is not complete until stakeholders have interpreted the information and contributed their perspectives on how to act on the findings. In essence, I am talking about rendering judgment: what do the findings say about the value of the program? That value judgment should, in turn, inform decisions about the future of the program. While factual findings should be objective, judgments are not. Value is in the eyes of the people invested in the success of your program, aka stakeholders. Assessments of value may vary and even conflict among stakeholder groups. For example, a public library health literacy program has several types of stakeholders. The library users will judge the program based on its usefulness to their lives. City government officials will judge the program based on how many taxpayers express satisfaction with it. Public librarians will value the program if it aligns with their library mission and brings visibility to their organization. Evaluation is not complete until these multiple perspectives of value are explored and integrated into program decision-making.

#5 Everything I need to know about evaluation reporting I learned in kindergarten. Kindergarten was the first and possibly the last place I got to learn through play. In grad school, I learned to write 25-50 page research and evaluation reports. In my career, I discovered that people read the executive summary (if I was lucky), then stopped. Evaluations are supposed to lead to learning about your programs, but no one thinks there’s anything fun about a 50-page report. Thankfully, evaluators have developed quite a few engaging ways to involve stakeholders in analyzing and using evaluation findings. For example, data dashboards allow stakeholders to interact with data visualizations and answer their own evaluation questions. Data parties provide a social setting to share coffee, snacks, and data interpretations. Innovations in evaluation reporting appear every year. It’s a great time to be an evaluator! More bling, less writing, and it’s all for the greater good.

So, there you have it: my five things. These five lessons have served me well. I suspect they will continue to do so until bigger and better evaluation ideas come along. What about you? Share your insights below in our comments section.

Uninspired by Bars? Try Dot Plots

Friday, March 17th, 2017

Thanks to Jessi Van Der Volgen and Molly Knapp at the NNLM Training Office for allowing us to feature their assessment project and for providing the images in this post. 

Are you tired of bars?

I don’t mean the kind of bars where you celebrate and socialize. I mean the kind used in data visualization. My evidence-free theory is that people still succumb to using the justifiably maligned pie chart simply because they cannot face one more bar graph.

Take heart, readers. Today, I’m here to tell you a story about some magic data that fell on the NEO’s doorstep and broke us free of our bar chart rut.

It all began with a project by our NNLM Training Office (NTO) colleagues, the intrepid leaders of NNLM’s instructional design and delivery. They do it all. They teach. They administratively support the regions’ training efforts. They initiate opportunities and resources to up-level instructional effectiveness throughout the network. One of their recent initiatives was a national needs assessment of NNLM training participants. That was the source of the fabulous data I write about today.

For context, I should mention that training is one of NNLM’s key strategies for reaching the furthest corners of our country to raise awareness, accessibility, and use of NLM health information resources. NNLM offers classes to all types of direct users (e.g., health professionals, community-based organization staff), but we value the efficiency of our “train-the-trainer” programs. In these classes, librarians and others learn how to use NLM resources so they, in turn, can teach their users. The national needs assessment was geared primarily toward understanding how to best serve “train-the-trainer” participants, who often take multiple classes to enhance their skills.

For the NTO’s needs assessment, one area of inquiry involved an inventory of learners’ need for training in 30 topic areas. The NTO wanted to assess participants’ current and desired levels of proficiency in each topic. That meant 60 questions, and one heck of a long survey. We wished them luck.

The NTO team was undaunted! They did some research and found an appealing format for presenting this set of questions: a minimalist slider design that was more fun for participants than radio buttons. NTO also designed the online questionnaire so that only a handful of question pairs appeared on the screen at one time. The approach worked: 559 people responded to the survey, and 472 completed the whole questionnaire.

Dot plots for four skill topic areas: Conducting literature searches (Current=4, Desired=5); Understanding and searching for evidence-based research (Current=3, Desired=5); Developing/teaching classes (Current=3, Desired=5); Creating videos/web tutorials (Current=2, Desired=4).

The NEO, in turn, consulted the writings of one of our favorite dataviz oracles, Stephanie Evergreen. And she did not disappoint.  We found the ideal solution: dot plots!  Evergreen’s easy-to-follow instructions from this blog post allowed us to create dot plots in Excel, using a few creative hacks. This approach allowed us to thematically cluster results from numerous related questions into one chart. We were able to present data for 60 questions in a total of seven charts.
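We built our charts in Excel following Evergreen’s hacks, but if you would rather script your dot plots than hack them, here is a minimal sketch in Python with matplotlib. This is not our Excel method, just the same layout reproduced in code; the topic labels and values are taken from the chart caption above:

```python
import matplotlib.pyplot as plt

# Values taken from the chart caption above (1-5 proficiency scale)
topics = ["Conducting literature searches",
          "Understanding/searching evidence-based research",
          "Developing/teaching classes",
          "Creating videos/web tutorials"]
current = [4, 3, 3, 2]
desired = [5, 5, 5, 4]

fig, ax = plt.subplots(figsize=(8, 3))
y = range(len(topics))

# A light connector line makes the proficiency gap easy to read
for yi, (c, d) in enumerate(zip(current, desired)):
    ax.plot([c, d], [yi, yi], color="lightgray", zorder=1)

# Navy circles for current, green diamonds for desired (see design notes below)
ax.scatter(current, y, color="navy", marker="o", label="Current", zorder=2)
ax.scatter(desired, y, color="green", marker="D", label="Desired", zorder=2)

ax.set_yticks(list(y))
ax.set_yticklabels(topics)   # flush-left, multi-word labels
ax.invert_yaxis()            # first topic on top
ax.set_xlim(1, 5)
ax.set_xlabel("Median proficiency (1 = low, 5 = high)")
ax.legend(loc="lower right")
plt.tight_layout()
plt.show()
```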

I would like to point out a few design choices I made:

  • I used different shapes and colors to visually distinguish between “current proficiency” and “desired proficiency.” Navy blue for current proficiency was inspired by NNLM’s logo. I used a complementary green for the desired proficiency because green means “go.”
  • Evergreen prefers to place labels (e.g., “conducting literature searches”) close to the actual dots. That works well if your labels consist of one or two words. We found that our labels had to be longer to make sense. Setting them flush-left made them more readable.
  • I suggested plotting medians rather than means because many of the data distributions were skewed. You can use means, but you probably should round to whole numbers so decimals don’t distract from the gaps.
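If you are curious why the medians mattered, here is a quick illustration with made-up ratings (not the NTO’s data): a few high scores pull the mean away from the typical respondent, while the median stays put.

```python
from statistics import mean, median

# Made-up skewed ratings on a 1-5 scale: most respondents rate
# themselves low, but a couple rate themselves very high
ratings = [1, 1, 2, 2, 2, 2, 3, 3, 5, 5]

print(mean(ratings))    # 2.6 -- pulled upward by the two 5s
print(median(ratings))  # 2.0 -- the typical respondent
```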

Dot plots are quite versatile. We used the format to highlight gaps in proficiency, but other evaluators have demonstrated that dot plots work well for visualizing change over time and cross-group comparisons.

Dot plots are not as easy to create as the default Excel bar chart, but they are interesting. So give up bars for a while. Try plotting!
5 Things I Found Out as an Evaluation Newbie

Friday, March 10th, 2017

Since joining the NEO in October, I have learned a lot about the world of evaluation. Here are 5 things that have made me rethink how I approach evaluation, program planning, and overall life.

#1: Anyone can do evaluation
Think about a project that you are working on at work. Now take out your favorite pen and pad of paper, or open a new blank document, and write What? at the top of the page. Give yourself a few minutes to write or type out a general outline of the project. Do the same for the questions So What? and Now What? Reflect on why the project is important to your organization’s mission and what you will do with any newfound information from the project. Finished? Congratulations, you’ve just taken your first step as a budding evaluator by engaging in some Reflective Practice.

This first step does not mean you are an evaluation guru. It takes more than just a reflection piece to create a whole evaluation plan (actually, just 4 steps). What I hope you take away from this exercise is that every project could use some form of evaluation, and that there is no hocus pocus involved. All you need is a team willing to put in the effort to create “an evaluation culture of valuing evidence, valuing questioning, and valuing evaluative thinking” (Better Evaluation). I am sure you already have one of the most basic evaluation tools on hand, which leads me to #2.

#2: Excel is your best friend
I will not deny that Tableau, Power BI, and other really cool data visualization and business intelligence tools are out there. There’s also R, for those who are looking for another programming language to conquer. But if you are working at a small library or a nonprofit, it might be hard to get the training or the funds for such software. Enter Excel. You can do a lot of neat things with Excel. A quick search for Excel on Stephanie Evergreen’s blog turns up many free tutorials on how to make interesting (and useful) charts. You can even make pivot tables, which help you easily summarize complicated sets of data. Excel might not be the best tool for data visualization, but it’s a tool that many of us already have. (If you’re curious what a pivot table is actually doing, see the sketch below.)
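For the curious, here is what a pivot-table summary looks like when you spell it out in code. This is a sketch in Python with pandas, not something you need for the Excel route, and the class names and ratings are invented:

```python
import pandas as pd

# Invented workshop-feedback data: one row per survey response
responses = pd.DataFrame({
    "class":  ["PubMed", "PubMed", "MedlinePlus", "MedlinePlus", "PubMed"],
    "region": ["Midwest", "South", "Midwest", "South", "Midwest"],
    "rating": [5, 4, 3, 4, 5],
})

# Average rating for each class within each region --
# the same cross-tab summary an Excel pivot table produces
summary = responses.pivot_table(index="class", columns="region",
                                values="rating", aggfunc="mean")
print(summary)
```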

#3: This isn’t your grade school report card
I still remember the terror I would feel the day that report cards would come out. I should not have been afraid, because I usually received great grades. What always terrified me was the uncertainty of how my teachers reached the resulting grade. If the teacher was nice, he or she would explain how the grade was calculated, but most of the time I was left with a report card with no comments. If I wanted to strive for a better grade, I would have to arrange a meeting with the teacher. That never happened because I was very shy, and the prospect seemed more terrifying than getting the report card!

It can be scary to think about evaluating your program. What if it doesn’t come out well, and you get a “bad grade”? I’m here to tell you that the terror won’t be there, because you are not a student waiting for an ambiguous letter grade. You are in the teacher’s seat, and you get to decide what it means to succeed and what it means to fail. You have full control over the parameters of your evaluation! This does not guarantee success, but it does give you a fair shot at succeeding.

#4: Really, it’s ok to fail
Ever since I started working with the NEO, I’ve been confronted with failure. We start most of our meetings by retelling our most recent failures, like how we forgot to put something on our Outlook calendar or could not get something to work. We’ve even written a blog post about it! I call this a failure-positive work environment. Instead of beating ourselves up about these little failures, we learn from them and carry on.

I’ve found myself reflecting on my recent work at a nonprofit and how I’ve approached failures in the past. To put it bluntly, I haven’t done well with failure. In fact, my approach to failure has usually been embarrassment, guilt, and eventual burnout. I see now that these feelings, though hard to ignore, are completely unproductive. They are also easy to prevent. If you have an evaluation plan in place, you can turn a failure into just another data point on a path towards success. As Karen wrote in the blog post about failure, “Reporting [a failure] is kind of like that commercial about how to come in late to a meeting. If you bring the food, you’re not the person who came late, you’re the person who brought breakfast.”

#5: Do not ignore outcome evaluation!
It took a while for this information to sink in, but there are multiple ways to evaluate a program. Process evaluation assesses how you did something: “Are you doing what you said you’d do?” Outcome evaluation is a bit different, as it tries to answer whether the program achieved its goal: “Are you accomplishing the WHY of what you wanted to do?” When I think about these two types of evaluation, it’s tempting to focus on the process evaluation because I have more control over the process than the outcomes. I can plan a fantastic program and “pass” a process evaluation. The same plan can “fail” an outcome evaluation if people were not receptive to the program. Before you forgo your outcome evaluation plan, remember pointers #3 and #4: you are in charge of the parameters, and failure isn’t the end of the world. Prepare an outcome evaluation plan knowing that whatever happens, you’ll be able to use the information in the future. Also, remember that we have worksheets to help you write out any evaluation plan.

I hope you’ve found my reflections helpful in your evaluation planning. Let me know your favorite takeaways in the comments!

Photo credit: Kerry Kirk.

The Dark Side of Questionnaires: How to Identify Questionnaire Bias

Monday, March 6th, 2017

Villain cartoon with survey questions

People in my social media circles have been talking lately about bias in questionnaires. There are biased questionnaires out there. Some are biased by accident, and some on purpose. Some are biased in the questions themselves, and some in other ways, such as the selection of the people asked to complete them. Recently, a couple of my friends posted on Facebook that people should check out the NNLM Evaluation Office to learn about better questionnaires. Huzzah! This week’s post was born!

Here are a few things to look for when creating, responding to, or looking at the results of questionnaires.

Poorly worded questions

Sometimes simple problems with questions can lead to bias, whether accidental or on purpose.  Watch out for these kinds of questions:

  • Questions that have an unequal number of positive and negative responses.

Example:

Overall, how would you rate NIHSeniorHealth?

Excellent | Very Good | Good | Fair | Poor 

Notice that “Good” is the middle option (which should be neutral), and some people consider “Fair” to be a slightly positive term. A balanced version of this scale might read: Very Good | Good | Fair | Poor | Very Poor.

  • Leading questions, which are questions that are asked in a way that is intended to produce a desired answer.

Example:

Most people find MedlinePlus very easy to navigate.  Do you find it easy to navigate?  (Yes   No)

How would you feel if you had trouble navigating MedlinePlus? It would be hard to say ‘No’ to that question.

  •  Double-barreled questions, which are two questions in one.

 Example:

 Do you want to lower the cost of health care and limit compensation in medical malpractice lawsuits?

 This question has two parts – to answer yes or no, you have to agree or disagree with both parts. And who doesn’t want to lower health care costs?

  •  Loaded questions, which are questions that have a false or questionable logic inherent in the question (a “Have you stopped beating your wife” kind of question). Political surveys are notorious for using loaded questions.

Example:

Are you in favor of slowing the increase in autism by allowing people to choose whether or not to vaccinate their child?

This question makes the assumption that vaccinations cause autism. It might be difficult to answer if you don’t agree with that assumption.

The NEO has some suggestions for writing questions in Booklet 3: Collecting and Analyzing Evaluation Data, pages 5-7.

Questionnaire respondents

People think of question wording as the main source of questionnaire bias, but bias also creeps in through the respondents themselves.

  • Straw polls or convenience polls are polls given to whoever is easiest to reach: for example, polling the people attending an event, or putting a questionnaire on a newspaper homepage (or your Facebook page). They are problematic because they attract responses from people who are particularly interested in or energized by a topic, so you end up hearing from the noisy minority.
  • Who you send the questionnaire to has a lot to do with why you are sending it out. If you want to know the opinions of people in a small club, then that’s who you send it to. But if you are trying to generalize to a large number of people, you might want to try sampling, which involves learning about randomizing (see the sketch after this list). (Consider checking out Appendix C of NNLM PNR’s Measuring the Difference: Guide to Planning and Evaluating Health Information Outreach.) Keep in mind that the potential bias here isn’t necessarily in sending the questionnaires to a small group of people, but in how you represent the results of that questionnaire.
  • Low response rates may bias questionnaire results because it’s hard to know whether your respondents truly represent the group being surveyed. To get the best response rate possible, follow the methods described in the NEO post Boosting Response Rates with Invitation Letters.
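Drawing a random sample is less mysterious than it sounds. Here is a minimal sketch in Python, assuming you already have a list of addresses to draw from (the mailing list and sample size here are made up):

```python
import random

# Hypothetical mailing list; in practice, load it from a file or database
population = [f"member{i}@example.org" for i in range(1, 1001)]

random.seed(42)  # fixed seed so the draw can be reproduced and documented
sample = random.sample(population, 100)  # simple random sample, no repeats

print(len(sample), sample[:3])
```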

Lastly, the Purpose of the Questionnaire

Just like looking for bias in news or health information or anything else, you want to think about who is putting out the questionnaire and what its purpose is. A questionnaire isn’t always a tool for objectively gathering data. Here are some other things a questionnaire can be used for:

  • To energize a constituent base so that they will donate money (who hasn’t filled out a questionnaire that ends with a request for donations?)
  • To confirm what someone already thinks on a topic (those Facebook polls are really good for that)
  • To give people information while pretending to find out their opinion (a lot of marketing polls I get on my landline seem to be more about letting me know about some products than really finding out what I think).

If you want to know more about questionnaires, here are some of the NEO resources that can help:

Planning and Evaluating Health Information Outreach Projects, Booklet 3: Collecting and Analyzing Evaluation Data

Boosting Response Rates with Invitation Letters

More NEO Shop Talk blog posts about Questionnaires and Surveys

Picture attribution

Villano by J.J., licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license.