
NEO Shop Talk

The blog of the National Network of Libraries of Medicine Evaluation Office

Archive for 2017

Hungry for More Evaluation Ideas? How About a Tip a Day?

Friday, April 28th, 2017

Sometimes people ask us where we get ideas for our posts.  I’m going to tell you one of our big secrets: when we can’t think of something, we check out the American Evaluation Association’s daily blog AEA 365 | A Tip-A-Day by and for Evaluators.  Seriously, every single day some evaluator posts a tip for other evaluators.

In no way do we rewrite any of their posts, but we do in fact scrounge for ideas.  I thought you might want to scrounge for your own ideas by looking at this great blog now and then.

Just this week, there’s a post about using very short surveys throughout a project so that you can make changes to improve your project as you go: Using pulse surveys to get rapid actionable feedback from teachers during a professional development experience by Valerie Futch Ehrlich.  Along with describing how they did it, the post also recommends a survey tool called Waggl that allows people to vote on each other’s feedback, so the best ideas float to the top. Cool, right?

Here’s another post about how you can use tables to explain complicated ideas to potential funders in grant applications: Using Tables Effectively in Grant Proposals by Kate LaVelle and Judith Rhodes. 

Happy scrounging!


Finding Evaluator Resources in Surprising Places

Friday, April 21st, 2017

The National Training Office recently posted a Free and Low Cost Tools guide to help educators create and carry out training. Many of these tools can be used by more than just trainers, though; evaluators can make use of them as well. Here are some of my favorite tools from NTO’s guide.

Trello

I have used Trello to make personal to-do lists and to keep track of my progress with the recent NNLM website migration. Since Trello boards have a column structure, I thought to myself: could I use Trello to make an interactive Logic Model? Yes, yes I can!

I took the logic model from our Sunnydale blog series, and converted it into a Trello board. I made lists for Inputs, Activities, Reach, Short-term Outcomes, Intermediate Outcomes, and Long-term Outcomes. Each individual activity, outcome, etc. is written on a Trello card under the list item. You can comment on the cards, create checklists for each card, set due dates, and mark them complete, all on an interactive and visually pleasing logic model.

Are your cards getting cluttered with comments and checklists? Create a new board and link the board to a specific card. For example, you can create a separate Kanban board for the card “Start a 12-hour ‘Dusk to Dawn’ health reference hotline,” and attach the link to the new board to the card in the logic model. Then, once that activity is completed, you can mark the corresponding card in the logic model as complete.

Adobe Spark

Here at NEO we are all about creating aesthetically pleasing and easy-to-read reports. I believe we all want to create pretty reports, but sometimes lack the time and energy to create one. It is much easier to type up a simple Word document and send it as an attachment.

With Adobe Spark, you can create slick web pages, social media posts, and even videos for free. I created the image on the left for a future slide deck about Sunnydale’s evaluation program in less than 5 minutes.

This image is proportioned for a PowerPoint presentation, but you can create custom-sized images for physical fliers, Facebook, and a number of other social media platforms.

I also made this spoof report with Adobe Spark in about 10 minutes. No HTML/CSS knowledge was needed, and Adobe Spark hosts the web page for you. It was a fairly simple process.

SlideShark

I have not used SlideShark personally, but the idea of being able to present my PowerPoint slides from my mobile device makes my day. Instead of carrying around a bulky laptop, or forgetting a small flash drive, I can use SlideShark to broadcast my PowerPoint straight from my smartphone to a projector. In addition, I can share an online version of my presentation that is viewable at any time. That means no more uploading issues, or large email attachments.

Slack

Slack has to be my favorite tool on this list. At first, it might seem like a fancy forum, but it is so much more. Slack integrates with many other tools, such as Trello, Google Drive, and Skype, so you can keep all of your relevant work conversations, documents, and tools in one place, instead of hidden in various emails. Emojis and GIFs are encouraged, creating a fun and casual work environment.

Evaluators are always looking to increase stakeholder participation in their evaluation efforts. It can be especially hard to communicate when stakeholders do not live in the same area. Slack could be a useful way to keep stakeholders engaged in conversation no matter the distance. You could even throw a virtual data party! You can have one Slack channel for the entire party, or break up different activities into separate channels. Since Slack invites participants to use emojis and GIFs, the resulting conversation will certainly look like a party! 🎉

What are your favorite tools from NTO’s guide? Let us know in the comments!

Best Bargain for Evaluation Training: the AEA Summer Evaluation Institute

Friday, April 14th, 2017

Who doesn’t love a bargain?

If you love a bargain and you want to up your evaluation game, check out the American Evaluation Association’s Summer Evaluation Institute, now open for registration. This year, it will be held June 5 – 7 at the Omni Atlanta Hotel CNN Center.  For $395 (AEA members) or $480 (nonmembers), you get five 3-hour workshops.  You can choose from among 35 workshops taught by some of the most experienced evaluators in the field. The Summer Institute fee also covers lunch and snacks on most days.

The AEA Summer Institute also offers three pre-session workshops on June 4 for an additional fee. The first two on the list, Introduction to Evaluation and A Primer to Evaluation Theories and Approaches, are ideal for those who are new to the evaluation field. The third is a special workshop offered by another renowned evaluation training organization, The Evaluators Institute. This 6-hour workshop, Evaluation and Culture, will teach a step-by-step approach to developing a culturally responsive evaluation.

The NEO Shop Talk bloggers attend this conference regularly. If you decide to join us, let us know on our Facebook page. Maybe we can do lunch!

Logo for the 2017 AEA Summer Evaluation Institute

Nonrandom Ideas for Using Random Samples

Friday, April 7th, 2017

Five dice sitting on a computer keyboard

I bet random sampling is something most of you learned about in your Intro to Statistics class, then never thought about again.  Truthfully, many program evaluators seldom use random sampling.  We often are working with smallish groups of program participants and can seek feedback from everyone (also known as conducting a census).

Besides, even if you have access to contact information for a huge group of people (e.g., your membership, city, user base, fan club), why not invite everyone? Survey software packages usually charge a flat annual fee, so it costs the same to invite 80 or 8,000 people to fill out your questionnaire. In fact, with social media, you may even be able to post surveys at no cost.

Today, I am here to give you examples of how you can use random sampling for pesky evaluation challenges. Then, I end with simple resources for creating your own random samples.

Challenge #1: A convenience sample just isn’t good enough. If you invite all 10,000 people on a mailing list (aka your population) to complete a questionnaire and 3% respond, that is not a census. It’s a convenience sample. Those few respondents likely were energized by extreme positive or negative opinions about the topic. You have no idea the degree to which their opinions represent the majority. Convenience samples aren’t necessarily useless. Often, you can corroborate findings and assess bias if you have other evaluation data for comparison. A better response rate, however, automatically increases faith in the validity of your findings. A personalized communication process is a key tactic for boosting response rate, but it takes time. You must carefully review your email list. Are names spelled correctly? Are email addresses properly formatted? When email messages bounce, you should correct addresses and resend invitations. Nonrespondents should be contacted through personally written email messages or phone calls. Really late responders, if budget permits, should be mailed print copies of the questionnaires. A random sample, therefore, allows you to create a statistically valid shortlist of respondents and treat them with TLC.

Challenge #2: You have too many questions you want to ask: A long questionnaire is another potential threat to a good response rate.  If you have a significant number of people in your population, you can create several mini-questionnaires with a portion of your questions on each.  You can then divide up your population randomly and send different questions to each group.
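As a rough sketch of that random split (a few lines of Python with an invented mailing list, not a prescribed tool), you can shuffle the whole population once and then deal it out into equal groups:

```python
import random

# Hypothetical mailing list standing in for "your population"
population = [f"member_{i}@example.org" for i in range(9000)]

random.seed(7)  # fixed seed so the split is reproducible
shuffled = population[:]
random.shuffle(shuffled)  # randomize the order once

# Deal the shuffled list into three groups, one per mini-questionnaire
n_groups = 3
groups = [shuffled[i::n_groups] for i in range(n_groups)]

print([len(g) for g in groups])  # [3000, 3000, 3000]
```

Because the list is shuffled before dealing, each person lands in exactly one group, and each group is a random subset of the population.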

Challenge #3: You are conducting a qualitative interviewing project on a controversial topic:  Usually, random sampling is not recommended for qualitative interviewing projects. Purposeful sampling is more likely to provide a small sample of interviewees with the appropriate mix of experiences and viewpoints for your project. However, when subject matter is controversial, a random sampling method protects you from appearing to “cherry pick” participants who side with your perceived position. For example, your organization may want to implement changes that are getting mixed reactions from various influential stakeholders.  You would use random sampling to demonstrate a good faith effort to represent a broad range of opinions.

Spreadsheet with names of female scientists, random numbers, and the RAND function

Challenge #4: You want to collect evaluation information in a less-than-convenient manner.  It’s true, some evaluation methods can be annoying.  However, random sampling can limit the burden to you and those who help you gather data.  Suppose you want point-of-contact feedback from help desk customers.  You can randomly select a manageable number of weeks out of the 52 in a given year to collect user feedback. Maybe you want to systematically confirm the most popular usage times for your library computers. You could randomly choose different hours each day for a month or two to conduct and record a head count.

Challenge #5: You want to analyze existing data and there’s a lot of it.  In this information age, we are surrounded by organizational and even public information that might be useful in our evaluation projects.  Social media posts, “comment box” feedback, and training participants’ suggestions can be sources of valuable information. These sources also may provide daunting amounts of data. Rather than wade through 50,000 public ideas of how to remodel your library, for example, it is acceptable to randomly select a more manageable portion of responses and analyze those.

So, how do you pull a random sample?

You first have to determine your sample size.  For the qualitative examples described here, your time and resources will be your guide. For surveys conducted with larger populations, you can use an online calculator, such as this one from RAOSOFT. Then, to get your actual sample, you can put your list in Excel and use the RAND function to assign a random number to each individual on the list and pull your sample. Check out this video on how to pull a random sample using Excel.
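If you prefer a script to a spreadsheet, the same draw takes a few lines of Python. The mailing list here is invented, and 370 is roughly the sample size such a calculator returns for a population of 10,000 at a 5% margin of error and 95% confidence:

```python
import random

# Hypothetical mailing list of 10,000 addresses
population = [f"member_{i}@example.org" for i in range(10000)]

random.seed(42)  # fixed seed so the draw is reproducible
sample = random.sample(population, k=370)  # without replacement

print(len(sample))  # 370
```

`random.sample` draws without replacement, so no one is invited twice; everyone else on the list is simply left out of this round.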

You never know when your evaluation project may call out for a nice random sample. I have described just a few examples here. Happy sampling!

Nine Whys to an Elevator Speech

Monday, April 3rd, 2017

Elevator keypad

I know we already have a great post on doing elevator speeches.  But today I want to share an experience I had this week that suggested another way to approach elevator speeches, or basically what to say to someone about your project when you only have time for a sentence or two.

On Tuesday I presented a workshop at the Robert M. Bird Health Sciences Library at the University of Oklahoma Health Sciences Center. Each attendee was working on a logic model for a real-life project they were planning.  The first step in logic models is determining your long-term outcomes (as our Shop Talk readers may know, the NEO is all about determining your outcomes).  In Tuesday’s class we were using an exercise called Nine Whys (from Liberating Structures) to figure out the long-term outcomes of the projects.  In this partner exercise, you explain your activity and your partner asks “why is that important to you?”  After you answer, the partner asks “why” again (much like my 4-year-old daughter does).  You keep going back and forth, answering the question of why, until you feel that you have really reached the core of why the thing you are doing is important.

One thing I like about this assignment is that a lot of us think that it’s self-evident why something we’re doing is important.  But it might only be self-evident to those of us doing the same jobs.  So while it’s important for the logic model to state clearly how our activities tie to the long-term outcomes, it may be just as important for us to connect the dots for everyone we talk to about our projects.

In the workshop, attendees went around the room and described their project.  In general, people showed a lot of enthusiasm for their projects.  Then we formed partners and enacted the Nine Whys exercise. I then asked everyone once again to state their activity followed immediately by the last answer to “why”  (e.g. “I’m going to do ___ because ____.”).  Each person made an incredibly powerful statement – it was as if their enthusiasm had turned into power.

When I heard each successive statement, I thought “these are so powerful. If I were a stakeholder, I would want to know these things.” And then I thought they would make great elevator speeches.

As an example, Phill Jo, the head of the OU-Bird Health Science Library’s Access Services, described a plan she had to reorganize the Access Services area.  During the first Why, we found out that access services staff members were far apart from each other, and part of her plan was to move them closer together. The next Why explained that putting staff members closer together would make communication easier and build teamwork.  Successive Whys led to increased work effectiveness, and finally to shorter wait-times for access services including document delivery, circulation, interlibrary loans, etc. (and since it’s a health science library, that could lead to better health outcomes for some patients).

Here was her sentence at the end: “We’re planning to reorganize the space in Access Services, which will lead to more effective communication resulting in providing quality access services to our students, staff, and faculty more quickly.”

If the Provost of the University of Oklahoma Health Sciences Center happened to be standing next to Phill at a university function (ok, or in an elevator), and in an awkward social moment said “so, Ms. Jo, what are you working on,” wouldn’t the statement above be a great answer?  If she were to say only “we are reorganizing the space in Access Services,” the Provost might think “that sounds like a lot of money. I hope they aren’t just getting expensive new chairs.”  But if Phill were to immediately connect reorganization to less time waiting for ILLs or other access services, the Provost will probably remember how long he had to wait when he was a student, and applaud her for working to improve the organization. He might even ask for more information which would start a useful and less awkward conversation.

I encourage you to try a Nine-Whys project on your activities and see if it works to help you explain your projects to people outside your professional circles (and maybe even inspire yourself).

“Five Things I’ve Learned,” from an Evaluation Veteran

Friday, March 24th, 2017


Cindy Olney in her home office

Kalyna Durbak, the NEO’s novice evaluator, recently posted the five things she learned about evaluation since joining our staff. I thought I would steal, er, borrow Kalyna’s “five things” topic and write about the most important lessons I’ve learned after 25+ years in the evaluation field.

My first experience with program evaluation was in the 1980s, as a graduate assistant in the University of Arizona School of Medicine’s evaluation office.  Excel was just emerging as the cool new tool for data crunching. SPSS ran on room-sized mainframes, and punch cards were fading fast from the computing world. Social security numbers were routinely collected and stored along with other information about our research or evaluation participants. Our desktop computers ran on DOS. The Internet had not even begun wreaking havoc.

Yep, I’m old. The field has evolved over time and the work is more meaningful than ever. Here are five things I know now that I wish I had known when I started.

#1 Evaluation is different from research: Evaluation and research have distinctly different end goals. The aim of research is to add to general knowledge and understanding.  Evaluation, on the other hand, is designed to improve the value of something specific (programs, products, personnel, services) and to guide decision-making. Evaluation borrows many techniques from research methodology because those methods are a means to accurate, credible information. Technical accuracy of data means nothing if it cannot be applied to program improvement or decision-making.

#2 Evaluation is not the most important kid in the room. Evaluation, unchecked, can be resource-intensive, both in money and time. For every dollar and hour spent on evaluation, one dollar and hour is subtracted from funds used to produce or enhance a program or service. Project plans should focus first on service or program design and delivery, with proportional funding allocated to evaluation. Evaluation studies should not be judged by the same criteria used for research. Rather, the goal is to collect usable information in the most cost-effective manner possible.

#3 What gets measured gets done: Evaluation is a management tool that’s worth the investment. Project teams are most successful when they begin with the end in mind, and evaluation plans force discussion about desired results (outcomes) early on.  (Thank you, Stephen Covey, for helping evaluators advocate for their early involvement in projects.)  You must articulate what you want to accomplish before you can measure it.  You need a good action plan, logically linked to desired outcomes, before you can design a process assessment. Even if your resources limit you to the most rudimentary of evaluation methods, the mere process of committing to outcomes, activities, and measures on paper (in a logic model, please!) allows a team to take one giant step forward toward program success.

#4 Value is in the eyes of the stakeholders: While research asks “What happened,” evaluation asks “What happened, how important is it, and, knowing what we know, what do we do?”  That’s why an evaluation report that merely collects dust on a shelf is a travesty. The evaluation process is not complete until stakeholders have interpreted the information and contributed their perspectives on how to act on the findings. In essence, I am talking about rendering judgment: what do the findings say about the value of the program? That value judgment should, in turn, inform decisions about the future of the program. While factual findings should be objective, judgments are not.  Value is in the eyes of the people invested in the success of your program, aka stakeholders. Assessment of value may vary and even conflict among various stakeholder groups. For example, a public library health literacy program has several types of stakeholders. The library users will judge the program based on its usefulness to their lives. City government officials will judge the program based on how many taxpayers express satisfaction with the program.  Public librarians will value the program if it aligns with their library mission and brings visibility to their organization.  Evaluation is not complete until these multiple perspectives of value are explored and integrated into program decision-making.

#5 Everything I need to know about evaluation reporting I learned in kindergarten. Kindergarten was the first and possibly the last place I got to learn through play. In grad school, I learned to write 25-50 page research and evaluation reports. In my career, I discovered people read the executive summaries (if I was lucky), then stop. Evaluations are supposed to lead to learning about your programs, but no one thinks there’s anything fun about 50-page reports. Thankfully, evaluators have developed quite a few engaging ways to involve stakeholders in analyzing and using evaluation findings. For example, data dashboards allow stakeholders to interact with data visualizations and answer their own evaluation questions.  Data parties provide a social setting to share coffee, snacks, and data interpretations.  New innovations in evaluation reporting are being generated every year. It’s a great time to be an evaluator! More bling; less writing, and it’s all for the greater good.

So, there you have it: my five things.  These five lessons have served me well. I suspect they will continue to do so until bigger and better evaluation ideas come along. What about you? Share your insights below in our comments section.

Uninspired by Bars? Try Dot Plots

Friday, March 17th, 2017

Thanks to Jessi Van Der Volgen and Molly Knapp at the NNLM Training Office for allowing us to feature their assessment project and for providing the images in this post. 

Are you tired of bars?

I don’t mean the kind of bars where you celebrate and socialize. I mean the kind used in data visualization.  My evidence-free theory is that people still succumb to using the justifiably maligned pie chart simply because we cannot face one more bar graph.

Take heart, readers. Today, I’m here to tell you a story about some magic data that fell on the NEO’s doorstep and broke us free of our bar chart rut.

It all began with a project by our NNLM Training Office (NTO) colleagues, the intrepid leaders of NNLM’s instructional design and delivery. They do it all. They teach. They administratively support the regions’ training efforts. They initiate opportunities and resources to up-level instructional effectiveness throughout the network. One of their recent initiatives was a national needs assessment of NNLM training participants. That was the source of the fabulous data I write about today.

For context, I should mention that training is one of NNLM’s key strategies for reaching the furthest corners of our country to raise awareness, accessibility, and use of NLM health information resources. NNLM offers classes to all types of direct users (e.g., health professionals, community-based organization staff), but we value the efficiency of our “train-the-trainer” programs. In these classes, librarians and others learn how to use NLM resources so they, in turn, can teach their users. The national needs assessment was geared primarily toward understanding how to best serve “train-the-trainer” participants, who often take multiple classes to enhance their skills.

For the NTO’s needs assessment, one area of inquiry involved an inventory of learners’ need for training in 30 topic areas. The NTO wanted to assess participants’ desired level and their current level of proficiency in each topic.  That meant 60 questions. That was one heck of a long survey. We wished them luck.

The NTO team was undaunted!  They did some research and found a desirable format for presenting this set of questions (see upper left). The format had a nice minimalist design. The sliders were more fun for participants than radio buttons. Also, NTO designed the online questionnaire so that only a handful of question-pairs appeared on the screen at one time.  The approach worked: 559 people responded, and 472 completed the whole questionnaire.

Dot plots for four skill topic areas: Conducting literature searches (Current=4; Desired=5); Understanding and searching for evidence-based research (Current=3; Desired=5); Develop/teach classes (Current=3; Desired=5); Create videos/web tutorials (Current=2; Desired=4)

The NEO, in turn, consulted the writings of one of our favorite dataviz oracles, Stephanie Evergreen. And she did not disappoint.  We found the ideal solution: dot plots!  Evergreen’s easy-to-follow instructions from this blog post allowed us to create dot plots in Excel, using a few creative hacks. This approach allowed us to thematically cluster results from numerous related questions into one chart. We were able to present data for 60 questions in a total of seven charts.

I would like to point out a couple of design choices I made:

  • I used different shapes and colors to visually distinguish between “current proficiency” and “desired proficiency.” Navy blue for current proficiency was inspired by NNLM’s logo. I used a complementary green for the desired proficiency because green means “go.”
  • Evergreen prefers to place labels (e.g., “conducting literature searches”) close to the actual dots. That works well if your labels consist of one or two words. We found that our labels had to be longer to make sense. Setting them flush-left made them more readable.
  • I suggested plotting medians rather than means because many of the data distributions were skewed. You can use means, but probably should round to whole numbers so you don’t distract from the gaps.
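The same kind of chart can also be sketched outside Excel. Here is a rough matplotlib version using the medians from the four example topics in the chart caption above; the colors, marker sizes, and layout are illustrative choices, not a reproduction of the NEO’s actual chart:

```python
import matplotlib
matplotlib.use("Agg")  # draw off-screen; no display needed
import matplotlib.pyplot as plt

# Median ratings for the four example topics
topics = [
    "Conducting literature searches",
    "Understanding and searching for\nevidence-based research",
    "Develop/teach classes",
    "Create videos/web tutorials",
]
current = [4, 3, 3, 2]
desired = [5, 5, 5, 4]

fig, ax = plt.subplots(figsize=(7, 3))
y = list(range(len(topics)))

# Light connector lines first, so the dots sit on top and the gap reads clearly
for yi, c, d in zip(y, current, desired):
    ax.plot([c, d], [yi, yi], color="lightgray", linewidth=2, zorder=1)

# Navy circles for current proficiency, green squares for desired
ax.scatter(current, y, color="navy", marker="o", s=80, zorder=2, label="Current proficiency")
ax.scatter(desired, y, color="green", marker="s", s=80, zorder=2, label="Desired proficiency")

ax.set_yticks(y)
ax.set_yticklabels(topics)
ax.invert_yaxis()  # first topic on top
ax.set_xlim(1, 5.5)
ax.set_xlabel("Median rating (1 to 5)")
ax.legend(loc="lower right")
fig.tight_layout()
fig.savefig("dot_plot.png")
```

The gray connector between each pair of dots is what makes the proficiency gap visible at a glance.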

Dot plots are quite versatile. We used the format to highlight gaps in proficiency, but other evaluators have demonstrated that dot plots work well for visualizing change over time and cross-group comparisons.

Dot plots are not as easy to create as the default Excel bar chart, but they are interesting.  So give up bars for a while.  Try plotting!





5 Things I Found Out as an Evaluation Newbie

Friday, March 10th, 2017

Since joining the NEO in October, I have learned a lot about the world of evaluation. Here are 5 things that have made me rethink how I approach evaluation, program planning, and overall life.

#1: Anyone can do evaluation
Think about a project that you are working on at work. Now take out your favorite pen and pad of paper, or open a new blank document, and write What? at the top of the page. Give yourself a few minutes to write or type out a general outline of the project. Do the same for the questions So What? and Now What? Reflect on why the project is important to your organization’s mission, and what you will do with any newfound information from the project. Finished? Congratulations, you’ve just taken your first step as a budding evaluator by engaging in some Reflective Practice.

This first step does not mean you are an evaluation guru. It takes more than just a reflection piece to create a whole evaluation plan (actually, just 4 steps). What I hope you take away from this exercise is that every project could use some form of evaluation, and that there is no hocus pocus involved in evaluation. All you need is a team willing to put in the effort to create “an evaluation culture of valuing evidence, valuing questioning, and valuing evaluative thinking” (Better Evaluation). I am sure you even have one of the most basic tools you can use for evaluation, which leads me to #2.

#2: Excel is your best friend
I will not deny that Tableau, Power BI, and other really cool data visualization and business intelligence tools are out there. There’s also R, for those who are looking for another programming language to conquer. If you are working at a small library or a nonprofit, it might be hard to get the training or the funds for such software. Enter Excel. You can do a lot of neat things with Excel. A quick search for Excel on Stephanie Evergreen’s blog will result in many free tutorials on how to make interesting (and useful) charts in Excel. You can even make pivot tables, which can help you easily summarize complicated sets of data. Excel might not be the best tool for data visualization, but it’s a tool that many of us already have.
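If Python is in your toolbox too, pandas offers the same summarizing power as an Excel pivot table. Here is a minimal sketch with invented workshop-feedback records (the column names and ratings are made up for illustration):

```python
import pandas as pd

# Invented workshop-feedback records, standing in for your own data
feedback = pd.DataFrame({
    "region": ["East", "East", "West", "West", "West"],
    "format": ["In-person", "Online", "In-person", "Online", "Online"],
    "rating": [4, 5, 3, 4, 5],
})

# Average rating by region and format: the same summary an
# Excel pivot table would produce
summary = feedback.pivot_table(index="region", columns="format",
                               values="rating", aggfunc="mean")
print(summary)
```

Swapping `aggfunc` to `"count"` or `"median"` changes the summary the same way choosing a different value-field calculation does in Excel.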

#3: This isn’t your grade school report card
I still remember the terror I would feel the day that report cards would come out. I should not have been afraid, because I usually received great grades. What always terrified me was the uncertainty of how my teachers reached the resulting grade. If the teacher was nice, he or she would explain how the grade was calculated, but most of the time I was left with a report card with no comments. If I wanted to strive for a better grade, I would have to arrange a meeting with the teacher. That never happened because I was very shy, and the prospect seemed more terrifying than getting the report card!

It can be scary to think about an evaluation program. What if it doesn’t come out well, and you get a “bad grade”? I’m here to tell you that the terror won’t be there, because you are not a student waiting for an ambiguous letter grade. You are in the teacher’s seat, and you get to decide what it means to succeed and what it means to fail. You have full control over the parameters of your evaluation! This does not guarantee success, but it does give you a fair shot at succeeding.

#4: Really, it’s ok to fail
Ever since I started working with the NEO, I’ve been confronted with failure. We start most of our meetings by retelling our most recent failures, like how we forgot to put something on our Outlook calendar or could not get something to work. We’ve even written a blog post about it! I call this a failure-positive work environment. Instead of beating ourselves up about these little failures, we learn from them and carry on.

I’ve found myself reflecting on my recent work at a nonprofit and how I’ve approached failures in the past. To put it bluntly, I haven’t done well with failure. In fact, my approach to failure has usually been embarrassment, guilt, and eventual burnout. I see now that these feelings, though hard to ignore, are completely unproductive. They are also easy to prevent. If you have an evaluation plan in place, you can turn a failure into just another data point on a path towards success. As Karen wrote in the blog post about failure, “Reporting [a failure] is kind of like that commercial about how to come in late to a meeting. If you bring the food, you’re not the person who came late, you’re the person who brought breakfast.”

#5: Do not ignore outcome evaluation!
It took me a while for this information to sink in, but there are multiple ways to evaluate a program. Process evaluation assesses how you did something: “Are you doing what you said you’d do?” Outcome evaluation is a bit different, as it tries to answer the question of whether the program achieved its goal: “Are you accomplishing the WHY of what you wanted to do?” When I think about these two types of evaluation, it’s tempting to focus on the process evaluation because I have more control over the process than the outcomes. I can plan a fantastic program, and “pass” a process evaluation. The same plan can “fail” an outcomes evaluation if people were not receptive to the program. Before you forgo your outcomes evaluation plan, remember pointers #3 and #4: you are in charge of the parameters, and failure isn’t the end of the world. Prepare an outcomes evaluation plan knowing that whatever happens, you’ll be able to use the information in the future. Also, remember that we have worksheets to help you write out any evaluation plan.

I hope you’ve found my reflections helpful in your evaluation planning. Let me know your favorite takeaways in the comments!

Photo credit: Kerry Kirk.

The Dark Side of Questionnaires: How to Identify Questionnaire Bias

Monday, March 6th, 2017

Villain cartoon with survey questions

People in my social media circles have been talking lately about bias in questionnaires. Biased questionnaires do exist. Some are biased by accident and some on purpose; some are biased in the questions themselves and some in other ways, such as the selection of the people who are asked to complete them. Recently, a couple of my friends posted on Facebook that people should check out the NNLM Evaluation Office to learn about better questionnaires. Huzzah! This week's post was born!

Here are a few things to look for when creating, responding to, or looking at the results of questionnaires.

Poorly worded questions

Sometimes simple problems with questions can lead to bias, whether accidental or on purpose.  Watch out for these kinds of questions:

  • Questions that have an unequal number of positive and negative responses.


Overall, how would you rate NIHSeniorHealth?

Excellent | Very Good | Good | Fair | Poor 

Notice that “Good” is the middle option (which should be neutral), and some people consider “Fair” to be a slightly positive term.

  • Leading questions, which are questions that are asked in a way that is intended to produce a desired answer.


Most people find MedlinePlus very easy to navigate.  Do you find it easy to navigate?  (Yes   No)

If you had trouble navigating MedlinePlus, it would be hard to say "No" to that question.

  •  Double-barreled questions, which are two questions in one.


 Do you want to lower the cost of health care and limit compensation in medical malpractice lawsuits?

 This question has two parts – to answer yes or no, you have to agree or disagree with both parts. And who doesn’t want to lower health care costs?

  •  Loaded questions, which are questions that have a false or questionable logic inherent in the question (a “Have you stopped beating your wife” kind of question). Political surveys are notorious for using loaded questions.


Are you in favor of slowing the increase in autism by allowing people to choose whether or not to vaccinate their child?

This question makes the assumption that vaccinations cause autism. It might be difficult to answer if you don’t agree with that assumption.

The NEO has suggestions for writing questions in Booklet 3: Collecting and Analyzing Evaluation Data, pages 5-7.

Questionnaire respondents

People tend to think of the questions as the way to bias a questionnaire, but another form of bias can come from the questionnaire respondents.

  • Straw polls or convenience polls are polls given to whoever is easiest to reach: for example, polling the people attending an event, or putting a questionnaire on a newspaper's homepage (or your Facebook page). They are problematic because they attract responses from people who are particularly interested in or energized by a topic, so you end up hearing from the noisy minority.
  • Who you send the questionnaire to depends on why you are sending it. If you want to know the opinions of people in a small club, then that's who you would send it to. But if you are trying to represent a large number of people, you might want to try sampling, which involves learning about randomization. (Consider checking out Appendix C of NNLM PNR's Measuring the Difference: Guide to Planning and Evaluating Health Information Outreach.) Keep in mind that the potential bias here isn't necessarily in sending the questionnaire to a small group of people, but in how you represent the results of that questionnaire.
  • A low response rate may bias questionnaire results because it's hard to know whether your respondents truly represent the group being surveyed. The best way to prevent response-rate bias is to follow the methods described in the NEO post Boosting Response Rates with Invitation Letters to get the best response rate possible.
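To make the sampling idea above concrete, here is a minimal sketch of drawing a simple random sample of survey recipients in Python. The contact list and sample size are hypothetical; the point is that every person on the list has an equal chance of being selected, which is what protects you from convenience-poll bias.

```python
import random

# Hypothetical contact list; in practice this would be your full mailing list.
contacts = [f"member{n}@example.org" for n in range(1, 501)]

# Draw a simple random sample of 50 recipients, without replacement.
random.seed(42)  # fixed seed only so the draw is reproducible for this example
sample = random.sample(contacts, k=50)

print(len(sample))       # 50 recipients selected
print(len(set(sample)))  # 50 -- no one was selected twice
```

You would then send the questionnaire only to the addresses in `sample`, and report your results as representative of the whole list (subject to your response rate), rather than of whoever happened to be easiest to reach.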

Lastly, the Purpose of the Questionnaire

Just like looking for bias in news or health information or anything else, you want to think about who is putting out the questionnaire and what its purpose is. A questionnaire isn't always a tool for objectively gathering data. Here are some other things a questionnaire can be used for:

  • To energize a constituent base so that they will donate money (who hasn’t filled out a questionnaire that ends with a request for donations?)
  • To confirm what someone already thinks on a topic (those Facebook polls are really good for that)
  • To give people information while pretending to ask their opinion (a lot of marketing polls I get on my landline seem to be more about telling me about products than finding out what I think).

If you want to know more about questionnaires, here are some of the NEO resources that can help:

Planning and Evaluating Health Information Outreach Projects, Booklet 3: Collecting and Analyzing Evaluation Data

Boosting Response Rates with Invitation Letters

More NEO Shop Talk blog posts about Questionnaires and Surveys


Picture attribution

Villano by J.J., licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license.


A Logic Model Hack: The Project Reality Check

Friday, February 24th, 2017

Torn strips of gray duct tape

Logic models just may be the duct tape of the evaluation world.

A logic model’s usefulness extends well beyond initial project planning. (If you aren’t familiar with logic models, here’s a fun introduction.)  Today’s post starts a new NEO Shop Talk series to take our readers beyond Logic Models 101. We call this series Logic Model Hacks. Our first topic: The Project Reality Check. Watch for more hacks in future posts.

The Project Reality Check allows you to assess the feasibility of your project with stakeholders, experienced colleagues, and key informants. I refer to these people as "Reality Checkers." (I'm capitalizing their title out of respect for their importance.) Your logic model is your one-page project vision. Present it with a brief pitch, and you can bring anyone up to speed on your plans in a New York minute (or two). Then, with a few follow-up questions, you can guide your Reality Checkers in identifying key project blind spots. What assumptions are you making? What external factors could help or hinder your project? The figure below is the logic model template from the NEO's booklet Planning Outcomes-Based Outreach Projects. This template includes boxes for assumptions and external factors. By the time you complete your Project Reality Check, you will have excellent information to add to those boxes.

How to Conduct a Logic Model Reality Check

I always incorporate Project Reality Checks into any logic model development process I lead. Here is my basic game plan:

  • A small project team (2-5 people) works out the project plan and fills in the columns of the logic model. One person can do this step, if necessary.
  • After filling in the main columns, this working group drafts a list of assumptions and external factors for the boxes at the bottom. However, I don’t add the working group’s information to the logic model version for the Reality Checkers. You want fresh responses from them. Showing them your assumptions and external factors in advance may prevent them from generating their own. Best to give them a clean slate.
  • Make a list of potential Reality Checkers and divvy them up among project team members.
  • Prepare a question guide for querying your Reality Checkers.
  • Set up appointments. You can talk with your Reality Checkers in one-to-one conversations that probably will take 15-20 minutes. If you can convene an advisory group for your project, you could easily adapt the Project Reality Check interview process for a group discussion.

Here are the types of folks who might be good consultants for your project plans:

  • The people who will be working on the actual project.
  • Representatives from partner organizations.
  • Key informants. Here's a tip: if you conducted key informant interviews for a community assessment related to this project, don't hesitate to show your logic model to those interviewees. It's a way to follow up on the first interview, showing how you are using the information they provided, and an opportunity for a second contact with community opinion leaders.
  • Colleagues who conducted similar projects.
  • Funding agency staff. This is not always feasible, but take advantage of the opportunity if it's there. These folks have a bird's-eye view of what succeeds and fails in communities served by their organizations.

It's a good idea to have an interview plan, so that you can use your Reality Checkers' time efficiently and adequately capture their valuable advice. I would start with a short elevator speech to provide context for the logic model. Here's a template you can adapt:

"We have this exciting project, where we are trying to ___ [add your goal here]. Specifically, we want ___ [the people or organization benefiting from your project] to ___ [add your outcomes]. We plan to do it by ___ [summarize your activities]. Here's our logic model, which shows a few more details of our plan."

Then, you want to follow up with questions for the Reality Checkers:

  • What assumptions are we making that you think we need to check?
  • Are there resources in the community or in our partner organization that might help us do this project?
  • Are there barriers or challenges we should be prepared to address?
  • I would also like to check some assumptions our project team is making. Present your assumptions at the end of the discussion and get the Reality Checkers’ assessment.

How to Apply What You Learn

After completing the interviews, your working team should reconvene to process what you learned. Remove the assumptions that the interviews confirmed, and add any new assumptions to be investigated. Adapt your logic model to leverage newly discovered resources (positive external factors), or change your activities to address challenges or barriers. Prepare contingency plans for any project turbulence predicted by your Reality Checkers.

Chances are high that you will be changing your logic model after the Project Reality Check. The good news is that you will only have to make changes on paper. That’s much easier than responding to problems that arise because you didn’t identify your blind spots during the planning phase of your project.

Other blog posts about logic models:

How I Learned to Stop Worrying and Love Logic Models (The Chili Lesson)

An Easier Way to Plan: Tearless Logic Models

Logic Models: Handy Hints

Last updated on Monday, June 27, 2016

Funded by the National Library of Medicine under Contract No. UG4LM012343 with the University of Washington.