
NEO Shop Talk

The blog of the National Network of Libraries of Medicine Evaluation Office

“Five Things I’ve Learned,” from an Evaluation Veteran

 

Cindy Olney in her home office

Kalyna Durbak, the NEO’s novice evaluator, recently posted the five things she learned about evaluation since joining our staff. I thought I would steal, er, borrow Kalyna’s “five things” topic and write about the most important lessons I’ve learned after 25+ years in the evaluation field.

My first experience with program evaluation was in the 1980s, as a graduate assistant in the University of Arizona School of Medicine’s evaluation office.  Excel was just emerging as the cool new tool for data crunching. SPSS ran on room-sized mainframes, and punch cards were fading fast from the computing world. Social security numbers were routinely collected and stored along with other information about our research or evaluation participants. Our desktop computers ran on DOS. The Internet had not even begun wreaking havoc.

Yep, I’m old. The field has evolved over time and the work is more meaningful than ever. Here are five things I know now that I wish I had known when I started.

#1 Evaluation is different from research: Evaluation and research have distinctly different end goals. The aim of research is to add to general knowledge and understanding. Evaluation, on the other hand, is designed to improve the value of something specific (programs, products, personnel, services) and to guide decision-making. Evaluation borrows many techniques from research methodology because those methods are a means to accurate, credible information. But technical accuracy of data means nothing if it cannot be applied to program improvement or decision-making.

#2 Evaluation is not the most important kid in the room. Evaluation, unchecked, can be resource-intensive, both in money and time. Every dollar and hour spent on evaluation is a dollar and hour subtracted from producing or enhancing the program or service itself. Project plans should focus first on service or program design and delivery, with proportional funding allocated to evaluation. Evaluation studies should not be judged by the same criteria used for research. Rather, the goal is to collect usable information in the most cost-effective manner possible.

#3 What gets measured gets done: Evaluation is a management tool that’s worth the investment. Project teams are most successful when they begin with the end in mind, and evaluation plans force discussion about desired results (outcomes) early on. (Thank you, Stephen Covey, for helping evaluators advocate for their early involvement in projects.) You must articulate what you want to accomplish before you can measure it. You need a good action plan, logically linked to desired outcomes, before you can design a process assessment. Even if your resources limit you to the most rudimentary of evaluation methods, the mere process of committing to outcomes, activities, and measures on paper (in a logic model, please!) allows a team to take one giant step forward toward program success.

#4 Value is in the eyes of the stakeholders: While research asks “What happened?”, evaluation asks “What happened, how important is it, and, knowing what we know, what do we do?” That’s why an evaluation report that merely collects dust on a shelf is a travesty. The evaluation process is not complete until stakeholders have interpreted the information and contributed their perspectives on how to act on the findings. In essence, I am talking about rendering judgment: what do the findings say about the value of the program? That value judgment should, in turn, inform decisions about the future of the program. While factual findings should be objective, judgments are not. Value is in the eyes of the people invested in the success of your program, aka stakeholders. Assessment of value may vary and even conflict among stakeholder groups. For example, a public library health literacy program has several types of stakeholders. The library users will judge the program based on its usefulness to their lives. City government officials will judge the program based on how many taxpayers express satisfaction with the program. Public librarians will value the program if it aligns with their library mission and brings visibility to their organization. Evaluation is not complete until these multiple perspectives of value are explored and integrated into program decision-making.

#5 Everything I need to know about evaluation reporting I learned in kindergarten. Kindergarten was the first and possibly the last place I got to learn through play. In grad school, I learned to write 25-50 page research and evaluation reports. In my career, I discovered people read the executive summary (if I was lucky), then stopped. Evaluations are supposed to lead to learning about your programs, but no one thinks there’s anything fun about 50-page reports. Thankfully, evaluators have developed quite a few engaging ways to involve stakeholders in analyzing and using evaluation findings. For example, data dashboards allow stakeholders to interact with data visualizations and answer their own evaluation questions. Data parties provide a social setting to share coffee, snacks, and data interpretations. Innovations in evaluation reporting are being generated every year. It’s a great time to be an evaluator! More bling, less writing, and it’s all for the greater good.

So, there you have it: my five things. These five lessons have served me well. I suspect they will continue to do so until bigger and better evaluation ideas come along. What about you? Share your insights below in our comments section.

Uninspired by Bars? Try Dot Plots

Thanks to Jessi Van Der Volgen and Molly Knapp at the NNLM Training Office for allowing us to feature their assessment project and for providing the images in this post. 

Are you tired of bars?

I don’t mean the kind of bars where you celebrate and socialize. I mean the kind used in data visualization.  My evidence-free theory is that people still succumb to using the justifiably maligned pie chart simply because we cannot face one more bar graph.

Take heart, readers. Today, I’m here to tell you a story about some magic data that fell on the NEO’s doorstep and broke us free of our bar chart rut.

It all began with a project by our NNLM Training Office (NTO) colleagues, the intrepid leaders of NNLM’s instructional design and delivery. They do it all. They teach. They administratively support the regions’ training efforts. They initiate opportunities and resources to up-level instructional effectiveness throughout the network. One of their recent initiatives was a national needs assessment of NNLM training participants. That was the source of the fabulous data I write about today.

For context, I should mention that training is one of NNLM’s key strategies for reaching the furthest corners of our country to raise awareness, accessibility, and use of NLM health information resources. NNLM offers classes to all types of direct users (e.g., health professionals, community-based organization staff), but we value the efficiency of our “train-the-trainer” programs. In these classes, librarians and others learn how to use NLM resources so they, in turn, can teach their users. The national needs assessment was geared primarily toward understanding how to best serve “train-the-trainer” participants, who often take multiple classes to enhance their skills.

For the NTO’s needs assessment, one area of inquiry involved an inventory of learners’ needs for training in 30 topic areas. The NTO wanted to assess participants’ desired level and their current level of proficiency in each topic. That meant 60 questions. That was one heck of a long survey. We wished them luck.

The NTO team was undaunted! They did some research and found a desirable format for presenting this set of questions (see upper left). The format had a nice minimalist design. The sliders were more fun for participants than radio buttons. Also, NTO designed the online questionnaire so that only a handful of question-pairs appeared on the screen at one time. The approach worked: 559 people responded, and 472 completed the whole questionnaire.

Dot plots for four skill topic areas: Conducting literature searches (Current = 4; Desired = 5); Understanding and searching for evidence-based research (Current = 3; Desired = 5); Develop/teach classes (Current = 3; Desired = 5); Create videos/web tutorials (Current = 2; Desired = 4)

The NEO, in turn, consulted the writings of one of our favorite dataviz oracles, Stephanie Evergreen. And she did not disappoint.  We found the ideal solution: dot plots!  Evergreen’s easy-to-follow instructions from this blog post allowed us to create dot plots in Excel, using a few creative hacks. This approach allowed us to thematically cluster results from numerous related questions into one chart. We were able to present data for 60 questions in a total of seven charts.

I would like to point out a couple of design choices I made (pulled together in the code sketch after this list):

  • I used different shapes and colors to visually distinguish between “current proficiency” and “desired proficiency.” Navy blue for current proficiency was inspired by NNLM’s logo. I used a complementary green for the desired proficiency because green means “go.”
  • Evergreen prefers to place labels (e.g., “conducting literature searches”) close to the actual dots. That works well if your labels consist of one or two words. We found that our labels had to be longer to make sense. Setting them flush-left made them more readable.
  • I suggested plotting medians rather than means because many of the data distributions were skewed. You can use means, but probably should round to whole numbers so you don’t distract from the gaps.
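
If you would rather script your dot plots than wrangle Excel, here is a minimal sketch of the same chart format in Python with matplotlib. The medians are the ones from the figure above; the colors, markers, and sizing are my own choices, not part of the NTO study.

```python
# A minimal dot-plot sketch in matplotlib; medians echo the figure above.
import matplotlib.pyplot as plt

topics = [
    "Conducting literature searches",
    "Understanding and searching for evidence-based research",
    "Develop/teach classes",
    "Create videos/web tutorials",
]
current = [4, 3, 3, 2]   # median current proficiency (1-5 scale)
desired = [5, 5, 5, 4]   # median desired proficiency (1-5 scale)

fig, ax = plt.subplots(figsize=(8, 3))
y = list(range(len(topics)))

# Light connecting lines emphasize the gap between each pair of dots.
for yi, (c, d) in enumerate(zip(current, desired)):
    ax.plot([c, d], [yi, yi], color="lightgray", zorder=1)

# Navy circles for current proficiency (per the NNLM logo), green squares
# for desired proficiency, because green means "go."
ax.scatter(current, y, s=120, marker="o", color="navy", label="Current", zorder=2)
ax.scatter(desired, y, s=120, marker="s", color="seagreen", label="Desired", zorder=2)

ax.set_yticks(y)
ax.set_yticklabels(topics)   # in Excel we set these flush-left for readability
ax.invert_yaxis()            # first topic on top
ax.set_xlim(1, 5.5)
ax.set_xlabel("Median proficiency rating (1 = low, 5 = high)")
ax.legend(frameon=False)
plt.tight_layout()
plt.show()
```

The connecting line between each pair of dots is optional, but it makes the proficiency gap, which is the real story here, much easier to see.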

Dot plots are quite versatile. We used the format to highlight gaps in proficiency, but other evaluators have demonstrated that dot plots work well for visualizing change over time and cross-group comparisons.

Dot plots are not as easy to create as the default Excel bar chart, but they are interesting.  So give up bars for a while.  Try plotting!


5 Things I Found Out as an Evaluation Newbie

Since joining the NEO in October, I have learned a lot about the world of evaluation. Here are 5 things that have made me rethink how I approach evaluation, program planning, and overall life.

#1: Anyone can do evaluation
Think about a project that you are working on at work. Now take out your favorite pen and pad of paper, or open a new blank document, and write What? at the top of the page. Give yourself a few minutes to write or type out a general outline of the project. Do the same for the questions So What? and Now What? Reflect on why the project is important to your organization’s mission, and what you will do with any newfound information from the project. Finished? Congratulations, you’ve just taken your first step as a budding evaluator by engaging in some Reflective Practice.

This first step does not mean you are an evaluation guru. It takes more than just a reflection piece to create a whole evaluation plan (actually, just 4 steps). What I hope you take away from this exercise is that every project could use some form of evaluation, and that there is no hocus pocus involved in evaluation. All you need is a team willing to put in the effort to create “an evaluation culture of valuing evidence, valuing questioning, and valuing evaluative thinking” (Better Evaluation). I am sure you even have one of the most basic tools you can use for evaluation, which leads me to #2.

#2: Excel is your best friend
I will not deny that Tableau, Power BI, and other really cool data visualization and business intelligence software is out there. There’s also R, for those who are looking for another programming language to conquer. If you are working at a small library or a nonprofit, it might be hard to get the training or the funds for such software. Enter Excel. You can do a lot of neat things with Excel. A quick search for Excel on Stephanie Evergreen’s blog will turn up many free tutorials on how to make interesting (and useful) charts in Excel. You can even make pivot tables, which can help you easily summarize complicated sets of data. Excel might not be the best tool for data visualization, but it’s a tool that many of us already have.
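
And if you ever do outgrow the spreadsheet, the pivot-table idea carries straight over to free tools like Python’s pandas library. Here is a tiny sketch with made-up class-attendance data (the column names and numbers are mine, purely for illustration):

```python
# Pivot tables aren't unique to Excel; pandas mirrors the feature.
import pandas as pd

data = pd.DataFrame({
    "region":    ["North", "North", "South", "South", "South"],
    "class":     ["MedlinePlus", "PubMed", "MedlinePlus", "PubMed", "PubMed"],
    "attendees": [12, 8, 20, 15, 9],
})

# Rows = region, columns = class, cells = total attendees; this is exactly
# what an Excel pivot table of the same data would show.
summary = data.pivot_table(index="region", columns="class",
                           values="attendees", aggfunc="sum", fill_value=0)
print(summary)
```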

#3: This isn’t your grade school report card
I still remember the terror I would feel the day that report cards would come out. I should not have been afraid, because I usually received great grades. What always terrified me was the uncertainty of how my teachers reached the resulting grade. If the teacher was nice, he or she would explain how the grade was calculated, but most of the time I was left with a report card with no comments. If I wanted to strive for a better grade, I would have to arrange a meeting with the teacher. That never happened because I was very shy, and the prospect seemed more terrifying than getting the report card!

It can be scary to think about evaluating a program. What if it doesn’t come out well, and you get a “bad grade”? I’m here to tell you that the terror won’t be there, because you are not a student waiting for an ambiguous letter grade. You are in the teacher’s seat, and you get to decide what it means to succeed and what it means to fail. You have full control over the parameters of your evaluation! This does not guarantee success, but it does give you a fair shot at succeeding.

#4: Really, it’s ok to fail
Ever since I started working with the NEO, I’ve been confronted with failure. We start most of our meetings by retelling our most recent failures, like how we forgot to put something on our Outlook calendar or could not get something to work. We’ve even written a blog post about it! I call this a failure-positive work environment. Instead of beating ourselves up about these little failures, we learn from them and carry on.

I’ve found myself reflecting on my recent work at a nonprofit and how I’ve approached failures in the past. To put it bluntly, I haven’t done well with failure. In fact, my approach to failure has usually been embarrassment, guilt, and eventual burnout. I see now that these feelings, though hard to ignore, are completely unproductive. They are also easy to prevent. If you have an evaluation plan in place, you can turn a failure into just another data point on a path towards success. As Karen wrote in the blog post about failure, “Reporting [a failure] is kind of like that commercial about how to come in late to a meeting. If you bring the food, you’re not the person who came late, you’re the person who brought breakfast.”

#5: Do not ignore outcome evaluation!
It took me a while for this information to sink in, but there are multiple ways to evaluate a program. Process evaluation assesses how you did something: “Are you doing what you said you’d do?” Outcome evaluation is a bit different, as it tries to answer whether the program achieved its goal: “Are you accomplishing the WHY of what you wanted to do?” When I think about these two types of evaluation, it’s tempting to focus on the process evaluation because I have more control over the process than the outcomes. I can plan a fantastic program, and “pass” a process evaluation. The same plan can “fail” an outcomes evaluation if people were not receptive to the program. Before you forgo your outcomes evaluation plan, remember pointers #3 and #4: you are in charge of the parameters, and failure isn’t the end of the world. Prepare an outcomes evaluation plan knowing that whatever happens, you’ll be able to use the information in the future. Also, remember that we have worksheets to help you write out any evaluation plan.

I hope you’ve found my reflections helpful in your evaluation planning. Let me know your favorite takeaways in the comments!

Photo credit: Kerry Kirk.

The Dark Side of Questionnaires: How to Identify Questionnaire Bias

Villain cartoon with survey questions

People in my social media circles have been talking lately about bias in questionnaires. Some questionnaires are biased by accident and some on purpose. Some are biased in the questions themselves and some in other ways, such as the selection of the people who are asked to complete them. Recently, a couple of my friends posted on Facebook that people should check out the NNLM Evaluation Office to learn about better questionnaires. Huzzah! This week’s post was born!

Here are a few things to look for when creating, responding to, or looking at the results of questionnaires.

Poorly worded questions

Sometimes simple problems with questions can lead to bias, whether accidental or on purpose.  Watch out for these kinds of questions:

  • Questions that have an unequal number of positive and negative response options.

Example:

Overall, how would you rate NIHSeniorHealth?

Excellent | Very Good | Good | Fair | Poor 

Notice that “Good” is the middle option (which should be neutral), and some people consider “Fair” to be a slightly positive term.
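A more balanced alternative (one possibility, not the actual NIHSeniorHealth wording) would give the scale equal positive and negative anchors around a true neutral midpoint: Very Good | Good | Neither Good nor Poor | Poor | Very Poor.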

  • Leading questions, which are questions that are asked in a way that is intended to produce a desired answer.

Example:

Most people find MedlinePlus very easy to navigate.  Do you find it easy to navigate?  (Yes   No)

How would you feel if you had trouble navigating MedlinePlus? It would be hard to say ‘No’ to that question.

  •  Double-barreled questions, which are two questions in one.

 Example:

Do you want to lower the cost of health care and limit compensation to medical malpractice lawsuits?

 This question has two parts – to answer yes or no, you have to agree or disagree with both parts. And who doesn’t want to lower health care costs?

  •  Loaded questions, which are questions that have a false or questionable logic inherent in the question (a “Have you stopped beating your wife” kind of question). Political surveys are notorious for using loaded questions.

Example:

Are you in favor of slowing the increase in autism by allowing people to choose whether or not to vaccinate their child?

This question makes the assumption that vaccinations cause autism. It might be difficult to answer if you don’t agree with that assumption.

The NEO has some suggestions for writing questions in Booklet 3: Collecting and Analyzing Evaluation Data, pages 5-7.

Questionnaire respondents

People think of the questions as a way to bias questionnaires, but another form of bias can be found in the questionnaire respondents.

  • Straw polls or convenience polls are polls given to whoever is easiest to reach. For example, polling the people who are attending an event, or putting a questionnaire on a newspaper homepage (or your Facebook page). The reason they are problematic is that they attract responses from people who are particularly interested or energized by a topic, so you are hearing from the noisy minority.
  • Who you send the questionnaire to has a lot to do with why you are sending it out. If you want to know about the opinions of people in a small club, then that’s who you would send it to. But if you are trying to reach a large number of people, you might want to try sampling, which involves learning about randomizing. (Consider checking out Appendix C of NNLM PNR’s Measuring the Difference: Guide to Planning and Evaluating Health Information Outreach.) Keep in mind that the potential bias here isn’t necessarily in sending the questionnaires to a small group of people, but in how you represent the results of that questionnaire.
  • A low response rate may bias questionnaire results because it’s hard to know if your respondents truly represent the group being surveyed. The best way to prevent response-rate bias is to follow the methods described in the NEO post Boosting Response Rates with Invitation Letters.

Lastly, the Purpose of the Questionnaire

Just like looking for bias in news or health information or anything else, you want to think about who is putting out the questionnaire and what its purpose is. A questionnaire isn’t always a tool for objectively gathering data. Here are some other things a questionnaire can be used for:

  • To energize a constituent base so that they will donate money (who hasn’t filled out a questionnaire that ends with a request for donations?)
  • To confirm what someone already thinks on a topic (those Facebook polls are really good for that)
  • To give people information while pretending to find out their opinion (a lot of marketing polls I get on my landline seem to be more about letting me know about some products than really finding out what I think).

If you want to know more about questionnaires, here are some of the NEO resources that can help:

Planning and Evaluating Health Information Outreach Projects, Booklet 3: Collecting and Analyzing Evaluation Data

Boosting Response Rates with Invitation Letters

More NEO Shop Talk blog posts about Questionnaires and Surveys

 

Picture attribution

Villano by J.J., licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license.

 

A Logic Model Hack: The Project Reality Check

Torn strips of gray duct tape

Logic models just may be the duct tape of the evaluation world.

A logic model’s usefulness extends well beyond initial project planning. (If you aren’t familiar with logic models, here’s a fun introduction.)  Today’s post starts a new NEO Shop Talk series to take our readers beyond Logic Models 101. We call this series Logic Model Hacks. Our first topic: The Project Reality Check. Watch for more hacks in future posts.

The Project Reality Check allows you to assess the feasibility of your project with stakeholders, experienced colleagues, and key informants. I refer to these people as “Reality Checkers.” (I’m capping their title out of respect for their importance.) Your logic model is your one-page project vision. Present it with a brief pitch, and you can bring anyone up to speed on your plans in a New York minute (or two). Then, with a few follow-up questions, you can guide your Reality Checkers in identifying key project blind spots. What assumptions are you making? What external factors could help or hinder your project? The figure below is the logic model template from the NEO’s booklet Planning Outcomes-Based Outreach Projects. This template includes boxes for assumptions and external factors. By the time you complete your Project Reality Check, you will have excellent information to add to those boxes.

How to Conduct a Logic Model Reality Check

I always incorporate Project Reality Checks into any logic model development process I lead. Here is my basic game plan:

  • A small project team (2-5 people) works out the project plan and fills in the columns of the logic model. One person can do this step, if necessary.
  • After filling in the main columns, this working group drafts a list of assumptions and external factors for the boxes at the bottom. However, I don’t add the working group’s information to the logic model version for the Reality Checkers. You want fresh responses from them. Showing them your assumptions and external factors in advance may prevent them from generating their own. Best to give them a clean slate.
  • Make a list of potential Reality Checkers and divvy them up among project team members.
  • Prepare a question guide for querying your Reality Checkers.
  • Set up appointments. You can talk with your Reality Checkers in one-to-one conversations that probably will take 15-20 minutes. If you can convene an advisory group for your project, you could easily adapt the Project Reality Check interview process for a group discussion.

Here are the types of folks who might be good consultants for your project plans:

  • The people who will be working on the actual project.
  • Representatives from partner organizations.
  • Key informants. Here’s a tip: If you conducted key informant interviews for a community assessment related to this project, don’t hesitate to show your logic model to those interviewees. It is a way to follow up on the first interview, showing how you are using the information they provided. This is an opportunity for a second contact with community opinion leaders.
  • Colleagues who conducted similar projects.
  • Funding agency staff. This is not always feasible, but take advantage of the opportunity if it’s there. These folks have a bird’s-eye view of what succeeds and fails in communities served by their organizations.

It’s a good idea to have an interview plan, so that you can use your Reality Checkers’ time efficiently and adequately capture their valuable advice. I would start with a short elevator speech, to provide context for the logic model. Here’s a template you can adapt:

We have this exciting project, where we are trying to ___ [add your goal here]. Specifically, we want ___ [the people or organization benefiting from your project] to ___ [add your outcomes]. We plan to do it by ___ [summarize your activities]. Here’s our logic model, which shows a few more details of our plan.

Then, you want to follow up with questions for the Reality Checkers:

  • What assumptions are we making that you think we need to check?
  • Are there resources in the community or in our partner organization that might help us do this project?
  • Are there barriers or challenges we should be prepared to address?
  • Present your own team’s assumptions at the end of the discussion (“I would also like to check some assumptions our project team is making”) and get the Reality Checkers’ assessment.

How to Apply What You Learn

After completing the interviews, your working team should reconvene to process what you learned. Remove assumptions that the interviews confirmed. Add any new assumptions to be investigated. Adapt your logic model to leverage newly discovered resources (positive external factors) or change your activities to address challenges or barriers. Prepare contingency plans for project turbulence predicted by your Reality Checkers.

Chances are high that you will be changing your logic model after the Project Reality Check. The good news is that you will only have to make changes on paper. That’s much easier than responding to problems that arise because you didn’t identify your blind spots during the planning phase of your project.

Other blog posts about logic models:

How I Learned to Stop Worrying and Love Logic Models (The Chili Lesson)

An Easier Way to Plan: Tearless Logic Models

Logic Models: Handy Hints

ABP: Always Be Pilot-testing (some experiences with questionnaire design)

Cover of NEO's Booklet 3 on collecting and analyzing evaluation data

This week I have been working on a questionnaire for the Texas Library Association (TLA) on the cultural climate of TLA. Having just gone through this process, I will tell you that NEO’s Booklet #3: Collecting and Analyzing Evaluation Data has really useful tips on how to write questionnaires (pp. 3-7). I thought it might be good to talk about some of the tips that turned out particularly useful for this project, but the theme of all of them is “always be pilot-testing.”

NEO’s #1 Tip: Always pilot test!

This questionnaire is still being pilot tested. So far I have thought the questionnaire was perfect at least 10 times, and we are still finding out about important changes from people who are pilot testing it for our committee.  One important part of this tip is to include stakeholders in the pilot testing.  Stakeholders have points of view that may not be included in the points of view of the people creating the survey.  After we have what we think is a final version, our questionnaire will be tested by the TLA Executive Board.  While this process sounds exhausting, every single change that has been made (to the questionnaire that I thought was finished) has fundamentally improved it.

There is a response for everyone who completes the question

Our questionnaire asks questions about openness and inclusiveness to people of diverse races, ethnicities, nationalities, ages, gender identities, sexual identities, cognitive and physical disabilities, perceived socioeconomic status, etc.  We are hoping to get personal opinions from all kinds of librarians who live all over Texas.  By definition this means that many of the questions are possibly sensitive, and may be hot button issues for some people.

In addition to wording the questions carefully, it’s important that every question has a response for everyone who completes the question. We would hate for someone not to find the response that best works for them, and then leave the questionnaire unanswered, or even worse get their feelings hurt or feel insulted. For example, we have a question about whether our respondents feel that their populations are represented in TLA’s different groups (membership, leadership, staff, etc.). At first the answers were just “yes” or “no.” But then (from responses in the pilot testing) we realized that a person may feel that they belong to more than one population. For example, what if someone is both Asian and has a physical disability? Perhaps they feel that one group is well represented and the other group not represented at all. How would they answer the question? Without creating a complicated response, we decided to change our response options to “yes,” “some are,” and “no.”

“Don’t Know” or “Not Applicable”

In a similar vein, sometimes people do not know the answer to the question you are asking. They can feel pressured to make a choice among the options rather than skip the question (and if they do skip the question, the data will not show why). For example, we are asking a question about whether people feel that TLA is inclusive, exclusionary, or neither. Originally I thought those three choices covered all the bases. But during discussions with Cindy (who was pilot testing the questionnaire), we realized that if someone simply didn’t know, they wouldn’t feel comfortable saying that TLA was neither inclusive nor exclusionary. So we added a “Don’t know” option.

Speaking from experience, the most important thing is keeping an open mind. Remember that the people taking your questionnaire will be seeing it from different eyes than yours, and they are the ones you are hoping to get information from.  So, while I recommend following all the tips in Booklet 3, to get the best results, make sure that you test your questionnaire with a wide variety of people who represent those who will be taking it.

Got Documents? How to Do a Document Review

Are you an introvert? Then I have an evaluation method for you: document review. You usually can do this method from the comfort of your own office. No scary interactions with strangers.

Truth is, my use of existing data in evaluation seldom rises to the definition of true document review. I usually read through relevant documents to understand a program’s history or context. However, a recent blog post by Linda Cabral in the AEA365 blog reminded me that document review is a real evaluation method that is conducted systematically. Cabral provides tips and a resource for doing document review correctly. For today’s post, I decided to plan a document review that the NEO might conduct someday, describing how I would use Cabral’s guidelines. I also checked out the CDC’s Evaluation Brief, Data Collection Methods for Evaluation: Document Review, which Cabral recommended.

Here’s some project background. The NEO leads and supports evaluation efforts of the National Network of Libraries of Medicine (NNLM), which promotes access to and use of health information resources developed by the NIH’s National Library of Medicine. Eight health sciences libraries (called Regional Medical Libraries or RMLs) manage a program in which they provide modest amounts of funding to other organizations to conduct health information outreach in their regions. The organizations receiving these funds (known as subawardees) write proposals that include brief descriptions (1-2 paragraphs) of their projects. These descriptions, along with other information about the subaward projects, are entered into the NLM’s Outreach Projects Database (OPD).

The OPD has a wealth of information, so I need an evaluation question to help me focus my document review. I settle on this question: How do our subawardees collaborate with other organizations to promote NLM products?  Partnerships and collaborations are a cornerstone of NNLM. They are the “network” in our name.  Yet simply listing the diverse types of organizations involved in our work does not satisfactorily capture the nature of our collaborations.  Possibly the subaward program descriptions in our OPD can add depth to our understanding of this aspect of the NNLM.

Now that I’ve identified my primary evaluation question, here’s how I would apply Cabral’s guidelines in the actual study.

Catalogue the information available to you:  For my project, I would first review the fields on the OPD’s data entry pages to see what information is entered for each project.  I obviously want to use the descriptive paragraphs. However, it helps to peruse the other project details. For example, it might be interesting to see if different types of organization (such as libraries and community-based organizations) form different types of collaborations. This step may cause me to add evaluation questions to my study.

I also would employ some type of sampling, because the OPD contains over 4500 project descriptions from as far back as 2001. It is neither feasible nor necessary to review all of them. There are many sampling choices, both random and purposeful. (Check out this article by Palinkas et al for purposeful sampling strategies.) I’m most interested in current award projects, so I likely would choose projects conducted in the past 2-3 years.

Develop a data collection form: A data collection form is the tool that allows you to record abstracted or summarized information from the full documents. Fortunately, the OPD system downloads data into an Excel-readable spreadsheet, which is the basis for my form. I would first delete columns in this spreadsheet that contain information irrelevant to my study, such as the mailing address and phone numbers of the subaward contact person.
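
If you like to script this kind of cleanup, here is a rough pandas sketch of the same steps. The file name, column names, and cutoff year are hypothetical stand-ins; the real fields would come from the actual OPD export.

```python
# A sketch of building the data collection form, assuming a hypothetical
# OPD export (opd_export.xlsx) with made-up column names.
import pandas as pd

opd = pd.read_excel("opd_export.xlsx")

# Keep recent projects only (we're most interested in the past 2-3 years).
recent = opd[opd["award_year"] >= 2016]

# Drop columns irrelevant to the study, such as contact details.
form = recent.drop(columns=["mailing_address", "phone_number"])

# If the recent set is still too large to read, draw a simple random sample.
sample = form.sample(n=min(200, len(form)), random_state=42)
sample.to_excel("data_collection_form.xlsx", index=False)
```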

Get a co-evaluator: I would volunteer a NEO colleague to partner with me, to increase the objectivity of the analysis. Document review almost always involves coding of qualitative data.  If you use qualitative analysis for your study, a partner improves the trustworthiness of conclusions drawn from the data. If you are converting information into quantifiable (countable) data, a co-evaluator allows you to assess and correct human error in your coding process. If you do not have a partner for your entire project, try to find someone who can work with you on a subset of the data so you can calibrate your coding against someone else’s.

Ensure consistency among teammates involved in the analysis: “Abstracting data,” for my project, means identifying themes in the project descriptions. Here’s a step-by-step description of the process I would use (a code sketch after the list shows what the resulting worksheet might look like):

  • My partner and I would take a portion of the documents (15-20%) and both of us would read the same set of project descriptions. We would develop a list of themes that both of us believe are important to track for our study. Tracking means we would add columns to our data collection form/worksheet and note absence or presence of the themes in each project description.
  • We would then divide up the remaining program descriptions. I would code half of them and my partner would take the other half.
  • After reading 20% of the remaining documents, we would check in with each other to see if important new themes have emerged that we want to track. If so, we would add columns on our data collection document. (We would also check that first 15-20% of project descriptions for presence of these new themes.)
  • When all program descriptions are coded, we would sort our data collection form so we could explore patterns or commonalities among programs that share common themes.
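
To make the worksheet idea concrete, here is a small sketch of the coded data and a first pass at pattern-spotting. The theme names and values are invented for illustration; the real themes would emerge from the reading described above.

```python
# A sketch of the theme-tracking worksheet, with hypothetical theme columns.
# Each row is one project description; 1/0 records whether a theme appears.
import pandas as pd

coded = pd.DataFrame({
    "project_id":         [101, 102, 103, 104],
    "org_type":           ["library", "CBO", "library", "CBO"],
    "theme_cohosting":    [1, 0, 1, 1],  # partners co-host training events
    "theme_referrals":    [0, 1, 0, 1],  # partners refer users to each other
    "theme_shared_staff": [0, 1, 1, 0],  # partner contributes staff time
})

theme_cols = ["theme_cohosting", "theme_referrals", "theme_shared_staff"]

# Share of projects showing each theme, by organization type: a quick way
# to explore patterns among programs that share common themes.
print(coded.groupby("org_type")[theme_cols].mean())
```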

For a more explicit description of coding qualitative data, check out the NEO publication Collecting and Analyzing Evaluation Data. The qualitative analysis methods described starting on page 25 can be applied in qualitative document review.

So, got documents? Now you know how to use them to assess your programs.

NEO Announcement! Home Grown Tools and Resources

Red toolbox with tools

Since NEO (formerly OERC) was formed, we’ve created a lot of material: four evaluation guides, a 4-step guide to creating an evaluation plan, in-person classes and webinars, and of course, this very blog! All of the guides, classes, and blog posts come with a lot of materials, including tip sheets, example plans, and resource lists. To get to all of these resources, though, you had to go through each section of the website and search for them, or attend one of our in-person classes. That all changed today.

Starting now, NEO will be posting its own tip sheets, evaluation examples, and more of our favorite links on the Tools and Resources page. Our first addition is our brand new tip sheet, “Maximizing Response Rate to Questionnaires,” which can be found under the Data Collection tab. We also provided links to some of our blog posts in each tab, making them easier to find. Look for more additions to the Tools and Resources page in upcoming months.

Do you have a suggestion for a tip sheet? Comment below – you might see it in the future!

Failure IS an Option: Measuring and Reporting It

Back to Square One signpost

Failure.  We all know it’s good for us.  We learn from failure, right? In Batman Begins, Bruce Wayne’s dad says “Why do we fall, Bruce? So we can learn to pick ourselves up.”  But sometimes failure, like falling, isn’t much fun (although, just like falling, sometimes it is fun for the other people around you). Sometimes in our jobs we have to report our failures to someone. And sometimes the politics of our jobs makes reporting failure a definite problem.

In the NEO office we like to start our meetings by reporting a recent failure. I think it’s a fun thing to do because I think my failures are usually pretty funny. But Cindy has us do it from a higher motivation than getting people to laugh. Taking risks is about being willing to fail. Sara Blakely, the founder and CEO of Spanx, grew up reporting failure every day at the dinner table (https://vimeo.com/175524001). In this video she says that “failure to me became not trying, versus the outcome.”

Why failure matters in evaluation

In general we are all really good at measuring our process (the activities we do) and not so good at measuring outcomes (the things we want to see happen because of our activities).  This is because we have a lot of control over whether our activities are done correctly, and very little control over the outcomes.  We want to measure something that shows that we did a great job, and we don’t want to measure something that might make us look bad. That’s why we find it preferable to measure something we have control over. It can look like we failed if we didn’t get the results we wanted, even if the work at our end was brilliant.


But of course outcomes are what we really care about (Steering by Outcomes: Begin with the End in Mind). They are the “what for?” of what we do. What if you evaluated the outcomes of some training sessions that you taught and found out that no one used the skill you taught them? That would be sad, and it might look like you wasted time and resources. But on the other hand, what if you never measure whether anyone uses what you taught, and you just keep teaching the classes and reporting successful classes, never finding out that people aren’t applying what they learned? Wouldn’t that be the real waste of resources?

So how do you report failure?

I think getting over our fear of failure has to do with learning how to report failure so it doesn’t look like, well, failure. The key is to stay focused on the end goal: we all really want to know the answer to the question “are we making a difference?” If we stay focused on that question, then we need to figure out what indicators we can measure to find the answer. If the answer is “no, we didn’t make a difference,” then how can we report that in a way that shows we’ve learned how to make the answer “yes”? How can we think about failure so it’s about “learning to pick ourselves up,” or better yet, contributing to our organization’s mission?

One way is to measure outcomes early and often. If you wait until the end of your project to measure your outcomes, you can’t adjust your project to enhance the possibilities of success.  If you know early on that your short-term outcomes are not coming out the way you hope, you can change what you’re doing.  So when you do your final report, you aren’t reporting failure, you’re reporting lessons learned, flexibility and ultimately success.

Here’s an example

Let’s say you’re teaching a series of classes to physical therapists on using PubMed Health so they can identify the most effective therapy for their patients.  At the end of the class you have the students complete a course evaluation, in which they give high scores to the class and the teachers.  If you are evaluating outcomes early, you might add a question like: “Do you think you will use PubMed Health in the next month?”  This is an early outcome question.   If most of them say “no” to this question, you will know quickly that if you don’t change something about what you’re doing in future classes, it is unlikely that a follow-up survey two months later will show that they had used PubMed Health.  Maybe you aren’t giving examples that apply to these particular students. Maybe these students aren’t in the position to make decisions about effective therapies. You have an opportunity to talk to some of the students and find out what you can change so your project is successful.

Complete Failure

You’ve tried everything, but you still don’t have the results you wanted to see. The good news is, if you’ve been collecting your process and outcomes data, you have a lot of information about why things didn’t turn out as hoped and what can be done differently. Reporting that information is kind of like that commercial about how to come in late to a meeting. If you bring the food, you’re not the person who came late, you’re the person who brought breakfast. If you report that you did a big project that didn’t work, you’re reporting failure. If you report that things didn’t work out the way you hoped, but you have data-based suggestions for a better use of organizational resources that meet the same goal, then you’re the person who is working for positive change that supports the organization, and you have metaphorically brought the breakfast. Who doesn’t love that person?

 

Beyond the Memes: Evaluating Your Social Media Strategy – Part 2

In my last post, I wrote about how to create social media outcomes for your organization. This week, we will take a look at writing objectives for your outcomes using the SMART method.

Though objectives and outcomes sound like the same thing, they are two different concepts in your evaluation plan – outcomes are the big ideas, while objectives relate to the specifics. Read Karen’s post to find out more about what outcomes and objectives are.

In the book Measuring the Networked Nonprofit, Beth Kanter and Katie Delahaye Paine talk a lot about SMART objectives. We have not covered these types of objectives on the blog, so I thought this would be a good time to introduce this type of objective building. According to the book, a SMART objective is “specific, measurable, attainable, realistic, and timely” (Kanter and Paine 47). There are many variations on this definition, so we will use my favorite: Specific, Measurable, Attainable, Relevant, and Timely.

Specific: Leave the big picture for your outcomes. Use the 5 W’s (who, what, when, where, and why) to help craft this portion.
Measurable: If you can’t measure it, how will you know you’ve actually achieved what you set out to do?
Attainable: Don’t make your objectives impossible. It’s not productive (or fun) to create objectives that you know you cannot reach. Understand what your community needs, and involve stakeholders.
Relevant: Is your community on Twitter? Create a Twitter account. Do they avoid Twitter? Don’t make a Twitter account. Use the tools that are relevant to the community that you serve.
Timely: Set a time frame for your objectives and outcomes, or your project might take too long for it to be relevant to your community. Time is also money, so create a deadline for your project so that you do not waste resources on a lackluster project.

As an example, let’s return to NEO’s favorite hypothetical town of Sunnydale to see how they added social media objectives into their Dusk to Dawn program. To refresh your memory, read this post from last September about Sunnydale’s Evaluation Plan.

Christopher Walken Fever Meme with the text 'Well, guess what! I’ve got a fever / and the only prescription is more hashtags'

Sunnydale librarians know that their vampire population uses Twitter on a daily basis for many reasons – meeting new vampires, suggesting favorite vampire-friendly night clubs, and even engaging the library with general reference questions. The librarians came up with the idea to use the hashtag #dusk2dawn in all of their promotional materials about the health program Dusk to Dawn. Their thinking was that it would help increase awareness of their objectives of 4 evening classes on MedlinePlus and PubMed, which in turn would support the outcomes “Increased ability of the Internet-using Sunnydale vampires to research needed health information” and “These vampires will use their increased skills to research health information for their brood.”

With that in mind, let’s make a SMART objective for this hashtag’s usage:

Specific
We will plug in what we have so far into the Specific section:

Vampires (who) in Sunnydale (where) will show an increase in awareness of health-related events hosted by the library (what) by retweeting the hashtag #dusk2dawn (why) for the duration of the Dusk to Dawn program (when).

Measurable
Measurable is probably the hardest part. What kind of metrics will Sunnydale librarians use to measure hashtag usage? How will they do it?

The social media librarian will manually monitor the hashtag’s usage by setting up an alert for it on TweetDeck. Each time the hashtag is used by a non-librarian in reference to the Sunnydale Library, the librarian will copy the tweet’s content to a spreadsheet, noting whether the tweet is positive or negative.
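
To show how that spreadsheet could turn into a number at reporting time, here is a rough pandas sketch. The file name and columns are hypothetical stand-ins for whatever the librarian actually records.

```python
# A sketch of summarizing the tweet log, assuming a hypothetical spreadsheet
# (dusk2dawn_tweets.xlsx) with one row per non-librarian tweet and columns
# "date" and "sentiment".
import pandas as pd

tweets = pd.read_excel("dusk2dawn_tweets.xlsx", parse_dates=["date"])

# Tweets per month: is hashtag usage trending upward over the program?
monthly = tweets.set_index("date").resample("M").size()
print(monthly)

# Percent change from the first month to the last, against a +15% target.
change = (monthly.iloc[-1] - monthly.iloc[0]) / monthly.iloc[0] * 100
print(f"Change in #dusk2dawn usage: {change:.0f}% (target: +15%)")
```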

Attainable
Can our objective be reached? What is it about vampires in Sunnydale that makes this hashtag monitoring possible?

We know from polling and experience that our community likes using Twitter – specifically, they regularly engage with us on this platform. Having a dedicated hashtag for our overall program is a natural progression for us and our community.

Relevant
How does the hashtag #dusk2dawn contribute to the overall Dusk to Dawn mission?

An increase in usage of the hashtag #dusk2dawn will show that our community is actively talking about the program, hopefully in a positive way. This should increase awareness of our program’s objectives of 4 evening classes on MedlinePlus and PubMed, which in turn would support the outcomes “Increased ability of the Internet-using Sunnydale vampires to research needed health information” and “These vampires will use their increased skills to research health information for their brood.”

Timely
How long should it take for the vampires to increase their awareness of our program’s objectives?

There should be an upward trend in awareness over the course of the program. We have 7 months before we are reevaluating the whole Dusk to Dawn program, so we will set 7 months as our deadline for increased hashtag usage.

SMART!
Now, we put it all together to get:

Vampires in Sunnydale will show an increase in awareness of health-related events hosted by the library, indicated by a 15% increase in use of the hashtag #dusk2dawn by Sunnydale vampires for the duration of the Dusk to Dawn program.

Success Baby

Though this objective is SMART, it certainly will not work in every library. Perhaps the community your library serves does not use Twitter to connect with the library, or you do not have enough people on staff to monitor the hashtag’s usage. If you make a SMART objective that is relevant to your community, it will have a better chance of succeeding.

Here at NEO, we usually do not use the SMART objectives method, but rather Measurable Outcome Objectives. Step 3 on the Evaluation Resources page points to many different resources on our website that talk about Measurable Objectives. Try both out, and see what works for your organization.

We will be taking a break from social media evaluation and goal setting for a few weeks. Next time we talk about social media, we will show our very own social media evaluation plan!


Let me know if you have any questions or comments about this post! Comment on Facebook and Twitter, or email me at kalyna@uw.edu.

Image credits: Christopher Walken Fever Meme made by Kalyna Durbak on ImgFlip.com. Success Kid meme is from Know Your Meme.


Funded by the National Library of Medicine under Contract No. UG4LM012343 with the University of Washington.