
NEO Shop Talk

The blog of the National Network of Libraries of Medicine Evaluation Office

Archive for February, 2017

A Logic Model Hack: The Project Reality Check

Friday, February 24th, 2017

Image: torn strips of gray duct tape.

Logic models just may be the duct tape of the evaluation world.

A logic model’s usefulness extends well beyond initial project planning. (If you aren’t familiar with logic models, here’s a fun introduction.)  Today’s post starts a new NEO Shop Talk series to take our readers beyond Logic Models 101. We call this series Logic Model Hacks. Our first topic: The Project Reality Check. Watch for more hacks in future posts.

The Project Reality Check allows you to assess the feasibility of your project with stakeholders, experienced colleagues, and key informants. I refer to these people as “Reality Checkers.” (I’m capping their title out of respect for their importance.) Your logic model is your one-page project vision. Present it with a brief pitch, and you can bring anyone up to speed on your plans in a New York minute (or two). Then, with a few follow-up questions, you can guide your Reality Checkers in identifying key project blind spots. What assumptions are you making? What external factors could help or hinder your project? The figure below is the logic model template from the NEO’s booklet Planning Outcomes-Based Outreach Projects. This template includes boxes for assumptions and external factors. By the time you complete your Project Reality Check, you will have excellent information to add to those boxes.

How to Conduct a Logic Model Reality Check

I always incorporate Project Reality Checks into any logic model development process I lead. Here is my basic game plan:

  • A small project team (2-5 people) works out the project plan and fills in the columns of the logic model. One person can do this step, if necessary.
  • After filling in the main columns, this working group drafts a list of assumptions and external factors for the boxes at the bottom. However, I don’t add the working group’s information to the logic model version for the Reality Checkers. You want fresh responses from them. Showing them your assumptions and external factors in advance may prevent them from generating their own. Best to give them a clean slate.
  • Make a list of potential Reality Checkers and divvy them up among project team members.
  • Prepare a question guide for querying your Reality Checkers.
  • Set up appointments. You can talk with your Reality Checkers in one-to-one conversations that probably will take 15-20 minutes. If you can convene an advisory group for your project, you could easily adapt the Project Reality Check interview process for a group discussion.

Here are the types of folks who might be good consultants for your project plans:

  • The people who will be working on the actual project.
  • Representatives from partner organizations.
  • Key informants. Here’s a tip: If you conducted key informant interviews for community assessment related to this project, don’t hesitate to show your logic model to those interviewees. It is a way to follow up on the first interview, showing how you are using the information they provided. This is an opportunity for a second contact with community opinion leaders.
  • Colleagues who conducted similar projects.
  • Funding agency staff. This is not always feasible, but take advantage of the opportunity if it’s there. These folks have a bird’s-eye view of what succeeds and fails in communities served by their organizations.

It’s a good idea to have an interview plan so that you can use your Reality Checkers’ time efficiently and adequately capture their valuable advice. I would start with a short elevator speech to provide context for the logic model. Here’s a template you can adapt:

“We have this exciting project, where we are trying to ___ [add your goal here]. Specifically, we want ___ [the people or organizations benefiting from your project] to ___ [add your outcomes]. We plan to do it by ___ [summarize your activities]. Here’s our logic model, which shows a few more details of our plan.”

Then, you want to follow up with questions for the Reality Checkers:

  • What assumptions are we making that you think we need to check?
  • Are there resources in the community or in our partner organization that might help us do this project?
  • Are there barriers or challenges we should be prepared to address?
  • “I would also like to check some assumptions our project team is making.” Present your assumptions at the end of the discussion and get the Reality Checkers’ assessment.

How to Apply What You Learn

After completing the interviews, your working team should reconvene to process what you learned. Remove assumptions that your Reality Checkers confirmed and add any new assumptions to be investigated. Adapt your logic model to leverage newly discovered resources (positive external factors), or change your activities to address challenges or barriers. Prepare contingency plans for project turbulence predicted by your Reality Checkers.

Chances are high that you will be changing your logic model after the Project Reality Check. The good news is that you will only have to make changes on paper. That’s much easier than responding to problems that arise because you didn’t identify your blind spots during the planning phase of your project.

Other blog posts about logic models:

How I Learned to Stop Worrying and Love Logic Models (The Chili Lesson)

An Easier Way to Plan: Tearless Logic Models

Logic Models: Handy Hints

ABP: Always Be Pilot-testing (some experiences with questionnaire design)

Monday, February 20th, 2017

Image: cover of NEO's Booklet 3 on collecting and analyzing evaluation data.

This week I have been working on a questionnaire for the Texas Library Association (TLA) on the cultural climate of TLA. Having just gone through this process, I can tell you that NEO’s Booklet #3: Collecting and Analyzing Evaluation Data has really useful tips on how to write questionnaires (p. 3-7). I thought it might be good to talk about some of the tips that turned out to be particularly useful for this project; the theme of all of them is “always be pilot-testing.”

NEO’s #1 Tip: Always pilot test!

This questionnaire is still being pilot tested. So far I have thought the questionnaire was perfect at least 10 times, and we are still finding out about important changes from the people who are pilot testing it for our committee. One important part of this tip is to include stakeholders in the pilot testing. Stakeholders have points of view that the people creating the survey may not share. After we have what we think is a final version, our questionnaire will be tested by the TLA Executive Board. While this process sounds exhausting, every single change that has been made (to the questionnaire I thought was finished) has fundamentally improved it.

There is a response option for everyone who answers the question

Our questionnaire asks questions about openness and inclusiveness to people of diverse races, ethnicities, nationalities, ages, gender identities, sexual identities, cognitive and physical disabilities, perceived socioeconomic status, etc.  We are hoping to get personal opinions from all kinds of librarians who live all over Texas.  By definition this means that many of the questions are possibly sensitive, and may be hot button issues for some people.

In addition to wording the questions carefully, it’s important that every question has a response option for everyone who answers it. We would hate for someone not to find the response that best fits them and then leave the questionnaire unanswered, or even worse, get their feelings hurt or feel insulted. For example, we have a question about whether our respondents feel that their populations are represented in TLA’s different groups (membership, leadership, staff, etc.). At first the answers were just “yes” or “no.” But then (from responses in the pilot testing) we realized that a person may feel that they belong to more than one population. For example, what if someone is both Asian and has a physical disability? Perhaps they feel that one group is well represented and the other group not represented at all. How would they answer the question? Without creating a complicated response scale, we decided to change our response options to “yes,” “some are,” and “no.”

“Don’t Know” or “Not Applicable”

In a similar vein, sometimes people do not know the answer to the question you are asking. They can feel pressured to pick one of the responses rather than skip the question (and if they do skip the question, the data will not show why). For example, we are asking a question about whether people feel that TLA is inclusive, exclusionary, or neither. Originally I thought those three choices covered all the bases. But during discussions with Cindy (who was pilot testing the questionnaire), we realized that if someone simply didn’t know, they wouldn’t feel comfortable saying that TLA was neither inclusive nor exclusionary. So we added a “Don’t know” option.
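Purely as an illustration, here is how those final response sets could be written down as a simple data structure if the questionnaire were being set up in software. This is a hypothetical Python sketch; the question wording is paraphrased from the examples in this post, not copied from the actual TLA questionnaire.

  # Hypothetical sketch of the revised response sets described above.
  # Question wording is paraphrased; the real TLA questionnaire may differ.
  questions = [
      {
          "text": "Do you feel the populations you identify with are "
                  "represented in TLA's membership, leadership, and staff?",
          "options": ["Yes", "Some are", "No"],
      },
      {
          "text": "Overall, do you find TLA inclusive, exclusionary, or neither?",
          "options": ["Inclusive", "Exclusionary", "Neither", "Don't know"],
      },
  ]

The point of writing the options out this way is simply that every respondent, including one who doesn’t know, can find an answer that fits.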

Speaking from experience, the most important thing is keeping an open mind. Remember that the people taking your questionnaire will be seeing it through different eyes than yours, and they are the ones you are hoping to get information from. So, while I recommend following all the tips in Booklet 3, the best way to get good results is to test your questionnaire with a wide variety of people who represent those who will be taking it.

Got Documents? How to Do a Document Review

Friday, February 10th, 2017

Are you an introvert? Then I have an evaluation method for you: document review. You can usually do this method from the comfort of your own office. No scary interactions with strangers.

Truth is, my use of existing data in evaluation seldom rises to the definition of true document review. I usually read through relevant documents to understand a program’s history or context. However, a recent post by Linda Cabral on the AEA365 blog reminded me that document review is a real evaluation method that is conducted systematically. Cabral provides tips and a resource for doing document review correctly. For today’s post, I decided to plan a document review that the NEO might conduct someday, describing how I would use Cabral’s guidelines. I also checked out the CDC’s Evaluation Brief, Data Collection Methods for Evaluation: Document Review, which Cabral recommended.

Here’s some project background. The NEO leads and supports evaluation efforts of the National Network of Libraries of Medicine (NNLM), which promotes access to and use of health information resources developed by the NIH’s National Library of Medicine. Eight health sciences libraries (called Regional Medical Libraries or RMLs) manage a program in which they provide modest amounts of funding (known as subawards) to other organizations to conduct health information outreach in their regions. The organizations receiving these funds write proposals that include brief descriptions (1-2 paragraphs) of their projects. These descriptions, along with other information about the subaward projects, are entered into the NLM’s Outreach Projects Database (OPD).

The OPD has a wealth of information, so I need an evaluation question to help me focus my document review. I settle on this question: How do our subawardees collaborate with other organizations to promote NLM products?  Partnerships and collaborations are a cornerstone of NNLM. They are the “network” in our name.  Yet simply listing the diverse types of organizations involved in our work does not satisfactorily capture the nature of our collaborations.  Possibly the subaward program descriptions in our OPD can add depth to our understanding of this aspect of the NNLM.

Now that I’ve identified my primary evaluation question, here’s how I would apply Cabral’s guidelines in the actual study.

Catalogue the information available to you: For my project, I would first review the fields on the OPD’s data entry pages to see what information is entered for each project. I obviously want to use the descriptive paragraphs, but it helps to peruse the other project details as well. For example, it might be interesting to see whether different types of organizations (such as libraries and community-based organizations) form different types of collaborations. This step may cause me to add evaluation questions to my study.

I also would employ some type of sampling, because the OPD contains over 4500 project descriptions dating back to 2001. It is neither feasible nor necessary to review all of them. There are many sampling choices, both random and purposeful. (Check out this article by Palinkas et al. for purposeful sampling strategies.) I’m most interested in current subaward projects, so I likely would choose projects conducted in the past 2-3 years.
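To make that sampling step concrete, here is a minimal Python/pandas sketch, assuming the OPD data have been downloaded to a spreadsheet. The file name and the year column are placeholders; the actual OPD export may label its fields differently.

  import pandas as pd

  # Placeholder file and column names; the real OPD export may differ.
  opd = pd.read_excel("opd_export.xlsx")

  # Criterion sampling: keep projects from roughly the past three years.
  recent = opd[opd["project_year"] >= 2014]

  # Optionally draw a simple random sample if that set is still too large.
  sample = recent.sample(n=200, random_state=1)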

Develop a data collection form: A data collection form is the tool that allows you to record abstracted or summarized information from the full documents. Fortunately, the OPD system downloads data into an Excel-readable spreadsheet, which is the basis for my form. I would first delete columns in this spreadsheet that contain information irrelevant to my study, such as the mailing address and phone number of the subaward contact person.
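Here is a sketch of that trimming step in pandas, again with placeholder file and column names, since I am not reproducing the OPD’s actual field labels.

  import pandas as pd

  # Placeholder file and column names.
  form = pd.read_excel("opd_export.xlsx")

  # Drop columns irrelevant to the study, such as the contact person's details.
  form = form.drop(columns=["contact_address", "contact_phone"], errors="ignore")

  # Save the trimmed sheet as the working data collection form.
  form.to_excel("data_collection_form.xlsx", index=False)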

Get a co-evaluator: I would volunteer a NEO colleague to partner with me, to increase the objectivity of the analysis. Document review almost always involves coding of qualitative data.  If you use qualitative analysis for your study, a partner improves the trustworthiness of conclusions drawn from the data. If you are converting information into quantifiable (countable) data, a co-evaluator allows you to assess and correct human error in your coding process. If you do not have a partner for your entire project, try to find someone who can work with you on a subset of the data so you can calibrate your coding against someone else’s.
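If both coders record presence or absence of each theme in a spreadsheet, the calibration can be as simple as comparing their codes on the shared subset. A minimal sketch, with hypothetical file and column names:

  import pandas as pd

  # Each coder's 0/1 codes for one theme on the shared subset of descriptions.
  # File and column names are hypothetical.
  coder_a = pd.read_excel("coder_a_codes.xlsx")["collaboration_theme"]
  coder_b = pd.read_excel("coder_b_codes.xlsx")["collaboration_theme"]

  # Simple percent agreement; a chance-corrected statistic such as
  # Cohen's kappa could be substituted for a stricter check.
  agreement = (coder_a.values == coder_b.values).mean()
  print(f"Percent agreement: {agreement:.0%}")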

Ensure consistency among teammates involved in the analysis: “Abstracting data,” for my project, means identifying themes in the project descriptions. Here’s a step-by-step description of the process I would use:

  • My partner and I would take a portion of the documents (15-20%) and both of us would read the same set of project descriptions. We would develop a list of themes that both of us believe are important to track for our study. Tracking means we would add columns to our data collection form/worksheet and note absence or presence of the themes in each project description.
  • We would then divide up the remaining program descriptions. I would code half of them and my partner would take the other half.
  • After reading 20% of the remaining documents, we would check in with each other to see if important new themes have emerged that we want to track. If so, we would add columns on our data collection document. (We would also check that first 15-20% of project descriptions for presence of these new themes.)
  • When all program descriptions are coded, we would sort our data collection form so we could explore patterns or commonalities among programs that share common themes (see the sketch after this list).
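As one possible way to do that sorting and exploration, here is a short pandas sketch. The theme and organization-type columns are hypothetical stand-ins for whatever themes actually emerge from the coding.

  import pandas as pd

  # Coded data collection form with one 0/1 column per theme.
  # File, theme, and organization-type column names are hypothetical.
  coded = pd.read_excel("data_collection_form_coded.xlsx")
  theme_cols = ["partners_with_schools", "partners_with_clinics", "shares_staff"]

  # How common is each theme across the sampled project descriptions?
  print(coded[theme_cols].sum().sort_values(ascending=False))

  # Do different types of organizations show different collaboration themes?
  print(coded.groupby("organization_type")[theme_cols].mean())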

For a more explicit description of coding qualitative data, check out the NEO publication Collecting and Analyzing Evaluation Data. The qualitative analysis methods described starting on page 25 can be applied in qualitative document review.

So, got documents? Now you know how to use them to assess your programs.

NEO Announcement! Home Grown Tools and Resources

Friday, February 3rd, 2017

Image: red toolbox with tools.

Since NEO (formerly OERC) was formed, we’ve created a lot of material: four evaluation guides, a 4-step guide to creating an evaluation plan, in-person classes and webinars, and of course, this very blog! All of the guides, classes, and blog posts come with a lot of materials, including tip sheets, example plans, and resource lists. To get to all of these resources, though, you had to go through each section of the website and search for them, or attend one of our in-person classes. That all changed today.

Starting now, NEO will be posting its own tip sheets, evaluation examples, and more of our favorite links on the Tools and Resources page. Our first addition is our brand new tip sheet, “Maximizing Response Rate to Questionnaires,” which can be found under the Data Collection tab. We also provided links to some of our blog posts in each tab, making them easier to find. Look for more additions to the Tools and Resources page in upcoming months.

Do you have a suggestion for a tip sheet? Comment below – you might see it in the future!


Funded by the National Library of Medicine under Contract No. UG4LM012343 with the University of Washington.