
NEO Shop Talk

The blog of the National Network of Libraries of Medicine Evaluation Office

Archive for the ‘Practical Evaluation’ Category

How Many Interviews Does It Take to Assess A Project?

Friday, October 21st, 2016

A green piggy bank standing with a group of pink piggy banks to represent the cost effectiveness of individual interviews

FAQ from NEO users: How many interviews or focus groups do we need for our qualitative assessment project?

Our typical response: Um, how much money and time do you have?

At which point, our users probably want to throw a stapler at us. (Karen and I work remotely out of an abundance of caution.)

Although all NEO users are, in fact, quite well-mannered, I was happy to discover a study that allows us to provide a better response to that question.  A study published by Namey et al., which appears in the American Journal of Evaluation’s September issue, provides empirically based estimates of the number of one-to-one or group interviews needed for qualitative interviewing projects.  More specifically, their study compared the cost-effectiveness of individual and focus group interviews. The researchers conducted an impressive 40 interviews and 40 focus groups (with an average of eight people per group). They then used a bootstrap sampling methodology, which essentially allowed them to run 10,000 mini-studies on their research questions.

They first looked at how many individual and focus group interviews it took to reach what qualitative researchers call thematic saturation. In lay terms, saturation means “Not really hearing anything new here.”  Operationally, it occurs when 80-90% of the information provided in an interview has already been covered in the previous interviews.

The researchers found that 80% saturation occurred after two to four focus groups or eight individual interviews. It took 16 interviews or five focus groups to reach 90% saturation. Note that their estimates apply to studies that focus on one specific population.  If you want to explore the experiences of two groups, such as doctors and nurse practitioners, you would hold eight interviews per group to reach 80% thematic saturation.
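For the algorithmically curious, the two ideas (operational saturation and bootstrap resampling) can be sketched in a few lines of Python. This is a toy illustration, not the authors’ actual procedure: each interview is reduced to the set of themes it raised, and each resample re-asks how many interviews were needed to hear a threshold share of the themes.

```python
import random

def interviews_to_saturation(theme_sets, threshold=0.8):
    """Number of interviews needed before `threshold` of all the themes
    appearing anywhere in this sequence have been heard at least once."""
    all_themes = set().union(*theme_sets)
    heard = set()
    for i, themes in enumerate(theme_sets, start=1):
        heard |= themes
        if len(heard) >= threshold * len(all_themes):
            return i
    return len(theme_sets)

def bootstrap_median(theme_sets, threshold=0.8, runs=10_000, seed=42):
    """Median interviews-to-saturation across `runs` bootstrap resamples
    (interviews drawn with replacement, one resample per "mini-study")."""
    rng = random.Random(seed)
    results = sorted(
        interviews_to_saturation(rng.choices(theme_sets, k=len(theme_sets)),
                                 threshold)
        for _ in range(runs)
    )
    return results[len(results) // 2]
```

Focus groups drop into the same sketch: treat each group’s transcript as one theme set, and the resampling logic is unchanged.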

For comparative cost assessment, the researchers used a formula that combined hourly rate of the data collector’s time, incentive costs per participant, and cost of transcription for recordings. They chose not to include cost for factors that vary widely, such as space rental or refreshments. Using more predictable costs made for cleaner and more generalizable comparisons.
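The formula itself is straightforward to reproduce. Here is a sketch with entirely hypothetical figures (the rates and event counts below are invented for illustration, not taken from the study):

```python
def data_collection_cost(events, hours_per_event, hourly_rate,
                         participants_per_event, incentive_per_person,
                         transcription_per_event):
    """Total cost = data collector's time + participant incentives
    + transcription of the recordings."""
    staff_time = events * hours_per_event * hourly_rate
    incentives = events * participants_per_event * incentive_per_person
    transcription = events * transcription_per_event
    return staff_time + incentives + transcription

# Hypothetical comparison: eight one-hour interviews vs. three
# ninety-minute focus groups of eight people each.
interview_cost = data_collection_cost(8, 1.0, 50, 1, 25, 75)   # 1200.0
group_cost = data_collection_cost(3, 1.5, 50, 8, 25, 150)      # 1275.0
```

Even in this made-up scenario the focus groups cost more overall, largely because incentives scale with the number of participants rather than the number of events.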

Bottom line, they found individual interview methods cost 12-20% less than focus group methods.

Of course, many of us operate on shoestring budgets, so we are our own moderators and transcribers.  Even though most of us DIYers collect hourly wages, the cost for outsourcing these tasks is probably higher than for conducting them internally. Knowing this, the researchers looked at variations on moderator, transcriptionist, and incentive costs.  They also compared cost effectiveness of the two methods when lowering the standards for thematic saturation (i.e., aiming for 70% saturation instead of 80%). Across the board, individual interviews were more cost-effective than focus groups.

Cost is not always the only consideration when choosing between focus groups and individual interviews. Some assessment questions beg for group brainstorming, while others demand the privacy of one-to-one discussions. However, for many assessment studies, either method is equally viable.  In that case, cost and convenience will drive your decision. Personally, I often find individual interviews to be more convenient than focus groups, both for the participants and for me. It’s nice to know that the cost justifies using the more convenient approach.

The full article provides details on their methods, so it is a nice primer on qualitative analysis of interview transcripts. Here’s the full citation:

Namey E, Guest G, McKenna K, Chen M. Evaluating bang for the buck: a cost-effectiveness comparison between individual interviews and focus groups based on thematic saturation levels. American Journal of Evaluation. 2016 September; 37(3): 425-440.


What’s in a Name? Convey Your Chart’s Meaning with a Great Title

Friday, October 7th, 2016

Some of you may be working on conference posters and paper presentations for fall conferences.  And some of those will probably include charts presenting data that represent a lot of hard work on your part.  In most cases you have only minutes to use that chart to get your audience to understand the data.

Stephanie Evergreen has great advice for displaying chart data.  She literally wrote the books on it: Presenting Data Effectively and Effective Data Visualization.  Her recent blog post is about one of the simplest and most powerful changes you can make to effectively present your chart data: “Strong Titles Are The Biggest Bang for Your Buck.”

What many of us do is present the data with a generic title, like “Attendance rates.” Then the viewer has to spend time working through the data and you hope that they see what you wanted them to.  What Stephanie Evergreen proposes (backed by persuasive research) is to give your charts a clear title that explains what the data shows. Your poster or paper is almost certainly making a point.  Determine how your chart supports the point of your presentation and state that in the title.  Here are some reasons why:

  • It respects your viewers’ time
  • It forces you to be clear about the point you want your data to make
  • It makes the data more memorable

Stephanie Evergreen’s post has some great examples of how a good title can really improve the impact of the chart.  In addition, here is an example from the NEO webinar Make Your Point: Good Design for Data Visualization.

Looking at this original chart, you might notice that in each activity the follow-up showed an increase over the baseline.  If you, the viewer, didn’t have a lot of time, that might be all you noticed.

Chart with title: Comparison of emergency preparedness activities from baseline to follow-up

With a simple change of title, you can see that the author of this presentation is highlighting the increased number of continuity of services plans.  This is designed to enhance the point of the presentation, and not waste the viewers’ time. Also, note that the title is left justified instead of centered.  Because the title is a full sentence, a left-justified format is easier to read.

Chart with title: The biggest improvement in emergency preparedness from baseline to follow-up was the number of network member organizations reporting that they had or were working on a service continuity plan.
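If you build your charts in code, say with Python’s matplotlib, both changes amount to one line. The numbers and category labels below are invented for illustration, loosely echoing the example chart:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

activities = ["Written plan", "Drills", "Service continuity plan"]
baseline = [12, 9, 4]
follow_up = [15, 11, 14]

fig, ax = plt.subplots()
x = range(len(activities))
ax.bar([i - 0.2 for i in x], baseline, width=0.4, label="Baseline")
ax.bar([i + 0.2 for i in x], follow_up, width=0.4, label="Follow-up")
ax.set_xticks(list(x))
ax.set_xticklabels(activities)
ax.legend()

# State the finding in the title, and left-justify it for readability.
ax.set_title("The biggest improvement was in service continuity plans",
             loc="left")
fig.savefig("emergency_preparedness.png")
```

The `loc="left"` argument is all it takes to left-justify the title; the rest of the work is writing a title that states your finding.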

So, while Shakespeare might have been correct when he wrote “What’s in a name? that which we call a rose / By any other name would smell as sweet,” what if the presenter was trying to show the fortitude of Texas antique roses to survive in harsh weather conditions, and the viewer only noticed how sweet the rose smelled?  Maybe the heading “A Rose” sometimes isn’t enough information.








Six Degrees of Key Informants: Finding Interviewees for Community Assessment

Friday, September 30th, 2016

Linking grid of the social networks of young adults of various nationalities

Six degrees of separation is a concept that describes social interconnectedness. The idea, popularized in a 1990s movie, is that each human on the planet could reach any other human, by way of friend-to-friend introduction, in six steps or fewer. For example, you are six (or fewer) people away from meeting your favorite celebrity. Research has explored this social interconnectedness and, while the actual maximum number of intermediaries may be in dispute, we do live in a small world.

If you are faced with doing a needs or community assessment, social interconnectedness should be comforting. Most good community assessments conducted for project planning involve interviewing key informants selected for their special knowledge about their communities. Key informant interviews can provide a quick source of detailed information important for planning. This tip sheet from USAID talks about the basics of conducting key informant interviews.

However, when Karen and I provide training on community assessment, we find our workshop participants feel daunted by the prospect of actually identifying key informants. So take heart, readers. Our social interconnectedness means good key informants are just a friend or two away.

The interview sampling approach I use is described in this article by Tiffany.  It is a participatory evaluation method that engages stakeholders in framing evaluation questions and recruiting interviewees. These stakeholders are your first key informants. You most likely will find them among members of the project team initiating the community assessment. It is likely that their interest in a community was sparked because someone on the team knew someone in the user community.

Those stakeholder key informants can, in turn, direct you to more key informants who can talk about their own and their peers’ needs, desires, opinions and lifestyles.  After each interview, you ask a key informant “Who else do you recommend that I interview?” and “What important information can that person add to my understanding about the community?” As a result, your key informants share in refining your community assessment inquiries. Their recommendations give them input into the direction of your project plans. Because your key informants are likely to be opinion leaders in their communities, you can generate enthusiasm and possibly forge important partnerships, assuming they respond positively to your project.  Key informant interviews are your first step in building trust in the user community you’re assessing.
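Sampling methodologists call this referral chain snowball (or chain-referral) sampling. If it helps to see the bookkeeping spelled out, it amounts to a breadth-first walk through a referral network; the names below are, of course, hypothetical:

```python
from collections import deque

def snowball_sample(seed_informants, referrals, max_informants=10):
    """Interview each informant in turn, then queue up anyone they
    recommend whom we have not already interviewed or queued."""
    queue = deque(seed_informants)
    interviewed = []
    while queue and len(interviewed) < max_informants:
        person = queue.popleft()
        if person in interviewed:
            continue
        interviewed.append(person)  # "conduct" the interview
        for referred in referrals.get(person, []):
            if referred not in interviewed and referred not in queue:
                queue.append(referred)
    return interviewed

# Hypothetical network starting from one project-team stakeholder.
referrals = {
    "stakeholder A": ["librarian B", "nurse C"],
    "librarian B": ["veteran D"],
    "nurse C": ["veteran D", "spouse E"],
}
wave = snowball_sample(["stakeholder A"], referrals)
```

The `max_informants` cap stands in for your real stopping rule, which in practice is thematic saturation or your budget, whichever arrives first.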

To find key informants who can truly help you gather good project-planning information, be clear about the information you’re seeking.  That way, your stakeholders can refer you to the best interviewees for your needs.  For guidance on the type of information you should gather in a community assessment, check out these NEO blog posts on Diffusion of Innovation Part One and Part Two.

Ideally key informants get something in return for participating in interviews. At the very least, key informants who are opinion leaders have valuable information about your project or organization to share with their peers. More significantly, your interviewees will assist you in bringing valuable services or resources to their communities.

I want to share two examples of projects where I used this approach to key informant sampling.  A few years ago, I led a community assessment project for Cumberland County (North Carolina) Public Library and Information Center, which wanted to improve its service to the military community affiliated with Fort Bragg. (Public Libraries published an article about this project here.) Public librarian Jennifer Taft received funding from the State Library of North Carolina for this project and also participated in the community assessment process. We started by interviewing a cadre of her colleagues from the Fayetteville Community Blueprint Network, composed of representatives of local organizations that served military families. Jennifer and her colleagues had worked together to put on a community forum on post-traumatic stress. After each interview with her forum colleagues, I collected recommendations for other key informants. I did the same with my second wave of interviews. Our sample grew until we had a good sample of interviewees and focus group participants with experience-based perspectives on the military community. All worked in organizations that provided services to military families. Most also were members of military families (that is, service members, veterans or spouses).  The key informant interviews had an advantage beyond providing useful information. Relationships established in the interviewing phase provided the library with the contacts it needed to participate, for the first time, in on-post activities.

In a different project, I worked with the National Network of Libraries of Medicine South Central Regional Medical Library to explore how it could support public libraries in hurricane-prone counties. Our sampling process began with contacting librarians at the state libraries of Louisiana and Texas, both of which actively supported public libraries during Hurricanes Katrina and Ike. They introduced us to key informants from “further-in” libraries that valiantly helped waves of evacuees from communities that suffered direct hits.  Our contacts pointed us to libraries that were struck by the storms and restored services quickly in order to help their community members. After we completed our interviews, these librarians became valuable partners in helping us develop NN/LM resources. (You can read about the Gulf Coast library community assessment here in Public Libraries.)

Still worried about locating good key informants?  I assure you, you can have faith in the interconnectedness phenomenon.  It has always worked for me, starting with my very first qualitative interviewing project. That project occurred in the 1970s when I was an undergraduate at Penn State. I was enrolled in an undergraduate class taught by an American folklorist.  Our extra credit assignment was to find three legends from our own families or social circles. In those pre-Internet days, most modern-day legends were ghost stories. I was immediately overwhelmed.  What friend of mine could possibly have a ghost story to share? Turns out, the first person I saw after class had a haunting tale. And so did the next person. Within days, I was 10 points closer to getting an A in the course. Everyone, it seemed, knew someone who had seen a ghost.

So remember, you are probably less than six people removed from a great key informant. Just get a handle on what you want to know in your community assessment, talk to anyone affiliated with the community, and you’re on your way.

And, if you know somebody who knows somebody who knows Kevin Bacon, would you kindly send their contact information to Cindy or Karen?


Reference: One of the most well-known studies of interconnectedness is Travers J, Milgram S. An experimental study of the small world problem. Sociometry. 1969 December; 32(4): 425-443.


A Diagram is Worth a Thousand Words: Visual Evaluation Plans

Friday, September 16th, 2016

What would you rather look at?  Some paragraphs of text and bullet points that explain your process and outcomes evaluation plans in a step-by-step fashion, or a diagram of those plans?  For me the answer is easy: a diagram.  Diagrams are quickly understandable, interesting to look at, and inviting to the viewer; perhaps most important for me, they’re colorful.  A textual explanation can walk me through the same process, but I would play a much more passive role, and I might not understand the big picture without having, well, a big picture.


Obviously you would also need the text.  Somewhere you need to explain the details of what you’re going to do in your evaluation. But a diagram can make the plan immediately comprehensible, and the reader can then read the textual explanation while understanding the overall context.

Bethany Laursen, an evaluation consultant, posted some examples of what she calls visual evaluation plans in her blog, Laursen Evaluation and Design.  These are created by students in a class at the University of Wisconsin-Madison.  I like them because by looking at them I have a basic understanding of their projects and how they will be evaluated.

Her blog post presents visual evaluation plans as a way of getting non-evaluators to understand your evaluation plans.  But I think they can also be a way that people (whether evaluators or non-evaluators who find themselves writing evaluation plans) could begin to think about how to plan their evaluation strategy to fit their project.

Microsoft products like Word and PowerPoint have drawing tools that can work to make diagrams.  But I think best with pen and paper, so if I were designing an evaluation plan for my daughter’s birthday party (see February 4, 2016 post), I would do something like the drawing here. Then I could create a plan for evaluating each of the process evaluation questions (in blue arrows) and each of the outcome evaluation questions (in red arrows).

This video, Faster Program Evaluation Planning: a New Visual Approach, shows how you could use a product like DoView to create a snazzy looking evaluation plan that also can link images to the textual description of the evaluation, and even further, link to your actual evaluation.

That famous phrase in the title, “a picture is worth a thousand words,” works really well to show how you can use your diagram to communicate your evaluation plan to others.  But if you’re using a diagram to design your plan in the first place, the quote that might work better is Gloria Steinem’s: “Without leaps of imagination or dreaming, we lose the excitement of possibilities. Dreaming, after all, is a form of planning.”

And as Winnie-the-Pooh says “Nobody can be uncheered with a balloon.”









My Report Writing Toolkit

Friday, September 9th, 2016


I once heard evaluator extraordinaire Michael Patton say that an evaluator could staple an executive summary to a bunch of pages ripped from a phone book and no one would notice. Possibly our readers have developed a fear of drowning in numbers and technical information?

(For our younger blog followers, a phone book is that thick paperback that materializes on your doorstep about once a year and you trip over it a few times before throwing it in your recycling bin.)

Many of us are trying to write better reports, thanks to proactive efforts in our professional associations.  Many such organizations provide excellent training on report writing, often to sold-out audiences. The first step toward better reporting is better synthesis of our evaluation findings.  You yourself must understand your data well before you can effectively share findings with others.  However, there are many other design elements in a report that you can use to help your readers understand key points and retain important information. Nonverbal elements such as color, font choice, page layout, and graphic design all contribute to effective evaluation reporting.

I have picked up a few tricks of the trade over the past few years.  So in today’s blog, I’m giving you my personal report-writing toolkit.

PowerPoint:  You may think of PowerPoint as a presentation tool, but I have discovered it is also a great tool for producing written reports.  Slide layouts provide flexibility in organizing graphics and text on a page.  The text boxes also force me to be succinct with written content. My favorite resource for PowerPoint reports is Nancy Duarte’s Slidedocs.  You can download free PowerPoint templates at her website, but truthfully, I seldom use them.  They never seem quite right for what I want to present and I don’t think all of them are accessible (508 compliant).  However, I use them to guide my design.  The templates provide examples of good layout and color palettes.  Also, Duarte’s templates exemplify effective practices for readability, such as ideal column width and line spacing.

“Presenting Data Effectively” by Stephanie Evergreen. I routinely consult this primer on presenting data when I write evaluation reports. Her book gets into the nitty-gritty of reporting evaluation results.  How do you choose font type? Where do you place data labels in a chart? How do you lay out a page to incorporate text and charts? She leaves no stone unturned in this book.

Free photos: Photos have their place in both written reports and presentation slides, particularly when they serve as visual metaphors for key findings. Google’s advanced search has a “usage rights” option that allows you to quickly find images online that are free to use or share. However, the quality of images from Google searches is variable.  I prefer to start with Pixabay, which provides consistently high quality pictures that are free to use.

Color Picker Tools:  Accent colors add visual interest to reports and direct readers’ attention to key findings. There are two color picker tools that I use routinely to find accent colors for my headers and graphics. PowerPoint now has an “eye dropper” feature that allows you to add custom colors that match images in your reports.  This is the fastest way to add a custom color to your theme palette.

However, when I have time to perfect my color choices, I rely on Adobe Color. You upload your image and Adobe Color shows you a palette of complementary colors to choose from. (I like to use my report cover photo or a screen snip of a logo or web page as my image.)  Adobe Color will allow you to adjust your palette to find, for example, brighter or more muted versions of your colors. Once you have the colors you want, you can get the RGB codes to create a custom color scheme for your report.

Two views of a line graph with four lines (four groups). The left is the graph as seen by the graph creator. It has three black lines and a red line to emphasize results from a subgroup. The right version of the graph shows how a color-blind person sees it. The red line is black and the other lines are lighter gray.

Color blindness checker: This exciting new multi-colored reporting world has its downside.  A small percentage of people are colorblind, so improper color choices may make your reports less understandable to them.  (The American Academy of Ophthalmology estimates that colorblindness affects 8% of men and 0.5% of women.) So it’s a good idea to check your images through an app like Vischeck.

The two charts above show the results of a Vischeck on a line graph I designed, where I made one line red to draw attention to results for one group. If you are not color blind, you will see that the left-hand chart has a red line to highlight a specific finding. The right chart shows what colorblind readers see: the line is darker, but it is not red. The darkness of the line does provide some contrast, so it is probably acceptable. But a different color or possibly a wider line would make that finding noticeable for all readers.

Printer: If your report is going to be printed and reproduced, chances are the copies will not be in color. I have learned the hard way to print my reports in black-and-white before distributing them to be sure the contrasts are still visible without color. If not, you can try varying intensity (gray versus black) or patterns (solid versus dotted lines).
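If you would rather check contrast before walking to the copier, a rough rule of thumb is to compare the perceived brightness of your colors: two colors with similar brightness become nearly identical in grayscale. Here is a sketch using the widely used Rec. 601 luma weights (the 50-point gap is my own arbitrary cutoff, not an official standard):

```python
def luminance(rgb):
    """Approximate perceived brightness (0-255) of an (R, G, B) color,
    using the Rec. 601 luma weights."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def distinguishable_in_gray(color1, color2, min_gap=50):
    """Rough check: will two colors still contrast on a grayscale copy?"""
    return abs(luminance(color1) - luminance(color2)) >= min_gap

# A saturated red and a dark gray look distinct on screen, but their
# brightness values are close, so a photocopy would blur the difference.
print(distinguishable_in_gray((200, 30, 30), (60, 60, 60)))  # False
```

This is the same reason the red line in the Vischeck example above survives mostly on darkness, not hue: in grayscale, only the brightness difference is left.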

There you have it: my go-to tools for creating evaluation reports. If you have others, I hope you’ll visit the NEO’s Facebook page and share them!

Here’s the full citation for Stephanie Evergreen’s book: Evergreen, SDH. Presenting data effectively. Los Angeles, CA: Sage, 2014.


From Logic Model to Proposal Evaluation – Part 2: The Evaluation Plan

Friday, September 2nd, 2016

Photo of black and white cat with fangs

Last week we wrote some basic goals and objectives for a proposal about teaching health literacy skills to vampires in Sunnydale.  Here’s what the goals and objectives look like, taken from the Executive Summary statement in last week’s post:

Goal: The goal of our From Dusk to Dawn project is to improve the health and well-being of vampires in the Sunnydale community.

Objective 1: We will teach 4 hands-on evening classes on the use of MedlinePlus and PubMed to improve Sunnydale vampires’ ability to find consumer health information and up-to-date research about health conditions.

Objective 2: We will open a 12-hour “Dusk-to-Dawn” health reference hotline to help the vampires with their reference questions.

There are also three outcomes that we have identified:

  1. Short-term: Increased ability of the Internet-using Sunnydale vampires to research needed health information.
  2. Intermediate: These vampires will use their increased skills to research health information for their brood.
  3. Long-term: Overall, the Sunnydale vampires will have improved health and as a result form better relationships with the human community of Sunnydale.

To get to an evaluation plan from here you have to know that there are basically two kinds of things you’ll want to measure: process and outcomes.

Process assessment measures whether you did what you said you would do, and whether you did it the way you said you would. For example, you can count the number of classes you taught and how many people attended, and check whether survey responses showed that participants thought you did a good job teaching.

You might also want to show that you are willing to make changes to the plan if a review of your process assessment shows that you aren’t getting the results you wanted.  For example, if you planned all your classes in early evening, but few vampires attended, you might interview some vampires, find out that early evening is mealtime for most vampires, and move your classes to a different time to increase attendance.  Your evaluation plan could show that you are collecting that information and that you will be responsive to what you see happening.

Outcome assessment measures the extent to which your project had the impact you hoped it would on the recipients of the project, or, more broadly, on their organizations or communities. We showed the first step of outcome assessment in last week’s assignment, but I’m going to break it down a little more here.  Put in basic terms, to do an outcome assessment, you state your outcome; add an indicator, a target, and a time frame to arrive at a measurable objective; and then write out the source of your data, your data collection method, and your data collection timing to complete the picture.  Let’s talk about each item here:

Indicator: This is the evidence you can gather that shows whether or not you met your outcomes.  If one of your outcomes is that the vampires have increased ability to research health information, how would you know if that had happened? The indicator could be their increased confidence level in finding health information, or it could be improvement in skills test scores given before and after a training session.

Target: The target is the goal that makes this project look like a success to you.  For example, if the vampires improve their test scores by 50% over a baseline test, is that enough to say you have successfully reached that outcome?  And how many of the vampires need to reach that 50% goal?  All of them? One of them?  Targets can be hard to identify, because you don’t want them to be too hard to reach, but if they’re too easy, your funder may not be impressed with your ambition.  Sometimes you can work with the funder or other stakeholders on setting targets that are credible.

Time frame: This is the point in time by which the threshold for success should be achieved.  So if you want to make sure the vampires increased their ability by the end of your training, then the time frame is simply “by the end of the training.”

Data Source: This is the location where your information is found. Often, data sources are people (such as participants or observers) but they also may be records, pictures, or meeting notes. Here are some examples of data sources.

Data Collection Methods: Data collection methods are the tools you use to collect data, such as a survey, observation, or quiz.  Here are more examples of data collection methods.

Data Collection Timing: This describes exactly when you will collect the data.

What does your final evaluation plan look like? 

Here is a sample piece of an evaluation plan for the Dusk to Dawn proposal.

Objective 1: teach 4 hands-on evening classes on the use of MedlinePlus and PubMed to improve Sunnydale vampires’ ability to research consumer health information and up-to-date research about health conditions.

Process Assessment: The PI will collect the following information to ensure that classes are being taught, expected attendance figures are being reached, and teachers are doing a good job teaching classes (including surviving the classes).  Data will be reviewed after each class and changes will be made to the program as needed to reach target goals:

◊ Participant roster to measure attendance figures
◊ Class evaluations to measure teacher performance
◊ Count of number of teachers at the beginning and ending of each class to measure survival of instructors
◊ Project team will meet after the second class to review success and lessons learned and to consider course corrections to ensure objectives are met

Outcome Assessment:
Measurable Objective: In a post-test given immediately after each class, a minimum of 75% of Sunnydale vampire attendees will demonstrate that they learned how to find needed resources in PubMed and MedlinePlus by showing at least a 50% improvement over the pre-test.

Based on Level 2 (Learning) in the Kirkpatrick Model, a test will be created with some basic health questions to be researched. Class participants will be given these questions as a pre-test before the class, and then will be given the same questions after the class as a post-test.  This learning outcome will be considered successful if a minimum of 75% of Sunnydale vampire participants demonstrate that their scores improved by at least 50%.
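Once the pre- and post-test scores are in, checking that target is simple arithmetic. Here is a sketch with made-up scores:

```python
def target_met(pre_scores, post_scores, min_gain=0.50, min_share=0.75):
    """True if at least `min_share` of participants improved their own
    pre-test score by at least `min_gain` (a proportional gain)."""
    improved = sum(
        1 for pre, post in zip(pre_scores, post_scores)
        if pre > 0 and (post - pre) / pre >= min_gain
    )
    return improved / len(pre_scores) >= min_share

# Hypothetical class of four vampires: three improved by 50% or more,
# so the 75% target is (just barely) met.
pre = [40, 50, 60, 80]
post = [70, 80, 95, 85]
print(target_met(pre, post))  # True
```

Writing the target down this explicitly also forces you to decide ahead of time what counts as "improvement": a proportional gain over each participant's own baseline, as here, or an absolute score threshold.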

Last wishes, I mean, thoughts

This is not a complete evaluation plan, but the purpose of these two posts has been to show how you can go from a logic model to the evaluation plan of a proposal.  Don’t worry if all your outcomes cannot be measured in the scope of your project.  For example, in this Dusk to Dawn project, it might have been dangerous to find out if the vampires had passed on needed health information to their brood, and even harder to find out whether the vampires had become healthier as a result of the information.  This doesn’t mean you should leave these outcomes out, but you may want to acknowledge that measuring some outcomes is beyond the scope of the project’s resources.

As Grandpa Munster once said “Don’t let time or space detain ya, here you go, to Transylvania.”

Photo credit: Photo of 365::79 – Vampire Cat by Sarah Reid on Flickr under Creative Commons license CC BY 2.0.  No changes were made.












From Logic Model to Proposal Evaluation – Part 1: Goals and Objectives

Friday, August 26th, 2016

Vocabulary. Jargon. Semantics.  Sometimes I think it’s the death of us all.  Seriously, it’s really hard to have a conversation about anything when you use the same words in the same context to mean completely different things.

Take Goals and Objectives.  I can’t tell you how many different ways this has been taught to me.  But in general all the explanations agree that a goal is a big concept, and an objective is more specific.

Things get complicated when we use words like Activities, Outcomes, and Measurable Objectives when teaching you about logic models as a way of planning a project.  Which of those words correlate with Goals and Objectives when writing a proposal for the project you just planned?

Bela Lugosi as Dracula

I’m going to walk through an example of how we can connect the dots between the logic model that we use to plan projects, and the terminology used in proposal writing.  There isn’t necessarily going to be a one-to-one relationship, and it might depend on the number of goals you have.

As has been stated in previous posts, we’ve never actually done any work with the fictional community of Sunnydale, a place where there was, in the past, a large number of vampires and other assorted demons.  But in order to work through this problem, let’s go back to this hypothetical post where we used the Kirkpatrick Model to determine outcomes that we would like to see with any remaining vampires who want to live healthy long lives, and get along with their human neighbors.  For this post, I’m going to pretend I’m writing a proposal to do a training project for them based on those outcomes, and then show how they lead to an evaluation plan.


Goals

The goal can be your long-term outcome, or it can be somewhat separate from the outcomes. Either way, your goal needs to be logically connected to the work you’re planning to do.  For example, if you’re going to train vampires to use MedlinePlus, goals like “making the world a better place” or “achieving world peace” are not as connected to your project as something like “improving the health and well-being of vampires” or “improving the health literacy of vampires so they can make good decisions about their health.”

Here is a logic model showing how this could be laid out, using the outcomes established in the earlier post:

Dusk to Dawn Logic Model

Keep in mind that the purpose of a proposal is to persuade someone to fund your project.  So for the sake of my proposal, I’m going to combine the long-term outcomes into one goal statement.

The goal of this project is to improve the health and well-being of vampires in the Sunnydale community.


Objectives

The objectives can be taken from the Activities column of the logic model. But keep something in mind: logic models are small, one page at most, so you can’t use many words to describe activities.  Objectives, on the other hand, are activities with some detail filled in. So in the logic model the activity might be “Evening hands-on training on MedlinePlus and PubMed,” while the objective in my proposal might be “Objective 1: We will teach 4 hands-on evening classes on the use of MedlinePlus and PubMed to improve Sunnydale vampires’ ability to find consumer health information and up-to-date research.”

Objectives in Context

Here’s a sample of my Executive Summary of the project, showing goals, objectives, and outcomes in a narrative format:

Executive Summary: The goal of our From Dusk to Dawn project is to improve the health and well-being of vampires in the Sunnydale community. In order to reach this goal, we will 1) teach 4 hands-on evening classes on the use of MedlinePlus and PubMed to improve Sunnydale vampires’ ability to find consumer health information and up-to-date research about health conditions; and 2) open a 12-hour “Dusk-to-Dawn” health reference hotline to help the vampires with their reference questions.  With these activities, we hope to see that a) Internet-using Sunnydale vampires increase their ability to research needed health information; b) those vampires use their increased skills to research health information for their brood; and c) they use this information to make good health decisions, leading to improved health and, as a result, better relationships with the human community of Sunnydale.

Please note that in this executive summary, I do not use the word “objectives” to identify the phrases numbered 1 and 2, nor the word “outcomes” to identify the phrases lettered a, b, and c (because I like the way it reads better without them). However, in the detailed narrative of my proposal, I would use those terms with those exact phrases.

So then, what are Measurable Objectives?

The key to the evaluation plan is creating another kind of objective: what we call a measurable outcome objective. When you create your evaluation plan, along with showing how you plan to measure that you did what you said you would do (process assessment), you will also want to plan how to collect data showing the degree to which you have reached your outcomes (outcome assessment).  These statements are what we call measurable outcome objectives.

Using the “Book 2 Worksheet: Outcome Objectives” found on our Evaluation Resources web page, you start with an outcome, add an indicator, a target, and a time frame, and write the result as a single sentence to get a measurable objective.  Here’s an example of what that would look like using the first outcome listed in the Executive Summary:

Dusk to Dawn Measurable Objective
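If you like to think of the worksheet mechanically, the four parts combine into one sentence. Here is a minimal sketch of that assembly; the function name, template wording, and all the values are invented for illustration, not taken from the worksheet itself:

```python
# Hypothetical sketch: assembling a measurable outcome objective from the
# four worksheet parts (outcome, indicator, target, time frame).
# The sentence template and every value below are invented examples.

def measurable_objective(outcome, indicator, target, time_frame):
    """Combine the four components into a single objective sentence."""
    return f"By {time_frame}, {target} of {indicator} {outcome}."

objective = measurable_objective(
    outcome="will demonstrate increased ability to find health information online",
    indicator="vampires attending a training class",
    target="75%",
    time_frame="the end of the project year",
)
print(objective)
# → By the end of the project year, 75% of vampires attending a training
#   class will demonstrate increased ability to find health information online.
```

However you phrase it, the point is that each component (indicator, target, time frame) is explicit, so you know exactly what data to collect later.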

We’ve gotten through some terminology and some steps for going from your logic model to measuring your outcomes.

Stay tuned for next week when we turn all of this into an Evaluation Plan!

Dare I say it? Same bat time, same bat channel…

Shop Talk SWOT Hack for Proposal Writers

Friday, August 19th, 2016

SWOT (strengths, weaknesses, opportunities, and threats) analysis, strategic planning method presented as diagram on blackboard with white chalk and sticky notes

Every self-respecting workshop has its share of hacks. Today’s post is about the NEO Shop Talk’s SWOT hack.

Most of our readers have heard of SWOT analysis, because of its widespread use in strategic planning. NEO developed its own special version of SWOT analysis to help our readers and training participants with preparation of funding proposals.  Our version of SWOT analysis is one of a number of methods on the NEO’s new resource page for proposal planning featured in last week’s post.

“SWOT” stands for Strengths, Weaknesses, Opportunities, and Threats.  Businesses use SWOT analysis to examine their organizations’ internal strengths and weaknesses, and to identify external opportunities and threats that may impact future success. Strategic plans are then designed to exploit the positive factors and manage the negative factors identified in the analysis.

SWOT analysis can be a great proposal-planning tool. After all, funding proposals are essentially strategic plans. The analysis will prepare you to write a plan that describes the following:

  • Your organization’s unique ability to meet the needs of your primary project beneficiaries (Strengths)
  • The weaknesses in your organization that you hope to address through the funding requested in your proposal (Weaknesses)
  • Resources external to your organization that you have discovered and can leverage for project success, such as experts, partners, or technology (Opportunities)
  • Potential challenges you have identified and your contingency plan for addressing them, should they arise (Threats)

Funding proposals do differ in one key way from organizational strategic plans: they are persuasive in nature. Your proposal must argue convincingly that an initiative is needed. It must also demonstrate your organization’s readiness to address that need. To make your arguments credible, you will need data, and you get that data from a community assessment. (I use the word “community” for any group that you want to serve through your project.) The NEO has tweaked the SWOT analysis process so that it can serve as the first step in the community assessment process.

Every SWOT analysis uses a chart.  We altered the traditional SWOT chart a bit, adding a third column.  In that column, you can record questions that arise during your SWOT discussion to be explored in your community assessment. Our chart looks like this:

NEO's version of the SWOT charts with a third column in gray for the internal and external unknowns

Here are the basic steps we suggest for facilitating a SWOT discussion:

  1. Convene a SWOT team. Ideally, representatives’ expertise and experience will lead to a thorough understanding of the internal and external factors that can impact your project. You want team members who know your organization well and those who know the beneficiary community well.  It’s great if you can find people who know both, such as key informants who belong to the beneficiary group and also use the services of your library or organization.
  2. Ask the group to brainstorm ideas for each of the six squares in the chart above. To record group input, facilitators favor poster-size SWOT charts pinned to the wall and stacks of sticky pads that allow team members to add their ideas to each square.
  3. Once you have exhausted the discussion of the six squares, see whether you have evidence to support the facts and ideas. Examine each idea on the chart, asking the following questions: (a) What source of information exists to support our claims about the identified strengths, weaknesses, opportunities, and threats? If you have no real evidence for an idea, it may need to be moved to an “unknown” square. (b) How important is it that we include this claim in our proposal? (c) If we do need to include it, is our data credible enough to support our claim? If it’s weak, how can we get more persuasive data or additional corroborating information?
  4. Now, work with your “unknowns.” How can you educate yourself about those gray areas? What data sources and methods can you use?
  5. At this point, you now know where to focus your community assessment efforts. Your last step is to make a community assessment plan. Assign tasks to team members and determine a data collection timeline.
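To make the evidence-triage step (step 3) concrete, here is a minimal sketch of NEO’s six-square chart as a data structure, with each claim paired with its supporting evidence. All claims and evidence sources below are invented examples; a claim with no evidence gets moved into the matching “unknowns” square:

```python
# A minimal sketch (invented data) of NEO's modified SWOT chart: the four
# traditional squares plus the two "unknowns" squares in the third column.
# Each entry is (claim, evidence source); None means no evidence yet.
swot = {
    "strengths": [("Experienced health reference staff", "annual service stats")],
    "weaknesses": [("No evening training space", "facilities inventory")],
    "opportunities": [("Local clinic willing to partner", "partner letter")],
    "threats": [("Competing community programs", None)],  # no evidence yet
    "internal_unknowns": [],
    "external_unknowns": [],
}

# Step 3: move any claim without supporting evidence into an "unknowns"
# square (internal factors -> internal unknowns, external -> external).
for square, unknown in [("strengths", "internal_unknowns"),
                        ("weaknesses", "internal_unknowns"),
                        ("opportunities", "external_unknowns"),
                        ("threats", "external_unknowns")]:
    for claim, evidence in list(swot[square]):
        if evidence is None:
            swot[square].remove((claim, evidence))
            swot[unknown].append(claim)

print(swot["external_unknowns"])  # → ['Competing community programs']
```

Whatever lands in the “unknowns” squares becomes your community assessment to-do list in steps 4 and 5.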

Once you have collected your data, your core project team can revisit the SWOT chart. Your community assessment findings should fit neatly into the four SWOT squares and, hopefully, you will have far fewer “unknowns.” Some of your community assessment findings will help you build your rationale for your project. Other information will help you refine your project strategies, which you will work out using another great planning tool from our proposal-planning page: the logic model.  For a group project-planning process, check out the NEO post on tearless logic models.


Evaluation Planning for Proposals: A New NEO Resource

Friday, August 12th, 2016

Angry crazy Business woman with a laptop

Have you ever found yourself in this situation?  You’re well along in your proposal writing when you get to the section that says “how will you evaluate your project?”  Do you think:

  1. “Oh #%$*! It’s that section again.”
  2. “Why do they make us do this?”
  3. “Yay! Here’s where I get to describe how I will collect evidence that my project is really working!”

We at the NEO suggest thinking about evaluation from the get-go, so you’ll be prepared when you get to that section.  And we have some great booklets that show how to do that.  But sometimes people aren’t happy when we say “here are some booklets to read to get started,” even though they are awesome booklets.

So the NEO has made a new web page to make it easier to incorporate evaluation into the project planning process and end up with an evaluation plan that develops naturally.

1. Do a Community Assessment; 2. Make a Logic Model; 3. Develop Measurable Objectives; 4. Create an Evaluation Plan

We group the process into 4 steps: 1) Do a Community Assessment; 2) Make a Logic Model; 3) Develop Measurable Objectives for Your Outcomes; and 4) Create an Evaluation Plan.   Rather than explain what everything is and how to use it (for that you can read the booklets), this page links to the worksheets and samples (and some how-to sections) from the booklets so that you can jump right into planning.  And you can skip the things you don’t need or that you’ve already done.

In addition, we have included links to posts in this blog that show examples of the areas covered so people can put them in context.

We hope this helps with your entire project planning and proposal writing experience, as well as provides support for that pesky evaluation section of the proposal.

Please let Cindy or me know how it works for you, and feel free to make suggestions.  Cheers!

The Kirkpatrick Model (Part 2) — With Humans

Friday, August 5th, 2016

Disclaimer: Karen’s blog post last week on the Kirkpatrick Model used an example that was hypothetical.  We want to be clear that the NEO has never evaluated any programs directed toward improving health outcomes for vampires.

However, we can claim success in applying the Kirkpatrick Model for National Network of Libraries of Medicine (NN/LM) training programs.

The NN/LM’s mission is to promote the biomedical and consumer health resources of the National Library of Medicine.  One strategy that is popular with NN/LM’s Regional Medical Libraries, which lead and manage the network, is the “train-the-trainer” program. These programs teach librarians and others about NLM resources so that they, in turn, will teach their peers, patients, or clients. When the NEO provides evaluation consulting for train-the-trainer programs, we rely heavily on the Kirkpatrick Model.

Kirkpatrick Outcome Levels and Logic Models

For example, the NN/LM’s initiative to reach out to community college librarians incorporated “train-the-trainer” as one of several strategies to promote use of NLM resources in community college health professions programs. While the initiative was multi-pronged, train-the-trainer sessions for community college librarians were a major strategy of the project. The Kirkpatrick Model helped our task force define outcomes and develop measures for this activity.

Our logic model led us to the following program theory:

If we train community college librarians to use National Library of Medicine Resources

  • They will respond favorably to our message (Reaction)
  • And discover new, useful health information resources that (Learning)
  • They will use when working with faculty and students (Behavior)
  • Which will lead to increased use of NLM resources among community college faculty and staff (Results)


Measuring Outcomes

We developed two simple measurement tools to assess the four outcome levels.  To measure Reaction, RML instructors administered a standard one-page session evaluation form that has been used for years by instructors who provide NN/LM training sessions. The form collects participants’ feedback, including the grade (A through F) they would assign to the class. This form was our measure of participant reaction.

The other three levels were assessed using a follow-up questionnaire sent to the training participants several months after their training. On this questionnaire, we asked them a series of yes/no questions:

Learning: At this training session, did you learn about health information resources that were NEW to you?

Behavior: Regarding the NEW resources you learned at the training, have you done any of the following?

  • Shown these resources to students?
  • Used the resources when preparing lesson plans?
  • Shown the resources to community college faculty or staff?
  • Used the resources to answer reference questions?

Results: Do you know if the resources are being used by

  • Students?
  • Faculty, administration, or staff at your organization?
  • The librarians at your institution?
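Because each follow-up item maps to a Kirkpatrick level, tallying the yes/no answers by level is straightforward. Here is an illustrative sketch; the response records are fabricated for the example, not our actual survey data:

```python
# Illustrative sketch (fabricated data) of tallying yes/no follow-up
# questionnaire answers by Kirkpatrick outcome level.
from collections import Counter

# Each record is one respondent's answers, keyed by outcome level.
responses = [
    {"learning": "yes", "behavior": "yes", "results": "don't know"},
    {"learning": "yes", "behavior": "no",  "results": "don't know"},
    {"learning": "no",  "behavior": "no",  "results": "no"},
]

def tally(level):
    """Count answers for one outcome level across all respondents."""
    return Counter(r[level] for r in responses)

print(tally("learning"))  # → Counter({'yes': 2, 'no': 1})
```

A tally like this quickly shows the pattern we describe next: answers thin out as you move from Learning toward Results.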

We knew our Results questions were weak. They obviously were very subjective. Most of the respondents said they didn’t know about use beyond their library staff members. Unfortunately, we did not have resources for a more objective measure of our anticipated results (e.g., surveying faculty and students at participating schools). Our dilemma was not unusual. Many practitioners of the Kirkpatrick Model agree that assessing Results-level outcomes can be costly and challenging.

However, in anticipation that this Results-level measure might not work, we had a back-up plan inspired by Robert Brinkerhoff’s Success Case Method (which we posted about here). In this approach, evaluators ask training participants to describe how their training benefited the organization.  We ended the questionnaire with the following open-ended question: Please describe how the training you received on National Library of Medicine resources has made a difference for you or your organization.

This question worked well, with 57% of respondents providing examples of how the training improved their customer services. They reported using NLM resources to provide reference services and incorporating NLM resources into their information literacy classes for health professions students.  Some were also talking to faculty about the importance of teaching health professions students about NLM resources they could use after graduation.

In the end, the Kirkpatrick Model helped us gather metrics and qualitative information to assess the effectiveness of our train-the-trainer activities.  Most of the training participants who responded to our follow-up questionnaire learned new resources and were promoting them to students and faculty. Their stories showed that the NN/LM training improved the services they were delivering to their users.

The NEO has drawn on the Kirkpatrick Model to design evaluation methods for similar projects, including our own evaluation training programs.  It is a great tool for helping program planners to define concrete objectives and create measures that are closely linked to desired outcomes.


Last updated on Monday, June 27, 2016

Funded by the National Library of Medicine under Contract No. UG4LM012343 with the University of Washington.