
NEO Shop Talk

The blog of the National Network of Libraries of Medicine Evaluation Office

Archive for June, 2015

Telling Good Stories about Good Programs

Monday, June 29th, 2015

Sometimes our program successes are a well-kept secret, hidden deep in our final reports under pages of statistics, tables, and descriptive details. There is a way to shine a stronger light on positive program impacts: program success stories. These are short (1-2 page) narratives that are designed to educate policy makers, attract partners, and share effective practices among colleagues.

The Centers for Disease Control and Prevention deserves credit for leading a program success story movement within the public health sector. You can find lots of resources at the CDC’s website for developing program success stories, and a quick Google search will turn up many more success story web pages from public health departments.

If you want to create success stories for your program or organization, you need to start with a plan. You want to establish a routine to collect information in a timely manner. To get started, check out the CDC Division of Oral Health’s Tips for Writing an Effective Success Story. For more details, the CDC offers the workbook Impact and Value: Telling Your Program’s Story. The CDC Division of Adolescent and School Health also has a how-to guide for writing success stories: Success Story Optional Tool. Finally, you might find this Success Story Data Collection Tool helpful for organizing and writing your program story.  A data collection sheet could be particularly useful if multiple team members are involved in collecting success story data. The data collection tool is available in PDF or Word formats.


Easy Email Newsletters for Keeping Stakeholders Informed

Friday, June 19th, 2015

Do you find it difficult to ensure that you are keeping your stakeholders up to date throughout your program? In her AEA Summer Institute class entitled “An Executive Summary is Not Enough: Effective Evaluation Reporting Techniques,” Kylie Hutchinson from Community Solutions suggests a very interesting product for staying in touch with stakeholders. The product is Constant Contact, a tool that allows you to put the latest stats, activities, or planning updates into a newsletter format that goes into the body of an email, allowing stakeholders to easily stay updated on the progress of your program. In addition, you, the evaluator, get feedback on who opens the email and what they click on.

Here is a sample email created for free in just a few moments, showing NN/LM stakeholders what activities took place in May in an imaginary community college outreach project.

Constant Contact

Components of Process Evaluation

Friday, June 12th, 2015

At the American Evaluation Association Summer Institute, Laura Linnan, Director of the Carolina Collaborative for Research on Work & Health at the UNC Gillings School of Global Public Health, led a workshop entitled Process Evaluation: What You Need to Know and How to Get Started. According to the CDC, process evaluation is the systematic collection of information on a program’s inputs, activities, and outputs, as well as the program’s context and other key characteristics.

Logic Model Image from CDC

Process evaluation looks at the specific activities that take place during an outreach project to ensure that planned interventions are carried out equally at all sites and with all participants, to explain why successes happen or do not happen, and to understand the relationships between the project components. Process evaluation can be extremely important in making adjustments to ensure the project’s success, and determining how or whether to do a project again.

In the workshop I attended, Linnan walked through the details covered in Chapter 1 of the book Process Evaluation for Public Health Interventions and Research by Laura Linnan and Allan Steckler. This chapter presents an overview of process evaluation methods. In it, they define a set of terms that describe the components of process evaluation (Table 1.1). These components are valuable to understand, because evaluators can look in detail at each component to determine which ones should be evaluated.

  1. Context
  2. Reach
  3. Dose delivered
  4. Dose received
  5. Fidelity
  6. Implementation
  7. Recruitment

In addition, the authors describe a step-by-step process for designing and implementing process evaluation in a flow chart shown in Figure 1.1, including: creating an inventory of process objectives; reaching consensus on process evaluation questions to be answered; creating measurement tools to assess process objectives; analyzing data; and creating user-friendly reports. And as a final note, Linnan and Steckler recommend that stakeholders be involved in every aspect of this process.
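Several of the components above are commonly reported as simple percentage measures. As a minimal sketch (the site data, field names, and formulas here are hypothetical illustrations, not taken from the Linnan and Steckler toolkit), reach, dose delivered, and dose received for one outreach site might be computed like this:

```python
# Hypothetical sketch: summarizing three process-evaluation components
# (reach, dose delivered, dose received) for a single outreach site.
# All field names and numbers are illustrative assumptions.

def process_summary(site):
    """Compute simple percentage measures from a site's activity counts."""
    # Reach: share of the target population that actually participated
    reach = site["participants"] / site["target_population"] * 100
    # Dose delivered: share of planned sessions that were actually held
    dose_delivered = site["sessions_held"] / site["sessions_planned"] * 100
    # Dose received: attendance as a share of all possible person-sessions
    dose_received = site["sessions_attended"] / (
        site["sessions_held"] * site["participants"]
    ) * 100
    return {
        "reach_pct": round(reach, 1),
        "dose_delivered_pct": round(dose_delivered, 1),
        "dose_received_pct": round(dose_received, 1),
    }

site = {
    "target_population": 200,   # eligible participants in the community
    "participants": 150,        # participants actually enrolled
    "sessions_planned": 10,
    "sessions_held": 8,
    "sessions_attended": 900,   # total attendance across all held sessions
}

print(process_summary(site))
# prints: {'reach_pct': 75.0, 'dose_delivered_pct': 80.0, 'dose_received_pct': 75.0}
```

A summary like this makes it easy to spot, for example, a site that delivered most of its planned sessions (high dose delivered) to only a fraction of the people it hoped to reach (low reach).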

Lesson Learned: Outputs are Cool!

Friday, June 5th, 2015


Cindy Olney and I just returned from the American Evaluation Association Summer Institute in Atlanta, GA. The blog posts for the next couple of months will be filled with lessons learned from the Institute. I am going to start with Outputs, because they were the greatest surprise to me.

In his “Introduction to Program Evaluation,” Thomas Chapel, Chief Evaluation Officer for the Centers for Disease Control and Prevention, said that he thought outputs were just as important as outcomes. This was quite shocking to me, since it always seemed like outputs were just the way of counting what had been done, and not nearly as interesting as finding out if the desired outcome had happened.

Outputs are the tangible products of the activities that take place in a project. For example, let’s say the project’s goal is to reduce the number of children with Elevated Blood Lead Levels (EBLL) by screening children to identify the ones with EBLL and then referring them to health professionals for medical management. In this brief project description, the activities would be to:

1) Screen children to identify the ones with EBLL
2) Refer them to health professionals for medical management

Because outputs are the tangible products of the activities, they are often defined as simple counts, like “the number of children screened for EBLL” and “the number of referrals.” These counts let the project manager confirm that the planned activities actually took place.

However, if you think about the way an activity can take place, you can see that some methods of completing the activities might lead to a successful outcome, and some might not. A better way of thinking about outputs is to ask: “What would an output look like that would lead to the outcome we are looking for?”

To use “referrals” as an example, let’s say that during the program 100% of the children identified with EBLL were referred to health professionals, but only 30% of them actually followed up and went to a health professional. If the only information you gathered was the number of referrals, you could not tell why the success rate was so low. Some of the things that can go wrong in a referral are that families are referred to physicians who are not taking new patients, or to physicians who don’t speak the same language as the child’s parents. So you might want to define the referral output to account for those factors. The new output measure could be “the number of referrals to ‘qualified’ physicians,” where ‘qualified’ is defined by the attributes you need the physicians to have, such as taking new patients or speaking the same language as the family.
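The refined output measure amounts to counting only the referrals that meet your definition of “qualified.” As a rough sketch (the referral records and the two qualifying attributes are hypothetical illustrations of the idea above, not an actual measurement instrument):

```python
# Hypothetical sketch of the refined output measure: count only
# referrals made to "qualified" physicians. The records and the
# definition of "qualified" are illustrative assumptions.

referrals = [
    {"accepting_new_patients": True,  "speaks_family_language": True},
    {"accepting_new_patients": False, "speaks_family_language": True},
    {"accepting_new_patients": True,  "speaks_family_language": False},
    {"accepting_new_patients": True,  "speaks_family_language": True},
]

def is_qualified(physician):
    """A referral counts only if the physician can actually see the child."""
    return physician["accepting_new_patients"] and physician["speaks_family_language"]

total = len(referrals)
qualified = sum(1 for r in referrals if is_qualified(r))
print(f"{qualified} of {total} referrals went to qualified physicians")
# prints: 2 of 4 referrals went to qualified physicians
```

Tracking the qualified count alongside the raw count shows immediately when referrals are being made that have little chance of producing the desired outcome.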

The lesson for me is that outputs are as important as outcomes. By thinking carefully about outputs at the beginning of the planning process, you can give the project the greatest chance of successful outcomes; and by monitoring outputs during process evaluation, you can make any needed corrections while the project is still underway.

Last updated on Monday, June 27, 2016

Funded by the National Library of Medicine under Contract No. UG4LM012343 with the University of Washington.