National Network of Libraries of Medicine

NEO News

The blog of the NNLM Evaluation Office

Free Resources to Help Communities Engage with Their Data

Mon, 2017-11-20 16:56

As you already know, the whole NEO team attended the Evaluation 2017 conference last week.  I learned enough to fill up quite a few blog posts. Today’s is about some free tools I found out about that can help communities get comfortable working with data.

I went to a presentation by the Engagement Lab at Emerson College. The purpose of this Lab is to re-imagine civic engagement in our digital culture. Engagement Lab has created a suite of free online tools to help the communities it works with engage with data, even if they are beginners. The products have super fun examples on each page so you can see if they would work for you.

The one I thought might be best for the NEO (and for this blog) is the one called WTFcsv which stands for what you probably think it stands for (there’s an introductory video that includes a lot of bleeping). The idea is that if you are new at using data and have a ton of data in a CSV file, what the bleep do you do with it?

The web tool has some examples you can look at to understand how the tool works. I like the “UFO Sightings in Massachusetts” example, which shows, among other things, the most common reported shapes/descriptions of UFO sightings in MA (“light” is the most common, followed by “circle”). It even comes with an activity guide for educators to help people learn to work with data.

I wanted to see how this would work with National Network of Libraries of Medicine data. A few years ago, NNLM had an initiative to increase awareness and use of National Library of Medicine resources, like PubMed and MedlinePlus. I uploaded the CSV file that had the results of that project, and the image here is a screenshot of the six charts it produced from the survey results (you can click on it to make it bigger). I think it did a good job of making charts that would give us something to talk about.
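
If you are curious what this kind of column summary looks like behind the scenes, here is a minimal Python sketch of the idea, assuming a hypothetical ufo_sightings.csv with a shape column; WTFcsv itself does this for every column automatically, no code required:

```python
import csv
from collections import Counter

# Hypothetical file and column names; WTFcsv builds this kind of
# summary for every column in an uploaded CSV automatically.
with open("ufo_sightings.csv", newline="", encoding="utf-8") as f:
    shapes = [row["shape"].strip().lower()
              for row in csv.DictReader(f)
              if row.get("shape")]

# Tally and print the most common reported shapes, as in the UFO example.
for shape, count in Counter(shapes).most_common(5):
    print(f"{shape}: {count}")
```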

The good news is that it only takes minutes to upload the data and see the results. Also, below the data is a paragraph with some suggestions of conversations you might want to have about the data. WTFcsv is a tool for increasing community interaction with data, so this is very helpful. The results stay up on the website for 60 days, so you can share the link with a group.

Most of the bad news has to do with trying to make an example that would look good in this blog. In order to find data that would make a nice set of images to show you, I went through a lot of our NNLM NEO data, and I did have to reformat the data in the CSV file for it to work nicely. But if you were using the tool as a starting point, it’s okay if your data don’t quite work with the WTFcsv resource at first – the purpose is to give you something to talk about, and it certainly does that (even if the something is that you might need to reconfigure your data a little).

Each chart title allows only a few characters, so I had to shorten the titles of the data’s columns to labels that may only partly represent the data. However, I was making something to show in a screenshot, which is not what this tool is designed for. If I had left the titles long, they would have appeared when you click on a chart to see additional information.

The presenter showed us three additional tools from the Emerson College Engagement Lab that are available for anyone to use for free.

Wordcounter tells you the most common words in a bunch of text, including bigrams (pairs of words) and trigrams (sets of 3 words together).  This can be a starting point for textual analysis.
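
As a rough sketch of what Wordcounter computes, here is a short Python example; the file name is a placeholder, and the regular expression is just one simple way to pull out words:

```python
import re
from collections import Counter

def ngram_counts(text, n):
    """Count n-grams: n=1 for words, n=2 for bigrams, n=3 for trigrams."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(zip(*(words[i:] for i in range(n))))

# Hypothetical input file name.
text = open("speech.txt", encoding="utf-8").read()
for bigram, count in ngram_counts(text, 2).most_common(10):
    print(" ".join(bigram), count)
```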

Same Diff compares two or more text files and tells you how similar or different they are.  Examples include comparing the speeches of Hillary Clinton and Donald Trump, or comparing the lyrics of Bob Dylan and Katy Perry, among others.
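
If you want a quick, rough stand-in for this kind of comparison, Python’s standard library can score how similar two texts are; this ratio is not necessarily the metric Same Diff itself uses, and the file names are hypothetical:

```python
import difflib

# Hypothetical file names.
a = open("dylan_lyrics.txt", encoding="utf-8").read()
b = open("perry_lyrics.txt", encoding="utf-8").read()

# Ratio of matching content: 0.0 means nothing shared, 1.0 means identical.
similarity = difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()
print(f"Similarity: {similarity:.0%}")
```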

Connectthedots shows how your data are connected by analyzing them as a network. Its sample chart maps Les Misérables: each node is a character, and each link indicates that two characters appear in a scene together.
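
Here is a minimal sketch of the same idea, using the third-party networkx library and a few made-up co-appearance pairs rather than Connectthedots itself:

```python
import networkx as nx  # third-party: pip install networkx

# Made-up co-appearance pairs in the style of the Les Misérables example;
# each edge means two characters share a scene.
edges = [("Valjean", "Javert"), ("Valjean", "Cosette"),
         ("Cosette", "Marius"), ("Marius", "Javert")]

G = nx.Graph(edges)

# Degree centrality scores: which characters appear with the most others.
for name, score in sorted(nx.degree_centrality(G).items(),
                          key=lambda item: -item[1]):
    print(name, round(score, 2))
```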

If you want to know more about applying these tools in a real life situation, the Executive Director of the Engagement Lab, Eric Gordon, has an online book called Accelerating Public Engagement: A Roadmap for Local Government.


Categories: RML Blogs

#Eval17 Highlights

Mon, 2017-11-13 17:27

The whole NEO team attended AEA’s Evaluation 2017 conference last week. I am still processing a lot of what I learned from the conference, and hope to write about it in more detail in the upcoming months. Until then, here are some of my highlights:

Workshops
I attended the two-day Eval 101 workshop by Donna Mertens, and the half-day Logic Model workshop from Thomas Chapel. Both workshops gave me a solid understanding of how evaluators plan, design, and execute their evaluations through hands-on training. I know I’ll be referring to my notes and handouts from these workshops often.

My customized conference tag.

Ignite presentations
The conference website defines these presentations as “20 PowerPoint slides that automatically advance every 15 seconds for a total presentation time of just 5 minutes.” Just thinking about creating such a presentation makes me nervous! The few that I saw have inspired me to work on my elevator pitch skills.

#Eval17Twitter
I attended a delicious lunch with fellow evaluators who are active on Twitter. Though I am not very active on that platform, they welcomed me and even listened to my elevator speech about why public libraries are amazing. The attendees worked in a variety of evaluation environments and came from all over the United States and around the world. It was a fun way to learn more about the evaluation field.

Sessions
It’s hard to pick a favorite session, but one that stood out was DVR3: No title given. Despite the lack of a title, the multipaper presentation will stay with me for a long time. The first presentation was from Jennifer R. Lyons and Mary O’Brien McAdaragh, who talked about a personal project sending hand-drawn data visualizations on postcards. The second presentation, by Jessica Deslauriers and Courtney Vengrin, shared their experiences using Inkscape in creating data visualizations.

First NEO meeting IRL
This was my favorite part of the conference. I’ve been working with the NEO for over a year, and yet this was the first time we were all in the same room together. It was such a treat to dine with Cindy and Karen, and to work in the same time zone. We also welcomed our newest member, Susan Wolfe, to the team. Look for a group photo in our upcoming Thanksgiving post.

Official banner for AEA's Evaluation 2017 conference.

I recommend that librarians interested in honing their evaluation skills sign up for the pre-conference workshops, and attend AEA’s annual conference at least once. It opened my eyes to all sorts of possibilities in our efforts to evaluate our own trainings and programs.

Categories: RML Blogs

Beyond Anecdotes: Story Collection Methods for Program Evaluation

Fri, 2017-11-03 15:18

Woodcut text stating “Share your story” with a cup of coffee beside it.

The promotora’s uncle was sick and decided it was his time to die. She was less convinced, so she researched his symptoms on MedlinePlus and found evidence that his condition probably was treatable. So she gathered the family together to persuade him to seek treatment. Not only did her uncle survive, he began teaching his friends to use MedlinePlus. This promotora (community health worker) was grateful for the class she had taken on MedlinePlus offered by a local health sciences librarian.

This is a true story, but it is one that will sound familiar to many who do health outreach, education, or other forms of community service. Those of us who coach, teach, mentor, or engage in outreach often hear anecdotes of the unexpected ways our participants benefit from engagement in our programs. It’s why many of us chafe at using metrics alone to evaluate our programs. Numbers usually fall short of capturing this inspiring evidence of our programs’ value.

The good news is that it isn’t difficult to turn anecdotes into evaluation data, as long as you approach the story (data) collection and analysis systematically. That usually means use of a standard question guide, particularly for those inexperienced in qualitative methodologies.

For easy story collection methods, check out the NEO tip sheet Qualitative Interview “Story” Methods. While there are many approaches to doing qualitative evaluation, this tip sheet focuses on methods that are ideal for those with limited budgets and experience in qualitative methods. Most of these story methods can be adapted for any phase of evaluation (needs assessment, formative, or outcomes). The interview guides for each method consist of 2-4 questions, so they can be used alone for short one-to-one interviews or incorporated into more involved interviews, such as focus groups. Every team member can be trained to collect and document stories, allowing you to compile a substantial bank of qualitative data in a relatively short period of time. For example, I used the Colonias Project Method for an outreach project in the Lower Rio Grande Valley and collected 150 stories by the end of this 18-month project. That allowed us to do a thematic analysis of how MedlinePlus en Español was used by the community members. Individual stories helped to illustrate our findings in a compelling way.

Do you believe a story is worth a thousand metrics? If so, check out the tip sheet and try your hand at your own qualitative evaluation project.

Note: The story above came from the project described in this article: Olney, Cynthia A. et al. “MedlinePlus and the Challenge of Low Health Literacy: Findings from the Colonias Project.” Journal of the Medical Library Association 95.1 (2007): 31–39. PMC free article.

Categories: RML Blogs

Welcome to Sunnydale

Fri, 2017-10-27 12:17

It’s the spookiest time of the year! To help celebrate, we’re visiting our favorite fictional town, Sunnydale.

If you’re a long-time reader of Shop Talk, you might already be familiar with the posts about librarians reaching out to the vampire population in Sunnydale. The first post about Sunnydale was Developing Program Outcomes using the Kirkpatrick Model – with Vampires, which featured librarians developing an outcomes-based plan for an evening class in MedlinePlus and PubMed. Since then, the librarians of Sunnydale have been busy creating logic models, evaluation proposals, and evaluating their social media engagement.

Annie the cat bares her fangs. Photo courtesy of Petsitter M.

Whether you’re a new subscriber or have been reading the Shop Talk since its inception, the Sunnydale posts allow us to have a little fun while teaching evaluation skills. We will update this list with new Sunnydale posts, so be sure to bookmark this page for future use.

We hope you enjoy this trip to Sunnydale, and have a fang-tastic Halloween!

Developing Program Outcomes using the Kirkpatrick Model – with Vampires
July 28, 2016 by Karen Vargas

The Kirkpatrick Model (Part 2) — With Humans
August 2, 2016 by Cindy Olney

From Logic Model to Proposal Evaluation – Part 1: Goals and Objectives
August 26, 2016 by Karen Vargas

From Logic Model to Proposal Evaluation – Part 2: The Evaluation Plan
September 2, 2016 by Karen Vargas

Beyond the Memes: Evaluating Your Social Media Strategy – Part 1
January 13, 2017 by Kalyna Durbak

Beyond the Memes: Evaluating Your Social Media Strategy – Part 2
January 20, 2017 by Kalyna Durbak

Finding Evaluator Resources in Surprising Places
April 21, 2017 by Kalyna Durbak

Logic Model Hack: Constructing Proposals
June 2, 2017 by Karen Vargas

Evaluation Questions: GPS for Your Data Analysis
September 8, 2017 by Cindy Olney

Photo Credits: Annie, Cindy’s cat, bares her fangs. Photo courtesy of Petsitter M.

Categories: RML Blogs

Social Exchange Theory and Questionnaires Part 2: Communication and Distribution

Fri, 2017-10-20 17:59

Handshake between a laptop and a human

Last week we talked about how to think about questionnaire design in terms of social exchange theory – how to lower perceived cost and raise perceived rewards and trust in order to get people to complete a questionnaire.

But there’s more to getting people to complete a questionnaire than its design.  There are the words you use to ask people to complete your questionnaire (often in the form of the content of an email with the questionnaire attached).  And there’s the method of distribution itself – will you email? Mail? Hand it to someone? How many times should you remind someone?

As we said in the previous post, Boosting Response Rates with Invitation Letters, we recommend Dillman’s Tailored Design Method (TDM) as a technique for improving response rates. In TDM, in order to get the most responses, you might communicate with your respondents four times: for example, an introduction email before the questionnaire goes out, the email with the questionnaire attached, and two reminders. How does this fit with social exchange theory?

Let’s go back to the three questions we said in the last post that you should always consider, and apply them to communication about and distribution of your questionnaire:

  • How can I make this easier and less time-intensive for the respondent? (Lower cost)
  • How can I make this a more rewarding experience for the respondent? (Increase reward)
  • How can I reassure the participants that it is safe to share information? (Increase trust)

Remember, when you are asking someone to complete a questionnaire for you, you are asking them to take time out of their lives that they cannot get back.  Remember too that they have been asked to complete many, many questionnaires in the past that have taken up too much of their time, annoyed them, or have clearly been designed to manipulate them into donating money or in some other way abused their trust.  You have to use your communication and distribution strategy to overcome these obstacles. Here are just a few ideas.

Decrease Perceived Cost

  • Be sure respondents have multiple ways of contacting someone with questions. This reduces the possibility that someone will put off responding to the questionnaire because they have a question and have to figure out who to ask.
  • Add a “reply by” date in all of your invitation emails. Many people find it easier to follow up with a task if there is a clear deadline.

Increase Perceived Reward 

  • Ask respondents directly for their help and tell them specifically why you are asking for their opinions. This may help your respondents understand that they are uniquely able to answer the questions and feel like they are contributing to something. For many people, this is a reward.
  • If you can afford to, consider including a token gift with your first invitation email. A small gift of $1 or $2 as a token of appreciation demonstrates the trust and respect that the organization distributing the questionnaire has for its respondents. And research shows that it produces better results than a larger amount of money given only to people who respond.

Increase Perceived Trust

  • Have someone your participants trust endorse your project (for example, by signing your pre-letter or posting to a newsletter or blog they read). This shows that your project is trusted among their peers.
  • Tell them how you will keep their responses confidential and secure.
  • For mail surveys, use first-class postage – this will increase their trust that you take the questionnaire seriously.

Source: Dillman DA, Smyth JD, and Christian LM. Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method, 4th edition. Hoboken, NJ: Wiley; 2014.

Categories: RML Blogs

Social Exchange Theory and Questionnaires Part 1: Questionnaire Design

Mon, 2017-10-16 16:40

Getting a high response rate is an important part of trusting the information you get from a questionnaire. Don Dillman, a guru of questionnaire research, says that to get a good response rate it helps to see questionnaires as part of a social exchange. Social Exchange Theory is the theory that “people are more likely to comply with a request from someone else if they believe and trust that the rewards for complying with that request will eventually exceed the costs of complying.”1 Specifically, he says that when designing your questionnaires, distributing them, or communicating about them, you need to think about ways to lower the perceived cost of responding to the questionnaire, and to increase perceived rewards and perceived trust.

Social Exchange Theory Diagram

What do we mean by perceived cost, rewards and trust?  A cost might be the amount of time it takes to do the survey, but the perceived cost is how long that survey feels to the person answering it.  For example, if the survey is interesting, it could be perceived as taking less time than a shorter, but confusing or poorly worded survey.  A reward could be an actual monetary reward, or it could be the reward of knowing that you are participating in something that will make important change happen.  Perceived trust could be trusting that the organization will make good use of your responses.

Today I will only focus on questionnaire design — in future blog posts we will write about how social exchange theory can be used in communicating about and distributing your questionnaires.

One of the things I like about social exchange theory in questionnaire design is that normally I would be looking at the questions I’m writing in terms of how to get the information that I want.  This is fine of course, but by looking at the questions from a social exchange perspective, I can also be thinking about ways I might write questions to improve someone’s likelihood of completing the survey.

Ask yourself these three questions:

  • How can I make this easier and less time-intensive for the respondent? (Lower cost)
  • How can I make this a more rewarding experience for the respondent? (Increase reward)
  • How can I reassure the participants that it is safe to share information? (Increase trust)

Here are some ideas that might get you started as you think about applying social exchange theory to your questionnaire design.

Decrease Cost

  • Only ask questions in your survey that you really need to know the answers to so you can keep it as short as possible.
  • Pilot test your questionnaire and revise to ensure that the questions are as good as possible to minimize annoying your respondents with poorly worded or confusing questions.
  • Put open-ended questions near the end.

Increase Reward 

  • Ask interesting questions that respondents want to answer.
  • As part of the question, tell the respondent how the answer will be used, so they feel that by answering the question they are being helpful (for example “Your feedback will help our reference librarians know how to provide better service to users like you.”)

Increase Trust

  • A status bar in an online survey lets respondents know how much of the survey is left and helps them trust that they won’t be answering questions for too long.
  • Assure respondents that you will keep responses confidential and secure. While this may have already been stated in the introduction, it could help to state it again when asking a sensitive question.

For more information, see:

NNLM Evaluation Office: Booklet 3 in Planning and Evaluating Health Information Outreach Projects series: Collecting and Analyzing Evaluation Data https://nnlm.gov/neo/guides/bookletThree508

NEO Shop Talk Blog posts on Questionnaires: https://news.nnlm.gov/neo/category/questionnaires-and-surveys/

NEO Questionnaire Tools and Resources Guide on Data Collecting: https://nnlm.gov/neo/guides/tools-and-resources/data-collection

——————

1 Dillman, Don A., et al. Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method. John Wiley & Sons, Incorporated, 2014. ProQuest Ebook Central, p. 24.

Categories: RML Blogs

DIY: Build Your Own Culture of Evaluation

Fri, 2017-10-06 14:10

Group of business people holding large puzzle pieces that fit together, to signify working together to understand evaluation data.

Our organization has a culture of evaluation.

Oooh, doesn’t that sound impressive? In fact, I confess to using that term, culture of evaluation, in describing the NNLM Evaluation Office’s mission. However, if someone ever asked me to explain concretely what a culture of evaluation actually looks like, it would have taken some fast Googling, er, thinking on my part to come up with a response.

Then I discovered the Community Literacy of Ontario’s eight-module series, Developing A Culture of Evaluation. In module 1, Introduction to Evaluation, they ground the concept in seven observable indicators seen in organizations dedicated to using evaluation for learning and change. (You can read their list on page 11 of module 1).

That led me on a  hunt for more online resources with suggestions on how to build a culture of evaluation. I located some good ones.  Here’s an infographic from Community Solutions Planning and Evaluation with 30 ideas for evaluation culture-building that most nonprofits could adopt. John Mayne’s brief Building an Evaluation Culture for Effective Evaluation and Results Management describes what senior management can do to make (or break) an organization’s culture of evaluation. My investigation inspired me to think of ways we can all foster a culture of evaluation in our own teams and organizations.

Put Evaluation Information on Meeting Agendas

Embrace organizational learning and use evaluation information as your primary resource. Find ways to integrate performance and outcome measures into daily planning and decision making. A good place to start is in staff or team meetings. Usage statistics, social media metrics, and attendance or membership rates are examples of data that many organizations collect routinely and that might generate good discussion about your programs. If you don’t have any formally collected data related to agenda topics, consider asking your team to collect some data informally. Check out module 3, Collecting Data, for examples of both informal and formal data collection guidance. The series’ Taking Action module has some practical examples of how you can share evaluation data and structure discussions. (I particularly like the template on page 9 of that module.)

Take Calculated Risks Using Evaluation Data

When planning programs, collect and synthesize evaluation data to get an overview of factors that support and challenge your likelihood of success. One of the best tools for doing this is a SWOT analysis (SWOT stands for Strengths, Weaknesses, Opportunities, Threats). This NEO Shop Talk post describes how to extend the traditional SWOT discussion to identify unknowns regarding your program’s success. The SWOT analysis can help you both synthesize existing information about your customers and environment and identify areas where you need more information. You might want to revisit module 3’s discussion of informal data collection for times when you lack existing evaluation information.

Report Findings Early and Often

Like cockroaches, exhaustive final reports will likely survive until the end of time. If you are truly committed to a culture of evaluation, however, you need to break with this end-of-project tradition and find opportunities to share findings on an ongoing basis. Data dashboards are one example of how to engage a broad audience in your organization’s evaluation data, but they require time and expertise that may be out of reach for many organizations. One nice tip from the Community Solutions 30-ideas infographic is to make friends with your organization’s communication team. They can help you find opportunities in publications, websites, and social media channels to share evaluation findings. Your job will be to add substance to the numbers. While quick facts can be interesting, it is better to talk about numbers as evidence of success. You also should not be shy about publishing less stellar findings and explaining how your organization is using them to improve programs and services.

Engage Stakeholders in the Evaluation Process

A stakeholder is anyone who has a stake in the success of your program. They should, and usually do, influence program decisions. It’s up to you to make sure they are engaging with evaluation information as they develop informed opinions and advice. Rather than only giving them well-synthesized findings in annual reports or presentations, engage them in the actual practice of evaluating programs. NEO Shop Talk has a number of posts that can help you structure meetings and discussions with stakeholders about evaluation findings. Check out these posts on data parties, audience engagement, and Liberating Structures.

Of course, a culture of evaluation requires foundational evaluation activities. I highly recommend all of the modules in Community Literacy of Ontario’s Developing A Culture of Evaluation. The content is succinct, easy to read, and relatively jargon-free. (The jargon they do use is defined.) The NEO’s booklet series “Planning and Evaluating Health Information Outreach Projects” is another how-to resource on the basics of evaluation.

Acknowledgements

The full citation for John Mayne’s paper is

Categories: RML Blogs

My First Logic Model Experience

Fri, 2017-09-29 12:14

This past Wednesday Karen and Cindy held a logic model workshop, which included crafting outcomes, for NNLM staff. Most people probably did not know that it was the first time I worked with a logic model! Here are a few of my takeaways about logic models and writing outcomes:

A logic model

The logic model we created during the workshop. Click on the logic model to enlarge the picture.

You don’t have to come up with outcomes out of thin air.
Thinking of theories such as the Diffusion of Innovation and the Kirkpatrick Model can help you write out your outcomes.

In logic models, outcomes are about someone or something else.
Karen suggested that we “think in the 3rd person” and not write sentences with a first-person subject. In the logic model above, the short-term and intermediate outcomes describe what the participating librarians do. In the long-term outcomes, the logic model describes what happens to the public and to the library. Activities are what you do; outcomes are what the participants do.

If you don’t have any control over something, it’s probably an outcome.
Cindy pointed out that many people do not like committing to outcomes because they cannot fully control an outcome’s success or failure. We cannot control whether our fictional public librarian program in St. Louis will lead to 99% of library staff starting at MedlinePlus when they have a health question, but it’s something our program could strive for.

You don’t have to measure every single outcome.
Why would you write down a hard-to-measure outcome? It details the why of a program. Why is it important to teach librarians how to use MedlinePlus? Why should librarians use MedlinePlus to help with reference questions? Librarians should be taught how to use MedlinePlus because it will “improve health literacy in the public with more reliable health information.” Having the end in mind will help steer the program to success. There’s a reason we love using the Yogi Berra quote “If you don’t know where you’re going, you might not get there”: it’s true!

Logic models can be used for many things.
Logic models are not just for evaluation planning. Inputs can help an organization figure out a program’s budget. Brainstorming external factors can lead to potential supporters, or reveal barriers. Here’s the chart from the workshop’s PowerPoint:

A chart showing how you can use a logic model to write a proposal. Inputs = budget; activities = strategies; reach = reach of outreach; outcomes = results and evaluation; external factors = support and barriers; assumptions = reviewers’ questions.

This was my first logic model workshop, but it’s not my last. I am signed up for the half-day workshop Logic Models for Program Evaluation and Planning during the American Evaluation Association’s Eval17 conference in November. See you there!

Categories: RML Blogs

Because People Aren’t Gold Fish: AEA Audience Engagement Strategies

Fri, 2017-09-22 16:01

Goldfish in a bowl making eye contact with a cat.

Here’s a statistic to ponder.  According to a study done by Microsoft, as reported in Time magazine, the average adult human has a shorter attention span than a goldfish. Goldfish can pay attention for 9 seconds, but you’ll lose a human after 8.

Think of that factoid the next time you complain about having only 15 minutes to give a meeting or conference presentation. This attention-span reality may be particularly problematic for those presenting evaluation results, which often feature lots of facts and figures. Using an N of 1 (myself), I’ve found attention span shortens as data density increases.

The solution is to sprinkle audience engagement strategies throughout your presentation to keep recapturing your audience members’ attention. For tips and techniques, the American Evaluation Association has you covered. Check out the no-cost, downloadable Audience Engagement Strategy Book by Sheila Robinson, one of the many resources provided on the AEA’s Potent Presentations Initiative (aka p2i) web page.

This Audience Engagement Strategy Book presents 20 strategies to draw your audience members into your presentation. You’ll find a range of strategies, from the most subtle, such as eye contact or simple polls, to out-of-the-seat activities, such as writing ideas on sticky notes and posting them around the room.

The book provides an assessment of all 20  strategies on six dimensions, such as amount of time and cost, ease of application, and the amount of movement required from your participants. Each strategy also receives a brief write-up, with a one-paragraph description and a tip or two for implementing the technique. The book ends with a short list of other resources for audience engagement methods. The Audience Engagement Strategy Book cleverly uses graphics to convey a lot of information in 15 pages. (A nod to readers’ attention span, perhaps?)

Of course, audience engagement is just one element of a good presentation. Delivery is built on good content and design.  If there is a conference presentation in your future, this entire website is useful to you.  The p2i Tools and Guidelines page will help you build a good presentation from the ground up. You also will find tips for posters. Even if you are presenting somewhere other than a professional or academic conference, you will find these resources helpful.

So, check out the p2i initiative. And when it comes time to give a presentation, get ready, get set, and, in the words of Starfleet Captain Jean-Luc Picard:

Engage!

Categories: RML Blogs

Results-Based Accountability and Outcomes-Based Evaluation: Same Thing?

Fri, 2017-09-15 17:29

Green line graph

We are currently working with performance measures and indicators in assessing the programs of the NNLM. As a starting point, we’ve looked to the Common Metrics Initiative being used by Clinical and Translational Science Award (CTSA) institutions. CTSA institutions vary widely in their projects, but they are using a set of common metrics to demonstrate measurable improvements towards advancing translational research and workforce development.

The Common Metrics Initiative is based on the Results-Based Accountability (RBA) Framework. A guide to understanding this concept is on the Illinois Department of Human Services website: The Results-Based Accountability Guide. This was my first encounter with Results-Based Accountability. While it has a lot in common with outcomes-based evaluation, it has some additional facets.

One thing they have in common is that the first thing you do is start at the end – the change you want to see – and then work backwards to the activities. In RBA, however, one interesting distinction is whether the desired “end” calls for “population accountability” or “performance accountability.” Population accountability is used if the desired end is to improve the quality of life for a population. Performance accountability is used if the desired end concerns how well a program, agency, or service system is performing.

In the vocabulary of RBA, the metrics used for “population accountability” are called indicators, and they are what you would measure to show the changes you want to see in a population.  At this point, I’m thinking “aha, population accountability is like outcomes evaluation, and performance accountability is like process evaluation.”

But not so fast. According to the RBA framework, in performance accountability, the metrics are called performance measures, and they show how well your program is performing.  Here’s where it’s different from process evaluation.  In performance accountability, there are three kinds of performance measures:

  • How much are we doing?
  • How well are we doing it?
  • Is anyone better off?

Ultimately, that last kind of performance measure, “is anyone better off,” is what we would have called an outcome measure. But in terms of RBA’s performance accountability, the focus is whether the program is doing what it needs to do to ensure that someone is better off. Performance accountability includes outcomes in its understanding of the degree to which a program has performed successfully.

There is a lot more than this to the process of Results-Based Accountability. In fact, I feel a little guilty leaving it at this.  However, you can read the guide for yourself.  If we use any of the processes they recommend, we will be sure to share how they work here in the Shop Talk.

Categories: RML Blogs

Evaluation Questions: GPS for Your Data Analysis

Fri, 2017-09-08 13:11

GPS

Have you ever found yourself staring at a flash drive or folder full of evaluation data, wondering how the heck to make sense of it? Maybe you collected a bunch of information as a requirement for a funded project. Maybe a VIP handed you data files and tasked you with “producing a report on the findings.” Now, there you sit, trying to figure out how to move forward.

You know what’s missing in this scenario? Evaluation questions. Data cannot provide you with answers if no one ever articulated questions.

It is ideal, of course, to write your evaluation questions before you design your data collection processes. Otherwise, to misquote the great Yogi Berra, “if you don’t know what your evaluation questions are, you might not answer them.” As with most evaluation planning, it’s best to get your stakeholders involved in question development, to be sure you answer their questions as well as your own.

However, we don’t live in an ideal world most of the time. The good news is, you can introduce evaluation questions during the analysis process.  Whenever you suffer from data-based confusion, stop and think “Wait, what exactly do I need to know here?”  List your questions, then figure out how to analyze your data to answer them.

For clarification, I’m not talking about the questions used to collect a single bit of information, like those used in surveys and interview guides. Evaluation questions are related to your program, broadly defining all the information you need in order to implement a program and assess its value. Specifically, your questions identify information to (a) plan well, (b) conduct and adapt your activities as needed, or (c) make informed decisions about the value of a program. These questions also set boundaries for your data collection methods, helping you collect only information that is useful to you. Ultimately, your evaluation questions guide your data collection methods, analysis, and reporting.

Revisiting Sunnydale

To demonstrate sample evaluation questions, let’s revisit our From Dusk to Dawn Project, the fictional health information outreach project to vampires featured in other NEO blog posts. In case you are unfamiliar with our un-outreach to the un-dead, the project summary below recaps it. Evaluation questions should be written for three key decision-making points in a program: planning, implementation, and outcomes/value assessment. Here are examples of some evaluation questions I would use if I were evaluating the From Dusk to Dawn program.

Planning: What do we need to know to plan this program?

Project summary: The goal of the From Dusk to Dawn project is to improve the health and well-being of vampires in the Sunnydale community. To reach this goal, the program offers hands-on evening classes on using MedlinePlus and PubMed to find health information and up-to-date research about health concerns, along with an overnight “Dusk-to-Dawn” health reference hotline to help the vampires with their reference questions. With these activities, we hope to see increased ability of the Internet-using Sunnydale vampires to research needed health information, vampires using their increased skills to research health information for themselves and their broods, improved health of Sunnydale vampires, and improved relationships between the vampire and human communities of Sunnydale.

  • What qualities does our program need to attract vampires?
  • What community organizations currently serve vampires and might want to partner on this program?
  • What challenges do we face in implementing our plans?

Implementation: What are we doing and how well are we doing it?

  • How many calls does the From Dusk to Dawn Hotline get per night from vampires seeking health information?
  • How satisfied are the vampires with the training sessions?
  • How do the reference librarians and instructors suggest we improve the program activities?

Outcomes: What did we accomplish with this program?

  • Did we meet our objective targets?
  • What unanticipated positive or negative outcomes came from our efforts?
  • What reactions do different stakeholder groups have to the outcomes of this program? (Stakeholders include vampires and their families, Sunnydale city leaders, vampire-serving CBOs, and human residents.)

If the project team starts with these questions, we can develop questionnaires, interview guides, and other evaluation methods with precision, collecting the most necessary and useful data.  These questions also will frame our data analysis and structure our reports to stakeholders.

Other Examples

If you want more examples of evaluation questions, check out these resources:

It takes resolve to begin planning an evaluation by articulating evaluation questions, but it pays off BIG in the end.  If you are on teams or work groups initiating evaluation projects, press for the group to do their prep work.  Persuade them to start with evaluation questions.

Repeat after me: Friends don’t let friends do question-free evaluation.


Categories: RML Blogs

Postcard from Houston

Fri, 2017-09-01 16:24

Post-hurricane street in Karen’s Houston neighborhood.

If you visited NEO Shop Talk this week, you know that blogger Karen Vargas shared how her Hurricane Harvey ordeal sharpened her evaluator skills. Her story ran on Monday, when Houston was still in the thick of the epic disaster. For at least 48 more hours, the family carefully monitored the Buffalo Bayou water levels and worked out alternative responses to employ if the flood waters reached her block. In the end, her family remained safe and dry throughout. Now that the rain has stopped and the water is slowly receding, they believe the imminent danger has passed.

Her story does highlight some lessons we can apply in evaluating “non-emergency” programs:

  • A logic model is meant for adapting. It is easy, when survival is NOT on the line, to believe a logic model is only useful for program planning. Our programs would be so much better if we kept assessing and adapting our planned activities to meet the changing environments of our programs.
  • More heads are better than one. Karen said both her initial and alternative plans were infinitely better because of input from her neighbors. The block “team” collectively had more ideas, experiences, and resources than any one person could offer. Karen believes this proves an important planning and evaluation principle: don’t plan your program in isolation if you can help it.
  • Recognize when a logic model has served its purpose and move on to planning a new program.  Now that the storm threat has subsided, Karen and her family are facing an entirely new set of needs. She has to find food to restock their pantry, daycare for her daughter until the schools reopen, and volunteer outlets to help their community return to normal. She said it’s time to replace the Harvey Response Plan with the Harvey Recovery Plan. Sometimes, though, project teams may not recognize when it’s time to stop tweaking activities and  seriously reconsider the relevance or viability of our outcomes.  We are hesitant to turn the page.

Karen’s last bit of advice: Always apply lessons learned from prior experiences and make note of new ones. From previous hurricanes, she knew to buy gas and water first. This time, she discovered that non-grocery retail stores, like department and drug stores, are better outlets for food when the masses are cleaning off shelves at the local grocery store.

The rest of the NEO staff was in close touch with Karen through the week, thanks to Skype and continued electricity at Karen’s end.  We, too, are just starting to exhale.  Our hearts go out to the Houstonians who were less lucky than our colleague and hope for the best possible holiday weekend for all.


Categories: RML Blogs

Special Edition: 5 Ways that Living Through a Hurricane Made Me a Better Evaluator

Mon, 2017-08-28 16:26
Photo of flooded street in Houston

A street by my house flooded with Buffalo Bayou

Setting the scene:

Houston is mostly flat, so all geographical changes are subtle.  We live on a high-ish point about a quarter mile from Buffalo Bayou, one of the main watersheds for Houston.  Historically our neighborhood has stayed dry, even in the fiercest storms, like Tropical Storm Allison and Hurricane Ike.  It takes a lot of water for Buffalo Bayou to work its way up the hill to our house.

But Hurricane Harvey seems intent on trying. Several times a day we have been walking down the street to see how high the Bayou is. At its highest (so far) it was about a block from our house, which is closer than it’s ever been. The rain is still falling, the storm is still trying to figure out what direction to go, and the Army Corps of Engineers has released a lot of water from a couple of upstream reservoirs into Buffalo Bayou (at the time of this writing, we don’t know how that’s going to turn out).

Important things in our favor:

  • We still have electricity (one of the interesting side effects is that, since I work at home, I can still go to work).
  • We have plenty of food and water.

Things that we’ll call challenges:

  • We have an energetic 4-year-old
  • Everything in Houston is closed
  • Streets are full of water, so even if things weren’t closed we couldn’t get there

So what have I learned from this experience that will make me a better evaluator?

Lesson 1: Starting with an evaluation question focuses your thinking. An evaluation question states what we want to find out from the evaluation of our project. In our case, we knew last week that this mess was coming and had plenty of information to assume we would be stuck in the house by ourselves for anywhere from 3 to 5 days (it seems to be turning into 7 days, but we had the basic idea). We didn’t know whether there would be electricity, but we did know schools were going to be closed Monday (today) and likely to be closed for longer (now we know that they will be closed all week). So I think our evaluation question is: How successful are we at preparing for this natural disaster in a way that keeps our family safe and our child from going bananas?

Lesson 2: Start with a logic model.  As an evaluator, I know that even if there wasn’t an evaluation plan up front (ahem), if you’re called in to evaluate something in the middle or end of a project, you should still start by making a logic model. So here it is, even though our planning took place without a logic model.

Logic model of hurricane planning

We can use the logic model to build outcome indicators and set targets to create measurable objectives, like “number of positive experiences for daughter is greater than the number of negative ones, all week long.”

Lesson 3: Good planning makes it easier to change course.  Of course it’s not possible to plan for everything, especially when the hurricane appears in the Gulf of Mexico as a mixed up bunch of clouds and two days later is a category 4 hurricane.  The 6.5 million people who live in the Houston area planned ahead when they heard about the storm, and therefore stores were empty of water and perishables. If we had really planned ahead, we would already have our hurricane kit ready.  That said, we aren’t newbies to hurricane preparedness, and we knew about going to places other people hadn’t thought of, like going to gas stations to get water.

But back to changing course.  The plan was always to stay in our house, which has always braved the Houston storms because it is on a “hill.”  However, this storm is like no other, and the Bayou rose higher than it has ever risen.  So yesterday we needed to look at alternatives, especially since it was getting dark and the streets were flooded.

Because we had done a lot of preparation, we were able to change course and pack all of our food and our supplies and our animals for a quick evacuation to a nearby parking garage or a trip up to the attic, whichever turned out to be the better plan.

Lesson 4: Connections matter. In a good program evaluation, you have a team of advisors, or stakeholders, who are invested in the success of your project. In this storm, those people are our neighbors, friends, family, and coworkers. It has been important to keep those people involved so they know when we last reported in, to get moral support, and to exchange ideas and information, which has been invaluable. We learned things like: for heaven’s sake, don’t go up into the attic if you don’t have an ax to get out of the attic.

Last night, as we were concerned about the water continuing to rise in the dark, we brainstormed with our across-the-street neighbors (part of our advisory team) to develop additional plans for what we could do if the water kept rising. Since we had all been walking down the block to check the height of the water, we agreed to take shifts so we could all get some sleep, texting each other the water levels. Working collaboratively with our advisory team produced stronger plans and helped us stay healthier and better able to make decisions today.

Lesson 5: Outcomes outcomes outcomes.  Our number one outcome is that we survive this adventure.  It’s an important outcome because none of the other outcomes will work without that one.  But I put lessons learned as the interim outcome, and better preparedness for the next hurricane as the long-term outcome.  Why are lessons learned and improved preparedness outcomes?  I probably don’t need to say that living on the Gulf of Mexico provides a lot of opportunities to practice hurricane preparedness.  Each one teaches us lessons that we can use for the next one. And years from now (please let it be years from now) when we have another storm, we would like to survive that one too.  By making “lessons learned” an outcome, when we evaluate this storm we will see whether we’ve learned anything that we can use to survive the next one.

For more information on:

Categories: RML Blogs

SOAR: An Appreciative Inquiry Approach for Strategic Planning

Fri, 2017-08-25 10:00

By guest blogger Michelle Eberle, Consultant, Massachusetts Library System

Close-up of a woman drawing a lightbulb on a flip chart with stickers

In my new position with the Massachusetts Library System (MLS), one of my roles is providing consultation for strategic planning. The Massachusetts Library System, a non-profit funded by the Commonwealth, supports all types of libraries (public, school, academic, and special). Yes, that includes health sciences libraries, too! Libraries in Massachusetts are required to complete a strategic plan approved by the Massachusetts Board of Library Commissioners (MBLC) in order to qualify for Library Services & Technology Act (LSTA) grants and the Massachusetts Public Library Construction Program. The MLS provides professional development on strategic planning and will facilitate a visioning exercise called SOAR for member libraries in Massachusetts.

Upon joining the Massachusetts Library System’s Consulting & Training Services Team, I learned that the MLS recommends that libraries initiate their strategic planning process with a SOAR (Strengths, Opportunities, Aspirations, Results) activity rather than using the more well-known SWOT (Strengths, Weaknesses, Opportunities, Threats) analysis.

So, what is a SOAR exercise and why is it preferable? In a SOAR exercise, participants identify and discuss:

  •      Strengths (what’s working well)
  •      Opportunities (how you can add value to your stakeholders’ needs)
  •      Aspirations (your hopes and dreams for your future)
  •      Results (what you would like to achieve)

SOAR embraces an appreciative inquiry approach.  This approach generates an uplifting discussion and increases capacity for continuous improvement for the future of your library. If you have been avidly reading the NEO’s Shop Talk blog, then you may be familiar with appreciative inquiry. Directing conversation towards aspirations and results rather than weaknesses and threats creates a positive experience that generates a spirit of collaboration and good will for the future, rather than dredging up past challenges, drama, and problems.  The most important reason to choose SOAR over SWOT is that it increases your staff and community’s support for change and new initiatives. The SOAR exercise can be a powerful approach to generate creative ideas for your library’s future while engaging your stakeholders with an inclusive process.

During my first experience with SOAR, I observed my colleague, Kristi Chadwick, skillfully use the SOAR exercise to draw creative ideas from a suburban public library strategic planning task force. To set the tone for the exercise, Kristi encouraged participants to share all their ideas without judgement or analysis, to speak one person at a time, and to stay focused.  There are no bad ideas! The SOAR exercise is the time to brainstorm and capture information. Kristi captured the numerous ideas on post-it flip charts.  By the end of the session, the wall was covered with flip charts full of the library’s strengths, opportunities, aspirations, and results.  At the end of the meeting, the task force was cheerful and optimistic.  The SOAR exercise prepared them to move forward to the next steps in their strategic planning process:  a visioning activity and community survey.

Maybe guiding your library’s overall strategic planning process is outside the scope of your professional role. Even so, you can use the SOAR exercise to plan programs, update your collections, reconfigure library space, and guide your professional development. The SOAR activity can be used to improve any service provided by your library. It’s applicable to just about any kind of planning, whether professional or personal.

Want to give the SOAR process a try?  Go ahead and complete this SOAR exercise sheet to identify what you value most and what you would like for your future.

Check out the following resources to learn more about SOAR activities:

Give some thought to how you can use SOAR as a planning tool. If you use it, let me know how you like it in the comments here or email me at michelle@masslibsystem.org.

Categories: RML Blogs

Using Charts for Analysis: a Blog Post Fail

Mon, 2017-08-14 12:01

When I volunteered to write this blog post, it was going to be about how a small increase in your social media presence could result in a large increase in website traffic. Instead, it’s a lesson of how charts can be great tools for analyzing data, and how you should always keep an open mind when approaching a blog post. Click on any of the charts to view them at full size. Enjoy!

The Setup
Over the past few months, the NEO has seen an increase in monthly blog visits. We take great pride in our blog posts, and have enjoyed seeing this increase. June was our best month yet, with over 1,000 views!

A bar graph showing the slow increase of visitors from August 2016 to July 2017.

We were also curious – what was the reason for this increase? Was it, perhaps, an increase in our Twitter activity? I decided to investigate. I logged into WordPress and looked at our Referrer statistics. A Referrer is any “other blogs, web sites, and search engines that link to your blog.” A view is counted as a referrer “if a visitor lands on a URL on your site after clicking a link on the referrer’s site” (WordPress). This means that any traffic to our blog coming from a tweet would be listed as one referrer view from Twitter. After looking at a few months’ worth of data, I saw that the number of Twitter referrers had in fact increased over the past few months. I created a line chart comparing the number of tweets per month with the number of Twitter referrers.

A line graph showing Twitter referrers increasing with the number of Twitter posts per month.

The results made me happy – our increased number of tweets did have an impact on the number of Twitter referrers. Our efforts in social media were making a difference! What a great blog post!

Digging deeper into the data
My curiosity got the best of me though. How did Twitter referrals compare to other referrers? WordPress analytics did not allow me to compare monthly referrer stats, so I made a new spreadsheet. Staring at the numbers side-by-side didn’t help either, so to visualize the data better I made a line chart. The results were a bit surprising. It was true that Twitter referrals increased, but in the scheme of the other referrers it wasn’t much.

A line chart showing all referrers from August 2016 to July 2017. Twitter, though it increased, is not the main referrer to our blog.
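
If you want to build this kind of comparison chart from your own referrer statistics, here is a minimal matplotlib sketch; the numbers below are made up for illustration and are not our actual WordPress data:

```python
import matplotlib.pyplot as plt  # third-party: pip install matplotlib

# Made-up monthly counts for illustration; the real numbers came from
# a spreadsheet of WordPress referrer statistics.
months = ["Feb", "Mar", "Apr", "May", "Jun", "Jul"]
referrers = {
    "Twitter": [4, 6, 9, 12, 30, 15],
    "Facebook": [8, 7, 10, 11, 55, 14],
    "Search engines": [40, 44, 52, 58, 70, 66],
}

# Draw one line per referrer so the trends can be compared month to month.
for name, counts in referrers.items():
    plt.plot(months, counts, marker="o", label=name)

plt.title("Blog referrers by month")
plt.ylabel("Referrer views")
plt.legend()
plt.show()
```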

I did notice, however, the large spike in Facebook referrals. That came from the American Evaluation Association (AEA) posting our blog post The Cookie Exercise: Setting Criteria on Facebook. They also shared it on Twitter, which would account for the extra high bump in June for Twitter.

Another realization came to me when talking with Cindy, the Assistant Director of the NEO. As I was showing her the charts I created, she asked me to make one more – one that shows the total Twitter referrals, other referrals, and total blog visits in one chart. The chart looks like this:

A line chart depicting total twitter referrals, other referrals, and total blog views. Though Twitter referrals are increasing, most people do not visit the site from referrals.

The Twitter referrals do not look so significant in comparison to the total blog views! What would happen to my blog post?

Lessons learned
Though my blog post didn’t come together like I expected, I learned a lot from this exercise. The most important lesson was about charts. I usually think of charts as a way to present my findings, something created after I collect and analyze data. Here, I used charts to explore and analyze the data before reporting it. Charts and graphs can be used to make sense of our data.

My second lesson was in the type of data that I collected. While talking to Cindy, I had to rely on my memory of when our posts were retweeted, or when the AEA shared our blog post. If such information is important to me, and I am not seeing it on native social media analytics, I need to create my own spreadsheets or dashboards.

Finally, my third lesson should be familiar to those who read the NEO Shop Talk on a regular basis (or read my 5 Things I Found Out as an Evaluation Newbie blog post) – you can always learn something from failure. My blog post did not come out the way I wanted, but through the process of writing it I learned a lot about how to analyze data, and where I could improve my evaluation skills. Remember, failure is an option!

Categories: RML Blogs

Logic Model Hack: Project Management Tool

Fri, 2017-08-04 14:53

My whole career has been an evaluation project juggling act.  At times, I was the only evaluation consultant for an entire university campus. Logic models were a game changer for me. When a client showed up in my office three months after our initial consult, I could pull out our logic model and we could catch up on the plan in less than a minute.

Now, on project management teams, I am the self-appointed “Logic Model Queen.” I’m sure my teammates roll their eyes behind my back when I start drawing the iconic logic model structure on the conference room whiteboard. That’s okay. I always win them back by the end of the project, because logic models are excellent project management tools.

Creating the Logic Model

Let’s review how a project team develops a logic model. Using the template shown below, the group first establishes desired outcomes (results) for the project. Team members then work “backwards” to see if planned activities logically could lead to the outcomes. In the Resources column, they list everything needed to conduct activities. Once the columns are filled, the team members can reflect on their assumptions underlying the plan and identify known challenges. From there, the team establishes methods to assess program implementation (process evaluation) and results (outcomes evaluation).

Logic model template: three boxes connected by right-pointing arrows, labeled Resources (“If we get these resources”), Activities (“conduct these activities and deliver these services/products”), and Outcomes (“we will accomplish these outcomes”). Under the three boxes, two boxes spanning all three columns are labeled “Assumptions” and “Challenges.”

Centerpiece of Project Management 

Once the logic model is drafted, don’t bury it in some dusty folder in cyberspace. Your team should review it at each meeting. Lack of meeting time is no excuse: a logic model review can be done in five minutes, preferably at the top of the meeting. The general question the team should consider is this: Does our logic model still reflect reality? If the answer is basically yes, the team can then use the following logic-model-inspired questions to review the project more thoroughly:

  1. Have we been able to acquire everything we needed, as listed in the resources section of our logic model? Do we have to do without or find a substitute for anything?
  2. Does our process evaluation data show we are on track to complete our outputs (aka deliverables)? How are the participants responding to our project?
  3. How well are we documenting our progress? Are our records complete? Are the project implementers submitting assessment information? Are there ways to make the data collection process more usable?
  4. What wrong assumptions did we make that we need to address?
  5. What unexpected challenges are we encountering?
  6. Are our outcomes still reasonable?
  7. Do we have the evaluation information we need to make decisions at different stages of this project?
  8. Are we recording our successes well? Will we be able to tell a compelling story to key stakeholders at the end of this project?

Change Shows Learning 

So let’s say your team considers the question Does our logic model still reflect reality, and the answer is “no.” Now what do you do? First, be comforted by the fact that this is not an unusual occurrence.  Good project management almost always leads to change.  Here are some reasons that programs must be adapted midstream:

Queen saying "'Tis okay to change your logic model"

Your expectations were too high. Often, in the middle of programs, we realize the change we hoped for is going to take more time than expected. It is not unusual to discover “things take longer than they do.” (That’s a direct quote from one of my first evaluator-bosses.) In such cases, the most comforting words in the English language are pilot project. Let go of your expectations, revise your outcomes and, if necessary, alter your evaluation methods to capture evidence of modified outcomes.

You learn new things about the community you’re serving: Let’s face it, needs and community assessments only give you a tiny piece of the information you need to run a successful project. Only by working directly with participants and stakeholders do you truly learn about their desires, needs, and values. As the great poet Maya Angelou said, do the best you can, and when you know better, do better. If you can, adjust your program activities and outcomes as much as possible to better meet your stakeholders’ wants or needs. Otherwise, record what you learn so you know what to do differently next time.

Unexpected changes arise for you, your community, or your partners. We all work in complex systems that can change with little notice, disrupting best laid plans. Regular discussion related to your logic model will help your team assess and respond to shifting terrain.  Team discussions will document the challenges for final reports.

I have written before that I think logic models are the duct tape of the evaluation world. I hope this post convinced you to use them early and often during project management discussions. You also might like reading about how to use them for proposal writing, conducting program reality checks, or even for planning your child’s birthday party.

If you want training on logic models, you might be interested in the half-day workshop Logic Models for Program Evaluation and Planning, being offered at the American Evaluation Association’s 2017 conference in Washington, DC. (It’s number 64 on this list of AEA 2017 workshops.) Cost is $80-95, depending on how early you register.  You do not have to register for the conference to enroll in the workshop.


Categories: RML Blogs