
NEO Shop Talk

The blog of the National Network of Libraries of Medicine Evaluation Office

Archive for the ‘Practical Evaluation’ Category

“Five Things I’ve Learned,” from an Evaluation Veteran

Friday, March 24th, 2017


Cindy Olney in her home office

Kalyna Durbak, the NEO’s novice evaluator, recently posted the five things she learned about evaluation since joining our staff. I thought I would steal, er, borrow Kalyna’s “five things” topic and write about the most important lessons I’ve learned after 25+ years in the evaluation field.

My first experience with program evaluation was in the 1980s, as a graduate assistant in the University of Arizona School of Medicine’s evaluation office.  Excel was just emerging as the cool new tool for data crunching. SPSS ran on room-sized mainframes, and punch cards were fading fast from the computing world. Social security numbers were routinely collected and stored along with other information about our research or evaluation participants. Our desktop computers ran on DOS. The Internet had not even begun wreaking havoc.

Yep, I’m old. The field has evolved over time and the work is more meaningful than ever. Here are five things I know now that I wish I had known when I started.

#1 Evaluation is different from research: Evaluation and research have distinctly different end goals. The aim of research is to add to general knowledge and understanding.  Evaluation, on the other hand, is designed to improve the value of something specific (programs, products, personnel, services) and to guide decision-making. Evaluation borrows many techniques from research methodology because those methods are a means to accurate, credible information. However, technical accuracy of data means nothing if it cannot be applied to program improvement or decision-making.

#2 Evaluation is not the most important kid in the room. Evaluation, unchecked, can be resource-intensive, both in money and time. For every dollar and hour spent on evaluation, one dollar and hour is subtracted from funds used to produce or enhance a program or service. Project plans should focus first on service or program design and delivery, with proportional funding allocated to evaluation. Evaluation studies should not be judged by the same criteria used for research. Rather, the goal is to collect usable information in the most cost-effective manner possible.

#3 What gets measured gets done: Evaluation is a management tool that’s worth the investment. Project teams are most successful when they begin with the end in mind, and evaluation plans force discussion about desired results (outcomes) early on.  (Thank you, Stephen Covey, for helping evaluators advocate for their early involvement in projects.)  You must articulate what you want to accomplish before you can measure it.  You need a good action plan, logically linked to desired outcomes, before you can design a process assessment. Even if your resources limit you to the most rudimentary of evaluation methods, the mere process of committing to outcomes, activities, and measures on paper (in a logic model, please!) allows a team to take one giant step forward toward program success.

#4 Value is in the eyes of the stakeholders: While research asks “What happened,” evaluation asks “What happened, how important is it, and, knowing what we know, what do we do?”  That’s why an evaluation report that merely collects dust on a shelf is a travesty. The evaluation process is not complete until stakeholders have interpreted the information and contributed their perspectives on how to act on the findings. In essence, I am talking about rendering judgment: what do the findings say about the value of the program? That value judgment should, in turn, inform decisions about the future of the program. While factual findings should be objective, judgments are not.  Value is in the eyes of the people invested in the success of your program, aka, stakeholders. Assessment of value may vary and even conflict among various stakeholder groups. For example, a public library health literacy program has several types of stakeholders. The library users will judge the program based on its usefulness to their lives. City government officials will judge the program based on how many taxpayers express satisfaction with the program.  Public librarians will value the program if it aligns with their library mission and brings visibility to their organization.  Evaluation is not complete until these multiple perspectives of value are explored and integrated into program decision-making.

#5 Everything I need to know about evaluation reporting I learned in kindergarten. Kindergarten was the first and possibly the last place I got to learn through play. In grad school, I learned to write 25-50 page research and evaluation reports. In my career, I discovered that people read the executive summary (if I was lucky), then stopped. Evaluations are supposed to lead to learning about your programs, but no one thinks there’s anything fun about a 50-page report. Thankfully, evaluators have developed quite a few engaging ways to involve stakeholders in analyzing and using evaluation findings. For example, data dashboards allow stakeholders to interact with data visualizations and answer their own evaluation questions.  Data parties provide a social setting to share coffee, snacks, and data interpretations.  Innovations in evaluation reporting emerge every year. It’s a great time to be an evaluator! More bling, less writing, and it’s all for the greater good.

So, there you have it: my five things.  These five lessons have served me well. I suspect they will continue to do so until bigger and better evaluation ideas come along. What about you? Share your insights below in our comments section.

Uninspired by Bars? Try Dot Plots

Friday, March 17th, 2017

Thanks to Jessi Van Der Volgen and Molly Knapp at the NNLM Training Office for allowing us to feature their assessment project and for providing the images in this post. 

Are you tired of bars?

I don’t mean the kind of bars where you celebrate and socialize. I mean the kind used in data visualization.  My evidence-free theory is that people still succumb to using the justifiably maligned pie chart simply because we cannot face one more bar graph.

Take heart, readers. Today, I’m here to tell you a story about some magic data that fell on the NEO’s doorstep and broke us free of our bar chart rut.

It all began with a project by our NNLM Training Office (NTO) colleagues, the intrepid leaders of NNLM’s instructional design and delivery. They do it all. They teach. They administratively support the regions’ training efforts. They initiate opportunities and resources to up-level instructional effectiveness throughout the network. One of their recent initiatives was a national needs assessment of NNLM training participants. That was the source of the fabulous data I write about today.

For context, I should mention that training is one of NNLM’s key strategies for reaching the furthest corners of our country to raise awareness, accessibility, and use of NLM health information resources. NNLM offers classes to all types of direct users (e.g., health professionals, community-based organization staff), but we value the efficiency of our “train-the-trainer” programs. In these classes, librarians and others learn how to use NLM resources so they, in turn, can teach their users. The national needs assessment was geared primarily toward understanding how to best serve “train-the-trainer” participants, who often take multiple classes to enhance their skills.

For the NTO’s needs assessment, one area of inquiry involved an inventory of learners’ need for training in 30 topic areas. The NTO wanted to assess participants’ desired level and their current level of proficiency in each topic.  That meant 60 questions. That was one heck-of-a-long survey. We wished them luck.

The NTO team was undaunted!  They did some research and found a desirable format for presenting this set of questions (see upper left). The format had a nice minimalist design. The sliders were more fun for participants than radio buttons. Also, NTO designed the online questionnaire so that only a handful of question-pairs appeared on the screen at one time.  The approach worked: 559 people responded, and 472 completed the whole questionnaire.

Dot plots for four skill topic areas: conducting literature searches (current = 4, desired = 5); understanding and searching for evidence-based research (current = 3, desired = 5); developing/teaching classes (current = 3, desired = 5); creating videos/web tutorials (current = 2, desired = 4)

The NEO, in turn, consulted the writings of one of our favorite dataviz oracles, Stephanie Evergreen. And she did not disappoint.  We found the ideal solution: dot plots!  Evergreen’s easy-to-follow instructions from this blog post allowed us to create dot plots in Excel, using a few creative hacks. This approach allowed us to thematically cluster results from numerous related questions into one chart. We were able to present data for 60 questions in a total of seven charts.

I would like to point out a couple of design choices I made:

  • I used different shapes and colors to visually distinguish between “current proficiency” and “desired proficiency.” Navy blue for current proficiency was inspired by NNLM’s logo. I used a complementary green for the desired proficiency because green means “go.”
  • Evergreen prefers to place labels (e.g., “conducting literature searches”) close to the actual dots. That works well if your labels consist of one or two words. We found that our labels had to be longer to make sense. Setting them flush-left made them more readable.
  • I suggested plotting medians rather than means because many of the data distributions were skewed. You can use means, but you probably should round them to whole numbers so decimals don’t distract from the gaps. (For readers who prefer scripting to spreadsheets, a sketch follows this list.)
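Here is that minimal sketch of the same dot-plot idea in Python with matplotlib. To be clear, this is an alternative to the Excel approach we actually used, not the NEO’s recipe; the medians are the four from the chart above.

```python
import matplotlib.pyplot as plt

# Median ratings for the four topics charted above (1-5 scale).
topics = [
    "Conducting literature searches",
    "Searching for evidence-based research",
    "Developing/teaching classes",
    "Creating videos/web tutorials",
]
current = [4, 3, 3, 2]   # median current proficiency
desired = [5, 5, 5, 4]   # median desired proficiency

y = list(range(len(topics)))
fig, ax = plt.subplots(figsize=(8, 3))

# Light connectors make the proficiency gaps easy to read.
ax.hlines(y, current, desired, color="lightgray", zorder=1)

# Different shapes and colors distinguish the two ratings:
# navy (borrowed from the NNLM logo) for current, green ("go") for desired.
ax.scatter(current, y, color="navy", marker="o", zorder=2, label="Current")
ax.scatter(desired, y, color="green", marker="D", zorder=2, label="Desired")

ax.set_yticks(y)
ax.set_yticklabels(topics)
ax.invert_yaxis()            # first topic at the top
ax.set_xlim(1, 5.5)
ax.set_xlabel("Median proficiency (1 = low, 5 = high)")
ax.legend(frameon=False)
plt.tight_layout()
plt.show()
```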

Dot plots are quite versatile. We used the format to highlight gaps in proficiency, but other evaluators have demonstrated that dot plots work well for visualizing change over time and cross-group comparisons.

Dot plots are not as easy to create as the default Excel bar chart, but they are interesting.  So give up bars for a while.  Try plotting!

5 Things I Found Out as an Evaluation Newbie

Friday, March 10th, 2017

Since joining the NEO in October, I have learned a lot about the world of evaluation. Here are 5 things that have made me rethink how I approach evaluation, program planning, and overall life.

#1: Anyone can do evaluation
Think about a project that you are working on at work. Now take out your favorite pen and pad of paper, or open a new blank document, and write What? at the top of the page. Give yourself a few minutes to write or type out a general outline of the project. Do the same for the questions So What? and Now What? Reflect on why the project is important to your organization’s mission and what you will do with any newfound information from the project. Finished? Congratulations, you’ve just taken your first step as a budding evaluator by engaging in some Reflective Practice.

This first step does not mean you are an evaluation guru. It takes more than just a reflection piece to create a whole evaluation plan (actually, just 4 steps). What I hope you take away from this exercise is that every project could use some form of evaluation, and that there is no hocus pocus involved in evaluation. All you need is a team willing to put in the effort to create “an evaluation culture of valuing evidence, valuing questioning, and valuing evaluative thinking” (Better Evaluation). I am sure you even have one of the most basic tools you can use for evaluation, which leads me to #2.

#2: Excel is your best friend
I will not deny that Tableau, Power BI, and other really cool data visualization and business intelligence tools are out there. There’s also R, for those who are looking for another programming language to conquer. If you are working at a small library or a nonprofit, it might be hard to get the training or the funds for such software. Enter Excel. You can do a lot of neat things with Excel. A quick search for Excel on Stephanie Evergreen’s blog will result in many free tutorials on how to make interesting (and useful) charts in Excel. You can even make pivot tables, which can help you easily summarize complicated sets of data. Excel might not be the best tool for data visualization, but it’s a tool that many of us already have.
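(And if you do eventually conquer a programming language, the pivot-table idea carries straight over. Here is a minimal sketch in Python’s pandas library, with made-up attendance numbers standing in for your own data.)

```python
import pandas as pd

# Made-up class-attendance records, standing in for whatever you track.
df = pd.DataFrame({
    "region":    ["Northeast", "Northeast", "Southwest", "Southwest"],
    "format":    ["in-person", "online", "in-person", "online"],
    "attendees": [24, 151, 18, 98],
})

# The equivalent of an Excel pivot table: rows = region, columns = format.
summary = pd.pivot_table(df, values="attendees", index="region",
                         columns="format", aggfunc="sum")
print(summary)
```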

#3: This isn’t your grade school report card
I still remember the terror I would feel the day that report cards would come out. I should not have been afraid, because I usually received great grades. What always terrified me was the uncertainty of how my teachers reached the resulting grade. If the teacher was nice, he or she would explain how the grade was calculated, but most of the time I was left with a report card with no comments. If I wanted to strive for a better grade, I would have to arrange a meeting with the teacher. That never happened because I was very shy, and the prospect seemed more terrifying than getting the report card!

It can be scary to think about a program evaluation. What if it doesn’t come out well, and you get a “bad grade”? I’m here to tell you that the terror won’t be there, because you are not a student waiting for an ambiguous letter grade. You are in the teacher’s seat, and you get to decide what it means to succeed and what it means to fail. You have full control over the parameters of your evaluation! This does not guarantee success, but it does give you a fair shot at succeeding.

#4: Really, it’s ok to fail
Ever since I started working with the NEO, I’ve been confronted with failure. We start most of our meetings by retelling our most recent failures, like how we forgot to put something on our Outlook calendar or could not get something to work. We’ve even written a blog post about it! I call this a failure-positive work environment. Instead of beating ourselves up about these little failures, we learn from them and carry on.

I’ve found myself reflecting on my recent work at a nonprofit and how I’ve approached failures in the past. To put it bluntly, I haven’t done well with failure. In fact, my approach to failure has usually been embarrassment, guilt, and eventual burnout. I see now that these feelings, though hard to ignore, are completely unproductive. They are also easy to prevent. If you have an evaluation plan in place, you can turn a failure into just another data point on a path towards success. As Karen wrote in the blog post about failure, “Reporting [a failure] is kind of like that commercial about how to come in late to a meeting. If you bring the food, you’re not the person who came late, you’re the person who brought breakfast.”

#5: Do not ignore outcome evaluation!
It took me a while for this information to sink in, but there are multiple ways to evaluate a program. Process evaluation assesses how you did something: “Are you doing what you said you’d do?” Outcome evaluation is a bit different, as it tries to answer the question of whether the program achieved its goal, or “Are you accomplishing the WHY of what you wanted to do?” When I think about these two types of evaluation, it’s tempting to focus on the process evaluation because I have more control over the process than the outcomes. I can plan a fantastic program, and “pass” a process evaluation. The same plan can “fail” an outcomes evaluation if people were not receptive to the program. Before you forgo your outcomes evaluation plan, remember pointers #3 and #4: you are in charge of the parameters, and failure isn’t the end of the world. Prepare an outcomes evaluation plan knowing that whatever happens, you’ll be able to use the information in the future. Also, remember that we have worksheets to help you write out any evaluation plan.

I hope you’ve found my reflections helpful in your evaluation planning. Let me know your favorite takeaways in the comments!

Photo credit: Kerry Kirk.

A Logic Model Hack: The Project Reality Check

Friday, February 24th, 2017

Torn strips of gray duct tape

Logic models just may be the duct tape of the evaluation world.

A logic model’s usefulness extends well beyond initial project planning. (If you aren’t familiar with logic models, here’s a fun introduction.)  Today’s post starts a new NEO Shop Talk series to take our readers beyond Logic Models 101. We call this series Logic Model Hacks. Our first topic: The Project Reality Check. Watch for more hacks in future posts.

The Project Reality Check allows you to assess the feasibility of your project with stakeholders, experienced colleagues, and key informants.  I refer to these people as “Reality Checkers.”  (I’m capping their title out of respect for their importance.)  Your logic model is your one-page project vision. Present it with a brief pitch, and you can bring anyone up to speed on your plans in a New York minute (or two).  Then, with a few follow-up questions, you can guide your Reality Checkers in identifying key project blind spots. What assumptions are you making? What external factors could help or hinder your project? The figure below is the logic model template from the NEO’s booklet Planning Outcomes-Based Outreach Projects. This template includes boxes for assumptions and external factors. By the time you complete your Project Reality Check, you will have excellent information to add to those boxes.

How to Conduct a Logic Model Reality Check

I always incorporate Project Reality Checks into any logic model development process I lead. Here is my basic game plan:

  • A small project team (2-5 people) works out the project plan and fills in the columns of the logic model. One person can do this step, if necessary.
  • After filling in the main columns, this working group drafts a list of assumptions and external factors for the boxes at the bottom. However, I don’t add the working group’s information to the logic model version for the Reality Checkers. You want fresh responses from them. Showing them your assumptions and external factors in advance may prevent them from generating their own. Best to give them a clean slate.
  • Make a list of potential Reality Checkers and divvy them up among project team members.
  • Prepare a question guide for querying your Reality Checkers.
  • Set up appointments. You can talk with your Reality Checkers in one-to-one conversations that probably will take 15-20 minutes. If you can convene an advisory group for your project, you could easily adapt the Project Reality Check interview process for a group discussion.

Here are the types of folks who might be good consultants for your project plans:

  • The people who will be working on the actual project.
  • Representatives from partner organizations.
  • Key informants. Here’s a tip: If you conducted key informant interviews for a community assessment related to this project, don’t hesitate to show your logic model to those interviewees. It is a way to follow up on the first interview, showing how you are using the information they provided. This is an opportunity for a second contact with community opinion leaders.
  • Colleagues who conducted similar projects.
  • Funding agency staff. This is not always feasible, but take advantage of the opportunity if it’s there. These folks have a bird’s-eye view of what succeeds and fails in communities served by their organizations.

It’s a good idea to have an interview plan, so that you can use your Reality Checkers’ time efficiently and adequately capture their valuable advice. I would start with a short elevator speech, to provide context for the logic model.  Here’s a template you can adapt:

“We have this exciting project, where we are trying to ___ [add your goal here]. Specifically, we want ___ [the people or organization benefiting from your project] to ___ [add your outcomes]. We plan to do it by ___ [summarize your activities]. Here’s our logic model, which shows a few more details of our plan.”

Then, you want to follow up with questions for the Reality Checkers:

  • What assumptions are we making that you think we need to check?
  • Are there resources in the community or in our partner organization that might help us do this project?
  • Are there barriers or challenges we should be prepared to address?
  • “I would also like to check some assumptions our project team is making.” (Present your assumptions at the end of the discussion and get the Reality Checkers’ assessment.)

How to Apply What You Learn

After completing the interviews, your working team should reconvene to process what you learned. Remove the assumptions that your interviews confirmed. Add any new assumptions to be investigated. Adapt your logic model to leverage newly discovered resources (positive external factors) or change your activities to address challenges or barriers. Prepare contingency plans for project turbulence predicted by your Reality Checkers.

Chances are high that you will be changing your logic model after the Project Reality Check. The good news is that you will only have to make changes on paper. That’s much easier than responding to problems that arise because you didn’t identify your blind spots during the planning phase of your project.

Other blog posts about logic models:

How I Learned to Stop Worrying and Love Logic Models (The Chili Lesson)

An Easier Way to Plan: Tearless Logic Models

Logic Models: Handy Hints

Got Documents? How to Do a Document Review

Friday, February 10th, 2017

Are you an introvert?  Then I have an evaluation method for you: document review. You usually can do this method from the comfort of your own office. No scary interactions with strangers.

Truth is, my use of existing data in evaluation seldom rises to the definition of true document review.  I usually read through relevant documents to understand a program’s history or context. However, a recent blog post by Linda Cabral on the AEA365 blog reminded me that document review is a real evaluation method that is conducted systematically. Cabral provides tips and a resource for doing document review correctly.  For today’s post, I decided to plan a document review that the NEO might conduct someday, describing how I would use Cabral’s guidelines. I also checked out the CDC’s Evaluation Brief, Data Collection Methods for Evaluation: Document Review, which Cabral recommended.

Here’s some project background. The NEO leads and supports evaluation efforts of the National Network of Libraries of Medicine (NNLM), which promotes access to and use of health information resources developed by the NIH’s National Library of Medicine. Eight health sciences libraries (called Regional Medical Libraries or RMLs) manage a program in which they provide modest amounts of funding to other organizations to conduct health information outreach in their regions. The organizations receiving these funds (known as subawardees) write proposals that include brief descriptions (1-2 paragraphs) of their projects. These descriptions, along with other information about the subaward projects, are entered into the NLM’s Outreach Projects Database (OPD).

The OPD has a wealth of information, so I need an evaluation question to help me focus my document review. I settle on this question: How do our subawardees collaborate with other organizations to promote NLM products?  Partnerships and collaborations are a cornerstone of NNLM. They are the “network” in our name.  Yet simply listing the diverse types of organizations involved in our work does not satisfactorily capture the nature of our collaborations.  Possibly the subaward program descriptions in our OPD can add depth to our understanding of this aspect of the NNLM.

Now that I’ve identified my primary evaluation question, here’s how I would apply Cabral’s guidelines in the actual study.

Catalogue the information available to you:  For my project, I would first review the fields on the OPD’s data entry pages to see what information is entered for each project.  I obviously want to use the descriptive paragraphs. However, it helps to peruse the other project details. For example, it might be interesting to see if different types of organizations (such as libraries and community-based organizations) form different types of collaborations. This step may cause me to add evaluation questions to my study.

I also would employ some type of sampling, because the OPD contains over 4500 project descriptions from as far back as 2001.  It is neither feasible nor necessary to review all of them.  There are many sampling choices, both random and purposeful. (Check out this article by Palinkas et al for purposeful sampling strategies.)  I’m most interested in current award projects, so I likely would choose projects conducted in the past 2-3 years.
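To make the sampling step concrete, here is a minimal sketch of what it could look like if the OPD export were loaded into Python with pandas. The file name, column name, and sample size are all hypothetical.

```python
import pandas as pd

# Hypothetical OPD export; the file and column names are invented.
projects = pd.read_excel("opd_export.xlsx")

# Narrow 4500+ descriptions down to recent projects (say, 2014 onward)...
recent = projects[projects["award_year"] >= 2014]

# ...then draw a simple random sample of manageable size to review.
sample = recent.sample(n=200, random_state=1)
print(f"Reviewing {len(sample)} of {len(projects)} project descriptions")
```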

Develop a data collection form: A data collection form is the tool that allows you to record abstracted or summarized information from the full documents. Fortunately, the OPD system downloads data into an Excel-readable spreadsheet, which is the basis for my form. I would first delete the columns in this spreadsheet that contain information not relevant to my study, such as the mailing address and phone number of the subaward contact person.

Get a co-evaluator: I would volunteer a NEO colleague to partner with me, to increase the objectivity of the analysis. Document review almost always involves coding of qualitative data.  If you use qualitative analysis for your study, a partner improves the trustworthiness of conclusions drawn from the data. If you are converting information into quantifiable (countable) data, a co-evaluator allows you to assess and correct human error in your coding process. If you do not have a partner for your entire project, try to find someone who can work with you on a subset of the data so you can calibrate your coding against someone else’s.

Ensure consistency among teammates involved in the analysis: “Abstracting data,” for my project, means identifying themes in the project descriptions.  Here’s a step-by-step description of the process I would use (a sketch of the resulting coding worksheet follows the list):

  • My partner and I would take a portion of the documents (15-20%) and both of us would read the same set of project descriptions. We would develop a list of themes that both of us believe are important to track for our study. Tracking means we would add columns to our data collection form/worksheet and note absence or presence of the themes in each project description.
  • We would then divide up the remaining program descriptions. I would code half of them and my partner would take the other half.
  • After reading 20% of the remaining documents, we would check in with each other to see if important new themes have emerged that we want to track. If so, we would add columns on our data collection document. (We would also check that first 15-20% of project descriptions for presence of these new themes.)
  • When all program descriptions are coded, we would sort our data collection form so we could explore patterns or commonalities among programs that share common themes.
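Here is a minimal sketch of what that coding worksheet might look like in pandas. Every project ID, organization type, theme, and value below is invented for illustration.

```python
import pandas as pd

# Hypothetical coding worksheet: one row per project description,
# one 0/1 column per theme noted during the joint read-through.
coded = pd.DataFrame({
    "project_id":         [101, 102, 103, 104],
    "org_type":           ["library", "CBO", "library", "hospital"],
    "theme_cobranding":   [1, 0, 1, 0],
    "theme_shared_staff": [0, 1, 1, 0],
    "theme_referrals":    [0, 1, 0, 1],
})

# Sort and summarize to surface patterns, e.g., which organization
# types most often share staff with their partners.
theme_cols = [c for c in coded.columns if c.startswith("theme_")]
print(coded.groupby("org_type")[theme_cols].mean())
```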

For a more explicit description of coding qualitative data, check out the NEO publication Collecting and Analyzing Evaluation Data. The qualitative analysis methods described starting on page 25 can be applied in qualitative document review.

So, got documents? Now you know how to use them to assess your programs.

Failure IS an Option: Measuring and Reporting It

Friday, January 27th, 2017

Back to Square One signpost

Failure.  We all know it’s good for us.  We learn from failure, right? In Batman Begins, Bruce Wayne’s dad says “Why do we fall, Bruce? So we can learn to pick ourselves up.”  But sometimes failure, like falling, isn’t much fun (although, just like falling, sometimes it is fun for the other people around you). Sometimes in our jobs we have to report our failures to someone. And sometimes the politics of our jobs makes reporting failure a definite problem.

In the NEO office we like to start our meetings by reporting a recent failure. I think it’s a fun thing to do because I think my failures are usually pretty funny.  But Cindy has us do it from a higher motivation than getting people to laugh.  Taking risks is about being willing to fail. Sara Blakely, the founder and CEO of Spanx, grew up reporting failure every day at the dinner table: https://vimeo.com/175524001. In this video, she says that “failure to me became not trying, versus the outcome.”

Why failure matters in evaluation

In general we are all really good at measuring our process (the activities we do) and not so good at measuring outcomes (the things we want to see happen because of our activities).  This is because we have a lot of control over whether our activities are done correctly, and very little control over the outcomes.  We want to measure something that shows that we did a great job, and we don’t want to measure something that might make us look bad. That’s why we find it preferable to measure something we have control over. It can look like we failed if we didn’t get the results we wanted, even if the work at our end was brilliant.

sad businesswoman

But of course outcomes are what we really care about (Steering by Outcomes: Begin with the End in Mind).  They are the “what for?” of what we do.  What if you evaluated the outcomes of some training sessions that you taught and found out that no one used the skill you taught them?  That would be sad, and it might look like you wasted time and resources.  But on the other hand, what if you never measure whether anyone uses what you taught them, and you just keep teaching the classes and reporting successful classes, never finding out that people aren’t applying what they learned?  Wouldn’t that be the real waste of resources?

So how do you report failure?

I think getting over our fear of failure has to do with learning how to report failure so it doesn’t look like, well, failure.  The key is to stay focused on the end goal: we all really want to know the answer to the question “are we making a difference?”  If we stay focused on that question, then we need to figure out what indicators we can measure to find the answer. If the answer is “no, we didn’t make a difference,” then how can we report that in a way that shows we’ve learned how to make the answer “yes”? How can we think about failure so it’s about “learning to pick ourselves up” or, better yet, contributing to your organization’s mission?

One way is to measure outcomes early and often. If you wait until the end of your project to measure your outcomes, you can’t adjust your project to enhance the possibilities of success.  If you know early on that your short-term outcomes are not coming out the way you hope, you can change what you’re doing.  So when you do your final report, you aren’t reporting failure, you’re reporting lessons learned, flexibility and ultimately success.

Here’s an example

Let’s say you’re teaching a series of classes to physical therapists on using PubMed Health so they can identify the most effective therapy for their patients.  At the end of the class you have the students complete a course evaluation, in which they give high scores to the class and the teachers.  If you are evaluating outcomes early, you might add a question like: “Do you think you will use PubMed Health in the next month?”  This is an early outcome question.   If most of them say “no” to this question, you will know quickly that if you don’t change something about what you’re doing in future classes, it is unlikely that a follow-up survey two months later will show that they had used PubMed Health.  Maybe you aren’t giving examples that apply to these particular students. Maybe these students aren’t in the position to make decisions about effective therapies. You have an opportunity to talk to some of the students and find out what you can change so your project is successful.
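As a toy illustration of how quickly an early-outcome question can sound the alarm, here is a sketch in Python. The responses are invented, and the 50% cutoff is an arbitrary rule of thumb, not a NEO standard.

```python
# Invented answers to "Do you think you will use PubMed Health
# in the next month?" collected on the end-of-class evaluation.
responses = ["yes", "no", "no", "yes", "no", "no", "no", "yes"]

share_no = responses.count("no") / len(responses)
print(f"{share_no:.0%} don't expect to use PubMed Health next month")

if share_no > 0.5:
    print("Adjust the examples or the audience before the next class.")
```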

Complete Failure

You’ve tried everything, but you still don’t have the results you wanted to see.  The good news is, if you’ve been collecting your process and outcomes data, you have a lot of information about why things didn’t turn out as hoped and what can be done differently. Reporting that information is kind of like that commercial about how to come in late to a meeting. If you bring the food, you’re not the person who came late, you’re the person who brought breakfast.  If you report that you did a big project that didn’t work, you’re reporting failure.  If you report that things didn’t work out the way you hoped, but you have data-based suggestions for a better use of organizational resources that meet the same goal, then you’re the person who is working for positive change that supports the organization, and you have metaphorically brought the breakfast. Who doesn’t love that person?


Beyond the Memes: Evaluating Your Social Media Strategy – Part 2

Friday, January 20th, 2017

In my last post, I wrote about how to create social media outcomes for your organization. This week, we will take a look at writing objectives for your outcomes using the SMART method.

Though objectives and outcomes sound like the same thing, they are two different concepts in your evaluation plan – outcomes are the big ideas, while objectives relate to the specifics. Read Karen’s post to find out more about what outcomes and objectives are.

In Measuring the Networked Nonprofit, Beth Kanter and Katie Delahaye Paine talk a lot about SMART objectives. We have not covered these on the blog before, so I thought this would be a good time to introduce this type of objective building. According to the book, a SMART objective is “specific, measurable, attainable, realistic, and timely” (Kanter and Paine 47). There are many variations on this definition, so we will use my favorite: Specific, Measurable, Attainable, Relevant, and Timely.

Specific: Leave the big picture for your outcomes. Use the 5 W’s (who, what, when, where, and why) to help craft this portion.
Measurable: If you can’t measure it, how will you know you’ve actually achieved what you set out to do?
Attainable: Don’t make your objectives impossible. It’s not productive (or fun) to create objectives that you know you cannot reach. Understand what your community needs, and involve stakeholders.
Relevant: Is your community on Twitter? Create a Twitter account. Do they avoid Twitter? Don’t make a Twitter account. Use the tools that are relevant to the community that you serve.
Timely: Set a time frame for your objectives and outcomes, or your project might take too long for it to be relevant to your community. Time is also money, so create a deadline for your project so that you do not waste resources on a lackluster project.

As an example, let’s return to NEO’s favorite hypothetical town of Sunnydale to see how they added social media objectives into their Dusk to Dawn program. To refresh your memory, read this post from last September about Sunnydale’s Evaluation Plan.

Christopher Walken Fever Meme with the text 'Well, guess what! I’ve got a fever / and the only prescription is more hashtags'

Sunnydale librarians know that their vampire population uses Twitter on a daily basis for many reasons – meeting new vampires, suggesting favorite vampire-friendly night clubs, and even engaging the library with general reference questions. The librarians came up with the idea of using the hashtag #dusk2dawn in all of their promotional materials about the health program Dusk to Dawn. Their thinking was that the hashtag would help increase awareness of their program’s objectives (4 evening classes on MedlinePlus and PubMed), which in turn would support the outcomes “Increased ability of the Internet-using Sunnydale vampires to research needed health information” and “These vampires will use their increased skills to research health information for their brood.”

With that in mind, let’s make a SMART objective for this hashtag’s usage:

Specific
We will plug in what we have so far into the Specific section:

Vampires (who) in Sunnydale (where) will show an increase in awareness of health-related events hosted by the library (what) by retweeting the hashtag #dusk2dawn (why) for the duration of the Dusk to Dawn program (when).

Measurable
Measurable is probably the hardest part. What kind of metrics will Sunnydale librarians use to measure hashtag usage? How will they do it?

The social media librarian will manually monitor the hashtag’s usage by setting up an alert for its usage on TweetDeck. Each time the hashtag is used by a non-librarian in reference to the Sunnydale Library, the librarian will copy the tweet’s content to a spreadsheet, adding signifiers if it is a positive or negative tweet.

Attainable
Can our objective be reached? What is it about vampires in Sunnydale that makes this hashtag monitoring possible?

We know from polling and experience that our community likes using Twitter – specifically, they regularly engage with us on this platform. Having a dedicated hashtag for our overall program is a natural progression for us and our community.

Relevant
How does the hashtag #dusk2dawn contribute to the overall Dusk to Dawn mission?

An increase in usage of the hashtag #dusk2dawn will show that our community is actively talking about the program, hopefully in a positive way. This should increase awareness of our program’s objectives (4 evening classes on MedlinePlus and PubMed), which in turn would support the outcomes “Increased ability of the Internet-using Sunnydale vampires to research needed health information” and “These vampires will use their increased skills to research health information for their brood.”

Timely
How long should it take for the vampires to increase their awareness of our program’s objectives?

There should be an upward trend in awareness over the course of the program. We have 7 months before we are reevaluating the whole Dusk to Dawn program, so we will set 7 months as our deadline for increased hashtag usage.

SMART!
Now, we put it all together to get:

Vampires in Sunnydale will show an increase in awareness of health-related events hosted by the library, indicated by a 15% increase in use of the hashtag #dusk2dawn by Sunnydale vampires for the duration of the Dusk to Dawn program.
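To see what checking progress against that 15% target might look like, here is a minimal sketch using counts from the tracking spreadsheet described under Measurable. The numbers are invented.

```python
# Invented counts from the hashtag-tracking spreadsheet.
baseline_tweets = 40   # #dusk2dawn uses in the month before the program
current_tweets = 52    # uses in the most recent month

increase = (current_tweets - baseline_tweets) / baseline_tweets
print(f"Hashtag use is up {increase:.0%}")   # 30% here

if increase >= 0.15:
    print("Objective met -- keep documenting those tweets!")
else:
    print("Below target -- revisit promotion before the program ends.")
```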

Success Baby

Though this objective is SMART, it certainly will not work in every library. Perhaps the community your library serves does not use Twitter to connect with the library, or you do not have enough people on staff to monitor the hashtag’s usage. If you make a SMART objective that is relevant to your community, it will have a better chance of succeeding.

Here at NEO, we usually do not use the SMART objectives method, but rather Measurable Outcome Objectives. Step 3 on the Evaluation Resources page points to many different resources on our website that talk about Measurable Objectives. Try both out, and see what works for your organization.

We will be taking a break from social media evaluation and goal setting for a few weeks. Next time we talk about social media, we will show our very own social media evaluation plan!


Let me know if you have any questions or comments about this post! Comment on Facebook and Twitter, or email me at kalyna@uw.edu.

Image credits: Christopher Walken Fever Meme made by Kalyna Durbak on ImgFlip.com. Success Kid meme is from Know Your Meme.

Beyond the Memes: Evaluating Your Social Media Strategy – Part 1

Friday, January 13th, 2017

Welcome to our new NEO blogger, Kalyna Durbak.  Her first post addresses a topic of concern to many of us: evaluating our social media!


By Kalyna Durbak, Program Assistant, NNLM Evaluation Office (NEO)

Have you ever wondered if your library’s Facebook page was worth the time and effort? I think about social media a lot, but then again I’ve been using Facebook daily for over 10 years. The book Measuring the Networked Nonprofit, by Beth Kanter and Katie Delahaye Paine, can help your library or organization figure out how to measure the impact your social media campaigns have on the world.

Not all of us work for a nonprofit, but I feel many organizations share similar constraints with nonprofits – like not being able to afford to hire a firm to develop and manage their social media accounts. It’s easy to think that social media is easy to do because we all manage our personal profiles. Once you start managing accounts that belong to an organization, it gets hard. What do you post? What can you post? How many likes should you try to collect?

One does not simply post memes on Facebook

Before we get into any measurement, I want to briefly write about why social media outcomes are important to have, and why they should be measured. A library should not create a Facebook page simply to collect likes, or a Twitter page to gather followers. As my husband would say, that’s simply “scoring Internet points.” Internet points make you feel good inside, but do not impact the world around you. The real magic in using social media comes from creating a community around your organization that is willing to show up and help out when you ask.

A library should create a social media page in order for something to happen in the real world – an outcome. Figuring out why you need a social media account will help your library manage its various accounts more efficiently, and in the end measure the successes and failures of your social media campaigns. If you need more convincing, read Cindy’s post “Steering by Outcomes: Begin with the End in Mind.” For help on determining your outcomes, I suggest reading Karen’s blog post “Developing Program Outcomes using the Kirkpatrick Model – with Vampires.”

What are some reasons for using social media in your library? Maybe you will have an online campaign to promote digital assets, or perhaps you will add a social media component to a program that already exists in your library. Whatever they are, any social media activity you do should support an outcome. A few outcomes I can think of are:

  • Increased awareness of library programs
  • New partnerships found for future collaborative efforts
  • Improved perceptions about the organization
  • Increase in library foundation’s donor base

None of the outcomes specifically mention Facebook, Twitter, or any other social media platforms. That’s because outcomes outline the big picture – it’s what you want to happen after completing your project. In the above examples, a library wants the donor base to be increased, or the library wants increased awareness of library programs. It’s the ideal world your library or organization wants to exist in. Facebook and Twitter can help achieve these outcomes, but the number of retweets you get is not an outcome.

To make that ideal future a reality, you need to create objectives. Objectives are the signposts that will indicate whether you are successful in reaching your outcome. Next week, we will craft social media oriented objectives for a library in our favorite hypothetical town of Sunnydale.

Let me know if you have any questions or comments about this post! Comment on Facebook and Twitter, or email me at kalyna@uw.edu.

My Favorite Things 2016 (Spoiler Alert: Includes Cats)

Wednesday, December 21st, 2016

Little figurine of Santa standing in snow, holding gifts

During gift-giving season every year, Oprah publishes a list of her favorite things. Well, move over, Oprah, because I also have a list. This is my bag of holiday gifts for our NEO Shop Talk readers.

Art Exhibits

There are two websites with galleries of data visualizations that are really fun to visit. The first, Information is Beautiful, has wonderful examples of data visualizations, many of which are interactive. My favorites from this site are Who Old Are You? (put in your birth date to start it) and Common MythConceptions. The other is Tableau Public, Tableau Software Company’s “public commons” for their users to share their work.  My picks are the Endangered Species Safari and the data visualization of the Simpsons Vizapedia.  And, in case you’re wondering what happened to your favorite Crayola crayon colors, you can find out here.

Movies

Nancy Duarte’s The Secret Structure of Great Talks is my favorite TEDtalk. Duarte describes the simple messaging structure underlying inspirational speeches. Once you grasp this structure, you will know how to present evaluation findings to advocate for stakeholder support. I love the information in this talk, but that’s not why I listen to it over and over again.  It’s because Duarte says “you have the power to change the world” and, by the end of the talk, I believe her.

Dot plot of fictional workshop data, titled “Participant Self-Assessment of their Holiday Skills Before and After Our Holiday Survival Workshop.” Pre/post self-report ratings for four items: baking without a sugar overdose (pre = 3, post = 5); making small talk at the office party (pre = 1, post = 3); getting gifts through airport security (pre = 2, post = 5); managing road rage in mall parking lots (pre = 2, post = 4)

I also am a fan of two videos from the Denver Museum of Nature & Science, which demonstrate how museum user metrics can be surprisingly entertaining. What Do Jelly Beans Have To Do With The Museum? shows demographics with colorful candy and Audience Insights On Parking at the Museum talks amusingly about a common challenge of urban life.

Crafts

If you want to try your hand at creating snappier charts and graphs, you need to spend some time at Stephanie Evergreen’s blog. For example, she gives you step-by-step instructions on making lollipop charts, dot plots, and overlapping bar charts. Stephanie works exclusively in Excel, so there’s no need to purchase or learn new software. You also might want to learn a few new Excel graphing tricks at Ann Emery’s blog.  For instance, she describes how to label the lines in your graphs or adjust bar chart spacing.

Site Seeing

How about a virtual tour of the UK? I still marvel at the innovative Visualizing Mill Road project. Researchers collected community data, then shared their findings in street art. This is the only project I know of featuring charts in sidewalk chalk. The web site talks about community members’ reactions to the project, which is also pretty fascinating.

Humor

I left the best for last. This is a gift for our most sophisticated readers, recommended by none other than John Gargani, president of the American Evaluation Association. It is a web site for the true connoisseurs of online evaluation resources.  I present to you the Twitter feed for Eval Cat.  Even the NEO Shop Talk cats begrudgingly admire it, although no one has invited them to post.

Pictures of the four NEO Eval Cats

Here’s wishing you an enjoyable holiday.

‘Tis the Season to Do Some Qualitative Interviewing!

Friday, December 9th, 2016

For most of us, the end-of-year festivities are in full swing. We get to enjoy holiday treats. Lift a wine glass with colleagues, friends, and loved ones. Step back from the daily grind and enjoy some light-hearted holiday fun.

Or, we could use these golden holiday social events to work on our qualitative interviewing skills! That’s right.  I want to invite you to participate in another NEO holiday challenge: the Qualitative Interview Challenge. (You can read about our Appreciative Inquiry challenge here.)

If you are a bit introverted and overwhelmed in holiday situations, this challenge is perfect for you. It will give you a mission: a task to take your mind off that social awkwardness you feel in large crowds. (Please tell me I’m not the only one!) If, on the other hand, you are more of a life-of-the-party guest, this challenge will help you talk less and listen more.  Other party-goers will love you and you might learn something.

Here’s your challenge.  Jot down some good conversational questions that fit typical categories of qualitative interview questions.  Commit a couple questions to memory before you hit a party. Use those questions to fuel conversations with fellow party-goers and see if you get the type of information you were seeking.

To really immerse yourself in this challenge, create a chart with the six categories of questions. (I provided an example below.)  When your question is successful (i.e., you get the type of information you wanted), give yourself a star.  Sparkly star stickers are fun, but you can also simply draw stars beside the questions. Your goal is to get at least one star in each category by midnight on December 31.

Holiday challenge chart: a table with the six categories of questions, the five extra-credit techniques, and blank cells for stars, surrounded by a holiday border

According to qualitative researcher/teacher extraordinaire Michael Q. Patton, there are six general categories of qualitative interview questions.  Here are the categories:

  • Experience or behavior questions: Ask people to tell you a story about something they experienced or something they do. For unique experiences, you might say “Describe your best holiday ever.” You could ask about more routine behavior, such as “What traditions do you try to always celebrate during the holidays?”
  • Sensory questions: Sensory questions are similar to experience questions, but they focus on what people see, hear, feel, smell, or taste. Questions about holiday meals or vacation spots will likely elicit sensory answers.
  • Opinion and value questions: If you ask people what they think about something, you will hear their opinions and values. When Charlie Brown asked “What is the true meaning of Christmas?” he was posing a value/opinion question.
  • Emotions questions: Here, you ask people to express their emotional reactions. Emotions questions can be tricky. In my experience, most people are better at expressing opinions than emotions, so be prepared to follow up.  For example, if you ask me “What do you dislike about the holiday season?” I might say “I don’t like gift-shopping.”   “Like” is more of an opinion word than an emotion word. You want me to reach past my brain and into my heart. So you could follow up with “How do you feel when you’re shopping for holiday gifts?”  I might say “The crowds frustrate and exhaust me” or “I feel stressed out trying to find perfect gifts on a deadline.” Now I have described my emotions around gift-shopping. Give yourself a star!
  • Knowledge questions: These questions seek factual information. For example, you might ask for tried-and-true advice to make holiday travel easier. If answers include tips for getting through airport security quickly or the location of clean gas station bathrooms on the PA Turnpike, you asked a successful knowledge question.
  • Background and demographic questions: These questions explore how factors such as ethnicity, culture, socio-economic status, occupation, or religion affect one’s experiences and world view. What foods do their immigrant grandparents cook for New Year’s celebrations every year?  What is it like to be single during the holidays? How do religious beliefs or practices affect their approach to the holidays? These are examples of background/demographic questions.

To take this challenge up a notch, try to incorporate the following techniques while practicing interview skills over egg nog.

Ask open-ended questions. Closed-ended questions can be answered with a word or phrase.  “Did you like the movie?”  The answer “Yes” or “No” is a comprehensive response to that question.   An open-ended version of this question might be “Describe  a good movie you saw recently.”  If you phrased your question so that your conversation partner had to string together words or sentences to form an answer, give yourself an extra star.

Pay attention to question sequence:  The easiest questions for people to answer are those that ask them to tell a story. The act of telling a story helps people get in touch with their opinions and feelings about something.  Also, once you have respectfully listened to their story, they will feel more comfortable sharing opinions and feelings with you. So break the ice with experience questions.

Wait for answers:  Sometimes we ask questions, then don’t wait for a response.  Some people have to think through an answer completely before they talk out loud. Those seconds of silence make me want to jump in with a rephrased question. The problem is, you’ll start the clock again as they contemplate the new version of your question. To hold myself back, I try to pay attention to my own breathing while maintaining friendly eye contact.

Connect and support: You get another star if you listened carefully enough to accurately reflect their answers back to them. This is called reflective listening.  If you want a fun tutorial on how to listen, check out Julian Treasure’s TEDtalk.

Some of you are likely thinking “Thanks but no thanks for this holiday challenge.” Maybe it seems too much like work. Maybe you plan to avoid social gatherings like the plague this season.  Fair enough.  All of the tips apply to bona fide qualitative interviews. When planning and conducting qualitative interviews, remember to include questions that target different types of information. Make your questions open-ended and sequence them so they are easy to answer.  Listen carefully and connect with your interviewee by reflecting back what you heard.

Regardless of whether you take up the challenge or not, I wish you happy holidays full of fun and warm conversations.

My source for interview question types and interview techniques was Patton MQ. Qualitative Research and Evaluation Methods. 4th ed. Thousand Oaks, CA: Sage, 2015.

