
NEO Shop Talk

The blog of the National Network of Libraries of Medicine Evaluation Office

Archive for the ‘Practical Evaluation’ Category

Got Documents? How to Do a Document Review

Friday, February 10th, 2017

Are you an introvert?  Then I have an evaluation method for you: document review. You can usually do this method from the comfort of your own office. No scary interactions with strangers.

Truth is, my use of existing data in evaluation seldom rises to the definition of true document review.  I usually read through relevant documents to understand a program’s history or context. However, a recent blog post by Linda Cabral in the AEA365 blog reminded me that document review is a real evaluation method that is conducted systematically. Cabral provides tips and a resource for doing document review correctly.  For today’s post, I decided to plan a document review that the NEO might conduct someday, describing how I would use Cabral’s guidelines. I also checked out the CDC’s Evaluation Brief, Data Collection Methods for Evaluation: Document Review, which Cabral recommended.

Here’s some project background. The NEO leads and supports evaluation efforts of the National Network of Libraries of Medicine (NNLM), which promotes access to and use of health information resources developed by the NIH’s National Library of Medicine. Eight health sciences libraries (called Regional Medical Libraries or RMLs) manage a program in which they provide modest amounts of funding to other organizations to conduct health information outreach in their regions. The organizations receiving these funds (known as subawardees) write proposals that include brief descriptions (1-2 paragraphs) about their projects. These descriptions, along with other information about the subaward projects, are entered into the NLM’s Outreach Projects Database (OPD).

The OPD has a wealth of information, so I need an evaluation question to help me focus my document review. I settle on this question: How do our subawardees collaborate with other organizations to promote NLM products?  Partnerships and collaborations are a cornerstone of NNLM. They are the “network” in our name.  Yet simply listing the diverse types of organizations involved in our work does not satisfactorily capture the nature of our collaborations.  Possibly the subaward program descriptions in our OPD can add depth to our understanding of this aspect of the NNLM.

Now that I’ve identified my primary evaluation question, here’s how I would apply Cabral’s guidelines in the actual study.

Catalogue the information available to you:  For my project, I would first review the fields on the OPD’s data entry pages to see what information is entered for each project.  I obviously want to use the descriptive paragraphs. However, it helps to peruse the other project details. For example, it might be interesting to see if different types of organization (such as libraries and community-based organizations) form different types of collaborations. This step may cause me to add evaluation questions to my study.

I also would employ some type of sampling, because the OPD contains over 4500 project descriptions from as far back as 2001.  It is neither feasible nor necessary to review all of them.  There are many sampling choices, both random and purposeful. (Check out this article by Palinkas et al for purposeful sampling strategies.)  I’m most interested in current award projects, so I likely would choose projects conducted in the past 2-3 years.
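For the data-inclined, here is a rough sketch in Python of what that criterion-plus-random sampling could look like. The field names, years, and records are invented for illustration; a real study would read them from the OPD export:

```python
import random

# Hypothetical OPD export: each record has a project year and description.
projects = [
    {"year": 2001, "description": "Early outreach project"},
    {"year": 2015, "description": "Health fair with a community clinic"},
    {"year": 2016, "description": "MedlinePlus classes at a public library"},
    {"year": 2016, "description": "PubMed training for nurses"},
    {"year": 2014, "description": "Older project outside the window"},
]

# Criterion sampling: keep only projects from the past 2-3 years.
recent = [p for p in projects if p["year"] >= 2015]

# Optional random subsample if the recent set is still too large to code.
random.seed(42)  # fixed seed so the sample is reproducible
sample = random.sample(recent, k=min(2, len(recent)))
```

The same two-stage idea (filter by criterion, then sample at random) scales to the full 4500-record database.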

Develop a data collection form: A data collection form is the tool that allows you to record abstracted or summarized information from the full documents. Fortunately, the OPD system downloads data into an Excel-readable spreadsheet, which is the basis for my form. I would first delete columns in this spreadsheet that contain information irrelevant to my study, such as the mailing address and phone number of the subaward contact person.

Get a co-evaluator: I would volunteer a NEO colleague to partner with me, to increase the objectivity of the analysis. Document review almost always involves coding of qualitative data.  If you use qualitative analysis for your study, a partner improves the trustworthiness of conclusions drawn from the data. If you are converting information into quantifiable (countable) data, a co-evaluator allows you to assess and correct human error in your coding process. If you do not have a partner for your entire project, try to find someone who can work with you on a subset of the data so you can calibrate your coding against someone else’s.
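If you convert your codes into countable data, a simple agreement statistic such as Cohen’s kappa lets you and your co-evaluator check how well your coding calibrates. Here is an illustrative Python sketch; the two coders’ presence/absence codes are made up:

```python
def cohens_kappa(coder_a, coder_b):
    """Two-category Cohen's kappa for presence (1) / absence (0) codes."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: proportion of documents where the coders match.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement, from each coder's marginal rates.
    p_a = sum(coder_a) / n
    p_b = sum(coder_b) / n
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (observed - expected) / (1 - expected)

# Hypothetical codes for one theme across eight project descriptions.
coder_a = [1, 1, 0, 1, 0, 0, 1, 0]
coder_b = [1, 0, 0, 1, 0, 1, 1, 0]
kappa = cohens_kappa(coder_a, coder_b)  # 0.5: moderate agreement
```

A low kappa is a signal to talk through your theme definitions before coding the rest of the documents.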

Ensure consistency among teammates involved in the analysis: “Abstracting data,” for my project, means identifying themes in the project descriptions.  Here’s a step-by-step description of the process I would use:

  • My partner and I would take a portion of the documents (15-20%) and both of us would read the same set of project descriptions. We would develop a list of themes that both of us believe are important to track for our study. Tracking means we would add columns to our data collection form/worksheet and note absence or presence of the themes in each project description.
  • We would then divide up the remaining program descriptions. I would code half of them and my partner would take the other half.
  • After reading 20% of the remaining documents, we would check in with each other to see if important new themes have emerged that we want to track. If so, we would add columns on our data collection document. (We would also check that first 15-20% of project descriptions for presence of these new themes.)
  • When all program descriptions are coded, we would sort our data collection form so we could explore patterns or commonalities among programs that share common themes.
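As a rough illustration of that last step, here is how theme columns could be tallied and sorted in Python; the project names and theme labels are hypothetical stand-ins for whatever emerges from the actual descriptions:

```python
# Each coded description becomes a row; theme columns hold 0/1 flags
# for absence or presence of the theme.
coded = [
    {"project": "A", "library_partner": 1, "health_dept_partner": 0},
    {"project": "B", "library_partner": 1, "health_dept_partner": 1},
    {"project": "C", "library_partner": 0, "health_dept_partner": 1},
]

themes = ["library_partner", "health_dept_partner"]

# Sort so projects sharing the same theme profile cluster together,
# mirroring a sort on the Excel data collection form.
coded.sort(key=lambda row: [row[t] for t in themes], reverse=True)

# Count how often each theme appears across all descriptions.
counts = {t: sum(row[t] for row in coded) for t in themes}
```

In practice the same sort-and-count happens directly in the Excel worksheet; the sketch just makes the logic explicit.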

For a more explicit description of coding qualitative data, check out the NEO publication Collecting and Analyzing Evaluation Data. The qualitative analysis methods described starting on page 25 can be applied in qualitative document review.

So, got documents? Now you know how to use them to assess your programs.

Failure IS an Option: Measuring and Reporting It

Friday, January 27th, 2017

Back to Square One signpost

Failure.  We all know it’s good for us.  We learn from failure, right? In Batman Begins, Bruce Wayne’s dad says “Why do we fall, Bruce? So we can learn to pick ourselves up.”  But sometimes failure, like falling, isn’t much fun (although, just like falling, sometimes it is fun for the other people around you). Sometimes in our jobs we have to report our failures to someone. And sometimes the politics of our jobs makes reporting failure a definite problem.

In the NEO office we like to start our meetings by reporting a recent failure. I think it’s a fun thing to do because I think my failures are usually pretty funny.  But Cindy has us do it from a higher motivation than getting people to laugh.  Taking risks is about being willing to fail. Sara Blakely, the founder and CEO of Spanx, grew up reporting failure every day at the dinner table: https://vimeo.com/175524001  In this video she says that “failure to me became not trying, versus the outcome.”

Why failure matters in evaluation

In general we are all really good at measuring our process (the activities we do) and not so good at measuring outcomes (the things we want to see happen because of our activities).  This is because we have a lot of control over whether our activities are done correctly, and very little control over the outcomes.  We want to measure something that shows that we did a great job, and we don’t want to measure something that might make us look bad. That’s why we find it preferable to measure something we have control over. It can look like we failed if we didn’t get the results we wanted, even if the work at our end was brilliant.

sad businesswoman

But of course outcomes are what we really care about (Steering by Outcomes: Begin with the End in Mind).  They are the “what for?” of what we do.  What if you evaluated the outcomes of some training sessions that you taught and found out that no one used the skill you taught them? That would be sad, and it might look like you wasted time and resources.  But on the other hand, what if you don’t measure whether anyone ever uses what you taught, and you just keep teaching the classes and reporting successful classes, never finding out that people aren’t using what you taught them? Wouldn’t that be the real waste of resources?

So how do you report failure?

I think getting over our fear of failure has to do with learning how to report failure so it doesn’t look like, well, failure.  The key is to stay focused on the end goal: we all really want to know the answer to the question “are we making a difference?”  If we stay focused on that question, then we need to figure out what indicators we can measure to find the answer. If the answer is “no, we didn’t make a difference,” then how can we report that in a way that shows we’ve learned how to make the answer “yes”? How can we think about failure so it’s about “learning to pick ourselves up,” or better yet, contributing to our organization’s mission?

One way is to measure outcomes early and often. If you wait until the end of your project to measure your outcomes, you can’t adjust your project to enhance the possibilities of success.  If you know early on that your short-term outcomes are not coming out the way you hope, you can change what you’re doing.  So when you do your final report, you aren’t reporting failure, you’re reporting lessons learned, flexibility and ultimately success.

Here’s an example

Let’s say you’re teaching a series of classes to physical therapists on using PubMed Health so they can identify the most effective therapy for their patients.  At the end of the class you have the students complete a course evaluation, in which they give high scores to the class and the teachers.  If you are evaluating outcomes early, you might add a question like: “Do you think you will use PubMed Health in the next month?”  This is an early outcome question.   If most of them say “no” to this question, you will know quickly that if you don’t change something about what you’re doing in future classes, it is unlikely that a follow-up survey two months later will show that they had used PubMed Health.  Maybe you aren’t giving examples that apply to these particular students. Maybe these students aren’t in the position to make decisions about effective therapies. You have an opportunity to talk to some of the students and find out what you can change so your project is successful.

Complete Failure

You’ve tried everything, but you still don’t have the results you wanted to see.  The good news is, if you’ve been collecting your process and outcomes data, you have a lot of information about why things didn’t turn out as hoped and what can be done differently. Reporting that information is kind of like that commercial about how to come in late to a meeting. If you bring the food, you’re not the person who came late, you’re the person who brought breakfast.  If you report that you did a big project that didn’t work, you’re reporting failure.  If you report that things didn’t work out the way you hoped, but you have data-based suggestions for a better use of organizational resources that meet the same goal–then you’re the person who is working for positive change that supports the organization, and have metaphorically brought the breakfast. Who doesn’t love that person?

 

Beyond the Memes: Evaluating Your Social Media Strategy – Part 2

Friday, January 20th, 2017

In my last post, I wrote about how to create social media outcomes for your organization. This week, we will take a look at writing objectives for your outcomes using the SMART method.

Though objectives and outcomes sound like the same thing, they are two different concepts in your evaluation plan – outcomes are the big ideas, while objectives relate to the specifics. Read Karen’s post to find out more about what outcomes and objectives are.

In the book Measuring the Networked Nonprofit, by Beth Kanter and Katie Delahaye Paine, they talk a lot about SMART objectives. We have not covered these types of objectives on the blog, so I thought this would be a good time to introduce this type of objective building. According to the book, a SMART objective is “specific, measurable, attainable, realistic, and timely” (Kanter and Paine 47). There are many variations on this definition, so we will use my favorite: Specific, Measurable, Attainable, Relevant, and Timely.

Specific: Leave the big picture for your outcomes. Use the 5 W’s (who, what, when, where, and why) to help craft this portion
Measurable: If you can’t measure it, how will you know you’ve actually achieved what you set out to do?
Attainable: Don’t make your objectives impossible. It’s not productive (or fun) to create objectives that you know you cannot reach. Understand what your community needs, and involve stakeholders.
Relevant: Is your community on Twitter? Create a Twitter account. Do they avoid Twitter? Don’t make a Twitter account. Use the tools that are relevant to the community that you serve.
Timely: Set a time frame for your objectives and outcomes, or your project might take too long for it to be relevant to your community. Time is also money, so create a deadline for your project so that you do not waste resources on a lackluster project.

As an example, let’s return to NEO’s favorite hypothetical town of Sunnydale to see how they added social media objectives into their Dusk to Dawn program. To refresh your memory, read this post from last September about Sunnydale’s Evaluation Plan.

Christopher Walken Fever Meme with the text 'Well, guess what! I’ve got a fever / and the only prescription is more hashtags'

Sunnydale librarians know that their vampire population uses Twitter on a daily basis for many reasons – meeting new vampires, suggesting favorite vampire friendly night clubs, and even engaging the library with general reference questions. Librarians came up with the idea to use the hashtag #dusk2dawn in all of their promotional materials about the health program Dusk to Dawn. Their thinking was that it would help increase awareness of the program’s objectives of 4 evening classes on MedlinePlus and PubMed, which in turn would support the outcomes “Increased ability of the Internet-using Sunnydale vampires to research needed health information” and “These vampires will use their increased skills to research health information for their brood.”

With that in mind, let’s make a SMART objective for this hashtag’s usage:

Specific
We will plug in what we have so far into the Specific section:

Vampires (who) in Sunnydale (where) will show an increase in awareness of health-related events hosted by the library (what) by retweeting the hashtag #dusk2dawn (why) for the duration of the Dusk to Dawn program (when).

Measurable
Measurable is probably the hardest part. What kind of metrics will Sunnydale librarians use to measure hashtag usage? How will they do it?

The social media librarian will manually monitor the hashtag’s usage by setting up an alert for its usage on TweetDeck. Each time the hashtag is used by a non-librarian in reference to the Sunnydale Library, the librarian will copy the tweet’s content to a spreadsheet, adding signifiers if it is a positive or negative tweet.
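To illustrate, here is a small Python sketch of the tally that the social media librarian’s spreadsheet would support; the tweets and sentiment flags are invented:

```python
from collections import Counter

# Hypothetical rows copied from TweetDeck into the tracking spreadsheet:
# (tweet text, sentiment flag assigned by the librarian)
tracked = [
    ("Loved the #dusk2dawn MedlinePlus class!", "positive"),
    ("#dusk2dawn ran late again", "negative"),
    ("Great tips at tonight's #dusk2dawn session", "positive"),
]

# Tally positive vs. negative mentions for the program report.
sentiment_counts = Counter(flag for _, flag in tracked)
total_mentions = len(tracked)
```

The manual copy-and-flag step is what makes the metric measurable; the arithmetic at the end is trivial once the spreadsheet exists.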

Attainable
Can our objective be reached? What is it about vampires in Sunnydale that makes this hashtag monitoring possible?

We know from polling and experience that our community likes using Twitter – specifically, they regularly engage with us on this platform. Having a dedicated hashtag for our overall program is a natural progression for us and our community.

Relevant
How does the hashtag #dusk2dawn contribute to the overall Dusk to Dawn mission?

An increase in usage of the hashtag #dusk2dawn will show that our community is actively talking about our program, hopefully in a positive way. This should increase awareness of our program’s objectives of 4 evening classes on MedlinePlus and PubMed, which in turn would support the outcomes “Increased ability of the Internet-using Sunnydale vampires to research needed health information” and “These vampires will use their increased skills to research health information for their brood.”

Timely
How long should it take for the vampires to increase their awareness of our program’s objectives?

There should be an upward trend in awareness over the course of the program. We have 7 months before we are reevaluating the whole Dusk to Dawn program, so we will set 7 months as our deadline for increased hashtag usage.

SMART!
Now, we put it all together to get:

Vampires in Sunnydale will show an increase in awareness of health-related events hosted by the library, indicated by a 15% increase in use of the hashtag #dusk2dawn by Sunnydale vampires for the duration of the Dusk to Dawn program.
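As a quick arithmetic check on that 15% target, here is an illustrative Python snippet; the baseline and program-period tallies are made up:

```python
def percent_change(baseline, current):
    """Percent change in hashtag mentions relative to a baseline period."""
    return (current - baseline) / baseline * 100

# Hypothetical monthly tallies of #dusk2dawn mentions
baseline_mentions = 40  # typical month before the program
program_mentions = 50   # typical month during the program

change = percent_change(baseline_mentions, program_mentions)  # 25.0
met_target = change >= 15  # the objective's 15% threshold
```

Deciding the baseline period up front is part of what makes the objective measurable rather than a judgment call after the fact.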

Success Baby

Though this objective is SMART, it certainly will not work in every library. Perhaps the community your library serves does not use Twitter to connect with the library, or you do not have enough people on staff to monitor the hashtag’s usage. If you make a SMART objective that is relevant to your community, it will have a better chance to succeed.

Here at NEO, we usually do not use the SMART objectives method, but rather Measurable Outcome Objectives. Step 3 on the Evaluation Resources page points to many different resources on our website that talk about Measurable Objectives. Try both out, and see what works for your organization.

We will be taking a break from social media evaluation and goal setting for a few weeks. Next time we talk about social media, we will show our very own social media evaluation plan!


Let me know if you have any questions or comments about this post! Comment on Facebook and Twitter, or email me at kalyna@uw.edu.

Image credits: Christopher Walken Fever Meme made by Kalyna Durbak on ImgFlip.com. Success Kid meme is from Know Your Meme.

Beyond the Memes: Evaluating Your Social Media Strategy – Part 1

Friday, January 13th, 2017

Welcome to our new NEO blogger, Kalyna Durbak.  Her first post addresses a topic that is a concern to many of us, evaluating our social media!


By Kalyna Durbak, Program Assistant, NNLM Evaluation Office (NEO)

Have you ever wondered if your library’s Facebook page was worth the time and effort? I think about social media a lot, but then again I’ve been using Facebook daily for over 10 years. The book Measuring the Networked Nonprofit, by Beth Kanter and Katie Delahaye Paine, can help your library or organization figure out how to measure the impact your social media campaigns have on the world.

Not all of us work for a nonprofit, but I feel many organizations share similar constraints with nonprofits – like not being able to afford to hire a firm to develop and manage the social media accounts. It’s easy to think that social media is easy to do because we all manage our personal profiles. Once you start managing accounts that belong to an organization, it gets hard. What do you post? What can you post? How many likes can I collect?

One does not simply post memes on Facebook

Before we get into any measurement, I want to briefly write about why social media outcomes are important to have, and why they should be measured. A library should not create a Facebook page simply to collect likes, or a Twitter page to gather followers. As my husband would say, that’s simply “scoring Internet points.” Internet points make you feel good inside, but do not impact the world around you. The real magic in using social media comes from creating a community around your organization that is willing to show up and help out when you ask.

A library should create a social media page in order for something to happen in the real world – an outcome. Figuring out why you need a social media account will help your library manage its various accounts more efficiently, and in the end measure the successes and failures of your social media campaigns. If you need more convincing, read Cindy’s post “Steering by Outcomes: Begin with the End in Mind.” For help on determining your outcomes, I suggest reading Karen’s blog post “Developing Program Outcomes using the Kirkpatrick Model – with Vampires.”

What are some reasons for using social media in your library? Maybe you will have an online campaign to promote digital assets, or perhaps you will add a social media component to a program that already exists in your library. Whatever they are, any social media activity you do should support an outcome. A few outcomes I can think of are:

  • Increased awareness of library programs
  • New partnerships found for future collaborative efforts
  • Improved perceptions about the organization
  • Increase in library foundation’s donor base

None of the outcomes specifically mention Facebook, Twitter, or any other social media platforms. That’s because outcomes outline the big picture – it’s what you want to happen after completing your project. In the above examples, a library wants the donor base to be increased, or the library wants increased awareness of library programs. It’s the ideal world your library or organization wants to exist in. Facebook and Twitter can help achieve these outcomes, but the number of retweets you get is not an outcome.

To make that ideal future a reality, you need to create objectives. Objectives are the signposts that will indicate whether you are successful in reaching your outcome. Next week, we will craft social media oriented objectives for a library in our favorite hypothetical town of Sunnydale. Catch up on Sunnydale with these posts:

Let me know if you have any questions or comments about this post! Comment on Facebook and Twitter, or email me at kalyna@uw.edu.

My Favorite Things 2016 (Spoiler Alert: Includes Cats)

Wednesday, December 21st, 2016

Little figurine of Santa standing in snow, holding gifts

During gift-giving season every year, Oprah publishes a list of her favorite things. Well, move over, Oprah, because I also have a list. This is my bag of holiday gifts for our NEO Shop Talk readers.

Art Exhibits

There are two websites with galleries of data visualizations that are really fun to visit. The first, Information is Beautiful, has wonderful examples of data visualizations, many of which are interactive. My favorites from this site are Who Old Are You? (put in your birth date to start it) and Common MythConceptions. The other is Tableau Public, Tableau Software Company’s “public commons” for their users to share their work.  My picks are the Endangered Species Safari and the data visualization of the Simpsons Vizapedia.  And, in case you’re wondering what happened to your favorite Crayola crayon colors, you can find out here.

Movies

Nancy Duarte’s The Secret Structure of Great Talks is my favorite TEDtalk. Duarte describes the simple messaging structure underlying inspirational speeches. Once you grasp this structure, you will know how to present evaluation findings to advocate for stakeholder support. I love the information in this talk, but that’s not why I listen to it over and over again.  It’s because Duarte says “you have the power to change the world” and, by the end of the talk, I believe her.

Dot plot for a fictional workshop data, titled Participant Self Assessment of their Holiday Skills before and after our holiday survival workshop. Pre/post self-report ratings for four items: Baking without a sugar overdose (pre=3; post-5); Making small talk at the office party (pre=1; post=3); Getting gifts through airport security (pre=2; post-5); Managing road rage in mall parking lots (pre=2; post-4)

I also am a fan of two videos from the Denver Museum of Natural History, which demonstrate how museum user metrics can be surprisingly entertaining. What Do Jelly Beans Have To Do With The Museum? shows demographics with colorful candy and Audience Insights On Parking at the Museum  talks amusingly about a common challenge of urban life.

Crafts

If you want to try your hand at creating snappier charts and graphs, you need to spend some time at Stephanie Evergreen’s blog. For example, she gives you step-by-step instructions on making lollipop charts, dot plots , and overlapping bar charts. Stephanie works exclusively in Excel, so there’s no need to purchase or learn new software. You also might want to learn a few new Excel graphing tricks at Ann Emery’s blog.  For instance, she describes how to label the lines in your graphs or adjust bar chart spacing.

Site Seeing

How about a virtual tour to the UK? I still marvel at the innovative Visualizing Mill Road  project. Researchers collected community data, then shared their findings in street art. This is the only project I know of featuring charts in sidewalk chalk. The web site talks about community members’ reactions to the project, which is also pretty fascinating.

Humor

I left the best for last. This is a gift for our most sophisticated readers, recommended by none other than John Gargani, president of the American Evaluation Association. It is a web site for the true connoisseurs of online evaluation resources.  I present to you the Twitter feed for Eval Cat.  Even the NEO Shop Talk cats begrudgingly admire it, although no one has invited them to post.

 

Pictures of the four NEO Eval Cats


Here’s wishing you an enjoyable holiday.

‘Tis the Season to Do Some Qualitative Interviewing!

Friday, December 9th, 2016

For most of us, the end-of-year festivities are in full swing. We get to enjoy holiday treats. Lift a wine glass with colleagues, friends, and loved ones. Step back from the daily grind and enjoy some light-hearted holiday fun.

Or, we could take these golden holiday social events to work on our qualitative interviewing skills! That’s right.  I want to invite you to participate in another NEO holiday challenge: the Qualitative Interview Challenge. (You can read about our Appreciative Inquiry challenge here.)

If you are a bit introverted and overwhelmed in holiday situations, this challenge is perfect for you. It will give you a mission: a task to take your mind off that social awkwardness you feel in large crowds. (Please tell me I’m not the only one!) If, on the other hand, you are more of a life-of-the-party guest, this challenge will help you talk less and listen more.  Other party-goers will love you and you might learn something.

Here’s your challenge.  Jot down some good conversational questions that fit typical categories of qualitative interview questions.  Commit a couple questions to memory before you hit a party. Use those questions to fuel conversations with fellow party-goers and see if you get the type of information you were seeking.

To really immerse yourself in this challenge, create a chart with the six categories of questions. (I provided an example below.)  When your question is successful (i.e., you get the type of information you wanted), give yourself a star.  Sparkly star stickers are fun, but you can also simply draw stars beside the questions. Your goal is to get at least one star in each category by midnight on December 31.

Holiday challenge chart: There is a holiday border around a table-style chart with the six categories of questions, the five extra credit techniques, and blank cells for stars

According to qualitative researcher/teacher extraordinaire Michael Q. Patton, there are six general categories of qualitative interview questions.  Here are the categories:

  • Experience or behavior questions: Ask people to tell you a story about something they experienced or something they do. For unique experiences, you might say “Describe your best holiday ever.” You could ask about more routine behavior, such as “What traditions do you try to always celebrate during the holidays?”
  • Sensory questions: Sensory questions are similar to experience questions, but they focus on what people see, hear, feel, smell, or taste. Questions about holiday meals or vacation spots will likely elicit sensory answers.
  • Opinion and value questions: If you ask people what they think about something, you will hear their opinions and values. When Charlie Brown asked “What is the true meaning of Christmas?” he was posing a value/opinion question.
  • Emotions questions: Here, you ask people to express their emotional reactions. Emotions questions can be tricky. In my experience, most people are better at expressing opinions than emotions, so be prepared to follow up.  For example, if you ask me “What do you dislike about the holiday season?” I might say “I don’t like gift-shopping.”  “Like” is more of an opinion word than an emotion word. You want me to reach past my brain and into my heart. So you could follow up with “How do you feel when you’re shopping for holiday gifts?”  I might say “The crowds frustrate and exhaust me” or “I feel stressed out trying to find perfect gifts on a deadline.” Now I have described my emotions around gift-shopping. Give yourself a star!
  • Knowledge questions: These questions seek factual information. For example, you might ask for tried-and-true advice to make holiday travel easier. If answers include tips for getting through airport security quickly or the location of clean gas station bathrooms on the PA Turnpike, you asked a successful knowledge question.
  • Background and demographic questions: These questions explore how factors such as ethnicity, culture, socio-economic status, occupation, or religion affect one’s experiences and world view. What foods do their immigrant grandparents cook for New Year’s celebrations every year?  What is it like to be single during the holidays? How do religious beliefs or practices affect their approach to the holidays? These are examples of background/demographic questions.

To take this challenge up a notch, try to incorporate the following techniques while practicing interview skills over egg nog.

Ask open-ended questions. Closed-ended questions can be answered with a word or phrase.  “Did you like the movie?”  The answer “Yes” or “No” is a comprehensive response to that question.  An open-ended version of this question might be “Describe a good movie you saw recently.”  If you phrased your question so that your conversation partner had to string together words or sentences to form an answer, give yourself an extra star.

Pay attention to question sequence:  The easiest questions for people to answer are those that ask them to tell a story. The act of telling a story helps people get in touch with their opinions and feelings about something.  Also, once you have respectfully listened to their story, they will feel more comfortable sharing opinions and feelings with you. So break the ice with experience questions.

Wait for answers:  Sometimes we ask questions, then don’t wait for a response.  Some people have to think through an answer completely before they talk out loud. Those seconds of silence make me want to jump in with a rephrased question. The problem is, you’ll start the clock again as they contemplate the new version of your question. To hold myself back, I try to pay attention to my own breathing while maintaining friendly eye contact.

Connect and support: You get another star if you listened carefully enough to accurately reflect their answers back to them. This is called reflective listening.  If you want a fun tutorial on how to listen, check out Julian Treasure’s TEDtalk.

Some of you are likely thinking “Thanks but no thanks for this holiday challenge.” Maybe it seems too much like work. Maybe you plan to avoid social gatherings like the plague this season.  Fair enough.  All of the tips apply to bona fide qualitative interviews. When planning and conducting qualitative interviews, remember to include questions that target different types of information. Make your questions open-ended and sequence them so they are easy to answer.  Listen carefully and connect with your interviewee by reflecting back what you heard.

Regardless of whether you take up the challenge or not, I wish you happy holidays full of fun and warm conversations.

My source for interview question types and interview techniques was Patton MQ. Qualitative Research and Evaluation Methods. 4th ed. Thousand Oaks, CA: Sage, 2015.

Creating Partnerships that Work

Friday, December 2nd, 2016

Multiracial Businesspeople Stacking Hands

“Five guys on the court working together can achieve more than five talented individuals who come and go as individuals.” Kareem Abdul-Jabbar

When you’re working on an outreach project, you will almost certainly have some kind of partner organization in the project.  Funders of outreach projects love to see partnerships, and sometimes they even require them.  When everything works like it’s supposed to, a partnership between organizations working on a joint outreach project can spawn better ideas, create a richer program, and improve reach.

But have you ever felt like you’ve made some bad decisions in your choice of partners? (I’m not talking about your sordid relationship history here.)  It feels like a disaster when your plans fall apart because your partner organization had a completely different understanding of its role in the project, had different priorities, or just couldn’t communicate (okay, maybe I am).

Yesterday I was reviewing our Tools and Resources Guide for a major website update coming next week.  While I was doing that, I re-discovered some great resources for choosing and maintaining partners.

The Community Tool Box from the University of Kansas has a toolkit on Creating and Maintaining Partnerships.  This toolkit is made up of steps for partnering organizations to work through together. Here are some of the main categories, but the descriptions on the website are quite detailed:

  • Describe the problems or goals that have brought partners together in common purpose
  • Outline your partnership’s vision, mission, and objectives
  • Re-examine the group’s membership in light of your vision, mission, and objectives
  • Describe potential barriers to your partnership’s success and how you would overcome them
  • Describe how the partnership will function and how responsibilities will be shared among partner organizations
  • Describe how the group will maintain momentum and foster renewal
  • If the partnership is losing momentum, review current barriers to your success
  • If necessary, revisit the plan to identify and recruit new or additional members
  • When maintaining the partnership at its current level is no longer appropriate or feasible, consider other alternatives, including changing focus, adding new members, or even dissolving the partnership

The Urban Indian Health Institute (UIHI) has a Resource Guide: Establishing and Maintaining Effective Partnerships.  It is a one-page document with an emphasis on building trust among partners.  Here are some of their characteristics of successful partnerships.

  • A common vision and collective commitment
  • Mutual trust and respect
  • Risks, resources and rewards are shared jointly
  • Opportunities for capacity building through learning exchanges
  • Openness to learning and teaching opportunities
  • Ground rules that create a safe space to address challenges
  • Acknowledgement of the differences between the partners
  • Flexibility

Balancing the emphasis on trust and respect, UIHI also has helpful guides on their Resources for Partnerships page for establishing Memorandums of Understanding (MOUs) so partners clearly understand what’s expected of each other.

You might take a look at these guides and think “they’re asking me to do a lot of work – I just want to do a few health information presentations at the local public library.”  While you may be correct, sometimes a small project grows into something bigger, and then suddenly you find yourself writing a proposal for a grant from the National Library of Medicine.  When going into a partnership, large or small (or personal), it is worth taking a look at some of these guides.  Even if you don’t do all the steps or complete an MOU, you will find something that can help you keep your partnership nourished and successful for as long as it needs to be.

 

Mean, Median or Mode–What’s the Difference?

Monday, November 7th, 2016

Five-star ratings with shadow on white

Last week I taught the class Finding Information in Numbers and Words: Data Analysis for Program Evaluation at the SCC/MLA Annual Meeting in Galveston, TX.  There is a section of the class where we review some math concepts that are frequently used in evaluation, and the discussion of mean, median and mode was more interesting than I expected.  Mean, median and mode are measures of “central tendency,” the most representative score in a distribution of scores.  Central tendency is a descriptive statistic, because it is one way to describe a distribution of scores. Since everyone there had run across those concepts before, I asked the group if anyone knew of any clever mnemonics for remembering the difference.  Several people responded, both in class and afterwards (thanks Margaret Vugrin, Julia Crawford and Michelle Malizia!).  Here are a couple of memory tools for you:

Mean: Turns out those mean kids are just average (mean = average)

Median: Just like the median in the road: if you line the values up in order, the median is the number “in the middle”

Mode: Mode has an “O” sound, and O is the first letter in “Often.”  It is the value on the list which occurs most often.

Or

Hey diddle diddle, the Median’s the middle; you add and divide for the Mean. The Mode is the one that appears the most, and the Range is the difference between.

Now that you can remember them, which one should you use?  I think a good way to think about it relates to the ratings you see when you’re trying to pick a hotel or restaurant.  I don’t know about you, but when I’m looking at a restaurant score, when I see 4 stars (out of 5), I’m not comfortable with that number without seeing the breakdown.  There’s a big difference between a 4 where the scores are spread out and one where the scores are heavily skewed towards 5.

Let’s say I’m looking at reviews for hotdog restaurants that have around 4.0 stars.  The first one I look at has a distribution of scores spread out relatively evenly from 5 stars (the most ratings) to 1 star (the fewest).  A mean (mean kids are just average) works well here.  To calculate it, you add all the scores up and divide the sum by the number of responses to reach a mean of 3.9.

Chart: star-rating distribution spread relatively evenly from 5 stars to 1, with a mean of 3.9

Here is the chart for a similar hotdog restaurant with a slightly higher 4.2 star rating:

Chart: star-rating distribution for the 4.2-star restaurant, heavily skewed toward 4 and 5 stars

While the mean works out to 4.2, you can see in the chart that the scores are heavily skewed toward 4 and 5 stars, so 4.2 does not accurately describe the ratings of that restaurant (remember that central tendency is a descriptive statistic). However, if you use a (middle of the road) median with this kind of distribution, the result is a whopping 5.  To determine the median, you line all the results up in order and select the one in the very middle.  You might want to use a median when the distribution looks skewed.

This particular restaurant analogy doesn’t really work with mode. Mode is used for categories, which cannot be averaged mathematically.  For example, if you want to know the most representative type of restaurant in your city, you might find out that your city has more hotdog restaurants than any other kind of restaurant (that would be awesome, right?).
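If you want to check your math, Python’s standard library computes all three. The rating counts below are invented to roughly match the two charts above (20 ratings each); they are not real review data:

```python
from statistics import mean, median, mode

# Invented rating counts: one restaurant spread out, one skewed toward 5
spread_out = [5]*9 + [4]*4 + [3]*4 + [2]*2 + [1]*1   # 20 ratings
skewed = [5]*11 + [4]*5 + [3]*2 + [2]*1 + [1]*1      # 20 ratings

print(mean(spread_out))   # 3.9 -- a fair summary of an even spread
print(mean(skewed))       # 4.2 -- hides how lopsided these ratings are
print(median(skewed))     # 5.0 -- the middle value, telling for skewed data

# Mode is for categories, which cannot be averaged
restaurants = ["hotdog", "taco", "hotdog", "pizza", "hotdog"]
print(mode(restaurants))  # hotdog
```

Note how the same 4.2-star restaurant reports a median of 5, just as in the chart: which summary you print depends on what shape the distribution has.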

I hope this helps. If you know any other mnemonics for mean, median, or mode, please send me an email at kjvargas@uw.edu and I will add them to the bottom of this post.

 

How Many Interviews Does It Take to Assess A Project?

Friday, October 21st, 2016

A green piggy bank standing with a group of pink piggy banks to represent the cost effectiveness of individual interviews

FAQ from NEO users: How many interviews or focus groups do we need for our qualitative assessment project?

Our typical response: Um, how much money and time do you have?

At which point, our users probably want to throw a stapler at us. (Karen and I work remotely out of an abundance of caution.)

Although all NEO users are, in fact, quite well-mannered, I was happy to discover a study that allows us to provide a better response to that question.  A study by Namey et al., published in the American Journal of Evaluation’s September issue, provides empirically based estimates of the number of one-to-one or group interviews needed for qualitative interviewing projects.  More specifically, their study compared the cost effectiveness of individual and focus group interviews. The researchers conducted an impressive 40 interviews and 40 focus groups (with an average of eight people per group). They then used a bootstrap sampling methodology, which essentially allowed them to run 10,000 mini-studies on their research questions.

They first looked at how many individual and focus group interviews it took to reach what qualitative researchers call thematic saturation. In lay terms, saturation means “Not really hearing anything new here.”  Operationally, it occurs when 80-90% of the information provided in an interview has already been covered in the previous interviews.
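Under that operational definition, saturation tracking can be sketched in a few lines of Python. The theme sets below are invented for illustration; real projects code themes from transcripts first:

```python
# Hypothetical sketch: each interview is represented by the set of
# theme codes identified in its transcript.
interviews = [
    {"access", "cost", "training"},
    {"access", "cost", "awareness"},
    {"training", "awareness", "cost"},
    {"access", "cost", "training"},
]

seen = set()
for number, themes in enumerate(interviews, start=1):
    # share of this interview's themes already heard in earlier ones
    already_covered = len(themes & seen) / len(themes)
    seen |= themes
    print(f"Interview {number}: {already_covered:.0%} already covered")
# By interview 3, the overlap hits 100% -- nothing new is appearing.
```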

The researchers found that 80% saturation occurred after 2-4 focus groups or 8 individual interviews. It took 16 interviews or 5 focus groups to reach 90% saturation. Note that their estimates apply to studies that focus on one specific population.  If you want to explore the experiences of two groups, such as doctors and nurse practitioners, you would hold 8 interviews per group to reach 80% thematic saturation.

For comparative cost assessment, the researchers used a formula that combined hourly rate of the data collector’s time, incentive costs per participant, and cost of transcription for recordings. They chose not to include cost for factors that vary widely, such as space rental or refreshments. Using more predictable costs made for cleaner and more generalizable comparisons.
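That kind of formula is easy to sketch. Every dollar figure below is invented for illustration, not taken from the study:

```python
# Hypothetical cost model combining the three inputs the authors describe:
# data collector's time, participant incentives, and transcription.
def session_cost(hours, hourly_rate, participants, incentive_each,
                 transcription_per_audio_hour):
    return (hours * hourly_rate
            + participants * incentive_each
            + hours * transcription_per_audio_hour)

# Illustrative numbers only: a 1-hour interview vs. a 2-hour focus group
one_interview = session_cost(1, 50, 1, 25, 90)     # 165 per interview
one_focus_group = session_cost(2, 50, 8, 25, 90)   # 480 per group

# Cost to reach 80% saturation (8 interviews vs. 3 focus groups)
print(8 * one_interview, 3 * one_focus_group)  # 1320 1440
```

With these made-up rates, 8 interviews cost $1,320 against $1,440 for 3 focus groups; your own comparison depends entirely on your actual local costs.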

Bottom line, they found individual interview methods cost 12-20% less than focus group methods.

Of course, many of us operate on shoestring budgets, so we are our own moderators and transcribers.  Even though most of us DIYers collect hourly wages, the cost for outsourcing these tasks is probably higher than for conducting them internally. Knowing this, the researchers looked at variations on moderator, transcriptionist, and incentive costs.  They also compared cost effectiveness of the two methods when lowering the standards for thematic saturation (i.e., aiming for 70% saturation instead of 80%). Across the board, individual interviews were more cost-effective than focus groups.

Cost is not always the only consideration when choosing between focus groups and individual interviews. Some assessment questions beg for group brainstorming, while others demand the privacy of one-to-one discussions. However, for many assessment studies, either method is equally viable.  In that case, cost and convenience will drive your decision. Personally, I often find individual interviews to be more convenient than focus groups, both for the participants and for me. It’s nice to know that the cost justifies using the more convenient approach.

The full article provides details on their methods, so it is a nice primer on qualitative analysis of interview transcripts. Here’s the full citation:

Namey E, Guest G, McKenna K, Chen M. Evaluating bang for the buck: a cost-effectiveness comparison between individual interviews and focus groups based on thematic saturation levels. Am J Eval. 2016 Sep;37(3):425-440.

 

What’s in a Name? Convey Your Chart’s Meaning with a Great Title

Friday, October 7th, 2016

Some of you may be working on conference posters and paper presentations for fall conferences.  And some of those will probably include charts displaying data that represent a lot of hard work on your part.  In most cases, you have just minutes to use that chart to get your audience to understand the data.

Stephanie Evergreen has great advice for displaying chart data.  She literally wrote the books on it: Presenting Data Effectively and Effective Data Visualization.  Her recent blog post is about one of the simplest and most powerful changes you can make to effectively present your chart data: “Strong Titles Are The Biggest Bang for Your Buck.”

What many of us do is present the data with a generic title, like “Attendance rates.” Then the viewer has to spend time working through the data and you hope that they see what you wanted them to.  What Stephanie Evergreen proposes (backed by persuasive research) is to give your charts a clear title that explains what the data shows. Your poster or paper is almost certainly making a point.  Determine how your chart supports the point of your presentation and state that in the title.  Here are some reasons why:

  • It respects your viewers’ time
  • It forces you to be clear about the point you want your data to make
  • It makes the data more memorable

Stephanie Evergreen’s post has some great examples of how a good title can really improve the impact of the chart.  In addition, here is an example from the NEO webinar Make Your Point: Good Design for Data Visualization.

Looking at this original chart, you might notice that in each activity the follow-up showed an increase over the baseline.  If you, the viewer, didn’t have a lot of time, that might be all you noticed.

Chart with title: Comparison of emergency preparedness activities from baseline to follow-up

With a simple change of title, you can see that the author of this presentation is highlighting the increased number of continuity of services plans.  This is designed to enhance the point of the presentation, not waste the viewers’ time. Also, note that the title is left-justified instead of centered.  Because the title is a full sentence, a left-justified format is easier to read.

Chart with title: The biggest improvement in emergency preparedness from baseline to follow-up was the number of network member organizations reporting that they had or were working on a service continuity plan.
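If you happen to build your charts in Python’s matplotlib (just one option; the same idea works in Excel or Tableau), the sentence-style, left-justified title is a single call. The activity names and counts below are placeholders, not the webinar’s actual data:

```python
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

# Placeholder counts of network member organizations (illustrative only)
activities = ["Written plan", "Staff training", "Service continuity plan"]
baseline = [12, 9, 4]
follow_up = [14, 11, 15]

fig, ax = plt.subplots()
x = range(len(activities))
ax.bar([i - 0.2 for i in x], baseline, width=0.4, label="Baseline")
ax.bar([i + 0.2 for i in x], follow_up, width=0.4, label="Follow-up")
ax.set_xticks(list(x))
ax.set_xticklabels(activities)
ax.legend()

# A full-sentence title that states the point, left-justified for readability
ax.set_title("The biggest improvement from baseline to follow-up "
             "was in service continuity plans", loc="left")
```

The `loc="left"` argument is what moves the title from the default centered position.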

So, while Shakespeare might have been correct when he wrote “What’s in a name? that which we call a rose / By any other name would smell as sweet,” what if the presenter was trying to show the fortitude of Texas antique roses to survive in harsh weather conditions, and the viewer only noticed how sweet the rose smelled?  Maybe the heading “A Rose” sometimes isn’t enough information.


Last updated on Monday, June 27, 2016

Funded by the National Library of Medicine under Contract No. UG4LM012343 with the University of Washington.