
NEO News

The blog of the NNLM Evaluation Office

I Used the Tearless Logic Model Method and It Worked. Nobody Cried!

Fri, 2017-12-15 14:46

In April 2015, this NEO Shop Talk post introduced you to something called a “Tearless Logic Model” that was developed by a group of community psychologists at Wichita State University and published in the Global Journal of Community Psychology Practice. I am here to report that I have used this, and it is true. It really is a tearless process. My evidence? Nobody cried during or after the process when I used it!

Let’s start by talking about why logic models might make people cry. Often when evaluators present logic models and talk about evaluation, they use terms and phrases such as “outcomes-based planning” and “that is an output, not an outcome.” Using professional jargon is like speaking a special language that prevents everyone else in the room from participating in the conversation. That can be a painful experience and can make some people feel as though they are being excluded on purpose, perhaps even to the point of tears! The Tearless Logic Model process was designed to make sure everyone feels included, understands the conversation, and can participate in it.

I decided to use the tearless logic model with a group from a small nonprofit organization that was working to start a community kitchen. I received a call from a consultant working with the group, asking if I would be able to help them develop a logic model. It was perfect. They were a bunch of people from the community who had absolutely no experience with evaluation. They had heard of logic models somewhere and were expecting someone to show up with a lot of technical forms and the terms to go with them.

Instead, they were surprised with some simple questions and paper on the wall. After a couple of questions, they let me know that they really needed to get to work on that logic model, and I reassured them that we would. We completed the process, and in the end they were truly surprised to learn that they had been creating the logic model all along. After the meeting I organized the information into a more formal framework and sent it to them, and they were pleased with the results. Moreover, because the group had collaboratively created the logic model and agreed on the activities, outputs, and outcomes, they were ready to buy into the whole evaluation process.

Recently, I was fortunate to hear Dr. Greg Meissen, one of the authors, talk about this tool. He has used it with many community groups and other types of groups, and it continues to be a useful tool. With large groups, you can break into smaller groups to answer the questions and then bring all the answers together at the end. He noted that, when the group is varied and includes both professionals with a lot of evaluation knowledge and individuals who know nothing about evaluation, the Tearless Logic Model levels the playing field by taking the jargon out of the process and introducing the concepts in terms everyone can understand. He also noted the value of having a good facilitator.

So, the next time you are dreading development of a logic model with a group of people, check out this tool. It really does make the process painless, and thus, “tearless.” If you use it, be careful not to slip into evaluation jargon or technical terms. In the end, after you rearrange the columns into the logic model flow, you can have the group check to see if there are connections among the activities, outputs, and outcomes. You especially want to make sure that every outcome is linked to an activity.
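If it helps to picture that final check, here is a minimal sketch in Python of what “every outcome is linked to an activity” could look like once the group’s answers are typed up. The activities and outcomes below are hypothetical examples, not from the community kitchen project.

```python
# A minimal sketch of the final linkage check, assuming the group's answers
# have been typed up as simple Python data. All entries are hypothetical.
logic_model = {
    "Teach MedlinePlus classes": {
        "outputs": ["10 classes held", "100 attendees"],
        "outcomes": ["Attendees find health information on their own"],
    },
    "Distribute healthy recipe cards": {
        "outputs": ["500 cards distributed"],
        "outcomes": [],  # no outcome yet -- worth discussing with the group
    },
}

desired_outcomes = [
    "Attendees find health information on their own",
    "Community members cook healthier meals",
]

linked = {o for entry in logic_model.values() for o in entry["outcomes"]}

# Flag activities that lead nowhere and outcomes no activity produces.
for activity, entry in logic_model.items():
    if not entry["outcomes"]:
        print(f"Activity with no outcome: {activity}")

for outcome in desired_outcomes:
    if outcome not in linked:
        print(f"Outcome with no supporting activity: {outcome}")
```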

Resource: Lien, A.D., Greenleaf, J.P., Lemke, M.K., Hakim, S.M., Swink, N.P., Wright, R., & Meissen, G. (2011). Tearless Logic Model. Global Journal of Community Psychology Practice, 2(2).


Meet Susan Wolfe, The NEO’s New Evaluation Specialist

Tue, 2017-12-05 17:39

Susan Wolfe standing beside a river in Europe

The NEO welcomes our new evaluation specialist, Susan M. Wolfe. Susan will be contributing her evaluation expertise to the National Library of Medicine’s recently announced partnership with the NIH All of Us Research Program, a landmark effort to advance precision medicine. The All of Us program aims to build one of the largest, most diverse datasets of its kind for health research, engaging one million or more volunteers nationwide who will sign up to share their information over time. NLM and All of Us will work together to raise awareness about the program and improve participant access through community engagement efforts with public libraries across the United States. You can read more about the All of Us partnership here.

Susan is an evaluator and community psychologist who works with local, state, national, and international organizations through her consulting firm, Susan Wolfe and Associates. She formerly served as a program analyst for the US Department of Health and Human Services Office of the Inspector General; director of a longitudinal homelessness research study funded by the National Institute of Mental Health; and assistant director of research for a large community college district. A teacher and writer, Susan has been an adjunct lecturer with several universities and has published numerous peer-reviewed journal articles, book chapters, and books. She has a PhD in Human Development from the University of Texas at Dallas, an MA in Ecological (Community) Psychology from Michigan State University, a BS in Psychology from the University of Michigan-Flint, and a diploma from the Michigan School of Beauty Culture.

What exactly is a community psychologist?

Most disciplines within psychology focus on individuals. Community psychologists go beyond the individual to look at the individual in interaction with the environment, which includes social, cultural, economic, political, and physical influences. We work to promote positive change, health, and empowerment at the individual and systemic levels.

How does being a community psychologist affect your evaluation work?

Community psychology provided me with a great foundation for evaluation work. My training included a lot of research and evaluation methods and ecological theories. These theories remind me how interconnected everything is: because of that interconnectedness, when you change something in the world, something else is likely to be affected. For example, when gentrification occurs in a neighborhood, we often think of it as a good thing because it revitalizes the neighborhood and prevents further decline. On the other hand, many people are displaced as rents rise and they can no longer afford to live there, and some become homeless. When I evaluate a program, I automatically start looking at it within its context, including where it fits within a system, how it affects the system, and how the system affects the program. I also add a racial equity and social justice perspective to my work where it is applicable.

What is one of your favorite evaluation experiences?

I’ve had too many favorite experiences to pick just one, so I will describe my most memorable. I was working for the U.S. Department of Health and Human Services when Hurricane Katrina struck. One of the tragedies of the hurricane was the deaths in nursing homes, which prompted a request for an evaluation of nursing home emergency planning in the Gulf States. I was appointed co-lead for the study, which had a very tight timeline. We incorporated a lot of context measures into the design, and team members did site visits to all the Gulf States. Data collection was interesting, but also emotionally taxing as we witnessed the devastation to the sites and the people who lived there, especially in Louisiana and Mississippi. We talked with nursing home directors, emergency managers, mayors, police chiefs, nursing home ombudsmen, and many others, and learned a lot about the complexity involved in deciding whether to evacuate and then implementing the plans either way. There are risks if residents stay and other risks if they leave, so it isn’t simple.

What made that experience so special?

The report received a lot of attention and we were left with a feeling that we produced a report that could make a difference. Our team received the Inspector General’s Award for Excellence in Program Evaluation for it.

What attracted you to the All of Us Research Project?

I was excited at the prospect of being involved in a project of such significance for medical practice. For the past several years I have done a substantial amount of work with health disparities. The idea that so much data will be gathered to enable scientists to learn more about individual and group differences across multiple levels (biological, environmental, behavioral) will, hopefully, help to reduce and eliminate the disparities. How could I not be attracted to this!

What bit of personal information would you like to share to help us know you better?

I am really introverted, although most people don’t believe me when they meet me. I love working at home with just the company of my Chihuahua, Chiweenie, and cat. I like to travel a lot, all over the country and world. I crochet mediocre things for my family – like blankets and hats, and I like to hang out at home, cook, clean the kitchen, and watch TV. I am married to Charles, have two grown children, a daughter-in-law, two grandchildren, and another grandchild on the way.

Final note: Susan works remotely for the University of Washington Health Sciences Library from Cedar Hill, Texas, and can be reached at smwolfe@uw.edu.


Happy Thanksgiving From the NEO Staff

Wed, 2017-11-22 18:41

The NNLM Evaluation Office staff had a rare opportunity in early November: we had our first-ever in-person staff meeting. Our staff members all work virtually from their offices in Georgia, Texas, and Washington. We traveled to Washington, DC, in early November to attend the American Evaluation Association conference, so we took a morning for a staff mini-retreat. This is our first all-staff photo, which we took in front of the AEA banner.

 Kalyna Durbak, Karen Vargas, Cindy Olney, Susan Wolfe

Allow me to introduce you to the NEO bloggers. From the left: Kalyna Durbak, Karen Vargas, me (Cindy Olney), and Susan Wolfe. Susan is our newest staff member, who has a special assignment with the NNLM. We will introduce her and her project in the near future. Look for NEO Shop Talk posts from Susan on topics related to participatory evaluation and culturally responsive evaluation.

We are thankful for all of our readers and wish you a wonderful holiday.


Free Resources to Help Communities Engage with Their Data

Mon, 2017-11-20 16:56

As you already know, the whole NEO team attended the Evaluation 2017 conference last week. I learned enough to fill quite a few blog posts. Today’s post is about some free tools I learned about that can help communities get comfortable working with data.

I went to a presentation by the Engagement Lab at Emerson College. The purpose of the Lab is to re-imagine civic engagement in our digital culture. The Engagement Lab has created a suite of free online tools to help the communities it works with engage with data, even if they are beginners. The products have super fun examples on each page so you can see if they would work for you.

The one I thought might be best for the NEO (and for this blog) is called WTFcsv, which stands for what you probably think it stands for (there’s an introductory video that includes a lot of bleeping). The idea is that if you are new at using data and have a ton of data in a CSV file, what the bleep do you do with it?

The web tool has some examples you can look at to understand how it works. I like the “UFO Sightings in Massachusetts” example, which shows, among other things, the most common reported shapes/descriptions of UFO sightings in MA (“light” is the most common, followed by “circle”). It even comes with an activity guide for educators to help people learn to work with data.

I wanted to see how this would work with National Network of Libraries of Medicine data. A few years ago NNLM had an initiative to increase awareness and use of National Library of Medicine resources, like PubMed and MedlinePlus. I uploaded the CSV file that had the results of that project, and the tool produced six charts summarizing the survey results. I think it did a good job of making charts that would give us something to talk about.

The good news is that it only takes minutes to upload the data and see the results. Also, below the data is a paragraph with some suggestions of conversations you might want to have about the data. WTFcsv is a tool for increasing community interaction with data, so this is very helpful. The results stay up on the website for 60 days, so you can share the link with a group.

Most of the bad news has to do with trying to make an example that would look good in this blog. To find data that would make a nice set of images to show you, I went through a lot of our NNLM NEO data, and I did have to reformat the data in the CSV file for it to work nicely. But if you were using the tool as a starting point, it’s okay for the data to not quite work with the WTFcsv resource: the purpose is to give you something to talk about, and it certainly does that (even if the something is that you might need to reconfigure your data a little).

Each chart title allows only a few characters, so I had to shorten the data’s column titles to versions that may only partly represent the data. However, I was making something to show in a screenshot, which is not what this tool is designed for. If I had left the titles long, they would show up in full when you click on a chart to see additional information.
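If you are curious what a homemade version of this column-by-column summary might look like, here is a rough sketch in Python using the pandas and matplotlib libraries. It is only an approximation of the idea behind WTFcsv, not the tool itself, and “survey.csv” is a hypothetical file name.

```python
# A rough approximation of the idea behind WTFcsv: one summary chart per
# column of a CSV file. Requires pandas and matplotlib; "survey.csv" is a
# hypothetical file name.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("survey.csv")

fig, axes = plt.subplots(1, len(df.columns),
                         figsize=(4 * len(df.columns), 4), squeeze=False)
for ax, column in zip(axes[0], df.columns):
    series = df[column].dropna()
    if pd.api.types.is_numeric_dtype(series):
        series.plot(kind="hist", ax=ax)  # numeric column: show the distribution
    else:
        series.value_counts().head(10).plot(kind="bar", ax=ax)  # top answers
    ax.set_title(column[:20])  # short, WTFcsv-style titles
plt.tight_layout()
plt.savefig("column_summaries.png")
```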

The presenter showed us three additional tools the Emerson College Engagement Lab has made that are available for anyone to use for free.

Wordcounter tells you the most common words in a body of text, including bigrams (pairs of words) and trigrams (sets of three words together). This can be a starting point for textual analysis.
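For anyone who wants to peek under the hood, counting words, bigrams, and trigrams takes only a few lines of Python. This is a bare-bones sketch of the same idea, not Wordcounter’s actual code, and the sample sentence is made up.

```python
# Count single words, bigrams (pairs), and trigrams (triples) in a text.
# The sample text is invented for illustration.
from collections import Counter
import re

text = ("The library offers classes and the library offers help "
        "finding good health information")
words = re.findall(r"[a-z']+", text.lower())

print(Counter(words).most_common(3))                             # top words
print(Counter(zip(words, words[1:])).most_common(3))             # top bigrams
print(Counter(zip(words, words[1:], words[2:])).most_common(3))  # top trigrams
```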

Same Diff compares two or more text files and tells you how similar or different they are.  Examples include comparing the speeches of Hillary Clinton and Donald Trump, or comparing the lyrics of Bob Dylan and Katy Perry, among others.
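One common way to quantify how similar two texts are is cosine similarity of their word counts. I don’t know exactly which measures Same Diff uses, so treat this Python sketch as an illustration of the general idea rather than a reimplementation of the tool.

```python
# Cosine similarity of two texts' word counts: 1.0 means identical word
# frequencies, 0.0 means no words in common.
from collections import Counter
import math

def similarity(text_a: str, text_b: str) -> float:
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[word] * b[word] for word in set(a) & set(b))
    norm_a = math.sqrt(sum(n * n for n in a.values()))
    norm_b = math.sqrt(sum(n * n for n in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

print(similarity("we love data and libraries",
                 "we love evaluation and libraries"))
```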

Connectthedots shows how your data are connected by analyzing them as a network. The example shows a chart where each node is a character in Les Misérables, and each link indicates that two characters appear in a scene together.
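The underlying idea is a co-occurrence network, which you can build yourself with the networkx Python package. The character pairs below are a made-up fragment for illustration, not the real Les Misérables dataset.

```python
# Build a small co-occurrence network: nodes are characters, an edge means
# the two characters share a scene. The pairs are illustrative only.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("Valjean", "Javert"),
    ("Valjean", "Cosette"),
    ("Cosette", "Marius"),
    ("Marius", "Eponine"),
])

# Degree (number of connections) gives a first hint at who is most central.
for character, degree in sorted(G.degree, key=lambda pair: -pair[1]):
    print(character, degree)
```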

If you want to know more about applying these tools in a real-life situation, the Executive Director of the Engagement Lab, Eric Gordon, has an online book called Accelerating Public Engagement: A Roadmap for Local Government.



#Eval17 Highlights

Mon, 2017-11-13 17:27

The whole NEO team attended AEA’s Evaluation 2017 conference last week. I am still processing a lot of what I learned from the conference and hope to write about it in more detail in the upcoming months. Until then, here are some of my highlights:

Workshops
I attended the two-day Eval 101 workshop by Donna Mertens and the half-day Logic Model workshop from Thomas Chapel. Through hands-on training, both workshops gave me a solid understanding of how evaluators plan, design, and execute their evaluations. I know I’ll be referring to my notes and handouts from these workshops often.

My customized conference tag.

Ignite presentations
The conference website defines these presentations as “20 PowerPoint slides that automatically advance every 15 seconds for a total presentation time of just 5 minutes.” Just thinking about creating such a presentation makes me nervous! The few that I saw have inspired me to work on my elevator pitch skills.

#Eval17Twitter
I attended a delicious lunch with fellow evaluators who are active on Twitter. Though I am not very active on that platform, they welcomed me and even listened to my elevator speech about why public libraries are amazing. The attendees worked in different evaluation environments and came from all over the United States and around the world. It was a fun way to learn more about the evaluation field.

Sessions
It’s hard to pick a favorite session, but one that stood out was DVR3: No title given. Despite the lack of a title, the multipaper presentation will stay with me for a long time. The first presentation was from Jennifer R. Lyons and Mary O’Brien McAdaragh, who talked about a personal project sending hand-drawn data visualizations on postcards. The second presentation, by Jessica Deslauriers and Courtney Vengrin, shared their experiences using Inkscape in creating data visualizations.

First NEO meeting IRL
This is my favorite part of the conference. I’ve been working with the NEO for over a year, and yet this was the first time we were all in the same room together. It was such a treat to dine with Cindy and Karen, and work in the same time zone. We also welcomed our newest member, Susan Wolfe, to the team. Look for a group photo in our upcoming Thanksgiving post.

Official banner for AEA's Evaluation 2017 conference.

I recommend that librarians interested in honing their evaluation skills sign up for the pre-conference workshops and attend AEA’s annual conference at least once. It opened my eyes to all sorts of possibilities in our efforts to evaluate our own trainings and programs.


Beyond Anecdotes: Story Collection Methods for Program Evaluation

Fri, 2017-11-03 15:18


The promotora’s uncle was sick and decided it was his time to die. She was less convinced, so she researched his symptoms on MedlinePlus and found evidence that his condition probably was treatable. So she gathered the family together to persuade him to seek treatment. Not only did her uncle survive, he began teaching his friends to use MedlinePlus. This promotora (community health worker) was grateful for the class she had taken on MedlinePlus offered by a local health sciences librarian.

This is a true story, but it is one that will sound familiar to many who do health outreach, education, or other forms of community service. Those of us who coach, teach, mentor, or engage in outreach often hear anecdotes of the unexpected ways our participants benefit from engagement in our programs. It’s why many of us chafe at using metrics alone to evaluate our programs. Numbers usually fall short of capturing this inspiring evidence of our programs’ value.

The good news is that it isn’t difficult to turn anecdotes into evaluation data, as long as you approach the story (data) collection and analysis systematically. That usually means using a standard question guide, particularly for those inexperienced in qualitative methodologies.

For easy story collection methods, check out the NEO tip sheet Qualitative Interview “Story” Methods. While there are many approaches to doing qualitative evaluation, this tip sheet focuses on methods that are ideal for those with limited budgets and experience in qualitative methods. Most of these story methods can be adapted for any phase of evaluation (needs assessment, formative, or outcomes). The interview guides for each method consist of 2-4 questions, so they can be used alone for short one-to-one interviews or incorporated into more involved interviews, such as focus groups. Every team member can be trained to collect and document stories, allowing you to compile a substantial bank of qualitative data in a relatively short period of time. For example, I used the Colonias Project Method for an outreach project in the Lower Rio Grande Valley and collected 150 stories by the end of this 18-month project. That allowed us to do a thematic analysis of how MedlinePlus en Español was used by the community members. Individual stories helped to illustrate our findings in a compelling way.

Do you believe a story is worth a thousand metrics? If so, check out the tip sheet and try your hand at your own qualitative evaluation project.

Note: The story above came from the project described in this article: Olney, Cynthia A. et al. “MedlinePlus and the Challenge of Low Health Literacy: Findings from the Colonias Project.” Journal of the Medical Library Association 95.1 (2007): 31–39. PMC free article.
