
NEO Shop Talk

The blog of the National Network of Libraries of Medicine Evaluation Office

Archive for the ‘News’ Category

“Five Things I’ve Learned,” from an Evaluation Veteran

Friday, March 24th, 2017

 

Cindy Olney in her home office

Kalyna Durbak, the NEO’s novice evaluator, recently posted the five things she learned about evaluation since joining our staff. I thought I would steal, er, borrow Kalyna’s “five things” topic and write about the most important lessons I’ve learned after 25+ years in the evaluation field.

My first experience with program evaluation was in the 1980s, as a graduate assistant in the University of Arizona School of Medicine’s evaluation office.  Excel was just emerging as the cool new tool for data crunching. SPSS ran on room-sized mainframes, and punch cards were fading fast from the computing world. Social security numbers were routinely collected and stored along with other information about our research or evaluation participants. Our desktop computers ran on DOS. The Internet had not even begun wreaking havoc.

Yep, I’m old. The field has evolved over time and the work is more meaningful than ever. Here are five things I know now that I wish I had known when I started.

#1 Evaluation is different from research: Evaluation and research have distinctly different end goals. The aim of research is to add to general knowledge and understanding.  Evaluation, on the other hand, is designed to improve the value of something specific (programs, products, personnel, services) and to guide decision-making. Evaluation borrows many techniques from research methodology because those methods are a means to accurate, credible information. Technical accuracy of data means nothing if it cannot be applied to program improvement or decision-making.

#2 Evaluation is not the most important kid in the room. Evaluation, unchecked, can be resource-intensive, both in money and time. For every dollar and hour spent on evaluation, one dollar and hour is subtracted from funds used to produce or enhance a program or service. Project plans should focus first on service or program design and delivery, with proportional funding allocated to evaluation. Evaluation studies should not be judged by the same criteria used for research. Rather, the goal is to collect usable information in the most cost-effective manner possible.

#3 What gets measured gets done: Evaluation is a management tool that’s worth the investment. Project teams are most successful when they begin with the end in mind, and evaluation plans force discussion about desired results (outcomes) early on.  (Thank you, Stephen Covey, for helping evaluators advocate for their early involvement in projects.)  You must articulate what you want to accomplish before you can measure it.  You need a good action plan, logically linked to desired outcomes, before you can design a process assessment. Even if your resources limit you to the most rudimentary of evaluation methods, the mere process of committing to outcomes, activities, and measures on paper (in a logic model, please!) allows a team to take one giant step forward toward program success.

#4 Value is in the eyes of the stakeholders: While research asks “What happened?”, evaluation asks “What happened, how important is it, and, knowing what we know, what do we do?”  That’s why an evaluation report that merely collects dust on a shelf is a travesty. The evaluation process is not complete until stakeholders have interpreted the information and contributed their perspectives on how to act on the findings. In essence, I am talking about rendering judgment: what do the findings say about the value of the program? That value judgment should, in turn, inform decisions about the future of the program. While factual findings should be objective, judgments are not.  Value is in the eyes of the people invested in the success of your program, aka stakeholders. Assessments of value may vary and even conflict among stakeholder groups. For example, a public library health literacy program has several types of stakeholders. Library users will judge the program based on its usefulness to their lives. City government officials will judge it based on how many taxpayers express satisfaction with the program.  Public librarians will value the program if it aligns with their library’s mission and brings visibility to their organization.  Evaluation is not complete until these multiple perspectives of value are explored and integrated into program decision-making.

#5 Everything I need to know about evaluation reporting I learned in kindergarten. Kindergarten was the first and possibly the last place I got to learn through play. In grad school, I learned to write 25-50 page research and evaluation reports. In my career, I discovered people read the executive summary (if I was lucky), then stopped. Evaluations are supposed to lead to learning about your programs, but no one thinks there’s anything fun about a 50-page report. Thankfully, evaluators have developed quite a few engaging ways to involve stakeholders in analyzing and using evaluation findings. For example, data dashboards allow stakeholders to interact with data visualizations and answer their own evaluation questions.  Data parties provide a social setting to share coffee, snacks, and data interpretations.  Innovations in evaluation reporting appear every year. It’s a great time to be an evaluator! More bling, less writing, and it’s all for the greater good.

So, there you have it: my five things.  These five lessons have served me well. I suspect they will continue to do so until bigger and better evaluation ideas come along. What about you? Share your insights below in our comments section.

Uninspired by Bars? Try Dot Plots

Friday, March 17th, 2017

Thanks to Jessi Van Der Volgen and Molly Knapp at the NNLM Training Office for allowing us to feature their assessment project and for providing the images in this post. 

Are you tired of bars?

I don’t mean the kind of bars where you celebrate and socialize. I mean the kind used in data visualization.  My evidence-free theory is that people still succumb to using the justifiably maligned pie chart simply because we cannot face one more bar graph.

Take heart, readers. Today, I’m here to tell you a story about some magic data that fell on the NEO’s doorstep and broke us free of our bar chart rut.

It all began with a project by our NNLM Training Office (NTO) colleagues, the intrepid leaders of NNLM’s instructional design and delivery. They do it all. They teach. They administratively support the regions’ training efforts. They initiate opportunities and resources to up-level instructional effectiveness throughout the network. One of their recent initiatives was a national needs assessment of NNLM training participants. That was the source of the fabulous data I write about today.

For context, I should mention that training is one of NNLM’s key strategies for reaching the furthest corners of our country to raise awareness, accessibility, and use of NLM health information resources. NNLM offers classes to all types of direct users (e.g., health professionals, community-based organization staff), but we value the efficiency of our “train-the-trainer” programs. In these classes, librarians and others learn how to use NLM resources so they, in turn, can teach their users. The national needs assessment was geared primarily toward understanding how to best serve “train-the-trainer” participants, who often take multiple classes to enhance their skills.

For the NTO’s needs assessment, one area of inquiry involved an inventory of learners’ need for training in 30 topic areas. The NTO wanted to assess participants’ desired level and their current level of proficiency in each topic.  That meant 60 questions. That was one heck-of-a-long survey. We wished them luck.

The NTO team was undaunted!  They did some research and found a desirable format for presenting this set of questions (see upper left). The format had a nice minimalist design. The sliders were more fun for participants than radio buttons. Also, NTO designed the online questionnaire so that only a handful of question pairs appeared on the screen at one time.  The approach worked: NTO received 559 responses, and 472 respondents completed the whole questionnaire.

Dot plots for four skill topic areas: Conducting literature searches (Current=4; Desired=5); Understanding and searching for evidence-based research (Current=3; Desired=5); Develop/teach classes (Current=3; Desired=5); Create videos/web tutorials (Current=2; Desired=4)

The NEO, in turn, consulted the writings of one of our favorite dataviz oracles, Stephanie Evergreen. And she did not disappoint.  We found the ideal solution: dot plots!  Evergreen’s easy-to-follow instructions from this blog post allowed us to create dot plots in Excel, using a few creative hacks. This approach allowed us to thematically cluster results from numerous related questions into one chart. We were able to present data for 60 questions in a total of seven charts.

I would like to point out a couple of design choices I made:

  • I used different shapes and colors to visually distinguish between “current proficiency” and “desired proficiency.” The navy blue for current proficiency was inspired by NNLM’s logo. I used a complementary green for desired proficiency because green means “go.”
  • Evergreen prefers to place labels (e.g., “conducting literature searches”) close to the actual dots. That works well if your labels consist of one or two words. We found that our labels had to be longer to make sense. Setting them flush-left made them more readable.
  • I suggested plotting medians rather than means because many of the data distributions were skewed. You can use means, but probably should round to whole numbers so you don’t distract from the gaps.
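The median-versus-mean point is easy to see with a small skewed example. Here is a toy set of hypothetical 1-5 ratings, using Python’s standard library:

```python
from statistics import mean, median

# Hypothetical 1-5 proficiency ratings, skewed toward the top of the scale
ratings = [5, 5, 5, 5, 4, 4, 4, 3, 1, 1]

print(mean(ratings))    # 3.7 -- pulled down by the two low outliers
print(median(ratings))  # 4.0 -- closer to the "typical" respondent
# Plotting the median (a whole number on the response scale) keeps the
# chart focused on the proficiency gap; an unrounded mean adds visual noise.
```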

Dot plots are quite versatile. We used the format to highlight gaps in proficiency, but other evaluators have demonstrated that dot plots work well for visualizing change over time and cross-group comparisons.

Dot plots are not as easy to create as the default Excel bar chart, but they are interesting.  So give up bars for a while.  Try plotting!


The Dark Side of Questionnaires: How to Identify Questionnaire Bias

Monday, March 6th, 2017

Villain cartoon with survey questions

People in my social media circles have been talking lately about bias in questionnaires.  Biased questionnaires exist: some are biased by accident and some on purpose; some are biased in the wording of their questions and some in other ways, such as the selection of the people asked to complete them. Recently, a couple of my friends posted on Facebook that people should check out the NNLM Evaluation Office to learn about better questionnaires. Huzzah! This week’s post was born!

Here are a few things to look for when creating, responding to, or looking at the results of questionnaires.

Poorly worded questions

Sometimes simple problems with questions can lead to bias, whether accidental or on purpose.  Watch out for these kinds of questions:

  • Questions that have an unequal number of positive and negative response options.

Example:

Overall, how would you rate NIHSeniorHealth?

Excellent | Very Good | Good | Fair | Poor 

Notice that “Good” is the middle option (which should be neutral), and some people consider “Fair” to be a slightly positive term.

  • Leading questions, which are questions that are asked in a way that is intended to produce a desired answer.

Example:

Most people find MedlinePlus very easy to navigate.  Do you find it easy to navigate?  (Yes   No)

How would you feel if you had trouble navigating MedlinePlus? It would be hard to say ‘No’ to that question.

  •  Double-barreled questions, which are two questions in one.

Example:

Do you want to lower the cost of health care and limit compensation in medical malpractice lawsuits?

This question has two parts – to answer yes or no, you have to agree or disagree with both parts. And who doesn’t want to lower health care costs?

  •  Loaded questions, which are questions that have a false or questionable logic inherent in the question (a “Have you stopped beating your wife” kind of question). Political surveys are notorious for using loaded questions.

Example:

Are you in favor of slowing the increase in autism by allowing people to choose whether or not to vaccinate their child?

This question makes the assumption that vaccinations cause autism. It might be difficult to answer if you don’t agree with that assumption.

The NEO has some suggestions for writing questions in Booklet 3: Collecting and Analyzing Evaluation Data, pages 5-7.

Questionnaire respondents

People think of the questions as the way to bias questionnaires, but another form of bias can be found in the selection of questionnaire respondents.

  • Straw polls or convenience polls are polls given to whoever is easiest to reach: for example, polling the people attending an event, or putting a questionnaire on a newspaper’s homepage (or your Facebook page).  They are problematic because they attract responses from people who are particularly interested in or energized by a topic, so you are hearing from the noisy minority.
  • Who you send the questionnaire to has a lot to do with why you are sending out the questionnaire. If you want to know about the opinions of people in a small club, then that’s who you would send them to. But if you are trying to reach a large number of people, you might want to try sampling, which involves learning about randomizing.  (Consider checking out the Appendix C of NNLM PNR’s Measuring the Difference: Guide to Planning and Evaluating Health Information Outreach). Keep in mind that the potential bias here isn’t necessarily in sending the questionnaires to a small group of people, but in how you represent the results of that questionnaire.
  • Low response rate may bias questionnaire results because it’s hard to know if your respondents truly represent the group being surveyed.  The best way to prevent response rate bias is to follow the methods described in this NEO post Boosting Response Rates with Invitation Letters to ensure you get the best response rate possible.
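For readers curious what “sampling, which involves learning about randomizing” looks like in practice, here is a minimal sketch of drawing a simple random sample from a mailing list, using Python’s standard library. The list and sample size are made up for illustration:

```python
import random

# Hypothetical sampling frame: everyone you *could* survey
mailing_list = [f"member{i:04d}@example.org" for i in range(1, 1201)]

random.seed(42)  # fixed seed only so the example is reproducible
# Each member has an equal chance of selection -- the key property
# that lets you generalize from the sample to the whole list
sample = random.sample(mailing_list, k=300)

print(len(sample))       # 300
print(len(set(sample)))  # 300 -- sampling without replacement, no repeats
```

The randomness, not the sample size, is what guards against bias; a random 300 tells you more about the full list than 300 volunteers ever could.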

Lastly, the Purpose of the Questionnaire

Just like looking for bias in news or health information or anything else, you want to think about who is putting out the questionnaire and what its purpose is.  A questionnaire isn’t always a tool for objectively gathering data.  Here are some other things a questionnaire can be used for:

  • To energize a constituent base so that they will donate money (who hasn’t filled out a questionnaire that ends with a request for donations?)
  • To confirm what someone already thinks on a topic (those Facebook polls are really good for that)
  • To give people information while pretending to find out their opinion (a lot of marketing polls I get on my landline seem to be more about letting me know about some products than really finding out what I know).

If you want to know more about questionnaires, here are some of the NEO resources that can help:

Planning and Evaluating Health Information Outreach Projects, Booklet 3: Collecting and Analyzing Evaluation Data

Boosting Response Rates with Invitation Letters

More NEO Shop Talk blog posts about Questionnaires and Surveys

 

Picture attribution

Villano by J.J., licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license.

 

NEO Announcement! Home Grown Tools and Resources

Friday, February 3rd, 2017

Red Toolbox with Tools

Since NEO (formerly OERC) was formed, we’ve created a lot of material: we’ve written four evaluation guides and a 4-step guide to creating an evaluation plan, hosted in-person classes and webinars, and of course, written in this very blog! All of the guides, classes, and blog posts come with a lot of materials, including tip sheets, example plans, and resource lists. To get to all of these resources, though, you had to go through each section of the website and search for them, or attend one of our in-person classes. That all changed today.

Starting now, NEO will be posting its own tip sheets, evaluation examples, and more of our favorite links on the Tools and Resources page. Our first addition is our brand new tip sheet, “Maximizing Response Rate to Questionnaires,” which can be found under the Data Collection tab. We also provided links to some of our blog posts in each tab, making them easier to find. Look for more additions to the Tools and Resources page in upcoming months.

Do you have a suggestion for a tip sheet? Comment below – you might see it in the future!

Make 2017 the “Yes-and” Year for Evaluation

Friday, January 6th, 2017

2017 (year) with fireworks and "Always Say Yes" written underneath

PSA: New Year’s resolutions are passé.

It’s out with dreary self-betterment goals involving celery and punishing exercise. Instead, the trendiest way to mark the New Year is to pick an inspirational word and make it your annual North Star for self-improvement.  In that spirit, I want to propose a word of the year for the NEO Shop Talk community.

That word is “Yes and.”

Now, some of you are wondering how I passed kindergarten with such poor counting skills. However, if you or anyone you know has taken training in improv theatre, you know “Yes and” is a word, or, more specifically, a verb.

Improv is a type of theatre in which a team of actors makes up scenes on the spot, usually from audience suggestions.  Because performances are unscripted, improv actors train rather than rehearse. Training is built around commonly accepted “rules,” and, arguably, the best-known rule of improv is “Always say Yes and…”   That means you accept any scene idea your teammate presents and add something to make that idea better. Once novice improvisers experience the upbeat emotional effect of this rule, they soon find themselves “yes-anding” in other parts of their lives. Some even preach about it to others (ahem).  Notice, by the way, I added a hyphen so we can all feel better about “yes-and” as the WORD of the year.

If you can “yes-and” evaluation requirements and responsibilities, it will put you on the road to mastering this rule. Let’s face it: the thought of evaluation does not generate an abundance of enthusiasm. Usually we do evaluation because someone else expects or requires us to do it: upper administration, accreditation boards, funding agencies. We often do evaluation only when forced, because it’s a lot of work. I compare evaluation to physical exercise. In theory, we know it’s good for us. In practice, we don’t have time for it. “Yes-anding” evaluation may not make you do more evaluation than you’re required to do.  It might, however, make your evaluation responsibilities more enjoyable or, at least, more meaningful to you personally.

For example, if you have to write a proposal for external funding, you often have to pull together assessment information to build a case for your proposed program. Does that mean you have to locate and synthesize lots of data from lots of sources? Yes, and you get to demonstrate all of the great things your library or organization has to offer. You also get to point out areas where you could provide even more awesome services if the funding agency gave you funding to meet your resource needs. (Here’s a NEO Shop Talk blog post on how to use SWOT analysis to synthesize needs assessment data.)

If your proposal includes outreach into a new community, you probably have to collect information from that community.   Do you have to find and conduct key informant interviews with representatives of the community?   Yes, and you also get to initiate relationships with influential community opinion leaders. Listening is a powerful way to build trust and rapport. If you have the opportunity to implement your program, these key informants will be powerful allies when you want to reach out to the broader community. (If you want some tips for finding key informants, check out this blog post.)

You can “yes-and” internally mandated evaluation as well. Your library or organization may require you to track data on an ongoing basis or to submit regular reports. To do this well, do you have to document your daily work, such as keeping track of details surrounding reference services, workshop attendance, or facility usage? Yes, and you also get to create a database of motivational information to inspire you and fellow co-workers working on the same objectives and goals. Compile that information monthly or quarterly, and pass it around at staff meetings.  Celebrate what you’re accomplishing. Figure out where effort is lagging and commit to bolstering activities in that area. Then celebrate your team’s astute use of data for making good program decisions. Yes, and, at reporting time, be sure you present your data so that your upper-level stakeholders notice your hard work.

Maybe your department or office has to set and assess annual objectives or outcomes. Do you have to collect and report data to show program results? Yes, and you also get to demonstrate your value to the organization. Just be sure you don’t hide your candle deep in some organizational online reporting system.  Annual reports are seldom page-turners. Find more compelling ways to communicate your success and contributions to upper administrators and influential users. For some ideas, you might want to check out some of NEO Shop Talk posts on reporting and data visualization.

Another rule of Improv is “There are no mistakes, only opportunities.”  Let’s paraphrase that to “there are no evaluation requirements, only opportunities.”  Here’s to making 2017 the year of “yes-anding” evaluation.

As a post-script, I want to share a NEO Shop Talk success with our readers. Did we post weekly blog entries in 2016? Yes, and you showed up more than ever. NEO Shop Talk visits increased 71% in 2016. Each month, we averaged 259 more visits compared to the same month in the previous year.  Our peak month was February, with 892 visits! Thank you, readers. Please come back and bring your friends!


2016 Annual NEO Shop Talk Round-up

Friday, December 30th, 2016

Top 10 List

Like everyone else, we have an end-of-the-year list.  Here’s our top ten list of the posts we wrote this year, based on number of views:

10. Developing Program Outcomes Using the Kirkpatrick Model – with Vampires

9.  Inspirational Annual Reporting with Appreciative Inquiry

8.  What is a Need?

7.  Designing Surveys: Does the Order of Response Options Matter?

6.  Simply Elegant Evaluation: GMR’s Pilot Assessment of a Chapter Exhibit

5.  A Chart Chooser for Qualitative Data!

4.  W.A.I.T. for Qualitative Interviews

3.  The Zen Trend in Data Visualization

2.  How I Learned to Stop Worrying and Love Logic Models (The Chili Lesson)

1.  Logic Model for a Birthday Party

We put a lot of links to interesting things in our blog posts.  Here are the Top Ten websites that people went to from our blog:

10. The Kirkpatrick Model

9.  Books by Stephanie Evergreen

8.  Tearless Logic Model article in Global Journal of Community Psychology Practice

7.  AEA 365 | A Tip-a-Day by and for Evaluators

6.  Public Libraries, Project Outcome – Looking Back, Looking Forward

5.  Build a Qualitative Dashboard

4.  Nat King Cole, The Christmas Song

3.  The Histomap by John Sparks

2.  Tools: Tearless Logic Model (how-to summary)

1.  Stephanie Evergreen Qualitative Chart Chooser

The NEO wishes you a happy and fulfilling New Year!!

My Favorite Things 2016 (Spoiler Alert: Includes Cats)

Wednesday, December 21st, 2016

Little figurine of Santa standing in snow, holding gifts

During gift-giving season every year, Oprah publishes a list of her favorite things. Well, move over, Oprah, because I also have a list. This is my bag of holiday gifts for our NEO Shop Talk readers.

Art Exhibits

There are two websites with galleries of data visualizations that are really fun to visit. The first,  Information is Beautiful , has wonderful examples of data visualizations, many of which are interactive. My favorites from this site are Who Old Are You?   (put in your birth date to start it) and Common MythConceptions. The other is Tableau Public, Tableau Software Company’s “public commons” for their users to share their work.  My picks are the Endangered Species Safari  and the data visualization of the Simpsons Vizapedia.  And, in case  you’re wondering what happened to your favorite Crayola crayon colors, you can find out here.

Movies

Nancy Duarte’s The Secret Structure of Great Talks is my favorite TEDtalk. Duarte describes the simple messaging structure underlying inspirational speeches. Once you grasp this structure, you will know how to present evaluations findings to advocate for stakeholder support. I love the information in this talk, but that’s not why I listen to it over and over again.  It’s because Duarte says “you have the power to change the world” and, by the end of the talk, I believe her.

Dot plot for fictional workshop data, titled "Participant Self-Assessment of their Holiday Skills before and after our Holiday Survival Workshop." Pre/post self-report ratings for four items: Baking without a sugar overdose (pre=3; post=5); Making small talk at the office party (pre=1; post=3); Getting gifts through airport security (pre=2; post=5); Managing road rage in mall parking lots (pre=2; post=4)

I also am a fan of two videos from the Denver Museum of Natural History, which demonstrate how museum user metrics can be surprisingly entertaining. What Do Jelly Beans Have To Do With The Museum? shows demographics with colorful candy and Audience Insights On Parking at the Museum  talks amusingly about a common challenge of urban life.

Crafts

If you want to try your hand at creating snappier charts and graphs, you need to spend some time at Stephanie Evergreen’s blog. For example, she gives you step-by-step instructions on making lollipop charts, dot plots, and overlapping bar charts. Stephanie works exclusively in Excel, so there’s no need to purchase or learn new software. You also might want to learn a few new Excel graphing tricks at Ann Emery’s blog.  For instance, she describes how to label the lines in your graphs or adjust bar chart spacing.

Site Seeing

How about a virtual tour to the UK? I still marvel at the innovative Visualizing Mill Road  project. Researchers collected community data, then shared their findings in street art. This is the only project I know of featuring charts in sidewalk chalk. The web site talks about community members’ reactions to the project, which is also pretty fascinating.

Humor

I left the best for last. This is a gift for our most sophisticated readers, recommended by none other than John Gargani, president of the American Evaluation Association. It is a web site for true connoisseurs of online evaluation resources.  I present to you the Twitter feed for Eval Cat.  Even the NEO Shop Talk cats begrudgingly admire it, although no one has invited them to post.

 

Pictures of the four NEO Eval Cats


Here’s wishing you an enjoyable holiday.

A Chart Chooser for Qualitative Data!

Friday, November 18th, 2016

Core Values Word Cloud Concept

When people talk about data visualization, they are usually talking about quantitative data. In a previous post, we explained that data visualizations help people perform three primary functions: exploring, making sense of, and communicating data.  How can we report qualitative data in a way that performs those same functions?

We just got some exciting news from the EvergreenData blog that they have developed a Qualitative Chart Chooser. Seriously–it’s a work of art. Actually two works of art because they have two different chart chooser drafts to choose from.

The way it works is this: you think about the story you want to tell with your data, maybe about how something improved over time because of your awesome project. Then using the chart chooser, you look at the “show change over time” category, and then you could select a timeline, before-and-after “change photos,” or a histomap (what’s a histomap?  Take a look at this one).

This chart chooser is a very cool tool. But I wouldn’t wait until it’s time to report findings to use it. One thing that we at the NEO suggest is that when you are first planning your project, you should think about the story or stories you want to tell at the end of your project. Maybe when you’re thinking about the story you want to tell, you could look at all these different qualitative charts in the chart chooser.  Which ones would you like to use? Do you want to tell the story of how your program aligns with the goals of your institution (you could try indicator dots)? Or maybe you want to show how the different parts of your project work together as a whole (a dendrogram might work).

By looking at these options before you design your evaluation plan, you can be sure that you are gathering the right data from the beginning. Backing up even further in your planning process, if you are having trouble deciding what story or stories you want to tell, this Qualitative Chart Chooser can give you ways to think about that.

Here is some more information on qualitative data visualization and storytelling from NEO Shop Talk:

Qualitative Data Visualization, September 26, 2014

More Qualitative Data Visualization Ideas, December 18, 2014

Telling Good Stories About Good Programs, June 29, 2015

DIY Tool for Program Success Stories, July 2, 2015

 

Participatory Evaluation, NLM Style

Friday, November 11th, 2016

Road Sign with directional arrow and "Get Involved" written on it.

This week, I invite you to stop reading and start doing.

Okay, wait. Don’t go yet.  Let me explain. I am challenging you to be a participant-observer in a very important assessment project being conducted by the National Library of Medicine (NLM).

The NEO is part of the National Library of Medicine’s program (The National Network of Libraries of Medicine) that promotes use of NLM’s extensive body of health information resources.  The NLM is devoted to advancing the progress of medicine and improving the public health through access to health information. Whether you’re a librarian, health care provider, public health worker, patient/consumer, researcher, student, educator, or emergency responder fighting health-threatening disasters, the NLM has high quality, open-access health information for you.

Now the NLM is working on a long-range plan to enhance its service to its broad user population.  It is inviting the public to provide input on its future direction and priorities. Readers, you are a stakeholder in the planning process. Here is your chance to contribute to the vision. Just click here to participate.

And, because you are an evaluation-savvy NLM stakeholder, your participation will allow you to experience a strength-based participatory evaluation method in action.  Participatory evaluation refers to evaluation projects that engage a wide swath of stakeholders. Strength-based evaluation approaches are those that focus on getting stakeholders to identify the best of organizations and suggest ways to build on those strengths. Appreciative Inquiry is one of the most widely recognized strength-based approaches. The NEO blog has posts featuring Appreciative Inquiry projects here and here.

While I have no idea if the NLM’s long-range planning team explicitly used Appreciative Inquiry for developing their Request for Information, their questions definitely embody the spirit of strength-based assessment. I’m not going to post all of the question here because I want readers to go to the RFI to see the questions for themselves. But as a teaser, here’s the first question that appears in each area of inquiry addressed in the feedback form:

 “Identify what you consider an audacious goal in this area – a challenge that may be daunting but would represent a huge leap forward were it to be achieved.  Include any proposals for the steps and elements needed to reach that goal. The most important thing NLM does in this area, from your perspective.”

So be an observer: check out the NLM's Request for Information. Notice how they constructed a strength-based participant feedback form.

Then be a participant: take a few minutes to post your vision for the future of NLM.

How I Learned to Stop Worrying and Love Logic Models (The Chili Lesson)

Friday, October 28th, 2016

Michelle Malizia

By Michelle Malizia, Director of Library Services for the Health Sciences, University of Houston

I’ll start with a full disclosure: I am a late convert to logic models. Many years ago, I worked in a department that, for a period of time, became governed by logic models. This experience made me fear… no, hate… logic models.  Several years later, through external workshops and assistance from the NN/LM Evaluation Office, I was introduced to the tremendous value of logic models.

My closest personal analogy relates to my feelings about chili. I grew up eating my mother’s chili, which basically consisted of cans of many different types of beans floating in a type of broth. I hated it. When I was 22 years old, I had no choice but to eat someone else’s chili. This chili had lots of ground beef and spices. It was delicious. Then it occurred to me, my mother’s chili was my only frame of reference for chili. I didn’t dislike chili – I disliked my mother’s chili.

And so it goes with logic models. Once I learned a different way to make and apply them, I became a dedicated user. I now design logic models whenever I plan a new service, activity or initiative.

In 2014, I was hired as the Director of Library Services for the Health Sciences at the University of Houston (UH). In 2017, UH will open its first medical library, and my task is to plan the services for the new facility. Of course, I turned to logic models, because they provide a framework for not only what I am planning but why I am planning each service and, ultimately, how I will evaluate whether I achieved the goal.

When I started on my logic models, I was tempted to begin with the activity. I had to remind myself that it is more important to document what I hope to accomplish through that activity (i.e., the outcome). Think about it: Why do librarians teach PubMed classes? Why do librarians want to be embedded in a nursing class? Why do so many libraries provide liaison services? Many of you are probably thinking: “That’s easy, Michelle. We do those things to better serve our customers.” My response is: How do you know those activities better serve your customers? How can you prove it to your stakeholders? That is why you should start with your outcomes rather than your activities.

For example, my new library will provide assistance with NIH Public Access Policy compliance. When I developed my logic model, I called upon my inner three-year-old to ask the question best asked by toddlers: Why? Because I have a creative side, I use Visio (a Microsoft Office application) to create my logic models. It lets me see the connections between activities. The chart below shows a portion of my logic model.

[Figure: NIH Public Access Policy Assistance Services logic model. Activities: conduct workshops, assist with compliance. Outputs: number of workshops, number of consultations. Short-term outcomes: increased awareness of the NIH Public Access Policy, increased knowledge of compliance specifics. Intermediate outcome: compliance. Long-term outcome: UH retains and receives NIH grants.]


As you can see, my long-term outcome for this activity is to ensure that UH retains and receives NIH grants. If UH researchers don’t comply with the NIH Public Access Policy mandate, their current and future funding is in jeopardy. The intermediate outcome leading to the long-term outcome is increased compliance with the policy. To increase compliance, I need to make researchers aware of the policy and of how to comply. Only then was I able to determine the best methods to accomplish those outcomes. For my university environment, the best way to achieve them is through workshops and consultations.
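For readers who like to see the backward chain laid out explicitly, the outcomes-first reading of this logic model can be sketched as a plain data structure. The entries below come straight from the example above; the representation itself is only an illustrative sketch, not part of any official logic-model tool.

```python
# A minimal sketch of the logic model above as a plain data structure.
# Entries are taken from the post; the structure itself is illustrative.
logic_model = {
    "long_term_outcome": "UH retains and receives NIH grants",
    "intermediate_outcome": "Increased compliance with the NIH Public Access Policy",
    "short_term_outcomes": [
        "Increased awareness of the NIH Public Access Policy",
        "Increased knowledge of compliance specifics",
    ],
    "activities": ["Conduct workshops", "Assist with compliance"],
    "outputs": ["Number of workshops", "Number of consultations"],
}

# Reading the model outcomes-first, as the post recommends
# (start with why, then work back to what you will do):
for level in ["long_term_outcome", "intermediate_outcome",
              "short_term_outcomes", "activities", "outputs"]:
    print(level, "->", logic_model[level])
```

Listing the long-term outcome first mirrors the planning order Michelle describes: decide the outcome, then choose the activities that plausibly lead to it.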

Now that I knew the what and the why, I needed to determine the how. How would I know if I accomplished my goals? Again, I turned to Visio to visualize how I could assess whether I achieved my outcomes.

My final step was to determine my measurable indicators. For example, in the case of workshops, my indicator was “% of workshop attendees who reported being more knowledgeable about how to comply with the policy.”  My target was 85% of attendees. To evaluate this outcome, I would use a pre- and post-test.


[Figure: Evaluation plan for NIH Public Access Policy Assistance Services. Outputs (workshops and consultations) lead to the short-term outcomes of increased awareness of the NIH Public Access Policy and increased knowledge of compliance specifics, assessed with a pre-/post-test and a follow-up questionnaire. The intermediate outcome, increased compliance, will be assessed with a survey and/or other follow-up.]

My overall work with logic models led to a pleasant surprise. Midway through my process, UH Libraries adopted a new strategic plan. Strategic plans are usually written in terms of goals. Some of my colleagues feverishly tried to determine where their activities fit into the library’s overall goals. Because I had already determined my outcomes, it was easy to slot my activities into the library’s plan.

If you have had a bad experience creating logic models, try again. Ask the NEO for assistance and look at their extremely helpful guides. Like me, you may finally realize that logic models are worth the time and energy. Remember, there are many different types of chili. Find the one you like best.

NEO note: The evaluation field has come a long way in discovering new, less painful approaches to creating and using logic models. If, like Michelle, you had a bad experience with logic models years ago, you might want to give them another chance. You can learn one approach through the NEO booklet Michelle mentioned, Planning Outcomes-Based Projects (Booklet 3 in our Planning and Evaluating Health Information Outreach Programs series). For alternative approaches, check out our NEO Shop Talk blog entries Logic Model for a Birthday Party and An Easier Way to Plan: Tearless Logic Models.

 

Last updated on Monday, June 27, 2016

Funded by the National Library of Medicine under Contract No. UG4LM012343 with the University of Washington.