
NEO Shop Talk

The blog of the National Network of Libraries of Medicine Evaluation Office

Archive for the ‘News’ Category

‘Tis the Season to Do Some Qualitative Interviewing!

Friday, December 9th, 2016

For most of us, the end-of-year festivities are in full swing. We get to enjoy holiday treats. Lift a wine glass with colleagues, friends, and loved ones. Step back from the daily grind and enjoy some light-hearted holiday fun.

Or, we could use these golden holiday social events to work on our qualitative interviewing skills! That’s right. I want to invite you to participate in another of the NEO’s holiday challenges: the Qualitative Interview Challenge. (You can read about our Appreciative Inquiry challenge here.)

If you are a bit introverted and overwhelmed in holiday situations, this challenge is perfect for you. It will give you a mission: a task to take your mind off that social awkwardness you feel in large crowds. (Please tell me I’m not the only one!) If, on the other hand, you are more of a life-of-the-party guest, this challenge will help you talk less and listen more.  Other party-goers will love you and you might learn something.

Here’s your challenge.  Jot down some good conversational questions that fit typical categories of qualitative interview questions.  Commit a couple questions to memory before you hit a party. Use those questions to fuel conversations with fellow party-goers and see if you get the type of information you were seeking.

To really immerse yourself in this challenge, create a chart with the six categories of questions. (I provide an example below.) When your question is successful (i.e., you get the type of information you wanted), give yourself a star. Sparkly star stickers are fun, but you can also simply draw stars beside the questions. Your goal is to get at least one star in each category by midnight on December 31.

Holiday challenge chart: a holiday border surrounds a table-style chart with the six categories of questions, the five extra-credit techniques, and blank cells for stars.

According to qualitative researcher and teacher extraordinaire Michael Q. Patton, there are six general categories of qualitative interview questions. Here are the categories:

  • Experience or behavior questions: Ask people to tell you a story about something they experienced or something they do. For unique experiences, you might say “Describe your best holiday ever.” You could ask about more routine behavior, such as “What traditions do you try to always celebrate during the holidays?”
  • Sensory questions: Sensory questions are similar to experience questions, but they focus on what people see, hear, feel, smell, or taste. Questions about holiday meals or vacation spots will likely elicit sensory answers.
  • Opinion and value questions: If you ask people what they think about something, you will hear their opinions and values. When Charlie Brown asked “What is the true meaning of Christmas?” he was posing a value/opinion question.
  • Emotions questions: Here, you ask people to express their emotional reactions. Emotions questions can be tricky. In my experience, most people are better at expressing opinions than emotions, so be prepared to follow up. For example, if you ask me “What do you dislike about the holiday season?” I might say “I don’t like gift-shopping.” “Like” is more of an opinion word than an emotion word. You want me to reach past my brain and into my heart. So you could follow up with “How do you feel when you’re shopping for holiday gifts?” I might say “The crowds frustrate and exhaust me” or “I feel stressed out trying to find perfect gifts on a deadline.” Now I have described my emotions around gift-shopping. Give yourself a star!
  • Knowledge questions: These questions seek factual information. For example, you might ask for tried-and-true advice to make holiday travel easier. If answers include tips for getting through airport security quickly or the location of clean gas station bathrooms on the PA Turnpike, you asked a successful knowledge question.
  • Background and demographic questions: These questions explore how factors such as ethnicity, culture, socio-economic status, occupation, or religion affect one’s experiences and world view. What foods do their immigrant grandparents cook for New Year’s celebrations every year?  What is it like to be single during the holidays? How do religious beliefs or practices affect their approach to the holidays? These are examples of background/demographic questions.

To take this challenge up a notch, try to incorporate the following techniques while practicing interview skills over egg nog.

Ask open-ended questions. Closed-ended questions can be answered with a word or phrase.  “Did you like the movie?”  The answer “Yes” or “No” is a comprehensive response to that question.   An open-ended version of this question might be “Describe  a good movie you saw recently.”  If you phrased your question so that your conversation partner had to string together words or sentences to form an answer, give yourself an extra star.

Pay attention to question sequence:  The easiest questions for people to answer are those that ask them to tell a story. The act of telling a story helps people get in touch with their opinions and feelings about something.  Also, once you have respectfully listened to their story, they will feel more comfortable sharing opinions and feelings with you. So break the ice with experience questions.

Wait for answers:  Sometimes we ask questions, then don’t wait for a response.  Some people have to think through an answer completely before they talk out loud. Those seconds of silence make me want to jump in with a rephrased question. The problem is, you’ll start the clock again as they contemplate the new version of your question. To hold myself back, I try to pay attention to my own breathing while maintaining friendly eye contact.

Connect and support: You get another star if you listen carefully enough to accurately reflect your conversation partner’s answers back to them. This is called reflective listening. If you want a fun tutorial on how to listen, check out Julian Treasure’s TED Talk.

Some of you are likely thinking “Thanks but no thanks for this holiday challenge.” Maybe it seems too much like work. Maybe you plan to avoid social gatherings like the plague this season. Fair enough. But all of these tips apply to bona fide qualitative interviews, too. When planning and conducting qualitative interviews, remember to include questions that target different types of information. Make your questions open-ended and sequence them so they are easy to answer. Listen carefully and connect with your interviewee by reflecting back what you heard.

Regardless of whether you take up the challenge or not, I wish you happy holidays full of fun and warm conversations.

My source for interview question types and interview techniques was Patton MQ. Qualitative Research and Evaluation Methods. 4th ed. Thousand Oaks, CA: Sage; 2015.

The Appreciative Inquiry Holiday Challenge

Wednesday, November 23rd, 2016

A hand writing in a notebook near Christmas toys.

The holiday season is upon us, so I want to give our readers a holiday Appreciative Inquiry challenge.  This is a fun way to practice the Appreciative Inquiry interview. It also provides an opportunity for you and your family to plan a better-than-usual holiday season.  Finally, it gives everyone something to talk about other than politics. (You’re welcome.)

During the coming week, ask yourself and your loved ones the following three questions:

  • What was the best holiday experience you’ve ever had?
  • What made that experience so special? What did you value about it?
  • What could happen to make this year’s holiday season exceptional?

Here’s how I would answer the questions.  My favorite holiday was the one I had as a child, traveling to Arizona to spend Christmas with extended family.  For a kid from Western Pennsylvania, Tucson was exotic.  Christmas lights on saguaro cactuses. Luminarias.  Tree ornaments from Mexico. The best part, though, was a trip to the Catalina mountains.

What I valued about that holiday was how different the setting was and seeing how people from another part of the country celebrated the holiday. I also liked the bright, sunny days outdoors.

It’s a little too late to book a trip to Arizona for the holidays, but I can still seek out places close by that have a different take on holiday decorations. As for enjoying the outdoors, I live in a place that offers lots of opportunity on that front. My husband and I can easily fit in a hike and a trip to Helen, a Bavarian Alpine village in the North Georgia mountains.

Once you’ve talked with your family, make a list of everyone’s ideas for a great holiday and check them off as they happen. You could even do this as a group on a private Facebook page.  Or go old school and put a written list on your refrigerator door.  See if Appreciative Inquiry doesn’t add some sparkle to your holiday season this year.

Happy Thanksgiving, everyone.

 

 

A Chart Chooser for Qualitative Data!

Friday, November 18th, 2016

Core Values Word Cloud Concept

When people talk about data visualization, they are usually talking about quantitative data. In a previous post, we explained that data visualizations help people perform three primary functions: exploring, making sense of, and communicating data.  How can we report qualitative data in a way that performs those same functions?

We just got some exciting news from the EvergreenData blog that they have developed a Qualitative Chart Chooser. Seriously–it’s a work of art. Actually two works of art because they have two different chart chooser drafts to choose from.

The way it works is this: you think about the story you want to tell with your data, maybe about how something improved over time because of your awesome project. Then, using the chart chooser, you look at the “show change over time” category and select a timeline, before-and-after “change photos,” or a histomap (what’s a histomap? Take a look at this one).

This chart chooser is a very cool tool, but I wouldn’t wait until it’s time to report findings to use it. One thing we at the NEO suggest is that when you are first planning your project, you think about the story or stories you want to tell at the end of it. While you’re thinking about that story, you could look at all the different qualitative charts in the chart chooser. Which ones would you like to use? Do you want to tell the story of how your program aligns with the goals of your institution (you could try indicator dots)? Or maybe you want to show how the different parts of your project work together as a whole (a dendrogram might work). By looking at these options before you design your evaluation plan, you can be sure you are gathering the right data from the beginning. Backing up even further in the planning process, if you are having trouble deciding what story or stories you want to tell, this Qualitative Chart Chooser can give you ways to think about that.
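If you would like to experiment with one of these chart types before committing to it, most can be mocked up quickly in code. Here is a minimal sketch, assuming hypothetical themes and coding data and using Python’s scipy and matplotlib (my choice of tools, not something the chart chooser prescribes), of the kind of dendrogram you might use to show how project themes group together:

```python
# A minimal sketch of a dendrogram for qualitative themes.
# The themes and coding matrix below are hypothetical, invented
# only to illustrate the chart type.
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram

themes = ["Training", "Outreach", "Funding", "Partnerships", "Technology"]

# Each row is a theme; each column is a hypothetical interview transcript.
# A 1 means the theme was coded in that transcript.
coding_matrix = np.array([
    [1, 1, 0, 1, 0, 1],  # Training
    [1, 1, 0, 1, 0, 0],  # Outreach
    [0, 0, 1, 0, 1, 1],  # Funding
    [0, 0, 1, 0, 1, 0],  # Partnerships
    [1, 0, 0, 1, 0, 1],  # Technology
])

# Cluster themes that tend to appear in the same transcripts.
linkage_matrix = linkage(coding_matrix, method="ward")

fig, ax = plt.subplots(figsize=(6, 4))
dendrogram(linkage_matrix, labels=themes, orientation="right", ax=ax)
ax.set_title("Hypothetical example: how project themes cluster together", loc="left")
plt.tight_layout()
plt.show()
```

The labels, matrix, and clustering method are all placeholders; the point is only that a qualitative chart like this takes a handful of lines once you know which story you want it to tell.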

Here is some more information on qualitative data visualization and storytelling from NEO Shop Talk:

Qualitative Data Visualization, September 26, 2014

More Qualitative Data Visualization Ideas, December 18, 2014

Telling Good Stories About Good Programs, June 29, 2015

DIY Tool for Program Success Stories, July 2, 2015

 

Participatory Evaluation, NLM Style

Friday, November 11th, 2016

Road Sign with directional arrow and "Get Involved" written on it.

This week, I invite you to stop reading and start doing.

Okay, wait. Don’t go yet.  Let me explain. I am challenging you to be a participant-observer in a very important assessment project being conducted by the National Library of Medicine (NLM).

The NEO is part of the National Network of Libraries of Medicine, the National Library of Medicine’s program for promoting the use of NLM’s extensive body of health information resources. The NLM is devoted to advancing the progress of medicine and improving the public health through access to health information. Whether you’re a librarian, health care provider, public health worker, patient/consumer, researcher, student, educator, or emergency responder fighting health-threatening disasters, the NLM has high-quality, open-access health information for you.

Now the NLM is working on a long-range plan to enhance its service to its broad user population.  It is inviting the public to provide input on its future direction and priorities. Readers, you are a stakeholder in the planning process. Here is your chance to contribute to the vision. Just click here to participate.

And, because you are an evaluation-savvy NLM stakeholder, your participation will allow you to experience a strength-based participatory evaluation method in action. Participatory evaluation refers to evaluation projects that engage a wide swath of stakeholders. Strength-based evaluation approaches are those that ask stakeholders to identify the best of an organization and to suggest ways to build on those strengths. Appreciative Inquiry is one of the most widely recognized strength-based approaches. The NEO blog has posts featuring Appreciative Inquiry projects here and here.

While I have no idea whether the NLM’s long-range planning team explicitly used Appreciative Inquiry to develop their Request for Information, their questions definitely embody the spirit of strength-based assessment. I’m not going to post all of the questions here because I want readers to go to the RFI to see them for themselves. But as a teaser, here’s the first question that appears in each area of inquiry addressed in the feedback form:

 “Identify what you consider an audacious goal in this area – a challenge that may be daunting but would represent a huge leap forward were it to be achieved.  Include any proposals for the steps and elements needed to reach that goal. The most important thing NLM does in this area, from your perspective.”

So be an observer: check out the NLM’s Request for Information.  Notice how they constructed a strength-based participant feedback form.

Then be a participant: take a few minutes to post your vision for the future of NLM.

Mean, Median or Mode–What’s the Difference?

Monday, November 7th, 2016

Five stars ratings with shadow on white

Last week I taught the class Finding Information in Numbers and Words: Data Analysis for Program Evaluation at the SCC/MLA Annual Meeting in Galveston, TX. There is a section of the class where we review some math concepts that are frequently used in evaluation, and the discussion of mean, median, and mode was more interesting than I expected. Mean, median, and mode are measures of “central tendency,” which is the most representative score in a distribution of scores. Central tendency is a descriptive statistic because it is one way to describe a distribution of scores. Since everyone there had run across these concepts before, I asked the group if anyone knew of any clever mnemonics for remembering the difference. Several people responded, both in class and afterwards (thanks, Margaret Vugrin, Julia Crawford, and Michelle Malizia!). Here are a couple of memory tools for you:

Mean: Turns out those mean kids are just average (mean = average)

Median: Just like the median in the road: if you line the values up in order, the median is the number “in the middle”

Mode: Mode has an “O” sound, and O is the first letter in “Often.”  It is the value on the list which occurs most often.

Or

Hey diddle diddle, the Median’s the middle; you add and divide for the Mean. The Mode is the one that appears the most, and the Range is the difference between.

Now that you can remember them, which one should you use? A good way to think about it relates to the ratings you see when you’re trying to pick a hotel or restaurant. I don’t know about you, but when I’m looking at a restaurant score and I see 4 stars (out of 5), I’m not comfortable with that number without seeing the breakdown. There’s a big difference between a 4 where the scores are spread out and one where the scores are heavily skewed toward 5.

Let’s say I’m looking at reviews for hotdog restaurants that have around 4.0 stars. The first one I look at has a distribution of scores spread relatively evenly from 5 stars (the most ratings) to 1 star (the fewest). A mean (mean kids are just average) works well here: add up all the scores and divide the sum by the number of responses to reach a mean of 3.9.

Chart: rating distribution spread relatively evenly from 5 stars down to 1 star, with a mean of 3.9.

Here is the chart for a similar hotdog restaurant with a slightly higher 4.2 star rating:

Chart: rating distribution heavily skewed toward 4 and 5 stars, with a mean of 4.2.

While the mean works out to 4.2, you can see in the chart that the scores are heavily skewed toward 4 and 5 stars, so 4.2 does not accurately describe the ratings of that restaurant (remember that central tendency is a descriptive statistic). However, if you use a (middle-of-the-road) median with this kind of distribution, the result is a whopping 5. To determine the median, you line all the results up in order and select the one in the very middle. You might want to use a median when the distribution looks skewed.

This particular restaurant analogy doesn’t really work with mode. Mode is used for categories, which cannot be averaged mathematically. For example, if you want to know the most representative type of restaurant in your city, you might find out that your city has more hotdog restaurants than any other kind of restaurant (that would be awesome, right?).
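If you like to see the arithmetic spelled out, here is a minimal sketch in Python using the standard library’s statistics module. The rating counts are made up to roughly mimic the two hotdog-restaurant charts above; they are not the actual data behind those charts.

```python
# Compare mean, median, and mode for two hypothetical rating distributions:
# one spread out fairly evenly, one heavily skewed toward 4 and 5 stars.
from statistics import mean, median, mode

def expand(counts):
    """Turn {star: number_of_ratings} into a flat list of individual ratings."""
    ratings = []
    for star, number in counts.items():
        ratings.extend([star] * number)
    return ratings

# Hypothetical counts, not the real chart data.
evenly_spread = expand({5: 80, 4: 55, 3: 40, 2: 20, 1: 5})
heavily_skewed = expand({5: 100, 4: 55, 3: 15, 2: 10, 1: 10})

for name, ratings in [("evenly spread", evenly_spread),
                      ("heavily skewed", heavily_skewed)]:
    print(f"{name}: mean={mean(ratings):.1f}, "
          f"median={median(ratings)}, mode={mode(ratings)}")

# Prints:
#   evenly spread: mean=3.9, median=4.0, mode=5
#   heavily skewed: mean=4.2, median=5.0, mode=5
# For the skewed distribution, the mean hovers around 4 while the median is 5,
# which is exactly the gap described above.
```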

I hope this helps. If you know any other mnemonics for mean, median, or mode, please send me an email at kjvargas@uw.edu and I will add them to the bottom of this post.

 

How I Learned to Stop Worrying and Love Logic Models (The Chili Lesson)

Friday, October 28th, 2016

Photo of Michelle Malizia

By Michelle Malizia, Director of Library Services for the Health Sciences, University of Houston

I’ll start with a full disclosure: I am a late convert to logic models. Many years ago, I worked in a department that, for a period of time, became governed by logic models. This experience made me fear… no, hate… logic models.  Several years later, through external workshops and assistance from the NN/LM Evaluation Office, I was introduced to the tremendous value of logic models.

My closest personal analogy relates to my feelings about chili. I grew up eating my mother’s chili, which basically consisted of cans of many different types of beans floating in a type of broth. I hated it. When I was 22 years old, I had no choice but to eat someone else’s chili. This chili had lots of ground beef and spices. It was delicious. Then it occurred to me, my mother’s chili was my only frame of reference for chili. I didn’t dislike chili – I disliked my mother’s chili.

And so it goes with logic models. Once I learned a different way to make and apply them, I became a dedicated user. I now design logic models whenever I plan a new service, activity or initiative.

In 2014, I was hired as the Director of Library Services for the Health Sciences at the University of Houston (UH). In 2017, UH will have its first medical library, and my task is to plan the services for the new facility. Of course, I turned to logic models, because they provide a framework for not only what I am planning but why I am planning each service and, ultimately, how I will evaluate whether I achieved the goal.

When I started with my logic models, I was tempted to begin with the activity. I had to remind myself that it is more important to document what I hope to accomplish by that activity (i.e. outcome).  Think about it: Why do librarians teach PubMed classes? Why do librarians want to be embedded in a nursing class? Why do so many libraries provide liaison services? Many of you are probably thinking: “That’s easy, Michelle. We do those things to better serve our customers.” My response is:  How do you know those activities better serve your customers?  How can you prove it to your stakeholders?  That is the reason you should start with your outcomes rather than activities.

For example, my new library will be providing assistance with NIH Public Access Policy compliance. When I developed my logic model, I called upon my inner 3-year-old to ask the question best asked by toddlers: Why? Because I have a creative side, I use Visio (a Microsoft Office application) to create my logic models. It lets me see the connections between activities visually. The chart below shows a portion of my logic model.

NIH Public Access Policy Assistance Services logic Model One. Activities are conduct workshops and assistance with compliance. Outputs are number of workshops and number of consultations. Short-term outcomes are increase awareness of NIH Public Access Policy and increased knowledge of compliance specifics. Intermediate outcome is compliance and long-term outcome is UH retains and receives NIH grants.

 

As you can see, my long-term outcome for this activity was to ensure that UH retains and receives NIH Grants. If UH researchers don’t comply with the NIH Public Access Policy mandate, their current and future funding is in jeopardy. The intermediate outcome leading to the long-term outcome is increased compliance with the policy. In order to increase compliance, I need to make researchers aware of the policy and how to comply.  That’s when I was able to determine the best methods for me to accomplish those outcomes. For my university environment, the best way to achieve these outcomes is through workshops and consultations.

Now that I knew the “what and the why,” I needed to determine the how.  How would I know if I accomplished my goals? Again, I turned to Visio to visualize how I could assess if I achieved my outcomes.

My final step was to determine my measurable indicators. For example, in the case of workshops, my indicator was “% of workshop attendees who reported being more knowledgeable about how to comply with the policy.”  My target was 85% of attendees. To evaluate this outcome, I would use a pre- and post-test.

 

Evaluation plan for NIH Public Access Policy Assistance Services. Outputs are workshops and consultations, leading to short-term outcomes of increased awareness of NIH Public Access policy and increased knowledge of compliance specifics. The outcomes will be assessed with a pre-post test and follow-up questionnaire. The intermediate outcome is increased compliance, which will be assessed with a survey and/or other follow-up
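For readers who like to tinker, the portion of the logic model described above can also be written out as a simple structure. Here is a minimal sketch in Python, paraphrasing the chart descriptions; the dictionary layout is my own illustration, not a Visio feature or an NEO template.

```python
# A minimal sketch of the NIH Public Access Policy portion of the logic model,
# paraphrased from the chart descriptions above. The dictionary layout is
# only an illustration of how the columns connect.
logic_model = {
    "activities": ["Conduct workshops", "Assist with compliance"],
    "outputs": ["Number of workshops", "Number of consultations"],
    "short_term_outcomes": [
        "Increased awareness of the NIH Public Access Policy",
        "Increased knowledge of compliance specifics",
    ],
    "intermediate_outcome": "Increased compliance with the policy",
    "long_term_outcome": "UH retains and receives NIH grants",
    "assessment": {
        "short_term_outcomes": "Pre- and post-test, follow-up questionnaire",
        "intermediate_outcome": "Survey and/or other follow-up",
    },
    "indicator": "% of attendees reporting more knowledge of how to comply",
    "target": "85% of workshop attendees",
}

# Reading the model from the long-term outcome back toward the activities
# keeps the "why" in front of the "what," which is Michelle's point.
print(logic_model["long_term_outcome"])
```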

My overall work with logic models led to a pleasant surprise. Midway through my process, UH Libraries adopted a new strategic plan. Strategic plans are usually written in terms of goals. Some of my colleagues feverishly tried to determine where their activities fit into the library’s overall goals. Because I had already determined my outcomes, it was easy to slot my activities into the library’s overall plan.

If you have had a previous bad experience creating logic models, try it again. Ask the NEO for assistance and look at their extremely helpful guides. Like me, you may finally realize that logic models are worth the time and energy. Remember, there are many different types of chili.  Find the one that you like best.

NEO note: The evaluation field has come a long way in discovering new, less painful approaches to creating and using logic models.  If, like Michelle, you had bad experiences years ago with logic models, you might want to give them another chance.  You can learn one approach through the NEO booklet Michelle mentioned, which is  Planning Outcomes-Based Projects (Booklet 3 in our Planning and Evaluating Health Information Outreach Programs series). For alternative approaches, check out our NEO Shop Talk blog entries  Logic Model for a Birthday Party and An Easier Way to Plan: Tearless Logic Models.

 

Meet the NEO’s New Program Assistant Kalyna Durbak

Friday, October 14th, 2016

Kalyna Durbak

I am pleased to introduce the NEO’s new program assistant, Kalyna Durbak, MLIS, who joined our staff on October 3.  Kalyna will be our go-to person for managing the NEO website, providing technical support with webinars, and helping with the “roll-up-your-sleeves” work involved in carrying out evaluation projects.

Kalyna began working for the UW Health Sciences Library in May 2016. Prior to joining the NEO, Kalyna was the Web Content Assistant on the team that created and promotes the Response & Recovery App in Washington (RRAIN), designed to provide emergency responders with quick access to disaster-management resources. It also provides local information such as weather alerts and traffic reports. Kalyna also provided web content and social media assistance for the Health Evidence Resource for Washington State (HEALWA), a portal that provides affordable online access to clinical information and health education resources. The portal is available to health professionals who are licensed through 23 state organizations. A 2015 evaluation study conducted by HEALWA showed that many health professionals eligible to use the portal are not aware of it.  Kalyna helped promote HEALWA through social media and exhibits.

Kalyna earned her MLIS degree from University of Illinois at Urbana–Champaign and a BA in History from University of Illinois at Chicago. She was an intern at the Smithsonian Ralph Rinzler Folklife Archives and Collections and the Rochester Institute of Technology Archives. A Midwestern native, she recently moved to Seattle with her husband because they were attracted to Seattle’s mix of urban and outdoor opportunities. To introduce herself to our readers, Kalyna agreed to answer a few questions about herself.

What made you want to pursue an MLS?

I always considered myself a “jack of all trades.” At school I did not excel in one subject, but rather did fairly well in most areas of study. I also fell in love with researching, and doing “deep dives” into different subjects. I figured that with an MLS, I could end up working in vastly different environments, help others with research, and pursue my dream of being a lifelong learner.

What made you want to join the NN/LM Evaluation Office?

I recently realized that I needed to strengthen my evaluation skills. Whether I am working or volunteering, I am constantly trying to solve issues concerning outreach and training. For most of my career, I just created solutions without ever thinking “How can I measure my success in solving this issue?” and “Are these solutions working the way I intended?” These questions are key in determining whether the solution is actually solving any problems, or just wasting time and energy.

What experience have you had with evaluation?

My experience with evaluation comes from managing social media accounts. Once I realized I had a whole dashboard of statistics at my disposal, I used them to set optimization goals for posting times and the types of content that resonate with my audiences.

What evaluation skills do you particularly hope to develop?

I am very interested in developing my outcome assessment skills. I am usually the big idea person of a group, and enjoy setting lofty goals. In the past, I have measured the success of an initiative based on the number of tasks my group completed for the project. What I want to do going forward is measure success by the initiative’s impact on the intended audience and community.

What other interests do you have?

I am very active in a Ukrainian Scouting Organization called Plast. Through scouting I found my love for the outdoors, and made countless friends all over the United States and around the world. When I’m not working on scouting activities, I find myself crafting. My favorite crafts include quilling, card making, and traditional Ukrainian embroidery.

When I am crafting or commuting to work, I listen to various nerdy podcasts. Some of my favorites include 99% Invisible, LibUX, and Reply All.

What is the bravest thing you’ve ever done?

I took a hike with my husband down the Grand Canyon. I didn’t make it that far, because I’m afraid of heights. There was one foot between my body and a drop into the canyon, and that was not where I wanted to be. I had to turn back partway down. My husband said, “I love you. Do you mind if I keep going?”  So I had to walk back up the trail alone. Looking back, I’m glad I went through it.  Once I climbed up, I felt so proud of myself.

What’s in a Name? Convey Your Chart’s Meaning with a Great Title

Friday, October 7th, 2016

Some of you may be working on conference posters and paper presentations for fall conferences. Some of those will probably include charts presenting data that represent a lot of hard work on your part. In most cases, you have only minutes to use that chart to get your audience to understand the data.

Stephanie Evergreen has great advice for displaying chart data. She literally wrote the books on it: Presenting Data Effectively and Effective Data Visualization. Her recent blog post is about one of the simplest and most powerful changes you can make to effectively present your chart data: “Strong Titles Are The Biggest Bang for Your Buck.”

What many of us do is present the data with a generic title, like “Attendance rates.” Then the viewer has to spend time working through the data and you hope that they see what you wanted them to.  What Stephanie Evergreen proposes (backed by persuasive research) is to give your charts a clear title that explains what the data shows. Your poster or paper is almost certainly making a point.  Determine how your chart supports the point of your presentation and state that in the title.  Here are some reasons why:

  • It respects your viewers’ time
  • It forces you to be clear about the point you want your data to make
  • It makes the data more memorable

Stephanie Evergreen’s post has some great examples of how a good title can really improve the impact of the chart.  In addition, here is an example from the NEO webinar Make Your Point: Good Design for Data Visualization.

Looking at this original chart, you might notice that for each activity the follow-up showed an increase over the baseline. If you, the viewer, didn’t have a lot of time, that might be all you noticed.

Chart with title: Comparison of emergency preparedness activities from baseline to follow-up

With a simple change of title, you can see that the author of this presentation is highlighting the increased number of continuity-of-services plans. This supports the point of the presentation and doesn’t waste the viewers’ time. Also, note that the title is left-justified instead of centered. Because the title is a full sentence, a left-justified format is easier to read.

Chart with title: The biggest improvement in emergency preparedness from baseline to follow-up was the number of network member organizations reporting that they had or were working on a service continuity plan.
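If you build your charts in code, this change costs about one line. Here is a minimal sketch in Python with matplotlib; the activities, numbers, and title below are made up to echo the example above, not the actual data from the webinar.

```python
# A minimal sketch: give a chart a descriptive, full-sentence title
# and left-justify it. The numbers and category labels are hypothetical.
import matplotlib.pyplot as plt

activities = ["Training drills", "Supply checks", "Service continuity plans"]
baseline = [12, 18, 5]
follow_up = [15, 20, 14]

x = range(len(activities))
width = 0.4

fig, ax = plt.subplots(figsize=(7, 4))
ax.bar([i - width / 2 for i in x], baseline, width, label="Baseline")
ax.bar([i + width / 2 for i in x], follow_up, width, label="Follow-up")
ax.set_xticks(list(x))
ax.set_xticklabels(activities)
ax.legend()

# The title states the point the chart is meant to make,
# and loc="left" left-justifies it so the full sentence is easier to read.
ax.set_title(
    "The biggest improvement from baseline to follow-up\n"
    "was in the number of service continuity plans",
    loc="left",
)

plt.tight_layout()
plt.show()
```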

So, while Shakespeare might have been correct when he wrote “What’s in a name? that which we call a rose / By any other name would smell as sweet,” what if the presenter was trying to show the fortitude of Texas antique roses to survive in harsh weather conditions, and the viewer only noticed how sweet the rose smelled?  Maybe the heading “A Rose” sometimes isn’t enough information.


Update Your Evaluation Toolbox: Two Great Conferences

Friday, September 23rd, 2016

It’s the fall, also known as the beginning of conference season. It’s a very exciting time if you like evaluation/assessment.  If you want to improve your evaluation skills, two great conferences are coming up, back to back.  Take a look at some of these highlights and pick one to go to!

Oct. 24-29, 2016 Evaluation 2016, Atlanta GA

Evaluation 2016, October 24-29, Atlanta, GA

This is the annual conference of the American Evaluation Association, an international organization with over 7000 members, and interest groups that cover topics like Assessment in Higher Education; Collaborative, Participatory & Empowerment Evaluation; and Data Visualization and Reporting.  The theme of this year’s conference is Evaluation + Design.

The conference has 40 workshops and 850 sessions.  Here are some example programs:

  • From crap to oh snap: Using DIY templates to (easily) improve information design across an organization
  • Developing Evaluation Tools to Measure MOOC Learner Outcomes in Higher Education
  • Evaluation Design for Innovation/Pilot Projects

There’s still time for Early Bird Registration (ends October 3)!

Oct. 31-November 2, 2016 Library Assessment Conference, Arlington VA

Library Assessment Conference 2016

This conference only happens every other year and is co-sponsored by the Association of Research Libraries (ARL) and the University of Washington (UW) Libraries (disclosure – the NEO is part of the UW Libraries–something we’re quite proud of).   The theme for this conference is Building Effective, Sustainable, Practical Assessment.

This conference is bookended by workshops like Getting the Message Out: Creating a Multi-Directional Approach to Communicating Assessment and Learning Analytics, Academic Libraries, and Institutional Context: Getting Started, Gaining Traction, Going Forward.

Scholarly papers and posters with titles like “How Well Do We Collaborate? Using Social Network Analysis (SNA) to Evaluate Engagement in Assessment Program” and “Consulting Detectives: How One Library Deduced the Effectiveness of Its Consultation Area & Services” are organized around a variety of topics, such as Organizational Issues; Ithaka S+R; and Analytics/Value.

 

This is an exciting time to be in the assessment and evaluation business.  Take this amazing opportunity to go to one of these conferences.


From Logic Model to Proposal Evaluation – Part 2: The Evaluation Plan

Friday, September 2nd, 2016

Photo of a black and white cat with fangs

Last week we wrote some basic goals and objectives for a proposal about teaching health literacy skills to vampires in Sunnydale. Here’s what the goals and objectives look like, taken from the Executive Summary statement in last week’s post:

Goal: The goal of our From Dusk to Dawn project is to improve the health and well-being of vampires in the Sunnydale community.

Objective 1: We will teach 4 hands-on evening classes on the use of MedlinePlus and PubMed to improve Sunnydale vampires’ ability to find consumer health information and up-to-date research about health conditions.

Objective 2: We will open a 12-hour “Dusk-to-Dawn” health reference hotline to help the vampires with their reference questions.

There are also three outcomes that we have identified:

  1. Short-term: Increased ability of the Internet-using Sunnydale vampires to research needed health information.
  2. Intermediate: These vampires will use their increased skills to research health information for their brood.
  3. Long-term: Overall, the Sunnydale vampires will have improved health and as a result form better relationships with the human community of Sunnydale.

To get to an evaluation plan from here you have to know that there are basically two kinds of things you’ll want to measure: process and outcomes.

Process assessment measures whether you did what you said you would do, and whether you did it the way you said you would. For example, you can count the number of classes you taught and how many people attended, and check whether their survey responses showed that they thought you did a good job teaching.

Also, you might want to show that you are willing to make changes to the plan if review of your process assessment shows that you aren’t getting the results you wanted. For example, if you planned all your classes for early evening but few vampires attended, you might interview some vampires, learn that early evening is mealtime for most vampires, and move your classes to a different time to increase attendance. Your evaluation plan could show that you are collecting that information and that you will be responsive to what you see happening.

Outcome assessment measures the extent to which your project had the impact you hoped it would on the recipients of the project, or, more broadly, on their organizations or communities. We showed the first step of outcome assessment in last week’s assignment, but I’m going to break it down a little more here. Put in basic terms, to do an outcome assessment, you state your outcome; you add an indicator, a target, and a time frame to come up with a measurable objective; and then you write out the source of your data, your data collection method, and your data collection timing to complete the picture. Let’s talk about each item here:

Indicator: This is the evidence you can gather that shows whether or not you met your outcomes.  If one of your outcomes is that the vampires have increased ability to research health information, how would you know if that had happened? The indicator could be their increased confidence level in finding health information, or it could be improvement in skills test scores given before and after a training session.

Target: The target is the goal that makes this project look like a success to you. For example, if the vampires improve their test scores by 50% over a baseline test, is that enough to say you have successfully reached that outcome? And how many of the vampires need to reach that 50% goal? All of them? One of them? Targets can be hard to identify because you don’t want them to be too hard to reach, but if they’re too easy, your funder may not be impressed with your ambition. Sometimes you can work with the funder or other stakeholders on setting targets that are credible.

Time frame: This is the point in time when the threshold for success should be achieved. So if you want to make sure the vampires increased their ability by the end of your training, then the time frame is the end of your training.

Data Source: This is the location where your information is found. Often, data sources are people (such as participants or observers) but they also may be records, pictures, or meeting notes. Here are some examples of data sources.

Data Collection Methods: Evaluation methods are the tools you use to collect data, such as a survey, observation, or quiz. Here are more examples of data collection methods.

Data Collection Timing: The data collection timing describes exactly when you will collect the data.
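Pulled together, those pieces form a small, regular structure. Here is a minimal sketch of how you might record them, written as a Python dataclass; the field values are paraphrased from the Dusk-to-Dawn example that follows, and the class itself is just an illustration, not a required format.

```python
# A minimal sketch: the components of a measurable objective as a record.
# Field values are paraphrased from the Dusk-to-Dawn example in this post.
from dataclasses import dataclass

@dataclass
class MeasurableObjective:
    outcome: str
    indicator: str
    target: str
    time_frame: str
    data_source: str
    data_collection_method: str
    data_collection_timing: str

dusk_to_dawn = MeasurableObjective(
    outcome="Increased ability of Sunnydale vampires to research health information",
    indicator="Improvement in skills test scores over a pre-test",
    target="At least 75% of attendees improve their scores by at least 50%",
    time_frame="By the end of the training",
    data_source="Class participants",
    data_collection_method="Pre- and post-test with basic health research questions",
    data_collection_timing="Immediately before and immediately after each class",
)

print(dusk_to_dawn.indicator)
```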

What does your final evaluation plan look like? 

Here is a sample piece of an evaluation plan for the Dusk to Dawn proposal.

Objective 1: teach 4 hands-on evening classes on the use of MedlinePlus and PubMed to improve Sunnydale vampires’ ability to research consumer health information and up-to-date research about health conditions.

Process Assessment: The PI will collect the following information to ensure that classes are being taught, expected attendance figures are being reached, and teachers are doing a good job teaching the classes (including surviving them). Data will be reviewed after each class, and changes will be made to the program as needed to reach target goals:

◊ Participant roster to measure attendance figures
◊ Class evaluations to measure teacher performance
◊ Count of number of teachers at the beginning and ending of each class to measure survival of instructors
◊ Project team will meet after the second class to review success and lessons learned and to consider course corrections to ensure objectives are met

Outcome Assessment:
Measurable Objective: In a post-test given immediately after each class, a minimum of 75% of Sunnydale vampire attendees will demonstrate that they learned how to find needed resources in PubMed and MedlinePlus by showing at least a 50% improvement over the pre-test.

Based on Level 2 (Learning) in the Kirkpatrick Model, a test will be created with some basic health questions to be researched. Class participants will be given these questions as a pre-test before the class, and then will be given the same questions after the class as a post-test.  This learning outcome will be considered successful if a minimum of 75% of Sunnydale vampire participants demonstrate that their scores improved by at least 50%.
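Here is a minimal sketch of how that success criterion could be checked once the scores are in. The 75% and 50% thresholds come from the measurable objective above; the scores and the helper function are hypothetical, purely for illustration.

```python
# A minimal sketch: check the Dusk-to-Dawn measurable objective.
# Thresholds (75% of attendees, 50% improvement) come from the post;
# the scores below are made up for illustration.
def objective_met(pre_scores, post_scores,
                  required_improvement=0.50, required_share=0.75):
    """Return True if enough attendees improved enough over their pre-test."""
    improved = sum(
        1 for pre, post in zip(pre_scores, post_scores)
        if pre > 0 and (post - pre) / pre >= required_improvement
    )
    return improved / len(pre_scores) >= required_share

# Hypothetical class of six vampires, scores out of 10.
pre = [4, 2, 5, 3, 6, 4]
post = [7, 5, 6, 6, 9, 8]

print("Objective met:", objective_met(pre, post))  # 5 of 6 improved enough -> True
```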

Last wishes, I mean, thoughts

This is not a complete evaluation plan, but the purpose of these two posts has been to show how you can go from a logic model to the evaluation plan of a proposal. Don’t worry if all of your outcomes cannot be measured within the scope of your project. For example, in this Dusk to Dawn project, it might have been dangerous to find out whether the vampires had passed on needed health information to their brood, and even harder to find out whether the vampires had become healthier as a result of the information. This doesn’t mean you should leave these outcomes out, but you may want to acknowledge that measuring some outcomes is beyond the scope of the project’s resources.

As Grandpa Munster once said “Don’t let time or space detain ya, here you go, to Transylvania.”

Photo credit: Photo of 365::79 – Vampire Cat by Sarah Reid on Flickr under Creative Commons license CC BY 2.0.  No changes were made.



Funded by the National Library of Medicine under Contract No. UG4LM012343 with the University of Washington.