
NEO Shop Talk

The blog of the National Network of Libraries of Medicine Evaluation Office

Archive for the ‘Qualitative Methods’ Category

Got Documents? How to Do a Document Review

Friday, February 10th, 2017

Are you an introvert?  Then I have an evaluation method for you: document review. You usually can do this method from the comfort of your own office. No scary interactions with strangers.

Truth is, my use of existing data in evaluation seldom rises to the definition of true document review.  I usually read through relevant documents to understand a program's history or context. However, a recent blog post by Linda Cabral in the AEA365 blog reminded me that document review is a real evaluation method that is conducted systematically. Cabral provides tips and a resource for doing document review correctly.  For today's post, I decided to plan a document review that the NEO might conduct someday, describing how I would use Cabral's guidelines. I also checked out the CDC's Evaluation Brief, Data Collection Methods for Evaluation: Document Review, which Cabral recommended.

Here's some project background. The NEO leads and supports evaluation efforts of the National Network of Libraries of Medicine (NNLM), which promotes access to and use of health information resources developed by the NIH's National Library of Medicine. Eight health sciences libraries (called Regional Medical Libraries or RMLs) manage a program in which they provide modest amounts of funding to other organizations to conduct health information outreach in their regions. The organizations receiving these funds (known as subawardees) write proposals that include brief descriptions (1-2 paragraphs) of their projects. These descriptions, along with other information about the subaward projects, are entered into the NLM's Outreach Projects Database (OPD).

The OPD has a wealth of information, so I need an evaluation question to help me focus my document review. I settle on this question: How do our subawardees collaborate with other organizations to promote NLM products?  Partnerships and collaborations are a cornerstone of NNLM. They are the “network” in our name.  Yet simply listing the diverse types of organizations involved in our work does not satisfactorily capture the nature of our collaborations.  Possibly the subaward program descriptions in our OPD can add depth to our understanding of this aspect of the NNLM.

Now that I’ve identified my primary evaluation question, here’s how I would apply Cabral’s guidelines in the actual study.

Catalogue the information available to you:  For my project, I would first review the fields on the OPD’s data entry pages to see what information is entered for each project.  I obviously want to use the descriptive paragraphs. However, it helps to peruse the other project details. For example, it might be interesting to see if different types of organization (such as libraries and community-based organizations) form different types of collaborations. This step may cause me to add evaluation questions to my study.

I also would employ some type of sampling, because the OPD contains over 4500 project descriptions from as far back as 2001.  It is neither feasible nor necessary to review all of them.  There are many sampling choices, both random and purposeful. (Check out this article by Palinkas et al. for purposeful sampling strategies.)  I'm most interested in current award projects, so I likely would choose projects conducted in the past 2-3 years.

Develop a data collection form: A data collection form is the tool that allows you to record abstracted or summarized information from the full documents. Fortunately, the OPD system downloads data into an Excel-readable spreadsheet, which is the basis for my form. I would first delete columns in this spreadsheet that contain information irrelevant to my study, such as the mailing address and phone number of the subaward contact person.
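To make this step concrete, here is a minimal pandas sketch of how I might build the form, assuming the OPD export were an Excel file. The file name, column names, year cutoff, and sample size are all hypothetical placeholders, not actual OPD fields:

```python
import pandas as pd

# Load the OPD export (file and column names are hypothetical).
projects = pd.read_excel("opd_export.xlsx")

# Drop columns irrelevant to the evaluation question, such as the
# subaward contact person's mailing address and phone number.
projects = projects.drop(columns=["mailing_address", "phone_number"])

# The OPD reaches back to 2001; this study would focus on projects
# conducted in the past 2-3 years.
recent = projects[projects["project_year"] >= 2014]

# If the recent set is still too large to review, draw a random sample.
form = recent.sample(n=100, random_state=42)
form.to_excel("data_collection_form.xlsx", index=False)
```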

Get a co-evaluator: I would volunteer a NEO colleague to partner with me, to increase the objectivity of the analysis. Document review almost always involves coding of qualitative data.  If you use qualitative analysis for your study, a partner improves the trustworthiness of conclusions drawn from the data. If you are converting information into quantifiable (countable) data, a co-evaluator allows you to assess and correct human error in your coding process. If you do not have a partner for your entire project, try to find someone who can work with you on a subset of the data so you can calibrate your coding against someone else’s.
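Here is a small sketch of the kind of calibration check a co-evaluator makes possible, assuming both coders marked the same ten project descriptions for one theme. The codes are invented, and the formula is the standard two-rater Cohen's kappa, which the post itself does not prescribe:

```python
# Two coders' yes/no coding of one theme across the same ten project
# descriptions (invented data for illustration).
coder_a = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
coder_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

n = len(coder_a)
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
print(f"Percent agreement: {observed:.0%}")

# Cohen's kappa corrects that agreement for chance: two coders who
# both say "yes" often will agree often even if coding randomly.
p_yes = (sum(coder_a) / n) * (sum(coder_b) / n)
p_no = ((n - sum(coder_a)) / n) * ((n - sum(coder_b)) / n)
expected = p_yes + p_no
kappa = (observed - expected) / (1 - expected)
print(f"Cohen's kappa: {kappa:.2f}")
```

A kappa noticeably lower than your raw percent agreement hints that much of your agreement could be chance, a sign the coding rules may need tightening before you divide up the rest of the documents.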

Ensure consistency among teammates involved in the analysis: "Abstracting data," for my project, means identifying themes in the project descriptions.  Here's a step-by-step description of the process I would use (a code sketch of the final sorting step follows the list):

  • My partner and I would take a portion of the documents (15-20%) and both of us would read the same set of project descriptions. We would develop a list of themes that both of us believe are important to track for our study. Tracking means we would add columns to our data collection form/worksheet and note absence or presence of the themes in each project description.
  • We would then divide up the remaining program descriptions. I would code half of them and my partner would take the other half.
  • After reading 20% of the remaining documents, we would check in with each other to see if important new themes have emerged that we want to track. If so, we would add columns on our data collection document. (We would also check that first 15-20% of project descriptions for presence of these new themes.)
  • When all program descriptions are coded, we would sort our data collection form so we could explore patterns or commonalities among programs that share common themes.
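As promised above, here is a minimal pandas sketch of that final sorting step. The worksheet layout matches the process just described: one row per project description, one 0/1 column per theme. All file, column, and theme names are invented:

```python
import pandas as pd

# The coded form: one row per project description, one 0/1 column per
# theme. All file, column, and theme names are invented.
coded = pd.read_excel("data_collection_form.xlsx")
theme_cols = ["co_teaching", "referral_partnership", "shared_funding"]

# How often does each theme appear across all project descriptions?
print(coded[theme_cols].sum().sort_values(ascending=False))

# Sort so projects sharing a theme sit together, then read across each
# group for patterns or commonalities.
grouped = coded.sort_values(by=theme_cols, ascending=False)
print(grouped[["project_title"] + theme_cols])
```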

For a more explicit description of coding qualitative data, check out the NEO publication Collecting and Analyzing Evaluation Data. The qualitative analysis methods described starting on page 25 can be applied in qualitative document review.

So, got documents? Now you know how to use them to assess your programs.

‘Tis the Season to Do Some Qualitative Interviewing!

Friday, December 9th, 2016

For most of us, the end-of-year festivities are in full swing. We get to enjoy holiday treats. Lift a wine glass with colleagues, friends, and loved ones. Step back from the daily grind and enjoy some light-hearted holiday fun.

Or, we could use these golden holiday social events to work on our qualitative interviewing skills! That's right.  I want to invite you to participate in another NEO holiday challenge: the Qualitative Interview Challenge. (You can read about our Appreciative Inquiry challenge here.)

If you are a bit introverted and overwhelmed in holiday situations, this challenge is perfect for you. It will give you a mission: a task to take your mind off that social awkwardness you feel in large crowds. (Please tell me I’m not the only one!) If, on the other hand, you are more of a life-of-the-party guest, this challenge will help you talk less and listen more.  Other party-goers will love you and you might learn something.

Here’s your challenge.  Jot down some good conversational questions that fit typical categories of qualitative interview questions.  Commit a couple questions to memory before you hit a party. Use those questions to fuel conversations with fellow party-goers and see if you get the type of information you were seeking.

To really immerse yourself in this challenge, create a chart with the six categories of questions. (I provided an example below.)  When your question is successful (i.e., you get the type of information you wanted), give yourself a star.  Sparkly star stickers are fun, but you can also simply draw stars beside the questions. Your goal is to get at least one star in each category by midnight on December 31.

[Image: Holiday challenge chart. A holiday border frames a table-style chart with the six categories of questions, the five extra-credit techniques, and blank cells for stars.]

According to qualitative researcher/teacher extraordinaire Michael Q. Patton, there are six general categories of qualitative interview questions.  Here are the categories:

  • Experience or behavior questions: Ask people to tell you a story about something they experienced or something they do. For unique experiences, you might say “Describe your best holiday ever.” You could ask about more routine behavior, such as “What traditions do you try to always celebrate during the holidays?”
  • Sensory questions: Sensory questions are similar to experience questions, but they focus on what people see, hear, feel, smell, or taste. Questions about holiday meals or vacation spots will likely elicit sensory answers.
  • Opinion and value questions: If you ask people what they think about something, you will hear their opinions and values. When Charlie Brown asked “What is the true meaning of Christmas?” he was posing a value/opinion question.
  • Emotions questions: Here, you ask people to express their emotional reactions. Emotions questions can be tricky. In my experience, most people are better at expressing opinions than emotions, so be prepared to follow up.  For example, if you ask me "What do you dislike about the holiday season?" I might say "I don't like gift-shopping."  "Like" is more of an opinion word than an emotion word. You want me to reach past my brain and into my heart. So you could follow up with "How do you feel when you're shopping for holiday gifts?"  I might say "The crowds frustrate and exhaust me" or "I feel stressed out trying to find perfect gifts on a deadline." Now I have described my emotions around gift-shopping. Give yourself a star!
  • Knowledge questions: These questions seek factual information. For example, you might ask for tried-and-true advice to make holiday travel easier. If answers include tips for getting through airport security quickly or the location of clean gas station bathrooms on the PA Turnpike, you asked a successful knowledge question.
  • Background and demographic questions: These questions explore how factors such as ethnicity, culture, socio-economic status, occupation, or religion affect one’s experiences and world view. What foods do their immigrant grandparents cook for New Year’s celebrations every year?  What is it like to be single during the holidays? How do religious beliefs or practices affect their approach to the holidays? These are examples of background/demographic questions.

To take this challenge up a notch, try to incorporate the following techniques while practicing interview skills over egg nog.

Ask open-ended questions. Closed-ended questions can be answered with a word or phrase.  "Did you like the movie?"  The answer "Yes" or "No" is a comprehensive response to that question.  An open-ended version of this question might be "Describe a good movie you saw recently."  If you phrased your question so that your conversation partner had to string together words or sentences to form an answer, give yourself an extra star.

Pay attention to question sequence:  The easiest questions for people to answer are those that ask them to tell a story. The act of telling a story helps people get in touch with their opinions and feelings about something.  Also, once you have respectfully listened to their story, they will feel more comfortable sharing opinions and feelings with you. So break the ice with experience questions.

Wait for answers:  Sometimes we ask questions, then don’t wait for a response.  Some people have to think through an answer completely before they talk out loud. Those seconds of silence make me want to jump in with a rephrased question. The problem is, you’ll start the clock again as they contemplate the new version of your question. To hold myself back, I try to pay attention to my own breathing while maintaining friendly eye contact.

Connect and support: You get another star if you listen carefully enough to accurately reflect their answers back to them. This is called reflective listening.  If you want a fun tutorial on how to listen, check out Julian Treasure's TEDtalk.

Some of you are likely thinking “Thanks but no thanks for this holiday challenge.” Maybe it seems too much like work. Maybe you plan to avoid social gatherings like the plague this season.  Fair enough.  All of the tips apply to bona fide qualitative interviews. When planning and conducting qualitative interviews, remember to include questions that target different types of information. Make your questions open-ended and sequence them so they are easy to answer.  Listen carefully and connect with your interviewee by reflecting back what you heard.

Regardless of whether you take up the challenge or not, I wish you happy holidays full of fun and warm conversations.

My source for interview question types and interview techniques was Patton MQ. Qualitative Research and Evaluation Methods. 4th ed. Thousand Oaks, CA: Sage, 2015.

The Appreciative Inquiry Holiday Challenge

Wednesday, November 23rd, 2016


The holiday season is upon us, so I want to give our readers a holiday Appreciative Inquiry challenge.  This is a fun way to practice the Appreciative Inquiry interview. It also provides an opportunity for you and your family to plan a better-than-usual holiday season.  Finally, it gives everyone something to talk about other than politics. (You’re welcome.)

During the coming week, ask yourself and your loved ones the following three questions:

  • What was the best holiday experience you’ve ever had?
  • What made that experience so special? What did you value about it?
  • What could happen to make this year’s holiday season exceptional?

Here’s how I would answer the questions.  My favorite holiday was the one I had as a child, traveling to Arizona to spend Christmas with extended family.  For a kid from Western Pennsylvania, Tucson was exotic.  Christmas lights on saguaro cactuses. Luminarias.  Tree ornaments from Mexico. The best part, though, was a trip to the Catalina mountains.

What I valued about that holiday was the differentness of the setting and seeing how those from another part of the country celebrated the holiday. I also liked the bright sunny days outdoors.

It’s a little too late to book a trip to Arizona for the holidays, but I can still seek out places close by that have a different take on holiday decorations. As for enjoying the outdoors, I live in a place that offers lots of opportunity on that front. My husband and I can easily fit in a hike and a trip to Helen, a Bavarian Alpine village in the North Georgia mountains.

Once you’ve talked with your family, make a list of everyone’s ideas for a great holiday and check them off as they happen. You could even do this as a group on a private Facebook page.  Or go old school and put a written list on your refrigerator door.  See if Appreciative Inquiry doesn’t add some sparkle to your holiday season this year.

Happy Thanksgiving, everyone.


How Many Interviews Does It Take to Assess A Project?

Friday, October 21st, 2016


FAQ from NEO users: How many interviews or focus groups do we need for our qualitative assessment project?

Our typical response: Um, how much money and time do you have?

At which point, our users probably want to throw a stapler at us. (Karen and I work remotely out of an abundance of caution.)

Although all NEO users are, in fact, quite well-mannered, I was happy to discover a study that allows us to provide a better response to that question.  A study by Namey et al., published in the American Journal of Evaluation's September issue, provides empirically based estimates of the number of one-to-one or group interviews needed for qualitative interviewing projects.  More specifically, their study compared the cost effectiveness of individual and focus group interviews. The researchers conducted an impressive 40 interviews and 40 focus groups (with an average of eight people per group). They then used a bootstrap sampling methodology, which essentially allowed them to run 10,000 mini-studies on their research questions.

They first looked at how many individual and focus group interviews it took to reach what qualitative researchers call thematic saturation. In lay terms, saturation means “Not really hearing anything new here.”  Operationally, it occurs when 80-90% of the information provided in an interview has already been covered in the previous interviews.

The researchers found that 80% saturation occurred after 2-4 focus groups or eight individual interviews. It took 16 interviews or five focus groups to reach 90% saturation. Note that their estimates apply to studies that focus on one specific population.  If you want to explore the experiences of two groups, such as doctors and nurse practitioners, you would hold eight interviews per group to reach 80% thematic saturation.
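The operational definition above lends itself to simple bookkeeping. Here is a toy sketch of one way to track saturation as interviews come in; the themes are invented, and this illustrates the idea rather than reproducing Namey et al.'s actual procedure:

```python
# Each set holds the themes heard in one interview (all invented).
interviews = [
    {"cost", "privacy", "trust"},
    {"cost", "access", "trust"},
    {"privacy", "access", "jargon"},
    {"cost", "trust"},
    {"jargon", "wait_times"},
    {"cost", "access"},
]

seen = set()
for i, themes in enumerate(interviews, start=1):
    new = themes - seen
    seen |= themes
    # Share of this interview's themes already heard earlier; in
    # Namey et al.'s terms, saturation means this stays at 80-90%+.
    covered = 1 - len(new) / len(themes)
    print(f"Interview {i}: {covered:.0%} of themes already heard")
```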

For the comparative cost assessment, the researchers used a formula that combined the hourly rate of the data collector's time, incentive costs per participant, and transcription costs for recordings. They chose not to include costs for factors that vary widely, such as space rental or refreshments. Using more predictable costs made for cleaner and more generalizable comparisons.
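Here is a minimal sketch of a cost model in that spirit. Every figure below is an invented placeholder, not a value from the study:

```python
HOURLY_RATE = 50.0   # data collector's hourly rate (invented)
INCENTIVE = 25.0     # incentive per participant (invented)

def total_cost(n_events, staff_hours_each, participants_each,
               transcription_each):
    """Staff time + incentives + transcription for one method."""
    staff = n_events * staff_hours_each * HOURLY_RATE
    incentives = n_events * participants_each * INCENTIVE
    transcription = n_events * transcription_each
    return staff + incentives + transcription

# 90% saturation per the study: 16 individual interviews or 5 focus
# groups of ~8 people. Focus groups get more staff hours (recruiting
# and scheduling 8 people) and pricier multi-speaker transcription.
individual = total_cost(16, staff_hours_each=2, participants_each=1,
                        transcription_each=90)
groups = total_cost(5, staff_hours_each=8, participants_each=8,
                    transcription_each=200)
print(f"Individual interviews: ${individual:,.0f}")
print(f"Focus groups: ${groups:,.0f}")
```

With these made-up inputs, individual interviews come out roughly 14% cheaper, which happens to land inside the 12-20% range the study reports; your own rates and counts would of course shift the result.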

Bottom line, they found individual interview methods cost 12-20% less than focus group methods.

Of course, many of us operate on shoestring budgets, so we are our own moderators and transcribers.  Even though most of us DIYers collect hourly wages, the cost for outsourcing these tasks is probably higher than for conducting them internally. Knowing this, the researchers looked at variations on moderator, transcriptionist, and incentive costs.  They also compared cost effectiveness of the two methods when lowering the standards for thematic saturation (i.e., aiming for 70% saturation instead of 80%). Across the board, individual interviews were more cost-effective than focus groups.

Cost is not always the only consideration when choosing between focus groups and individual interviews. Some assessment questions beg for group brainstorming, while others demand the privacy of one-to-one discussions. However, for many assessment studies, either method is equally viable.  In that case, cost and convenience will drive your decision. Personally, I often find individual interviews to be more convenient than focus groups, both for the participants and for me. It’s nice to know that the cost justifies using the more convenient approach.

The full article provides details on their methods, so it is a nice primer on qualitative analysis of interview transcripts. Here’s the full citation:

Namey E, Guest G, McKenna K, Chen M. Evaluating bang for the buck: a cost-effectiveness comparison between individual interviews and focus groups based on thematic saturation levels. Am J Eval. 2016 September; 37(3): 425-440.


From QWERTY to Quality Responses: How To Make Survey Comment Boxes More Inviting

Friday, July 22nd, 2016

The ubiquitous comment box.  It’s usually stuck at the end of a survey with a simple label such as “Suggestions,” “Comments:” or “Please add additional comments here.”

Those of us who write surveys have over-idealistic faith in the potential of comment boxes, also known as open-ended survey items or questions.  These items will unleash our respondents' desire to provide creative, useful suggestions! Their comments will shed light on the difficult-to-interpret quantitative findings from closed-ended questions!

In reality, responses in comment boxes tend to be sparse and incoherent. You get a smattering of "high-five" comments from your fans. A few longer responses may come from those with an ax to grind, although their feedback may be completely off topic.  More often, comment boxes are left blank, unless you make the mistake of requiring an answer before the respondent can move on to the next item. Then you'll probably get a lot of QWERTYs in your blank space.

Let's face it.  Comment boxes are the vacant lots of Survey City.  Survey writers don't put much effort into cultivating them. Survey respondents don't even notice them.

Can we do better than that?  Yes, we can, say the survey methods experts.

First, you have to appreciate this fact: open-ended questions ask a lot of respondents.  They have to create a response. That’s much harder than registering their level of agreement to a statement you wrote for them. So you need strategies that make open-ended questions easier and more motivating for the survey taker.

In his online class Don’t (Survey)Monkey Around: Learn to Make Your Surveys Work,  Matthew Champagne provides the following tips for making comment boxes more inviting to respondents:

  • Focus your question. Get specific and give guidance on how you want respondents to answer. For example, “Please tell us what you think about our new web site. Tell us both what you like and what you think we can do better.” I try to make the question even easier by putting boundaries on how much I expect from them.  So, when requesting feedback on a training session, I might ask my respondents to “Please describe one action step you will take based on what you learned in this class.”
  • Place the open-ended question near related closed-ended questions. For example, if you are asking users to rate the programming at your library, ask for suggestions for future programs right after they rate the current program. The closed-ended questions have primed them to write their response.
  • Give them a good reason to respond. A motivational statement tells respondents how their answers will be used. Champagne says that this technique is particularly effective if you can explain how their responses will be used for their personal benefit. For example, "Please give us one or two suggestions for improving our reference services.  Your feedback will help our reference librarians know how to provide better service to users like you."
  • Give them room to write. You need a sizable blank space that encourages your respondents to be generous with their comments. Personally, when I'm responding to an open-ended comment on a survey, I want my entire response to be in view while I'm writing.  As a survey developer, I tend to use boxes that are about three lines deep and half the width of the survey page.

Do we know that Champagne's techniques work?  In Dillman et al.'s classic book on survey methods, the authors present research findings to support Champagne's advice. Adding motivational words to open-ended survey questions produced a 5-15 word increase in response length and a 12-20% increase in how many respondents submitted answers.  The authors caution, though, that you need to use open-ended questions sparingly for the motivational statements to work well. When four open-ended questions were added to a survey, the motivational statements worked better for questions placed earlier in the survey.

I should add, however, that you should never make your first survey question an open-ended one.  The format itself seems to make people close their browsers and run for the hills.  I always warm up the respondents with some easy closed-ended questions before they see an open-ended item.

Dillman et al. gave an additional technique for getting better responses to open-ended items: Asking follow-up questions.  Many online software packages now allow you to take a respondent’s verbatim answer and repeat it in a follow-up question.  For example, a follow-up question about a respondent’s suggestions for improving the library facility might look like this:

"You made this suggestion about how to improve the library facility: 'The library should add more group study rooms.' Do you have any other suggestions for improving the library facility?" [Bolded statement is the respondent's verbatim written comment.]

Follow-up questions like this have been shown to increase the detail of respondents’ answers to open-ended questions.  If you are interested in testing out this format, search your survey software system for instructions on “piping.”
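Mechanically, piping is just string substitution: the software drops the stored verbatim answer into the follow-up question's text. Here is a small Python sketch of the idea; real survey platforms expose this through their own piping syntax, so treat it as an illustration of the mechanics only:

```python
def follow_up(topic: str, verbatim_answer: str) -> str:
    """Build a follow-up question around the respondent's own words."""
    return (
        f"You made this suggestion about how to improve {topic}: "
        f"'{verbatim_answer}' "
        f"Do you have any other suggestions for improving {topic}?"
    )

print(follow_up("the library facility",
                "The library should add more group study rooms."))
```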

When possible, I like to use an Appreciative Inquiry approach for open-ended questions. The typical Appreciative Inquiry approach requires two boxes, for example:

  • Please tell us what you liked most about the proposal-writing workshop.
  • What could the instructors do to make this the best workshop possible on proposal writing?

People find it easier to give you an example rooted in experience.  We are story tellers at heart and you are asking for a mini-story. Once they tell their story, they are better prepared to give you advice on how to improve that experience. The Appreciative Inquiry structure also gives specific guidance on how you want them to structure their responses.  The format used for the second question is more likely to gather actionable suggestions.

So if you really want to hear from your respondents, put some thought into your comment box questions.  It lets them know that you want their thoughtful answers in return.

Source:  The research findings reported in this post are from Internet, Phone, Mail and Mixed-Mode Surveys: The Tailored Design Method (4th ed.), by Dillman, Smyth, and Christian (Hoboken, NJ: John Wiley and Sons, Inc., 2014), pp. 128-134.

Data Party for Public Librarians

Friday, May 6th, 2016

[Photo: The Engage for Health project team, from left to right: Lydia Collins, Kathy Silks, Susan Jeffery, Cindy Olney.]

Last week, I threw my first data party. I served descriptive statistics and graphs; my co-hosts brought chocolate.

I first learned about data parties from It's A Data Party, a presentation that evaluation consultant Kylie Hutchinson gave at an American Evaluation Association conference. Also known as data briefings or sense-making sessions, data parties actively engage stakeholders with evaluation findings.

Guest List

My guests were librarians from a cohort of public libraries that participated in the Engage for Health project, a statewide collaboration led by the NN/LM Middle Atlantic Region (MAR) and the Pennsylvania Library Association (PaLA). The NN/LM MAR is one of PaLA's partners in PA Forward, a statewide initiative that engages libraries in activities addressing five types of literacy.  The project team was composed of Lydia Collins of NN/LM MAR (which also funded the project), Kathy Silks of PaLA, and Susan Jeffery of the North Pocono Public Library. I joined the team to help them evaluate the project and develop reports to bring visibility to the initiative.  Specifically, my charge was to use this project to provide experiential evaluation training to the participating librarians.

Librarians from our 18 cohort libraries participated in all phases of the planning and evaluation process.  Kathy and Susan managed our participant recruitment and communication. Lydia provided training on how to promote and deliver the program, as well as assistance with finding health care partners to team-teach with the librarians. I involved the librarians in every phase of the program planning and evaluation process. We met to create the project logic model, develop the evaluation forms, and establish a standard process for printing, distributing, and returning the forms to the project team. In the end, librarians delivered completed evaluation forms from 77% of their adult participants from Engage for Health training sessions.

What We Evaluated

The objective of PA Forward includes improving health literacy, so the group's outcome for Engage for Health was to empower people to better manage their health. Specifically, we wanted them to learn strategies that would lead to more effective conversations with their health care providers. Librarians and their health care partners emphasized strategies such as researching health issues using quality online health resources, making a list of medications, and writing down questions to discuss at appointments.  We also wanted participants to know how to use two trustworthy online health information sources from the National Library of Medicine: MedlinePlus and NIHSeniorHealth.

Party Activities

Sharing with Appreciative Inquiry. The data party kicked off with Appreciative Inquiry interviews. Participants interviewed each other, sharing their peak experiences and what they valued about those experiences. Everyone then shared their peak experiences in a large group. (See our blog entries here and here for detailed examples of using Appreciative Inquiry.)

Data sense-making: Participants then worked with a fact sheet of graphs and summary statistics compiled from the session evaluation data.  As a group, we reviewed our logic model and discussed whether our data showed that we achieved our anticipated outcomes.  The group also drew on both the fact sheet and the stories from the Appreciative Inquiry interviews to identify unanticipated outcomes.  Finally, they identified metrics they wished we had collected. What was missing?

Consulting Circles: After a morning of sharing successes, the group got together to help each other with challenges.  The group wanted to address three challenge areas: integration of technology into the classes; finding partners from local health organizations; and promotional strategies.  No area was a problem for all librarians: some were quite successful in a given area, while others struggled. The consulting groups were a chance to brainstorm effective practices in each area.

Next steps:  As with most funded projects, both host organizations hoped that the libraries would continue providing health literacy activities beyond the funding period.  To get the group thinking about program continuity, we ran a 1-2-4-All discussion about next steps.  Participants first identified the next steps they would take at their libraries, then provided suggestions to NN/LM MAR and PaLA on how to support their continued efforts.

Post Party Activities

For each of the four party activities, a recorder from each group took discussion notes on a worksheet developed for the activity, then turned it in to the project team. We will incorporate their group feedback into written reports that are currently in process.

If you are curious about our findings, I will say generally that our data supports the success of this project.  We have plans to publish our findings in a number of venues, once we have a chance to synthesize everything.  So watch this blog space and I’ll let you know when a report of our findings becomes available.

Meanwhile, if you are interested in reading more about data parties, check out this article in the Journal of Extension.


Inspirational Annual Reporting with Appreciative Inquiry

Friday, March 25th, 2016


Do you have to file annual reports? How much do you love doing them?

Did I hear someone say “no comment?”

In January, I challenged the outreach librarians of the National Network of Libraries of Medicine Greater Midwest Region (NN/LM GMR) to experiment with a reflective exercise designed to add some inspiration to their annual reporting. The setting was a monthly webinar attended by librarians who led outreach activities at their libraries to promote health information access and use. Because their libraries received funding from the NN/LM GMR, they were required to submit annual reports for their funded activities.

My charge was to teach this group something about evaluation. In response, I presented them with this short (about 15 minute) exercise to be used when they began preparing their annual reports.

When preparing your report, answer these questions. Then write a short paragraph based on your answers and add it to your annual report:

  1. Describe one of the best experiences you had this year conducting outreach for the NN/LM.
  2. What do you value most about that experience?
  3. What do you wish could happen so that you had more experiences like this?

You may recognize these as the three signature questions of the basic Appreciative Inquiry (AI) interview. Appreciative Inquiry is a practice of influencing organizational change by identifying peak experiences and discovering ways to build on them. The book Reframing Evaluation through Appreciative Inquiry (Preskill and Catsambas, Sage, 2006) provides descriptions and examples of how to apply AI to every part of the evaluation process.

My partners for this webinar were host Jacqueline Leskovec, Outreach, Planning and Evaluation Coordinator, and presenter Carmen Howard, who is the Regional Health Sciences Librarian and Visiting Assistant Professor from UIC Library of the Health Sciences Peoria. Carmen headlined the webinar with her presentation about the Nursing Experts: Translating the Evidence (NExT) Guide, which provides resources on evidence-based practice to nurses. Good sport that she was, Carmen helped me demonstrate the exercise to our audience by participating in an AI interview about her outreach project. The outreach librarians then brainstormed ways to use the three questions to prepare their own NN/LM reports. We also talked about how to add their reflective statements to their annual reports, which are entered into an online system.

Soon after that webinar, Carmen wrote an entry about her experience using the three questions that appeared in the NN/LM GMR’s blog The Cornflower. Here is my favorite quote from her entry:

“These three simple questions which only take about 10-15 minutes to answer forced me to stop and reflect on the NExT project. Rather than just being focused on what was next on the to-do list, I was looking back on what had already been accomplished, and better yet, I was thinking about the good stuff.”

The NN/LM GMR outreach librarians have not yet filed their 2016 annual reports, so I can’t tell you how many rose to my challenge. (This exercise was a suggestion, not a requirement.) One other outreach librarian did send an email to say she was using the three questions to have a reflective group discussion with other librarians who participate in NN/LM outreach activities.

I would like to extend the challenge to our readers who may be facing annual reports. Try this exercise and see if you don’t start thinking and writing differently about your efforts over the past year.

If you want to know more about Appreciative Inquiry, we highly recommend this source:

  • Preskill H, Catsambas TT. Reframing Evaluation through Appreciative Inquiry. Thousand Oaks, CA: Sage, 2006.

You also might be interested in the OERC's other blog posts about Appreciative Inquiry.

If you are interested in earning some continuing education credits from the Medical Library Association while trying your hand at an Appreciative Inquiry project, read this post: Appreciative Inquiry of Oz: Building on the Best in the Emerald City


A Most Pragmatic Theory: Diffusion of Innovation and User Assessment (Part 1)

Friday, March 4th, 2016


If your work includes teaching or providing products or services to people, you are in the business of influencing behavior change. In that case, behavior change theories should be one of the tools in your evaluation toolbox. These theories are evidence-based descriptions of how people change and the factors that affect the change process. If you have a handle on these influences, you will be much more effective in gathering data and planning programs or services.

Today and next week, I'm going to talk about my go-to behavioral change theory: Diffusion of Innovations. It was introduced in the 1960s by communication professor Everett Rogers to explain how innovations spread (diffuse) through a population over time. The term innovation is broadly defined as anything new: activities, technologies, resources, or beliefs. There are a number of behavioral change theories that guide work in health and human services, but I particularly like Diffusion of Innovations because it emphasizes how social networks and interpersonal relationships may impact your success in getting people to try something new.

I use Diffusion of Innovations for most user or community assessment studies I design. Next week, we’ll talk about using these concepts to frame community or user assessment studies. This week, I want to cover the basic principles I found to be most helpful.

People change in phases

The heart of behavior change is need.  People adopt an innovation if it solves a problem or improves quality of life. Adoption is not automatic, however. People change in phases. They first become aware of and gather information about an innovation. If it is appealing, they decide to try it and assess its usefulness. Adoption occurs if the innovation lives up to or exceeds their expectations.

Product characteristics influence phase of adoption

Five criteria impact the rate and success of adoption within a group. First, the innovation must be better than the product or idea it is designed to replace. Second, it must fit well with people's values, needs, and experiences. Third, innovations that are easy to use will catch on faster. Fourth, technologies or resources that allow experimentation before the user must commit will spread more quickly. Finally, if people can easily perceive that the innovation will lead to positive results, they are more likely to use it.

Peers' opinions matter greatly when it comes to innovation adoption. Marketers will tell you that mass media spreads information, but people choose to adopt innovations based on recommendations from others who are "just like them." Conversations and social networks are key channels for spreading information about new products and ideas. If you are going to influence change, you have to identify and use the channels through which members of your audience communicate with one another.


Riding the Wave

Segments of a population adopt innovations at different rates. In any given target population, there will be people who will try an innovation immediately just for the pleasure of using something new. They are called innovators. The second speediest are the early adopters, who like to be the trendsetters. They will use an innovation if they perceive it will give them a social edge. They value being the “opinion leaders” of their communities.

Sixty-eight percent of a population comprise the majority.  The first half (early majority) will adopt an innovation once its reliability and usefulness have been established. (For example, these are the folks who wait to update software until the "bugs" have been worked out.) The other half (late majority) are more risk averse and tend to succumb to peer pressure, which builds as an innovation gathers momentum. The last adopters are called the laggards, who are the most fearful of change. They prefer to stick with what they know. Laggards may have a curmudgeonly name, but Les Robinson of Enabling Change pointed out that they also may be prophetic, so ignore them at your own risk.
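For reference, here is a sketch of the adopter-segment arithmetic. The post cites only the combined 68% majority; the remaining percentages below are the classic figures from Rogers' model, added here for context:

```python
# Rogers' classic adopter segments. The post cites the combined 68%
# majority (34% early + 34% late); the other percentages are standard
# figures from Rogers' model, not from the post itself.
segments = {
    "innovators": 2.5,
    "early adopters": 13.5,
    "early majority": 34.0,
    "late majority": 34.0,
    "laggards": 16.0,
}

# Cumulative adoption as each segment comes on board.
cumulative = 0.0
for name, share in segments.items():
    cumulative += share
    print(f"{name:>15}: {share:4.1f}%  (cumulative: {cumulative:3.0f}%)")
```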

Next Step: Diffusion of Innovations and User/Community Assessment

Next week, I will show you how I develop my needs assessment methods around Diffusion of Innovations concepts. In the meantime, here are some sources that might interest you. Everett Rogers and Karyn Scott wrote an article specifically for the NN/LM Pacific Northwest Region that you can read here. Les Robinson's article has an interesting discussion of the specific needs of the different population segments. Finally, if you want the classic text by Ev Rogers himself, here is the full citation:

Rogers EM.  Diffusion of Innovations (5th ed.). New York, NY: The Free Press, 2003.

Appreciative Inquiry of Oz: Building on the Best in the Emerald City

Friday, February 19th, 2016

Cartoon image of an Emerald City

“One day not very long ago, librarians came to the Emerald City from their libraries in all of the countries of Oz. They came to visit the Great Library of the Emerald City, and to petition the Wizard to allow them to borrow books and other items at the Great Library. Their hope was to transport items from one library to another using the Winged Monkeys, who offered their skills for this task after they were set free and got bored.”

Thus begins the latest OERC project – an online class in Appreciative Inquiry (AI), offered through the MidContinental Region's Librarians in the Wonderful Land of Oz Moodle 'game' (i.e., a series of online classes worth game points and CE credits from the Medical Library Association).  The game is made up of several 'challenges' (online classes) offered to librarians by NN/LM instructors.

In OERC's challenge, Building on the Best at the Great Library of the Emerald City: Using Appreciative Inquiry to Enhance Services and Programs, the Wizard of Oz makes a deal with the librarians.  He will allow interlibrary loan of the Great Library's resources if the librarians will assess customer satisfaction with the Great Library's services and find things to improve.  Students in the class will learn to use a qualitative data collection technique called Appreciative Inquiry to do this assessment.

Sometimes people avoid customer service assessment because they find the methods to be complicated and time-consuming. Negative feedback can also be uncomfortable for both the speaker and the listener. Appreciative Inquiry, with its focus on identifying and building on organizational strengths, removes that discomfort. A number of OERC workshops touch on Appreciative Inquiry, but this Librarians of Oz challenge allows you to practice the technique, something that the OERC has not been able to provide in the traditional webinar or workshop context.  Completing the class is worth 14 MLA CE credits.

The class is free, but in order to take it you will need to register for the game Librarians in the Wonderful Land of Oz.  If you don't want to take the class, but would still like to learn more about Appreciative Inquiry, I recommend our earlier blog posts on the topic.

From Cindy and Karen's perspective, one of the best parts of this experience is that we finally get the official title of Wizard.  Special thanks to John "Game Wizard" Bramble of the NN/LM MCR, who made all this happen.


W.A.I.T for Qualitative Interviews

Friday, February 12th, 2016

W.A.I.T.

Why Am I Talking?

My sister-in-law recently told me about the W.A.I.T. acronym that she learned from a communication consultant who spoke to her staff. It’s a catchy phrase for an important communication concept: Be purposeful when you talk. This self-reflective question can be applied to any conversational setting, but I want to discuss it in the context of qualitative interviews for evaluation data collection.

Surveys and tests are examples of quantitative data collection instruments. They require careful crafting and pilot-testing to be sure they collect valid information from respondents. By contrast, in qualitative interviews, the data collection instrument is the interviewer.  The interview guide itself is important, but the interpersonal manner of the interviewer has far greater impact on the trustworthiness of the information gathered. The key responsibility of the interviewer is described succinctly by Michael Q. Patton in Qualitative Research and Evaluation Methods:

“It is the responsibility of the interviewer to provide a framework within which people can respond comfortably, accurately, and honestly to open-ended questions.”

Listening skills, of course, are key to good interviewing.  As program evaluator Kylie Hutchinson said in a recent American Evaluation Association conference presentation, evaluators need to ask their questions, then shut up.  If you can learn to do this, you are more than halfway there.  Julian Treasure has a TEDtalk with excellent tips on developing your listening skills.

However, how you talk is important as well.  Here are a few ways I would answer the question “Why Am I Talking?” during an interview:

  • I want to show that I share something in common with my interviewee: People are more comfortable talking to others who are like them. I say things like “I feel that way, too, sometimes” or “I know what you mean. Something like that happened to me a few years ago.”  These statements can help me build rapport.
  • I want the interviewee to know that no answer he or she gives can surprise me. Social desirability is something that survey researchers always consider in instrument design. Even in the anonymous survey context, people may give answers to make themselves “look good.” So you can imagine that the dynamic is even greater in the face-to-face interview setting. When broaching a sensitive topic, I let my interviewee know I’ve heard it all before. I might say, for instance, “Some people have told me they spent hours researching a serious health condition. Others say they were so frightened by the diagnosis, they didn’t want to read anything about it. How did you respond when you were diagnosed?”
  • I want to allow the interviewee an opportunity to answer a question hypothetically. Sometimes you may ask an interviewee about choices or behaviors that are potentially embarrassing. Let's say I want to know what barriers prevented them from following their doctors' orders. This question could feel awkward to interviewees if, for example, they lacked understanding or willpower to follow a physician's recommendations. So I frame questions that allow them to distance themselves personally from their answers. Rather than asking them to describe a time they didn't follow a doctor's orders, I might say "Sometimes people don't do what their doctors tell them to. In your experience, what are some of the reasons people might not follow their doctor's orders?"
  • I want to show I’m listening and to check my understanding: Paraphrasing your interviewee’s comments is an active listening technique that demonstrates your interest in the ongoing discussion. It also is a validity check on your own interpretations of their answers. I say things like “Okay, so let me make sure I understand.  Essentially, you are saying…?”
  • I'm managing the emotional climate and turn-taking in a focus group. I choose language to maintain a neutral, non-judgmental atmosphere and to model respectful interaction. I also talk when I need to rein in someone who is dominating the discussion. I might say "So Truman gave us quite a few great examples of how she uses MedlinePlus. What examples can someone else add to Truman's examples?"

All of these tips, by the way, are from Patton’s book on qualitative methods. Here is the full citation:

Patton, MQ. Qualitative Research and Evaluation Methods: Integrating Theory and Practice (4th ed.). Thousand Oaks, CA: Sage, 2015.

If you would like to read more about W.A.I.T, here’s an excellent article from the National Speakers Association.  I also want to thank Lauren Yee and Donna Speller Turner from the NASA Langley Research Center for alerting me to W.A.I.T.

