
NEO Shop Talk

The blog of the National Network of Libraries of Medicine Evaluation Office

The OERC Is On The Road in April

 

A young boy having fun driving his toy car outdoors.

The OERC staff will be putting on some miles this month. Karen and Cindy are slated to present and teach at various library conferences and meetings. If you happen to be at any of these events, please look for us and say “hello.”  Here is the April itinerary: 

Cindy will participate in a panel presentation titled “Services to Those Who Serve: Library Programs for Veterans and Active Duty Military Families” at the Public Library Association’s 2016 conference in Denver. The panel presentation will be held from 10:45 – 11:45 am, April 7. She and Jennifer Taft, who is now with Harnett County Public Library,  will present a community assessment project they conducted for the Cumberland County Public Library, described here in the November/December 2014 edition of Public Libraries.

Karen will conduct the OERC workshop “Planning Outcomes-Based Outreach Programs” on April 8 for the Joint Meeting of the Georgia Health Sciences Library Association and the Atlanta Health Science Library Consortium in Decatur, GA. This workshop teaches participants how to develop logic models for both program and evaluation planning.

Cindy and Karen will facilitate two different sessions for the Texas Library Association’s annual conference, both on April 20 in Houston. One session will be a large-group participatory evaluation exercise to gather ideas from the TLA membership about how libraries can become more welcoming to diverse populations. The second is an 80-minute workshop on participatory evaluation methods, featuring experiential learning exercises about Appreciative Inquiry, 1-2-4-All, Photovoice, and Most Significant Change methods.

Cindy will join the NN/LM Middle Atlantic Region and the Pennsylvania Library Association to talk about evaluation findings from a collaborative health literacy effort conducted with 18 public libraries across the state. The public libraries partnered with health professionals to run health literacy workshops targeted at improving consumers’ ability to research their own health concerns and talk more effectively with their doctors. The public librarians involved in this initiative worked together to design an evaluation questionnaire that they gave to participants at the end of their workshops. The combined effort of the cohort librarians allowed the group to pool a substantial amount of evaluation data. Cindy will facilitate a number of participatory evaluation exercises to help the librarians interpret the data, make plans for future programming, and develop a communication plan that will allow them to publicize the value of the health literacy initiative to various stakeholders. The meeting will be held April 29 in Mechanicsburg, PA.

In addition, Cindy will be attending a planning meeting at the National Library of Medicine in Bethesda in mid-April with Directors, Associate Directors, and Assistant Directors from the NN/LM. Our library, the University of Washington Health Sciences Library, will receive cooperative agreements for both the NN/LM Pacific Northwest Regional Medical Library and the NN/LM Evaluation Office (NEO), which will replace the OERC on May 1. (You can see the announcement here.) We will tell you more about the NEO later; for now, we’ll just say that we will be moving into the same positions in the NEO that we hold with the OERC. You have not heard the end of us!

Although we will be on the road quite a bit, rest assured we will not let our loyal readers down. So please tune in on Fridays for our weekly posts.

 

What chart should I use?

It’s time to put your carefully collected data into a chart, but which chart should you use? And how do you set it up from scratch in your Excel spreadsheet or PowerPoint presentation if you aren’t experienced with charts?

Here’s one way to start: go to the Chart Chooser at Juice Analytics. It lets you pick your chart and then download it as an Excel or PowerPoint file. Then you can simply put in your own data and modify the chart the way you want to.

They also have a way to narrow down the options. As a hypothetical example, let’s say a fictional health science librarian, Susan, is in charge of the social media campaign for her library. She wants to compare user engagement for her Twitter, Facebook, and blog posts to see if there are any patterns in their trends. Here are some fictional stats showing how difficult it is to find trends in the data.

Monthly stats of blog, Twitter and Facebook engagement

Susan goes to the Juice Analytics Chart Chooser and selects from the options given (Comparison, Distribution, Composition, Trend, Relationship, and Table). She selects Comparison and Trend, and then also selects Excel, because she is comfortable working in Excel. The Chart Chooser offers two options: a column chart and a line chart. Susan thinks the line chart would work best for her, so she downloads it (by the way, you can download both and see which one you like better). After replacing the sample data with her own and making a couple of other small design changes, here is Susan’s resulting chart in Excel. It shows that engagement with blog posts and Facebook posts rises and falls at the same time, but that Twitter engagement does not follow the same pattern.

Line chart of blog, Twitter, and Facebook engagement
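If you prefer to double-check a visual impression like Susan’s numerically, a few lines of code can do it. This is just a sketch with made-up numbers (like Susan’s fictional stats), comparing whether each pair of channels moves in the same direction from month to month:

```python
# Fictional monthly engagement counts (stand-ins for Susan's stats)
blog     = [120, 150, 140, 180, 160, 200]
facebook = [100, 130, 125, 165, 150, 185]
twitter  = [ 90,  85, 110,  95, 120, 100]

def directions(series):
    """Month-over-month direction of change: +1 up, -1 down, 0 flat."""
    return [(b > a) - (b < a) for a, b in zip(series, series[1:])]

def agreement(s1, s2):
    """Fraction of months in which two channels move the same way."""
    d1, d2 = directions(s1), directions(s2)
    return sum(x == y for x, y in zip(d1, d2)) / len(d1)

print(agreement(blog, facebook))  # 1.0 -> blog and Facebook rise and fall together
print(agreement(blog, twitter))   # 0.0 -> Twitter follows a different pattern
```

With these invented numbers, the blog and Facebook series agree in every month while Twitter never agrees, which mirrors what Susan’s line chart shows at a glance.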

By the way, the total time spent selecting the chart, downloading it, putting in the fictional data, and making chart adjustments was less than 15 minutes.  Is it a perfect chart?  Given more time, I would suggest adjusting some more of the chart features (see our January 29, 2016 post The Zen Trend in Data Visualization). But it was a very easy way to pick out a chart that allowed Susan to learn what she needed to from the data.

One thing I want to point out is that this is not a complete list of charts.  This is a good starting place, and depending on your needs, this might be enough. But if you get more involved in data, you might want to take a look at small multiples, lollipop charts, dot plots, and other ways to visualize data.  Check out Stephanie Evergreen’s EvergreenData Blog  for more chart types.

 

Inspirational Annual Reporting with Appreciative Inquiry

Hiker enjoying the view along the Iceberg Lake trail in Glacier National Park

Do you have to file annual reports? How much do you love doing them?

Did I hear someone say “no comment?”

In January, I challenged the outreach librarians of the National Network of Libraries of Medicine Greater Midwest Region (NN/LM GMR) to experiment with a reflective exercise designed to add some inspiration to their annual reporting. The setting was a monthly webinar attended by librarians who led outreach activities at their libraries to promote health information access and use. Because their libraries received funding from the NN/LM GMR, they were required to submit annual reports for their funded activities.

My charge was to teach this group something about evaluation. In response, I presented them with this short (about 15 minute) exercise to be used when they began preparing their annual reports.

When preparing your report, answer these questions. Then write a short paragraph based on your answers and add it to your annual report:

  1. Describe one of the best experiences you had this year conducting outreach for the NN/LM.
  2. What do you value most about that experience?
  3. What do you wish could happen so that you had more experiences like this?

You may recognize these as the three signature questions of the basic Appreciative Inquiry (AI) interview. Appreciative Inquiry is a practice of influencing organizational change by identifying peak experiences and discovering ways to build on them. The book Reframing Evaluation through Appreciative Inquiry (Preskill and Catsambas, Sage, 2006) provides descriptions and examples of how to apply AI to every part of the evaluation process.

My partners for this webinar were host Jacqueline Leskovec, Outreach, Planning and Evaluation Coordinator, and presenter Carmen Howard, who is the Regional Health Sciences Librarian and Visiting Assistant Professor from UIC Library of the Health Sciences Peoria. Carmen headlined the webinar with her presentation about the Nursing Experts: Translating the Evidence (NExT) Guide, which provides resources on evidence-based practice to nurses. Good sport that she was, Carmen helped me demonstrate the exercise to our audience by participating in an AI interview about her outreach project. The outreach librarians then brainstormed ways to use the three questions to prepare their own NN/LM reports. We also talked about how to add their reflective statements to their annual reports, which are entered into an online system.

Soon after that webinar, Carmen wrote an entry about her experience using the three questions that appeared in the NN/LM GMR’s blog The Cornflower. Here is my favorite quote from her entry:

“These three simple questions which only take about 10-15 minutes to answer forced me to stop and reflect on the NExT project. Rather than just being focused on what was next on the to-do list, I was looking back on what had already been accomplished, and better yet, I was thinking about the good stuff.”

The NN/LM GMR outreach librarians have not yet filed their 2016 annual reports, so I can’t tell you how many rose to my challenge. (This exercise was a suggestion, not a requirement.) One other outreach librarian did send an email to say she was using the three questions to have a reflective group discussion with other librarians who participate in NN/LM outreach activities.

I would like to extend the challenge to our readers who may be facing annual reports. Try this exercise and see if you don’t start thinking and writing differently about your efforts over the past year.

If you want to know more about Appreciative Inquiry, we highly recommend this source:

  • Preskill H, Catsambas TT. Reframing Evaluation through Appreciative Inquiry. Thousand Oaks, CA: Sage, 2006.

You also might be interested in the OERC’s other blog posts about Appreciative Inquiry:

If you are interested in earning some continuing education credits from the Medical Library Association while trying your hand at an Appreciative Inquiry project, read this post: Appreciative Inquiry of Oz: Building on the Best in the Emerald City

 

Diversity in Evaluation – It’s About All of Us

Picture of children running through different colors with text "Diversity is about all of us, and about us having to figure out how to walk through this world together."

Unless you’ve been living under a rock, you know that culture permeates everything we do, and that we live in a diverse society with lots of different cultures. Odds are good that no matter what your job is, you take into consideration issues of culture, diversity and inclusion. This applies to evaluation as it does everywhere else.

At the 2015 Summer Evaluation Institute, each attendee was given a copy of The American Evaluation Association’s Statement on Cultural Competence in Evaluation.  I was impressed that the document was frequently mentioned, because it was clear that the AEA felt that cultural competence was central to quality evaluation. As it says in the Statement, “evaluations cannot be culture free… culture shapes the way the evaluation questions are conceptualized which in turn influences what data are collected, how the data will be collected and analyzed, and how the data are interpreted.”

The Statement describes the importance of cultural competence in terms of ethics, validity of results, and theory.

  • Ethics – quality evaluation has an ethical responsibility to ensure fair, just and equitable treatment of all persons.
  • Validity – evaluation results that are considered valid require trust from the diverse perspectives of the people providing the data and trust that the data will be honestly and fairly represented.
  • Theory – theories underlie all of evaluation, but theories are not created in a cultural vacuum. Assumptions behind theories must be carefully examined to ensure that they apply in the cultural context of the evaluation.

The Statement makes some recommendations for essential practices for cultural competence. I highly recommend reading all of the essential practices, but here are a few examples:

  • Acknowledge the complexity of cultural identity. Cultural groups are not static, and people belong to multiple cultural groups. Attempts to categorize people often collapse them into cultural groupings that may not accurately represent the true diversity that exists.
  • Recognize the dynamics of power. Cultural privilege can create and perpetuate inequities in power. Work to avoid reinforcing cultural stereotypes and prejudice in evaluation. Evaluators often work with data organized by cultural categories. The choices you make in working with these data can affect prejudice and discrimination attached to such categories.
  • Recognize and eliminate bias in language: Language is often used as the code for a certain treatment of groups. Thoughtful use of language can reduce bias when conducting evaluations.

This may sound good, but how can it apply to the evaluation of your outreach project?

Recently, Stephanie Evergreen’s EvergreenData Blog had two entries on data visualizations and how they can show cultural bias. In the first one, How Dataviz Can Unintentionally Perpetuate Inequality: The Bleeding Infestation Example, she shows how using red to represent individual participants on a map made the actual participants feel like they were perceived as a threat. The more recent blog post, How Dataviz Can Unintentionally Perpetuate Inequality Part 2, shows how the categories used in a chart on median household income contribute to stereotyping certain cultures and skew the data to show something that does not accurately represent income levels of the different groups.

Does it sometimes feel like cultural competence is too much to add to your already full plate of required competencies? This quote from the AEA Statement on Cultural Competence in Evaluation may be reassuring: “Cultural competence is not a state at which one arrives; rather, it is a process of learning, unlearning, and relearning.”


Get to Know a Community though Diffusion of Innovation (Part 2)

Two granddaughters whispering some news to their grandmother

“It’s really simple. You sell to the people who are listening, and just maybe, those people tell their friends.” — Seth Godin, marketer

Diffusion of Innovation changed my approach to community assessment. I now focus primarily on identifying the following three things: the key problem that the program or product (the innovation) solves for the target community; key community members who will benefit both from using the innovation and from promoting it; and the best channels for capturing the attention of the majority and laggard segments of the group.

I now purposely use the term “community” rather than “needs” assessment because you have to assess much more than needs. You must learn the key components of an entire social system. A community could be faculty or students in a particular department, staff in a given organization, or an online support group of people with a challenging health condition. All of these groups fit my definition of “community” by virtue of their connectedness and ability to influence each other.

Key Evaluation Questions

No matter what type of evaluation I do, I always start with guiding evaluation questions. These questions lead my investigation and help me design data collection methods. Here are my most typical community assessment questions:

  • What problems can the innovation solve for the target audience?
  • What are their beliefs, lifestyles, and values; and will the innovation fit all three? (Marketers call these characteristics “psychographics.”)
  • Who in the group is most likely to want to use the innovation and talk about it to their friends? (These are the early adopters who fit a second category: opinion leaders.)
  • Who among the early adopters will want to work with the project team and how can we work with them?
  • Where are the best places to connect with community members?
  • What are the best ways to communicate with the larger majority?

Answering Evaluation Questions

I also have a series of steps that I usually take to gather information about my key evaluation questions. This is my typical process:

  1. Talk to “advisors” about their ideas and their contacts. Start talking with people you know who are part of, or have experience with, the community. I call this group my “advisors.” They don’t have to be high-level officials, but they do need to have solid social connections. It helps if they are well liked within the target community. They will know about the daily lives of target community members, as well as the influential voices in the community. They also can help you gain access.
  2. Look at publicly available data: Local media provide clues to the concerns and interests of your target community. In a town or neighborhood, newspapers and websites for television stations are good sources. Inside an organization, you should look at public and employee relations publications to see what is on the minds of leaders and employees.
  3. Interviews with key informants: Get your advisors to recommend people they think would be early adopters and opinion leaders for your innovation. But don’t stop there. Early adopters are different from those in the “later adopter” segments. You need to talk with people from the other segments to understand how to get their attention and participation. The best way to find community members in the other segments is to ask for recommendations and introductions from the early adopters. This is called “snowball sampling.”
  4. Visit the community: More specifically, visit locations where you are most likely to connect with members of your target audience. Visit the venue of a health fair where you could exhibit.  Stop by the computer lab in an academic department where you might teach students. Check out the parking and building access at the public library or community-based organization that could host a consumer health information workshop. If your community is virtual, see if you can visit and participate with group members through their favored social media channels.
  5. Put ideas together, then present them to early adopters for feedback: If at all possible, bring together a group of early adopters and potential partners to listen to and respond to your ideas. Early adopters are the people that companies use for beta testing, so you can do the same. They may be the same people you interviewed or a different crowd (or a mix).

Other Tips

For the most part, I tend to rely heavily on qualitative methods for community assessment. Diffusion of Innovation describes how ideas spread through a system of relationships and communication channels. You need details to truly understand that system. You need to talk to people and understand how they live. Interviews, focus groups, and site visits provide the most useful planning information, in my opinion. You may have to include quantitative data, though. Library usage statistics might indicate the best branches for providing workshops. Short surveys might confirm broad interest in certain services. In the end, a blend of qualitative and quantitative methods gives you the best picture of a community.

The downside to mostly qualitative data collection methods is that you get an overwhelming amount of information. I like to use an information sheet that allows me to summarize information as I conduct a community assessment. A version of this worksheet is available in OERC’s Planning and Evaluating Health Information Outreach Projects Booklet 1: Getting Started with Community-Based Outreach (downloadable version available here). See Worksheet 2 on page 19. You will see how the evaluation questions I posed above relate to this worksheet.

Final Thoughts

Seth Godin said ideas that spread are remarkable, meaning they are “worth making a remark about.” Use community assessment to find out why your innovation is remarkable and how to start the conversation.

Other Resources

If you want to see an example of a community assessment that used this process, check out this article in Public Libraries.

You also might be interested in  Seth Godin’s TEDtalk How to Get Your Ideas to Spread.

A Most Pragmatic Theory: Diffusion of Innovation and User Assessment (Part 1)

Seven tomatoes in a row, increasing in maturity from left to right

If your work includes teaching or providing products or services to people, you are in the business of influencing behavior change. In that case, behavior change theories should be one of the tools in your evaluation toolbox. These theories are evidence-based descriptions of how people change and the factors that affect the change process. If you have a handle on these influences, you will be much more effective in gathering data and planning programs or services.

Today and next week, I’m going to talk about my go-to behavior change theory: Diffusion of Innovations. It was introduced in the 1960s by communication professor Everett Rogers to explain how innovations spread (diffuse) through a population over time. The term innovation is broadly defined as anything new: activities, technologies, resources, or beliefs. There are a number of behavior change theories that guide work in health and human services, but I particularly like Diffusion of Innovations because it emphasizes how social networks and interpersonal relationships may impact your success in getting people to try something new.

I use Diffusion of Innovations for most user or community assessment studies I design. Next week, we’ll talk about using these concepts to frame community or user assessment studies. This week, I want to cover the basic principles I found to be most helpful.

People change in phases

The heart of behavior change is need. People adopt an innovation if it solves a problem or improves quality of life. Adoption is not automatic, however. People change in phases. They first become aware of and gather information about an innovation. If it is appealing, they decide to try it and assess its usefulness. Adoption occurs if the innovation lives up to or exceeds their expectations.

Product characteristics influence phase of adoption

Five criteria impact the rate and success of adoption within a group. First, the innovation must be better than the product or idea it is designed to replace. Second, it must fit well with people’s values, needs, and experiences. Innovations that are easy to use will catch on faster, as will technologies or resources that allow experimentation before users must commit to them. Finally, if people can easily perceive that the innovation will lead to positive results, they are more likely to use it.

Peers’ opinions matter greatly when it comes to innovation adoption. Marketers will tell you that mass media spreads information, but people choose to adopt innovations based on recommendations from others who are “just like them.” Conversations and social networks are key channels for spreading information about new products and ideas. If you are going to influence change, you have to identify and use the channels through which members of your audience communicate with one another.

Migration of flock of birds flying in V-formation at dusk

Riding the Wave

Segments of a population adopt innovations at different rates. In any given target population, there will be people who will try an innovation immediately just for the pleasure of using something new. They are called innovators. The second speediest are the early adopters, who like to be the trendsetters. They will use an innovation if they perceive it will give them a social edge. They value being the “opinion leaders” of their communities.

Sixty-eight percent of a population comprise the majority. The first half (early majority) will adopt an innovation once its reliability and usefulness have been established. (For example, these are the folks who wait to update software until the “bugs” have been worked out.) The other half (late majority) are more risk averse and tend to succumb to peer pressure, which builds as an innovation gathers momentum. The last adopters are called the laggards, who are the most fearful of change. They prefer to stick with what they know. Laggards may have a curmudgeonly name, but Les Robinson of Enabling Change pointed out that they also may be prophetic, so ignore them at your own risk.
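Rogers’ segment shares (conventionally 2.5% innovators, 13.5% early adopters, 34% each for the early and late majority, and 16% laggards) make it easy to estimate how many people in a target audience fall into each segment. Here is a quick sketch, assuming a hypothetical audience of 200:

```python
# Rogers' classic adopter-category percentages, innovators through laggards
SEGMENTS = {
    "innovators": 0.025,
    "early adopters": 0.135,
    "early majority": 0.34,
    "late majority": 0.34,
    "laggards": 0.16,
}

def segment_counts(population):
    """Estimated headcount in each adopter segment for a given audience size."""
    return {name: round(share * population) for name, share in SEGMENTS.items()}

print(segment_counts(200))
```

For an audience of 200, this suggests only about 5 innovators and 27 early adopters, with 136 people in the combined majority — a reminder of how few people you actually need to reach first before word of mouth can carry an innovation to the rest.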

Next Step: Diffusion of Innovations and User/Community Assessment

Next week, I will show you how I develop my needs assessment methods around Diffusion of Innovation concepts. In the meantime, here are some sources that might interest you. Everett Rogers and Karyn Scott wrote an article specifically for the NN/LM Pacific Northwest Region that you can read here. Les Robinson’s article has an interesting discussion of the specific needs of the different population segments. Finally, if you want the classic text by Ev Rogers himself, here is the full citation:

Rogers EM.  Diffusion of innovations (5th ed). New York, NY: The Free Press, 2003.

Setting a Meaningful Participation Target

Picture of people standing on each other to reach a star

I enjoyed reading an article in Public Libraries titled “The Grass Is Always Greener” by Melanie A. Lyttle and Shawn D. Walsh.  They discuss the complexities of deciding whether a program was “well attended” or “nobody came.”  Sometimes a program that seems well attended in one situation is the same as a poorly attended program in another.

I can think of a lot of times I’ve experienced this exact situation. When I was a branch manager at a public library, the program manager at the main library would ask if she could send authors to speak at our branch library.  When I said, “maybe you should send them somewhere else – we only had ten people come to the last one,” she replied “ten is a lot – ten is more than we get anywhere else.”

When I worked at the NN/LM South Central Region, in some parts of the region 30 people could be expected to attend training sessions.  In other parts of the region, we considered 6 people a successfully attended program.  These differences often corresponded to urban vs. rural, or the travel distance needed to get to the training, or whether the librarians were largely solo librarians or worked in multi-librarian organizations, or whether their institutions supported taking time off for training.  Other considerations include whether the trainers had already built an audience over time that would regularly attend the programs. Or on the other hand, whether the trainers had saturated their market and there were very few new people to learn about the topic.

So how can you decide what a good target participation level should be, or maybe more importantly, how can you explain your participation targets to your funder or parent organization?

Tying your participation level to your intermediate and long-term intended outcomes is one way to do that.  Let me give you an example of a program in Houston that was funded by the NN/LM South Central Region. The Greater Houston AHEC received funding many years ago to do an in-depth training project with a small number of seniors in the most underserved areas of Houston.  The goals were to teach these seniors how to use computers, how to get on the Internet, how to use email, and then how to use MedlinePlus and NIHSeniorHealth to look up health information.  They planned for the seniors to take 2-3 classes a week, and each class lasted several hours. It was a big commitment, but they intended for these seniors to really know how to use the Internet at the end of the series.  There were so few seniors who saw the need to learn to use computers that they had to persuade about 10 people from each location to sign up.  However, the classes were so good and the seniors so enthusiastic, that after a couple of weeks, the other seniors wanted to take classes too.  This led to a phase 2 project which included funding for a permanent computer and coffee area in a senior center where students could practice their Internet skills. There is now a third phase of the program called M-SEARCH which teaches seniors to use mobile devices to look up their health information.

At the beginning, Greater Houston AHEC may not have envisioned these specific outcomes. However, if they were trying to convince a funder that 10-person classes were a reasonable use of the funder’s money, it might be good to show that small, in-depth classes could lead to a long-term outcome like “seniors in even the poorest neighborhoods in Houston will be able to research their health conditions on NIHSeniorHealth.” In addition, it would be important to bring in other factors, such as your intended goals for the project: for example, whether you hope to train a small group of seniors to really use the Internet for health research, or whether you want to reach a lot of seniors in underserved areas to let them know that it’s possible to find great health information using NLM resources (see the Kirkpatrick Model of training evaluation for more information on evaluating your training goals).

For more on creating long-term outcomes, see Booklet 2 of the OERC’s booklets: Planning Outcomes-based Outreach Projects

Appreciative Inquiry of Oz: Building on the Best in the Emerald City

Cartoon image of an Emerald City

“One day not very long ago, librarians came to the Emerald City from their libraries in all of the countries of Oz. They came to visit the Great Library of the Emerald City, and to petition the Wizard to allow them to borrow books and other items at the Great Library. Their hope was to transport items from one library to another using the Winged Monkeys, who offered their skills for this task after they were set free and got bored.”

Thus begins the latest OERC project – an online class in Appreciative Inquiry (AI), offered through the MidContinental Region’s Librarians in the Wonderful Land of Oz Moodle “game” (i.e., a series of online classes worth game points and CE credits from the Medical Library Association). The game is made up of several “challenges” (online classes) for librarians offered by NN/LM instructors.

In OERC’s challenge, Building on the Best at the Great Library of the Emerald City: Using Appreciative Inquiry to Enhance Services and Programs, the Wizard of Oz makes a deal with the librarians. He will allow interlibrary loan of the Great Library’s resources if the librarians will assess customer satisfaction with the Great Library’s services and find things to improve. Students in the class will learn to use a qualitative data collection technique called Appreciative Inquiry to do this assessment.

Sometimes people avoid customer service assessment because they find the methods to be complicated and time-consuming. Negative feedback can be uncomfortable for both the listener and the speaker. Appreciative Inquiry, with its focus on identifying and building on organizational strengths, removes that discomfort. A number of OERC workshops touch on Appreciative Inquiry, but this Librarians of Oz challenge allows you to practice the technique, something that the OERC has not been able to provide in the traditional webinar or workshop context. Completing the class is worth 14 MLA CE credits.

The class is free, but in order to take it you will need to register for the game Librarians in the Wonderful Land of Oz. If you don’t want to take the class, but would still like to learn more about Appreciative Inquiry, I recommend these earlier blog posts:

From Cindy and Karen’s perspective, one of the best parts of this experience is that we finally get the official title of Wizard. Special thanks to John “Game Wizard” Bramble of the NN/LM MCR, who made all this happen.

 

W.A.I.T for Qualitative Interviews

WAIT

Why

Am

I

Talking?

 

My sister-in-law recently told me about the W.A.I.T. acronym that she learned from a communication consultant who spoke to her staff. It’s a catchy phrase for an important communication concept: Be purposeful when you talk. This self-reflective question can be applied to any conversational setting, but I want to discuss it in the context of qualitative interviews for evaluation data collection.

Surveys and tests are examples of quantitative data collection instruments. They require careful crafting and pilot-testing to be sure they collect valid information from respondents. By contrast, in qualitative interviews, the data collection instrument is the interviewer.  The interview guide itself is important, but the interpersonal manner of the interviewer has far greater impact on the trustworthiness of the information gathered. The key responsibility of the interviewer is described succinctly by Michael Q. Patton in Qualitative Research and Evaluation Methods:

“It is the responsibility of the interviewer to provide a framework within which people can respond comfortably, accurately, and honestly to open-ended questions.”

Listening skills, of course, are key to good interviewing. As program evaluator Kylie Hutchinson said in a 2016 American Evaluation Association conference presentation, evaluators need to ask their questions, then shut up. If you can learn to do this, you are more than halfway there. Julian Treasure has a TED Talk with excellent tips on developing your listening skills.

However, how you talk is important as well.  Here are a few ways I would answer the question “Why Am I Talking?” during an interview:

  • I want to show that I share something in common with my interviewee: People are more comfortable talking to others who are like them. I say things like “I feel that way, too, sometimes” or “I know what you mean. Something like that happened to me a few years ago.”  These statements can help me build rapport.
  • I want the interviewee to know that no answer he or she gives can surprise me. Social desirability is something that survey researchers always consider in instrument design. Even in the anonymous survey context, people may give answers to make themselves “look good.” So you can imagine that the dynamic is even greater in the face-to-face interview setting. When broaching a sensitive topic, I let my interviewee know I’ve heard it all before. I might say, for instance, “Some people have told me they spent hours researching a serious health condition. Others say they were so frightened by the diagnosis, they didn’t want to read anything about it. How did you respond when you were diagnosed?”
  • I want to allow the interviewee an opportunity to answer a question hypothetically. Sometimes you may ask an interviewee about choices or behaviors that are potentially embarrassing. Let’s say I want to know what barriers prevented them from following their doctors’ orders. This question could feel awkward to interviewees if, for example, they lacked the understanding or willpower to follow a physician’s recommendations. So I frame questions that allow them to distance themselves personally from their answers. Rather than asking them to describe a time they didn’t follow a doctor’s orders, I might say “Sometimes people don’t do what their doctors tell them to. In your experience, what are some of the reasons people might not follow their doctor’s orders?”
  • I want to show I’m listening and to check my understanding: Paraphrasing your interviewee’s comments is an active listening technique that demonstrates your interest in the ongoing discussion. It also is a validity check on your own interpretations of their answers. I say things like “Okay, so let me make sure I understand.  Essentially, you are saying…?”
  • I’m managing the emotional climate and turn-taking in a focus group. I choose language to maintain a neutral, non-judgmental atmosphere and to model respectful interaction. I also talk when I need to rein in someone who is dominating the discussion. I might say “Truman gave us quite a few great examples of how she uses MedlinePlus. What can someone else add to Truman’s examples?”

All of these tips, by the way, are from Patton’s book on qualitative methods. Here is the full citation:

Patton, MQ. Qualitative Research and Evaluation Methods: Integrating Theory and Practice (4th ed.). Thousand Oaks, CA: Sage, 2015.

If you would like to read more about W.A.I.T., here’s an excellent article from the National Speakers Association. I also want to thank Lauren Yee and Donna Speller Turner from the NASA Langley Research Center for alerting me to W.A.I.T.

 

Logic Model for a Birthday Party

Cindy and I feel that logic models are wonderful planning tools that can be used in many life events to stay focused on what’s meaningful. This blog post is an example of such a logic model.

My daughter’s birthday is coming up this week and we are having a party for her. My husband and I have quite a few friends with children about the same age as our daughter (who is turning 3). This means that we go to birthday parties and we host birthday parties, and we are looking forward to another 15 years or so of birthday parties. Even though we live in the 4th largest city in the country, it’s a bit of a project to come up with a place for the party. I could see this problem stretching out into future years of Chuck E. Cheese’s and trampoline parks. Not that there’s anything wrong with those places, but we realized that for us it was time to stop the train before we went off the rails. Looking back at my own childhood, my birthday parties were all at my own house. So we decided to see if we could have a party at our house and just have fun.

To make sure we had a great event and kept our heads on straight (and had something to blog about this week), I created a logic model for my daughter’s birthday party. We needed an evaluation question, which is: “Is it possible to have a party of preschoolers at our tiny, not-that-childproofed house without going crazy?”

So here is the event we have planned.

Birthday Party Logic Model

If you’re new to logic models, they are planning tools that you work through from right to left, starting with long-term outcomes (what you hope to see out in the future), then intermediate outcomes and short-term outcomes. Next you think of the activities that would lead to those outcomes, and finally the inputs, the things you need in order to do the activities. (For more information on logic models, take a look at the OERC Blog category “Logic Models”.)

What I’ve learned from this process is that every idea for the party has to pass a test: does it lead to the long-term outcome of being willing to throw parties in the house in the future? In other words, if the party takes too much work or money (or isn’t fun), we won’t remember it as an event we are likely to repeat. For example, we are inviting a person to our house to entertain the kids, but rather than a stranger, we’re bringing our daughter’s music teacher from her preschool, so it should be fun for the kids who know her, and everyone will know the music and can sing along. Another activity with high enjoyment and low effort is the dance party with bubbles. All toddlers love to dance, and we can make a playlist of our daughter’s favorite dance songs. Adding bubbles to the mix is frosting.

The short-term goals are our immediate party goals. We would like the party to be fun for our daughter and for most of her friends (can we really hope for 100%? Probably not, so we put 90%). My husband and I may be a little stressed, but we’re setting our goal fairly low at being relaxed 60% of the time (you’ll have to imagine maniacal laughter here). Our intermediate goals are simply that we all feel comfortable having our daughter’s friends over to our house in the near future. And the long-term goal is to think this is a good idea to do again and again.

Wish us luck!

Last updated on Monday, June 27, 2016

Funded by the National Library of Medicine under Contract No. UG4LM012343 with the University of Washington.