Planning & Evaluating
Health Information
Outreach Projects

Booklet Three

COLLECTING AND ANALYZING
EVALUATION DATA

2nd Edition - Outreach Evaluation Resource Center 2013


Table of Contents

 

Preface

This booklet is part of the Planning and Evaluating Health Information Outreach Projects series designed to supplement Measuring the Difference: Guide to Planning and Evaluating Health Information Outreach [1]. This series also supports evaluation workshops offered through the Outreach Evaluation Resource Center of the National Network of Libraries of Medicine. The goal of the series is to present step-by-step planning and evaluation methods. 

The series is aimed at librarians, particularly those from the health sciences sphere, and representatives from community organizations who are interested in conducting health information outreach projects. We consider "health information outreach" to be promotional and educational activities designed to enhance community members' abilities to find and use health information. A goal of these activities often is to equip members of a specific group or community to better address questions about their own health or the health of family, peers, patients, or clients. Such outreach often focuses on online health information resources such as the websites produced by the National Library of Medicine. Projects may also include other sources and formats of health information. 

We strongly endorse partnerships among organizations from a variety of environments, including health sciences libraries, hospital libraries, community-based organizations and public libraries. We also encourage broad participation of members of target outreach populations in the design, implementation, and evaluation of the outreach project. We try to describe planning and evaluation methods that accommodate this participatory approach to community-based outreach. Still, we may sound like we are talking to project leaders. In writing these booklets we have made the assumption that one person or a small group of people will be in charge of initiating an outreach project, writing a clear project plan, and managing the evaluation process. 

Booklet 1 in the series, Getting Started with Community Assessment, is designed to help you collect community information to assess need for health information outreach and the feasibility of conducting an outreach project. Community assessment also yields contextual information about a community that will help you set realistic program goals and design effective strategies. It describes three phases of community assessment: 

  1. Get organized,
  2. Collect data about the community, and
  3. Interpret findings and make project decisions.

The second booklet, Planning Outcomes-Based Outreach Projects, is intended for those who need guidance in designing a good evaluation plan. By addressing evaluation in the planning stage, you are committing to doing it and you are more likely to make it integral to the overall project. The booklet describes how to do the following: 

  1. Plan your program with a logic model,
  2. Use your logic model for process assessment, and
  3. Use your logic model to develop an outcomes assessment plan.

The third booklet, Collecting and Analyzing Evaluation Data, presents steps for quantitative methods (methods for collecting and summarizing numerical data) and qualitative methods (specifically, methods for summarizing text-based data). For both types of data, we present the following steps: 

  1. Design your data collection methods,
  2. Collect your data,
  3. Summarize and analyze your data, and
  4. Assess the validity or trustworthiness of your findings.

Finally, we believe evaluation is meant to be useful to those implementing a project. Our booklets adhere to the Program Evaluation Standards developed by the Joint Committee on Standards for Educational Evaluation [2]. Utility standards, listed first because they are considered the most important, specify that evaluation findings should serve the information needs of the intended users, primarily those implementing a project and those invested in the project's success. Feasibility standards direct evaluation to be cost-effective, credible to the different groups who will use evaluation information, and minimally disruptive to the project. Propriety standards uphold evaluation that is conducted ethically, legally, and with regard to the welfare of those involved in or affected by the evaluation. Accuracy standards indicate that evaluation should provide technically adequate information for evaluating a project. Finally, the accountability standards encourage adequate documentation of program purposes, procedures, and results. 

We sincerely hope that you find these booklets useful. We welcome your comments, which you can email to one of the authors: Cindy Olney at olneyc@uw.edu or Susan Barnes at sjbarnes@uw.edu. 


 

Acknowledgements

We deeply appreciate Cathy Burroughs' groundbreaking work, Measuring the Difference: Guide to Planning and Evaluating Health Information Outreach, and thank her for her guidance in developing the Planning and Evaluating Health Information Outreach Projects series as a supplement to her publication. We also are grateful to our colleagues who provided feedback for the first edition of the series. 

To update the series, we were fortunate to work with four reviewers who brought different and valuable viewpoints to their critiques of the booklets. We want to thank our reviewers for their insightful suggestions: 

This project has been funded in whole with federal funds from the Department of Health and Human Services, National Institutes of Health, National Library of Medicine, under Contract No. HHS-N-276-2011-00008-C with the University of Washington. 


Introduction

This booklet provides tips and techniques for collecting and analyzing information about your outreach projects so that you can make decisions like these: 

If you are going to make good decisions about your outreach project − or any other project − you need information or data. In this booklet we use the word "data" to include numerical and text-based information (such as interview transcriptions or written comments) gathered through surveying, observing, interviewing, or other methods of investigation. 

During community assessment, data can help you identify groups that are in particular need of health information outreach [3]. Data also can be used to assess the resources and challenges facing your project. While you are implementing your activities and strategies, data can provide you with feedback for project improvement − this is called process assessment, which we described in Booklet 2, Planning Outcomes-Based Outreach Projects [4]. During outcomes assessment, data can provide the basis for you and other stakeholders to identify and understand results and to determine if your project has accomplished its goals. 

Therefore, much care must go into the design of your data collection methods to ensure accurate, credible, and useful information. To really understand and assess an outreach project, use multiple and mixed methods when possible: 

We provide an example of how to use mixed methods in the Toolkit that starts on page 33. 

When possible, project evaluation should combine both quantitative and qualitative methods. Quantitative methods gather numerical data that can be summarized through statistical procedures. Qualitative methods collect non-numerical data, often textual, that can provide rich details about your project. Each approach has its particular strengths and, when used together, can provide a thorough picture of your project. (Note: When we talk about data collection methods, we are referring to procedures or tools designed to gather information. Surveys and interviews are data collection methods. When we compile, summarize and analyze the data, we use the term "analytic methods.") 

Quantitative Methods

This booklet is organized into two sections: one for quantitative methods and one for qualitative methods. After a brief overview, each section focuses on a specific method that is common and applicable to a variety of evaluation projects. In the quantitative section, the survey method has been chosen. For the qualitative section, interviewing is the method addressed. 

However, we should note that neither surveys nor interviews are limited to collecting one type of data. Either method can be designed to produce qualitative or quantitative data. Often, they are designed to collect a combination of both. 

You choose the type of method based on the evaluation question you want to answer. Evaluation questions describe what you want to learn (as differentiated from survey and interview questions, which are carefully worded, sequenced, and formatted to elicit responses from participants). Figure 1 provides an approach to selecting the type of method. 

This section will take you through the steps of using quantitative methods for evaluation, as shown above in Figure 2. Any piece of information that can be counted is considered quantitative data, including: 

Quantitative methods show the degree to which certain characteristics are present, such as frequency of activities, opinions, beliefs, or behaviors within a group. They can also provide an "average" look at a group or population. 

The advantage of quantitative methods is the amount of information you can quickly gather and analyze. The questions listed below are best answered using quantitative methods: 

Appendix 1 describes some typical data collection methods for quantitative data. 


Step One - Design Your Data Collection Methods

This section will focus on one of the most popular quantitative methods: surveys. This method has been chosen because of its usefulness at all stages of evaluation. Surveys use a standard set of questions to get a broad overview of a group’s opinions, attitudes, self-reported behaviors, and demographic and background information. In this booklet, our discussion is limited to written surveys such as those sent electronically or through surface mail. 

Write your evaluation questions

The first task in developing the questionnaire is to write the general evaluation questions you want it to answer. Evaluation questions describe what you want to learn by conducting a survey. They are different from your survey questions, which are specific, carefully formatted questions designed to collect data from respondents related to the evaluation questions. (We will use the term "survey items" when referring to survey questions to distinguish them from evaluation questions.) 

Listed below are examples of evaluation questions associated with different phases of evaluation: 

If you have a logic model [4], you should review the input and activities sections to help you focus the community assessment questions. You also should consider the information you might want to gather to check assumptions listed in the assumptions section of the logic model.

You should look at the activities and inputs column of your logic model to determine the questions you might want to ask. You also can check the outcomes columns to determine if your survey can help you collect baseline information that will allow you to assess change. 

Develop the data collection tool (i.e. questionnaire)

Your next task is to write survey items to help you answer your evaluation questions. One approach is to use a table like that shown in Figure 3, above, to align survey items with evaluation questions. 

Writing surveys can be tricky, so you should consider using questions from other projects that already have been tested for clarity and comprehension. (Although adopting items from existing questionnaires does not mean that you should forgo your own pilot test of your questionnaire.) Journal articles about health information outreach projects sometimes include complete copies of questionnaires. You can also contact the authors to request copies of their surveys. You could also contact colleagues with similar projects to see if they are willing to share their surveys. However, if you do copy verbatim from other surveys, always be sure to secure permission from the original author or copyright holder. It also is a collegial gesture to offer to share your findings with the original authors. Figure 4 gives you six examples of commonly used item formats. 

The visual layout of your survey is also important. Commercial websites that offer online survey software give examples of how to use layout, color, and borders to make surveys more appealing to respondents and easier for them to complete. There are several popular commercial products to create web-based surveys, such as SurveyMonkey (http://surveymonkey.com) and Zoomerang (http://www.zoomerang.com). 

In most cases, you will want to design online surveys that are accessible to respondents with disabilities. This means that your survey should be usable by respondents who use screen reader software, who need high contrast, or who have limited keyboard use. SurveyMonkey.com states that its application meets all current U.S. Federal Section 508 certification guidelines and that, if you use one of its standard questionnaire templates, your survey will be 508 compliant. If you are not using SurveyMonkey or one of its templates, you should read tips from SurveyMonkey.com about how to make your questionnaires accessible by visiting its Section 508 Compliancy tutorial page [5]. 

Pilot test the questionnaire

Always pilot test your questionnaire before you send it to the target audience. Even if you think your wording is simple and direct, it may be confusing to others. A pilot test will reveal areas that need to be clarified. First, ask one or two colleagues to take the survey while you are present and request that they ask questions as they respond to each item. Make sure they actually respond to the survey because they may not pick up confusing questions or response options just by reading it. 

Once you have made adjustments to the survey, give it to a small portion of your target audience and look at the data. Does anything seem strange about the responses? For instance, if a large percentage of people are picking "other" on a multiple-option question, you may have missed a common option. 

The design stage also entails seeking approval from appropriate committees or boards that are responsible for the safety and well-being of your respondents. If you are working with a university, most evaluation research must be reviewed by an Institutional Review Board (IRB). Evaluation methods used in public schools often must be approved by the school board, and community-based organizations may have their own review processes that you must follow. Because many evaluation methods pose little to no threat to participants, your project may not require a full review. Therefore, you should consider meeting with a representative from the IRB or other committee to find out the best way to proceed with submitting your evaluation methods for approval. Most importantly, it is best to identify all these review requirements while you are designing your methods. Otherwise, your evaluation may be significantly delayed.

Once you have pilot-tested the survey and obtained required approvals, you are ready to administer it to your entire sample.


Step Two - Collect Your Data

Decide who will receive the questionnaire

As part of planning your survey, you will decide whether to collect data from a sample (that is, a subgroup) of your target population and generalize the responses to the larger population or to collect data from all participants targeted by the survey. Sampling is used when it is too expensive or time-consuming to send a survey to all members of a group, so you send the survey to a portion of the group instead. 

Random sampling means everyone in the population has an equal chance of being included in the sample. For example, if you want to know how many licensed social workers in your state have access to online medical journals, you probably do not have to survey all social workers. If you use random sampling procedures, you can assume (with some margin of error) that the percentage of all social workers in your state with access is fairly similar to the sample percentage. In that case, your sample provides adequate information at a lower cost compared with a census of the whole population. For details about random sampling, see Appendix C of Measuring the Difference [1].
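For readers comfortable with a little scripting, the idea behind random sampling can be sketched in Python. The margin of error uses the standard normal-approximation formula for a proportion; the population, its size, and the 40% access rate are entirely hypothetical, not figures from this booklet.

```python
import math
import random

def sample_proportion_ci(population, sample_size, has_trait, z=1.96):
    """Draw a simple random sample and estimate the population proportion
    with an approximate 95% margin of error (normal approximation)."""
    sample = random.sample(population, sample_size)
    p = sum(1 for member in sample if has_trait(member)) / sample_size
    margin = z * math.sqrt(p * (1 - p) / sample_size)
    return p, margin

# Hypothetical example: 5,000 social workers, roughly 40% with journal access.
random.seed(1)
population = [{"has_access": random.random() < 0.40} for _ in range(5000)]
p, margin = sample_proportion_ci(population, 400, lambda m: m["has_access"])
print(f"Estimated access rate: {p:.1%} ± {margin:.1%}")
```

Surveying 400 randomly chosen members rather than all 5,000 yields an estimate within a few percentage points of the true rate, which is the trade-off sampling offers.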

With smaller groups, it is possible to send the survey to everyone. In this case, any information you summarize is a description of the group of respondents only. For instance, if you survey all seniors who were trained in your outreach project to use MedlinePlus and 80% of them said they used it at home one month after the session, you can describe how many of your trainees used MedlinePlus after training. This percentage provides important information about a result of your outreach project. However, you cannot make the generalization that 80% of all trained seniors use MedlinePlus at home after they are trained because you have not randomly sampled from among all seniors who received training on the resource.

Maximize response rate

The quality of your survey data depends heavily on how many people complete and return your questionnaire. Response rate refers to the percentage of people who return a survey. When a high percentage of people respond to your survey, you have an adequate picture of the group. But when you have a high percentage of nonresponders (members of your sample who did not complete your questionnaire), you cannot be sure whether they share characteristics that might affect the accuracy of your interpretation of the findings. For example, the nonresponders may have been less enthusiastic than responders and were not motivated to complete the questionnaire. If they had actually responded, you may have found lower levels of satisfaction in the total group. If the survey was administered electronically, the responders may be more computer-literate than nonresponders. Without participation of these nonresponders, your findings may be favorably biased toward the online resources that you are promoting. The problem with a low response rate is that, while you may suspect bias, it is difficult to confirm your suspicion or determine the degree of bias that exists.

Statisticians vary in their opinions of what constitutes a good response rate. Sue and Ritter reviewed survey literature and reported that the median response rate was 57% for mailed surveys and 49% for online surveys [6]. In our experience talking with researchers and evaluators, 50% seems to be the minimal response rate acceptable in the field of evaluation.

Figure 5 defines a typical protocol for administering mailed surveys. Studies show that these procedures are effective for surveys sent either through regular mail or email [7, 8, 9]. Because electronic surveys are becoming increasingly popular, we have provided additional tips for increasing their response rate:

Check for nonresponse bias

Getting a high response rate can be difficult even when you implement procedures for maximizing it. Because survey researchers have faced declining response rates in recent years, the field of public opinion research has produced a number of studies on the relationship between low response rates and bias (called nonresponse bias). Some survey researchers have concluded that the impact of nonresponse is lower than originally thought [12]. If you fail to get a return rate of 50% or more, you should try to discern where the bias might be:

The bottom line is that you should explore your sample for nonresponse bias. You may decide, in fact, that you should not analyze and report your findings. However, if you believe your data are still valid, you can report your response rate, potential biases, and the results of your exploration of nonresponse bias to your stakeholders. They can then judge for themselves the credibility of your data. 

Provide motivation and information about risks and participants' rights

The correspondence around surveys is an important part of the overall survey design. The pre-notification letter is your first contact with respondents, and the impression you create will determine the likelihood that they will ultimately complete the questionnaire. It needs to be succinct and motivational.

The cover letter also is a motivational tool for inducing participation, and it should inform respondents of their rights and the potential risks of participation before they begin the survey.

This is called informed consent, and it is part of the Program Evaluation Standards described in the booklet preface (specifically, the "propriety standard") [2]. If you must have your project reviewed through an institutional review board (IRB) or some other type of review group, you should get specific details of what should be in the letter. (The IRB will want to see your correspondence as well as your questionnaire.) 

Your reminder notices are your last attempt to motivate participation. They generally are kept short and to the point. Figure 6 provides a checklist for creating the letters and emails used in survey distribution. 

Once you have received the last of your surveys, you have accumulated raw data. The next step is to summarize those data so that you can see the patterns and trends that will inform your project decisions, and then interpret them. 


Step Three - Summarize and Analyze Your Data

Compile descriptive data

The first step in analyzing quantitative data is to summarize the responses using descriptive statistics that help identify the main features of data and discern any patterns. When you have a group of responses for one question on your survey, that group of responses is called a "response distribution" or a "distribution of scores." Each question on your survey, except open-ended ones, creates a distribution of scores. Descriptive statistics describe the characteristics of that distribution. 

For some survey question distributions, you want to see how many respondents chose each possible response. This will tell you which options were more or less popular. You start by putting together a table that shows how many people chose each of the possible responses to that question. You then should show the percentage of people who chose each option. Percentages show what proportion of your respondents answered each question. They convert everything to one scale so you can compare across groups of varying sizes, such as when you compare the same survey question administered to training groups in 2011 and 2012. Figure 7 shows you how to construct a frequency table.
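A frequency table like the one in Figure 7 can be built with a few lines of Python. The responses below are hypothetical Likert ratings invented for illustration, not data from the booklet's figures.

```python
from collections import Counter

def frequency_table(responses, options):
    """Count how many respondents chose each option, and convert
    each count to a percentage of all responses."""
    counts = Counter(responses)
    total = len(responses)
    return [(opt, counts.get(opt, 0), 100 * counts.get(opt, 0) / total)
            for opt in options]

# Hypothetical responses to one survey item (n = 40).
responses = (["Strongly agree"] * 12 + ["Somewhat agree"] * 20 +
             ["Somewhat disagree"] * 5 + ["Strongly disagree"] * 3)
options = ["Strongly agree", "Somewhat agree",
           "Somewhat disagree", "Strongly disagree"]

for option, n, pct in frequency_table(responses, options):
    print(f"{option:<18} {n:>3}  {pct:5.1f}%")
```

Because percentages put every group on the same scale, the same function can be run on, say, 2011 and 2012 training cohorts of different sizes and the rows compared directly.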

Calculate measures of central tendency and dispersion

You also should determine the "most representative score" of a distribution. The most representative score is called the "measure of central tendency." Depending on the nature of your score distribution, you will choose among three measures:

You also need to calculate the spread of your scores, so you can know "how typical" your measure of central tendency is. We call these "measures of dispersion," the most frequently reported measures being range (the lowest and highest scores reported) and standard deviation (the "spread" of scores, with a higher standard deviation meaning a bigger spread of scores). 
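Python's standard library computes all of these measures directly. The ratings below are hypothetical 1-to-5 responses made up for this sketch.

```python
import statistics

# Hypothetical ordinal ratings on a 1-5 agreement scale.
scores = [2, 3, 3, 4, 4, 4, 5, 5]

mode = statistics.mode(scores)            # most frequent response
median = statistics.median(scores)        # middle score of the distribution
mean = statistics.mean(scores)            # arithmetic average
score_range = (min(scores), max(scores))  # lowest and highest response
stdev = statistics.stdev(scores)          # sample standard deviation

print(f"mode={mode}, median={median}, mean={mean:.2f}, "
      f"range={score_range}, stdev={stdev:.2f}")
```

Which of these you report depends on the level of the data, as the discussion of Figure 8 below explains.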

You do not always report all central tendency and dispersion measures. The ones you use depend on the type of data collected by a given item. Figure 8 shows how you would represent these measures for three levels of data. The first level is called nominal-level data. It is considered "first level" because the only information you get from a respondent's answer is whether he or she does or does not belong in a given category. Table A shows a very typical nominal-level question: "Where do you live?" For this categorical data, the measure of central tendency (the "most representative response") is the mode. Percentages tell you how responses disperse among the options. 

Table B describes responses to the same question used in Figure 7, on the previous page. The data collected in Table B's question gives you information about intensity. As you read the response options on Table B from left to right, each option indicates a higher level of agreement with the statement. So if someone marked "strongly agree," that respondent indicated a higher degree of agreement with the statement than a respondent who marked "somewhat agree." Because we can rank responses by their degree of agreement with the statement, they are considered "ordinal-level" data. For ordinal-level data, the "most representative score" is the median. In the example in Table B, the median score is 4. That score indicates that 50% of responses were "4" (somewhat agree) or above and 50% were "4" or below. The range of responses presents how widespread the ratings were in a distribution. For this question, all responses were between 2 ("somewhat disagree") and 5 ("strongly agree"). 

In Table C, the interval/ratio-level data suggest even more information than provided by the question in Table B. The question asked respondents how many times they visited their public library in the past 30 days. As with our question in Table B, a higher number means "more." A respondent who visited 4 times visited more often than a person who visited 2 times.

But notice that you also can describe "how much more," because each visit counts an equal amount. So you know that the respondent who went 4 times to the public library visited twice as often as the person who went 2 times. (There is a difference between interval-level and ratio-level data that we will not discuss here because both are described with the same statistics. If you are interested, this difference is described in any basic statistics textbook.) 

For this level of data, the most representative score usually is the mean (also known as the average) and the standard deviation is an index of how far the scores scatter from the mean. If you have a relatively normal distribution (something you probably know as a "bell-shaped" distribution), then approximately 68% of scores will fall between one standard deviation below and one standard deviation above the mean. The standard deviation is really more useful in understanding statistical tests of inference, such as t-tests and correlations. It may not be particularly meaningful to you, but if you report a mean, it is good practice to report the standard deviation. For one, it tells readers how similar the people in your sample were in their responses to the question. It also provides another way to compare samples that responded to the same question. 

Notice that we qualified our statement about the mean being the most representative central tendency measure for interval/ratio-level data. You also will notice that, in Table C, the median and range are reported as well as the mean and standard deviation. Sometimes, you may get a couple of extremely high or low scores in a distribution that can have too much effect on the mean. In these situations, the mean is not the "most representative score" and the standard deviation is not an appropriate measure of dispersion. 

For the data presented in Table C, let's say the highest number of visits was 30 rather than 7. For the fictional data of this example, changing that one score from 7 to 30 visits would alter the mean from 1.8 to 2.7. However, the median (which separates the top 50% of the distribution from the bottom 50%) would not change. Even though we increased that highest score, a median of 1 would continue to divide the score distribution in half. 
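This outlier effect is easy to verify. The visit counts below are hypothetical numbers chosen for illustration (the booklet's actual Table C data are not reproduced here), but they show the same behavior: one extreme score pulls the mean upward while the median stays put.

```python
import statistics

# Hypothetical visit counts: mean 1.8, median 1.
visits = [0, 0, 1, 1, 1, 1, 2, 2, 3, 7]
base_mean = statistics.mean(visits)
base_median = statistics.median(visits)

# Replace the top score with an extreme value:
# the mean shifts noticeably, the median does not move.
visits_outlier = [0, 0, 1, 1, 1, 1, 2, 2, 3, 30]
outlier_mean = statistics.mean(visits_outlier)
outlier_median = statistics.median(visits_outlier)

print(f"mean: {base_mean} -> {outlier_mean}")
print(f"median: {base_median} -> {outlier_median}")
```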

In fact, there are other reasons you might want to report the median and range rather than the mean and standard deviation for interval/ratio-level data. There are times when the "average" doesn't make as much sense as a median when you report your findings to others. In Table C, it may make more sense to talk about 2 visits rather than 1.8 visits. (No one really made 1.8 visits, right?) The range is also easier to grasp than the standard deviation. 

One other note: There are times that you might see people report means and standard deviations for data similar to what you see in Table B. Discussions among measurement experts and statisticians about the appropriateness of this practice have been long and heated. Our opinion is that our discussion here reflects what makes sense most of the time for health information outreach. However, we also can see the other point of view and would not discredit this practice out of hand. 

There are other ways to use tables to help you understand your data. Figure 9, Figure 10, Figure 11, and Figure 12 show formats that will help you analyze your descriptive data. After you compile a table, write a few notes to explain what your numbers mean. 

Simplify data to explore trends

You can simplify your data to make the positive and negative trends more obvious. For instance, the two tables in Figure 9 show two ways to present the same data. In Table A, frequencies and percentages are shown for each response category. In Table B, the "Strongly Agree" and "Agree" responses were combined into a "Positive" category and the "Disagree" and "Strongly Disagree" responses were put into a "Negative" category. 
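Collapsing categories this way is a simple mapping. The response counts here are hypothetical, not the values in Figure 9.

```python
from collections import Counter

# Map detailed Likert options onto two summary categories.
COLLAPSE = {
    "Strongly agree": "Positive", "Agree": "Positive",
    "Disagree": "Negative", "Strongly disagree": "Negative",
}

def collapse_responses(responses):
    """Tally responses after folding them into Positive/Negative."""
    return Counter(COLLAPSE[r] for r in responses)

# Hypothetical responses to one item (n = 30).
responses = (["Strongly agree"] * 10 + ["Agree"] * 15 +
             ["Disagree"] * 4 + ["Strongly disagree"] * 1)
counts = collapse_responses(responses)
print(counts)
```

The collapsed counts make it immediately clear that responses ran heavily positive, at the cost of hiding the distinction between mild and strong agreement.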

Provide comparisons

Sometimes, you may want to see how participants' attitudes, feelings, or behaviors have changed over the course of the project. Figure 10 shows you how to organize pre-project and post-project data into a chart that will help you assess change. Figure 10 also presents means rather than percentages because numbers of websites represent interval-level data. Data from open-ended questions in which participants may give a wide range of scores, such as the number of continuing education credits completed, are easier to describe using averages rather than percentages. 

You may wonder if the findings vary for the different groups you surveyed. For instance, you may wonder if nurses, social workers, or members of the general public found your resources as useful as the health librarians who had your training. To explore this question, you would create tables that compare statistics for subgroups in your distribution, as in Figure 11.
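A subgroup comparison like the one in Figure 11 amounts to grouping responses before computing the descriptive statistics. The groups and ratings below are hypothetical examples, not data from the figure.

```python
import statistics
from collections import defaultdict

# Hypothetical (group, usefulness rating) pairs from a post-training survey.
ratings = [
    ("nurse", 5), ("nurse", 4), ("nurse", 4),
    ("social worker", 3), ("social worker", 4),
    ("librarian", 5), ("librarian", 5), ("librarian", 4),
]

# Gather ratings by respondent group, then summarize each group.
by_group = defaultdict(list)
for group, rating in ratings:
    by_group[group].append(rating)

for group, scores in by_group.items():
    print(f"{group:<14} n={len(scores)}  mean={statistics.mean(scores):.2f}")
```

Presenting the subgroup means side by side, as in Figure 11, lets you see at a glance whether any audience found the resources less useful than the others.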

Finally, you also may want to compare your findings against the criteria you identified in your objectives. Figure 12 gives an example of how to present a comparison of objectives with actual results.


Step Four: Assess the Validity of Your Findings

Validity refers to the accuracy of the data collected through your survey: Did the survey collect the information it was designed to collect? It is the responsibility of the evaluator to assess the factors that may affect the accuracy of the data and present those factors along with results. 

You cannot prove validity. You must build your case for the credibility of your survey by showing that you used good design principles and administered the survey appropriately. After data collection, you assess the shortcomings of your survey and candidly report how they may impact interpretation of the data. Techniques for investigating threats to validity of surveys include the following: 

Surveys allow you to collect a large amount of quantitative data, which can be summarized quickly using descriptive statistics. This approach can give you a sense of participants' experience of your project and can allow you to assess how closely you have come to attaining your goals. However, as the analyses for the tables in Figures 9 through 12 show, the conclusions are tentative. The numbers may describe what respondents believe or feel about the questions you asked, but they do not explain why participants believe or feel that way. Even if you include open-ended questions on your survey, only a small percentage of people are likely to take the time to comment. 

For evaluation, the explanations behind the numbers usually are very important, especially if you are going to make changes to your outreach projects or make decisions about canceling or continuing your efforts. That is why most outreach evaluation plans include qualitative methods. 

Qualitative Methods

This section will take you through the steps of using qualitative methods for evaluation, as shown above in Figure 13. Qualitative methods produce non-numerical data. Most typically these are textual data, such as written responses to open-ended questions on surveys; interview or focus group transcripts; journal entries; documents; or field notes. However, qualitative researchers also make use of visual data such as photographs, maps, and videos. 

The advantage of qualitative methods is that they can give insight into your outreach project that you could never obtain through statistics alone. For example, you might find qualitative methods to be particularly useful for answering the following types of questions: 

Qualitative evaluation methods are recommended when you want detailed information about some aspect of your outreach project. Listed here are some examples of the type of information best collected through qualitative methods: 

Appendix 2 describes some typical qualitative methods used in evaluation. Interviewing individual participants will be the focus of the remainder of this booklet because it is a qualitative method with broad application to all stages of evaluation. We specifically discuss one-to-one interviewing here. There is overlap between one-to-one and focus group interviewing, but we do not go into details about aspects of focus groups that are important to understand, such as group composition or facilitation techniques. An excellent resource for conducting focus groups is Focus Groups by Krueger and Casey [13].

Step One: Design Your Data Collection Methods

Write your evaluation questions

As with quantitative methods, you design your qualitative data collection process around your evaluation questions. You may, in fact, decide to use both quantitative and qualitative methods to answer the same evaluation questions. For instance, if the evaluation question is "Do participants use the online resources we taught after they have completed training?" you may decide to include a quantitative "yes/no" question on a survey sent to all participants, but you also may decide to interview 10-12 participants to learn how they used the resources. 

Develop the data collection tool (i.e., interview guide)

Once you have your list of evaluation questions, the next step is to design an interview guide which lists questions that you plan to ask each interviewee. Interviewing may seem less structured than surveys, but preparing a good interview guide is essential to gathering good information. An interview guide includes all the questions you plan to ask and ensures that you collect the information you need.

Patton [14] discusses different types of interview questions such as those presented in Figure 14 and provides these tips for writing questions:

You also need to pay attention to how you sequence your questions. Here are some tips, also adapted from Patton [14], to help you with the order of your questions: 

Pilot test the interview guide

As with a survey, it is a good idea to pilot-test your interview questions. You might pilot-test your guide with someone you are working with who is familiar with your interviewees. (This step is particularly important if your interviewees are from a culture that is different from your own.) Sometimes evaluators consider the first interview a pilot interview. Any information they gather on the first interview is still used, but they revisit the question guide and make modifications if necessary. Unlike surveys, question guides do not have to remain completely consistent from interview to interview. While you probably want a core set of questions that you ask each interviewee, it is not unusual to expand your question guide to confirm information you learned in earlier interviews. 

Finally, be sure your interview project is reviewed by the appropriate entities (such as your IRB). Because interviews are so personal and conversational, they may not seem like research, and you may forget they are subject to the same review procedures as surveys. Do not make this assumption, or you may face a delay in collecting your data. 

Step Two: Collect your data

Decide who will be interviewed

Like quantitative methods, interviewing requires a sampling plan. However, random sampling usually is not recommended for interviewing projects because the total number of interviewees in a given project is quite small. Instead, most evaluators use purposeful sampling, in which you choose participants who you are sure can answer your questions thoroughly and accurately. 

There are a number of approaches to purposeful sampling, and use of more than one approach is highly recommended. The following are just a few of the approaches you can take to sampling [14]:

Convenience samples, in which participants are chosen simply because they are readily accessible, should be avoided except when piloting survey methods or conducting preliminary research. The typical "person-on-the-street" interview you sometimes see on the evening news is an example of a convenience sample. This approach is fast and low-cost, but the people who agree to participate may not represent those who can provide the most or best information about the outreach project. 

A common question asked by outreach teams is "How many interviews do we need to conduct?" That question can be answered in advance for quantitative procedures but not for qualitative methods. The usual suggestion is that you continue to interview until you stop hearing new information. However, resource limitations usually require that you have some boundaries for conducting interviews. Therefore, your sampling design should meet the following criteria: 

Provide informed consent information

Interviewing is a much more intimate experience than completing surveys, and the process is not anonymous. The ethics of interviewing require that you provide introductory information to help the interviewee decide whether to participate. You can provide this information in writing, but you must be sure the person reads and understands it before you begin the interview. If your project is to be reviewed by an IRB, the board's guidelines will help you develop an informed consent process. However, with or without institutional review, you should provide the following information to your interviewees:

If you want to record the interview, explain what will happen to the recording (e.g., who else will hear it, how it will be discarded). Then gain permission from the interviewee to proceed with the recording. 

Record the interviews

It is usually a good idea to record your interviews, unless your interviewee objects or becomes visibly uncomfortable. You may not transcribe the interview verbatim, but you will want to review the interview to get thorough notes. Here are some relatively inexpensive tools that will help you record and transcribe interviews:

Build trust and rapport through conversation

How you conduct yourself in an interview and your ability to build trust and rapport with an interviewee will affect the quality of the data you collect. Patton wrote, "It is the responsibility of the interviewer to provide a framework within which people can respond comfortably, accurately, and honestly to open-ended questions" [14]. To create this framework you have to be a good listener. Sound consultant Julian Treasure uses the acronym RASA (which means "essence" in Sanskrit) to describe four steps to effective listening [15]:

  1. Receive: Pay attention to your interviewee.
  2. Appreciate: Show your appreciation verbally by saying "hmm" and "okay."
  3. Summarize: Repeat what you heard.
  4. Ask: Further your understanding by asking follow-up questions.

(You can hone your skills by practicing with family, friends and colleagues. They probably will be happy to accommodate you.) 

An interview is a social exchange, and your question order should reflect the social norms of conversation between strangers. Consider how you talk with a stranger you meet at a party. You usually start with easy, safe questions and then, if you build rapport in the first stages of conversation, you start to ask more about the stranger's opinions, feelings, and personal information. To protect the comfort of your interviewee, you might incorporate some of the following tips [14]:

Start the analysis during data collection

Step Three talks about summarizing and analyzing your interview data, but you should start doing some interpretation during the data collection stage. In preparation for this step, take reflective notes about what you heard soon after each interview (preferably within 24 hours). Reflective notes differ from the notes you take during the interview to capture what the participant is saying; they should include your commentary on the interaction. Miles and Huberman [16] suggest these memos should take from a few minutes to a half hour to write and could address some of the following: 

Be sure to add descriptive information about the encounter: time, date, place, and interviewee. You also can start to generate a list of codes with each reflective note and write the codes somewhere in the margins or in the corner of your memo. 

By starting to process your notes during the data collection process, you may start to find themes or ideas that you can confirm in subsequent interviews. This reflective practice also will make Step Three a little less overwhelming. 

Step Three: Summarize and Analyze Your Data

Those who are "number-phobic" may believe that analyzing non-numerical data is easier than analyzing quantitative data. However, the sheer amount of text that accumulates in even the simplest evaluation project makes the data analysis task daunting. It might help to remember the goals of qualitative data analysis in the context of program evaluation [17]:

There are various approaches to data analysis used by qualitative researchers. We have adapted an approach developed specifically for evaluation by Thomas [17]. We suggest you approach the data analysis process in phases. 

Prepare the text

Interviews may be transcribed verbatim, or you may produce summaries based on reviews of recordings or notes. If you are fortunate enough to be able to pay a transcriptionist, you should still review your recordings and check the transcript for accuracy. Interviewers with more limited resources can produce detailed summaries from their notes and then fill in details by reviewing the recordings. In some instances, interviewers may have to rely on notes alone for their summaries. If you are not using verbatim transcripts, it is a good idea to have your interviewees review your summary for accuracy. Your transcripts, regardless of detail level, are your raw data. Each summary should be contained in its own document and identified by interviewee, date, location, and length of interview. It is also helpful for future analysis to turn on the "line numbering" function (use the continuous setting) so you can identify the location of examples and quotes. 

Note themes (or categories)

Once you have transcribed or summarized the information, read through all the qualitative data, noting themes or "categories." Create a codebook to keep track of your categories, listing a category label (a short phrase that can be written easily in margins or with a qualitative software package) and a description (a longer definition of the category label). You probably will have two tiers of categories. Upper-level categories are broader and may be associated with your evaluation questions. For instance, you may have conducted interviews to learn how participants in a training session are using the training and whether they have recommendations for improving future sessions. Therefore, you may read through the notes looking for examples that fit themes related to "results," "unexpected outcomes," "barriers to project implementation," and "suggestions for improvement." Lower-level categories emerge from phrases in the text. These lower-level categories may or may not be subthemes of your upper-level categories. 
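If you are comfortable with a little scripting, a two-tier codebook can be kept as a simple data structure rather than a separate document, which makes it easy to list and revise. A sketch in Python; all of the category labels and definitions below are hypothetical examples:

```python
# Hypothetical two-tier codebook: upper-level categories hold a
# definition plus lower-level subcategories with their own definitions.
codebook = {
    "results": {
        "definition": "How participants applied the training",
        "subcategories": {
            "own health": "Looked up information about personal health issues",
            "clients": "Searched on behalf of clients or patients",
        },
    },
    "suggestions for improvement": {
        "definition": "Recommendations for future training sessions",
        "subcategories": {
            "pacing": "Comments about the speed of instruction",
        },
    },
}

# Print the codebook as a quick reference while coding
for label, entry in codebook.items():
    print(f"{label}: {entry['definition']}")
    for sub, sub_def in entry["subcategories"].items():
        print(f"  - {sub}: {sub_def}")
```

A table in a word processor or spreadsheet serves the same purpose; what matters is that every label has a written definition you can apply consistently.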

Code the text

Systematically code your material. You do this by identifying "units" of information and categorizing them under one of your themes. A unit is a collection of words related to one main theme or idea and may be a phrase, sentence, paragraph or several paragraphs. Note that not all of your information may be relevant to your evaluation questions. You do not need to code all of your data. 

Try to organize your categories into major themes and subthemes. Combine categories that seem redundant. Thomas [17] suggests refining your categories until you have 3-8 major themes. To describe themes, identify common viewpoints along with contradictory opinions or special insights. Highlight quotes that seem to present the essence of a category. 

One simple approach to coding is to highlight each unit of information using a different color for each upper-level category. Then pull the upper-level categories into one table or document and apply subthemes. (See Figure 15 and Figure 16 for an example of how to do this.) For simpler projects, this process is manageable with tables and spreadsheets. If you have more complicated data, you may want to invest in a qualitative software package. There are various popular packages, including ATLAS.ti (http://www.atlasti.com/) and NVivo 9 (http://www.qsrinternational.com/products_nvivo). We have experience with HyperRESEARCH, which is produced by Researchware (http://www.researchware.com), the same company that offers HyperTRANSCRIBE. HyperRESEARCH includes helpful tutorials for how to use the software. 

Interpret results

Produce written summaries of the categories. The summaries include the broader theme, the sub-themes, a written definition of the category, and examples or quotes. See Figure 17 for how to produce these category write-ups.

Eventually you want to go beyond just summarizing the categories in your data. You should interpret the findings to answer questions such as: 

You also might describe classifications of answers, such as categories of how people used MedlinePlus after training. 

The analysis might even involve some counting. For instance, you might count how many users talked about looking up health information to research their own health issues and how many used it to look up information for others. This will help you assess which uses were more typical and which ones were unusual. However, remember these numbers are only describing the group of people that you interviewed; they cannot be generalized to the whole population. 
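Once the text is coded, counts like these are easy to produce. A small sketch in Python; the interviewees and category labels are hypothetical:

```python
from collections import Counter

# Hypothetical coded interview data: for each interviewee, the set of
# lower-level "use of resources" categories assigned to their transcript.
coded = {
    "interviewee_01": {"own health", "family member"},
    "interviewee_02": {"own health"},
    "interviewee_03": {"clients", "own health"},
    "interviewee_04": {"family member"},
}

# Count how many interviewees mentioned each type of use. These counts
# describe only the people interviewed; they cannot be generalized.
mentions = Counter(cat for cats in coded.values() for cat in cats)
for category, n in mentions.most_common():
    print(f"{category}: mentioned by {n} of {len(coded)} interviewees")
```

Using a set per interviewee counts each person once per category, so the tally answers "how many people mentioned this use," not "how many times it came up."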

It is a good idea to describe both the typical and the unusual cases in each category. You want to look for contradictory findings or findings that differ across groups. For example, you may find that doctors preferred different online health resources than did health educators or nurse practitioners. 

There are numerous approaches to analyzing qualitative data. Two excellent resources for beginners are "Analyzing Qualitative Data" from the University of Wisconsin-Extension website [18] and Glesne's Becoming Qualitative Researchers [19]. Qualitative Data Analysis by Miles and Huberman [16] also provides methods for analysis, though at a more advanced level. 

Step Four: Assess the Trustworthiness of Your Findings

With quantitative data, you assess your findings for validity, which is roughly synonymous with accuracy. With qualitative analysis, you are exploring varying viewpoints, so qualitative researchers favor the term "trustworthiness" over "validity." A trustworthy report presents findings that are fair and represent multiple perspectives [20]. 

Use procedures that check the fairness of your interpretation

Listed below are some approaches you can choose from to assess the trustworthiness of your findings [14, 17, 19]:

When you interview, you should use at least one other source of data to see if the data corroborate one another. For instance, you may compare interview data to focus group data or to written comments on training evaluation forms. You do not have to triangulate only with other qualitative data: in evaluation, it is not unusual to compare interview findings with survey data. 

Present findings to reflect multiple points of view

Take-Home Messages

  1. Be prepared to mix qualitative and quantitative data. Mixed approaches often tell the whole story better than either approach alone.
  2. Quantitative methods are excellent for exploring questions of "quantity": how many people were reached; how much learning occurred; how much opinion changed; or how much confidence was gained.
  3. The two key elements of a successful survey are a questionnaire that yields accurate data and a high response rate.
  4. With surveys, descriptive statistics usually are adequate to analyze the information you need about your project. Using tables to make comparisons also can help you analyze your findings.
  5. Qualitative methods are excellent for exploring questions of "why," such as why your project worked; why some people used the online resources after training and others did not; or why some strategies were more effective than others.
  6. A good interview study uses a purposeful approach to sampling interviewees.
  7. Well-constructed and sequenced questions, along with good listening skills, facilitate the interview conversation.
  8. Analysis of interview data entails systematic coding and interpretation of the text produced from the interviews. Multiple readings of the data and revised coding schemes are typical.
  9. In interpreting and reporting findings from qualitative data analysis, make sure your interpretations are thorough, accurate, and inclusive of all viewpoints.

 


 

References

  1. Burroughs C. Measuring the difference: Guide to planning and evaluating health information outreach [Internet]. Seattle, WA: National Network of Libraries of Medicine, Pacific Northwest Region; 2000 [cited 28 Feb 2012]. http://nnlm.gov/evaluation/guides.html#A1
  2. Yarbrough DB, Shulha LM, Hopson RK, Caruthers FA. The Program Evaluation Standards: A guide for evaluators and evaluation users. 3rd ed. Thousand Oaks, CA: Sage; 2011.
  3. Olney CA, Barnes SJ. Planning and evaluating health information outreach projects. Booklet 1: Getting started with community assessment, 2nd edition. Seattle, WA: National Network of Libraries of Medicine Outreach Evaluation Resource Center; 2013.
  4. Olney CA, Barnes SJ. Planning and evaluating health information outreach projects. Booklet 2: Planning outcomes-based outreach projects, 2nd edition. Seattle, WA: National Network of Libraries of Medicine Outreach Evaluation Resource Center; 2013.
  5. Survey Monkey. Tutorial: Section 508 compliancy [Internet] [cited 8 May 2012]. http://help.surveymonkey.com/app/tutorials/detail/a_id/427
  6. Sue VM, Ritter LS. Conducting online surveys. Thousand Oaks, CA: Sage; 2007.
  7. Cui WW. Reducing error in mail surveys [Internet]. Practical assessment, research & evaluation. 2003; 8(18) [cited 17 March 2012]. http://pareonline.net/getvn.asp?v=8&n=18
  8. Dillman DA, Smyth JD, Christian LM. Internet, mail, and mixed-mode surveys: The tailored design method. 3rd ed. Hoboken, NJ: Wiley; 2009.
  9. Millar MM, Dillman DA. Improving response to web and mixed-mode surveys [Internet]. Public opinion quarterly. 2011 Summer; 75(2): 249–269 [cited 8 May 2012].
  10. Birnholtz JF, Horn DB, Finholt TA, Bae SJ. The effects of cash, electronic, and paper gift certificates as respondent incentives for a web-based survey of technologically sophisticated respondents. Social science computer review. 2004 Fall; 22(3): 355-362 [cited 8 May 2012]. http://www-personal.umich.edu/~danhorn/reprints/Horn_2004_Web_Survey_Incentives_SSCORE.pdf
  11. Bosniak M, Tuten TL. Prepaid and promised incentives in web surveys [Internet]. Paper presented to the 57th American Association of Public Opinion Research Annual Conference, St. Pete Beach, FL; 2002 [cited 8 May 2012]. http://www.psyconsult.de/bosnjak/publications/AAPOR2002_Bosnjak_Tuten.pdf
  12. Langer G. About response rate [Internet]. Public Perspectives. 2003 May/June; 16-18 [cited 8 May 2012]. http://www.aapor.org/Content/NavigationMenu/PollampSurveyFAQs/DoResponseRatesMatteR/Response_Rates-_Langer.pdf
  13. Krueger RA, Casey MA. Focus groups: a practical guide for applied research. 4th ed. Los Angeles, CA: Sage; 2009.
  14. Patton MQ. Qualitative Research & Evaluation Methods. 3rd ed. Thousand Oaks, CA: Sage; 2002.
  15. Treasure J. 5 ways to listen better [Internet]. TEDtalks. 2011 Jul [cited 8 May 2012]. http://www.ted.com/talks/julian_treasure_5_ways_to_listen_better.html [Transcript available by clicking on "interactive video" under the video.]
  16. Miles MB, Huberman M. Qualitative data analysis. 2nd ed. Thousand Oaks, CA: Sage; 1994.
  17. Thomas DR. A general inductive approach for analyzing qualitative evaluation data. American Journal of Evaluation. 2006; 27: 237-246.
  18. Taylor-Powell ET, Renner M. Analyzing qualitative data [Internet]. Madison, WI: University of Wisconsin-Extension; 2003 [cited 8 May 2012]. http://learningstore.uwex.edu/Assets/pdfs/G3658-12.pdf
  19. Glesne C. Becoming qualitative researchers. 2nd ed. New York: Longman; 1999.
  20. Lincoln YS, Guba EG. But is it rigorous? Trustworthiness and authenticity in naturalistic evaluation. New Directions in Program Evaluation. 1986 Summer; 30:73-84.


 

Appendix 1 - Examples of Commonly Used Quantitative Methods

For each method below, examples of sources are listed first, followed by examples of information collected.

End-of-session evaluations

Sources:
  • Trainees
  • Service recipients
Information collected:
  • Satisfaction with training
  • Intentions of using the resources in the future
  • Beliefs about the usefulness of the resources for various health concerns
  • Confidence in skills to find information

Tests (best if conducted before and after training)

Sources:
  • Trainees
Information collected:
  • Ability to locate relevant, valid health information
  • Ability to identify poor-quality health information

Surveys (e.g., follow-up surveys conducted some time period after training; attitude or opinion scales such as strongly agree, agree, etc.; dichotomous yes/no scales)

Sources:
  • Trainees
  • Collaborative partners
Information collected:
  • Usefulness of resources for health concerns (becoming more informed about treatments, learning more about a family member's illness)
  • Use of resources as part of one's job
  • Level of confidence in using the resource
  • Sharing the resource with other co-workers, family members, etc.
  • Use and usefulness of certain supplemental products (listservs and special websites)

Records (e.g., frequency counts, percentages, averages)

Sources:
  • Website traffic information
  • Attendance records
  • Distribution of materials
Information collected:
  • Hits to website
  • Amount of participation on listservs
  • Training participation levels
  • Retention levels (for training that lasts more than one session)
  • Numbers of people trained by "trainers"
  • Number of pamphlets picked up at health fairs

Observations (e.g., absence/presence of some behavior or property; quality rating of behavior, from Excellent to Poor)

Sources:
  • Trainee behavior
  • Site characteristics
Information collected:
  • Level of participation of trainees in the sessions
  • Ability of trainee to find health information for the observer upon request
  • Number of computers bookmarked to resource website
  • Number of items promoting the resources made available at the outreach site (handouts, links on home pages)

 


 

Appendix 2 - Examples of Commonly Used Qualitative Methods

Interviews: People with knowledge of the community or the outreach project are interviewed to get their perspectives and feedback. Examples:
  • Interviews with people who have special knowledge of the community or the outreach project
  • Focus group interviews with 6-10 people
  • Large group or “town hall” meeting discussions with a large number of participants

Field observation: An evaluator either participates in or observes locations or activities and writes detailed notes (called field notes) about what was observed. Examples:
  • Watching activities and taking notes while a user tries to retrieve information from an online database
  • Participating in a health fair and taking notes after the event
  • Examining documents and organizational records (meeting minutes, annual reports)
  • Looking at artifacts (photographs, maps, artwork) for information about a community or organization

Written documents: Participants are asked to express responses to the outreach project in written form. Examples:
  • Journals from outreach workers about the ways they helped consumers at events
  • Reflection papers from participants in the project about what they learned
  • Electronic documents (chats, listservs, or bulletin boards) related to the project
  • Open-ended survey questions to add explanation to survey responses

 


 

Toolkit: Using Mixed Methods

Part 1: Planning a Survey

A health science library is partnering with a local agency that provides services, support, and education to low-income mothers and fathers who are expectant parents or have children up to age 2. The project will provide staff and volunteers with training on search strategies for MedlinePlus and the Household Products Database, with the goal of improving their ability to find consumer health information for their clients. 

The objectives of the project are the following: 

All staff and volunteers will be required to undergo MedlinePlus training conducted by a health science librarian. Training will emphasize searches for information on maternal and pediatric health care. The trainers will teach users to find information in MedlinePlus's Health Topics, Drugs and Supplements, and Videos and Cool Tools. The training will also include use of the Household Products Database. 

To evaluate the project outcomes, staff and volunteers will be administered a survey one month after training. Worksheet 1 demonstrates how to write evaluation questions from objectives, then how to generate survey questions related to the evaluation questions. (This worksheet can be adapted for use with pre-program and process assessment by leaving the objectives row blank.)

Part 2: Planning an Interview

Six months into the training project, the team considered applying for a second grant to expand training to clients. The team decided to conduct a series of interviews with key informants to explore the feasibility of this idea. Worksheet 2 demonstrates how to plan an interview project. The worksheet includes a description of the sampling approach, the evaluation questions to answer, and some interview questions that could be included in an interview guide. 

Blank versions of the worksheets used in the case example are provided on pages 39 and 40 for your use. 


 

Worksheet 1 – Planning a Survey


Objective 1: At the end of the training session, at least 50% of trained staff and volunteers will say that their ability to access consumer health information for their clients has improved because of the training they received.
Evaluation Questions
  • Do staff and volunteers think the training session improved their ability to find good consumer health information?
  • Did the training session help them feel more confident about finding health information for their clients?
Survey Items
  • The training session on MedlinePlus improved my ability to find good consumer health information. (strongly disagree/disagree/neutral/agree/strongly agree)
  • The training session on MedlinePlus made me more confident that I could find health information for the agency's clients. (strongly disagree/disagree/neutral/agree/strongly agree)

 

Objective 2: Three months after the training session, 75% of trained staff and volunteers will report finding health information for a client using MedlinePlus or the Household Products Database.
Evaluation Questions
  • Did the staff and volunteers use MedlinePlus or Household Products to get information for clients?
  • What type of information did they search for most often?
Survey Items
  • Have you retrieved information from MedlinePlus or Household Products to get information for a client or to answer a client's question? (yes/no)
  • If you answered yes, which of the following types of information did you retrieve? (check all that apply)
    • A disease or health condition
    • Prescription drugs
    • Contact information for an area health care provider or social service agency
    • Patient tutorials
    • Information about household products
    • Other (please describe)

 

Objective 3: Three months after receiving training on MedlinePlus or Household Products, 50% of staff and volunteers will say they are giving clients more online health information because of the training they received.
Evaluation Questions
  • Is staff helping more clients get online health information now that they have had training on MedlinePlus or Household Products?
  • What are some examples of how they used MedlinePlus or Household Products to help clients?
Survey Items
  • The training I have received on MedlinePlus or Household Products has made me more likely to look online for health information for clients. (strongly disagree/disagree/neutral/agree/strongly agree)
  • Since receiving training on MedlinePlus or Household Products, I have increased the amount of online health information I give to clients. (strongly disagree/disagree/neutral/agree/strongly agree)
  • Give at least two examples of clients’ health questions that you have answered using MedlinePlus or Household Products. (open-ended)

 


 

Worksheet 2 - Planning an Interview

Interview Group: Staff
Sampling strategy
  • Agency director
  • Volunteer coordinator
  • 2 staff members
  • 2 volunteers
  • 2 health science librarian trainers
Evaluation questions
  • How ready are the clients to receive this training?
  • What are some good strategies for recruiting and training clients?
  • How prepared is the agency to offer this training to its clients?
  • Do the health science librarians have the skill and time to expand this project?
Sample questions for the interview guide
  • What are some good reasons that you can think of to offer online consumer health training to clients?
  • What are some reasons not to offer training?
  • If we open the training we have been offering to staff and volunteers to clients, how likely are the clients to take advantage of it?
  • What do you think it will take to make this project work? (Probe: recommendations for recruitment; recommendations for training.)
  • Do you have any concerns about training clients?

 

Interview Group: Clients
Sampling strategy

Six clients recommended by case managers:

  • All interviewees must have several months of experience with the agency and must have attended 80% of sessions in the educational plan written by their case manager.
  • At least one client must be male
  • At least one client should not have access to the Internet from home or work
Evaluation questions
  • How prepared and interested are clients to receive training on online consumer health resources?
  • What are the best ways to recruit agency clients to training sessions?
  • What are the best ways to train clients?
Sample questions for the interview guide
  • When you have questions about your health, how do you get that information?
  • How satisfied are you with the health information you receive?
  • If this agency were to offer training to you on how to access health information online, would you be interested in taking it?
  • What aspects of a training session would make you want to come?
  • What would prevent you from taking advantage of the training?


 

Blank Worksheet 1 - Planning a Survey

Download Blank Worksheet 1 (Microsoft Word)


 

Blank Worksheet 2 - Planning an Interview

Download Blank Worksheet 2 (Microsoft Word)


 

Checklist for Booklet 3: Collecting and Analyzing Evaluation Data

Quantitative Methods

Step One - Design Your Data Collection Methods

Step Two - Collect Your Data

Step Three - Summarize and Analyze Your Data

Step Four - Assess the Validity of Your Findings

Qualitative Methods

Step One - Design Your Data Collection Methods

Step Two - Collect Your Data

Step Three - Summarize and Analyze Your Data

Step Four - Assess the Trustworthiness of Your Findings

Download Checklist (Microsoft Word)
