
Archive for the ‘OERC’ Category

Getting Started in Evaluation – Evaluation Guides from the OERC

Friday, July 24th, 2015

[Guest post by Karen Vargas]

New to the world of evaluation? What is your boss talking about when she says she wants you to measure outcomes, not outputs?  What is an indicator? How many responses should you get from your surveys?

Sometimes people think evaluation is just the form you fill out at the end of a class or event. In fact, evaluation can start at the very beginning of a project, with a community assessment, and it includes building support for your project among your stakeholders. It continues through writing an evaluation plan as part of your project, gathering and analyzing data, and reporting the results back to stakeholders in a way that is useful to them. The CDC uses a framework model to describe this evaluation cycle (see the link at the end of this post).

The Outreach Evaluation Resource Center (OERC) publishes a series of three booklets, Planning and Evaluating Health Information Outreach Projects, that guides readers through the evaluation process, from needs assessment to data analysis. Although the series focuses on “health information outreach,” it can be used to learn how to evaluate any type of project.

Booklet 1: Getting Started with Community-Based Outreach

  • Getting organized: literature review; assembling a team of advisors; taking an inventory; developing evaluation questions
  • Gathering information: primary data, secondary data, and publicly accessible databases
  • Assembling, Interpreting and Acting: summarizing data and keeping stakeholders involved

Booklet 2: Planning Outcomes-Based Outreach Projects

  • Planning your program with a logic model to connect activities to outcomes
  • Planning your process assessment
  • Developing an outcomes assessment plan, using indicators, objectives and an action plan

Booklet 3: Collecting and Analyzing Evaluation Data

  • Designing data collection methods; collecting data; summarizing and analyzing data for:
    • Quantitative methods
    • Qualitative methods

The booklets can be read in HTML, downloaded as PDFs, or ordered in print for free from the OERC by sending an email request to nnlm@u.washington.edu

Learn more about the CDC’s Evaluation Framework: http://www.cdc.gov/eval/framework/

Fast Track Interview Analysis: The RITA Method

Monday, July 20th, 2015

[Guest post by Cindy Olney]

If you want a systematic way to analyze interview data, check out the Rapid Identification of Themes from Audio Recordings (RITA) method described in Neal et al. (2015). This method skips the time-consuming transcription process because you conduct your analysis while listening to the recordings. The process also preserves nonverbal elements of your data (e.g., intonation) that are lost when interviews are transcribed. The authors present a case in their article to demonstrate how to use the RITA method.

The five-step RITA process, briefly described below, is meant to be used with multiple raters:

  1. Develop focused evaluation questions. Don’t try to extract every detail from the recordings. Instead, write a few focused evaluation questions to guide your analysis. For instance, you might want to know how participants applied lessons learned from a class on consumer health information, or what services a specific type of library user wants.
  2. Create a codebook. Develop a list of themes by talking with the project team, reviewing interviewer notes, or checking theories and literature related to your project. For their sample case, the authors used eight themes, which is probably the upper limit for the number of themes that can be used effectively in this process. Once you have the list, create a codebook with detailed theme definitions.
  3. Develop a coding form. (See the figure below.) All coders use this form to record the presence or absence of each theme in time-specified (e.g., 3-minute) segments of the interview. They listen to a segment, mark any themes that were present, and then repeat the process with the next segment. (The article describes how to determine the most appropriate segment length for your project.) If you want, you can also incorporate codes for “valence,” indicating whether comments were expressed positively, negatively, or neutrally. A minimal data-structure sketch follows this list.
  4. Have the coding team pilot-test the codebook and coding form on a small subset of interviews. The team then should refine both documents before coding all recordings.
  5. Code the recordings. At this stage, one coder per interview is acceptable, although the authors recommend that a subset of the interviews be coded by multiple coders and results tested for rater agreement.
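
To make the coding-form idea concrete, here is a minimal Python sketch of one way coded segments could be stored and tallied. The theme names, segment length, and helper functions are all hypothetical, invented for illustration; the article itself describes a form that coders fill in by hand, not software.

    # Minimal sketch of a RITA-style coding form: one row per time segment,
    # one presence/absence cell per theme. Themes and segment length are
    # hypothetical examples, not taken from the Neal et al. article.
    from collections import defaultdict

    THEMES = ["found_information", "shared_with_others", "barriers"]  # hypothetical codebook themes
    SEGMENT_MINUTES = 3  # segment length settled on during pilot testing

    def new_coding_form(interview_minutes):
        """Create an empty form: one dict of theme -> 0/1 per time segment."""
        n_segments = -(-interview_minutes // SEGMENT_MINUTES)  # ceiling division
        return [{theme: 0 for theme in THEMES} for _ in range(n_segments)]

    def theme_frequencies(form):
        """Count the segments in which each theme was marked present."""
        counts = defaultdict(int)
        for segment in form:
            for theme, present in segment.items():
                counts[theme] += present
        return dict(counts)

    # Example: one coder marks themes while listening to a 10-minute interview.
    form = new_coding_form(10)            # four segments of up to 3 minutes each
    form[0]["found_information"] = 1      # theme heard somewhere in minutes 0-3
    form[2]["barriers"] = 1               # theme heard somewhere in minutes 6-9
    print(theme_frequencies(form))

Because each row corresponds to a fixed-length segment, theme counts can be compared across interviews of different lengths by dividing by the number of segments.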

While the RITA process may seem time consuming, it is much more efficient than producing verbatim transcripts. Once the authors finalized their coding form, it took a team member about 68 minutes to code a one-hour interview. Because the coded data are numeric, the authors could assess inter-rater reliability, which showed an acceptable level of agreement among coders. Rater agreement adds credibility to your findings and can be helpful if you want to publish your results (a small worked example follows).
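
The article relies on standard reliability statistics; the sketch below simply illustrates, with made-up presence/absence codes, how percent agreement and Cohen’s kappa could be computed for two coders rating the same theme in the same interview. The data and function names are assumptions for the example, not taken from the article.

    # Agreement between two coders on one theme, coded as present (1) or
    # absent (0) in each time segment. All values below are made up.

    def percent_agreement(coder_a, coder_b):
        """Share of segments on which the two coders gave the same code."""
        matches = sum(a == b for a, b in zip(coder_a, coder_b))
        return matches / len(coder_a)

    def cohens_kappa(coder_a, coder_b):
        """Agreement corrected for chance: (p_o - p_e) / (1 - p_e)."""
        n = len(coder_a)
        p_o = percent_agreement(coder_a, coder_b)
        p_a_yes = sum(coder_a) / n
        p_b_yes = sum(coder_b) / n
        p_e = p_a_yes * p_b_yes + (1 - p_a_yes) * (1 - p_b_yes)
        return (p_o - p_e) / (1 - p_e)

    coder_a = [1, 0, 1, 1, 0, 0, 1, 0]
    coder_b = [1, 0, 1, 0, 0, 0, 1, 0]
    print(percent_agreement(coder_a, coder_b))  # 0.875
    print(cohens_kappa(coder_a, coder_b))       # 0.75

Kappa discounts the agreement expected by chance alone, so it is usually reported alongside (or instead of) raw percent agreement.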

While the RITA method is used with qualitative data, it is essentially a quantitative analytic method, producing numbers from text. That leads me to my main concern: by reducing the data to counts, you lose some of the rich detail and subtle nuance that are the hallmarks of qualitative data. However, most evaluation studies use mixed methods to provide a complete picture of their programs. In that spirit, you can simply keep track of the time segments that contain particularly good quotes and stories, then transcribe them and include them in your project report. They will nicely complement the findings from your RITA analysis.

Here is the full citation for the Neal et al. article, which provides excellent instructions for conducting the RITA process.

Neal JW, Neal ZP, VanDyke E, Kornbluh M. Expediting the analysis of qualitative data in evaluation: a procedure for the rapid identification of themes from audio recordings (RITA). American Journal of Evaluation. 2015; 36(1): 118-132.

Designing Questionnaires for the Mobile Age

Tuesday, July 14th, 2015

How does your web survey look on a handheld device?  Did you check?

The Pew Research Center reported that 27% of respondents to one of its recent surveys answered using a smartphone. Another 8% used a tablet. That means over one-third of participants used handheld devices to answer the questionnaire. Lesson learned: Unless you are absolutely sure your respondents will be using a computer, you need to design with mobile devices in mind.

As a public opinion polling organization, the Pew Research Center knows effective practices in survey research. It offers advice on developing questionnaires for handhelds in its article Tips for Creating Web Surveys for Completion on a Mobile Device. The top suggestion is to make sure your survey software is optimized for smartphones and tablets. The OERC uses SurveyMonkey, which meets this criterion, and many other popular web survey applications do as well. Just be sure to check.

However, software alone will not automatically create surveys that are usable on handheld devices. You also need to follow effective design principles. As a rule of thumb, keep it simple: use short question formats, avoid matrix-style questions, and keep the survey short. And don’t get fancy: questionnaires with logos and icons take longer to load on smartphones.

This article provides a great summary of tips to help you design mobile-device friendly questionnaires. My final word of advice? Pilot test questionnaires on computers, smartphones, and tablets. That way, you can make sure you are offering a smooth user experience to all of your respondents.

Share Your Program Success Stories

Thursday, July 9th, 2015

The NN/LM Outreach Evaluation Resource Center’s Blog features two new posts about how to create a program success story.

I’m planning to use the CDC’s Story Builder to document a program success story from our New Hampshire State Library MedlinePlus Train-the-Trainer class. If you use the Story Builder, the OERC would love to hear about your experience; please share it with Cindy Olney. The OERC is always interested in publicizing NN/LM Network Members’ program successes on its blog, so if you have a project success (one that includes an assessment or evaluation component), please send your story to Cindy.

-Michelle Eberle
