
NEO Shop Talk

The blog of the National Network of Libraries of Medicine Evaluation Office

Archive for 2009

Oregon Program Evaluators Network Slides Online

Wednesday, December 9th, 2009

Slides and handouts from October’s OPEN (Oregon Program Evaluators Network) meeting are available at their web site in the “OPEN Past Events & Resources” section.

Oregon Program Evaluators Network: Visual Displays of Data

Wednesday, October 7th, 2009

One of the OPEN Annual Conference afternoon workshops was “Groovy Graphics: Visual displays of Quantitative Information” from Elizabeth O’Neill and Ralph Holcomb of Multnomah County Aging and Disability Services. (A silly presentation title; it wasn’t clear why they used it, and the presenters didn’t seem to like it either.)

The presentation began with a “Graphics 101” review of some of Edward Tufte’s concepts about using graphics that “concentrate on the eloquent statement”:

  • Show comparisons;
  • Provide data at different levels of detail;
  • Integrate information so viewers can understand it quickly;
  • Provide documentation;
  • Reduce clutter.

Every visual choice should convey a meaning. MS Excel provides a toolbox of fundamentals for us (a rough sketch of these basics follows the list below):

  • Bar Graphs–avoid using the 3D option because it serves no purpose and simply adds visual clutter; typical axis choices are to make the x-axis nominal and the y-axis ordinal, interval, or ratio; horizontal bar graphs are especially useful in facilitating comparison of data to a benchmark;
  • Line Graphs–typically the x axis is ordinal or higher and the y axis is interval or ratio;
  • Scatterplots–for continuous data; both x and y are interval or ratio;
  • Stem and Leaf charts–show mean, median, and mode in one glance (but I find them a bit hard to read);
  • Pie Charts–for displaying many values of one variable.
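
For readers who want to tinker outside of Excel, here is a minimal sketch of the same chart-type guidance using Python and matplotlib (my own substitution; the workshop used Excel, and all of the data values below are invented for illustration):

```python
# Minimal sketch of the chart-type guidance above, using matplotlib rather
# than Excel (an assumption on my part); all data values are invented.
import matplotlib.pyplot as plt

# Bar graph: nominal x (program sites), interval/ratio y (visit counts).
sites = ["Site A", "Site B", "Site C"]
visits = [120, 95, 140]

# Line graph: ordinal-or-higher x (quarters), interval/ratio y (mean rating).
quarters = [1, 2, 3, 4]
satisfaction = [3.2, 3.5, 3.4, 3.9]

# Scatterplot: both axes continuous (interval or ratio).
hours_trained = [2, 4, 6, 8, 10]
test_scores = [55, 62, 70, 74, 81]

fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(12, 3))

# Horizontal bars make comparison to a benchmark easy; skip the 3D option.
ax1.barh(sites, visits)
ax1.axvline(100, linestyle="--")  # benchmark line at 100 visits
ax1.set_title("Bar: visits vs. benchmark")

ax2.plot(quarters, satisfaction, marker="o")
ax2.set_title("Line: trend over time")

ax3.scatter(hours_trained, test_scores)
ax3.set_title("Scatter: two continuous variables")

plt.tight_layout()
plt.show()
```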

The presenter stated that he is a “strong advocate of DDT”–meaning Dashboards, Drill-downs, and Templates:

  • Dashboards provide quick groups of data presented simply (often featuring traffic signals in green, yellow, and red; a minimal sketch of the idea follows this list);
  • Drilldowns (web pages where viewers can click on a word or a tab) are ways to provide more data with less clutter;
  • Trendlines show patterns of data over time.
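
To make the traffic-signal idea concrete, here is a minimal sketch in Python (my own illustration, not the presenter’s; the metric names, targets, and 10% warning margin are invented):

```python
# Minimal sketch of a traffic-signal dashboard: compare each metric to its
# target and report green/yellow/red. All metrics and thresholds are invented.
def signal(value, target, warning_margin=0.10):
    """Return 'green', 'yellow', or 'red' for a metric versus its target."""
    if value >= target:
        return "green"
    if value >= target * (1 - warning_margin):
        return "yellow"
    return "red"

dashboard = {
    "Training sessions delivered": (42, 40),
    "Mean participant satisfaction": (3.4, 4.0),
    "Follow-up survey response rate": (0.55, 0.60),
}

for metric, (value, target) in dashboard.items():
    print(f"{metric}: {value} vs. target {target} -> {signal(value, target)}")
```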

Sparklines are tiny graphs, charts, or traffic signals embedded within text, showing lots of data in a small space. They provide very quick views of data within the context of a narrative, and they can be produced with an add-on purchased for Excel (they will be a standard feature in the next version).
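
As a rough illustration of just how small a sparkline can be, here is a sketch that uses matplotlib in place of the Excel add-on mentioned above (the monthly values are invented):

```python
# Sketch of a sparkline: a tiny, axis-free line graph meant to sit inline
# with text. Matplotlib stands in for the Excel add-on; data are invented.
import matplotlib.pyplot as plt

monthly_requests = [12, 15, 11, 18, 22, 19, 25, 27, 24, 30, 28, 33]

fig, ax = plt.subplots(figsize=(1.5, 0.3))  # small enough to embed in a sentence
ax.plot(monthly_requests, linewidth=1)
ax.plot(len(monthly_requests) - 1, monthly_requests[-1], "o", markersize=3)  # mark latest value
ax.axis("off")  # no axes, ticks, or frame
fig.savefig("sparkline.png", dpi=200, bbox_inches="tight", pad_inches=0.02)
```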

Word clouds, such as those produced with Wordle, can be used in textual data analysis for counting the occurrences of words. In word clouds, word size matters (bigger means more frequent), but word position is random, so conclusions cannot be drawn from word proximity. Still, the presenter suggested that word clouds could be given to decision makers to help them generate hypotheses and could be used to build “frequently asked questions” lists. Word clouds also have good potential as graphics for the fronts of reports.
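
Wordle itself is a web tool, but the counting that drives word size is simple; here is a minimal sketch of that frequency idea in Python (the sample responses and stop-word list are invented):

```python
# Sketch of the word counting behind a word cloud: a bigger count means a
# bigger word, and placement carries no meaning. Sample text is invented.
import re
from collections import Counter

open_ended_responses = """
The training was useful and the trainer was engaging.
More training on evaluation and reporting would be useful.
"""

stopwords = {"the", "and", "was", "on", "would", "be", "more"}
words = re.findall(r"[a-z']+", open_ended_responses.lower())
counts = Counter(w for w in words if w not in stopwords)

for word, n in counts.most_common(5):
    print(word, n)  # e.g., training 2, useful 2, ...
```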

Handouts and powerpoints from this workshop will be made available at the OPEN web site.

Oregon Program Evaluators Network: Program Evaluators and Performance Auditors

Wednesday, October 7th, 2009

One of the OPEN Annual Conference afternoon workshops was “What Program Evaluators Can Learn from How We Performance Auditors Do Our Work” from Gary Blackmer, Oregon Audits Director from the office of the Oregon Secretary of State. He referred to the 2007 Government Auditing Standards definition of performance auditing, pointing out that information resulting from audits is intended for use by those charged with governance and oversight to improve program performance and operations, reduce costs, and facilitate decision making. How does this differ from program evaluation? Auditors are required to follow the Government Auditing Standards from the Government Accountability Office (also known as “The Yellow Book”). Auditors are organizationally independent from the entities being audited; they do not negotiate scope, objectives, or access to data; and they always produce public reports. He pointed out that auditors make it a practice to spend one-third of their time conducting assessment and developing their audit plan, one-third gathering evidence, and one-third producing reports of findings. (Audience members generally agreed that evaluators tend to spend less time than that on planning and reporting.)

Published audit reports are typically intended to provide a window into an organization–a portrait for the public–with referrals to separately published working papers that provide details about methodology, data, and analysis. The speaker observed that bad news doesn’t travel up very well and we don’t get rewarded for doing things wrong. Organizations often only want to be assessed on things they can control, leading to an emphasis on process rather than outcomes. Auditors sometimes have a more accurate view of “reality” than management and, emphasizing that there is always room for improvement in every organization, provide “bad news” (i.e., suggestions regarding changes) in doses that can be tolerated.

Handouts and powerpoints from this workshop will be made available at the OPEN web site.

Oregon Program Evaluators Network: Context in Evaluation

Wednesday, October 7th, 2009

The Oregon Program Evaluators Network (OPEN) held its Annual Conference last week in Portland. OPEN is a regional affiliate of the American Evaluation Association, with members from government agencies, nonprofits, universities, and private consulting firms. Their annual meeting primarily attracts evaluators from western Oregon and southwestern Washington (Vancouver, WA down through Eugene, OR) but there was at least one international participant and several of us from Seattle. This was a very interesting meeting and I’ll provide my subjective take-aways from it in this post and the next two posts.

The opening keynote speaker was Dr. Debra Rog, 2009 president of the American Evaluation Association, and her talk was titled, “When Background Becomes Foreground: The Importance of Context in Evaluation.” She mentioned ongoing discussions in the evaluation field about whether randomized studies can actually be considered a “gold standard”–they’re great when you can do them, but they’re not always appropriate (they may not be practical or ethical). She spoke of realist evaluation, pointing out that programs are embedded in layers of social reality. For example, power and privilege can influence programs (and the evaluation of those programs) in fundamental ways. Often there are only two “degrees of separation” between the people in power and the people who are providing data–this can lead to a lack of openness and honesty. Context can be difficult to identify and this speaks to the importance of including multiple voices and views. She provided a great insight from her own experience about sharing evaluation results with stakeholders: decision makers are busy and don’t want nuances, they want the bottom line.

The afternoon featured six workshops running in two sets of three (here’s a copy of the agenda). I attended the two sessions described in the posts above.

Handouts and powerpoints from the afternoon workshops will be made available at the OPEN web site.

CDC resource on developing project “success stories”

Friday, August 14th, 2009

You can compile all the statistics in the world to demonstrate the effectiveness of your program – but it’s the stories behind the statistics that will be remembered. The workbook Impact and Value: Telling Your Program’s Story provides valuable tips and examples for developing success stories to demonstrate program achievements. This workbook goes beyond describing how to collect anecdotes of individuals who have benefited from your program (although anecdotes are used effectively in the workbook examples). Rather, the workbook shows how to frame your project’s successes in a story format that is easily communicated and remembered. The authors give outlines for a variety of presentation formats, from short elevator speeches delivered to high-powered stakeholders to 2-page write-ups for various audiences. The workbook includes a useful template for developing success stories. Impact and Value: Telling Your Program’s Story is a good resource for those who want to use their evaluation results for effective program advocacy.

This workbook, published by the CDC, is available for download here.

Citation: Lavinghouze SR, Price AW. Impact and Value: Telling your Program’s Story. Atlanta, Georgia: Centers for Disease Control and Prevention, National Center for Chronic Disease Prevention and Health Promotion, Division of Oral Health, 2007.

Are Focus Group Transcripts Necessary?

Monday, July 27th, 2009

How important is it to transcribe focus group discussions? Dr. Rita O’Sullivan from the UNC-Chapel Hill School of Education sought an objective answer to that question. She and colleagues ran an experiment in which two co-facilitators ran seven focus groups and created summary reports of the discussions. Each co-facilitator produced a report for each focus group: one wrote a summary based on memory, handwritten notes and a transcript of the audio tape; the other wrote a summary using memory, notes and the audiotape. (Each facilitator prepared seven summaries, some using the first method and some using the second.)  Then, 18 educational professionals who were enrolled in a graduate-level educational research class compared the pairs of summaries.  Sixteen of the 18 reviewers found no substantive differences between the two versions of the summaries.

What does this mean for evaluators?  The authors concluded that their findings, although preliminary, suggest that, for the typical program evaluation setting, transcripts are not necessary to produce useful focus group discussion summaries. The findings also make it hard to justify the transcription costs for focus groups in evaluation settings – because every dollar spent on evaluation is one not spent on the program.  

Source: O’Sullivan et al. Transcribing focus group articles: Is there a viable alternative? 2004 November.  Paper presented at the joint international meeting of the American Evaluation Association and the Canadian Evaluation Society, Toronto, Canada.

SurveyMonkey software application meets federal accessibility guidelines

Tuesday, July 14th, 2009

Someone recently asked me if SurveyMonkey forms are accessible to those with functional limitations and disabilities. In fact, SurveyMonkey received Section 508 certification in June 2008. According to the company’s Web site, they are the only commercial online survey application that has this certification.

While SurveyMonkey software automatically formats surveys to be accessible, there are a few practices that we need to follow to make sure SurveyMonkey questionnaires are user-friendly with screen-readers and other assistive technologies. For instance, don’t add extra HTML coding to your questionnaire (e.g., to bold-face or italicize words) because screen-readers may read parts of the HTML coding as text. Also, SurveyMonkey’s default color schemes are configured for maximum contrast to help low-vision users. Creating your own color schemes may make your forms less readable for this population. You can find more tips from SurveyMonkey for creating screen-reader friendly forms at this link.


AEA/CDC Training session: Utilization-Focused Evaluation

Wednesday, July 1st, 2009

The first training session I took at the AEA/CDC Institute was Michael Patton’s Utilization-Focused Evaluation.  This workshop was pitched primarily at evaluators who are sick of producing time-consuming evaluation report tombs that sit on shelves. (You’re thinking I should have written “evaluation report tomes,” but actually, those reports are where evaluation results go to die.)  Patton commented that you could probably attach an executive summary to 500 sheets of blank paper – or 500 pages from a phone book pulled from your recycling bin – and no one would ever notice, because they never read past the executive summary.

Here’s some interesting food for thought: Patton said that the order of the evaluation standards (Utility, Feasibility, Propriety, and Accuracy) is deliberate: Utility, or usefulness to intended users, is listed first because it’s deemed the most important. So, in evaluation design, the evaluation’s usefulness should be considered ahead of its feasibility (practicality and cost effectiveness), propriety (legality, ethics, and concern for the welfare of others), and accuracy (technically adequate information about features that determine merit or worth of a program). All are important standards, but utility gets top ranking. (Definitions for the four evaluation standards are listed here at the American Evaluation Association web site.)

To enhance the utility of evaluation findings, Patton said it is important to identify the intended users and uses of the evaluation information at the beginning of the evaluation and create an action plan for use of evaluation results that takes the following into account:

  • The decisions the evaluation findings are meant to inform;
  • The timing of those decisions;
  • The stakeholders who will see and respond to the data.

The responsibility for facilitating use of the findings falls on the evaluation consultant (or whoever is in charge of conducting the evaluation).

If you are interested in learning how to conduct more useful evaluations, I recommend Patton’s Utilization-Focused Evaluation (2008, Sage), which is now in its 4th edition.

AEA/CDC Summer Evaluation Institute

Friday, June 19th, 2009

I spent the earlier part of the week (June 15-17) in Atlanta attending the AEA/CDC Summer Evaluation Institute and, as usual, came away with some great information. I’ll be adding separate blog entries about the sessions I attended, but I thought I would give a rundown on this particular event. The Summer Evaluation Institute is conducted jointly by the American Evaluation Association and the Centers for Disease Control and Prevention, so many presenters and attendees were from the CDC – but those of us who attend the AEA conference or other evaluation training events found familiar names on the roll of presenters.

The Institute differs from the AEA conference in that it is totally training-oriented, offering a limited number of educational sessions between 8:30 am and 4:00 pm over 2.5 days. So you don’t feel conflicted over all the options of a conference, and you have plenty of downtime to meet and network with colleagues. As you might expect, there is an emphasis on health-related evaluation in many of the sessions, but that emphasis appears mostly in the examples used by instructors – the evaluation techniques themselves are applicable across disciplines.

The cost is reasonable. This year, the fee was $395 for AEA members (and CDC employees) and a little more for non-members. (Sorry I can’t be more specific: the fee is no longer listed at the AEA Web site now that the event is over.) That fee includes three keynote speeches, a choice of training sessions each morning and “breakout” sessions in the afternoon (I’m not sure how the training sessions differed from the breakout sessions, other than length – the training sessions were about an hour longer), plus breakfast and lunch on most days. Beginner workshops were offered on June 14 for an additional cost: “Quantitative Methods for Evaluation” and “Introduction to Evaluation.” The Summer Evaluation Institute is held annually, so if you think you might be interested in the 2010 event, check out the AEA web site (eval.org) starting in March.

Data.gov recently launched

Wednesday, June 3rd, 2009

A new government Web site, Data.gov, may prove to be a good tool for locating existing data from federal agencies, particularly for those of us doing needs or community assessment. The Web site is the public’s “one-stop shop” for raw data from economic, healthcare, environmental, and other government agencies. Along with raw data, the site provides tools for compiling raw data into more analyzable formats (e.g., tables, maps) and widgets (interactive tools with single-service purposes, like showing users the latest news). My quick browsing of the Web site gives me the impression that it is a work in progress. However, the “about” page says that the catalog of datasets will continue to grow and that the site will be improved based on public feedback.

Here is a link to the blog entry about Data.gov from the Office of Management and Budget: www.whitehouse.gov/omb/blog/09/05/21/DemocratizingData/ 
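
For those who want to reach the catalog programmatically, here is a hypothetical sketch that assumes the CKAN-style search API now exposed at catalog.data.gov (that API postdates the 2009 launch described above, so treat it as illustrative only):

```python
# Hypothetical sketch: search the Data.gov catalog via its CKAN-style API.
# Assumption: catalog.data.gov exposes package_search; this postdates 2009.
import json
import urllib.request

url = ("https://catalog.data.gov/api/3/action/package_search"
       "?q=community+health&rows=5")
with urllib.request.urlopen(url) as resp:
    payload = json.load(resp)

for dataset in payload["result"]["results"]:
    print(dataset["title"])  # names of the first few matching datasets
```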

