
Archive for October, 2009

Oregon Program Evaluators Network: Visual Displays of Data

One of the OPEN Annual Conference afternoon workshops was “Groovy Graphics: Visual Displays of Quantitative Information” from Elizabeth O’Neill and Ralph Holcomb of Multnomah County Aging and Disability Services. (A silly presentation title; it wasn’t clear why they used it, and they didn’t seem to like it either.)

The presentation began with a “Graphics 101” review of some of Edward Tufte’s concepts about using graphics that “concentrate on the eloquent statement”:

  • Show comparisons;
  • Provide data at different levels of detail;
  • Integrate information so viewers can understand it quickly;
  • Provide documentation;
  • Reduce clutter.

Every visual choice should convey a meaning. MS Excel provides a toolbox of fundamentals for us (a short code sketch of these chart types follows the list):

  • Bar Graphs–avoid using the 3D option because it serves no purpose and simply adds visual clutter; typical axis choices are to make the x-axis nominal and the y-axis ordinal, interval, or ratio; horizontal bar graphs are especially useful in facilitating comparison of data to a benchmark;
  • Line Graphs–typically the x axis is ordinal or higher and the y axis is interval or ratio;
  • Scatterplots–for continuous data; both x and y are interval or ratio;
  • Stem and Leaf charts–show mean, median, and mode in one glance (but I find them a bit hard to read);
  • Pie Charts–for displaying many values of one variable.
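The workshop worked in Excel, but for readers who want to experiment elsewhere, here is a minimal sketch of the same chart types in Python with matplotlib. The regions, quarters, benchmark, and scores are made-up placeholder values, not data from the workshop.

```python
# A rough sketch of the chart types above, using matplotlib as a stand-in
# for Excel; all data below are invented placeholder values.
import matplotlib.pyplot as plt

regions = ["North", "South", "East", "West"]   # nominal categories
clients = [140, 95, 120, 180]                  # ratio-level counts
benchmark = 130

fig, axes = plt.subplots(2, 2, figsize=(8, 6))

# Horizontal bar graph (no 3D): easy comparison of each bar to a benchmark.
axes[0, 0].barh(regions, clients)
axes[0, 0].axvline(benchmark, color="red", linestyle="--", label="benchmark")
axes[0, 0].legend()
axes[0, 0].set_title("Bar graph")

# Line graph: x is ordinal or higher (quarters), y is interval or ratio.
quarters = ["Q1", "Q2", "Q3", "Q4"]
axes[0, 1].plot(quarters, [110, 125, 118, 140], marker="o")
axes[0, 1].set_title("Line graph")

# Scatterplot: both axes continuous (interval or ratio).
hours = [2, 5, 1, 7, 4, 6, 3]
scores = [55, 70, 48, 90, 66, 82, 60]
axes[1, 0].scatter(hours, scores)
axes[1, 0].set_title("Scatterplot")

# Pie chart: the values of a single variable as shares of the whole.
axes[1, 1].pie(clients, labels=regions, autopct="%1.0f%%")
axes[1, 1].set_title("Pie chart")

fig.tight_layout()
plt.show()
```

The point of the sketch is only that each chart type pairs with particular levels of measurement; Excel’s chart tools cover the same ground.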

The presenter stated that he is a “strong advocate of DDT”–meaning Dashboards, Drill-downs, and Trendlines:

  • Dashboards provide quick groups of data presented simply, often featuring traffic signals in green, yellow, and red (see the sketch after this list);
  • Drill-downs (web pages where viewers can click on a word or a tab) are ways to provide more data with less clutter;
  • Trendlines show patterns of data over time.
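As a rough illustration of the dashboard idea, here is a small Python sketch that reduces each measure to a green/yellow/red traffic signal against thresholds. The measures and thresholds are hypothetical examples, not figures from the talk.

```python
# A minimal sketch of the "traffic signal" idea behind dashboards:
# each measure is reduced to green / yellow / red against thresholds.
# Measures and thresholds are hypothetical, not from the presentation.

def signal(value, green_at, yellow_at, higher_is_better=True):
    """Return a traffic-signal color for one dashboard measure."""
    if not higher_is_better:
        value, green_at, yellow_at = -value, -green_at, -yellow_at
    if value >= green_at:
        return "green"
    if value >= yellow_at:
        return "yellow"
    return "red"

dashboard = {
    "Clients served (% of target)": (signal(96, 95, 85), 96),
    "Average wait time (days)":     (signal(12, 10, 20, higher_is_better=False), 12),
    "Case files complete (%)":      (signal(78, 95, 85), 78),
}

for measure, (color, value) in dashboard.items():
    print(f"{color.upper():6}  {measure}: {value}")
```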

Sparklines are tiny graphs, charts, or traffic signals embedded within text, showing lots of data in a small space. They provide very quick views of data within the context of a narrative, and can be produced with an add-on purchased for Excel (they will be a standard feature in the next version of Excel).
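To give a sense of how little space a sparkline needs, here is a minimal Python sketch that renders one inline with Unicode block characters (a different route than the Excel add-on mentioned above). The monthly referral counts are invented placeholders.

```python
# A word-sized chart rendered inline with text, in the spirit of a sparkline.
# The monthly counts are made-up placeholder values.

BARS = "▁▂▃▄▅▆▇█"

def sparkline(values):
    """Map a sequence of numbers onto eight bar heights."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1
    return "".join(BARS[round((v - lo) / span * (len(BARS) - 1))] for v in values)

referrals = [31, 28, 40, 52, 49, 61, 58, 73, 70, 66, 80, 77]
print(f"Referrals this year {sparkline(referrals)} (monthly totals)")
```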

Word clouds, such as those produced with Wordle, can be used in textual data analysis to count the occurrences of words. In word clouds, word size matters–bigger means more frequent–but word position does not–it’s random. So conclusions cannot be drawn from word proximity. Still, the presenter suggested that word clouds could be given to decision makers to help them generate hypotheses, and could be used in building “frequently asked questions” lists. Word clouds also have good potential as cover graphics for reports.
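Since the analytic step behind a word cloud is simply counting word occurrences, here is a minimal Python sketch of that counting. The sample text and stopword list are placeholders standing in for real open-ended survey responses.

```python
# The counting step behind a word cloud: word size is proportional to
# frequency, and position carries no meaning. Sample text is a placeholder.
from collections import Counter
import re

text = """Clients said the program staff were helpful and the referrals were
helpful, though wait times for referrals were long and staff were stretched."""

STOPWORDS = {"the", "and", "were", "for", "a", "of", "to", "though", "said"}

words = re.findall(r"[a-z']+", text.lower())
counts = Counter(w for w in words if w not in STOPWORDS)

# In a word cloud, each word would be drawn at a size proportional to its count.
for word, count in counts.most_common(5):
    print(f"{word:10} {count}")
```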

Handouts and powerpoints from this workshop will be made available at the OPEN web site.

Oregon Program Evaluators Network: Program Evaluators and Performance Auditors

One of the OPEN Annual Conference afternoon workshops was “What Program Evaluators Can Learn from How We Performance Auditors Do Our Work” from Gary Blackmer, Oregon Audits Director in the office of the Oregon Secretary of State. He referred to the 2007 Government Auditing Standards definition of performance auditing, pointing out that information resulting from audits is intended for use by those charged with governance and oversight to improve program performance and operations, reduce costs, and facilitate decision making. How does this differ from program evaluation? Auditors are required to follow the Government Auditing Standards from the Government Accountability Office (also known as “The Yellow Book”). Auditors are organizationally independent from the entities being audited; they do not negotiate scope, objectives, or access to data; and they always produce public reports. He pointed out that auditors make it a practice to spend one-third of their time conducting assessment and developing their audit plan, one-third gathering evidence, and one-third producing reports of findings. (Audience members generally agreed that evaluators tend to spend less time than that on planning and reporting.)

Published audit reports are typically intended to provide a window into an organization–a portrait for the public–with referrals to separately published working papers that provide details about methodology, data, and analysis. The speaker observed that bad news doesn’t travel up very well and we don’t get rewarded for doing things wrong. Organizations often only want to be assessed on things that can be controlled, leading to an emphasis on process rather than outcomes. Auditors sometimes have a more accurate view of “reality” than management and, emphasizing that there is always room for improvement in every organization, provide “bad news” (i.e., suggestions regarding changes) in doses that can be tolerated.

Handouts and powerpoints from this workshop will be made available at the OPEN web site.

Oregon Program Evaluators Network: Context in Evaluation

The Oregon Program Evaluators Network (OPEN) held its Annual Conference last week in Portland. OPEN is a regional affiliate of the American Evaluation Association, with members from government agencies, nonprofits, universities, and private consulting firms. Their annual meeting primarily attracts evaluators from western Oregon and southwestern Washington (Vancouver, WA down through Eugene, OR), but there was at least one international participant and several of us from Seattle. This was a very interesting meeting and I’ll provide my subjective takeaways from it in this post and the next two posts.

The opening keynote speaker was Dr. Debra Rog, 2009 president of the American Evaluation Association, and her talk was titled, “When Background Becomes Foreground: The Importance of Context in Evaluation.” She mentioned ongoing discussions in the evaluation field about whether randomized studies can actually be considered a “gold standard”–they’re great when you can do them, but they’re not always appropriate (they may not be practical or ethical). She spoke of realist evaluation, pointing out that programs are embedded in layers of social reality. For example, power and privilege can influence programs (and the evaluation of those programs) in fundamental ways. Often there are only two “degrees of separation” between the people in power and the people who are providing data–this can lead to a lack of openness and honesty. Context can be difficult to identify and this speaks to the importance of including multiple voices and views. She provided a great insight from her own experience about sharing evaluation results with stakeholders: decision makers are busy and don’t want nuances, they want the bottom line.

The afternoon featured six workshops running in two sets of three (here’s a copy of the agenda). I attended the two sessions covered in the posts above.

Handouts and powerpoints from the afternoon workshops will be made available at the OPEN web site.