Slides and handouts from October’s OPEN (Oregon Program Evaluators Network) meeting are available at their web site in the “OPEN Past Events & Resources” section.
One of the OPEN Annual Conference afternoon workshops was “Groovy Graphics: Visual Displays of Quantitative Information” from Elizabeth O’Neill and Ralph Holcomb of Multnomah County Aging and Disability Services. (A silly presentation title; it wasn’t clear why they used it, and they didn’t seem to like it either.)
The presentation began with a “Graphics 101” review of some of Edward Tufte’s concepts about using graphics that “concentrate on the eloquent statement”:
- Show comparisons;
- Provide data at different levels of detail;
- Integrate information so viewers can understand it quickly;
- Provide documentation;
- Reduce clutter.
Every visual choice should convey a meaning. MS Excel provides a toolbox of fundamentals for us:
- Bar Graphs–avoid using the 3D option because it serves no purpose and simply adds visual clutter; typical axis choices are to make the x-axis nominal and the y-axis ordinal, interval, or ratio; horizontal bar graphs are especially useful in facilitating comparison of data to a benchmark;
- Line Graphs–typically the x axis is ordinal or higher and the y axis is interval or ratio;
- Scatterplots–for continuous data; both x and y are interval or ratio;
- Stem and Leaf charts–show mean, median, and mode in one glance (but I find them a bit hard to read);
- Pie Charts–for displaying many values of one variable.
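As a side note, a bare-bones stem-and-leaf display is easy to mock up outside of Excel. Here’s a quick Python sketch for illustration (the score data are invented; this is just to show the idea of splitting each value into a “stem” and a “leaf”):

```python
from collections import defaultdict

def stem_and_leaf(values):
    """Group each value's tens digit (stem) with its ones digit (leaf)."""
    stems = defaultdict(list)
    for v in sorted(values):
        stems[v // 10].append(v % 10)
    return ["{} | {}".format(stem, " ".join(str(leaf) for leaf in leaves))
            for stem, leaves in sorted(stems.items())]

scores = [12, 15, 21, 21, 24, 30, 33, 33, 33, 41]
for line in stem_and_leaf(scores):
    print(line)
# 1 | 2 5
# 2 | 1 1 4
# 3 | 0 3 3 3
# 4 | 1
```

Reading down the rows gives the distribution’s shape, and the repeated leaves make the mode easy to spot — which is the “one glance” appeal (and, for me, the readability problem) of these charts.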
The presenter stated that he is a “strong advocate of DDT”–meaning Dashboards, Drill-downs, and Trendlines:
- Dashboards provide quick groups of data presented simply (often featuring traffic signals in green, yellow, and red);
- Drill-downs (web pages where viewers can click on a word or a tab) are ways to provide more data with less clutter;
- Trendlines show patterns of data over time.
Sparklines are tiny graphs, charts, or traffic signals embedded within text, showing lots of data in a small space. They provide very quick views of data within the context of a narrative, and can be produced with an add-on that can be purchased for Excel (they will be a standard feature in the next version).
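You can even fake a text-only sparkline with Unicode block characters. This toy Python sketch (not the Excel add-on mentioned above, just an illustration of the concept) scales a series onto eight bar heights:

```python
BARS = "▁▂▃▄▅▆▇█"  # Unicode block elements, shortest to tallest

def sparkline(values):
    """Render a numeric series as a one-line bar sparkline."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # avoid dividing by zero on a flat series
    return "".join(BARS[int((v - lo) / span * (len(BARS) - 1))] for v in values)

print(sparkline([1, 5, 22, 13, 5, 17, 31, 7]))
```

The resulting string can sit inline in a sentence — e.g., “enrollment by month: ▁▁▅▃▁▄█▂” — which is exactly the lots-of-data-in-a-small-space effect the presenter described.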
Word clouds, such as those produced from Wordle, can be used in textual data analysis for counting the occurrences of words. In word clouds, word size matters–bigger means more–but word position does not–it’s random. So, conclusions cannot be drawn from word proximity. But, the presenter suggested that word clouds could be given to decision makers to help them generate hypotheses and could be used in generating “frequently asked questions” lists. Word clouds also have good potential for use as graphics for the fronts of reports.
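The counting step that underlies a word cloud is easy to reproduce yourself. Here’s a minimal sketch using only Python’s standard library (the sample text and the tiny stop-word list are invented for illustration; real tools like Wordle use much larger stop-word lists):

```python
import re
from collections import Counter

STOP_WORDS = {"the", "a", "and", "of", "to", "in"}  # tiny illustrative list

def word_frequencies(text):
    """Count word occurrences, ignoring case and common stop words."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w not in STOP_WORDS)

text = "The program served the community, and the program grew."
freqs = word_frequencies(text)
print(freqs.most_common(2))  # the words that would appear largest in the cloud
```

A word cloud is essentially this frequency table with font size mapped to count — which is why size is meaningful but position is not.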
Handouts and powerpoints from this workshop will be made available at the OPEN web site.
One of the OPEN Annual Conference afternoon workshops was “What Program Evaluators Can Learn from How We Performance Auditors Do Our Work” from Gary Blackmer, Oregon Audits Director from the office of the Oregon Secretary of State. He referred to the 2007 Government Auditing Standards definition of performance auditing, pointing out that information resulting from audits is intended for use by those charged with governance and oversight to improve program performance and operations, reduce costs, and facilitate decision making. How does this differ from program evaluation? Auditors are required to follow the Government Auditing Standards from the Government Accountability Office (also known as “The Yellow Book”). Auditors are organizationally independent from the entities being audited; they do not negotiate scope, objectives, or access to data; and they always produce public reports. He pointed out that auditors make it a practice to spend one-third of their time conducting assessment and developing their audit plan, one-third gathering evidence, and one-third producing reports of findings. (Audience members generally agreed that evaluators tend to spend less time than that on planning and reporting.)
Published audit reports are typically intended to provide a window into an organization–a portrait for the public–with referrals to separately published working papers that provide details about methodology, data, and analysis. The speaker observed that bad news doesn’t travel up very well and we don’t get rewarded for doing things wrong. Organizations often only want to be assessed on things that can be controlled, leading to an emphasis on process rather than outcomes. Auditors sometimes have a more accurate view of “reality” than management and, emphasizing that there is always room for improvement in every organization, provide “bad news” (i.e., suggestions regarding changes) in doses that can be tolerated.
Handouts and powerpoints from this workshop will be made available at the OPEN web site.
The Oregon Program Evaluators Network (OPEN) held its Annual Conference last week in Portland. OPEN is a regional affiliate of the American Evaluation Association, with members from government agencies, nonprofits, universities, and private consulting firms. Their annual meeting primarily attracts evaluators from western Oregon and southwestern Washington (Vancouver, WA down through Eugene, OR) but there was at least one international participant and several of us from Seattle. This was a very interesting meeting and I’ll provide my subjective take-aways from it in this post and the next two posts.
The opening keynote speaker was Dr. Debra Rog, 2009 president of the American Evaluation Association, and her talk was titled, “When Background Becomes Foreground: The Importance of Context in Evaluation.” She mentioned ongoing discussions in the evaluation field about whether randomized studies can actually be considered a “gold standard”–they’re great when you can do them, but they’re not always appropriate (they may not be practical or ethical). She spoke of realist evaluation, pointing out that programs are embedded in layers of social reality. For example, power and privilege can influence programs (and the evaluation of those programs) in fundamental ways. Often there are only two “degrees of separation” between the people in power and the people who are providing data–this can lead to a lack of openness and honesty. Context can be difficult to identify, and this speaks to the importance of including multiple voices and views. She provided a great insight from her own experience about sharing evaluation results with stakeholders: decision makers are busy and don’t want nuances; they want the bottom line.
The afternoon featured six workshops running in two sets of three (here’s a copy of the agenda). I attended these sessions:
- “What Program Evaluators Can Learn from How We Performance Auditors Do Our Work” and
- “Groovy Graphics: Visual Displays of Quantitative Information”
Handouts and powerpoints from the afternoon workshops will be made available at the OPEN web site.
You can compile all the statistics in the world to demonstrate the effectiveness of your program – but it’s the stories behind the statistics that will be remembered. The workbook Impact and Value: Telling Your Program’s Story provides valuable tips and examples for developing success stories to demonstrate program achievements. This workbook goes beyond describing how to collect anecdotes of individuals who have benefited from your program (although anecdotes are used effectively in the workbook examples). Rather, the workbook shows how to frame your project’s successes in a story format that is easily communicated and remembered. The authors give outlines for a variety of presentation formats, from short elevator speeches delivered to high-powered stakeholders to 2-page write-ups for various audiences. The workbook includes a useful template for developing success stories. Impact and Value: Telling Your Program’s Story is a good resource for those who want to use their evaluation results for effective program advocacy.
This workbook, published by the CDC, is available for download here.
Citation: Lavinghouze SR, Price AW. Impact and Value: Telling your Program’s Story. Atlanta, Georgia: Centers for Disease Control and Prevention, National Center for Chronic Disease Prevention and Health Promotion, Division of Oral Health, 2007.
Someone recently asked me if SurveyMonkey forms are accessible to those with functional limitations and disabilities. In fact, SurveyMonkey received Section 508 certification in June 2008. According to the company’s Web site, they are the only commercial online survey application that has this certification.
While SurveyMonkey software automatically formats surveys to be accessible, there are a few practices that we need to follow to make sure SurveyMonkey questionnaires are user-friendly with screen readers and other assistive technologies. For instance, don’t add extra HTML coding to your questionnaire (e.g., to bold-face or italicize words) because screen readers may read parts of HTML coding as text. Also, SurveyMonkey’s default color schemes are configured for maximum contrast to help low-vision users. Creating your own color schemes may make your forms less readable for this population. You can find more tips from SurveyMonkey for creating screen-reader friendly forms at this link.
SurveyMonkey’s newsletter reports that SurveyMonkey surveys are now optimized for use on iPhones. The June 2009 newsletter states:
“Because it is a device with a modern, standards-compliant browser, any respondent can receive a link to your survey and access it directly on their iPhone.”
Furthermore, SurveyMonkey is currently working to optimize their surveys for other mobile and hand-held devices.
In addition, you now have the ability to do the following:
- Create and download custom charts to enhance the presentation of your survey data.
- Import these graphics into your own presentation software such as PowerPoint, Word, etc.
To learn more about the updates, you can visit the following topic in the help center: Creating Custom Charts
I spent the earlier part of the week (June 15-17) in Atlanta attending the AEA/CDC Summer Evaluation Institute and, as usual, came away with some great information. I’ll be adding some separate blog entries about the sessions I attended, but I thought I would give a rundown on this particular event. The Summer Evaluation Institute is conducted jointly by the American Evaluation Association and the Centers for Disease Control and Prevention, so many presenters and attendees were from the CDC – but those of us who attend the AEA conference or other evaluation training events found familiar names on the roll of presenters. The Summer Evaluation Institute differs from the AEA conference in that it is totally training-oriented – offering a limited number of educational sessions between 8:30 am and 4:00 pm over 2.5 days. So you don’t feel conflicted over all the options of a conference, and you have plenty of downtime to meet and network with colleagues. As you might expect, there is an emphasis on health-related evaluation in many of the sessions, but that emphasis appears mostly in the examples used by instructors – the evaluation techniques themselves are applicable across disciplines. The cost is reasonable. This year, the cost was $395 for AEA members (and CDC employees) and a little more for non-members. (Sorry I can’t be more specific: the fee is no longer listed at the AEA Web site now that the event is over.) That fee includes three keynote speeches, a choice of training sessions each morning and “breakout” sessions in the afternoon. (I’m not sure how “training sessions” differed from “breakout sessions,” other than length of time – the training sessions were about an hour longer than the breakout sessions.) It also includes breakfast and lunch on most days.
Beginner workshops were offered on June 14 for an additional cost: “Quantitative Methods for Evaluation” and “Introduction to Evaluation.” The Summer Evaluation Institute is held annually, so if you think you might be interested in the 2010 event, check out the AEA web site (eval.org) starting in March.
A new government Web site, Data.gov, may prove to be a good tool for locating existing data from federal agencies, particularly for those of us doing needs or community assessment. The Web site is the public’s “one-stop shop” for raw data from economic, healthcare, environmental, and other government agencies. Along with raw data, the site provides tools for compiling raw data into more analyzable formats (e.g., tables, maps) and widgets (interactive tools with single-service purposes, like showing users the latest news). My quick browsing of the Web site gives me the impression that it is a work in progress. However, the “about” page says that the catalog of datasets will continue to grow and that the site will be improved based on public feedback.
Here is a link to the blog entry about Data.gov from the Office of Management and Budget: www.whitehouse.gov/omb/blog/09/05/21/DemocratizingData/
Read even more about the American Evaluation Association meeting at the Eagle Dawg Blog, where Nikki Detmar has summarized her ten thousand four hundred and thirty-one words of notes. Nikki attended many sessions different from the ones I went to, and where we were both in the audience for a session, she took more detailed notes than I did. She’s a fast typist who uses her laptop for notes; I’m a codger who writes with a pen in cursive scrawls on pieces of lined notebook paper. Also, the Eagle Dawg Blog is just an all-around good read for Nikki’s perspectives on life, the universe, health informatics, and medical librarianship.