
Cindy’s AEA Report

AEA has been my favorite professional conference since I started attending about 5 years ago. This year, I attended some of the same sessions as my colleagues, so I won’t report on those. Here are some other sessions that I thought were noteworthy.

Improving Data Quality Through Two-Dimensional Surveying: The Kano Method

On my plane trip to Portland, I had the pleasure of sitting beside Vathsala Stone, an evaluator from the University at Buffalo, who had a poster session on the Kano Method (her presentation partner was Stephen Bauer, also of the University at Buffalo). I did not get to the poster sessions, but Dr. Stone explained the technique during our flight. The Kano Method is used in product development to get an accurate assessment of how important various product features are to consumers, allowing developers to make better-informed product-design decisions. I think the Kano Method has applications for developing products like web-based health resources, educational materials, and training curricula. An article at the Center for Quality of Management’s web site provides a description of the method: http://cqmextra.cqm.org/cqmjournal.nsf/issues/vol2no4
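To give a sense of how the method works: each feature is asked about twice, once in a functional form (“How would you feel if the resource had this feature?”) and once in a dysfunctional form (“…if it did not?”), and the answer pair is looked up in a standard evaluation table that assigns a category such as Must-be or Attractive. Here is a minimal sketch in Python; the answer labels follow the standard table, and the sample responses are made up for illustration.

```python
from collections import Counter

# Answer scale for both the functional ("How would you feel if the
# resource HAD this feature?") and dysfunctional ("...did NOT have
# this feature?") forms of each survey question.
ANSWERS = ["like", "expect", "neutral", "tolerate", "dislike"]

# Standard Kano evaluation table: rows = functional answer,
# columns = dysfunctional answer.
# A = Attractive, O = One-dimensional, M = Must-be,
# I = Indifferent, R = Reverse, Q = Questionable response.
KANO_TABLE = [
    # like  expect neutral tolerate dislike   (dysfunctional answer)
    ["Q",   "A",   "A",    "A",     "O"],   # functional: like
    ["R",   "I",   "I",    "I",     "M"],   # functional: expect
    ["R",   "I",   "I",    "I",     "M"],   # functional: neutral
    ["R",   "I",   "I",    "I",     "M"],   # functional: tolerate
    ["R",   "R",   "R",    "R",     "Q"],   # functional: dislike
]

def classify(functional: str, dysfunctional: str) -> str:
    """Map one respondent's answer pair to a Kano category."""
    return KANO_TABLE[ANSWERS.index(functional)][ANSWERS.index(dysfunctional)]

# Hypothetical responses for one feature of a web-based health resource.
responses = [("like", "dislike"), ("like", "neutral"),
             ("expect", "dislike"), ("like", "dislike")]

tally = Counter(classify(f, d) for f, d in responses)
print(tally.most_common())  # [('O', 2), ('A', 1), ('M', 1)]
```

Tallying the categories across respondents shows whether a feature is expected, delightful, or simply not worth building.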

Three Effective Ways to Build Organizational Capacity in Local Nonprofit Agencies

The term “evaluation capacity building” refers to increasing organizations’ ability to identify meaningful indicators to evaluate, collect useful evaluation data, and use those data to improve their programs. The ideas I took away from this panel presentation emphasized methods for motivating grantees to use evaluation data. Until program implementers see data as a tool for improving their programs, evaluation will continue to seem like a meaningless, funder-imposed requirement.

Abraham Wandersman (University of South Carolina) and Jan Yost (Health Foundation of Central Massachusetts) presented their “Results-Oriented Grantmaking and Grant-Implementation” approach, in which nonprofit grantees receive an orientation in tools that allow them to evaluate their programs by addressing 10 accountability questions. Michael Hendricks, an independent consultant, described his work with a community health foundation to help grantees develop a “Managing for Results” orientation. One key strategy was for the foundation to ask grantees for short bimonthly reports on their indicators: essentially, what the indicators are showing and what changes in services were made as a result. (Funding recipients, of course, receive evaluation training so they can create these reports.) The third presenter, Ann Bessell of the University of Miami, talked about using spider diagrams to display data and help organizations interpret and apply their evaluation findings.
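I have not seen Dr. Bessell’s actual diagrams, but a spider (radar) chart is easy to produce. Here is an illustrative Python sketch using matplotlib; the capacity indicators and scores are invented for the example.

```python
import math
import matplotlib.pyplot as plt

# Hypothetical capacity indicators and one organization's scores (0-5)
# at baseline and after a year of evaluation coaching.
indicators = ["Data collection", "Data use", "Reporting",
              "Staff buy-in", "Leadership support"]
baseline = [2, 1, 3, 2, 2]
year_one = [4, 3, 4, 3, 4]

# One spoke per indicator; repeat the first point to close the polygon.
angles = [2 * math.pi * i / len(indicators) for i in range(len(indicators))]
angles += angles[:1]

fig, ax = plt.subplots(subplot_kw={"polar": True})
for label, scores in [("Baseline", baseline), ("Year 1", year_one)]:
    values = scores + scores[:1]
    ax.plot(angles, values, label=label)
    ax.fill(angles, values, alpha=0.1)

ax.set_xticks(angles[:-1])
ax.set_xticklabels(indicators)
ax.set_ylim(0, 5)
ax.legend(loc="lower right")
plt.show()
```

Overlaying baseline and follow-up polygons like this makes it easy for an organization to see at a glance where it has grown and where it still lags.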

What’s the Score: Disentangling Qualitative Data by Building Scoring Guides

Ann Davis and Phyllis Ault of the Northwest Regional Educational Laboratory conducted a demonstration on developing scoring guides to assess products (e.g., multimedia presentations) or performances (e.g., effectiveness of learning communities). They described how to build a framework and establish criteria, then how to review and revise the rating form. They also provided concrete explanations of how to establish inter-rater reliability and how to assign a score based on ratings from multiple raters. I think scoring guides could be used to assess trainees’ ability to locate health information. Also, if high school peer tutoring programs become more popular, scoring guides could be used to assess students’ presentations and give them feedback.
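My notes don’t capture the presenters’ exact reliability procedure, but Cohen’s kappa is one common statistic for checking agreement between two raters, and averaging is one simple way to combine their scores. Here is a minimal Python sketch with hypothetical rubric ratings.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters beyond chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items rated identically.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal proportions,
    # summed over all rating categories.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical ratings of ten trainee presentations on a 1-4 rubric.
rater_a = [3, 4, 2, 3, 1, 4, 3, 2, 4, 3]
rater_b = [3, 4, 2, 2, 1, 4, 3, 2, 3, 3]

print(f"kappa = {cohens_kappa(rater_a, rater_b):.2f}")  # kappa = 0.72

# One simple way to assign a final score from multiple raters: average.
final_scores = [(a + b) / 2 for a, b in zip(rater_a, rater_b)]
```

A kappa in the 0.6 to 0.8 range is usually read as substantial agreement; if it came out low, the next step would be to clarify the rating form’s criteria and retrain the raters.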
