The Agony of AEA ’06: So Many Good Sessions, So Hard to Choose
I was only able to attend 1.5 days at AEA this year. But, even for a short time, it was well worth the drive to Portland. I loved seeing many of us RMLers at the meeting. I definitely agree with Heidi’s encouragement that everyone would benefit from participating in the conference in some way, someday.
Highlights of the conference for me…
Session title: Needs Assessment in Centers for Public Health Preparedness: Strategies & Issues (Panel Presentation)
This session was led by James Altschuld (wonderful!). (He’s the one Susan mentioned who is an expert at needs assessments.) Others included: James T. Austin, Eddie Cook, and Lisle Hites. Mary Davis (NC Institute for Public Health) and Molly Engle were discussants.
The session focused on ascertaining the training needs of the public health workforce in readiness for disasters, terrorist acts, epidemics, etc. Each of the panelists works for or within one of the Centers for Public Health Preparedness.
The CDC-funded Centers for Public Health Preparedness (CPHP) program is a national network of 37 academic institutions working in collaboration with state and local public health departments and other community partners. Their mission is to meet national preparedness goals by providing lifelong learning opportunities to the public health workforce, in order to handle the next public health crisis. (Quoted from their website: http://www.asph.org/cphp/cphp_home.cfm.)
Amongst all the panelists, I sensed their awe and admiration for public health workers. Two of them conducted needs assessments during Katrina and saw firsthand what the responders were experiencing. Lisle Hites will be publishing results in a few months about his study in Mississippi and Alabama after Katrina — based on the request by the CPHPs there, who asked him to find out “What needs are arising from response roles differing from job roles?” and “What are the effects of being in a dual role: victim-responder?”
From a methodology point of view, another panelist gave an interesting overview of the kinds of question formats that work best in identifying and prioritizing real and acute training needs. Some of that was definitely new to me, and it might be helpful to consider (re: questionnaire design) in some of the needs assessments we do with Network members.
Session Title: Focus Hocus Pocus: Four Evaluators’ Tales of Magic From the Centers for Disease Control and Prevention
I only saw one paper in this session, but it was a good one by Goldie MacDonald, PhD (at CDC). Her clear message was to keep a persistent focus on the intended use of evaluation data. She noted it can be a struggle to keep logic models in perspective and to see them as a tool to share information, to facilitate program planning, and to provide direction for evaluation. In some situations, she finds that logic models are used as data management tools — a list of data to be collected — that are then called evaluation plans. Instead, she encouraged people to base data collection decisions on evaluation questions, which might be drawn from a theory-based logic model. She highly recommended this article:
Session Title: Innovative Strategies and Practical Tips for Evaluation Capacity Building
This was a four-panel session. The best presentation for me was Successful Use of Evaluation Learning Circles by Carolyn Cohen. Based on a 2-year case study, Carolyn described effective practices in building organizational capacity for evaluation through study groups (“learning circles”). She presented several practical and yet engaging ways to bring a focus on evaluation topics amongst new learners *and* those who have some experience. Several exercises seemed really fun and would be great to try out in our “circle”!
Session Title: A Generic Theory-Based Logic Model for Creating Scientifically-Based Program Logic Models
Gretchen Jordan, Ed Vine, Jeff Dowd
Quoted from the abstract: A recurring problem in intervention programs, particularly complex government programs, is defining and demonstrating the linkages between the outputs of an intervention and the projected impacts. This lecture describes a generic logic model that incorporates Everett Rogers’ Diffusion of Innovations to help overcome this problem.
The abstract says it all, and the presentation was fascinating.