Denver in November 5: Saturday Sessions 11-8-08
Going to meetings is hard work! Especially meetings like the American Evaluation Association annual meeting, which is chock-full of interesting sessions that make you think. Saturday was a very full day, and quite rewarding.
Fine-tuning Evaluation Methodologies for Innovative Distance Education Programs (Debora Goetz Goldberg, John James Cotter, Virginia Commonwealth University)
VCU Medical School offers a PhD in Health Related Sciences via distance education that combines on-campus learning, asynchronous discussions, synchronous chat, podcasting, and other approaches. Program evaluation followed these steps: define quality (support, course structure, curriculum, instruction); select important areas to review (were goals met, what skills were developed, was advising adequate, was IT adequate, the overall program); identify data collection sources (course evaluations, follow-up assessments, interviews with instructors, feedback from students’ employers); and collect and analyze the data. Findings showed areas where the curriculum needed adjustment, where technology could be enhanced (for example, by offering streaming video of lectures), and where supplementary teaching assistants were needed; the supplementary TAs worked with students in the statistics course.
Evaluation of an Interactive Computer-based Instruction in Six Universities: Lessons Learned (Rama Radhakrishna, Marvin Hall, Kemirembe Olive, Pennsylvania State University)
In a USDA-sponsored (with institutional matching funds) project, Penn State collaborated with five other land-grant universities to develop and offer a one-semester agronomy course comprising 11 interactive modules. Development took two years and addressed the funding agency’s desire for collaborative courses that make collective use of expertise, share resources, and reduce duplication of effort. Each module featured 20 knowledge questions plus items about the module’s navigability, design, and layout. Pre- and post-tests showed knowledge gain. The project showed that multi-institutional collaboration can work, although it can be challenging. In this case, IRB review was needed (because human subjects, the students, were involved), and the crop scientists were unfamiliar with that process.
The Use of a Participatory Multimethod Approach in Evaluating a Distance Education Program in Two Developing Countries (Charles Potter, Sabrina Liccardo, University of the Witwatersrand)
This radio-based series of English lessons for school children in South Africa and Bangladesh has grown significantly since it began in 1992. In 1995 it was reaching 72,000 learners, and by 2005 it was reaching 1,800,000. Evaluation has involved questionnaires, observations, focus groups, and photography; results have been used to report progress to stakeholders and to identify areas for improvement.
Building Evaluation Practice Into Online Teaching: An Action Research Approach to the Process Evaluation of New Courses (Juna Z Snow, InnovatEd Consulting)
The author has developed a Student Performance Portfolio that has been used with two online teacher education courses. The portfolios allow students to conduct ongoing evaluation of their work and of the course; they include weekly goals, activities, and time spent, along with reflections on assignments and performance. Students submit their portfolios each week. To get the most from the portfolios, it is important to conduct ongoing content analysis and to be responsive to students.
Incorporating Cellular Telephones into a Random-digit-dialed Survey to Evaluate a Media Campaign (Lance Potter and Andrea Piesse, Westat; Rebekah Rhoades and Laura Beebe, University of Oklahoma)
When both cellphones and landlines are included in telephone surveys, different sampling frames must be constructed for groups who are cellphone-only, who are landline-only, and who have both. A tobacco intervention study found one significant difference among the three groups: those who have both cellphones and landlines smoke less, a difference theorized to stem from income and educational characteristics. The sociology of cellphones also differs from that of landlines. For example, if a cellphone rings on a counter or a desk and its owner is not present, no one else will answer it. In addition, many cellphone contracts require owners to pay for calls they receive; these owners will not want to use up their “minutes” answering survey questions. This issue could be addressed by offering gift cards to participants or by conducting surveys on weekends.
The Growing Cell Phone-Only Population in Telephone Survey Research: Evaluators Beware (Joyce Wolfe, Brett Zollinger, Fort Hays State University)
Telephones have been fundamental tools for survey research, and cellphones are introducing new variables to be considered. At one time more than 90% of households had landlines, but now almost 16% of telephone users are cellphone-only, and the cellphone-only population is projected to grow. Whether there are significant differences between people who have landlines and those who only use cellphones is a topic of debate. The cellphone-only population tends to be young, unmarried, renters, and lower income (and more likely to have financial barriers to treatment). Samples of cellphone numbers can be obtained, but it is illegal to use automatic dialers with these numbers. In addition, more screening is needed because cellphones are linked to individuals rather than to households or geographic locations, and those individuals can range in age down to elementary school students.
Perspectives on a Promising Practices Evaluation (Susan Ladd, Rosanne Farris, Jan Jernigan, Belinda Minta, Centers for Disease Control and Prevention; Pam Williams-Piehota, RTI International)
The Centers for Disease Control and Prevention’s Division for Heart Disease and Stroke Prevention (DHDSP) has conducted evaluations of heart disease and stroke interventions to identify effective interventions and promising practices, with the intention of building evaluation capacity at the state level. Lessons learned included: collaboration and comprehensive evaluation planning are time-consuming; better evaluability assessments are needed; and periodic reaffirmation of commitments and expectations is necessary.
Rapid Evaluation of Promising Asthma Programs in Schools (Marian Huhman, Dana Keener, Centers for Disease Control and Prevention)
The CDC’s Division of Adolescent and School Health (DASH) funds school-based programs for asthma management and uses a rapid evaluation model to help schools assess program impacts. These evaluations are intended to be completed within one year, with two days devoted to conducting a site’s evaluability assessment and six months devoted to data collection. These evaluations focus on short-term outcomes.
Best of the Worst Practices: What Every New Evaluator Should Know and Avoid in Evaluation Practice (Dymaneke Mitchell, National-Louis University; Amber Golden, Florida A&M University; Roderick L Harris, Sedgwick County Health Department; Nia K Davis, University of New Orleans)
Panel presenters discussed lessons they learned from their evaluation experiences in the American Evaluation Association/Duquesne University Graduate Education Diversity Internship program. The experiences and lessons included the difficulties faced by an evaluator working with a group they feel sympathetic toward: it is hard to be an objective evaluator if you want the program to succeed. In working with nonprofits it is important to develop patience with ambiguity, to clarify short- and long-term goals, and to align goals with organizational readiness. Strong negotiating skills are needed, along with a focus on building trust and credibility. Evaluation seems to be 10% science and 90% relationships. It is challenging to manage stakeholders’ diverse and sometimes conflicting agendas.
Ethics and Evaluation: Respectful Evaluation with Underserved Communities
This excellent and thought-provoking session featured three presentations based on chapters in the recently published book The Handbook of Social Research Ethics, edited by DM Mertens and PE Ginsberg (Sage, 2008).
1. Ethical Responsibilities in Evaluations with Diverse Populations: A Critical Race Theory (CRT) Perspective
(Veronica Thomas, Howard University)
In traditional social science research, white men are treated as the norm. Critical Race Theory (CRT) comes from the critical theory approach, which views scholarship as a means to critique and change society and to counteract discrimination and oppression. Where the traditional positivist approach treats research as explanatory, CRT uses a critical lens to foreground oppressed populations and to form conclusions and recommendations that promote social equity and justice. IRBs, with their positivist emphasis on value-free research, can show a lack of concern for the community impacts of projects.
2. Researching Ourselves Back to Life (Joan LaFrance, Mekinak Consulting)
Frustration has built up over many years among Native populations from their sense of being abused by researchers. The traditional IRB approach to human subject protection can fail to address the question of whose voice speaks with authority about Aboriginal experiences. Tribal members are beginning to understand that they can define the degree to which they make themselves available. Five tribes have developed their own IRBs; capacity building is needed for more tribes to do this. Tribal approaches involve inclusive review teams, a clear definition of who is an expert, broader reporting, an understanding of data ownership and publication approval needs, and a negotiation of how stories will be told. Different ways of knowing are accepted: traditional (knowledge of the past), empirical (evidence-based), and revealed (knowledge that comes through channels other than the intellect).
3. Re-Conceptualizing Ethical Concerns in Underserved Communities (Katrina Bledsoe, Walter R McDonald and Associates Inc; Rodney Hopson, Duquesne University)
Underserved communities are those that lack the resources that would allow them to thrive. There is a need to reconceptualize traditional views of research and ethics. Unintentional ethical violations have arisen from inappropriate methods, uses of data, and dissemination of results. There is a power differential between researchers and participants, and, in randomized controlled trials, a problem arising from the assumption of population homogeneity. Under a new philosophical perspective, evaluators will consider culture, history, community consent, social responsibility, and the differences between what is statistically significant and what is meaningful to a community.