Archive for the ‘Research Reads’ Category
Evidence-Based Library and Information Practice has published an open-access article in its EBL 101 series, titled “Research Methods: Design, Methods, Case Study…oh my!” The author describes research design as the process of research from research question through “data collection, analysis, interpretation, and report writing,” along with the logic that connects questions and data to conclusions; it can be thought of as “a particular style, employing different methods.” In the way that evaluators love to discuss terminology, she then differentiates “research design” from “research method,” pointing out that research methods are ways to “systematize observation,” including the approaches, tools, and techniques used during data collection. Asking whether a “case study” is a research design or a research method, she concludes that a case study is a kind of research design that can employ various methods.
This study provides a good example of mixed methods, combining quantitative measures of the frequency of smartphone use and email messages with interviews and ethnographic observations. The semistructured interviews explored clinicians’ perceptions of their smartphone experiences; participants were selected using a purposive sampling strategy that drew from different groups of health care professionals with differing views on the use of smartphones for clinical communication. The observational methods included nonparticipatory “work-shadowing,” in which a researcher followed medical residents during day and evening shifts, plus observations at the general internal medicine nursing stations. Analysis of the qualitative data yielded five major themes: efficiency, interruptions, interprofessional relations, gaps in perceived urgency, and professionalism. The full article is available open access:
The OERC promotes collaborative evaluation approaches in our training and consultations, so it is always nice to have published examples of collaborative evaluation projects. A recent issue of Academic Medicine features an article showing how a collaborative evaluation approach called Empowerment Evaluation was applied to improve the medical curriculum at Stanford University School of Medicine. David Fetterman, originator of Empowerment Evaluation, is the primary author of the article. A key contribution of this evaluation approach, exemplified in the article, is the ongoing involvement of stakeholders in the analysis and use of program data. The article provides strong evidence that the approach can lead to positive outcomes for programs.
Citation: Fetterman DM, Deitz J, Gesundheit N. Empowerment Evaluation: A Collaborative Approach to Evaluating and Transforming a Medical School Curriculum. Academic Medicine 2010 May; 85(5):813-820. A link to the abstract is available at http://journals.lww.com/academicmedicine/Abstract/2010/05000/Empowerment_Evaluation__A_Collaborative_Approach.25.aspx
House ER. Blowback: Consequences of Evaluation for Evaluation. American Journal of Evaluation. 2008 Dec;29(4):416-426.
For some, the Randomized Controlled Trial (RCT) has the mystique of separating the researcher from the method and therefore guaranteeing research objectivity. However, in a 2008 article in the American Journal of Evaluation, Ernest House disputes this myth. He describes many examples of how bias has been introduced into RCTs for new drugs, which are funded primarily by drug companies (over 70%, according to the article). He discusses suppression of negative results as one problem with drug-company-sponsored research, but he also describes ways that drug trials can actually be manipulated to produce results favoring the drug company’s products. For example, the drug under investigation may be compared to a lower dosage of the competitor’s drug in the control group, or the competitor’s drug may be administered in a less effective manner. Studies also may be conducted on younger subjects, who generally tend to show fewer side effects, or conducted for short periods of time even though the drug was developed for long-term use.
House also notes that, as an evaluation tool, RCTs are very limited in providing all information needed to judge a new drug’s value. Usually the drug group is compared to a control group that gets no treatment instead of the typical dosage of the closest competitor drug. So, while consumers may know the tested drug is superior to no treatment, we know nothing about the cost-benefit or “clinical effectiveness” of a new (often more expensive) drug.
This article provides some important take-home messages for evaluators as well as for consumers of drugs. First of all, no method guarantees objectivity: even the highly acclaimed RCT can be manipulated (deliberately or unconsciously) to influence desired results. Second, evaluation – finding the value of products, services, or programs – usually involves multiple issues that must be investigated through mixed methods. Finally, evaluators need to be aware that they are not completely objective and methods cannot protect them from their own subjectivity. We need to be transparent about our data collection and analysis process and be open to feedback from peers and stakeholders.
This paper presents research results that provide insights into how information influences healthcare managers’ decisions. Information needs included explicit organizational knowledge (such as policies and guidelines), cultural organizational knowledge (situational, such as buy-in, controversy, bias, and conflict of interest; and environmental, such as politics and power), and tacit organizational knowledge (gained experientially and through intuition). Managers tended to use internal information (already created or implemented within an organization) when investigating an issue and developing strategies. When selecting a strategy, managers either actively looked for additional external information, or else simply made a decision without all of the information they would have liked to have. Managers may be more likely to use external information (i.e., research-based library resources) if their own internal information is well-managed. The article’s authors suggest that librarians may have a role in managing information created within an organization in order to integrate it with externally created information resources.
An article entitled “Demystifying Survey Research: Practical Suggestions for Effective Question Design” was published in the journal Evidence Based Library and Information Practice (2007). The aim of this article is to provide practical suggestions for effective questions when designing written surveys. Sample survey questions used in the article help to illustrate how some basic techniques, such as choosing appropriate question forms and incorporating the use of scales, can be used to improve survey questions.
Since this is a peer reviewed, open-access journal, those interested may access the full-text article online at: http://ejournals.library.ualberta.ca/index.php/EBLIP/article/view/516/668.
In addition, for those interested in exploring survey research more, I have found the following print resources to be very helpful in this learning process:
Converse, J.M., and S. Presser. Survey Questions: Handcrafting the Standardized Questionnaire. Thousand Oaks, CA: Sage Publications, 1986.
Fink, A. How to Ask Survey Questions. Thousand Oaks, CA: Sage Publications, 2003.
Fowler, F.J. Improving Survey Questions: Design and Evaluation. Thousand Oaks, CA: Sage Publications, 1995.
Sullivan, M. “The Promise of Appreciative Inquiry in Library Organizations.” Library Trends. Summer 2004. 53(1):218-229.
According to Sullivan (2004), Appreciative Inquiry is a different approach to organizational change that “calls for the deliberate search for what contributes to organizational effectiveness and excellence” (p. 218). This perspective proposes moving from a traditional “deficit-based approach” in which there is an emphasis on problems to a more positive and collaborative framework. Therefore, Appreciative Inquiry is a unique approach that includes the identification of positive experiences and achievements as a “means to create change based upon the premise that we can effectively move forward if we know what has worked in the past” (p. 219). Furthermore, this approach “engages people in an exploration of what they value most about their work” (p. 219).
Overall, this article discusses the origins and basic principles of Appreciative Inquiry. In particular, the author provides practical suggestions for how libraries can begin to apply the principles and practices of Appreciative Inquiry to foster a more positive environment for creating change in libraries. For example:
· Start a problem-solving effort with a reflection on strengths, values, and best experiences.
· Support suggestions, possible scenarios, and ideas.
· Take time to frame questions in a positive light that will generate hope, imagination, and creative thinking.
· Ask staff to describe a peak experience in their professional work or a time when they felt most effective and engaged.
· Close meetings with a discussion of what worked well and identify individual contributions to the success of the meeting.
· Create a recognition program and make sure that it is possible (and easy) for everyone to participate.
· Expect the best performance and assume that everyone has the best intentions in what they do.
In conclusion, Appreciative Inquiry entails a major shift in thinking about how change can occur in library organizations. By examining what is working, this approach provides a useful and positive framework for transforming libraries.
In this interview, Preskill defines appreciative inquiry as “…a process that builds on past successes (and peak experiences) in an effort to design and implement future actions.” (p. 466) She points out that when we look for problems we find them and this deficit-based approach can lead to feelings of powerlessness. In appreciative inquiry the focus is on what has worked well, and use of affirmative and strengthening language improves morale. She suggests a focus on the positive through interviews asking for descriptions of “peak experiences” that led to feelings of being energized and hopeful; asking for information about what is valued most. She cautions that skeptics will find this to be a “Pollyanna” approach that lacks scientific rigor.
Evaluators have a way of coming up with answers to questions we didn’t know we had, such as, “What does ‘effective’ mean?” This article points out that the meaning varies according to context. Sometimes a positive judgment means the changes that occurred were the ones that were expected; in other contexts it requires that the changes were better than what would have occurred without any intervention (which demands evidence of cause and effect). In true academic evaluator fashion, the author presents three different meanings of “effectiveness”:
- increased understanding through clarifying assumptions, documenting influences, identifying patterns, assessing expected and unexpected results, etc.
- accountability through making decisions based on performance expectations and standards, such as in benchmarking.
- demonstration of causal linkages through experimental and quasi-experimental evidence showing what works. “Although randomized experiments have been called the ‘gold standard’ of social science research and evaluation, evaluators are well aware that experimental designs are not always possible, feasible, necessary, or even desirable.” (p. 427)
Grembowski, D. The Practice of Health Program Evaluation. Sage, 2001. Info about this book from Google books.
Not a new book, but an interesting one, with information of potential use to us in thinking about evaluating health information outreach. Some general perspectives from the book:
- Most evaluations are conducted to answer two questions: Is the program working? Why or why not?
- All evaluation is political since judging worth is based on attaching values.
- Evaluation as a 3-act play: Act 1 is asking questions; Act 2 is answering them; Act 3 is using these answers in decision-making.
- Evaluators’ roles range from objective researcher through participant, coach, and advocate.
- Evaluations look at the “theories” behind programs, such as the causes and effects of implementing activities.
- Premises underlying cost-effectiveness analysis: health care resources are scarce, resources have alternate uses, people have different priorities, there are never enough resources to satisfy all.
- Evaluation standards include utility (results are intended to be used), feasibility (methods should be realistic and practical), propriety (methods should be ethical, legal, and respectful of the rights and interests of all participants), accuracy (produce sound information and conclusions that are related logically to data).