
Archive for the ‘Research Reads’ Category

The Critical Incident Technique and Service Evaluation

In their systematic review of clinical library (CL) service evaluation, Brettle et al. summarize evidence showing that CLs contribute to patient care by saving time and providing effective results. Pointing out the wisdom of using evaluation measures that can be linked to organizational objectives, they advocate using the Critical Incident Technique to collect data on specific outcomes and to demonstrate where library contributions make a difference. In the Critical Incident Technique, respondents are asked about an “individual case of specific and recent library use/information provision rather than library use in general.” The authors also point to Weightman et al.’s suggested approaches for conducting a practical and valid study of library services:

  • Researchers are independent of the library service.
  • Respondents are anonymous.
  • Participants are selected either by random sample or by choosing all members of specific user groups.
  • Questions are developed with input from library users.
  • Questionnaires and interviews are both used.

Brettle, et al. “Evaluating clinical librarian services: a systematic review.” Health Information and Libraries Journal. March 2010. 28(1):3-22.

Weightman, et al. “The value and impact of information provided through library services for patient care: developing guidance for best practice.” Health Information and Libraries Journal. March 2009. 26(1):63-71.

Research Design and Research Methods

Evidence Based Library and Information Practice has published an open-access article in its EBL 101 series, titled “Research Methods: Design, Methods, Case Study…oh my!”, in which the author describes research design as the process of research from research question through “data collection, analysis, interpretation, and report writing,” along with the logic that connects questions and data to conclusions. A research design can be thought of as “a particular style, employing different methods.” In the way that evaluators love to discuss terminology, she then differentiates “research design” from “research method,” pointing out that research methods are ways to “systematize observation,” including the approaches, tools, and techniques used during data collection. Asking whether a “case study” is a research design or a research method, the author concludes that a case study is a kind of research design that can employ various methods.

Wilson, V. “Research Methods: Design, Methods, Case Study…oh my!”  Evidence Based Library and Information Practice 2011, 6(3):90-91.

An Evaluation of the Use of Smartphones to Communicate Between Clinicians: A Mixed-Methods Study

This study is a good example of mixed methods, combining quantitative measures of the frequency of smartphone use and email messages with interviews and ethnographic observations. The semistructured interviews explored clinicians’ perceptions of their smartphone experiences; participants were selected using a purposive sampling strategy that chose from different groups of health care professionals with differing views on the use of smartphones for clinical communication. The observational methods included nonparticipatory “work-shadowing,” in which a researcher followed medical residents during day and evening shifts, plus observations at the general internal medicine nursing stations. Analysis of the qualitative data yielded five major themes: efficiency, interruptions, interprofessional relations, gaps in perceived urgency, and professionalism. The full article is available open access:

Wu, R., et al. “An Evaluation of the Use of Smartphones to Communicate Between Clinicians: A Mixed-Methods Study.” Journal of Medical Internet Research. 2011. 13(3).

Empowerment Evaluation in Academic Medicine: Example

The OERC promotes collaborative evaluation approaches in our training and consultations, so it is always nice to have published examples of collaborative evaluation projects. A recent issue of Academic Medicine features an article showing how a collaborative evaluation approach called Empowerment Evaluation was applied to improve the medical curriculum at Stanford University School of Medicine. David Fetterman, originator of Empowerment Evaluation, is the primary author of the article. A key contribution of this evaluation approach, which is exemplified in the article, is the ongoing involvement of stakeholders in the analysis and use of program data. This article provides strong evidence that the approach can lead to positive outcomes for programs.

Citation: Fetterman DM, Deitz J, Gesundheit N. Empowerment Evaluation: A Collaborative Approach to Evaluating and Transforming a Medical School Curriculum. Academic Medicine 2010 May; 85(5):813-820.  A link to the abstract is available at http://journals.lww.com/academicmedicine/Abstract/2010/05000/Empowerment_Evaluation__A_Collaborative_Approach.25.aspx

Drug Research, RCTs, and Objectivity

House, E.R. “Blowback: Consequences of Evaluation for Evaluation.” American Journal of Evaluation. December 2008. 29(4):416-426.

For some, the Randomized Controlled Trial (RCT) has the mystique of separating the researcher from the method and therefore guaranteeing research objectivity. However, in a 2008 article in the American Journal of Evaluation, Ernest House disputes this myth. He describes many examples of how bias has been introduced into RCTs for new drugs, which are funded primarily by drug companies (over 70%, according to the article). He discusses suppression of negative results as one problem with drug-company-sponsored research, but he also describes ways that drug trials can be manipulated to produce results favoring the drug company’s products. For example, the drug under investigation may be compared to a lower dosage of the competitor’s drug in the control group, or the competitor’s drug may be administered in a less effective manner. Studies also may be conducted on younger subjects, who generally show fewer side effects, or run for short periods even though the drug was developed for long-term use.

House also notes that, as an evaluation tool, RCTs are limited: they do not provide all of the information needed to judge a new drug’s value. Usually the drug group is compared to a control group that receives no treatment rather than the typical dosage of the closest competing drug. So, while consumers may know the tested drug is superior to no treatment, we know nothing about the cost-benefit or “clinical effectiveness” of a new (often more expensive) drug.

This article provides some important take-home messages for evaluators as well as for consumers of drugs. First of all, no method guarantees objectivity: even the highly acclaimed RCT can be manipulated (deliberately or unconsciously) to influence desired results. Second, evaluation – finding the value of products, services, or programs – usually involves multiple issues that must be investigated through mixed methods. Finally, evaluators need to be aware that they are not completely objective and methods cannot protect them from their own subjectivity. We need to be transparent about our data collection and analysis process and be open to feedback from peers and stakeholders.

Healthcare Services Managers’ Decision-Making and Information

MacDonald, J.; Bath, P.; Booth, A. “Healthcare services managers: What information do they need and use?” Evidence Based Library and Information Practice. 3(3):18-38.

This paper presents research results that provide insights into how information influences healthcare managers’ decisions. Information needs included explicit Organizational Knowledge (such as policies and guidelines), Cultural Organizational Knowledge (situational, such as buy-in, controversy, bias, and conflict of interest; and environmental, such as politics and power), and Tacit Organizational Knowledge (gained experientially and through intuition). Managers tended to use internal information (already created or implemented within the organization) when investigating an issue and developing strategies. When selecting a strategy, managers either actively looked for additional external information or simply made a decision without all of the information they would have liked to have. Managers may be more likely to use external information (i.e., research-based library resources) if their own internal information is well managed. The article’s authors suggest that librarians may have a role in managing information created within an organization in order to integrate it with externally created information resources.

Demystifying Survey Research: Practical Suggestions for Effective Question Design

An article entitled “Demystifying Survey Research: Practical Suggestions for Effective Question Design” was published in the journal Evidence Based Library and Information Practice (2007). The aim of the article is to provide practical suggestions for writing effective questions for written surveys. Sample survey questions in the article illustrate how basic techniques, such as choosing appropriate question forms and incorporating scales, can be used to improve survey questions.

Since this is a peer reviewed, open-access journal, those interested may access the full-text article online at: http://ejournals.library.ualberta.ca/index.php/EBLIP/article/view/516/668.

In addition, for those interested in exploring survey research further, I have found the following print resources to be very helpful:

Converse, J.M., and S. Presser. Survey Questions: Handcrafting the Standardized Questionnaire. Thousand Oaks, CA: Sage Publications, 1986.

Fink, A. How to Ask Survey Questions. Thousand Oaks, CA: Sage Publications, 2003.

Fowler, F.J. Improving Survey Questions: Design and Evaluation. Thousand Oaks, CA: Sage Publications, 1995.

The Promise of Appreciative Inquiry in Library Organizations

Sullivan, M. “The Promise of Appreciative Inquiry in Library Organizations.” Library Trends. Summer 2004. 53(1):218-229.

According to Sullivan (2004), Appreciative Inquiry is a different approach to organizational change that “calls for the deliberate search for what contributes to organizational effectiveness and excellence” (p. 218). This perspective proposes moving from a traditional “deficit-based approach” in which there is an emphasis on problems to a more positive and collaborative framework. Therefore, Appreciative Inquiry is a unique approach that includes the identification of positive experiences and achievements as a “means to create change based upon the premise that we can effectively move forward if we know what has worked in the past” (p. 219). Furthermore, this approach “engages people in an exploration of what they value most about their work” (p. 219).

Overall, this article discusses the origins and basic principles of Appreciative Inquiry. In particular, the author provides practical suggestions for how libraries can begin to apply the principles and practices of Appreciative Inquiry to foster a more positive environment for creating change in libraries. For example:

  • Start a problem-solving effort with a reflection on strengths, values, and best experiences.
  • Support suggestions, possible scenarios, and ideas.
  • Take time to frame questions in a positive light that will generate hope, imagination, and creative thinking.
  • Ask staff to describe a peak experience in their professional work or a time when they felt most effective and engaged.
  • Close meetings with a discussion of what worked well and identify individual contributions to the success of the meeting.
  • Create a recognition program and make sure that it is possible (and easy) for everyone to participate.
  • Expect the best performance and assume that everyone has the best intentions in what they do.

In conclusion, Appreciative Inquiry entails a major shift in thinking about how change can occur in library organizations. By examining what is working, this approach provides a useful and positive framework for transforming libraries.

What Is “Appreciative Inquiry”?

Christie, C.A. “Appreciative inquiry as a method for evaluation: an interview with Hallie Preskill.”  American Journal of Evaluation. Dec 2006. 27(4): 466-474.

In this interview, Preskill defines appreciative inquiry as “…a process that builds on past successes (and peak experiences) in an effort to design and implement future actions” (p. 466). She points out that when we look for problems we find them, and that this deficit-based approach can lead to feelings of powerlessness. In appreciative inquiry the focus is on what has worked well, and the use of affirmative and strengthening language improves morale. She suggests focusing on the positive through interviews that ask for descriptions of “peak experiences” that left people feeling energized and hopeful, and that ask what people value most. She cautions that skeptics will find this to be a “Pollyanna” approach that lacks scientific rigor.

What Does “Effective” Mean?

Schweigert, F.J. “The meaning of effectiveness in assessing community initiatives.” American Journal of Evaluation. Dec 2006. 27(4):416-426.

Evaluators have a way of coming up with answers to questions we didn’t know we had, such as, “What does ‘effective’ mean?” This article points out that the meaning varies according to context. Sometimes a positive judgment means the changes that occurred were the ones that were expected; in other contexts it requires that the changes were better than what would have occurred without any intervention (which requires evidence of cause and effect). In true academic evaluator fashion, the author presents three different meanings of “effectiveness”:

  • increased understanding through clarifying assumptions, documenting influences, identifying patterns, assessing expected and unexpected results, etc.
  • accountability through making decisions based on performance expectations and standards, such as in benchmarking.
  • demonstration of causal linkages through experimental and quasi-experimental evidence showing what works. “Although randomized experiments have been called the ‘gold standard’ of social science research and evaluation, evaluators are well aware that experimental designs are not always possible, feasible, necessary, or even desirable.” (p. 427)