
Archive for the ‘Research Reads’ Category

The Critical Incident Technique and Service Evaluation

In their systematic review of clinical library (CL) service evaluation, Brettle et al. summarize evidence showing that CLs contribute to patient care through saving time and providing effective results.  Pointing out the wisdom of using evaluation measures that can be linked to organizational objectives, they advocate for using the Critical Incident Technique to collect data on specific outcomes and demonstrate where library contributions make a difference.  In the Critical Incident Technique, respondents are asked about an “individual case of specific and recent library use/information provision rather than library use in general.”  In addition, the authors point to Weightman, et al.’s suggested approaches for conducting a practical and valid study of library services:

  • Researchers are independent of the library service.
  • Respondents are anonymous.
  • Participants are selected either by random sample or by choosing all members of specific user groups (a sampling sketch follows this list).
  • Questions are developed with input from library users.
  • Questionnaires and interviews are both used.
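
As a minimal illustration of the sampling approach in the third point, the sketch below draws a stratified random sample of library users for a critical incident questionnaire. The user groups, counts, and sample size are invented for illustration; they are not drawn from Weightman, et al.

    import random

    # Hypothetical registry of library users grouped by role; the group names
    # and counts are invented for illustration only.
    users_by_group = {
        "physicians": ["phys_%03d" % i for i in range(200)],
        "nurses": ["nurse_%03d" % i for i in range(350)],
        "residents": ["res_%03d" % i for i in range(80)],
    }

    def draw_sample(groups, per_group=30, seed=42):
        """Randomly sample up to per_group users from each group; groups
        smaller than per_group are included in full, which corresponds to
        choosing all members of specific user groups."""
        rng = random.Random(seed)
        sample = {}
        for name, members in groups.items():
            if len(members) <= per_group:
                sample[name] = list(members)          # take everyone
            else:
                sample[name] = rng.sample(members, per_group)
        return sample

    for group, chosen in draw_sample(users_by_group).items():
        print(group, len(chosen))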

Brettle, et al., “Evaluating clinical librarian services: a systematic review.”  Health Information and Libraries Journal, March 2010.  28(1):3-22.

Weightman, et al., “The value and impact of information provided through library services for patient care: developing guidance for best practice.”  Health Information and Libraries Journal, March 2009.  26(1):63-71.

Research Design and Research Methods

Evidence Based Library and Information Practice has published an open-access article in its EBL 101 series, titled “Research Methods: Design, Methods, Case Study…oh my!”, in which the author describes research design as the process of research from research question through “data collection, analysis, interpretation, and report writing” and the logic that connects questions and data to conclusions.  It can be thought of as “a particular style, employing different methods.”  In the way that evaluators love to discuss terminology, she then differentiates “research design” from “research method,” pointing out that research methods are ways to “systematize observation,” including the approaches, tools, and techniques used during data collection.  Asking whether a “case study” is a research design or a research method, the author concludes that it is a kind of research design that can employ various methods.

Wilson, V. “Research Methods: Design, Methods, Case Study…oh my!”  Evidence Based Library and Information Practice 2011, 6(3):90-91.

An Evaluation of the Use of Smartphones to Communicate Between Clinicians: A Mixed-Methods Study

This study is a good example of mixed methods: it combined quantitative measures of the frequency of smartphone use and email messages with interviews and ethnographic observations. The semistructured interviews explored clinicians’ perceptions of their smartphone experiences; participants were selected using a purposive sampling strategy that chose from different groups of health care professionals with differing views on the use of smartphones for clinical communication. The observational methods included nonparticipatory “work-shadowing,” in which a researcher followed medical residents during day and evening shifts, plus observations at the general internal medicine nursing stations. Analysis of the qualitative data resulted in five major themes: efficiency, interruptions, interprofessional relations, gaps in perceived urgency, and professionalism. The full article is available open access:
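
As a rough sketch of how the quantitative and qualitative strands of such a mixed-methods design can sit side by side, the example below tallies message volume by clinician role alongside coded interview excerpts by theme. The roles, messages, and excerpts are invented; only the theme names come from the article.

    from collections import Counter

    # Invented message log of (sender_role, message_type) pairs; not real data.
    message_log = [
        ("resident", "email"), ("nurse", "email"), ("resident", "sms"),
        ("attending", "email"), ("resident", "email"), ("nurse", "sms"),
    ]

    # Invented interview excerpts, each already coded with one of the
    # study's five themes.
    coded_excerpts = [
        ("interruptions", "The phone goes off constantly during rounds."),
        ("efficiency", "I get answers in minutes instead of paging and waiting."),
        ("interruptions", "Every alert pulls me away from what I am doing."),
        ("professionalism", "People answer messages mid-conversation."),
    ]

    # Quantitative strand: frequency of messages by sender role.
    messages_per_role = Counter(role for role, _ in message_log)

    # Qualitative strand: how often each theme was coded across interviews.
    excerpts_per_theme = Counter(theme for theme, _ in coded_excerpts)

    print("Messages per role: ", dict(messages_per_role))
    print("Excerpts per theme:", dict(excerpts_per_theme))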

Wu, R., et al.  “An Evaluation of the Use of Smartphones to Communicate Between Clinicians: A Mixed-Methods Study.”  Journal of Medical Internet Research, 2011.  13(3).

Empowerment Evaluation in Academic Medicine: Example

The OERC promotes collaborative evaluation approaches in our training and consultations, so it is always nice to have published examples of collaborative evaluation projects.  A recent issue of Academic Medicine features an article showing how a collaborative evaluation approach called Empowerment Evaluation was applied to improve the medical curriculum at Stanford University School of Medicine.  David Fetterman, originator of Empowerment Evaluation, is the primary author of the article.  A key contribution of this evaluation approach, exemplified in the article, is the ongoing involvement of stakeholders in the analysis and use of program data.  The article provides strong evidence that the approach can lead to positive outcomes for programs.

Citation: Fetterman DM, Deitz J, Gesundheit N. Empowerment Evaluation: A Collaborative Approach to Evaluating and Transforming a Medical School Curriculum. Academic Medicine 2010 May; 85(5):813-820.  A link to the abstract is available at http://journals.lww.com/academicmedicine/Abstract/2010/05000/Empowerment_Evaluation__A_Collaborative_Approach.25.aspx

Drug Research, RCTs, and Objectivity

House, E.R.  “Blowback: Consequences of Evaluation for Evaluation.”  American Journal of Evaluation, December 2008.  29(4):416-426.

For some, the Randomized Controlled Trial (RCT) has the mystique of separating the researcher from the method and therefore guaranteeing research objectivity. However, in a 2008 article in the American Journal of Evaluation, Ernest House disputes this myth. He describes many examples of how bias has been introduced into RCTs of new drugs, which are funded primarily by drug companies (over 70%, according to the article). He discusses suppression of negative results as one problem with drug-company-sponsored research, but he also describes ways that drug trials can be manipulated to produce results favoring the drug company’s products. For example, the drug under investigation may be compared to a lower dosage of the competitor’s drug in the control group, or the competitor’s drug may be administered in a less effective manner. Studies also may be conducted on younger subjects, who generally tend to show fewer side effects, or conducted for short periods of time even though the drug was developed for long-term use.
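
To make the comparator-dosing point concrete, here is a toy simulation: two equally effective drugs look different when the comparator is given at a weaker dose. All of the numbers are invented for illustration and do not come from House’s article or any real trial.

    import random

    random.seed(1)

    def simulate_arm(true_effect, n=200, sd=10.0):
        """Return the mean improvement for n simulated patients."""
        return sum(random.gauss(true_effect, sd) for _ in range(n)) / n

    NEW_DRUG_EFFECT = 12.0        # true mean improvement of the sponsor's drug
    COMPETITOR_FULL_DOSE = 12.0   # competitor is equally effective at full dose
    COMPETITOR_LOW_DOSE = 7.0     # ...but weaker when given at a lower dose

    fair = simulate_arm(NEW_DRUG_EFFECT) - simulate_arm(COMPETITOR_FULL_DOSE)
    tilted = simulate_arm(NEW_DRUG_EFFECT) - simulate_arm(COMPETITOR_LOW_DOSE)

    print(f"Apparent advantage vs. full-dose comparator: {fair:+.1f}")
    print(f"Apparent advantage vs. low-dose comparator:  {tilted:+.1f}")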

House also notes that, as an evaluation tool, RCTs provide only part of the information needed to judge a new drug’s value. Usually the drug group is compared to a control group that gets no treatment, rather than to the typical dosage of the closest competitor drug. So, while consumers may know the tested drug is superior to no treatment, we know nothing about the cost-benefit or “clinical effectiveness” of a new (often more expensive) drug.

This article provides some important take-home messages for evaluators as well as for consumers of drugs. First of all, no method guarantees objectivity: even the highly acclaimed RCT can be manipulated (deliberately or unconsciously) to influence desired results. Second, evaluation – finding the value of products, services, or programs – usually involves multiple issues that must be investigated through mixed methods. Finally, evaluators need to be aware that they are not completely objective and methods cannot protect them from their own subjectivity. We need to be transparent about our data collection and analysis process and be open to feedback from peers and stakeholders.

Healthcare Services Managers’ Decision-Making and Information

MacDonald, J.; Bath, P.; Booth, A.  “Healthcare services managers: What information do they need and use?”  Evidence Based Library and Information Practice, 2008.  3(3):18-38.

This paper presents research results that provide insights into how information influences healthcare managers’ decisions.  Information needs included explicit Organizational Knowledge (such as policies and guidelines), Cultural Organizational Knowledge (situational, such as buy-in, controversy, bias, and conflict of interest; and environmental, such as politics and power), and Tacit Organizational Knowledge (gained experientially and through intuition).  Managers tended to use internal information (already created or implemented within an organization) when investigating an issue and developing strategies.  When selecting a strategy, managers either actively looked for additional external information or simply made a decision without all of the information that they would have liked to have.  Managers may be more likely to use external information (i.e., research-based library resources) if their own internal information is well managed.  The article’s authors suggest that librarians may have a role in managing information created within an organization in order to integrate it with externally created information resources.

What Is “Appreciative Inquiry”?

Christie, C.A. “Appreciative inquiry as a method for evaluation: an interview with Hallie Preskill.”  American Journal of Evaluation. Dec 2006. 27(4): 466-474.

In this interview, Preskill defines appreciative inquiry as “…a process that builds on past successes (and peak experiences) in an effort to design and implement future actions.” (p. 466)  She points out that when we look for problems we find them, and this deficit-based approach can lead to feelings of powerlessness.  In appreciative inquiry the focus is on what has worked well, and the use of affirmative and strengthening language improves morale.  She suggests focusing on the positive through interviews that ask for descriptions of “peak experiences” that left people feeling energized and hopeful, and that ask what respondents value most.  She cautions that skeptics will find this to be a “Pollyanna” approach that lacks scientific rigor.

What Does “Effective” Mean?

Schweigert, F.J. “The meaning of effectiveness in assessing community initiatives.” American Journal of Evaluation. Dec 2006. 27(4):416-426.

Evaluators have a way of coming up with answers to questions we didn’t know we had, such as, “what does ‘effective’ mean?” This article points out that the meaning varies according to context. Sometimes a positive judgment means the changes that occurred were the ones that were expected; in other contexts it requires that the changes were better than what would have occurred without any intervention, which needs evidence regarding cause and effect (a minimal counterfactual comparison is sketched after the list below). In true academic evaluator fashion, the author presents three different meanings of “effectiveness”:

  • increased understanding through clarifying assumptions, documenting influences, identifying patterns, assessing expected and unexpected results, etc.
  • accountability through making decisions based on performance expectations and standards, such as in benchmarking.
  • demonstration of causal linkages through experimental and quasi-experimental evidence showing what works. “Although randomized experiments have been called the ‘gold standard’ of social science research and evaluation, evaluators are well aware that experimental designs are not always possible, feasible, necessary, or even desirable.” (p. 427)
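
Here is the counterfactual comparison mentioned above, reduced to arithmetic: the program’s apparent gain is adjusted by the change a comparison group achieved without the intervention. The scores are invented for illustration and are not from Schweigert’s article.

    # Hypothetical pre/post scores for a program group and a comparison group.
    program = {"pre": 54.0, "post": 63.0}
    comparison = {"pre": 55.0, "post": 58.0}

    program_change = program["post"] - program["pre"]            # +9.0
    comparison_change = comparison["post"] - comparison["pre"]   # +3.0

    # A naive reading credits the program with the full 9-point gain; the
    # counterfactual reading credits it only with the gain beyond what the
    # comparison group achieved without the intervention.
    estimated_effect = program_change - comparison_change        # +6.0

    print(f"Change in program group:    {program_change:+.1f}")
    print(f"Change in comparison group: {comparison_change:+.1f}")
    print(f"Estimated program effect:   {estimated_effect:+.1f}")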

Nuggets from the Health Program Evaluation Field

Grembowski, D.  The Practice of Health Program Evaluation.  Sage, 2001.  Information about this book is available from Google Books.

Not a new book, but an interesting one, with information of potential use to us in thinking about evaluating health information outreach.  Some general overview perspective from the book:

  • Most evaluations are conducted to answer two questions:  Is the program working?  Why or why not?
  • All evaluation is political since judging worth is based on attaching values.
  • Evaluation as a 3-act play:  Act 1 is asking questions; Act 2 is answering them; Act 3 is using these answers in decision-making.
  • Evaluators’ roles range from objective researcher through participant, coach, and advocate.
  • Evaluations look at the “theories” behind programs, such as the causes and effects of implementing activities.
  • Premises underlying cost-effectiveness analysis: health care resources are scarce, resources have alternate uses, people have different priorities, and there are never enough resources to satisfy everyone (a worked example follows this list).
  • Evaluation standards include utility (results are intended to be used), feasibility (methods should be realistic and practical), propriety (methods should be ethical, legal, and respectful of the rights and interests of all participants), and accuracy (methods should produce sound information and conclusions that are related logically to the data).
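
As a worked example of what cost-effectiveness analysis compares when resources are scarce and have alternate uses, the sketch below computes a generic incremental cost-effectiveness ratio. The programs, costs, and effects are invented; this is a textbook-style calculation, not an example from Grembowski’s book.

    def icer(cost_new, effect_new, cost_old, effect_old):
        """Incremental cost-effectiveness ratio: the extra cost per extra
        unit of effect when choosing the new alternative over the old."""
        return (cost_new - cost_old) / (effect_new - effect_old)

    # Hypothetical outreach programs; "effect" could be, e.g., people trained.
    standard = {"cost": 20_000.0, "effect": 400.0}
    expanded = {"cost": 35_000.0, "effect": 650.0}

    ratio = icer(expanded["cost"], expanded["effect"],
                 standard["cost"], standard["effect"])
    print(f"Incremental cost per additional unit of effect: ${ratio:,.2f}")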

The “LIMB” Model: Lay Information Mediary Behavior

Abrahamson, J.A.; Fisher, K.E.  “‘What’s past is prologue’: towards a general model of lay information mediary behaviour.”  Information Research, October 2007.  12(4).

Health information outreach is often aimed at information mediaries in addition to primary information seekers. The article defines lay information mediaries as “those who seek information in a non-professional or informal capacity on behalf (or because) of others without necessarily being asked to do so, or engaging in follow-up.” These individuals are also known as gatekeepers, change agents, communication channels, links, navigators, and innovators. The authors present a generalized model of information mediary characteristics, activities, motivations, barriers, and facilitators, and they raise the question of what differences exist between primary information seekers and information mediaries, since “the caregiver-as-person may have information needs that vary from the caregiver-as-caregiver.” These are factors we can take into account in community assessment activities.