Archive for the ‘News’ Category

Logic Models: Handy Hints

The American Evaluation Association’s Coffee Break Demonstration webinar for Thursday, January 5, was “5 Hints for Making Logic Models Worth the Time and Effort.”  CDC Chief Evaluation Officer Tom Chapel provided this list:

1.  See the model as a means, not an end in itself

His point here was that you may not NEED a logic model for successful program planning, but you will ALWAYS need a program description that describes need, target groups, intended outcomes, activities, and causal relationships.  He advised us to identify “accountable” short-term outcomes where the link between the project and subsequent changes can be made clear, and differentiate between those and the longer-term outcomes to which the program contributes.

2.  Process use may be the highest and best use

Logic models are useful for ongoing evaluation and adjustment of activities for continuous quality improvement.

3.  Let form follow function

You can make a logic model as detailed and complex as you need it to be, and you can use whatever format works best for you.  He pointed out that the real “action” is in the middle of the model.  The “middle” of the logic model, its “heart,” is the description of what the program does, and who or what will change as a result of the program.  He advised us to create simple logic models that focus on these key essentials to aid communication about a program.  This simple logic model can be a frame of reference for more complexity—for details about the program and how it will be evaluated.

4.  Use additional vocabulary sparingly, but correctly

Mediators such as activities and outputs help us understand the underlying “logic” of our program.  Moderators are contextual factors that will facilitate or hinder outcome achievement.

5.  Think “zebras,” not “horses”

This is a variation of the saying, “when you hear hoofbeats, think horses, not zebras.”   My interpretation of this hint is that it’s a reminder that in evaluation we are looking not only for expected outcomes but also unexpected ones.   According to Wikipedia, “zebra” is medical slang for a surprising diagnosis.

You can find the slides for Tom Chapel’s presentation in the American Evaluation Association’s Evaluation eLibrary.

Can Tweets Predict Citations?

“Can Tweets Predict Citations? Metrics of Social Impact Based on Twitter and Correlation with Traditional Metrics of Scientific Impact.”  G. Eysenbach, Journal of Medical Internet Research 2011;13(4):e123

This article describes an investigation of whether tweets predict highly cited articles.  The author looked at 1573 “tweetations” (tweets about articles) of 55 articles in the Journal of Medical Internet Research and their subsequent citation data from Scopus and Google Scholar.  There was a correlation:  highly tweeted articles were 11 times more likely to be highly cited than less-tweeted articles.  The author proposes a “twimpact factor” (the cumulative number of tweetations within a certain number of days since publication) as a near-real time measure of reach of research findings.
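
For readers who want to try this kind of metric on their own journal’s articles, here is a minimal Python sketch of how a twimpact factor might be tallied from a list of tweet dates.  The function name, the seven-day window, and the example dates are illustrative assumptions; the definition is summarized from the article, but the code is not Eysenbach’s.

    from datetime import date, timedelta

    def twimpact_factor(pub_date, tweet_dates, window_days=7):
        # Cumulative count of "tweetations" within `window_days` of publication.
        # The 7-day default is an assumption chosen for illustration.
        cutoff = pub_date + timedelta(days=window_days)
        return sum(1 for d in tweet_dates if pub_date <= d <= cutoff)

    # Hypothetical article published January 5 with four tweetations:
    tweets = [date(2011, 1, 5), date(2011, 1, 6), date(2011, 1, 9), date(2011, 2, 1)]
    print(twimpact_factor(date(2011, 1, 5), tweets))  # prints 3; the February tweet falls outside the window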

Strategies for Improving Response Rate

There are articles about strategies to improve survey response rate with health professionals in the open-access December 2011 issue of Evaluation and the Health Professions.  Each article explored variations on Dillman’s Tailored Design Method, also known as TDM (see this Issue Brief from the University of Massachusetts Medical School’s Center for Mental Health Services Research for a summary of TDM).

“Surveying Nurses: Identifying Strategies to Improve Participation” by J. VanGeest and T.P. Johnson (Evaluation and the Health Professions, 34(4):487-511)

The authors conducted a systematic review of efforts to improve response rates to nurse surveys, and found that small financial incentives were effective and nonmonetary incentives were not effective.  They also found that postal and telephone surveys were more successful than web-based approaches.

“Surveying Ourselves: Examining the Use of a Web-Based Approach for a Physician Survey” by K.A. Matteson; B.L. Anderson; S.B. Pinto; V. Lopes; J. Schulkin; and M.A. Clark (Eval Health Prof 34(4):448-463)

The authors distributed a survey via paper and the web to a national sample of obstetrician-gynecologists and found little systematic difference between responses using the two modes, except that university physicians were more likely to complete the web-based version than private practice physicians.  Data quality was also better for the web survey: fewer missing and inappropriate responses.  The authors speculate that university-based physicians may spend more time at computers than do private physicians.  However, given that response rate was good for both groups, the authors conclude that using web-based surveys is appropriate for physician populations and suggest controlling for practice type.

“Effects of Incentives and Prenotification on Response Rates and Costs in a National Web Survey of Physicians” by J. Dykema; J. Stevenson; B. Day; S.L. Sellers; and V.L. Bonham (Eval Health Prof 34(4):434-447, 2011)

The authors found that response rates were highest in groups that were entered into a $50 or $100 lottery and that received a prenotification letter containing a $2 preincentive.  They also found that use of postal prenotification letters increased response rates, even though the small $2 token itself had no additional effect and was not cost-effective.  The authors conclude that larger promised incentives are more effective than nominal preincentives.

“A Randomized Trial of the Impact of Survey Design Characteristics on Response Rates among Nursing Home Providers” by M. Clark et al. (Eval Health Prof 34(4):464-486)

This article describes an experiment in maximizing participation by both the Director of Nursing and the Administrator of long-term care facilities.  One of the variables was incentive structure, in which the amount of incentive increased if both participated, and decreased if only one participated.  The authors found that there were no differences in the likelihood of both respondents participating by mode, questionnaire length, or incentive structure.

Evaluating Information Skills Training in Health Libraries

In her 2007 systematic review of how health libraries evaluate their training activities, Alison Brettle points out that these evaluations are designed to address various questions:  Are class participants learning?  Are resources being used in the best way?  Are more resources needed?  What changes should be made to improve materials and methods?   This review focuses on measures that examine changes in class participants’ knowledge, skills, or behavior.  The majority were used in these study designs:

  • Pre-experimental (one group, post-test only; one group with pre- and post-test; two groups post-test only)
  • Quasi-experimental (control group, pre- and post-testing without randomization)
  • Randomized controlled experiments

A few of the studies in the review were qualitative, and some were descriptive.  Methods of measuring outcomes of information skills classes included:

  • A score sheet or checklist listing features of a search
  • Surveys, including perceptions of the training, participant’s confidence, and ability to use knowledge gained
  • The Objective Structured Clinical Examination (OSCE), in which medical students perform clinical tasks in a short period of time, with literature searching included as one of the tasks

Appendix 2 of the article lists the studies that Brettle reviewed, describes methodologies and tools, and indicates how (or whether) instruments were deemed reliable and valid.  (Quick review: reliable instruments produce the same results if used again in the same situation; valid instruments actually measure what they claim to measure, and might produce generalizable results.)
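
As a concrete (and purely hypothetical) illustration of the reliability half of that quick review, the sketch below computes a simple test-retest correlation for a search-skills checklist given twice to the same learners.  The scores, the 0–10 scale, and the helper function are assumptions for illustration, not data or tools from any study in Brettle’s review.

    from math import sqrt

    def pearson_r(xs, ys):
        # Pearson correlation between two equal-length lists of scores.
        n = len(xs)
        mean_x, mean_y = sum(xs) / n, sum(ys) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        var_x = sum((x - mean_x) ** 2 for x in xs)
        var_y = sum((y - mean_y) ** 2 for y in ys)
        return cov / sqrt(var_x * var_y)

    # Hypothetical 0-10 checklist totals for six learners, two administrations:
    first_administration = [4, 7, 6, 9, 5, 8]
    second_administration = [5, 7, 6, 8, 5, 9]
    print(round(pearson_r(first_administration, second_administration), 2))
    # A value near 1.0 suggests the checklist gives consistent results on reuse.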

This systematic review is open access and you can find the full text here:
Brettle, A. “Evaluating information skills training in health libraries: a systematic review.”  Health Information & Libraries Journal 24(Supplement s1):18–37, December 2007.

Comparison of a Postal Survey and Mixed-Mode Survey

This open-access article from the Journal of Medical Internet Research features a comparison between two modes of questionnaire delivery: postal only, and mixed-mode (email followed up by postal mail).  The authors looked at:

  • Respondent characteristics
  • Response rate
  • Response time
  • Rating scale responses
  • Data quality
  • Total costs

Here are the conclusions:

“Mixed-mode surveys are an alternative method to postal surveys that yield comparable response rates and groups of respondents, at lower costs. Moreover, quality of health care was not rated differently by respondents to the mixed-mode or postal survey. Researchers should consider using mixed-mode surveys instead of postal surveys, especially when investigating younger or more highly educated populations.”

Zuidgeest, M., et al. “A Comparison of a Postal Survey and Mixed-Mode Survey Using a Questionnaire on Patients’ Experiences With Breast Care.”  Journal of Medical Internet Research 2011;13(3):e68.

How Many Interviews Are Enough?

When planning an evaluation that features interviews, it is difficult to know in advance how many interviews should be conducted.  The usual approach is to continue to interview until you no longer hear new information.  (Olney & Barnes, Data Collection and Analysis, p. 23)  However, there are times when a numeric guideline would be very useful.  For example, such a guideline would help in creating the budget for the evaluation section of a proposal when that evaluation features qualitative data collection.  Guest, et al., conducted a research project in which they analyzed transcripts of sixty interviews and found that 94% of the coded topics that appeared were identified within six interviews.  Analysis of six additional interviews revealed only another 3% of all the coded topics that were eventually found in the sixty interviews.  They concluded that data saturation (the point at which no additional data are being found) occurred by the time they had analyzed twelve interviews.  They point out that similar evidence has been presented by Nielsen and Landauer, who found that technology usability studies uncover 80% of problems after six evaluations, and 90% after twelve.  In a later publication, Nielsen showed that usability testing with fifteen participants will uncover 100% of problems, but he recommends using a smaller number (“Why You Only Need to Test with Five Users” from Jakob Nielsen’s Alertbox, March 19, 2000).

Is twelve the magic number for interviews, then?  Not necessarily.  Guest, et al. caution that their study involved highly structured interviews with members of a relatively homogeneous group.  In addition, the interview topics were familiar to the participants.  With a heterogeneous group or a diffuse (vague) topic, more interviews will probably be needed.  The bottom line is to select a purposeful group of interview candidates carefully.  Olney and Barnes provide more details about purposeful sampling (Olney & Barnes, Data Collection and Analysis, p. 23).
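
To make the idea of a saturation check concrete, here is a minimal sketch that tracks how many new thematic codes each successive interview contributes and flags the point at which several interviews in a row add nothing new.  The codes and the three-interview stopping rule are hypothetical illustrations, not Guest et al.’s data or procedure.

    # Each set holds the thematic codes identified in one interview (hypothetical).
    interviews = [
        {"access", "cost", "training"},
        {"cost", "staffing"},
        {"training", "outreach"},
        {"cost", "access"},
        {"staffing"},
        {"outreach", "cost"},
    ]

    seen = set()
    runs_without_new = 0
    for i, codes in enumerate(interviews, start=1):
        new_codes = codes - seen
        seen |= new_codes
        runs_without_new = 0 if new_codes else runs_without_new + 1
        print(f"Interview {i}: {len(new_codes)} new code(s), {len(seen)} total")
        if runs_without_new >= 3:  # arbitrary stopping rule for illustration
            print(f"Saturation reached after interview {i}")
            break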

The article by Guest, et al. is not open-access, but this is a link to the authors’ abstract:  Guest, G., et al.  “How many interviews are enough?: An experiment with data saturation and variability.”  Field Methods, February 2006.  18(1):59-82.

The Value of Academic Libraries

Last year the Association of College and Research Libraries issued a very substantial and thorough review of the research that has been done on how to measure library value:  “The Value of Academic Libraries: A Comprehensive Research Review and Report” by Megan Oakleaf.  Although its focus is academia, there are sections reviewing work in public libraries, school libraries, and special libraries.  I recommend the section on special libraries, which has quite an emphasis on medical libraries, including references to past work regarding clinical impacts.

For those who are interested in approaches that libraries have taken to establishing their value, there is potential benefit in reading the entire report cover-to-cover.  For those who want a quick overview, these are sections that I recommend:

  • Executive Summary
  • Defining “Value”
  • Special Libraries

For more information about this report, see “A tool kit to help academic librarians demonstrate their value” from the 9/14/2010 issue of the Chronicle of Higher Education.

The full report is available at http://www.acrl.ala.org/value/.

The Critical Incident Technique and Service Evaluation

In their systematic review of clinical library (CL) service evaluation, Brettle et al. summarize evidence showing that CLs contribute to patient care through saving time and providing effective results.  Pointing out the wisdom of using evaluation measures that can be linked to organizational objectives, they advocate for using the Critical Incident Technique to collect data on specific outcomes and demonstrate where library contributions make a difference.  In the Critical Incident Technique, respondents are asked about an “individual case of specific and recent library use/information provision rather than library use in general.”  In addition, the authors point to Weightman, et al.’s suggested approaches for conducting a practical and valid study of library services:

  • Researchers are independent of the library service.
  • Respondents are anonymous.
  • Participants are selected either by random sample or by choosing all members of specific user groups.
  • Questions are developed with input from library users.
  • Questionnaires and interviews are both used.

Brettle, et al., “Evaluating clinical librarian services: a systematic review.”  Health Information and Libraries Journal, March 2010.  28(1):3-22.

Weightman, et al., “The value and impact of information provided through library services for patient care: developing guidance for best practice.”  Health Information and Libraries Journal, March 2009.  26(1):63-71.

Research Design and Research Methods

Evidence-Based Library and Information Practice has published an open-access article as part of its EBL 101 series, titled “Research Methods: Design, Methods, Case Study…oh my!”  In it, the author describes research design as the process of research from research question through “data collection, analysis, interpretation, and report writing,” and as the logic that connects questions and data to conclusions; a design can be thought of as “a particular style, employing different methods.”  In the way that evaluators love to discuss terminology, she then differentiates “research design” from “research method,” pointing out that research methods are ways to “systematize observation,” including the approaches, tools, and techniques used during data collection.  Asking whether a “case study” is a research design or a research method, the author concludes that a case study is an example of a research design that can employ various methods.

Wilson, V. “Research Methods: Design, Methods, Case Study…oh my!”  Evidence Based Library and Information Practice 2011, 6(3):90-91.

IdeaScale – Quantified Qualitative Data

Colleagues at the National Library of Medicine Training Center notified the OERC about interesting new software that builds on social media to collect comments from communities of interest. It’s called IdeaScale.

The software is designed to support “crowdsourcing,” in which an open call is sent to a targeted group (a “community”) to participate in solving a problem or developing an innovation.  With IdeaScale, people can post their ideas about an issue, and then others can cast a “like/dislike” vote and add comments. This tool provides an interesting approach to evaluation because you can get both qualitative responses and a quantified measure of interest in the community.  The software can give evaluators a jump on thematic analysis of comments.
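
As a rough illustration of how such “quantified qualitative data” might be summarized once collected, the sketch below ranks hypothetical ideas by net like/dislike votes and gathers their comments for later thematic coding.  The records and field names are assumptions for illustration; they do not reflect IdeaScale’s actual export format or API.

    # Hypothetical crowdsourced ideas with votes and comments (not IdeaScale's real schema).
    ideas = [
        {"title": "Extend evening hours", "likes": 42, "dislikes": 5,
         "comments": ["Helps shift workers", "Staffing cost?"]},
        {"title": "Add a chat reference service", "likes": 18, "dislikes": 2,
         "comments": ["Faster than email"]},
        {"title": "Drop print journals", "likes": 7, "dislikes": 30,
         "comments": ["Some titles have no online equivalent"]},
    ]

    # Rank ideas by net interest (likes minus dislikes), then list their comments.
    for idea in sorted(ideas, key=lambda d: d["likes"] - d["dislikes"], reverse=True):
        net = idea["likes"] - idea["dislikes"]
        print(f"{idea['title']}: net {net:+d} ({idea['likes']} like / {idea['dislikes']} dislike)")
        for comment in idea["comments"]:
            print(f"  - {comment}")  # raw comments, ready for thematic coding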

The Institute of Museum and Library Services and the Executive Office of the President of the United States have used IdeaScale to collect feedback from communities of interest.  Here are links to their sites (now closed for comment):

http://imls.ideascale.com/

http://opengov.ideascale.com/

One drawback, from an evaluation standpoint, is that the targeted group is largely undefined, so respondents are likely to be those who are particularly drawn to the topic and may not be representative of the community (or larger population). Comments from the more-engaged members of a community can be particularly helpful for needs and process assessment, but interpretation of the quantified “votes,” particularly for outcomes assessment, would require caution, such as checking findings against another source of data.

The other drawback is that people must sign up for an IdeaScale account to contribute ideas. A two-step process (setting up an account, and then contributing) can be a barrier to participation.  IdeaScale does allow participants to open accounts through Facebook, Twitter, or other social media accounts.  This might ease the participation barriers, but mostly for those who are comfortable with social media, which may further bias the data collected through IdeaScale. Still, it is an intriguing tool for the correct audiences and evaluation questions.

Here’s a link to the company website, which offers a demonstration video and a free subscription:

http://ideascale.com/