Archive for 2012

Logic Models: Handy Hints

The American Evaluation Association’s Coffee Break Demonstration webinar for Thursday, January 5 was “5 Hints for Making Logic Models Worth the Time and Effort.”  CDC Chief Evaluation Officer Tom Chapel provided this list:

1.  See the model as a means, not an end in itself

His point here was that you may not NEED a logic model for successful program planning, but you will ALWAYS need a program description that covers need, target groups, intended outcomes, activities, and causal relationships.  He advised us to identify “accountable” short-term outcomes, where the link between the project and subsequent changes can be made clear, and to differentiate them from the longer-term outcomes to which the program contributes.

2.  Process use may be the highest and best use

Logic models are useful for ongoing evaluation and adjustment of activities for continuous quality improvement.

3.  Let form follow function

You can make a logic model as detailed and complex as you need it to be, and you can use whatever format works best for you.  He pointed out that the real “action” is in the middle of the model.  The “middle” of the logic model, its “heart,” is the description of what the program does, and who or what will change as a result of the program.  He advised us to create simple logic models that focus on these key essentials to aid communication about a program.  This simple logic model can be a frame of reference for more complexity—for details about the program and how it will be evaluated.

4.  Use additional vocabulary sparingly, but correctly

Mediators, such as activities and outputs, help us understand the underlying “logic” of our program.  Moderators are contextual factors that will facilitate or hinder outcome achievement.  (A short sketch after this list of hints illustrates hints 3 and 4.)

5.  Think “zebras,” not “horses”

This is a variation of the saying, “when you hear hoofbeats, think horses, not zebras.”   My interpretation of this hint is that it’s a reminder that in evaluation we are looking not only for expected outcomes but also unexpected ones.   According to Wikipedia, “zebra” is medical slang for a surprising diagnosis.
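As a rough illustration of hints 3 and 4, here is a minimal sketch of one way the “heart” of a simple logic model (activities, outputs, and outcomes, with moderators noted alongside) could be captured, assuming a Python representation; the class, field names, and example content are hypothetical and are not taken from Chapel’s presentation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LogicModel:
    need: str
    target_groups: List[str]
    activities: List[str]             # mediators: what the program does
    outputs: List[str]                # mediators: direct products of the activities
    short_term_outcomes: List[str]    # "accountable" outcomes clearly linked to the program
    long_term_outcomes: List[str]     # outcomes the program contributes to
    moderators: List[str] = field(default_factory=list)  # contextual factors that help or hinder

# Hypothetical example content, purely for illustration.
model = LogicModel(
    need="Program staff lack evaluation skills",
    target_groups=["Program staff"],
    activities=["Deliver evaluation workshops"],
    outputs=["Workshops held", "Staff trained"],
    short_term_outcomes=["Staff can draft a logic model"],
    long_term_outcomes=["Programs are routinely evaluated and improved"],
    moderators=["Leadership support", "Staff turnover"],
)

# Print the "heart" of the model: activities -> outputs -> short-term outcomes.
print(" -> ".join([", ".join(model.activities),
                   ", ".join(model.outputs),
                   ", ".join(model.short_term_outcomes)]))
```

Keeping the model this spare follows Chapel’s advice: the simple version aids communication, and details for planning or evaluation can be layered on top of it as needed.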

You can find the slides for Tom Chapel’s presentation in the American Evaluation Association’s Evaluation eLibrary.

Can Tweets Predict Citations?

“Can Tweets Predict Citations? Metrics of Social Impact Based on Twitter and Correlation with Traditional Metrics of Scientific Impact.”  G. Eysenbach, Journal of Medical Internet Research 2011; 13(4):e123

This article describes an investigation of whether tweets predict highly cited articles.  The author looked at 1573 “tweetations” (tweets about articles) of 55 articles in the Journal of Medical Internet Research and their subsequent citation data from Scopus and Google Scholar.  There was a correlation: highly tweeted articles were 11 times more likely to be highly cited than less-tweeted articles.  The author proposes a “twimpact factor” (the cumulative number of tweetations within a certain number of days since publication) as a near-real-time measure of the reach of research findings.
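As a rough illustration, here is a minimal sketch of how a twimpact factor, as defined above, might be computed for a single article.  The function name, the seven-day window, and the dates are hypothetical examples, not values taken from the study.

```python
from datetime import date, timedelta

def twimpact_factor(pub_date, tweet_dates, window_days=7):
    """Count tweetations occurring within `window_days` of publication."""
    cutoff = pub_date + timedelta(days=window_days)
    return sum(1 for t in tweet_dates if pub_date <= t <= cutoff)

# Hypothetical publication and tweetation dates for one article.
pub = date(2011, 12, 16)
tweets = [date(2011, 12, 16), date(2011, 12, 18), date(2012, 1, 10)]

print(twimpact_factor(pub, tweets))  # 2 tweetations fall within the 7-day window
```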

Strategies for Improving Response Rate

There are articles about strategies to improve survey response rates with health professionals in the open-access December 2011 issue of Evaluation and the Health Professions.  Each article explored variations on Dillman’s Tailored Design Method, also known as TDM (see this Issue Brief from the University of Massachusetts Medical School’s Center for Mental Health Services Research for a summary of TDM).

“Surveying Nurses: Identifying Strategies to Improve Participation” by J. VanGeest and T.P. Johnson (Evaluation and the Health Professions, 34(4):487-511)

The authors conducted a systematic review of efforts to improve response rates to nurse surveys, and found that small financial incentives were effective and nonmonetary incentives were not effective.  They also found that postal and telephone surveys were more successful than web-based approaches.

“Surveying Ourselves: Examining the Use of a Web-Based Approach for a Physician Survey” by K.A. Matteson; B.L. Anderson; S.B. Pinto; V. Lopes; J. Schulkin; and M.A. Clark (Eval Health Prof 34(4):448-463)

The authors distributed a survey via paper and the web to a national sample of obstetrician-gynecologists and found little systematic difference between responses using the two modes, except that university physicians were more likely to complete the web-based version than private practice physicians.  Data quality was also better for the web survey: fewer missing and inappropriate responses.  The authors speculate that university-based physicians may spend more time at computers than do private physicians.  However, given that response rate was good for both groups, the authors conclude that using web-based surveys is appropriate for physician populations and suggest controlling for practice type.

“Effects of Incentives and Prenotification on Response Rates and Costs in a National Web Survey of Physicians” by J. Dykema; J. Stevenson; B. Day; S.L. Sellers; and V.L. Bonham (Eval Health Prof 34(4):434-447, 2011)

The authors found that response rates were highest in the groups entered into a $50 or $100 lottery.  They also found that postal prenotification letters increased response rates, although the small $2 token preincentive included with the letters had no additional effect and was not cost-effective.  The authors conclude that larger promised incentives are more effective than nominal preincentives.

“A Randomized Trial of the Impact of Survey Design Characteristics on Response Rates among Nursing Home Providers” by M. Clark et al. (Eval Health Prof 34(4):464-486)

This article describes an experiment designed to maximize participation by both the Director of Nursing and the Administrator of long-term care facilities.  One of the variables was incentive structure, in which the amount of the incentive increased if both participated and decreased if only one participated.  The authors found no differences in the likelihood of both respondents participating by survey mode, questionnaire length, or incentive structure.