
Archive for the ‘News’ Category

How to Ignite Your Presentation: AEA Training Webinar

On July 27, 2012, Stephanie Evergreen, eLearning Initiatives Director for the American Evaluation Association, gave a half-hour webinar about the Ignite approach to giving presentations.  This approach involves a 5-minute presentation based on 20 slides, each shown for 15 seconds.  (Yes, this is similar to Pecha Kucha.)  The American Evaluation Association, which is conducting a “Potent Presentations” initiative to help its members improve their reporting skills, has made the recording and slides for this presentation available in its free AEA Public Library.

In her short, practical webinar, Stephanie demonstrated the Ignite approach with a great presentation about “Chart Junk Extraction”—valuable tips for creating streamlined, readable charts with maximized visual impact.  Spend an enjoyable and enlightening few minutes viewing the fast-paced and interesting “Light Your Ignite Training Webinar”—you can even learn how to set your PowerPoint timer to move forward automatically every 15 seconds so that you can practice your Igniting!

How to Analyze Qualitative Data

“Utilizing grounded theory to explore the information-seeking behavior of senior nursing students.” Duncan, V.; Holtslander, L.  J Med Libr Assoc 100(1):20-27, January 2012.

In this very practical article, the authors describe the steps they took to analyze qualitative data from written records that nursing students kept about their experiences with finding information in the CINAHL database.  They point out that, although the ideal way to gather data about students’ information-seeking behavior would be via direct observation, that approach is not always practical.  Also, self-reporting via surveys and interviews may create bias because members of sample populations might “censor themselves instead of admitting an information need.”  For this study, students were asked to document their search process using an electronic template that included “prompts such as resource consulted, reason for choice, terms searched, outcome, comments, and sources consulted (people).”

After reviewing these searching journals, the authors followed up with interviews.

The “Data analysis and interpretation” section of this article provides a clear, concise description of the grounded theory approach to analyzing qualitative data through initial, focused, and theoretical coding, carried out in NVivo 8 software.  [Note: as of this writing, the latest version is NVivo 10.]

  • Initial codes:  “participants’ words were highlighted to create initial codes that reflected as closely as possible the participants’ own words.”
  • Focused codes:  “more directed, selective, and conceptual than word-by-word, line-by-line, and incident-by-incident coding.”
  • Theoretical codes:  focused codes were compared and contrasted “in order to develop the emerging theory of the information-seeking process.”

The authors reviewed the coding in follow-up interviews with participants to check the credibility of their findings:  “The central theme that united all categories and explained most of the variation among the data was ‘discovering vocabulary.’”  They recommend “teaching strategies to identify possible words and phrases to use” when searching for information.

You can do this even if you don’t have access to NVivo software.  Here’s an illustration: “Summarize and Analyze Your Data” from the OERC’s Collecting and Analyzing Evaluation Data booklet.
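If you want to try this kind of coding without specialized software, even a spreadsheet or a short script can keep the three coding passes organized. Here is a minimal, hypothetical Python sketch (the excerpts and code labels are invented for illustration, not taken from the study) showing initial codes that stay close to participants’ words, focused codes that group them, and a comparison step that points toward a theoretical category:

```python
# A hypothetical sketch of organizing grounded-theory coding passes without NVivo.
# The excerpts and code labels below are invented for illustration only.
from collections import defaultdict

# Initial codes: stay close to participants' own words.
initial_codes = [
    ("I didn't know what words to type in", "unsure of search words"),
    ("kept changing my terms until something worked", "trial-and-error terms"),
    ("asked my instructor what heading to use", "asking others for vocabulary"),
]

# Focused codes: more selective and conceptual groupings of the initial codes.
focused_codes = {
    "unsure of search words": "struggling with vocabulary",
    "trial-and-error terms": "struggling with vocabulary",
    "asking others for vocabulary": "seeking vocabulary help",
}

# Theoretical coding: compare and contrast the focused codes to see what unites them.
by_focused = defaultdict(list)
for excerpt, code in initial_codes:
    by_focused[focused_codes[code]].append(excerpt)

for focused, excerpts in by_focused.items():
    print(f"{focused}: {len(excerpts)} excerpt(s)")
# Comparing these focused codes suggests a central theme such as
# "discovering vocabulary," the category the authors identified.
```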

Lessons Learned from a Black and Minority Health Fair’s 15-Month Follow-up Counseling

In health information outreach, one of the most common ways to connect with the public is through a health fair.  Recent literature has shown that school health fairs improve health knowledge in the short term (Freedman, 2010) and that health fairs can be a valuable way to enhance public awareness of specific health professions or services (Stamat, Injety, Pohlod, & Aguwa, 2008).  Additionally, participating in health fairs as exhibitors is a manageable outreach activity that can help address health literacy and health information inequalities in specific communities (Pomerantz, Muhammad, Downey & Kind, 2010).  For non-profit health organizations and medical libraries, health fairs provide important outreach opportunities and ways to reach beyond a traditional sphere of users.  Exhibitors can share information targeted to a specific population or demonstrate new online resources and applications for health information access and health literacy.

Even though the studies above show positive short-term effects of health fairs on health information and health literacy, it can be difficult to evaluate long-term effects or to determine whether attending a health fair changes participants’ behavior.  As such, it can also be difficult to justify the staff time and monetary expense of hosting a health fair.  However, results from a recent research study show that health fairs have the potential to provide a foundation for ongoing, integrated outreach.  In a study conducted in Indiana in 2006-2007, health professional contact 10 and 15 months after a health fair (originally undertaken to determine the fair’s effectiveness) led to more in-depth and personalized involvement with participants, which resulted in greater changes in health behavior and attitudes.  Librarians can use health fairs to begin an engagement process with targeted individuals.  Additionally, librarians can evaluate health fairs using pre- and post-test methods similar to those in this study, while focusing more on assessing changes in participants’ health information literacy and knowledge.

The study, published in 2011, evaluated both the short-term and long-term health effects of attending a health fair and assessed the impact of personalized follow-up health counseling sessions for select participants.  The Indiana Black and Minority Health Fair included more than 100 health education booths and addressed a variety of health topics, including prevention and screening as well as general healthy living.  Sponsored by the Indiana State Department of Health, the annual health fair is part of an ongoing effort to develop health literacy skills.  The study was constructed as a pre-post test evaluation, with baseline data collected at the health fair and follow-up data collected 10 and 15 months later.  All participants from the original sample group received health education materials between the 10- and 15-month posttests, and a select group participated in health counseling sessions with a registered nurse.  The researcher designed the study to explore a paradigm where “health fair encounters were aimed to translate from episodic experiences to long-term educational experiences via 15-month follow-up health counseling sessions in a partnership among the state government…local government…and a university…” (p. 898).

From a booth stationed at the health fair, the researchers used a pretest to establish a baseline of willing participants’ health information knowledge.  In this study, the pretest was a 16-item short-form questionnaire adapted from the Behavioral Risk Factor Surveillance System questionnaire, with questions on “perceived body weight, vigorous and moderate physical activity, TV/video watching hours, fruit and vegetable consumption, tobacco use, and perceived health status” (p. 899).  The pretest captured the major health indicators and behaviors of health fair participants.

The first posttest was conducted 10 months later as a follow-up mail survey to all health fair participants, in the form of a 99-item long-form questionnaire, also adapted from the Behavioral Risk Factor Surveillance System.  This round had a response rate of 47%, with 266 participants responding.  The posttest evaluated whether attending the health fair had any short-term effects on participants’ health knowledge and other health habits.  It looked at the “participants’ health knowledge, eating behaviors, sedentary behaviors, exercise self-efficacy, exercise goal setting and adherence, and changes in health status” (p. 899).  Results from this posttest showed that, 10 months after the health fair, more people perceived themselves as overweight (an increase of almost 7% from the baseline measure) and fewer people watched TV/videos 4 hours or more on a usual weekday (a drop of almost 22%).  These changes were not matched by changes in any other notable health behaviors or conditions included in the questionnaires.

This lack of health behavior change is an important aspect of the study, and a revealing one for librarians.  Health fairs may not produce immediate changes, but they can provide an educational space where health professionals and librarians can connect with at-risk populations and start a conversation about their health behaviors.  This potential was seen in the intervention step of the study, in which health information materials and personalized counseling offered over three to four months created another opportunity to connect health information professionals with targeted populations.

All participants from the first posttest received mailed health education materials; this was considered the comparison group.  A small subset also agreed to personalized health counseling by a registered nurse via phone.  This intervention group set at least one health goal and received four counseling phone calls over the course of 12-16 weeks.  All comparison and intervention group participants then received a second posttest questionnaire 15 months after the health fair (the same 99-item long-form questionnaire used in the first posttest).  This second posttest evaluated the effect of the personalized follow-up health counseling sessions as well as the effect of receiving the health information materials.  Of the initial 266 participants, 188 followed through with the second posttest.  The second posttest showed some increases in self-reported measures of general health status, with the percentage of overweight or obese participants decreasing in both the intervention and comparison groups.  Additionally, participants in the intervention group showed a significant increase in choosing leaner meats, along with improvements in exercise self-efficacy, exercise goal setting and adherence, and health knowledge; the gain in health knowledge, however, was small, since the emphasis in the intervention group was on addressing individuals’ health concerns.  A takeaway message from this study is that long-term follow-up interventions can provide health support over an extended period, which can increase the likelihood of healthy outcomes and health literacy.
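As a quick back-of-the-envelope check of the figures quoted above (a sketch based only on the numbers in this summary, not on the article’s own tables), the 47% response rate and the 266 and 188 respondent counts imply the following sample size and retention:

```python
# Back-of-the-envelope arithmetic from the figures quoted above; not taken from the article's tables.
first_posttest_respondents = 266     # responded at 10 months
first_posttest_response_rate = 0.47  # reported response rate

# Implied size of the baseline health fair sample that received the mail survey.
implied_baseline = first_posttest_respondents / first_posttest_response_rate
print(f"Implied baseline sample: ~{implied_baseline:.0f} participants")  # ~566

second_posttest_respondents = 188    # completed the 15-month posttest
retention = second_posttest_respondents / first_posttest_respondents
print(f"Retention from 10- to 15-month posttest: {retention:.0%}")       # ~71%
```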

Overall, this study shows that health fairs can be an important stepping-stone for health information outreach and evaluation.  By their nature, health fairs are typically a one-time interaction, and participants risk information overload when a fair is unfocused.  Librarians can use health fairs as an initial point of contact with population groups and develop an ongoing relationship by providing consistent, meaningful health information materials and personalized outreach to former health fair participants.  Furthermore, evaluations of this ongoing outreach (written or online questionnaires, in-person focus groups, or even individual case studies) can provide compelling support for continued participation in health fairs.

Looking further ahead, Seo also states that the findings of this study have other implications for health promotion and education, in particular “explor[ing] the possibility of utilizing BMHF [Black and Minority Health Fair] participants as lay health workers or social support groups in their own communities to address low health literacy and high-risk lifestyles in this population” (p. 903).  In other words, by investing in a small group of participants over the long term, starting from a simple health fair, you can reach new (and possibly elusive) groups of people or target small, specific high-risk populations.  Short-term and long-term evaluation can complement this kind of outreach program by tracking changes in teaching methods, changes in health information literacy within communities, or differences over time in the health knowledge and health information literacy of the lay health workers themselves.  This study shows how health information professionals can build long-term relationships with targeted populations by using health fairs as an initial contact point.  With coordinated evaluation strategies, health information professionals can ensure that their long-term health outreach activities remain meaningful, relevant, and adaptive to their targeted populations’ needs.


Survey Research Problems and Solutions

Susan Starr’s editorial in the January 2012 issue of JMLA (“Survey research: we can do better.” J Med Libr Assoc. 2012 January; 100(1):1-2) is a very clear presentation of three common problems that disqualify article submissions from being full-length JMLA research papers.  Making the point that the time to address these problems is during survey development (i.e., before the survey is administered), she also suggests solutions that are best practices for any survey:

Problem #1:  The survey does not answer a question of interest to a large enough group of JMLA readers.  (For example, a survey that is used to collect data about local operational issues.)
Solution #1:  Review the literature to identify an issue of general importance.

Problem #2:  The results cannot be generalized.  (Results might be biased if respondents differ from nonrespondents.)
Solution #2:  Address sample bias by sending the survey to a representative sample and using techniques that encourage a high response rate; by including questions that help determine whether sample bias is a concern; and by comparing characteristics of the sample and the respondents to those of the study population.
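One concrete way to act on Solution #2 is to compare respondents with the study population on a characteristic you know for both, such as job setting.  A minimal sketch, assuming hypothetical counts and a known population breakdown, might use a chi-square goodness-of-fit test:

```python
# A minimal sketch of checking for nonresponse bias; counts and proportions are hypothetical.
# Requires scipy (pip install scipy).
from scipy.stats import chisquare

# Hypothetical counts of respondents by job setting.
respondents = [60, 25, 15]             # e.g., hospital, academic, public librarians

# Expected counts if respondents mirrored the known population proportions.
population_props = [0.50, 0.30, 0.20]  # hypothetical population breakdown
total = sum(respondents)
expected = [p * total for p in population_props]

stat, p_value = chisquare(respondents, f_exp=expected)
print(f"chi-square = {stat:.2f}, p = {p_value:.3f}")
# A small p-value suggests respondents differ from the population,
# i.e., sample bias may be a concern.
```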

Problem #3:  Answers to survey questions do not provide the information that is needed.  (For example, questions might be ambiguous or might not address all aspects of the issue being studied.)
Solution #3:  Begin survey development by interviewing a few representatives from the survey population to be sure all critical aspects of the topic have been covered, and then pretest the survey.

Logic Models: Handy Hints

The Coffee Break Demonstration webinar for Thursday, January 5, from the American Evaluation Association was “5 Hints for Making Logic Models Worth the Time and Effort.”  CDC Chief Evaluation Officer Tom Chapel provided this list:

1.  See the model as a means, not an end in itself

His point here was that you may not NEED a logic model for successful program planning, but you will ALWAYS need a program description that covers need, target groups, intended outcomes, activities, and causal relationships.  He advised us to identify “accountable” short-term outcomes, where the link between the project and subsequent changes can be made clear, and to differentiate between those and the longer-term outcomes to which the program contributes.

2.  Process use may be the highest and best use

Logic models are useful for ongoing evaluation and adjustment of activities for continuous quality improvement.

3.  Let form follow function

You can make a logic model as detailed and complex as you need it to be, and you can use whatever format works best for you.  He pointed out that the real “action” is in the middle of the model: its “heart” is the description of what the program does and who or what will change as a result.  He advised us to create simple logic models that focus on these essentials to aid communication about a program.  A simple logic model can then serve as a frame of reference for more complexity, such as details about the program and how it will be evaluated.
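To make “form follows function” concrete, a simple logic model can be little more than a structured outline of the program’s “heart.”  Here is a minimal, hypothetical sketch in Python (the program details are invented for illustration, not drawn from Chapel’s webinar):

```python
# A hypothetical, minimal logic model kept as plain data; program details are invented.
logic_model = {
    "need": "Clinic patients have trouble finding reliable consumer health information",
    "target_groups": ["clinic patients", "clinic staff"],
    "activities": ["train staff on MedlinePlus", "distribute take-home resource guides"],
    "outputs": ["number of staff trained", "number of guides distributed"],
    # Short-term, "accountable" outcomes the project can clearly link to its activities:
    "short_term_outcomes": ["staff refer patients to MedlinePlus"],
    # Longer-term outcomes the program contributes to but cannot claim alone:
    "long_term_outcomes": ["patients make more informed health decisions"],
}

# The "heart" of the model: what the program does, and who or what will change.
for activity in logic_model["activities"]:
    print(f"Activity: {activity}")
for outcome in logic_model["short_term_outcomes"]:
    print(f"Accountable short-term outcome: {outcome}")
```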

4.  Use additional vocabulary sparingly, but correctly

Mediators such as activities and outputs help us understand the underlying “logic” of our program.  Moderators are contextual factors that will facilitate or hinder outcome achievement.

5.  Think “zebras,” not “horses”

This is a variation of the saying, “when you hear hoofbeats, think horses, not zebras.”   My interpretation of this hint is that it’s a reminder that in evaluation we are looking not only for expected outcomes but also unexpected ones.   According to Wikipedia, “zebra” is medical slang for a surprising diagnosis.

You can find the slides for Tom Chapel’s presentation in the American Evaluation Association’s Evaluation eLibrary.

Can Tweets Predict Citations?

“Can Tweets Predict Citations? Metrics of Social Impact Based on Twitter and Correlation with Traditional Metrics of Scientific Impact.”  G. Eysenbach, Journal of Medical Internet Research 2011; 13(4):e123

This article describes an investigation of whether tweets predict highly cited articles.  The author looked at 1573 “tweetations” (tweets about articles) of 55 articles in the Journal of Medical Internet Research and their subsequent citation data from Scopus and Google Scholar.  There was a correlation:  highly tweeted articles were 11 times more likely to be highly cited than less-tweeted articles.  The author proposes a “twimpact factor” (the cumulative number of tweetations within a certain number of days after publication) as a near-real-time measure of the reach of research findings.
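As a rough illustration of the proposed metric (a sketch of the general idea only, not Eysenbach’s exact computation), the twimpact factor for an article is simply the cumulative count of tweetations within a fixed window after publication:

```python
# A rough sketch of computing a "twimpact factor": cumulative tweetations within
# a fixed number of days after publication. The dates below are invented for illustration.
from datetime import date

def twimpact_factor(publication_date, tweet_dates, window_days=7):
    """Count tweetations occurring within `window_days` of publication."""
    return sum(
        0 <= (tweet - publication_date).days <= window_days
        for tweet in tweet_dates
    )

published = date(2011, 1, 10)
tweets = [date(2011, 1, 10), date(2011, 1, 12), date(2011, 1, 20)]
print(twimpact_factor(published, tweets, window_days=7))  # 2 tweetations within 7 days
```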

Strategies for Improving Response Rate

There are articles about strategies to improve survey response rates with health professionals in the open-access December 2011 issue of Evaluation and the Health Professions.  Each article explored variations on Dillman’s Tailored Design Method, also known as TDM (see this Issue Brief from the University of Massachusetts Medical School’s Center for Mental Health Services Research for a summary of TDM).

“Surveying Nurses: Identifying Strategies to Improve Participation” by J. VanGeest and T.P. Johnson (Evaluation and the Health Professions 34(4):487-511)

The authors conducted a systematic review of efforts to improve response rates to nurse surveys, and found that small financial incentives were effective and nonmonetary incentives were not effective.  They also found that postal and telephone surveys were more successful than web-based approaches.

“Surveying Ourselves:  Examining the Use of a Web-Based Approach for a Physician Survey” by K.A. Matteson; B.L. Anderson; S.B. Pinto; V. Lopes; J. Schulkin; and M.A. Clark (Eval Health Prof 34(4):448-463)

The authors distributed a survey via paper and the web to a national sample of obstetrician-gynecologists and found little systematic difference between responses using the two modes, except that university physicians were more likely to complete the web-based version than private practice physicians.  Data quality was also better for the web survey: fewer missing and inappropriate responses.  The authors speculate that university-based physicians may spend more time at computers than do private physicians.  However, given that response rate was good for both groups, the authors conclude that using web-based surveys is appropriate for physician populations and suggest controlling for practice type.

“Effects of Incentives and Prenotification on Response Rates and Costs in a National Web Survey of Physicians” by J. Dykema; J. Stevenson; B. Day; S.L. Sellers; and V.L. Bonham (Eval Health Prof 34(4):434-447, 2011)

The authors found that response rates were highest in groups that were entered into a $50 or $100 lottery.  They also found that postal prenotification letters increased response rates, although including a small $2 token preincentive in the letter had no additional effect and was not cost-effective.  The authors conclude that larger promised incentives are more effective than nominal preincentives.

“A Randomized Trial of the Impact of Survey Design Characteristics on Response Rates among Nursing Home Providers” by M. Clark et al. (Eval Health Prof 34(4):464-486)

This article describes an experiment in maximizing participation by both the Director of Nursing and the Administrator of long-term care facilities.  One of the variables was incentive structure, in which the amount of the incentive increased if both participated and decreased if only one participated.  The authors found no differences in the likelihood of both respondents participating by mode, questionnaire length, or incentive structure.

Evaluating Information Skills Training in Health Libraries

In her 2007 systematic review of how health libraries evaluate their training activities, Alison Brettle points out that these evaluations are designed to address various questions:  Are class participants learning?  Are resources being used in the best way?  Are more resources needed?  What changes should be made to improve materials and methods?  The review focuses on measures that examine changes in class participants’ knowledge, skills, or behavior.  Most of these measures were used in the following study designs:

  • Pre-experimental (one group, post-test only; one group with pre- and post-test; two groups post-test only)
  • Quasi-experimental (control group, pre- and post-testing without randomization)
  • Randomized controlled experiments

A few of the studies in the review were qualitative, and some were descriptive.  Methods of measuring outcomes of information skills classes included:

  • A score sheet or checklist listing features of a search
  • Surveys, including perceptions of the training, participant’s confidence, and ability to use knowledge gained
  • The Objective Structured Clinical Examination (OSCE), in which medical students perform clinical tasks in a short period of time, with literature searching included as one of the tasks

Appendix 2 of the article lists the studies that Brettle reviewed, describes methodologies and tools, and indicates how (or whether) instruments were deemed reliable and valid.  (Quick review: reliable instruments produce the same results if used again in the same situation; valid instruments actually measure what they claim to measure and might produce generalizable results.)
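For instance, one common way to gauge the test-retest reliability of a search-skills checklist is to correlate scores from two administrations of the same instrument to the same group.  A minimal sketch, using invented scores (this is a general illustration, not a method from the review), might be:

```python
# A minimal sketch of a test-retest reliability check using Pearson correlation.
# Scores are invented for illustration; requires scipy (pip install scipy).
from scipy.stats import pearsonr

scores_time1 = [14, 18, 9, 16, 12, 20, 15, 11]   # first administration of a search-skills checklist
scores_time2 = [15, 17, 10, 16, 11, 19, 14, 12]  # same participants, second administration

r, p_value = pearsonr(scores_time1, scores_time2)
print(f"test-retest correlation r = {r:.2f} (p = {p_value:.3f})")
# A high correlation suggests the instrument produces consistent results
# when used again in the same situation.
```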

This systematic review is open access and you can find the full text here:
Brettle, A. “Evaluating information skills training in health libraries: a systematic review.” Health Information & Libraries Journal 24(Supplement s1):18-37, December 2007.

Comparison of a Postal Survey and Mixed-Mode Survey

This open-access article from the Journal of Medical Internet Research features a comparison between two modes of questionnaire delivery:  postal only, and mixed-mode (email followed up by postal mail).  The authors looked at:

  • Respondent characteristics
  • Response rate
  • Response time
  • Rating scale responses
  • Data quality
  • Total costs

Here are the conclusions:

“Mixed-mode surveys are an alternative method to postal surveys that yield comparable response rates and groups of respondents, at lower costs. Moreover, quality of health care was not rated differently by respondents to the mixed-mode or postal survey. Researchers should consider using mixed-mode surveys instead of postal surveys, especially when investigating younger or more highly educated populations.”

Zuidgeest, M., et al. “A Comparison of a Postal Survey and Mixed-Mode Survey Using a Questionnaire on Patients’ Experiences With Breast Care.”  Journal of Medical Internet Research 2011;13(3):e68.

How Many Interviews Are Enough?

When planning an evaluation that features interviews, it is difficult to know in advance how many interviews should be conducted.  The usual approach is to continue interviewing until you no longer hear new information (Olney & Barnes, Data Collection and Analysis, p. 23).  However, there are times when a numeric guideline would be very useful.  For example, such a guideline would help in creating the budget for the evaluation section of a proposal when that evaluation features qualitative data collection.  Guest et al. conducted a research project in which they analyzed transcripts of sixty interviews and found that 94% of the coded topics were identified within the first six interviews.  Analysis of six additional interviews revealed only another 3% of all the coded topics that were eventually found in the sixty interviews.  They concluded that data saturation (the point at which no additional data are being found) occurred by the time they had analyzed twelve interviews.  They point out that similar evidence has been presented by Nielsen and Landauer, who found that technology usability studies uncover 80% of problems after six evaluations and 90% after twelve.  In a later publication, Nielsen showed that usability testing with fifteen participants will uncover 100% of problems, but he recommends using a smaller number (“Why You Only Need to Test with Five Users,” Jakob Nielsen’s Alertbox, March 19, 2000).

Is twelve the magic number for interviews, then?  Not necessarily.  Guest et al. caution that their study involved highly structured interviews with members of a relatively homogeneous group.  In addition, the interview topics were familiar to the participants.  With a heterogeneous group or a diffuse (vague) topic, more interviews will probably be needed.  The bottom line is to select a purposeful group of interview candidates carefully.  Olney and Barnes provide more details about purposeful sampling (Data Collection and Analysis, p. 23).
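If you keep track of which coded topics appear in each new interview, you can watch this kind of saturation curve develop yourself.  A minimal sketch, using invented codes (not data from Guest et al.), counts how many new topics each additional interview contributes:

```python
# A minimal sketch of tracking data saturation across interviews; the codes are invented.
codes_per_interview = [
    {"vocabulary", "time pressure", "database choice"},
    {"vocabulary", "asking peers"},
    {"time pressure", "full-text access"},
    {"vocabulary", "database choice"},
    {"asking peers"},
]

seen = set()
for i, codes in enumerate(codes_per_interview, start=1):
    new_codes = codes - seen          # topics not heard in any earlier interview
    seen |= codes
    print(f"Interview {i}: {len(new_codes)} new code(s), {len(seen)} total")
# When successive interviews stop adding new codes, you have likely reached saturation.
```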

The article by Guest et al. is not open access, but this is a link to the authors’ abstract:  Guest, G., et al. “How many interviews are enough? An experiment with data saturation and variability.” Field Methods 18(1):59-82, February 2006.