
Archive for the ‘Research Reads’ Category

Maximize your response rate

Did you know that the American Medical Association has a specific recommendation for its authors about questionnaire response rate? Here it is, from the JAMA Instructions for Authors:

Survey Research
Manuscripts reporting survey data, such as studies involving patients, clinicians, the public, or others, should report data collected as recently as possible, ideally within the past 2 years. Survey studies should have sufficient response rates (generally at least 60%) and appropriate characterization of nonresponders to ensure that nonresponse bias does not threaten the validity of the findings. For most surveys, such as those conducted by telephone, personal interviews (eg, drawn from a sample of households), mail, e-mail, or via the web, authors are encouraged to report the survey outcome rates using standard definitions and metrics, such as those proposed by the American Association for Public Opinion Research.
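For a rough sense of what AAPOR-style outcome rates look like in practice, here is a minimal Python sketch of a simplified response-rate calculation along the lines of AAPOR's RR1 (completed questionnaires divided by all eligible and potentially eligible cases). The counts are hypothetical, and the full AAPOR Standard Definitions include refinements not shown here.

```python
# Simplified response-rate calculation in the spirit of AAPOR RR1.
# The counts below are hypothetical; real studies should classify case
# dispositions using the full AAPOR Standard Definitions.

complete = 310        # completed questionnaires (I)
partial = 25          # partial completions (P)
refusals = 80         # refusals and break-offs (R)
non_contact = 60      # eligible but never reached (NC)
other_eligible = 10   # other eligible non-respondents (O)
unknown = 40          # cases of unknown eligibility (UH + UO)

# RR1 counts only completes in the numerator and treats all
# unknown-eligibility cases as eligible in the denominator.
rr1 = complete / (complete + partial + refusals + non_contact
                  + other_eligible + unknown)

print(f"Response rate (RR1): {rr1:.1%}")  # about 59% with these numbers
```

With these made-up numbers the rate falls just below the 60% rule of thumb, which is exactly the situation in which characterizing nonresponders becomes important.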

Meanwhile, response rates to questionnaires have been declining over the past 20 years, as reported by the Pew Research Center in "The Problem of Declining Response Rates." Why should we care about the AMA's recommendation regarding questionnaire response rates? Many of us send questionnaires to health care professionals who, like physicians, are very busy and might not pay attention to our efforts to learn about them. Even JAMA authors such as Johnson and Wislar have pointed out that 60% is only a "rule of thumb" that masks a more complex issue (Johnson TP, Wislar JS. "Response Rates and Nonresponse Errors in Surveys." JAMA, May 2, 2012, Vol 307, No. 17, p. 1805). These authors recommend evaluating nonresponse bias in order to characterize differences between those who respond and those who don't. Standard techniques for doing so include:

  • Conduct a follow-up survey with nonrespondents
  • Use data about your sampling frame and study population to compare respondents to nonrespondents
  • Compare the sample with other data sources
  • Compare early and late respondents (see the sketch below)
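One of these techniques, comparing early and late respondents, lends itself to a simple statistical check. The sketch below is illustrative only: the yes/no item and the counts are invented, and it uses SciPy's chi-square test to ask whether the two waves answered differently, which would hint at nonresponse bias (late respondents are sometimes treated as a rough stand-in for nonrespondents).

```python
# Illustrative early-vs-late respondent comparison with hypothetical data.
# A large difference between waves suggests possible nonresponse bias.
from scipy.stats import chi2_contingency

# Rows: early wave, late wave. Columns: "yes" and "no" answers to a
# hypothetical yes/no questionnaire item.
table = [
    [120, 80],   # early respondents
    [40, 60],    # late respondents
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Early and late respondents differ; investigate nonresponse bias.")
else:
    print("No clear difference between waves on this item.")
```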

Johnson and Wislar’s article is not open access, unfortunately, but you can find more suggestions about increasing response rates to your questionnaires in two recent AEA365 blog posts that are open access:

Find more useful advice (e.g., make questionnaires short, personalize your mailings, send full reminder packs to nonrespondents) in this open access article: Sahlqvist S, et al. "Effect of questionnaire length, personalisation and reminder type on response rate to a complex postal survey: randomised controlled trial." BMC Medical Research Methodology 2011;11:62.

“Evidence” — what does that mean?

In our health information outreach work we are expected to provide evidence of the value of our work, but there are varying definitions of the word "evidence." The classical evidence-based medicine approach (featuring results from randomized controlled clinical trials) is a model that is not always relevant to our work. At the 2013 EBLIP7 meeting in Saskatoon, Saskatchewan, Canada, Denise Koufogiannakis presented a keynote address that is now available as an open access article on the web:

Koufogiannakis, D. "What We Talk About When We Talk About Evidence." Evidence Based Library and Information Practice 2013;8(4).

This article looks at various interpretations of what it means to provide "evidence," such as:

  • theoretical (ideas, concepts, and models to explain how and why something works),
  • empirical (measuring outcomes and effectiveness via empirical research), and
  • experiential (people's experiences with an intervention).

Koufogiannakis points out that academic librarians' decisions are usually made by groups of people working together, and she proposes a new model for evidence-based library and information practice:

1) Articulate – come to an understanding of the problem and articulate it. Set boundaries and clearly articulate a problem that requires a decision.

2) Assemble – assemble evidence from multiple sources that are most appropriate to the problem at hand. Gather evidence from appropriate sources.

3) Assess – place the evidence against all components of the wider overarching problem. Assess the evidence for its quantity and quality. Evaluate and weigh evidence sources. Determine what the evidence says as a whole.

4) Agree – determine the best way forward and if working with a group, try to achieve consensus based on the evidence and organizational goals. Determine a course of action and begin implementation of the decision.

5) Adapt – revisit goals and needs. Reflect on the success of the implementation. Evaluate the decision and how it has worked in practice. Reflect on your role and actions. Discuss the situation with others and determine any changes required.

Koufogiannakis concludes by reminding us that "Ultimately, evidence, in its many forms, helps us find answers. However, we can't just accept evidence at face value. We need to better understand evidence – otherwise we don't really know what 'proof' the various pieces of evidence provide."

Institute for Research Design in Librarianship: 9 days in southern CA; full scholarships available

Do you want to learn about how your user groups and communities find and use information? Do you want to gather evidence to demonstrate that your work is making a difference?

Exciting news! You can work on these questions, and questions like them, June 16-26, 2014!

The Institute for Research Design in Librarianship is a great opportunity for an academic librarian who is interested in conducting research. Research and evaluation are not necessarily identical, although they do employ many of the same methods and are closely related. This Institute is open to academic librarians from all over the country. If your proposal is accepted, your attendance at the Institute will be paid for, as will your travel, lodging, and food expenses.

The William H. Hannon Library has received a three-year grant from the Institute for Museum and Library Services (IMLS) to offer a nine-day continuing education opportunity for academic and research librarians. Each year 21 librarians will receive instruction in research design and a full year of support to complete a research project at their home institutions. The summer Institute for Research Design in Librarianship (IRDL) is supplemented with pre-institute learning activities and a personal learning network that provides ongoing mentoring. The institutes will be held on the campus of Loyola Marymount University in Los Angeles, California.

The Institute is particularly interested in applicants who have identified a real-world research question and/or opportunity. It is intended to

“bring together a diverse group of academic and research librarians who are motivated and enthusiastic about conducting research but need additional training and/or other support to perform the steps successfully. The institute is designed around the components of the research process, with special focus given to areas that our 2010 national survey of academic librarians identified as the most troublesome; the co-investigators on this project conducted the survey to provide a snapshot of the current state of academic librarian confidence in conducting research. During the nine-day institute held annually in June, participants will receive expert instruction on research design and small-group and one-on-one assistance in writing and/or revising their own draft research proposal. In the following academic year, participants will receive ongoing support in conducting their research and preparing the results for dissemination.”

Your proposal is due by February 1, 2014. Details are available at the Institute’s Prepare Your Proposal web site.

Factoid: Loyola Marymount is on a bluff above the Pacific Ocean, west of central LA.

eHealth Literacy Demands and Barriers: An Evaluation Matrix

Chan CV, Kaufman DR. "A framework for characterizing eHealth literacy demands and barriers." Journal of Medical Internet Research 2011;13(4):e94.

Researchers from Columbia University have developed a matrix of literacy types and cognitive complexity levels that can be used to assess an individual's eHealth competence and to develop eHealth curricula.  This tool can also be used to design and evaluate eHealth resources.  eHealth literacy is defined as "a set of skills and knowledge that are essential for productive interactions with technology-based health tools."  The authors' objectives were to understand the core skills and knowledge needed to use eHealth resources effectively and to develop a set of methods for analyzing eHealth literacy.  They adapted Norman and Skinner's eHealth literacy model to characterize six components of eHealth literacy:

  1. Computer literacy
  2. Information literacy
  3. Media literacy
  4. Traditional literacy
  5. Science literacy
  6. Health literacy

The authors used Amer’s revision of Bloom’s cognitive processes taxonomy to classify six cognitive process dimensions, ranked in order of increasing complexity:

  1. Remembering
  2. Understanding
  3. Applying
  4. Analyzing
  5. Evaluating
  6. Creating

They used the resulting matrix to characterize the demands of eHealth tasks (Table 3) and to describe an individual's performance on one of the tasks (Table 5), with a cognitive task analysis coding scheme based on the six cognitive process dimensions.
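To make the structure of such a matrix concrete, here is a small Python sketch that crosses the six literacy types with the six cognitive process dimensions and files a couple of hypothetical task demands into cells. The example tasks and cell assignments are invented for illustration and are not taken from the article's tables.

```python
# Sketch of a literacy-by-cognitive-process matrix for coding eHealth
# task demands. The tasks and cell assignments below are hypothetical.

literacies = ["computer", "information", "media",
              "traditional", "science", "health"]
processes = ["remembering", "understanding", "applying",
             "analyzing", "evaluating", "creating"]

# Each cell holds the list of task demands coded to that combination.
matrix = {(lit, proc): [] for lit in literacies for proc in processes}

# Hypothetical demand: judging the trustworthiness of a health website
# draws on information literacy at the "evaluating" level.
matrix[("information", "evaluating")].append(
    "Judge whether a health website is a credible source")

# Hypothetical demand: entering a search query draws on computer
# literacy at the "applying" level.
matrix[("computer", "applying")].append(
    "Enter a search query into a consumer health portal")

for (lit, proc), demands in matrix.items():
    if demands:
        print(f"{lit} literacy x {proc}: {demands}")
```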

Can Tweets Predict Citations?

Eysenbach G. "Can Tweets Predict Citations? Metrics of Social Impact Based on Twitter and Correlation with Traditional Metrics of Scientific Impact." Journal of Medical Internet Research 2011;13(4):e123.

This article describes an investigation of whether tweets predict highly cited articles.  The author looked at 1573 “tweetations” (tweets about articles) of 55 articles in the Journal of Medical Internet Research and their subsequent citation data from Scopus and Google Scholar.  There was a correlation:  highly tweeted articles were 11 times more likely to be highly cited than less-tweeted articles.  The author proposes a “twimpact factor” (the cumulative number of tweetations within a certain number of days since publication) as a near-real time measure of reach of research findings.
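As a rough illustration of the proposed metric, the Python sketch below counts tweetations that fall within a chosen window after an article's publication date. The dates are invented, and the seven-day window is just one possible cutoff.

```python
# Illustrative twimpact-factor calculation: count tweetations that occur
# within a fixed window after publication. Dates here are hypothetical.
from datetime import date, timedelta

published = date(2011, 4, 1)
tweetation_dates = [
    date(2011, 4, 1), date(2011, 4, 2), date(2011, 4, 2),
    date(2011, 4, 6), date(2011, 4, 20),
]

window = timedelta(days=7)
twimpact_7d = sum(
    1 for d in tweetation_dates if timedelta(0) <= d - published <= window
)

print(f"Twimpact factor (7-day window): {twimpact_7d}")  # 4 with these dates
```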

Strategies for Improving Response Rate

The open access December 2011 issue of Evaluation and the Health Professions contains several articles about strategies to improve survey response rates with health professionals.  Each explored variations on Dillman's Tailored Design Method, also known as TDM (see this Issue Brief from the University of Massachusetts Medical School's Center for Mental Health Services Research for a summary of TDM).

"Surveying Nurses: Identifying Strategies to Improve Participation" by J. VanGeest and T.P. Johnson (Evaluation and the Health Professions, 34(4):487-511)

The authors conducted a systematic review of efforts to improve response rates to nurse surveys, and found that small financial incentives were effective and nonmonetary incentives were not effective.  They also found that postal and telephone surveys were more successful than web-based approaches.

"Surveying Ourselves: Examining the Use of a Web-Based Approach for a Physician Survey" by K.A. Matteson; B.L. Anderson; S.B. Pinto; V. Lopes; J. Schulkin; and M.A. Clark (Eval Health Prof 34(4):448-463)

The authors distributed a survey via paper and the web to a national sample of obstetrician-gynecologists and found little systematic difference between responses using the two modes, except that university physicians were more likely to complete the web-based version than private practice physicians.  Data quality was also better for the web survey: fewer missing and inappropriate responses.  The authors speculate that university-based physicians may spend more time at computers than do private physicians.  However, given that response rate was good for both groups, the authors conclude that using web-based surveys is appropriate for physician populations and suggest controlling for practice type.

"Effects of Incentives and Prenotification on Response Rates and Costs in a National Web Survey of Physicians" by J. Dykema; J. Stevenson; B. Day; S.L. Sellers; and V.L. Bonham (Eval Health Prof 34(4):434-447, 2011)

The authors found that response rates were highest in groups that were entered into a $50 or $100 lottery.  They also found that postal prenotification letters increased response rates, although the small $2 token preincentive included with the letters had no additional effect and was not cost-effective.  The authors conclude that larger promised incentives are more effective than nominal preincentives.

"A Randomized Trial of the Impact of Survey Design Characteristics on Response Rates among Nursing Home Providers" by M. Clark et al. (Eval Health Prof 34(4):464-486)

This article describes an experiment in maximizing participation by both the Director of Nursing and the Administrator of long-term care facilities.  One of the variables was incentive structure, in which the amount of incentive increased if both participated, and decreased if only one participated.  The authors found that there were no differences in the likelihood of both respondents participating by mode, questionnaire length, or incentive structure.

Evaluating Information Skills Training in Health Libraries

In her 2007 systematic review of how health libraries evaluate their training activities, Alison Brettle points out that these evaluations are designed to address various questions:  Are class participants learning?  Are resources being used in the best way?  Are more resources needed?  What changes should be made to improve materials and methods?   This review focuses on measures that examine changes in class participants' knowledge, skills, or behavior.  The majority of these measures were used in the following study designs:

  • Pre-experimental (one group, post-test only; one group with pre- and post-test; two groups, post-test only) – a sketch of a pre/post comparison follows this list
  • Quasi-experimental (control group, pre- and post-testing without randomization)
  • Randomized controlled experiments
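For the one-group pre- and post-test design, the analysis often comes down to comparing paired scores. The Python sketch below applies SciPy's paired t-test to invented search-skills quiz scores; it illustrates the design in general terms and is not a method drawn from Brettle's review.

```python
# Illustrative analysis for a one-group pre-/post-test design: compare
# each participant's search-skills quiz score before and after training.
# The scores are invented for this example.
from scipy.stats import ttest_rel

pre_scores = [55, 60, 48, 72, 65, 58, 70, 62]
post_scores = [68, 74, 60, 80, 70, 66, 78, 71]

t_stat, p_value = ttest_rel(post_scores, pre_scores)
mean_gain = sum(post - pre for pre, post in zip(pre_scores, post_scores)) / len(pre_scores)

print(f"Mean gain: {mean_gain:.1f} points, t = {t_stat:.2f}, p = {p_value:.3f}")
```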

A few of the studies in the review were qualitative, and some were descriptive.  Methods of measuring outcomes of information skills classes included:

  • A score sheet or checklist listing features of a search
  • Surveys, including perceptions of the training, participants' confidence, and ability to use knowledge gained
  • The Objective Structured Clinical Examination (OSCE), in which medical students perform clinical tasks in a short period of time, with literature searching included as one of the tasks

Appendix 2 of the article lists the studies that Brettle reviewed, describes methodologies and tools, and indicates how (or whether) instruments were deemed reliable and valid.  (Quick review: reliable instruments produce the same results if used again in the same situation; valid instruments actually measure what they claim to measure, and might produce generalizable results.)

This systematic review is open access and you can find the full text here:
Brettle, A. "Evaluating information skills training in health libraries: a systematic review." Health Information & Libraries Journal, 24(Supplement s1):18-37, December 2007.

Comparison of a Postal Survey and Mixed-Mode Survey

This open access article from the Journal of Medical Internet Research features a comparison between two modes of questionnaire delivery:  postal only, and mixed-mode (email followed up by postal mail).  The authors looked at:

  • Respondent characteristics
  • Response rate
  • Response time
  • Rating scale responses
  • Data quality
  • Total costs

Here are the conclusions:

“Mixed-mode surveys are an alternative method to postal surveys that yield comparable response rates and groups of respondents, at lower costs. Moreover, quality of health care was not rated differently by respondents to the mixed-mode or postal survey. Researchers should consider using mixed-mode surveys instead of postal surveys, especially when investigating younger or more highly educated populations.”

Zuidgeest, M., et al. "A Comparison of a Postal Survey and Mixed-Mode Survey Using a Questionnaire on Patients' Experiences With Breast Care."  Journal of Medical Internet Research (JMIR) 2011;13(3):e68.

How Many Interviews Are Enough?

When planning an evaluation that features interviews, it is difficult to know in advance how many interviews should be conducted.  The usual approach is to continue to interview until you no longer hear new information.  (Olney & Barnes, Data Collection and Analysis, p. 23)  However, there are times when a numeric guideline would be very useful.  For example, such a guideline would help in creating the budget for the evaluation section of a proposal when that evaluation features qualitative data collection.  Guest, et al., conducted a research project in which they analyzed transcripts of sixty interviews and found that 94% of the coded topics that appeared were identified within six interviews.  Analysis of six additional interviews revealed only another 3% of all the coded topics that were eventually found in the sixty interviews.  They concluded that data saturation (the point at which no additional data are being found) occurred by the time they had analyzed twelve interviews.  They point out that similar evidence has been presented by Nielsen and Landauer, who found that technology usability studies uncover 80% of problems after six evaluations, and 90% after twelve.  In a later publication, Nielsen showed that usability testing with fifteen participants will uncover 100% of problems, but he recommends using a smaller number (“Why You Only Need to Test with Five Users” from Jakob Nielsen’s Alertbox, March 19, 2000).
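To make the idea of a saturation point concrete, here is a small Python sketch that tracks how many previously unseen codes each successive interview contributes. The code sets are invented; in practice the codes would come from analyzed transcripts.

```python
# Illustrative saturation check: count how many previously unseen codes
# each successive interview adds. The code sets below are invented.

interviews = [
    {"access", "cost", "training"},
    {"access", "privacy", "training"},
    {"cost", "privacy", "time"},
    {"access", "time"},
    {"training", "time"},
    {"access", "privacy"},
]

seen = set()
for i, codes in enumerate(interviews, start=1):
    new_codes = codes - seen
    seen |= codes
    print(f"Interview {i}: {len(new_codes)} new code(s), {len(seen)} total so far")

# When several interviews in a row contribute no new codes, the analysis
# has likely reached saturation.
```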

Is twelve the magic number for interviews, then?  Not necessarily.  Guest, et al. caution that their study involved highly structured interviews with members of a relatively homogeneous group.  In addition, the interview topics were familiar to the participants.  With a heterogeneous group or a diffuse (vague) topic, more interviews will probably be needed.  The bottom line is to select a purposeful group of interview candidates carefully.  Olney and Barnes provide more details about purposeful sampling (Olney & Barnes, Data Collection and Analysis, p. 23).

The article by Guest et al. is not open access, but this is a link to the authors' abstract:  Guest, G., et al. "How many interviews are enough? An experiment with data saturation and variability." Field Methods, February 2006, 18(1):59-82.

The Value of Academic Libraries

Last year the Association of College and Research Libraries issued a very substantial and thorough review of the research that has been done on how to measure library value:  "The Value of Academic Libraries: A Comprehensive Research Review and Report" by Megan Oakleaf.  Although its focus is academia, there are sections reviewing work in public libraries, school libraries, and special libraries.  I recommend the section on special libraries, which has quite an emphasis on medical libraries, including references to past work regarding clinical impacts.

For those who are interested in approaches that libraries have taken to establishing their value, there is potential benefit in reading the entire report cover-to-cover.  For those who want a quick overview, these are sections that I recommend:

  • Executive Summary
  • Defining “Value”
  • Special Libraries

For more information about this report, see “A tool kit to help academic librarians demonstrate their value” from the 9/14/2010 issue of the Chronicle of Higher Education.

The full report is available at http://www.acrl.ala.org/value/.