

NN/LM Outreach Evaluation Resource Center

Archive for the ‘Research Reads’ Category

The STAR Model for Developing Health Promotion Web Sites

Tuesday, July 8th, 2008

Skinner, H.A.; Maley, O.; Norman, C.D. “Developing Internet-based ehealth promotion programs: The Spiral Technology Action Research (STAR) Model.” Health Promotion Practice 2006; 7(4):406-417.

The STAR model combines technology development with community involvement and continuous improvement through five cycles: listen, plan, do, study, and act. The “listen” cycle corresponds to community assessment: learning about needs and opportunities and building partnerships and stakeholder buy-in. The “plan” and “do” cycles involve identifying objectives and strategies, then prototyping and designing to address the needs identified. The “study” cycle corresponds to process evaluation of web sites or prototypes, and in the final “act” cycle, decisions are made and actions taken based on the evaluation results (promotion, ongoing feedback collection and refinement, and sustainability). The article presents a case study of the model in use, along with methods for approaching each of the five cycles.

Storytelling and Behavior Change

Tuesday, July 8th, 2008

Hinyard, L.J.; Kreuter, M.W. “Using narrative communication for health behavior change: a conceptual, theoretical, and empirical overview.” Health Education & Behavior 2007; 34(5):777-792.

This article advocates the use of narrative communication to motivate people to change their health behaviors, pointing out that “understanding any situation involves storing and retrieving stories from memory.” The authors speculate that narrative ways of learning and knowing may be especially useful when addressing issues for which reason and logic have limitations, such as morality, religion, values, and social relationships. Narratives can help overcome resistance to a message, facilitate observational learning, and provide identification with characters. Stories can be combined with more “scientific” methods to achieve optimum results.

Health Promotion Facilitators and Barriers

Tuesday, July 1st, 2008

Robinson, K.L.; Driedger, M.S.; Elliott, S.J.; Eyles, J. “Understanding facilitators of and barriers to health promotion practice.” Health Promotion Practice 2006; 7:467-476.

The authors state that although the “field of health promotion has shifted to embrace a socioecological model of health recognizing the role of environmental and contextual factors on health promotion practice and health outcomes,” most health promotion research “continues to focus on behavioral or risk factor outcomes.” Published studies of health promotion facilitators and barriers have tended to focus on one of the three linked stages of health promotion practice: capacity building for planning and development; delivery of health promotion activities; and evaluation and/or research. Barriers to evaluation and research include:

  • Health promotion activities rarely have simple, direct cause-effect relationships to test
  • Health interventions involve many factors and processes that cannot easily be quantified
  • Monitoring in rural areas or at the community level poses significant logistical and financial barriers
  • Tension exists between “scientific rigor” and participatory evaluation processes that aim to influence practice

The article characterizes facilitators and barriers to health promotion practice as internal (leadership, staffing, resources, priority/interest, infrastructure, and organization of teams and groups) and external (community buy-in, turnover of local contacts, partnerships or collaboration, socioeconomic/demographic/political contexts, and funding opportunities or cuts).

Identifying Opinion Leaders

Monday, June 30th, 2008

Valente, T.W.; Pumpuang, P. “Identifying Opinion Leaders to Promote Behavior Change.” Health Education & Behavior 2007; 34:881.

This article begins by listing how opinion leaders can help with health promotion efforts:

  • Provide entrée and legitimation
  • Provide communication from their communities
  • Act as role models for behavior change
  • Convey health messages
  • Contribute to sustainability after a specific program has ended

Programs that use peer opinion leaders are generally more effective than those that do not. Opinion leaders influence behavior in their communities through awareness-raising, persuasion, norm establishment/reinforcement, and resource leveraging. Opinion leaders are also known as champions, lay health advisors, health advocates, promotoras, behavior change agents, peer leaders, and community leaders. The best methods for identifying opinion leaders will vary depending on a project’s characteristics and setting; this article presents ten methods:

  1. Celebrities (recruit people who are nationally, regionally, or locally known)
  2. Self-selection (solicit volunteers)
  3. Self-identification (administer questionnaire with a leadership scale)
  4. Staff selected (project staff select leaders based on community observation)
  5. Positional (community members who occupy leadership positions)
  6. Judge’s ratings (knowledgeable community members identify leaders)
  7. Expert identification (trained ethnographers study community)
  8. Snowball (ask who people go to for advice, then interview them in turn)
  9. Sample sociometric (randomly selected respondents nominate leaders; those receiving frequent nominations are chosen)
  10. Sociometric (all respondents are interviewed and those receiving frequent nominations are selected)

Ideally, a health promotion project would use multiple methods to find and select opinion leaders. Once they are identified and recruited, training and support are essential.
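The sociometric approaches (methods 8–10) all reduce to the same nomination-count logic: ask respondents whom they go to for advice, tally the nominations, and select the most frequently named people. A minimal sketch of that tally follows; the function name and interview data are hypothetical, not from the article:

```python
from collections import Counter

def sociometric_leaders(nominations, top_n=2):
    """Tally advice nominations and return the top_n most-named members.

    nominations: list of (respondent, nominee) pairs gathered in interviews.
    """
    counts = Counter(nominee for _, nominee in nominations)
    return [name for name, _ in counts.most_common(top_n)]

# Hypothetical interview data: each respondent names one advice-giver.
interviews = [
    ("Ana", "Dana"), ("Ben", "Dana"), ("Cam", "Dana"),
    ("Dee", "Eli"), ("Eve", "Eli"), ("Fay", "Kim"),
]
print(sociometric_leaders(interviews))  # Dana and Eli receive the most nominations
```

In the full sociometric method every community member would be interviewed; the sample sociometric variant applies the same tally to a randomly selected subset of respondents.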

Advice on Writing Competitive Grant Proposals

Wednesday, June 13th, 2007

Langille, L.; MacKenzie, T. “Navigating the road to success: A systematic approach to preparing competitive grant proposals.” EBLIP 2007; 2(1):23-31.

This article gives pragmatic advice on writing competitive grant proposals, organized around 11 basic principles of grant preparation, including “address a specific audience,” “be innovative,” “involve stakeholders,” and “define your objectives and outcomes.” The tips are concrete enough that an applicant could turn them into a checklist of characteristics to include in a proposal. The article emphasizes writing to two audiences: the funding agency and the reviewers. While it specifically describes preparation of a research grant proposal, many of the principles apply to any grant application. RML staff members who teach proposal writing or provide one-on-one help to network members with applications may find the article useful for articulating the characteristics of a good proposal, and those reviewing grant applications will find good criteria for judging them. The article includes a list of online sources to support proposal writing and project planning, as well as a timeline that shows applicants the lead time needed to prepare a good proposal.

Qualitative-based Evidence

Thursday, May 24th, 2007

Brophy, P. “Narrative-based practice.” EBLIP 2007; 2(1):149-158.

Given, L. “Evidence-based practice and qualitative research: A primer for library and information professionals.” EBLIP 2007; 2(1):15-21.

In the movement promoting evidence-based library and information practice — defined as the use of formalized strategies for incorporating evidence into daily practice — the definition of the body of knowledge constituting “best evidence” continues to evolve. In the two articles cited above, which appear in the same issue of EBLIP, authors Brophy and Given argue for the inclusion of qualitative studies in that body of knowledge. While randomized controlled trials (RCTs) have long been considered the gold standard for producing evidence in the disciplines embracing evidence-based practice, Brophy and Given both argue that social fields like librarianship must look to qualitative studies to answer questions of “why” and “in what context.” They also argue that we cannot fully understand social context without methods that emphasize listening to people, observing behavior, and reviewing textual and pictorial documents.

Brophy’s article, a commentary, promotes the use of a database of high-quality narratives (or stories) to inform practice — something he calls “narrative-based practice.” He writes, “We are more likely to find meaning in the telling of how things have been experienced by others than in the formality of arid statistics and measures” (p. 156). Thus, he believes that narratives must be presented alongside statistics to help managers with their “evidence-based” decision making.

Given’s article presents a more informative treatment of qualitative research, with examples of its three primary methodologies: interviews; observation; and analysis of textual data (e.g., participant-created documents like journals, or existing texts like policy manuals and meeting minutes). She also discusses some standard criteria for assessing qualitative research — criteria that differ considerably from those used to judge quantitative research.

Any number of articles have been published that argue for the legitimacy of qualitative research, but Brophy and Given go a step further. They believe that qualitative studies are essential to the development of a complete body of knowledge for informing practice.

Scale to Measure eHealth Literacy

Friday, December 8th, 2006

Source: Norman, C.D.; Skinner, H.A. “eHEALS: The eHealth Literacy Scale.” Journal of Medical Internet Research 2006; 8(4):e27.

The eHealth Literacy Scale is designed to measure consumers’ knowledge, skill, and comfort with finding, evaluating, and using electronic health resources. A scale is a measurement instrument, designed for research and evaluation, that is composed of several (usually three or more) items. A participant’s responses to these items are combined into one score (e.g., by averaging or summing) to provide a single measure of a specific concept — in this case, eHealth literacy. A reliable scale is one that is consistent or stable — characteristics evaluated through a variety of methods. For instance, all items in this scale are supposed to measure the same concept, so the developers checked whether participants’ answers were consistent across all of the items. Norman and Skinner also ran a factor analysis, which tests whether the eight items are related to one “theme.” This statistical method looks at patterns of responses and can indicate how many themes (known as factors) are needed to explain variation in how people answered the questions. (The researchers name the factors by looking at the items that the statistics show belong together.) For the eHealth Literacy Scale, one factor appears to be adequate, which further supports its reliability. Finally, the developers tested whether participants’ answers remained consistent (or stable) when they completed the scale on several occasions.
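The scoring and internal-consistency checking described above can be illustrated in a few lines. This is a generic sketch (summed scale scores plus Cronbach’s alpha, a standard internal-consistency statistic), not code or data from the article; the responses below are invented:

```python
def score_scale(responses):
    """Combine each respondent's item answers into one scale score by summing."""
    return [sum(r) for r in responses]

def cronbach_alpha(responses):
    """Internal consistency: do all items appear to measure the same concept?

    responses: list of respondents, each a list of k item answers.
    """
    k = len(responses[0])

    def variance(xs):
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / len(xs)

    item_vars = [variance([r[i] for r in responses]) for i in range(k)]
    total_var = variance(score_scale(responses))
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Invented 5-point responses from four respondents to a three-item scale.
answers = [[4, 5, 4], [2, 2, 3], [5, 5, 5], [3, 3, 3]]
print(score_scale(answers))               # one summed score per respondent
print(round(cronbach_alpha(answers), 2))  # values near 1 suggest high consistency
```

Test–retest reliability (the “several occasions” check) would be assessed separately, for example by correlating participants’ scores across the two administrations.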

Since many of us do not have the skills to test measurement scales, it is nice to have one with a track record available in the literature. (Norman and Skinner provide the scale in the article.) One thing to remember, however, is that reliability is necessary but not sufficient for validity: reliability tells us nothing about whether the scale is actually measuring eHealth literacy (a scale can be consistently wrong). Hopefully, Norman and Skinner or others will publish future evidence of the eHealth Literacy Scale’s validity, but its absence should not prevent the rest of us from using the scale. In evaluation, we seldom make decisions based on one source of information, so we just need to pay attention to whether the scale’s findings corroborate our other evaluation findings. If they do, you probably can feel comfortable using the data alongside those findings; if they do not, you can explore the inconsistencies and possibly gain a deeper understanding of the program you are evaluating.

Types of Information Needs among Cancer Patients: A Systematic Review

Monday, November 6th, 2006

Full citation: Ankem, K. Types of information needs among cancer patients: A systematic review. LIBRES 2005 Sept; 15 (2):

The Ankem article is a literature review and meta-analysis of articles investigating how the situational and demographic characteristics of cancer patients affect their need for different types of health information. For instance, the article reported that patients’ preferred role in making treatment-related decisions affects their need for information. Disease-related information was ranked highest in need, while information about social activities, sexual issues, and self-care received lower rankings. Gender, age, and time since diagnosis had some effect on how patients rated the importance of different types of information. This article provides insight into factors for librarians to consider when locating health information for cancer patients.

A Randomized Controlled Trial Comparing the Effect of E-learning, with a Taught Workshop, on the Knowledge and Search Skills of Health Professionals

Monday, October 30th, 2006

Full Citation: Pearce-Smith, N. “A randomized controlled trial comparing the effect of e-learning, with a taught workshop, on the knowledge and search skills of health professionals.” EBLIP 2006; 1(3):44-56.

The randomized controlled trial design is a method that can yield compelling evidence of a project’s success.

The Evidence Base for Cultural and Linguistic Competency in Health Care

Monday, October 30th, 2006

Full Citation: Goode, T.D.; Dunne, M.C.; Bronheim, S.C. The Evidence Base for Cultural and Linguistic Competency in Health Care: Executive Summary. New York, NY: The Commonwealth Fund; 2006 Oct.

This executive summary reports on a review of the evidence regarding the “impact of cultural and linguistic competence in health and mental health care on health outcomes and well-being and the costs and benefits to the system.”

Last updated on Saturday, 23 November, 2013

Funded by the National Library of Medicine under contract # HHS-N-276-2011-00008-C.