
Archive for the ‘Research Reads’ Category

Nuggets from the Health Program Evaluation Field

Grembowski, D. The Practice of Health Program Evaluation. Sage, 2001.

Not a new book, but an interesting one, with information of potential use to us in thinking about evaluating health information outreach. Some general perspectives from the book:

  • Most evaluations are conducted to answer two questions:  Is the program working?  Why or why not?
  • All evaluation is political since judging worth is based on attaching values.
  • Evaluation as a 3-act play:  Act 1 is asking questions; Act 2 is answering them; Act 3 is using these answers in decision-making.
  • Evaluators’ roles range from objective researcher through participant, coach, and advocate.
  • Evaluations look at the “theories” behind programs, such as the causes and effects of implementing activities.
  • Premises underlying cost-effectiveness analysis: health care resources are scarce, resources have alternate uses, people have different priorities, there are never enough resources to satisfy all.
  • Evaluation standards include utility (results are intended to be used), feasibility (methods should be realistic and practical), propriety (methods should be ethical, legal, and respectful of the rights and interests of all participants), accuracy (produce sound information and conclusions that are related logically to data).

The “LIMB” Model: Lay Information Mediary Behavior

Abrahamson, J.A.; Fisher, K.E. “‘What’s past is prologue’: towards a general model of lay information mediary behaviour.” Information Research 2007; 12(4).

Health information outreach is often aimed at information mediaries in addition to primary information seekers. The article defines lay information mediaries as “those who seek information in a non-professional or informal capacity on behalf (or because) of others without necessarily being asked to do so, or engaging in follow-up.” These individuals are also known as gatekeepers, change agents, communication channels, links, navigators, and innovators. The authors present a generalized model of information mediary characteristics, activities, motivations, barriers, and facilitators, and they raise the question of what differences exist between primary information seekers and information mediaries, since “the caregiver-as-person may have information needs that vary from the caregiver-as-caregiver.” These are factors we can take into account in community assessment activities.

The STAR Model for Developing Health Promotion Web Sites

Skinner, H.A.; Maley, O.; Norman, C.D. “Developing Internet-based ehealth promotion programs: The Spiral Technology Action Research (STAR) Model.” Health Promotion Practice 2006; 7(4):406-417.

The STAR model combines technology development with community involvement and continuous improvement through five cycles: listen, plan, do, study, and act. The “listen” cycle corresponds to community assessment: learning about needs and opportunities, and building partnerships and stakeholder buy-in. The “plan” and “do” cycles involve identifying objectives and strategies, followed by prototyping and design to address identified community needs. The “study” cycle corresponds to process evaluation of web sites or prototypes, followed by the “act” cycle, in which decisions are made and actions taken based on evaluation results (promotion, ongoing feedback collection and continued refinement, and sustainability). The article presents a case study of the model in use, plus methods for approaching each of the five cycles.

Storytelling and Behavior Change

Hinyard, L.J.; Kreuter, M.W. “Using narrative communication for health behavior change: a conceptual, theoretical, and empirical overview.” Health Education & Behavior 2007; 34(5):777-792.

This article advocates the use of narrative communication to motivate people to change their health behaviors, pointing out that “understanding any situation involves storing and retrieving stories from memory.” The authors speculate that narrative ways of learning and knowing may be especially useful when addressing issues for which reason and logic have limitations, such as morality, religion, values, and social relationships. Narratives can help overcome resistance to a message, facilitate observational learning, and foster identification with characters. Stories can be combined with more “scientific” methods to achieve optimum results.

Health Promotion Facilitators and Barriers

Robinson, K.L.; Driedger, M.S.; Elliott, S.J.; Eyles, J. “Understanding facilitators of and barriers to health promotion practice.” Health Promotion Practice 2006; 7:467-476.

The authors state that although the “field of health promotion has shifted to embrace a socioecological model of health recognizing the role of environmental and contextual factors on health promotion practice and health outcomes,” most health promotion research “continues to focus on behavioral or risk factor outcomes.” Published studies of health promotion facilitators and barriers have tended to focus on one of the three linked stages of health promotion practice: capacity building for planning and development; delivery of health promotion activities; and evaluation and/or research. Barriers to evaluation and research include: health promotion activities rarely have simple, direct cause-effect relationships to test; health interventions involve many factors and processes that cannot easily be quantified; monitoring in rural areas or at the community level poses significant logistical and financial barriers; and tension exists between “scientific rigor” and participatory evaluation processes that aim to influence practice.

The article characterizes facilitators and barriers to health promotion practice as internal (leadership, staffing, resources, priority/interest, infrastructure, and organization of teams and groups) and external (community buy-in, turnover of local contacts, partnerships or collaboration, socioeconomic/demographic/political contexts, and funding opportunities or cuts).

Identifying Opinion Leaders

Valente, T.W.; Pumpuang, P. “Identifying Opinion Leaders to Promote Behavior Change.” Health Education & Behavior 2007; 34:881.

This article begins by listing how opinion leaders can help with health promotion efforts:

  • Provide entree and legitimation
  • Provide communication from their communities
  • Act as role models for behavior change
  • Convey health messages
  • Contribute to sustainability after a specific program has ended

Programs that use peer opinion leaders are generally more effective than those that do not. Opinion leaders influence behavior in their communities through awareness-raising, persuasion, norm establishment/reinforcement, and resource leveraging. Opinion leaders are also known as champions, lay health advisors, health advocates, promotoras, behavior change agents, peer leaders, and community leaders. The best methods for identifying opinion leaders will vary depending on a project’s characteristics and setting; this article presents ten methods:

  1. Celebrities (recruit people who are nationally, regionally, or locally known)
  2. Self-selection (solicit volunteers)
  3. Self-identification (administer questionnaire with a leadership scale)
  4. Staff selected (project staff select leaders based on community observation)
  5. Positional (community members who occupy leadership positions)
  6. Judges’ ratings (knowledgeable community members identify leaders)
  7. Expert identification (trained ethnographers study community)
  8. Snowball (ask who people go to for advice, then interview them in turn)
  9. Sample sociometric (randomly selected respondents nominate leaders; those receiving frequent nominations are chosen)
  10. Sociometric (all respondents are interviewed and those receiving frequent nominations are selected)

Ideally, a health promotion project would use multiple methods to find and select opinion leaders. Once they are identified and recruited, training and support are essential.
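
To make the nomination counting behind the sociometric approaches (methods 8 through 10 above) concrete, here is a minimal sketch. The respondents, names, and counts are hypothetical; a real project would collect nominations through the interviews or questionnaires the article describes:

```python
from collections import Counter

# Hypothetical nomination data: each respondent names the person
# they go to for health advice.
nominations = [
    ("resp01", "Maria"), ("resp02", "Maria"), ("resp03", "James"),
    ("resp04", "Maria"), ("resp05", "Ana"),   ("resp06", "James"),
    ("resp07", "Maria"), ("resp08", "Ana"),   ("resp09", "James"),
]

# Tally how often each community member is nominated.
tally = Counter(nominee for _, nominee in nominations)

# Select the most frequently nominated members as candidate opinion leaders.
top_k = 2
for name, count in tally.most_common(top_k):
    print(f"{name}: {count} nominations")
```

In the sample sociometric method, the nominations would come from a random sample of community members; in the full sociometric method, from everyone.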

Advice on Writing Competitive Grant Proposals

Langille, L.; MacKenzie, T. “Navigating the road to success: a systematic approach to preparing competitive grant proposals.” EBLIP 2007; 2(1):23-31.

This article gives pragmatic advice about writing competitive grant proposals, organized around 11 basic principles of grant preparation, including “address a specific audience,” “be innovative,” “involve stakeholders,” and “define your objectives and outcomes.” The tips are concrete enough that an applicant could turn them into a checklist of characteristics to include in a proposal. The article emphasizes writing to two audiences: the funding agency and reviewers. While it specifically describes preparation of a research grant proposal, many of the principles apply to any grant application. RML staff members who teach proposal writing or provide one-to-one help to network members with their applications may find the article useful for articulating the characteristics of a good proposal, and those reviewing grant applications will find good criteria for judging them. The article includes a list of online sources to support proposal writing and project planning, as well as a timeline that shows applicants the lead time needed to prepare a truly good proposal.

Qualitative-based Evidence

Brophy, P. “Narrative-Based Practice.” EBLIP 2007; 2(1):149-158.

Given, L. “Evidence-Based Practice and Qualitative Research: A Primer for Library and Information Professionals.” EBLIP 2007; 2(1):15-21.

In the movement promoting evidence-based library and information practice, defined as the use of formalized strategies for incorporating evidence into daily practice, the definition of the body of knowledge constituting “best evidence” continues to evolve. In the two articles cited above, which appear in the same issue of EBLIP, authors Brophy and Given argue for the inclusion of qualitative studies in that body of knowledge. While randomized controlled trials (RCTs) have long been considered the gold standard for producing evidence in the disciplines embracing evidence-based practice, both authors contend that social fields like librarianship must look to qualitative studies to answer questions of “why” and “in what context.” They also argue that we cannot fully understand social context without methods that emphasize listening to people, observing behavior, and reviewing textual and pictorial documents.

Brophy’s article, a commentary, promotes the use of a database of high-quality narratives (or stories) to inform practice, something he calls “narrative-based practice.” He writes, “We are more likely to find meaning in the telling of how things have been experienced by others than in the formality of arid statistics and measures” (p. 156). Thus, he believes that narratives must be presented alongside statistics to help managers with their “evidence-based” decision making.

Given’s article presents a more informative treatment of qualitative research, with examples of its three primary methodologies: interviews, observation, and analysis of textual data (e.g., participant-created documents like journals, or existing texts like policy manuals and meeting minutes). She also discusses some standard criteria for assessing qualitative research, which differ considerably from the criteria used to judge quantitative research.

Many articles have argued for the legitimacy of qualitative research, but Brophy and Given go a step further: they believe that qualitative studies are essential to the development of a complete body of knowledge for informing practice.

Scale to Measure eHealth Literacy

Source: Norman, C.D.; Skinner, H.A. “eHEALS: The eHealth Literacy Scale.” Journal of Medical Internet Research 2006; 8(4):e27.

The eHealth Literacy Scale is designed to measure consumers’ knowledge, skill, and comfort with finding, evaluating, and using electronic health resources. A scale is a measurement instrument designed for research and evaluation that comprises several (usually three or more) items. A participant’s responses to these items are combined into one score (e.g., by averaging or summing) to provide a single measure of a specific concept, in this case eHealth literacy.

A reliable scale is one that is consistent or stable, characteristics that can be evaluated through a variety of methods. For instance, all items in this scale are supposed to measure the same concept, so the developers checked whether participants’ answers were consistent across all of the items. Norman and Skinner also ran a factor analysis, which tests whether the eight items relate to one “theme.” This statistical method looks at patterns of responses and can indicate how many themes (known as factors) are needed to explain variations in how people answered the questions. (The researchers name the factors by looking at the items that the statistics show belong together.) For the eHealth Literacy Scale, one factor seems to be adequate, which further supports its reliability. Finally, the developers tested whether participants’ answers remained consistent (or stable) when they completed the scale on several occasions.
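
To make the scoring and internal-consistency ideas concrete, here is a minimal sketch in Python. The response data are invented for illustration, and Cronbach’s alpha is shown as one common internal-consistency statistic; see the article itself for the actual instrument and the analyses the authors report:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Internal-consistency estimate for a respondents-by-items matrix."""
    k = scores.shape[1]                         # number of items
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 6 participants x 8 Likert items scored 1-5.
responses = np.array([
    [4, 4, 3, 4, 5, 4, 4, 3],
    [2, 3, 2, 2, 3, 2, 3, 2],
    [5, 5, 4, 5, 5, 4, 5, 5],
    [3, 3, 3, 2, 3, 3, 2, 3],
    [4, 5, 4, 4, 4, 5, 4, 4],
    [1, 2, 1, 2, 2, 1, 2, 1],
])

# Each participant's scale score is the sum of his or her item responses.
scale_scores = responses.sum(axis=1)
print("Scale scores:", scale_scores)
print("Cronbach's alpha: %.2f" % cronbach_alpha(responses))
```

If answers to the eight items move together across participants, as they should when all items tap one underlying concept, alpha will be close to 1.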

Since many of us do not have the skills to test measurement scales, it is nice to have one with a track record available in the literature. (Norman and Skinner provide the scale in this article.) One thing to remember, however, is that reliability is necessary but not sufficient for validity: a scale can be consistently wrong, so reliability alone tells us nothing about whether this scale actually measures eHealth literacy. Hopefully, Norman and Skinner or others will publish future studies showing evidence for the eHealth Literacy Scale’s validity. In the meantime, that should not prevent the rest of us from using it. In evaluation, we seldom make decisions based on one source of information, so we just need to pay attention to the scale’s findings in our studies and see whether they corroborate other evaluation findings. If they do, you can probably feel comfortable using the data along with your other evaluation findings. If they do not, you can explore the inconsistencies and possibly gain a deeper understanding of the program you are evaluating.

Types of Information Needs among Cancer Patients: A Systematic Review

Full citation: Ankem, K. “Types of information needs among cancer patients: A systematic review.” LIBRES 2005; 15(2). http://libres.curtin.edu.au/libres15n2/index.htm

The Ankem article is a literature review and meta-analysis of articles investigating how the situational and demographic characteristics of cancer patients affect their need for different types of health information. For instance, the article reports that patients’ preferred role in making treatment-related decisions affects their need for information. Disease-related information was ranked highest in need, while information about social activities, sexual issues, and self-care issues received lower rankings. Gender, age, and time since diagnosis had some effect on how patients rated the importance of different types of information. This article provides insight into factors for librarians to consider when locating health information for cancer patients.