NEO Shop Talk

The blog of the National Network of Libraries of Medicine Evaluation Office

Archive for 2006

Community Collaboration Measures and Resources

Friday, December 8th, 2006

The Aspen Institute Roundtable on Comprehensive Community Initiatives has a Measures for Community Success database with descriptions of over 100 evaluation tools and methods for evaluating community initiatives. Many of these tools were designed for very specific types of initiatives, such as child development or substance abuse prevention. Still, the resources are useful for getting ideas for community-based indicators or for evaluating collaborations. Some entries appear to have more generic applications, such as the Collaboration Index and Collaborative Assessment of Capacity. (Note: I could not find these online, so they may be more context-specific than their descriptions indicate.) Some entries will also lead you to names of experts who may have written articles of interest to you.

This is mainly a database of descriptions. If you find something of interest, you will have to dig for more information, either by searching the literature for the instrument or the author’s name or by contacting the person listed in each entry. Still, it seems to be a rich resource for learning how to work with community partners.

Scale to Measure eHealth Literacy

Friday, December 8th, 2006

Source: Norman CD, Skinner HA. eHEALS: The eHealth Literacy Scale. Journal of Medical Internet Research 2006; 8(4): e27.

The eHealth Literacy Scale is designed to measure consumers’ knowledge, skill, and comfort with finding, evaluating, and using electronic health resources. A scale is a measurement instrument designed for research and evaluation that is composed of several (usually three or more) items. A participant’s responses to these items are combined into one score (e.g., by averaging or summing) to provide a single measure of a specific concept – in this case, eHealth literacy. A reliable scale is one that is consistent or stable, characteristics evaluated through a variety of methods. For instance, all items in this scale are supposed to measure the same concept, so the developers checked whether participants’ answers were consistent across all of the items. Norman and Skinner also ran a factor analysis, which tests whether the 8 items are related to one “theme.” This statistical method looks at patterns of responses and can indicate how many themes (known as factors) are needed to explain variation in how people responded to the questions. (The researchers name the factors by looking at the items that the statistics show belong together.) For the eHealth Literacy Scale, one factor seems to be adequate, which further supports its reliability. Finally, the developers tested whether participants’ answers remained consistent (or stable) when they completed the scale on several occasions.
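The scoring and internal-consistency check described above are easy to see in code. Here is a minimal sketch in Python (assuming NumPy is available), with invented responses to eight Likert-type items; the sample data and the cronbach_alpha helper are illustrative assumptions of mine, not material from the Norman and Skinner article.

```python
import numpy as np

# Invented responses: rows = participants, columns = the 8 scale items
# (Likert-type, scored 1-5). Real eHEALS data would have this shape.
responses = np.array([
    [4, 4, 5, 4, 3, 4, 4, 5],
    [2, 3, 2, 2, 3, 2, 2, 3],
    [5, 5, 4, 5, 5, 4, 5, 5],
    [3, 3, 3, 2, 3, 3, 3, 2],
    [4, 5, 4, 4, 4, 5, 4, 4],
])

# Each participant's item responses are combined into one score
# (summing here; averaging works the same way).
total_scores = responses.sum(axis=1)

def cronbach_alpha(items):
    """Cronbach's alpha, a common index of internal consistency:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

print("Total scores:", total_scores)
print("Cronbach's alpha: %.2f" % cronbach_alpha(responses))
```

Values of alpha near 1 indicate that answers track together across all items, which is the kind of consistency check described in the paragraph above.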

Since many of us do not have the skills to test measurement scales, it is nice to have one with a track record available in the literature. (Norman and Skinner provide the scale in this article.) One thing to remember, however, is that reliability is necessary for validity but does not guarantee it. Reliability tells us nothing about whether this scale is actually measuring eHealth literacy (scales can be consistently wrong). Hopefully, Norman and Skinner or others will publish future studies that show evidence for the eHealth Literacy Scale’s validity. In the meantime, that should not prevent the rest of us from using it. In evaluation, we seldom make decisions based on one source of information, so we just need to pay attention to the scale’s findings in our studies to see whether they corroborate other evaluation findings. If they do, you can probably feel comfortable using the data alongside your other evaluation findings. If they do not, you can explore the inconsistencies and possibly gain a deeper understanding of the program you are evaluating.

Evaluation Learning Circles

Tuesday, November 21st, 2006

I attended a session at the 2006 American Evaluation Association conference about building “evaluation capacity” in organizations, and one presenter talked about “learning circles” as a way to teach evaluation methods. Others from NN/LM attended the session and confirmed to Susan and me that learning circles would be a worthwhile activity for the OERC to offer. (more…)

Cindy’s AEA Report

Friday, November 10th, 2006

AEA has been my favorite professional conference since I started attending about 5 years ago. This year, I attended some of the same sessions as my colleagues, so I won’t report on those. Here are some other sessions that I thought were noteworthy. (more…)

The Agony of AEA ’06: So Many Good Sessions, So Hard to Choose

Wednesday, November 8th, 2006

I was only able to attend 1.5 days at AEA this year, but even that short time was well worth the drive to Portland. I loved seeing so many of us RMLers at the meeting, and I definitely agree with Heidi’s encouragement that everyone would benefit from participating in the conference in some way, some day.

Highlights of the conference for me…

(more…)

Evaluation 2006 (AEA Annual Meeting) Doggy Bag from Heidi S.

Wednesday, November 8th, 2006

Really just thrown in the “bag” following the conference. I would encourage all liaisons to attend this at least once!!!

Some notable PEOPLE in the field (per Betsy Kelly):

Ray Maietta (coding)
Michael Quinn Patton

(more…)

Evaluation 2006 (AEA Annual Meeting) Tidbits from Susan B.

Wednesday, November 8th, 2006

The American Evaluation Association annual meeting was held last week in Portland, OR, where I attended two workshops and several sessions of interest. Here is a selection of tidbits that I picked up:

(more…)

International Conference on Evidence Based Library and Information Practice

Monday, November 6th, 2006

The 4th International Conference on Evidence Based Library and Information Practice (EBLIP4) will be held May 6-9, 2007, in Chapel Hill, followed by two days of continuing education. The conference is hosted by the School of Information and Library Science at UNC Chapel Hill and the UNC Institute on Aging. EBLIP4 welcomes proposals for high-quality papers and posters on research and innovative applications of evidence-based library and information practice in library and information management. The deadline for abstracts is December 1. Full information about the conference is available at: http://www.eblip4.unc.edu/

Types of Information Needs among Cancer Patients: A Systematic Review

Monday, November 6th, 2006

Full citation: Ankem K. Types of information needs among cancer patients: A systematic review. LIBRES 2005 Sep; 15(2): http://libres.curtin.edu.au/libres15n2/index.htm.

The Ankem article is a literature review and meta-analysis of studies investigating how the situational and demographic characteristics of cancer patients affect their need for different types of health information. For instance, the article reported that patients’ preferred role in making treatment-related decisions affects their need for information. Disease-related information was ranked highest in need, while information about social activities, sexual issues, and self-care issues received lower rankings. Gender, age, and time since diagnosis had some effect on how patients rated the importance of different types of information. This article provides insight into factors for librarians to consider when locating health information for cancer patients. (more…)

A Randomized Controlled Trial Comparing the Effect of E-learning, with a Taught Workshop, on the Knowledge and Search Skills of Health Professionals

Monday, October 30th, 2006

Full citation: Pearce-Smith N. A randomized controlled trial comparing the effect of e-learning, with a taught workshop, on the knowledge and search skills of health professionals. EBLIP 2006; 1(3):44-56.

The randomized controlled trial (RCT) design is a method that can yield compelling evidence of a project’s success, because random assignment helps ensure that differences in outcomes are due to the intervention rather than to pre-existing differences between the groups. (more…)
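To make the design’s key step concrete, here is a minimal Python sketch of an RCT’s core mechanics. The participants, group sizes, and scores are all invented for illustration; nothing here comes from the Pearce-Smith study itself.

```python
import random
import statistics

# Minimal RCT sketch (all data invented for illustration).
random.seed(42)  # fixed seed so the example is reproducible

# 20 hypothetical participants, randomly assigned to two conditions.
participants = list(range(1, 21))
random.shuffle(participants)  # random assignment is the crucial step
e_learning, workshop = participants[:10], participants[10:]

# Invented post-test search-skill scores for each participant.
scores = {pid: random.gauss(70, 10) for pid in participants}

print("e-learning mean score:", statistics.mean(scores[p] for p in e_learning))
print("workshop mean score:  ", statistics.mean(scores[p] for p in workshop))
```

Because assignment is random, a systematic difference between the two group means can more plausibly be attributed to the intervention than to pre-existing differences between the groups.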
