NEO Shop Talk

The blog of the National Network of Libraries of Medicine Evaluation Office

Archive for the ‘News’ Category

From QWERTY to Quality Responses: How To Make Survey Comment Boxes More Inviting

Friday, July 22nd, 2016

The ubiquitous comment box.  It’s usually stuck at the end of a survey with a simple label such as “Suggestions,” “Comments:” or “Please add additional comments here.”

Those of us who write surveys place over-idealistic faith in the potential of comment boxes, also known as open-ended survey items or questions.  These items will unleash our respondents’ desire to provide creative, useful suggestions! Their comments will shed light on the difficult-to-interpret quantitative findings from closed-ended questions!

In reality, responses in comment boxes tend to be sparse and incoherent. You get a smattering of “high-five” comments from your fans. A few longer responses may come from those with an ax to grind, although their feedback may be completely off topic.  More often, comment boxes are left blank, unless you make the mistake of requiring an answer before the respondent can move on to the next item. Then you’ll probably get a lot of QWERTYs in your blank space.

Let’s face it.  Comment boxes are the vacant lots of Survey City.  Survey writers don’t put much effort into cultivating them. Survey respondents don’t even notice them.

Can we do better than that?  Yes, we can, say the survey methods experts.

First, you have to appreciate this fact: open-ended questions ask a lot of respondents.  They have to create a response. That’s much harder than registering their level of agreement to a statement you wrote for them. So you need strategies that make open-ended questions easier and more motivating for the survey taker.

In his online class Don’t (Survey)Monkey Around: Learn to Make Your Surveys Work, Matthew Champagne provides the following tips for making comment boxes more inviting to respondents:

  • Focus your question. Get specific and give guidance on how you want respondents to answer. For example, “Please tell us what you think about our new web site. Tell us both what you like and what you think we can do better.” I try to make the question even easier by putting boundaries on how much I expect from them.  So, when requesting feedback on a training session, I might ask respondents: “Please describe one action step you will take based on what you learned in this class.”
  • Place the open-ended question near related closed-ended questions. For example, if you are asking users to rate the programming at your library, ask for suggestions for future programs right after they rate the current program. The closed-ended questions have primed them to write their response.
  • Give them a good reason to respond. A motivational statement tells respondents how their answers will be used. Champagne says that this technique is particularly effective if you can explain how their responses will be used for their personal benefit. For example, “Please give us one or two suggestions for improving our reference services.  Your feedback will help our reference librarians know how to provide better service to users like you.”
  • Give them room to write. You need a sizable blank space that encourages your respondents to be generous with their comments. Personally, when I’m responding to an open-ended comment on a survey, I want my entire response to be in view while I’m writing.  As a survey developer, I tend to use boxes that are about three lines deep and half the width of the survey page.

Do we know that Champagne’s techniques work?  In Dillman et al.’s classic book on survey methods, the authors present research findings to support Champagne’s advice. Adding motivational words to open-ended survey questions produced a 5-15 word increase in response length and a 12-20% increase in how many respondents submitted answers.  The authors caution, though, that you need to use open-ended questions sparingly for the motivational statements to work well. When four open-ended questions were added to a survey, the motivational statements worked better for questions placed earlier in the survey.

I should add, however, that you should never make your first survey question an open-ended one.  The format itself seems to make people close their browsers and run for the hills.  I always warm up the respondents with some easy closed-ended questions before they see an open-ended item.

Dillman et al. offer an additional technique for getting better responses to open-ended items: asking follow-up questions.  Many online software packages now allow you to take a respondent’s verbatim answer and repeat it in a follow-up question.  For example, a follow-up question about a respondent’s suggestions for improving the library facility might look like this:

“You made this suggestion about how to improve the library facility: ‘The library should add more group study rooms.’ Do you have any other suggestions for improving the library facility?” [Bolded statement is the respondent’s verbatim written comment.]

Follow-up questions like this have been shown to increase the detail of respondents’ answers to open-ended questions.  If you are interested in testing out this format, search your survey software system for instructions on “piping.”
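(For the curious: piping is really just string substitution. Here is a toy sketch in Python of what survey software does behind the scenes; it illustrates the idea only, and the prompts and function names are ours, not the syntax of any particular survey package.)

    # Toy illustration of survey "piping": store the respondent's verbatim
    # answer, then substitute it into the follow-up prompt. Real survey
    # packages do this through their own piping syntax, not a script.

    def ask(prompt: str) -> str:
        """Show a prompt and return the respondent's verbatim answer."""
        return input(prompt + "\n> ").strip()

    suggestion = ask("What is one suggestion for improving the library facility?")

    if suggestion:  # only pipe when the respondent actually wrote something
        follow_up = (
            "You made this suggestion about how to improve the library "
            f"facility: '{suggestion}'. Do you have any other suggestions "
            "for improving the library facility?"
        )
        other_suggestions = ask(follow_up)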

When possible, I like to use an Appreciative Inquiry approach for open-ended questions. The typical Appreciative Inquiry approach requires two boxes, for example:

  • Please tell us what you liked most about the proposal-writing workshop.
  • What could the instructors do to make this the best workshop possible on proposal writing?

People find it easier to give you an example rooted in experience.  We are storytellers at heart, and you are asking for a mini-story. Once they tell their story, they are better prepared to give you advice on how to improve that experience. The Appreciative Inquiry structure also gives specific guidance on how you want them to structure their responses.  The format used for the second question is more likely to gather actionable suggestions.

So if you really want to hear from your respondents, put some thought into your comment box questions.  It lets them know that you want their thoughtful answers in return.

Source:  The research findings reported in this post are from Internet, Phone, Mail and Mixed-Mode Surveys: The Tailored Design Method (4th ed.), by Dillman, Smyth, and Christian (Hoboken, NJ: John Wiley & Sons, Inc., 2014), pp. 128-134.

Rubber Duck Evaluation Planning

Friday, July 15th, 2016

Programmers have a process for solving coding problems called “Rubber Duck Debugging.” It emerged from the realization that when a programmer explained a problem they were having in their code to a non-programmer, the solution would suddenly come to them. Then they realized they could get the same result by explaining the problem to a rubber duck (or some other inanimate object), without having to bother anyone.  What they do is explain each line of code to the rubber duck until they hit on the solution to their problem.

How does this apply to evaluation planning?  Cindy and I kind of did this yesterday (for full disclosure, I will admit that I was the rubber duck).  We were walking through a complicated timeline for an evaluation process. It had a lot of steps. It was easy to leave one out. Some of them overlapped. We really had to explain to each other how it was going to happen.

Rubber Duck Debugging can be employed at almost any stage of the evaluation planning process. Here are some examples:

Logic Models

When creating a logic model, you usually start from the right side (the outcomes you want to see), work your way left to the activities that will bring about those outcomes, then further left to the things you need to have in place to do the activities (here’s a sample logic model from the NEO’s Booklet 2, Planning Outcomes Based Outreach Projects).  Once you’ve got your first draft of the logic model, get your rubber duck and carefully describe your logic model to it from left to right, saying “If we have these things in place, then we will be able to do these activities. If these activities are done correctly, they will lead to these results we want to see.  If those things happen, over time it is logical that these other long-term outcomes may happen.”  Explain thoroughly so the duck understands how it all works and you know you haven’t missed anything.
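If you would like to see the duck’s script spelled out, here is a toy sketch in Python. The stages and entries are invented for illustration (they are not from the NEO booklet), but the left-to-right, if-then walkthrough is the same:

    # A toy logic model as a plain data structure, read left to right.
    # The stages and entries below are invented for illustration.
    logic_model = {
        "inputs": ["health sciences librarian", "MedlinePlus training materials"],
        "activities": ["teach monthly MedlinePlus classes at the library"],
        "outputs": ["50 community members complete the class"],
        "short-term outcomes": ["attendees can find quality health information"],
        "long-term outcomes": ["attendees manage their health more effectively"],
    }

    def explain_to_duck(model: dict) -> None:
        """Walk the model left to right, stating each if-then link aloud."""
        stages = list(model)
        for left, right in zip(stages, stages[1:]):
            print(f"If we have these {left}: {model[left]},")
            print(f"then we expect these {right}: {model[right]}.\n")

    explain_to_duck(logic_model)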

Indicators

Process Indicators: In the logic model section, you explained to your duck, “If these activities are done correctly, they will lead to these results.”  What does “correctly” look like? Explain to your duck how things need to be done or they won’t lead to the results you want to see. Be as specific as you can.  Then think about what you can measure to see how well you’re doing the activities.  Explain those things to the duck so you can be sure that you are measuring the things you want to see happen.

Outcome Indicators: Looking at your logic model, you know what results you’d like to see.  Think about what would indicate that those results had happened. Then think about how and when you would measure those indicators. Talk it out with the duck. In some cases you may not have the time, money, or staff needed to measure an indicator you would really like to measure.  In some cases the data that you can easily collect with your money, staff, and time will not be acceptable to your funders or stakeholders.  You will need to make sure you have indicators that you can measure successfully and that are credible to your stakeholders. The rubber duck’s masterful silence will help you work this out.

Data collection

I think this is where the duck will really come in handy.  To collect the data described above, you will need some data collection tools, like questionnaires or forms. Once you’ve put together the tools, you should explain to the duck what data each question is intended to gather. When you explain it out loud, you might catch some basic mistakes, like asking questions you don’t really need the answers to, or asking a question that is really two questions in one.
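You can even hand a sliver of the duck’s job to code. The heuristic below is our own and deliberately crude: a question containing “and” or “or” might be two questions in one, so treat each hit as a cue to read that item aloud to the duck, not as a verdict:

    # Deliberately crude check for "two questions in one": a question
    # containing " and " / " or " *might* be double-barreled. Expect
    # false positives; each hit is a cue to read the item to the duck.
    questions = [
        "Was the instructor knowledgeable and well organized?",  # two-in-one
        "How useful was the MedlinePlus demonstration?",         # probably fine
    ]

    for q in questions:
        if " and " in q.lower() or " or " in q.lower():
            print(f"Possible double-barreled question: {q!r}")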

Then you need a system for collecting the data using your tools.  If it’s a big project, a number of people may be collecting the data, and you will have to write instructions to make sure they are all doing it the same way.  Read each instruction to the duck and explain why it’s important to the success of the project. Does the duck’s ominous silence suggest areas where someone might misunderstand the instructions?

I hope this is helpful to you in your evaluation planning, and maybe other areas of your life.  Why use a rubber duck instead of something else? Well, they are awfully cute. And they come with a great song that you can sing when you’ve completed your plan: https://www.youtube.com/watch?v=Mh85R-S-dh8

Happy Fourth of July in Numbers!

Friday, July 1st, 2016

Before holidays we sometimes do a post on the value of putting your data in a visually understandable format, perhaps some kind of infographic.

As I write this, some of you may be sitting at your desks pondering how you will celebrate U.S. Independence Day. To help turn your ponderings into a work-related activity, here are some examples of Fourth of July infographics.  Since some of them have numbers but no dates (for example, the number of fireworks purchased in the US “this year”), you might use them as templates for the next holiday-based infographic you create yourself.

If you like history, the History Channel has a fun infographic called 4th of July by the Numbers.  It includes useful information such as:

  • the oldest signer of the Declaration of Independence was Benjamin Franklin at 70,
  • the record for the hot dog eating contest on Coney Island is 68 hotdogs in 10 minutes, and
  • 80% of Americans attend a barbecue, picnic or cookout on the 4th of July

Thinking about the food for your picnic (if you’re one of the 80% having one)?

From the perspective of work (remember work?), here is an infographic from Unmetric on how and why you should create your own infographics for the 4th of July: How to Create Engaging 4th of July Content.

Have a great 4th of July weekend!

Design Principles in Evaluation Design

Friday, June 17th, 2016

“Sometimes… it seems to me that… all the works of the human brain and hand are either design itself or a branch of that art.” Michelangelo

Michelangelo is not the only one who thinks design is important in all human activities.  In his book A Whole New Mind, Dan Pink considers design to be one of the six senses that we need to develop to thrive in this world. As Mauro Porcini, PepsiCo’s Chief Design Officer, points out, “There is brand design. There is industrial design. There is interior design. There is UX and experience design. And there is innovation in strategy.” ¹

There is also evaluation design. Whether we’re talking about designing evaluation for an entire project or just one section, like the needs assessment or presenting evaluation results, evaluators are still actively involved in design.

Most of us don’t think of ourselves as designers, however.  Juice Analytics has a clever tool called “Design Tips for Non-Designers” to teach basic design skills and concepts.  Some of these are very specific design tips for charts and PowerPoint slides (tips which, by the way, are very important and useful, like “avoiding chart junk” and “whitespace matters”).  But some of the other tips can be jumping-off points for thinking about bigger-picture design skills, such as:

  • Using Hick’s Law and Occam’s Razor to explain the importance of simplicity
  • Learning how to keep your audience in mind by thinking of how to persuade them, balancing Aristotle’s suggested methods of ethical appeal (ethos), emotional appeal (pathos), and logical appeal (logos)
  • Learning how Gestalt theory applies to the mind’s ability to acquire and maintain meaningful perceptions in an apparently chaotic world
  • Considering the psychology of what motivates users to take action

The September 2015 issue of Harvard Business Review highlighted design thinking as corporate strategy in its spotlight articles (which are freely available online, as long as you don’t open more than 4 a month).  Here are some cool things you can read about in these articles:

  • Using design thinking changed the way PepsiCo designed products to fit their users’ needs (my favorite line is how they used to design products for women by taking currently existing products and then applying the strategy of “shrink it or pink it.”)
  • Design is about deeply understanding people.
  • Principles of design can be applied to the way people work: empathy with users, a discipline of prototyping and tolerance for failure.
  • Create models to explain complex problems, and then use prototypes to explore potential solutions.
  • If it is likely that a new program or strategy may not be readily accepted, use design principles to plan the program implementation.

Some people are seen as being born with design skills.  But it’s clear that a lot can be learned with study and practice.  Even Michelangelo said, “If people knew how hard I worked to get my mastery, it wouldn’t seem so wonderful after all.”


¹ James De Vries. “PepsiCo’s Chief Design Officer on Creating an Organization Where Design Can Thrive.” Harvard Business Review, 11 Aug. 2015.  Web. 17 June 2016.

Summer Evaluation Institute Registration is Open!

Friday, May 13th, 2016

Whether you call what you do evaluation or assessment, the American Evaluation Association’s Summer Evaluation Institute is an amazing event, sure to teach you something and give you a different perspective on the job you do.

The institute, held June 26-29, 2016, is made up of half-day hands-on training sessions, taught by the best professionals in the field of evaluation.  It’s attended by people from all over the world who want to improve their skills in different aspects of the evaluation process.

Why would you as a librarian want to attend the AEA Summer Evaluation Institute?   Here are some ideas:

Let’s say you’re in charge of eliminating much of your print journal collection and increasing your online journals, and you want to figure out what data to collect to show that your users are still getting what they need. There’s a great program called “Development and Use of Indicators for Program Evaluation” by Goldie MacDonald that covers criteria for selecting indicators. Goldie MacDonald is a Health Scientist in the Center for Global Health at the U.S. Centers for Disease Control and Prevention (CDC) and a dynamic speaker and trainer.

Are you taking on the planning of an important project, like finding ways to ensure that your hospital administration values the contributions of your library?  Logic models are great planning tools, but they are also useful for integrating evaluation plans and strategic plans.  How would you like to take a four-hour class on logic models taught by the Chief Evaluation Officer at the CDC, Tom Chapel?

What if you’re a liaison librarian to your university’s biology department and you’re looking for ways to improve collaboration with the faculty? There’s a program called Evaluating and Improving Organizational Collaboration that gives participants the opportunity to increase their capacity to quantitatively and qualitatively examine the development of inter- and intra-organizational partnerships. It’s taught by Rebecca Woodland, Chair of the Department of Educational Policy and Administration at the University of Massachusetts Amherst (and recognized as one of the AEA’s most effective presenters).

Maybe you’ve been responsible for a program at your public library training the community in using MedlinePlus for their health information needs. You’ve collected a lot of data showing the success of your programs, and want to make sure your stakeholders take notice of it. How about giving them an opportunity to work with the data themselves? There’s a program called “A Participatory Method for Engaging Stakeholders with Evaluation Findings,” taught by Adrienne Adams at Michigan State University.

This is only a small sampling of the great workshops at the Summer Evaluation Institute.

For those of you who don’t know much about the American Evaluation Association: The AEA is an international professional association devoted to the application and exploration of program evaluation, personnel evaluation, technology, and many other forms of evaluation. Evaluation involves assessing the strengths and weaknesses of programs, policies, personnel, products, and organizations to improve their effectiveness. AEA has approximately 7000 members representing all 50 states in the United States as well as over 60 foreign countries.

Cindy and I will be there – we hope to see you there too!

 

Data Party for Public Librarians

Friday, May 6th, 2016

The Engage for Health project team from left to right: Lydia Collins, Kathy Silks, Susan Jeffery, Cindy Olney

Last week, I threw my first data party. I served descriptive statistics and graphs; my co-hosts brought chocolate.

I first learned about data parties from evaluation consultant Kylie Hutchinson’s presentation It’s A Data Party, given at the 2016 American Evaluation Association conference. Also known as data briefings or sense-making sessions, data parties actively engage stakeholders with evaluation findings.

Guest List

My guests were librarians from a cohort of public libraries that participated in the Engage for Health project, a statewide collaboration led by the NN/LM Middle Atlantic Region (MAR) and the Pennsylvania Library Association (PaLA). The NN/LM MAR is one of PaLA’s partners in a statewide literacy initiative called PA Forward, which engages libraries in activities that address five types of literacy.  The project team was composed of Lydia Collins of NN/LM MAR (which also funded the project), Kathy Silks of PaLA, and Susan Jeffery of the North Pocono Public Library. I joined the team to help them evaluate the project and develop reports to bring visibility to the initiative.  Specifically, my charge was to use this project to provide experiential evaluation training to the participating librarians.

Librarians from our 18 cohort libraries participated in all phases of the planning and evaluation process.  Kathy and Susan managed our participant recruitment and communication. Lydia provided training on how to promote and deliver the program, as well as assistance with finding health care partners to team-teach with the librarians. I involved the librarians in every phase of the program planning and evaluation process. We met to create the project logic model, develop the evaluation forms, and establish a standard process for printing, distributing, and returning the forms to the project team. In the end, librarians delivered completed evaluation forms from 77% of the adult participants in their Engage for Health training sessions.

What We Evaluated

The objectives of PA Forward include improving health literacy, so the group’s desired outcome for Engage for Health was to empower people to better manage their health. Specifically, we wanted them to learn strategies that would lead to more effective conversations with their health care providers. Librarians and their health care partners emphasized strategies such as researching health issues using quality online health resources, making a list of medications, and writing down questions to discuss at their appointments.  We also wanted them to know how to use two trustworthy online health information sources from the National Library of Medicine: MedlinePlus and NIHSeniorHealth.

Party Activities

Sharing with Appreciative Inquiry: The data party kicked off with Appreciative Inquiry interviews. Participants interviewed each other in pairs about their peak experiences and what they valued about those experiences. Everyone then shared highlights in a large group. (See our blog entries here and here for detailed examples of using Appreciative Inquiry.)

Data sense-making: Participants then worked with a fact sheet of graphs and summary statistics compiled from the session evaluation data.  As a group, we reviewed our logic model and discussed whether the data showed that we achieved our anticipated outcomes.  The group also drew on both the fact sheet and the stories from the Appreciative Inquiry interviews to identify unanticipated outcomes.  Finally, they identified metrics they wished we had collected: what was missing?
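If you are wondering what compiling such a fact sheet can look like, here is a sketch using Python and pandas. The file name, column names, and the assumption of a 1-5 rating scale are all invented for illustration; our actual forms and data differ:

    # Sketch of compiling fact-sheet statistics from session evaluations.
    # The file name, column names, and 1-5 rating scale are invented.
    import pandas as pd

    df = pd.read_csv("engage_for_health_evaluations.csv")
    items = ["confidence_talking_to_provider", "will_use_medlineplus"]

    print("Completed forms:", len(df))
    print(df[items].describe().round(2))  # mean and spread for each item

    # Percent of participants rating each outcome item 4 or 5 ("agree").
    agree_pct = ((df[items] >= 4).mean() * 100).round(1)
    print(agree_pct.astype(str) + "% agreed")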

Consulting Circles: After a morning of sharing successes, the group got together to help each other with challenges.  We had three challenge areas that the group wanted to address: integration of technology into the classes, finding partners from local health organizations, and promotional strategies.  No area was a problem for all librarians: some were quite successful in a given area, while others struggled. The consulting groups were a chance to brainstorm effective practices in each area.

Next steps:  As with most funded projects, both host organizations hoped that the libraries would continue providing health literacy activities beyond the funding period.  To get the group thinking about program continuity, we ran a 1-2-4-All discussion about next steps.  Participants first identified the next steps they would take at their libraries, then provided suggestions to NN/LM MAR and PaLA on how to support their continued efforts.

Post Party Activities

For each of the four party activities, a recorder from each group took discussion notes on a worksheet developed for the activity, then turned it in to the project team. We will incorporate this group feedback into written reports that are currently in progress.

If you are curious about our findings, I will say generally that our data supports the success of this project.  We have plans to publish our findings in a number of venues, once we have a chance to synthesize everything.  So watch this blog space and I’ll let you know when a report of our findings becomes available.

Meanwhile, if you are interested in reading more about data parties, check out this article in the Journal of Extension.

 

Diversity, Texas Libraries and Participatory Data Collection

Monday, May 2nd, 2016

On April 20, Cindy Olney and I facilitated a program for the Texas Library Association Annual Conference called Open Libraries! Making Your Library Welcome to All.  The program was sponsored by TLA’s Diversity and Inclusion Committee, and the plan was for attendees to work cooperatively to discover ways to ensure that people of diverse cultures, languages, ages, religions, sexual orientations, and physical abilities, among others, feel welcome at the library.  The committee wanted to draw on the wealth of TLA librarians’ experiences, so Cindy was invited to gather as much information from the attendees as possible. As co-chair of the TLA Diversity and Inclusion Committee, I co-facilitated the event.

The process used was a modified 1-2-4-All process, which you can find on the Liberating Structures website.  Our primary question was “What can my library do to become more welcoming to all people?”  We asked everyone in the room to brainstorm together all the different parts of a library that could be modified to make it more welcoming (e.g., reference services, facility, etc.).  We wanted to be sure that everyone thought as broadly and creatively as possible.

TLA Diversity Data Collection Program 2016

The discussion process actually had two parts.  For part one, we gave everyone two minutes to write as many ideas as they could on index cards (one idea per card).  Then we asked people to take two minutes to share their ideas with a partner.  They then shared their ideas with the entire table (up to 10 participants). The group then chose and wrote down the three best ideas and turned them in to the moderators.  Participants were instructed to leave their index cards with their ideas piled in the middle of their tables.

Here are some of the ideas generated through this discussion:

  • Welcome signs in different languages
  • Signage
  • Physical spaces – access to mobility

As you can see, the responses were fairly non-specific. We wanted richer descriptions of modifications to programs or services.  So part two of the process involved asking participants to develop more detailed plans for making their libraries more welcoming. Using a method involving dog, cat, and sea creature stickers, we moved participants randomly to new tables so that they ended up with a new group of colleagues.  They then chose a partner from their new table and, as a pair, randomly drew one idea card from the piles generated in part one of the process. They worked on a plan for that idea for eight minutes.  When the moderator called time, they drew another card and worked on a plan for a second idea. In the final eight minutes of the session, we asked each table to share ideas with the entire group.

The plans from part two were more detailed and better articulated than those from part one. Here are some examples of the results we got from that exercise:

  • Signage: make it clearer and more colorful; offer signage in different languages or use digital signage.
  • Language materials specific to the community and programming in the various languages spoken in the community; ESL classes in partnership with community colleges.
  • Invite representatives from ADA/disability advocates to give suggestions on making library desks/areas more accessible.

The whole process was completed in a 50-minute conference program session.  The other Diversity and Inclusion co-chair, Sharon Amastae of El Paso, TX, and I were both impressed with the energy and enthusiasm in the room.

The results of this data gathering event will be communicated to the TLA membership.  When that project is completed, we’ll let you know here on the NEO Shop Talk blog!

Photo credit: Esther Garcia

 

Meet the New NEO

Friday, April 22nd, 2016

Cindy Olney and Karen Vargas

Heads up, readers.  Look for a name change to our blog on May 1.

That’s the day the NN/LM Outreach Evaluation Resource Center will be replaced by the new NN/LM Evaluation Office, a.k.a. NEO.  The NEO will have the same staff (Karen Vargas and Cindy Olney) and the same location (headquartered at the University of Washington Health Sciences Library) as the OERC, but it has a new and evolving role in the National Network of Libraries of Medicine (NN/LM).

This time last year, Karen Vargas (evaluation specialist) and I (acting assistant director) began writing a proposal for this new NN/LM office.  The University of Washington Health Sciences Library submitted our proposal as part of its larger one for a five-year cooperative agreement to fund the NN/LM Pacific Northwest Regional Medical Library. (Spoiler alert: UW HSL won the award.  See the announcement here:  https://www.nlm.nih.gov/news/nlm-rml-coop-agreement-2016.html.)

Our new name reflects one of a number of changes in NN/LM’s funding, organization, and management.  Leaders of the NN/LM are re-envisioning what it means to be a national network of organizations that work together to advance the progress of medicine and public health through access to health information. The NEO staff will contribute our evaluation expertise to help the leaders focus on key outcomes and measure progress and accomplishments.

The vision set forth in our proposal is to influence NN/LM’s use of evaluation to engage and learn about its programs, make good decisions, and enhance the visibility of its successes. Our proposed strategies were organized around five main aims. First, we will support the NN/LM leadership’s ability to make data-driven decisions. Second, we will collaborate with the regional medical libraries to increase use of evaluation in their regions.  Third, we will provide quality evaluation training opportunities to build evaluation skills of network members.  Our fourth aim is to increase visibility of NN/LM’s program successes.  Lastly, we plan to provide new written materials about effective and emerging evaluation practices and trends.

The exact nature of our services will be determined by the needs of the NN/LM as we all develop new approaches to working together. We do know that the NEO’s scope will expand beyond health information outreach evaluation to include other areas, such as organizational development and internal services to users and clients. We also want to put more emphasis on evaluation use, both for decision-making and advocating program value to stakeholders. As a teaser, Karen and I plan to develop our own expertise in evaluation reporting, participatory evaluation methods, and digital storytelling. (In fact, Karen’s blog post next week will describe our recent participatory evaluation experience at the Texas Library Association 2016 meeting.)

The most important news for our blog readers, though, is that our URL will not change for the foreseeable future. So in spite of the name change that’s coming, you will still find our weekly blog posts here.  “See” you next week.

 

Our Favorite Evaluation Blogs

Friday, April 15th, 2016

We really don’t want you to stop reading our blog!  But April is a really busy month for us, so this week we’ll make sure you get your evaluation buzz by pointing you to some other great evaluation blogs.

AEA365 – This is the blog of the American Evaluation Association.  This blog shares hot tips, cool tricks, rad resources, and lessons learned by different evaluators every single day!

Better Evaluation Blog – Better Evaluation is an international collaboration to improve evaluation by sharing information about evaluation methods, processes, and approaches. The blog has posts that provide new perspectives about particular issues in evaluation.

EvaluATE – EvaluATE is the evaluation resource center for the National Science Foundation’s Advanced Technological Education program. Their blog has lessons learned, tips, or techniques on evaluation management, proposal development, evaluation design, data collection and analysis, reporting, and more.

Evergreen Data – Stephanie Evergreen writes a blog about data visualization.  For the record, she has written the book(s) on data visualization, Effective Data Visualization and Presenting Data Effectively.

Visual Brains – Sara Vaca writes about new techniques and ways of visualizing data, information, and figures to communicate evaluation findings and improve evaluation use, as well as for other stages such as planning and analysis.

The OERC Is On The Road in April

Wednesday, April 6th, 2016

 

The OERC staff will be putting on some miles this month. Karen and Cindy are slated to present and teach at various library conferences and meetings. If you happen to be at any of these events, please look for us and say “hello.”  Here is the April itinerary: 

Cindy will participate in a panel presentation titled “Services to Those Who Serve: Library Programs for Veterans and Active Duty Military Families” at the Public Library Association’s 2016 conference in Denver. The panel presentation will be held from 10:45 – 11:45 am, April 7. She and Jennifer Taft, who is now with Harnett County Public Library, will present a community assessment project they conducted for the Cumberland County Public Library, described here in the November/December 2014 edition of Public Libraries.

Karen will conduct the OERC workshop “Planning Outcomes-Based Outreach Programs” on April 8 for the Joint Meeting of the Georgia Health Sciences Library Association and the Atlanta Health Science Library Consortium in Decatur, GA. This workshop teaches participants how to develop logic models for both program and evaluation planning.

Cindy and Karen will facilitate two different sessions for the Texas Library Association’s annual conference, both on April 20 in Houston. One session will be a large-group participatory evaluation exercise to gather ideas from the TLA membership about how libraries can become more welcoming to diverse populations. The second is an 80-minute workshop on participatory evaluation methods, featuring experiential learning exercises about Appreciative Inquiry, 1-2-4-All, Photovoice, and Most Significant Change methods.

Cindy will join the NN/LM Middle Atlantic Region and the Pennsylvania Library Association to talk about evaluation findings from a collaborative health literacy effort conducted with 18 public libraries across the state. The public libraries partnered with health professionals to run health literacy workshops targeted at improving consumers’ ability to research their own health concerns and talk more effectively with their doctors. The public librarians involved in this initiative worked together to design an evaluation questionnaire that they gave to participants at the end of their workshops. The combined effort of the cohort librarians allowed the group to pool a substantial amount of evaluation data. Cindy will facilitate a number of participatory evaluation exercises to help the librarians interpret the data, make plans for future programming, and develop a communication plan that will allow them to publicize the value of the health literacy initiative to various stakeholders. The meeting will be held April 29 in Mechanicsburg, PA.

In addition, Cindy will be attending a planning meeting at the National Library of Medicine in Bethesda in mid-April with Directors, Associate Directors, and Assistant Directors from the NN/LM. Our library, the University of Washington Health Sciences Library, will receive cooperative agreements for both the NN/LM Pacific Northwest Regional Medical Library and the NN/LM Evaluation Office, which will replace the OERC on May 1. (You can see the announcement here.)  We will tell you more about the NEO later; for now, we’ll just say that we will be moving into the same positions in the NEO that we hold with the OERC. You have not heard the end of us!

Although we will be on the road quite a bit, rest assured we will not let our loyal readers down. So please tune in on Fridays for our weekly posts.

 

Last updated on Monday, June 27, 2016

Funded by the National Library of Medicine under Contract No. UG4LM012343 with the University of Washington.