
Archive for the ‘Evaluation’ Category

Sometimes, zero isn’t everything

Monday, June 27th, 2016

Did you attend MLA 2016 in Toronto? Did you hear Dr. Ben Goldacre give the McGovern Lecture? One of the things he spoke about was representing statistics in charts and that pesky Y axis. The YouTube video below does not contradict Goldacre, but it shows how zero can sometimes get in the way.
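If you want to see the trade-off for yourself, here is a minimal sketch in Python with matplotlib (the numbers are invented for illustration) that plots the same data twice: once with the Y axis anchored at zero, and once zoomed to the range of the data.

```python
import matplotlib.pyplot as plt

# Invented data: small year-over-year changes in a large value.
years = [2012, 2013, 2014, 2015, 2016]
values = [980, 985, 990, 1002, 1008]

fig, (ax_zero, ax_zoom) = plt.subplots(1, 2, figsize=(10, 4))

# Left panel: Y axis starts at zero -- honest about scale,
# but the changes are nearly invisible.
ax_zero.plot(years, values, marker="o")
ax_zero.set_ylim(0, 1100)
ax_zero.set_title("Y axis from zero")

# Right panel: Y axis zoomed to the data -- the trend is readable,
# but the slope looks dramatic unless the reader checks the axis range.
ax_zoom.plot(years, values, marker="o")
ax_zoom.set_ylim(975, 1015)
ax_zoom.set_title("Y axis zoomed to the data")

plt.tight_layout()
plt.show()
```

Neither panel is wrong on its own; which one misleads depends on the question the reader is asking, which is exactly how zero can get in the way.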

How Many Attendees Are Enough?

Tuesday, March 1st, 2016

Note: The NTC wishes to thank Karen Vargas and the NN/LM OERC for permission to repost this entry from the OERC blog. It seemed especially relevant to what we as the NTC are about and do, as well as to the OERC.

I enjoyed reading an article in Public Libraries titled “The Grass Is Always Greener” by Melanie A. Lyttle and Shawn D. Walsh. They discuss the complexities of deciding whether a program was “well attended” or whether “nobody came.” A turnout that counts as well attended in one situation can look like a poorly attended program in another.

I can think of a lot of times I’ve experienced this exact situation. When I was a branch manager at a public library, the program manager at the main library would ask if she could send authors to speak at our branch library.  When I said, “maybe you should send them somewhere else – we only had ten people come to the last one,” she replied “ten is a lot – ten is more than we get anywhere else.”

When I worked at the NN/LM South Central Region, in some parts of the region 30 people could be expected to attend training sessions; in other parts, we considered 6 people a successful turnout. These differences often corresponded to urban vs. rural settings, the travel distance needed to get to the training, whether the librarians were largely solo librarians or worked in multi-librarian organizations, or whether their institutions supported taking time off for training. Other considerations include whether the trainers had already built an audience over time that would regularly attend the programs, or, on the other hand, whether the trainers had saturated their market and there were very few new people left to learn about the topic.

So how can you decide what a good target participation level should be, or maybe more importantly, how can you explain your participation targets to your funder or parent organization?

Tying your participation level to your intermediate and long-term intended outcomes is one way to do that. Let me give you an example of a program in Houston that was funded by the NN/LM South Central Region. The Greater Houston AHEC received funding many years ago to do an in-depth training project with a small number of seniors in the most underserved areas of Houston. The goals were to teach these seniors how to use computers, how to get on the Internet, how to use email, and then how to use MedlinePlus and NIHSeniorHealth to look up health information. They planned for the seniors to take 2-3 classes a week, and each class lasted several hours. It was a big commitment, but they intended for these seniors to really know how to use the Internet at the end of the series.

There were so few seniors who saw the need to learn to use computers that they had to persuade about 10 people from each location to sign up. However, the classes were so good and the seniors so enthusiastic that after a couple of weeks, the other seniors wanted to take classes too. This led to a phase 2 project, which included funding for a permanent computer and coffee area in a senior center where students could practice their Internet skills. There is now a third phase of the program, called M-SEARCH, which teaches seniors to use mobile devices to look up their health information.

At the beginning, Greater Houston AHEC may not have envisioned these specific outcomes. However, if they were trying to convince a funder that 10-person classes were a reasonable use of the funder’s money, it might be good to show that small, in-depth classes could lead to a long-term outcome like “seniors in even the poorest neighborhoods in Houston will be able to research their health conditions on NIHSeniorHealth.” In addition, it would be important to bring in other factors, such as your intended goals for the project: do you hope to train a small group of these seniors to really use the Internet for health research, or do you want to reach a lot of seniors in underserved areas and let them know that it’s possible to find great health information using NLM resources? (See the Kirkpatrick Model of training evaluation for more information on evaluating your training goals.)

For more on creating long-term outcomes, see Booklet 2 of the OERC’s booklet series, Planning Outcomes-based Outreach Projects.

What We’re Reading

Wednesday, September 3rd, 2014

Summer can be a great time to catch up on reading. Here are a few things we’ve been reading that you might find interesting or useful too.

What did you read over the summer? Share your favorites with us on Facebook or Twitter!


Assessment on the Fly

Wednesday, June 25th, 2014

With just an hour of classroom time (or less!), how can you fit in assessment? How can you tell whether your students have gained the skill you’ve taught or understand a critical concept?

TeachThought had a recent blog post detailing several assessment strategies, and I thought I’d share a few here.

1. Ticket out the door: Have students write the answer to a question, an a-ha moment, or a lingering question on a scrap of paper or sticky note, and collect the notes on the way out the door to a break or at the end of class. This is a quick way to see what stood out to the class, and one we’ve used here at the NTC.

2. Ask students to reflect: Before class ends, have students jot down what they learned or how they will apply it in the future.

3. Misconception check: Describe a common misconception about the concept you’re teaching, or show an example of something done incorrectly. Ask students to identify and correct the problem.

4. Peer instruction: Ask a question and have students pair up to explain the correct answer, and why it is correct, to their partners. Walk around and listen to their explanations to assess whether the concept needs to be revisited.

To see the rest of the list of simple assessments you can try, see the blog on TeachThought.

Scratch-Off Graph

Wednesday, January 30th, 2013

Stephanie Evergreen is the Director of eLearning Initiatives at the American Evaluation Association and also runs her own company, Evergreen Evaluation. She recently wrote a blog post about how to make evaluation findings more exciting and interesting. Follow the link to learn how to make a scratch-off chart. If you’re a little crafty, you may like this one.

Evaluate at the Time of Need

Friday, October 19th, 2012

Has this happened to you? You teach a class, a training session, what have you, and then you distribute an evaluation survey. So far, so good. You sit back to read the evaluations and learn there was a problem during class, or that someone didn’t understand something, and you think, “I wish they had told me that during the class.” Read an article at the Kirkpatrick Partners website called “Is your training survey too late?”

Visualizing Data

Friday, September 14th, 2012

You’ve done the work; you’ve collected the data; now what? In recent years, there has been an outpouring of tools to corral data and present it in a human-friendly format (e.g., infographics). A recent article in Information Today provides a rundown of many different options based on the type of information you are trying to present.
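If you would rather experiment before committing to one of those tools, a few lines of code go a long way. Here is a minimal sketch in Python with matplotlib (the survey numbers are invented for illustration) that turns a small table of responses into a labeled chart:

```python
import matplotlib.pyplot as plt

# Invented evaluation data: how attendees rated a training session.
ratings = ["Poor", "Fair", "Good", "Very good", "Excellent"]
counts = [1, 3, 8, 12, 6]

fig, ax = plt.subplots(figsize=(6, 4))
ax.barh(ratings, counts)
ax.set_xlabel("Number of responses")
ax.set_title("How would you rate this session?")

# Label each bar directly so readers don't have to trace gridlines.
for i, count in enumerate(counts):
    ax.text(count + 0.2, i, str(count), va="center")

plt.tight_layout()
plt.show()
```

Direct labels and a plain horizontal layout are a common starting point for human-friendly charts; the dedicated tools in the article build on the same ideas.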

Tools for Better Presentations

Friday, August 31st, 2012

The American Evaluation Association is creating a resource of presentation guidelines to help you “prepare, develop, and deliver awesome presentations that will better engage your audience and make your content stick.”

To view the tools they have posted, visit the AEA website.

The Outlier

Wednesday, June 27th, 2012

Many of us who provide training classes end the class with an evaluation. Speaking for myself, I love to see positive evaluations, but sometimes there is a lone voice that does not jibe with the rest of the evaluations. It is easy to say, “Well, that is just one person.” Rachel Wasserfel, an evaluator by profession and a blogger, wrote a post called “The power of the dissonant story.”

From Rachel’s post: “I suggest paying close attention to the outlier story – information, cases, events and other occurrences that are atypical, when compared to the overall data collected. Instead of dismissing such occurrences, I study them: they may signal a need to dig deeper for more insight.”

Read the entire post on her blog.
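To make the idea concrete, here is a minimal sketch in Python (the ratings and the two-standard-deviation cutoff are both invented for illustration) that flags the responses in a class evaluation that sit far from the rest, so you remember to read those comments instead of averaging them away:

```python
from statistics import mean, stdev

# Hypothetical ratings from a class evaluation (1 = poor, 5 = excellent).
ratings = [5, 4, 5, 5, 4, 5, 1, 4, 5, 4]

avg = mean(ratings)
spread = stdev(ratings)

# Flag any rating more than two standard deviations from the mean --
# these are the "dissonant stories" worth a closer look, not noise to discard.
outliers = [r for r in ratings if abs(r - avg) > 2 * spread]

print(f"mean={avg:.2f}, stdev={spread:.2f}, outliers={outliers}")
```

On this sample, the lone 1 is the only rating flagged; the point, as Rachel argues, is to go back and read that respondent’s comments rather than dismiss them as just one person.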