Did you attend MLA 2016 in Toronto? Did you hear Dr. Ben Goldacre give the McGovern Lecture? One of the things he spoke about was representing statistics in charts and that pesky Y axis. The YouTube video below does not contradict Goldacre, but shows how sometimes zero can get in the way.
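The baseline effect is easy to quantify. Here is a minimal arithmetic sketch (the numbers are made up for illustration, not taken from the lecture or the video) showing how the same two values can look nearly identical or wildly different depending on where the Y axis starts:

```python
def apparent_ratio(a, b, baseline):
    """Ratio of the drawn bar heights for values a and b
    when the y axis starts at `baseline` instead of zero."""
    return (b - baseline) / (a - baseline)

# Two measurements that differ by about 2%.
low, high = 98.0, 100.0

# With a zero baseline, the bars look nearly identical...
assert round(apparent_ratio(low, high, 0), 3) == 1.020

# ...but starting the axis at 95 makes the same 2% gap
# look like a two-thirds difference in bar height.
assert round(apparent_ratio(low, high, 95), 3) == 1.667
```

Neither choice is always right: a zero baseline can hide a small but meaningful change, while a truncated axis can exaggerate a trivial one, which is exactly the tension the video explores.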
Note: The NTC wishes to thank Karen Vargas and the NN/LM OERC for permission to repost this entry from http://nnlm.gov/evaluation/blog. It seemed especially relevant to the NTC’s mission and work, as well as to the OERC’s.
I enjoyed reading an article in Public Libraries titled “The Grass Is Always Greener” by Melanie A. Lyttle and Shawn D. Walsh. They discuss the complexities of deciding whether a program was “well attended” or “nobody came.” Sometimes a program that seems well attended in one situation is the same as a poorly attended program in another.
I can think of a lot of times I’ve experienced this exact situation. When I was a branch manager at a public library, the program manager at the main library would ask if she could send authors to speak at our branch library. When I said, “maybe you should send them somewhere else – we only had ten people come to the last one,” she replied “ten is a lot – ten is more than we get anywhere else.”
When I worked at the NN/LM South Central Region, in some parts of the region 30 people could be expected to attend training sessions. In other parts of the region, we considered 6 people a successfully attended program. These differences often corresponded to urban vs. rural settings, the travel distance needed to get to the training, whether the librarians were largely solo librarians or worked in multi-librarian organizations, or whether their institutions supported taking time off for training. Other considerations include whether the trainers had already built an audience over time that would regularly attend the programs, or, on the other hand, whether the trainers had saturated their market and there were very few new people left to learn about the topic.
So how can you decide what a good target participation level should be, or maybe more importantly, how can you explain your participation targets to your funder or parent organization?
Tying your participation level to your intermediate and long-term intended outcomes is one way to do that. Let me give you an example of a program in Houston that was funded by the NN/LM South Central Region. The Greater Houston AHEC received funding many years ago to do an in-depth training project with a small number of seniors in the most underserved areas of Houston. The goals were to teach these seniors how to use computers, how to get on the Internet, how to use email, and then how to use MedlinePlus and NIHSeniorHealth to look up health information. They planned for the seniors to take 2-3 classes a week, and each class lasted several hours. It was a big commitment, but they intended for these seniors to really know how to use the Internet at the end of the series. There were so few seniors who saw the need to learn to use computers that they had to persuade about 10 people from each location to sign up. However, the classes were so good and the seniors so enthusiastic, that after a couple of weeks, the other seniors wanted to take classes too. This led to a phase 2 project which included funding for a permanent computer and coffee area in a senior center where students could practice their Internet skills. There is now a third phase of the program called M-SEARCH which teaches seniors to use mobile devices to look up their health information.
At the beginning, Greater Houston AHEC may not have envisioned these specific outcomes. However, to convince a funder that 10-person classes were a reasonable use of the funder’s money, it would help to show that small, in-depth classes could lead to a long-term outcome like “seniors in even the poorest neighborhoods in Houston will be able to research their health conditions on NIHSeniorHealth.” It would also be important to weigh your intended goals for the project: do you hope to train a small group of seniors to really use the Internet for health research, or do you want to reach a lot of seniors in underserved areas to let them know that it’s possible to find great health information using NLM resources? (See the Kirkpatrick Model of training evaluation for more information on evaluating your training goals.)
For more on creating long-term outcomes, see Booklet 2 of the OERC’s booklets: Planning Outcomes-based Outreach Projects
Summer can be a great time to catch up on reading. Here are a few things we’ve been reading that you might find interesting or useful too.
- This summer, Eccles Health Sciences Library decided on a book for an all-staff read: How Stella Saved the Farm. It’s a fable and an easy read about how to make innovation happen.
- The NN/LM Outreach Evaluation Center blog is full of great bite-sized tips for practical evaluation, planning for assessment, and demonstrating library value.
- 60 iPad Productivity Apps for Modern, Mobile Teachers from TeachThought. TeachThought’s posts are often geared to the K-12 teacher, but we are excited to try some of the apps mentioned on this page.
- Recent article in the Journal of the American Medical Association: How to read a systematic review and meta-analysis and apply the results to patient care: users’ guides to the medical literature.
- The “Tomorrow’s Professor” mailing list, sponsored by the Stanford Center for Teaching and Learning, discusses many topics relevant to higher education, including active learning, distance learning, e-portfolios, effective teaching techniques, and flipped classrooms. Postings contain references and web links to journal articles and reports. You can receive posts via e-mail by signing up for the listserv, or visit the archives on the web: http://cgi.stanford.edu/~dept-ctl/tomprof/postings.php
With just an hour of classroom time (or less!) how can you fit in assessment? How can you tell if your students have gained the skill you’ve taught or understand a critical concept?
TeachThought had a recent blog post detailing several assessment strategies, and I thought I’d share a few here.
1. Ticket out the door: Have students write the answer to a question, an a-ha moment, or a lingering question on a scrap of paper or sticky note, and collect these on the way out the door to a break or to leave. This is a quick way to see what stood out to the class, and one we’ve used here at the NTC.
2. Ask students to reflect: Before class ends, have students jot down what they learned or how they will apply it in the future.
3. Misconception check: Describe a common misconception about the concept you’re teaching, or show an example of something done incorrectly. Ask students to identify and correct the problem.
4. Peer instruction: Ask a question, then have students pair up and explain the correct answer, and why, to their partner. Walk around and listen to their responses to assess whether the concept needs to be revisited.
To see the rest of the list of simple assessments you can try, see the blog on TeachThought.
Stephanie Evergreen is the Director of eLearning Initiatives at the American Evaluation Association and also runs her own company, Evergreen Evaluation. She recently wrote a blog post about how to make evaluation findings more exciting and interesting. Follow the link to learn how to make a scratch-off chart. If you like hands-on crafts, you may enjoy this one.
Has this happened to you? You teach a class, a training session, what have you, and then you distribute an evaluation survey. So far, so good. You sit back to read the evaluations, learn there was a problem during class or that someone didn’t understand something, and think, “I wish they had told me that during the class.” Read an article at the Kirkpatrick Partners website called “Is your training survey too late?”
You’ve done the work; you’ve collected the data; now what? In recent years, there has been an outpouring of tools to corral data and present it in a human-friendly format (e.g., infographics). A recent article in Information Today provides a rundown of many different options based on the type of information you are trying to present. http://goo.gl/rf1mt
The American Evaluation Association [http://www.eval.org/] is creating a resource with presentation guidelines to help you “prepare, develop, and deliver awesome presentations that will better engage your audience and make your content stick.”
To view the tools they have posted, visit http://p2i.eval.org/index.php/p2i-tools/
Many of us who provide training classes end the class with an evaluation. Speaking for myself, I love to see positive evaluations, but sometimes there is a lone voice that does not jibe with the rest of the evaluations. It is easy to say, well, that is just one person. Rachel Wasserfel, an evaluator by profession and a blogger, wrote a post called “The power of the dissonant story.”
From Rachel’s post: “I suggest paying close attention to the outlier story – information, cases, events and other occurrences that are atypical, when compared to the overall data collected. Instead of dismissing such occurrences, I study them: they may signal a need to dig deeper for more insight.”
Read the entire post at: http://goo.gl/CWn6w