
NEO Shop Talk

The blog of the National Network of Libraries of Medicine Evaluation Office

Archive for the ‘NEO Projects’ Category

Uninspired by Bars? Try Dot Plots

Friday, March 17th, 2017

Thanks to Jessi Van Der Volgen and Molly Knapp at the NNLM Training Office for allowing us to feature their assessment project and for providing the images in this post. 

Are you tired of bars?

I don’t mean the kind of bars where you celebrate and socialize. I mean the kind used in data visualization.  My evidence-free theory is that people still succumb to using the justifiably maligned pie chart simply because we cannot face one more bar graph.

Take heart, readers. Today, I’m here to tell you a story about some magic data that fell on the NEO’s doorstep and broke us free of our bar chart rut.

It all began with a project by our NNLM Training Office (NTO) colleagues, the intrepid leaders of NNLM’s instructional design and delivery. They do it all. They teach. They administratively support the regions’ training efforts. They initiate opportunities and resources to up-level instructional effectiveness throughout the network. One of their recent initiatives was a national needs assessment of NNLM training participants. That was the source of the fabulous data I write about today.

For context, I should mention that training is one of NNLM’s key strategies for reaching the furthest corners of our country to raise awareness, accessibility, and use of NLM health information resources. NNLM offers classes to all types of direct users (e.g., health professionals, community-based organization staff), but we value the efficiency of our “train-the-trainer” programs. In these classes, librarians and others learn how to use NLM resources so they, in turn, can teach their users. The national needs assessment was geared primarily toward understanding how to best serve “train-the-trainer” participants, who often take multiple classes to enhance their skills.

For the NTO’s needs assessment, one area of inquiry involved an inventory of learners’ need for training in 30 topic areas. The NTO wanted to assess participants’ desired level and current level of proficiency in each topic. That meant 60 questions. That was one heck of a long survey. We wished them luck.

The NTO team was undaunted! They did some research and found a desirable format for presenting this set of questions (see upper left). The format had a nice minimalist design, and the sliders were more fun for participants than radio buttons. NTO also designed the online questionnaire so that only a handful of question pairs appeared on the screen at one time. The approach worked: 559 people responded, and 472 completed the whole questionnaire.

Dot plots for four skill topic areas: Conducting literature searches (Current = 4, Desired = 5); Understanding and searching for evidence-based research (Current = 3, Desired = 5); Develop/teach classes (Current = 3, Desired = 5); Create videos/web tutorials (Current = 2, Desired = 4).

The NEO, in turn, consulted the writings of one of our favorite dataviz oracles, Stephanie Evergreen. And she did not disappoint.  We found the ideal solution: dot plots!  Evergreen’s easy-to-follow instructions from this blog post allowed us to create dot plots in Excel, using a few creative hacks. This approach allowed us to thematically cluster results from numerous related questions into one chart. We were able to present data for 60 questions in a total of seven charts.

I would like to point out a couple of design choices I made:

  • I used different shapes and colors to visually distinguish between “current proficiency” and “desired proficiency.” Navy blue for current proficiency was inspired by NNLM’s logo. I used a complementary green for desired proficiency because green means “go.”
  • Evergreen prefers to place labels (e.g., “conducting literature searches”) close to the actual dots. That works well if your labels consist of one or two words. We found that our labels had to be longer to make sense. Setting them flush-left made them more readable.
  • I suggested plotting medians rather than means because many of the data distributions were skewed. You can use means, but you should probably round to whole numbers so the decimals don’t distract from the gaps.
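Those choices are easy to reproduce outside of Excel, too. Here is a minimal sketch of the same dumbbell-style dot plot in Python with matplotlib; the four topics and median values come from the chart above, and everything else is just an illustration of the design decisions listed here:

```python
import matplotlib.pyplot as plt

# Median proficiency ratings (1 = low, 5 = high) for four of the
# 30 topics, taken from the chart above.
topics = [
    "Conducting literature searches",
    "Understanding/searching for evidence-based research",
    "Develop/teach classes",
    "Create videos/web tutorials",
]
current = [4, 3, 3, 2]  # navy circles: current proficiency
desired = [5, 5, 5, 4]  # green diamonds: desired proficiency

fig, ax = plt.subplots(figsize=(9, 3))
ys = list(range(len(topics)))[::-1]  # put the first topic on top

# Light connector lines make the proficiency gaps easy to read
for y, c, d in zip(ys, current, desired):
    ax.plot([c, d], [y, y], color="lightgray", zorder=1)
ax.scatter(current, ys, color="navy", zorder=2, label="Current")
ax.scatter(desired, ys, color="green", marker="D", zorder=2, label="Desired")

ax.set_yticks(ys)
ax.set_yticklabels(topics)
ax.set_xlim(1, 5.5)
ax.set_xticks(range(1, 6))
ax.legend(loc="lower right")
ax.set_title("Current vs. desired proficiency (medians)")
fig.tight_layout()
plt.show()
```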

Dot plots are quite versatile. We used the format to highlight gaps in proficiency, but other evaluators have demonstrated that dot plots work well for visualizing change over time and cross-group comparisons.

Dot plots are not as easy to create as the default Excel bar chart, but they are interesting.  So give up bars for a while.  Try plotting!


ABP: Always Be Pilot-testing (some experiences with questionnaire design)

Monday, February 20th, 2017

Cover of NEO's Booklet 3 on collecting and analyzing evaluation data

This week I have been working on a questionnaire for the Texas Library Association (TLA) on TLA’s cultural climate. Having just gone through this process, I can tell you that NEO’s Booklet #3: Collecting and Analyzing Evaluation Data has really useful tips on how to write questionnaires (pp. 3-7). I thought it might be good to talk about some of the tips that turned out to be particularly useful for this project. The theme of all of them is “always be pilot-testing.”

NEO’s #1 Tip: Always pilot test!

This questionnaire is still being pilot tested. So far I have thought the questionnaire was perfect at least 10 times, and the people pilot testing it for our committee are still surfacing important changes. One important part of this tip is to include stakeholders in the pilot testing. Stakeholders have points of view that may not be represented among the people creating the survey. After we have what we think is a final version, our questionnaire will be tested by the TLA Executive Board. While this process sounds exhausting, every single change made (to a questionnaire I thought was finished) has fundamentally improved it.

There is a response for everyone who completes the question

Our questionnaire asks questions about openness and inclusiveness to people of diverse races, ethnicities, nationalities, ages, gender identities, sexual identities, cognitive and physical disabilities, perceived socioeconomic status, etc. We are hoping to get personal opinions from all kinds of librarians who live all over Texas. By definition, this means that many of the questions are potentially sensitive and may touch on hot-button issues for some people.

In addition to wording the questions carefully, it’s important that every question has a response for everyone who completes it. We would hate for someone not to find the response that works best for them and leave the questionnaire unanswered, or, even worse, get their feelings hurt or feel insulted. For example, we have a question about whether respondents feel that their populations are represented in TLA’s different groups (membership, leadership, staff, etc.). At first the answers were just “yes” or “no.” But then (from responses in the pilot testing) we realized that a person may belong to more than one population. For example, what if someone is both Asian and has a physical disability? Perhaps they feel that one group is well represented and the other not represented at all. How would they answer the question? Without creating a complicated response scale, we decided to change our options to “yes,” “some are,” and “no.”

“Don’t Know” or “Not Applicable”

In a similar vein, sometimes people do not know the answer to the question you are asking. They can feel pressured to choose among the options rather than skip the question (and if they do skip it, the data will not show why). For example, we are asking whether people feel that TLA is inclusive, exclusionary, or neither. Originally I thought those three choices covered all the bases. But during discussions with Cindy (who was pilot testing the questionnaire), we realized that someone who simply didn’t know wouldn’t feel comfortable saying that TLA was neither inclusive nor exclusionary. So we added a “Don’t know” option.

Speaking from experience, the most important thing is keeping an open mind. Remember that the people taking your questionnaire will see it through different eyes than yours, and they are the ones you are hoping to get information from. So, while I recommend following all the tips in Booklet 3, the best way to get good results is to test your questionnaire with a wide variety of people who represent those who will be taking it.

NEO Announcement! Home Grown Tools and Resources

Friday, February 3rd, 2017

Red toolbox with tools

Since NEO (formerly OERC) was formed, we’ve created a lot: four evaluation guides, a 4-step guide to creating an evaluation plan, in-person classes and webinars, and, of course, this very blog! All of the guides, classes, and blog posts come with a lot of materials, including tip sheets, example plans, and resource lists. To get to all of these resources, though, you had to go through each section of the website and search for them, or attend one of our in-person classes. That all changed today.

Starting now, NEO will be posting its own tip sheets, evaluation examples, and more of our favorite links on the Tools and Resources page. Our first addition is our brand new tip sheet, “Maximizing Response Rate to Questionnaires,” which can be found under the Data Collection tab. We also provided links to some of our blog posts in each tab, making them easier to find. Look for more additions to the Tools and Resources page in upcoming months.

Do you have a suggestion for a tip sheet? Comment below – you might see it in the future!

2016 Annual NEO Shop Talk Round-up

Friday, December 30th, 2016

Top 10 List

Like everyone else, we have an end-of-the-year list.  Here’s our top ten list of the posts we wrote this year, based on number of views:

10. Developing Program Outcomes Using the Kirkpatrick Model – with Vampires

9.  Inspirational Annual Reporting with Appreciative Inquiry

8.  What is a Need?

7.  Designing Surveys: Does the Order of Response Options Matter?

6.  Simply Elegant Evaluation: GMR’s Pilot Assessment of a Chapter Exhibit

5.  A Chart Chooser for Qualitative Data!

4.  W.A.I.T. for Qualitative Interviews

3.  The Zen Trend in Data Visualization

2.  How I Learned to Stop Worrying and Love Logic Models (The Chili Lesson)

1.  Logic Model for a Birthday Party

We put a lot of links to interesting things in our blog posts.  Here are the Top Ten websites that people went to from our blog:

10. The Kirkpatrick Model

9.  Books by Stephanie Evergreen

8.  Tearless Logic Model article in Global Journal of Community Psychology Practice

7.  AEA 365 | A Tip-a-Day by and for Evaluators

6.  Public Libraries, Project Outcome – Looking Back, Looking Forward

5.  Build a Qualitative Dashboard

4.  Nat King Cole, The Christmas Song

3.  The Histomap by John Sparks

2.  Tools: Tearless Logic Model (how-to summary)

1.  Stephanie Evergreen Qualitative Chart Chooser

The NEO wishes you a happy and fulfilling New Year!!

Evaluation Planning for Proposals: a New NEO Resource

Friday, August 12th, 2016


Have you ever found yourself in this situation?  You’re well along in your proposal writing when you get to the section that says “how will you evaluate your project?”  Do you think:

  1. “Oh #%$*! It’s that section again.”
  2. “Why do they make us do this?”
  3. “Yay! Here’s where I get to describe how I will collect evidence that my project is really working!”

We at the NEO suggest thinking about evaluation from the get-go, so you’ll be prepared when you get to that section.  And we have some great booklets that show how to do that.  But sometimes people aren’t happy when we say “here are some booklets to read to get started,” even though they are awesome booklets.

So the NEO has made a new web page to make it easier to incorporate evaluation into the project planning process and end up with an evaluation plan that develops naturally.

1. Do a Community Assessment; 2. Make a Logic Model; 3. Develop Measurable Objectives; 4. Create an Evaluation Plan

We group the process into 4 steps: 1) Do a Community Assessment; 2) Make a Logic Model; 3) Develop Measurable Objectives for Your Outcomes; and 4) Create an Evaluation Plan.   Rather than explain what everything is and how to use it (for that you can read the booklets), this page links to the worksheets and samples (and some how-to sections) from the booklets so that you can jump right into planning.  And you can skip the things you don’t need or that you’ve already done.

In addition, we have included links to posts in this blog that show examples of the areas covered so people can put them in context.

We hope this helps with your entire project planning and proposal writing experience, as well as provides support for that pesky evaluation section of the proposal.

Please let Cindy (olneyc@uw.edu) or me (kjvargas@uw.edu) know how it works for you, and feel free to make suggestions.  Cheers!


The Accidental Objective

Friday, June 10th, 2016

A couple of months ago, I accidentally set an objective for the NEO Shop Talk blog.

First, let me give you my definition of an objective. I think of objectives as observable outcomes. An outcome defines the effect you want to make through a project or initiative. (You can read more about outcomes here.)  Then, you add a measure (something observable), a target (what amount of change constitutes success), and a timeframe for achieving that target.  For example, say your doctor tells you to lower your blood pressure.  She likely will suggest you make some lifestyle changes, then return in six months (timeframe) so she can take a BP reading (measure) to see if it’s below 120/80, the commonly accepted target for “normal” blood pressure.
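To make that anatomy concrete, here is a toy sketch in Python of the blood-pressure example. The class and field names are illustrative scaffolding only, not an official NEO template:

```python
from dataclasses import dataclass

# Toy encoding of an objective: an outcome plus a measure,
# a target, and a timeframe. Names are illustrative only.
@dataclass
class Objective:
    outcome: str    # the effect you want to make
    measure: str    # something observable
    target: float   # what amount of change constitutes success
    months: int     # timeframe for achieving the target

bp = Objective(
    outcome="lower blood pressure",
    measure="systolic BP reading at follow-up",
    target=120,
    months=6,
)

reading = 118  # hypothetical reading at the six-month visit
print("objective met" if reading < bp.target else "keep working")
```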

In February 2015, Karen and I had set an outcome to increase our blog readership.  We monitored our site statistics, which we considered a measure of readership. However, we never wrote an objective or set a target. Like most of the world, we only write objectives when a VIP (such as an administrator or funder) insists. Otherwise, we are as uncommitted as the next person.

But then this happened.  I was preparing slides for a webinar on data visualization design principles and wanted to show how a benchmark line improves the meaning of data displayed in a line graph. A benchmark line basically represents your target and allows readers to compare actual performance against that target.  The best “real” data I had for creating a nice line graph was our blog’s site statistics.  But I needed a target to create the benchmark line.

NEO Blog Monthly Site Visits 2014-2016: a line graph showing monthly site visits starting near 100 and steadily increasing toward the target of 500. A benchmark line runs across the graph at the 500 mark, marking our target. We hit the target once in the time frame shown.

So I made one up: 500 views per month by March 1. I did check our historical site statistics to see what looked reasonable. However, I mostly chose 500 because it was a nice, simple number to use during a presentation. I didn’t even consult Karen. She learned about it when she reviewed my webinar slides.

After all, it was a “pretend” objective.

But a funny thing happened.  As luck would have it, the NEO had nine presentations scheduled for the month of February, the month after I prepared the graph. Our new target motivated us to promote our blog in every webinar. By the end of February, we exceeded our goal, with 892 site visits.

It was game on! We started monitoring our site statistics the way cats watch a gecko. Whenever we feared we might not squeak across that monthly target line, we strategized about how to bump up readership. At first, we focused on promotion. We worked on building our following on Twitter, where we promoted our blog posts each week. Karen created a Facebook page so we had another social media outlet for promoting our blog.

Eventually, though, we shifted our focus toward strategies for creating better content. Here were some of our ideas:

  • Show our readers how to apply evaluation in their work settings. Most of our readers are librarians, so we make a point of using examples in our articles that demonstrate how evaluation is used in library programs.
  • Demonstrate how the NEO evaluates its own program. We do posts about our own evaluation activities so that we can model the techniques we write and teach about.
  • Allow our readers to learn about assessment from each other. We work for a national network and our readers like to read about and learn from their colleagues. We now seek interviews with readers who have evaluation success stories to share.
  • Supplement our trainings. We create blog posts to supplement our training sessions. We then list relevant blog posts (with links) in our workshop and webinar resource lists.
  • Improve our consulting services. We offer evaluation consultations to members of the National Network of Libraries of Medicine. We now send them URLs to blog posts that we think will help them with particular projects.
  • Introduce new evaluation trends and tools: Both Karen and I participate in the American Evaluation Association, which has many creative practitioners who are always introducing new approaches to evaluation. We use NEO Shop Talk to pass along innovations and methods to our readers.


In the end, this accidental objective has improved our service. It nudged us toward thinking about how our blog contributes to the mission of the NEO and the NN/LM.

So I challenge you to set some secret objectives, telling only those who are helping you achieve that target.  See if it changes how you work.

If it does, email us. We’ll write about you in our blog.

By the way, if you want to learn how to add a benchmark line to your graphs, check out this post from Evergreen Data.
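And if you would rather script your graphs than build them in Excel, a benchmark line is a single extra call in most charting libraries. Here is a minimal matplotlib sketch; the months and counts are illustrative placeholders (only February’s 892 comes from the story above):

```python
import matplotlib.pyplot as plt

# Illustrative monthly visit counts; our real numbers live in the
# graph above. Only February's 892 is from the actual story.
months = ["Oct", "Nov", "Dec", "Jan", "Feb", "Mar"]
visits = [260, 310, 340, 420, 892, 510]
TARGET = 500  # the made-up-but-motivating objective

fig, ax = plt.subplots()
ax.plot(months, visits, marker="o", color="navy", label="Site visits")
# The benchmark line is one call, and it gives the graph a point of view
ax.axhline(TARGET, color="green", linestyle="--",
           label=f"Target ({TARGET}/month)")
ax.set_ylabel("Monthly site visits")
ax.legend()
plt.show()
```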

Data Party for Public Librarians

Friday, May 6th, 2016

The Engage for Health project team from left to right: Lydia Collins, Kathy Silks, Susan Jeffery, Cindy Olney

Last week, I threw my first data party. I served descriptive statistics and graphs; my co-hosts brought chocolate.

I first learned about data parties from evaluation consultant Kylie Hutchinson’s presentation It’s A Data Party that she gave at the 2016 American Evaluation Association Conference. Also known as data briefings or sense-making sessions, data parties actively engage stakeholders with evaluation findings.

Guest List

My guests were librarians from a cohort of public libraries that participated in the Engage for Health project, a statewide collaboration led by the NN/LM Middle Atlantic Region (MAR) and the Pennsylvania Library Association (PaLA). The NN/LM MAR is one of PaLA’s partners in PA Forward, a statewide initiative that engages libraries in activities addressing five types of literacy. The project team was composed of Lydia Collins of NN/LM MAR (which also funded the project), Kathy Silks of PaLA, and Susan Jeffery of the North Pocono Public Library. I joined the team to help them evaluate the project and develop reports to bring visibility to the initiative. Specifically, my charge was to use this project to provide experiential evaluation training to the participating librarians.

Librarians from our 18 cohort libraries participated in all phases of the planning and evaluation process. Kathy and Susan managed our participant recruitment and communication. Lydia provided training on how to promote and deliver the program, as well as assistance with finding health care partners to team-teach with the librarians. I involved the librarians in every phase of the program planning and evaluation process. We met to create the project logic model, develop the evaluation forms, and establish a standard process for printing, distributing, and returning the forms to the project team. In the end, librarians delivered completed evaluation forms from 77% of the adult participants in their Engage for Health training sessions.

What We Evaluated

The objective of PA Forward includes improving health literacy, so the group’s outcome for Engage for Health was to empower people to better manage their health. Specifically, we wanted them to learn strategies that would lead to more effective conversations with their health care providers. Librarians and their health care partners emphasized strategies such as researching health issues using quality online health resources, making a list of medications, and writing down questions to discuss at their appointments. We also wanted them to know how to use two trustworthy online health information sources from the National Library of Medicine: MedlinePlus and NIHSeniorHealth.

Party Activities

Sharing with Appreciative Inquiry. The data party kicked off with Appreciative Inquiry interviews. Participants interviewed each other, sharing their peak experiences and what they valued about those experiences. Everyone then shared their peak experiences in a large group. (See our blog entries here and here for detailed examples of using Appreciative Inquiry.)

Data sense-making: Participants then worked with a fact sheet of graphs and summary statistics compiled from the session evaluation data. As a group, we reviewed our logic model and discussed whether our data showed that we achieved our anticipated outcomes. The group also drew on both the fact sheet and the stories from the Appreciative Inquiry interviews to identify unanticipated outcomes. Finally, they identified metrics they wished we had collected. What was missing?
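If you are wondering how a fact sheet like that comes together, the statistics are the easy part. Here is a minimal sketch in Python with pandas; the file name and column names are hypothetical stand-ins, not the project’s actual form fields:

```python
import pandas as pd

# Hypothetical file and column names standing in for the real
# Engage for Health session evaluation forms.
forms = pd.read_csv("engage_for_health_forms.csv")

# Percentage breakdown of a yes/no outcome question
talk = forms["will_talk_with_provider"].value_counts()
print((talk / talk.sum() * 100).round(1))

# Median confidence ratings (1-5 scale) before and after the session
print(forms[["confidence_before", "confidence_after"]].median())

# Completed forms per library, to keep an eye on the return rate
print(forms.groupby("library").size())
```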

Consulting Circles: After a morning of sharing successes, the group got together to help each other with challenges. We had three challenge areas that the group wanted to address: integration of technology into the classes; finding partners from local health organizations; and promotional strategies. No area was a problem for all librarians: some were quite successful in a given area, while others struggled. The consulting groups were a chance to brainstorm effective practices in each area.

Next steps: As with most funded projects, both host organizations hoped that the libraries would continue providing health literacy activities beyond the funding period. To get the group thinking about program continuity, we ran a 1-2-4-All discussion about next steps. Participants first identified the next steps they would take at their libraries, then offered suggestions to NN/LM MAR and PaLA on how to support their continued efforts.

Post Party Activities

For each of the four party activities, a recorder from each group took discussion notes on a worksheet developed for the activity, then turned the worksheet in to the project team. We will incorporate this group feedback into the written reports that are currently in process.

If you are curious about our findings, I will say generally that our data supports the success of this project.  We have plans to publish our findings in a number of venues, once we have a chance to synthesize everything.  So watch this blog space and I’ll let you know when a report of our findings becomes available.

Meanwhile, if you are interested in reading more about data parties, check out this article in the Journal of Extension.


Diversity, Texas Libraries and Participatory Data Collection

Monday, May 2nd, 2016

On April 20, Cindy Olney and I facilitated a program for the Texas Library Association Annual Conference called Open Libraries! Making Your Library Welcome to All. The program was sponsored by TLA’s Diversity and Inclusion Committee, and the plan was for attendees to work cooperatively to discover ways to ensure that people of diverse cultures, languages, ages, religions, sexual orientations, physical abilities, and other backgrounds feel welcome at the library. The committee wanted to draw on the wealth of TLA librarians’ experiences, so Cindy was invited to gather as much information from the attendees as possible. As co-chair of the TLA Diversity and Inclusion Committee, I co-facilitated the event.

The process we used was a modified 1-2-4-All process, which you can find on the Liberating Structures website. Our primary question was “What can my library do to become more welcoming to all people?” We asked everyone in the room to brainstorm together all the different parts of a library that could be modified to make it more welcoming (e.g., reference services, the facility, etc.). We wanted to be sure that everyone thought as broadly and creatively as possible.

TLA Diversity Data Collection Program 2016

The discussion process actually had two parts.  For part one, we gave everyone two minutes to write as many ideas as they could on index cards (one idea per card).  Then we asked people to take two minutes to share their ideas with a partner.  They then shared their ideas with the entire table (up to 10 participants). The group then chose and wrote down the three best ideas and turned them in to the moderators.  Participants were instructed to leave their index cards with their ideas piled in the middle of their tables.

Here are some of the ideas generated through this discussion:

  • Welcome signs in different languages
  • Signage
  • Physical spaces – access to mobility

As you can see, the responses were fairly non-specific. We wanted richer descriptions of modifications to programs or services. So part two of the process involved asking participants to develop more detailed plans for making their libraries more welcoming. Using a method involving dog, cat, and sea creature stickers, we moved participants randomly to new tables so that they ended up with a new group of colleagues. They then chose a partner from their new table and, as a pair, randomly drew one idea card from the piles generated in part one. They worked on a plan for that idea for eight minutes. When the moderator called time, they pulled another card and worked on a plan for a second idea. In the final eight minutes of the session, we asked each table to share ideas with the entire group.

The plans in part two were better articulated and more detailed than those we got in part one. Here are some examples of the kinds of results we got from that exercise:

  • Signage: make it clearer and more colorful; offer signage in different languages or use digital signage.
  • Provide materials specific to the community and programming in the various languages spoken in the community; offer ESL classes in partnership with community colleges.
  • Invite representatives from ADA/disability advocacy groups to give suggestions on making library desks and areas more accessible.

The whole process was completed in a 50-minute conference program session. The other Diversity and Inclusion co-chair, Sharon Amastae from El Paso, TX, and I were both impressed with the energy and enthusiasm among attendees in the room.

The results of this data gathering event will be communicated to the TLA membership.  When that project is completed, we’ll let you know here on the NEO Shop Talk blog!

Photo credit: Esther Garcia


Appreciative Inquiry of Oz: Building on the Best in the Emerald City

Friday, February 19th, 2016

Cartoon image of an Emerald City

“One day not very long ago, librarians came to the Emerald City from their libraries in all of the countries of Oz. They came to visit the Great Library of the Emerald City, and to petition the Wizard to allow them to borrow books and other items at the Great Library. Their hope was to transport items from one library to another using the Winged Monkeys, who offered their skills for this task after they were set free and got bored.”

Thus begins the latest OERC project: an online class in Appreciative Inquiry (AI), offered through the MidContinental Region’s Librarians in the Wonderful Land of Oz Moodle ‘game’ (i.e., a series of online classes worth game points and CE credits from the Medical Library Association). The game is made up of several ‘challenges’ (online classes) for librarians, offered by NN/LM instructors.

In OERC’s challenge, Building on the Best at the Great Library of the Emerald City: Using Appreciative Inquiry to Enhance Services and Programs, the Wizard of Oz makes a deal with the librarians.  He will allow interlibrary loan of the Great Library’s resources if the librarians will assess customer satisfaction of the Great Library’s services and find things to improve.  And students in the class will learn to use a qualitative data collection technique called Appreciative Inquiry to do this assessment.

Sometimes people avoid customer service assessment because they find the methods complicated and time-consuming, and negative feedback can be uncomfortable for both the listener and the speaker. Appreciative Inquiry, with its focus on identifying and building on organizational strengths, removes that discomfort. A number of OERC workshops touch on Appreciative Inquiry, but this Librarians of Oz challenge lets you practice the technique, something the OERC has not been able to provide in the traditional webinar or workshop context. Completing the class is worth 14 MLA CE credits.

The class is free, but to take it you will need to register for the game Librarians in the Wonderful Land of Oz. If you don’t want to take the class but would still like to learn more about Appreciative Inquiry, I recommend our earlier blog posts on the topic.

From Cindy and Karen’s perspective, one of the best parts of this experience is that we finally get the official title of Wizard. Special thanks to John “Game Wizard” Bramble of the NN/LM MCR, who made all this happen.


The OERC Blog – Moving Forward

Friday, January 8th, 2016

turtle climbing up staircase

Since last week’s message, the OERC has been looking at some additional data about the blog in order to update our online communications plan going forward. The earlier OERC strategy had been to use social media to increase the use of evaluation resources, the OERC’s educational services, and the OERC’s coaching services. These continue to be the goals of the OERC’s plan. However, the following pieces of information have prompted a new strategy.

  • The OERC Blog is increasing in popularity. As reported last week, more people find it, share it with their regions, and engage with it by clicking on the links than ever before.
  • The blog always has new content, which is time-intensive to create: it takes approximately 6 person-hours each week to write and publish.
  • Although the OERC does not have a Facebook page, and the OERC Twitter account @nnlmOERC has been used primarily to promote the blog, Facebook still refers more people to the blog than Twitter does (this was kind of a shocker for us!).

We feel that the OERC Blog, based on the results described in last week’s post, has become one of the OERC’s most successful products. The blog is a source of educational content, and is itself an evaluation resource in need of promotion. Because of this, our Online Communications Plan going forward has a special focus on promoting the blog. Here are some of the new process goals for this purpose.

  • Facebook: The OERC will create a Facebook page that will promote the blog, link to other online evaluation resources, and show photos of what the OERC team is up to.
  • Twitter: Karen and Cindy will post at least one additional tweet per week to increase the OERC’s Twitter presence. These will include retweets to build social capital, which may lead to more retweets of our blog tweets (here is an interesting dissertation by Thomas Plotkowiak explaining this).
  • Training: We will make a point of promoting the blog during our in-person classes and webinars. For example, we may refer people to articles in our blog that supplement the content in the training sessions.
  • Email: The blog URL will be added to Karen and Cindy’s email signatures.

So, what kinds of things will we measure? Naturally we want process measurements (showing that things are working the way they should along the way) and outcome measurements (showing that we are meeting our goals).

Here are our process goals, which are the activities we are committing to this year:

  • 52 blog posts a year
  • 3 tweets a week
  • Minimum of 1 Facebook post a week
  • Blog information added to class resource information and email signatures

In the short-term, we hope to see people interacting with our social media posts. So we are hoping to see increases on the following measures of our short-term outcomes:

  • # of Twitter retweets, likes and messages
  • # of Facebook likes, comments, and shares
  • # of new followers on Twitter and Facebook

We hope that the increased interaction with our Facebook and Twitter posts will lead more readers to our blog. So we will be monitoring increases on the following long-term outcome measures:

  • # of blog visitors per month
  • average # of blog views per day
  • # of blog link “click-throughs” to measure engagement with the blog articles
  • # of people who register for weekly blog updates and add the OERC Blog to their blog feeds
  • # of times blog posts are re-posted in Regional Medical Library blogs and newsletters.
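Tracking numbers like these against targets doesn’t require anything fancy; a spreadsheet works fine. For illustration, here is a minimal Python sketch that checks one month’s activity against the process goals above (all counts are placeholders, not our real statistics):

```python
# Minimal sketch for checking one month's activity against the
# process goals above. All counts here are illustrative.
monthly_goals = {
    "blog_posts": 52 / 12,    # 52 posts/year
    "tweets": 3 * 4,          # 3 tweets/week
    "facebook_posts": 1 * 4,  # at least 1 post/week
}

this_month = {"blog_posts": 4, "tweets": 14, "facebook_posts": 3}

for metric, goal in monthly_goals.items():
    actual = this_month[metric]
    status = "on track" if actual >= goal else "behind"
    print(f"{metric}: {actual} vs. goal {goal:.1f} -> {status}")
```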

This is our strategy for increasing use of our blog content. We will keep you updated and share tools that we develop to track these outcomes.

References

Plotkowiak, Thomas. “The Influence of Social Capital on Information Diffusion in Twitter’s Interest-Based Social Networks.” Diss. University of St. Gallen, 2014. Web. 8 Jan. 2016.


Funded by the National Library of Medicine under Contract No. UG4LM012343 with the University of Washington.