Archive for the ‘Communications Tools’ Category
The American Evaluation Association’s Statement on Cultural Competence in Evaluation describes the importance of cultural competence in terms of ethics, validity of results, and theory.
- Ethics – quality evaluation has an ethical responsibility to ensure fair, just and equitable treatment of all persons.
- Validity – evaluation results that are considered valid require trust from the diverse perspectives of the people providing the data and trust that the data will be honestly and fairly represented.
- Theory – theories underlie all of evaluation, but theories are not created in a cultural vacuum. Assumptions behind theories must be carefully examined to ensure that they apply in the cultural context of the evaluation.
The Statement also makes some recommendations for essential practices for cultural competence, including the following examples:
- Acknowledge the complexity of cultural identity. Cultural groups are not static, and people belong to multiple cultural groups. Attempts to categorize people often collapse them into cultural groupings that may not accurately represent the true diversity that exists.
- Recognize the dynamics of power. Cultural privilege can create and perpetuate inequities in power. Work to avoid reinforcing cultural stereotypes and prejudice in evaluation. Evaluators often work with data organized by cultural categories. The choices you make in working with these data can affect prejudice and discrimination attached to such categories.
- Recognize and eliminate bias in language. Language is often used as the code for a certain treatment of groups. Thoughtful use of language can reduce bias when conducting evaluations.
Two recent entries on the Evergreen Blog, about data visualizations and how they can reflect cultural bias, illustrate how these principles can be applied to the evaluation of an outreach project. The first case, How Dataviz Can Unintentionally Perpetuate Inequality: The Bleeding Infestation Example, shows how using red to represent individual participants on a map made the actual participants feel they were perceived as a threat. The more recent post, How Dataviz Can Unintentionally Perpetuate Inequality Part 2, shows how the categories used in a chart of median household income reinforced stereotypes of certain cultures and skewed the data so that it misrepresented the income levels of the different groups.
On March 16, NN/LM PSR presented What the heck is Data Visualization and why should a librarian care?! for the Midday at the Oasis monthly webinar. Jackie Wirz, PhD, Research Data Ninja and Assistant Professor at Oregon Health & Science University, discussed the basic principles of presenting data with good visual design. You can view the webinar by visiting the Midday at the Oasis Archives page or by clicking on the YouTube video player below.
Note: To switch to full screen, click on the full screen icon in the bottom corner of the video player. To exit full screen, press Esc on your keyboard or click on the full screen icon again. If you have problems viewing full screen videos, make sure you have the most up-to-date version of Adobe Flash Player.
The ACRL Roadshow Workshop, Scholarly Communication: From Understanding to Engagement! will be offered on Thursday, March 24, 8:30 am – 4:30 pm, at the Toll Room, Alumni House, on the UC Berkeley campus. Registration is free and limited to 100 participants. The session is directed towards librarians and library staff who need a broad foundational knowledge of scholarly communication issues. Participants will learn about and discuss content access barriers, intellectual property, emerging opportunities, and engagement with faculty and students. Attendees will leave with practical ideas for developing outreach activities and models for supporting changes in scholarly communication. The two presenters for this workshop will be Katie Fortney, Copyright Policy & Education Officer, California Digital Library, and Jaron Porciello, Digital Scholarship Initiatives Coordinator, Digital Scholarship and Preservation Services, Cornell University.
The Alumni House is a short distance from the Downtown Berkeley BART station. Parking around campus is limited and taking public transportation is recommended. For inquiries regarding the workshop, please contact Jean McKenzie, UC Berkeley Acting Associate University Librarian for Collections.
You have surely noticed poorly designed data visualization displays with too many details, unnecessary icons, and many variables piled into one chart. But you don’t have to be an artist to create good visual displays. Most information designers concur that data visualization is about communication, not art. However, you do have to know how to design with a purpose. To understand the basics of good design, you need to understand why humans respond so well to visual displays of data, which is the topic of an excellent blog article by Stephen Few, a thought leader in the data visualization field. First, he argues that data visualizations help users perform three primary functions: exploring, making sense of, and communicating data. Evaluators would add that another goal is to help users apply data in planning and decision-making. To that end, Few argues that data visualizations should be designed to support readers’ ability to perform these four cognitive tasks:
- See the big picture in the data
- Compare values
- See patterns among values
- Compare patterns
Design experts like Stephen Few are avowed minimalists, who hate chart junk, such as gridlines and data labels. They have an affinity for small multiples, which are series of graphs displaying different slices of data. If you have never seen small multiples, visit this post from Juice Analytics with good examples. In general, they do not include any element that will hinder users’ ability to make comparisons, find patterns, and identify pattern abnormalities that may be indicators of important events. Decorative features like gas gauges and paper doll icons are viewed as unnecessary distractions.
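As a rough, dependency-free sketch of the small-multiples idea, the following Python snippet draws one tiny text-mode bar chart per region on a shared scale. The region names and counts are made-up illustrative data, not figures from any of the posts above:

```python
# A text-mode sketch of the small-multiples principle: one tiny bar chart
# per region, all drawn on the SAME scale so the panels stay comparable.
# Region names and counts are made-up illustrative data.

data = {
    "North": [12, 15, 18, 25],
    "South": [8, 9, 10, 13],
    "West": [5, 7, 9, 14],
}

# Shared scale: the maximum value across every panel
scale = max(v for counts in data.values() for v in counts)

def panel(name, counts, width=20):
    """Render one small multiple as rows of '#' bars on the shared scale."""
    lines = [name]
    for v in counts:
        bar = "#" * round(v / scale * width)
        lines.append(f"{v:3d} {bar}")
    return "\n".join(lines)

for name, counts in data.items():
    print(panel(name, counts))
    print()
```

Because every panel uses the same scale, a reader can compare values across regions at a glance, which is exactly the comparison task the minimalists optimize for.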
There is a distinction between data visualizations and infographics. Alberto Cairo, Knight Chair in Visual Journalism at the University of Miami’s School of Communication, wrote that data visualizations are tools for interactive data exploration, while infographics are visual displays that make a specific point. One way to think of this is that data visualizations have users, while infographics have readers. Chart art may be more legitimate in infographics because it supports the primary message or story. But Cairo admits that the boundary between infographics and data visualizations is fuzzy. He noted a trend toward infographics with two layers: a presentation layer and an exploration one. These infographics have an obvious primary message, but readers are also presented with opportunities to explore their own questions.
That said, Cairo still argues that data design principles hold true for both data visualizations and infographics. Don’t drown your readers in images or distract them with bling. Zen is in; bells and whistles are out. The good news is that simple data visualizations do not require sophisticated software or design skills. That’s not to say that simple is the same as easy. Good data visualizations and infographics take a lot of thought. For the more interactive data visualizations, you must identify how your users will use your data and design accordingly. For infographics, you need first to clearly identify your central message and then be sure that every element has a supporting role. To read further about developing good visual design habits, check out Presenting Data Effectively by Stephanie Evergreen (Sage, 2013).
Recently the Public Library Association (PLA) initiated a service called Project Outcome. An article by Carolyn Anthony, director of the Skokie Public Library, IL, entitled “Project Outcome – Looking Back, Looking Forward,” was recently published in Public Libraries Online; it describes the successes of libraries using this service over the past six months.
Project Outcome is an online resource that provides evaluation tools designed to measure the impact of library programs and services, such as summer reading programs or career development programming. It also provides ready-made reports and data dashboards that can give libraries and stakeholders immediate data on their programs’ outcomes. Finally, Project Outcome provides support and peer-sharing opportunities to address common challenges and increase capacity for outcomes evaluation.
Following are some highlights about this service:
- Project Outcome has managed to create a structured approach for program outcome evaluation that can be used online by public libraries of all shapes and sizes, by people who have not done outcome evaluation before. Along with tools for collecting data, the resource has tutorials and support for libraries doing outcomes evaluation for the first time.
- Continued support and peer sharing as an integral part of the service means that PLA is building a community of librarians who use outcome evaluation.
- The peer stories described in the article reinforce the idea that evaluation isn’t something forced on you from outside; it can be something that helps you create a better library and enhance the meaning of your library’s programs.
- This process teaches librarians to start with the evaluation question (“decide what you want to learn about outcomes in your community”) and a plan for what to do with the findings. And the process ends with successfully communicating your findings to stakeholders and implementing next steps.
- Lastly, Project Outcome and the PLA Performance Measurement Task Force are planning the next iteration of their project that will measure whether program participants followed through with their intended outcomes.
The HHS Office of the Assistant Secretary for Preparedness and Response recently kicked off the My Preparedness Story: Staying Healthy and Resilient Video Challenge. The contest invites young people between the ages of 14 and 23 to submit a creative video, up to 60 seconds long and closed-captioned, showing how they help their families, friends, and community protect their health during disasters and every day. Completed videos should be uploaded to YouTube, and the link, along with a description and transcript of the video, should be provided through the “Submit Solutions” form. The entries will be evaluated by a panel of expert judges and the top entries will be posted on the web site for public voting. Submissions could be used to help others learn better ways to prepare their communities for disasters and emergencies, and contestants could win up to a $2,000 grand prize. Entries are due by March 28, 2016, at 8:00 p.m. PDT. Winners will be notified and announced no later than May 9.
Join members of the National Library of Medicine Training Center for three quick online presentations related to teaching topics. Jessi Van Der Volgen will discuss tips and tools for creating video tutorials. Cheryl Rowan will talk about including audience culture and diversity in your training sessions, and Rebecca Brown will demonstrate how to integrate Zaption into your online training to add interactive opportunities to videos. Register now for this one-hour session on Friday, February 19, at 10:00 AM PST!
The National Institutes of Health (NIH) has just released the NIH-Wide Strategic Plan, Fiscal Years 2016–2020: Turning Discovery Into Health, which will ensure the agency remains well positioned to capitalize on new opportunities for scientific exploration and address new challenges for human health. Developed after hearing from hundreds of stakeholders and scientific advisers, and in collaboration with leadership and staff of NIH’s Institutes, Centers, and Offices (ICOs), the plan is designed to complement the ICOs’ individual strategic plans that are aligned with their congressionally mandated missions.
The plan focuses on four essential, interdependent objectives that will help guide NIH’s priorities over the next five years as it pursues its mission of seeking fundamental knowledge about the nature and behavior of living systems and applying that knowledge to enhance health, lengthen life, and reduce illness and disability. The objectives are to:
- advance opportunities in biomedical research in fundamental science, treatment and cures, and health promotion and disease prevention;
- foster innovation by setting NIH priorities to enhance nimbleness, consider burden of disease and value of permanently eradicating a disease, and advance research opportunities presented by rare diseases;
- enhance scientific stewardship by recruiting and retaining an outstanding biomedical research workforce, enhancing workforce diversity and impact through partnerships, ensuring rigor and reproducibility, optimizing approaches to inform funding decisions, encouraging innovation, and engaging in proactive risk management practices; and
- excel as a federal science agency by managing for results by developing the “science of science,” balancing outputs with outcomes, conducting workforce analyses, continually reviewing peer review, evaluating steps to enhance rigor and reproducibility, reducing administrative burden, and tracking effectiveness of risk management in decision making.
To inform development of the strategic plan, NIH solicited input from a wide range of stakeholders through a Request for Information, which generated more than 450 responses; a series of interactive webinars, which attracted more than 750 participants; and meetings with 21 NIH advisory councils, including the Advisory Committee to the NIH Director. The plan concludes with a bold vision for NIH, listing some specific achievements and advances that the agency will strive to deliver over the next five years.
The National Library of Medicine (NLM) Value Set Authority Center (VSAC) has just launched VSAC Collaboration, a tool to support communication, knowledge management, and document management by value set authors and stewards. VSAC Collaboration provides a central site where value set authors can post value sets for collaborative discussion. On this site, teams can share threaded discussions about the value sets, view recent value set expansions posted by site members, organize their value sets by usage and by their team’s workflow needs, and receive activity and change notifications from VSAC.
VSAC Collaboration Tool training webinars and slides are available. Access to the VSAC and to the VSAC Collaboration Tool requires a free Unified Medical Language System® Metathesaurus License.
We’re all trying to find ways to improve evaluation of our social media efforts. It’s fun to count the number of retweets and “likes.” But are these numbers meaningful? A recent program at the American Evaluation Association Conference in Chicago, “Do Likes Save Lives? Measuring What Really Matters in Social Media and Digital Advocacy Efforts,” presented by Lisa Hilt and Rebecca Perlmutter of Oxfam, was designed to build knowledge and skills in planning and measuring social media strategies, setting digital objectives, selecting meaningful indicators, and choosing the right tools and approaches for analyzing social media data. The presenters did not want to rely solely on what they called “vanity metrics,” such as the number of “impressions” or “likes”; on their own, these metrics reveal very little about actual engagement with the information. Instead, they chose to focus on specific social media objectives based on their overall digital strategy.
Develop a digital strategy:
- Connect the overall digital strategy to campaign objectives (for example, to influence a concrete change in policy, or to change the debate on a particular issue).
Develop social media objectives:
- You want people to be exposed to your message.
- Then you want people to engage with it (for example, by sharing your message) or act on it (for example, by signing an online petition after reading it).
Collect specific information based on objectives:
- Collect measurable data about the social media engagement that supports your objectives (for example, “the Oxfam Twitter campaign drove 15% of readers to sign its petition” vs. “we got 1,500 likes”).
The presenters suggested some types of more meaningful metrics:
- On Twitter, you can look at the number of profiles that take the action you want them to take, and then the number of tweets or retweets about your topic.
- For Facebook, the number of likes, shares and comments mean that your audience was definitely exposed to your message.
- Changes in the rate of likes or follows (for example, if you normally get 5 new followers to your fan page per week but, due to a particular campaign strategy, you suddenly start getting 50 new followers a week).
- Number of “influential” supporters.
- Qualitative analysis: Consider analyzing comments on Facebook posts, or conversation around a hashtag in Twitter.
Overall, the goal is to have a plan for how you would like to see people interact with your messages in relation to your overall organizational and digital strategies, and find metrics to see if your plan worked.
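The petition example above reduces to simple arithmetic: a conversion rate ties a reach figure (a vanity metric) to the action you actually care about. A minimal Python sketch, using made-up numbers that echo the examples in the post rather than Oxfam’s real figures:

```python
# Tie a "vanity metric" (reach) to the action you actually care about.
# All numbers here are hypothetical, chosen to echo the petition example.

def conversion_rate(actions_taken, people_reached):
    """Percentage of reached people who took the desired action."""
    if people_reached == 0:
        return 0.0
    return 100.0 * actions_taken / people_reached

readers = 10_000    # people who clicked through from the campaign
signatures = 1_500  # of those, how many signed the petition
print(f"{conversion_rate(signatures, readers):.0f}% of readers signed")  # 15%

# A change in the *rate* of follows can matter more than the raw count:
baseline_per_week = 5    # typical new followers per week
campaign_per_week = 50   # new followers per week during the campaign
print(f"follower growth up {campaign_per_week / baseline_per_week:.0f}x")  # 10x
```

Reporting “15% of readers signed” connects the metric directly to the campaign objective, whereas “we got 1,500 likes” says nothing about whether the plan worked.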