
NEO Shop Talk

The blog of the National Network of Libraries of Medicine Evaluation Office

Archive for the ‘Questionnaires and Surveys’ Category

The Dark Side of Questionnaires: How to Identify Questionnaire Bias

Monday, March 6th, 2017

Villain cartoon with survey questions

People in my social media circles have been talking lately about bias in questionnaires. Biased questionnaires do exist. Some are biased by accident and some on purpose. Some are biased in the questions themselves and some in other ways, such as in the selection of the people asked to complete them. Recently, a couple of my friends posted on Facebook that people should check out the NNLM Evaluation Office to learn about better questionnaires. Huzzah! This week’s post was born!

Here are a few things to look for when creating, responding to, or looking at the results of questionnaires.

Poorly worded questions

Sometimes simple problems with questions can lead to bias, whether accidental or on purpose.  Watch out for these kinds of questions:

  • Questions that have an unequal number of positive and negative responses.

Example:

Overall, how would you rate NIHSeniorHealth?

Excellent | Very Good | Good | Fair | Poor 

Notice that “Good” is the middle option (which should be neutral), and some people consider “Fair” to be a slightly positive term. A balanced scale pairs each positive option with a negative one, for example: Very Good | Good | Neither Good nor Poor | Poor | Very Poor.

  • Leading questions, which are questions that are asked in a way that is intended to produce a desired answer.

Example:

Most people find MedlinePlus very easy to navigate.  Do you find it easy to navigate?  (Yes   No)

How would you feel if you had trouble navigating MedlinePlus? It would be hard to say ‘No’ to that question.

  •  Double-barreled questions, which are two questions in one.

 Example:

 Do you want to lower the cost of health care and limit compensation in medical malpractice lawsuits?

 This question has two parts – to answer yes or no, you have to agree or disagree with both parts. And who doesn’t want to lower health care costs?

  •  Loaded questions, which are questions that have a false or questionable logic inherent in the question (a “Have you stopped beating your wife” kind of question). Political surveys are notorious for using loaded questions.

Example:

Are you in favor of slowing the increase in autism by allowing people to choose whether or not to vaccinate their child?

This question makes the assumption that vaccinations cause autism. It might be difficult to answer if you don’t agree with that assumption.

The NEO has some suggestions for writing questions in its Booklet 3: Collecting and Analyzing Evaluation Data, pages 5-7.

Questionnaire respondents

People think of the questions as the main way to bias a questionnaire, but another form of bias can be found in the questionnaire respondents.

  • Straw polls or convenience polls are polls administered in whatever way is easiest, for example, polling the people attending an event or putting a questionnaire on a newspaper homepage (or your Facebook page). They are problematic because they attract responses from people who are particularly interested in or energized by a topic, so you end up hearing from the noisy minority.
  • Who you send the questionnaire to should follow from why you are sending it. If you want to know the opinions of people in a small club, then that’s who you send it to. But if you are trying to learn about a large population, you might want to try sampling, which involves learning about randomization. (Consider checking out Appendix C of NNLM PNR’s Measuring the Difference: Guide to Planning and Evaluating Health Information Outreach; a minimal sampling sketch also follows this list.) Keep in mind that the potential bias here isn’t necessarily in sending the questionnaire to a small group of people, but in how you represent the results.
  • A low response rate may bias questionnaire results because it’s hard to know whether your respondents truly represent the group being surveyed. The best way to prevent nonresponse bias is to follow the methods described in the NEO post Boosting Response Rates with Invitation Letters to get the best response rate possible.
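If you are curious what simple random sampling looks like in practice, here is a minimal sketch in Python. The member list, seed, and sample size are all hypothetical; a real project would draw from a complete, up-to-date list of the population you want to represent.

    import random

    # Hypothetical list standing in for your real population, e.g.,
    # every member organization in your region.
    members = ["Library A", "Library B", "Library C", "Library D",
               "Library E", "Library F", "Library G", "Library H"]

    random.seed(42)    # fix the seed so the draw can be reproduced
    sample_size = 3    # how many members to invite

    # random.sample draws without replacement, so every member has an
    # equal chance of being selected.
    invitees = random.sample(members, sample_size)
    print(invitees)

Because everyone on the list has the same chance of selection, results from the sample can be generalized to the whole list in a way that convenience polling cannot.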

Lastly, the Purpose of the Questionnaire

Just like looking for bias in news or health information or anything else, you want to think about who is putting out the questionnaire and what its purpose is. A questionnaire isn’t always a tool for objectively gathering data. Here are some other things a questionnaire can be used for:

  • To energize a constituent base so that they will donate money (who hasn’t filled out a questionnaire that ends with a request for donations?)
  • To confirm what someone already thinks on a topic (those Facebook polls are really good for that)
  • To give people information while pretending to find out their opinion (a lot of marketing polls I get on my landline seem to be more about letting me know about some products than really finding out what I think).

If you want to know more about questionnaires, here are some of the NEO resources that can help:

Planning and Evaluating Health Information Outreach Projects, Booklet 3: Collecting and Analyzing Evaluation Data

Boosting Response Rates with Invitation Letters

More NEO Shop Talk blog posts about Questionnaires and Surveys

 

Picture attribution

Villano by J.J., licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license.

 

ABP: Always Be Pilot-testing (some experiences with questionnaire design)

Monday, February 20th, 2017

Cover of NEO's Booklet 3 on collecting and analyzing evaluation data

This week I have been working on a questionnaire for the Texas Library Association (TLA) on the cultural climate of TLA.  Having just gone through this process, I will tell you that NEO’s Booklet #3: Collecting and Analyzing Evaluation Data has really useful tips on how to write questionnaires (pp. 3-7).  I thought it might be good to talk about some of the tips that turned out to be particularly useful for this project, but the theme of all of them is “always be pilot-testing.”

NEO’s #1 Tip: Always pilot test!

This questionnaire is still being pilot tested. So far I have thought the questionnaire was perfect at least 10 times, and the people pilot testing it for our committee are still turning up important changes.  One important part of this tip is to include stakeholders in the pilot testing.  Stakeholders bring points of view that the people creating the survey may not share.  After we have what we think is a final version, our questionnaire will be tested by the TLA Executive Board.  While this process sounds exhausting, every single change that has been made (to the questionnaire that I thought was finished) has fundamentally improved it.

There is a response for everyone who completes the question

Our questionnaire asks questions about openness and inclusiveness to people of diverse races, ethnicities, nationalities, ages, gender identities, sexual identities, cognitive and physical disabilities, perceived socioeconomic status, etc.  We are hoping to get personal opinions from all kinds of librarians who live all over Texas.  By definition this means that many of the questions are possibly sensitive, and may be hot button issues for some people.

In addition to wording the questions carefully, it’s important that every question has a response for everyone who completes the question. We would hate for someone not to find the response that best works for them and then leave the questionnaire unanswered, or even worse, get their feelings hurt or feel insulted. For example, we have a question about whether our respondents feel that their populations are represented in TLA’s different groups (membership, leadership, staff, etc.). At first the answers were just “yes” or “no.”  But then (from responses in the pilot testing) we realized that a person may feel that they belong to more than one population. For example, what if someone is both Asian and has a physical disability?  Perhaps they feel that one group is well represented and the other not represented at all.  How would they answer the question?  Without creating a complicated response, we decided to change our response options to “yes,” “some are,” and “no.”

“Don’t Know” or “Not Applicable”

In a similar vein, sometimes people do not know the answer to the question you are asking.  They can feel pressured to choose one of the responses offered rather than skip the question (and if they do skip it, the data will not show why).  For example, we are asking whether people feel that TLA is inclusive, exclusionary, or neither.  Originally I thought those three choices covered all the bases.  But during discussions with Cindy (who was pilot testing the questionnaire), we realized that someone who simply didn’t know wouldn’t feel comfortable saying that TLA was neither inclusive nor exclusionary.  So we added a “Don’t know” option.

Speaking from experience, the most important thing is keeping an open mind. Remember that the people taking your questionnaire will be seeing it through different eyes than yours, and they are the ones you are hoping to get information from.  So, while I recommend following all the tips in Booklet 3, to get the best results make sure you test your questionnaire with a wide variety of people who represent those who will be taking it.

NEO Announcement! Home Grown Tools and Resources

Friday, February 3rd, 2017

Red Toolbox with Tools

Since NEO (formerly OERC) was formed, we’ve created a lot of material: four evaluation guides, a 4-step guide to creating an evaluation plan, in-person classes and webinars, and of course this very blog! All of the guides, classes, and blog posts come with a lot of materials, including tip sheets, example plans, and resource lists. To get to all of these resources, though, you had to go through each section of the website and search for them, or attend one of our in-person classes. That all changed today.

Starting now, NEO will be posting its own tip sheets, evaluation examples, and more of our favorite links on the Tools and Resources page. Our first addition is our brand new tip sheet, “Maximizing Response Rate to Questionnaires,” which can be found under the Data Collection tab. We also provided links to some of our blog posts in each tab, making them easier to find. Look for more additions to the Tools and Resources page in upcoming months.

Do you have a suggestion for a tip sheet? Comment below – you might see it in the future!

Participatory Evaluation, NLM Style

Friday, November 11th, 2016

Road Sign with directional arrow and "Get Involved" written on it.

This week, I invite you to stop reading and start doing.

Okay, wait. Don’t go yet.  Let me explain. I am challenging you to be a participant-observer in a very important assessment project being conducted by the National Library of Medicine (NLM).

The NEO is part of the National Library of Medicine’s program (The National Network of Libraries of Medicine) that promotes use of NLM’s extensive body of health information resources.  The NLM is devoted to advancing the progress of medicine and improving the public health through access to health information. Whether you’re a librarian, health care provider, public health worker, patient/consumer, researcher, student, educator, or emergency responder fighting health-threatening disasters, the NLM has high quality, open-access health information for you.

Now the NLM is working on a long-range plan to enhance its service to its broad user population.  It is inviting the public to provide input on its future direction and priorities. Readers, you are a stakeholder in the planning process. Here is your chance to contribute to the vision. Just click here to participate.

And, because you are an evaluation-savvy NLM stakeholder, your participation will allow you to experience a strength-based participatory evaluation method in action.  Participatory evaluation refers to evaluation projects that engage a wide swath of stakeholders. Strength-based evaluation approaches are those that focus on getting stakeholders to identify the best of organizations and suggest ways to build on those strengths. Appreciative Inquiry is one of the most widely recognized strength-based approaches. The NEO blog has posts featuring Appreciative Inquiry projects here and here.

While I have no idea if the NLM’s long-range planning team explicitly used Appreciative Inquiry for developing their Request for Information, their questions definitely embody the spirit of strength-based assessment. I’m not going to post all of the questions here because I want readers to go to the RFI to see them for themselves. But as a teaser, here’s the first question that appears in each area of inquiry addressed in the feedback form:

 “Identify what you consider an audacious goal in this area – a challenge that may be daunting but would represent a huge leap forward were it to be achieved.  Include any proposals for the steps and elements needed to reach that goal. The most important thing NLM does in this area, from your perspective.”

So be an observer: check out the NLM’s Request for Information.  Notice how they constructed a strength-based participant feedback form.

Then be a participant: take a few minutes to post your vision for the future of NLM.

From QWERTY to Quality Responses: How To Make Survey Comment Boxes More Inviting

Friday, July 22nd, 2016

The ubiquitous comment box.  It’s usually stuck at the end of a survey with a simple label such as “Suggestions,” “Comments:” or “Please add additional comments here.”

Those of us who write surveys have an over-idealistic faith in the potential of comment boxes, also known as open-ended survey items or questions.  These items will unleash our respondents’ desire to provide creative, useful suggestions! Their comments will shed light on the difficult-to-interpret quantitative findings from closed-ended questions!

In reality, responses in comment boxes tend to be sparse and incoherent. You get a smattering of “high-five” comments from your fans. A few longer responses may come from those with an ax to grind, although their feedback may be completely off topic.  More often, comment boxes are left blank, unless you make the mistake of requiring an answer before the respondent can move on to the next item. Then you’ll probably get a lot of QWERTYs in your blank space.

Let’s face it.  Comment boxes are the vacant lots of Survey City.  Survey writers don’t put much effort into cultivating them. Survey respondents don’t even notice them.

Can we do better than that?  Yes, we can, say the survey methods experts.

First, you have to appreciate this fact: open-ended questions ask a lot of respondents.  They have to create a response. That’s much harder than registering their level of agreement to a statement you wrote for them. So you need strategies that make open-ended questions easier and more motivating for the survey taker.

In his online class Don’t (Survey)Monkey Around: Learn to Make Your Surveys Work,  Matthew Champagne provides the following tips for making comment boxes more inviting to respondents:

  • Focus your question. Get specific and give guidance on how you want respondents to answer. For example, “Please tell us what you think about our new web site. Tell us both what you like and what you think we can do better.” I try to make the question even easier by putting boundaries on how much I expect from them.  So, when requesting feedback on a training session, I might ask my respondents to “Please describe one action step you will take based on what you learned in this class.”
  • Place the open-ended question near related closed-ended questions. For example, if you are asking users to rate the programming at your library, ask for suggestions for future programs right after they rate the current program. The closed-ended questions have primed them to write their response.
  • Give them a good reason to respond. A motivational statement tells respondents how their answers will be used. Champagne says that this technique is particularly effective if you can explain how their responses will be used for their personal benefit. For example, “Please give us one or two suggestions for improving our reference services.  Your feedback will help our reference librarians know how to provide better service to users like you.”
  • Give them room to write. You need a sizable blank space that encourages your respondents to be generous with their comments. Personally, when I’m responding to an open-ended comment on a survey, I want my entire response to be in view while I’m writing.  As a survey developer, I tend to use boxes that are about three lines deep and half the width of the survey page.

Do we know that Champagne’s techniques work?  In Dillman et al.’s classic book on survey methods, the authors present research findings to support Champagne’s advice. Adding motivational words to open-ended survey questions produced a 5-15 word increase in response length and a 12-20% increase in the number of respondents who submitted answers.  The authors caution, though, that you need to use open-ended questions sparingly for the motivational statements to work well. When four open-ended questions were added to a survey, the motivational statements worked better for questions placed earlier in the survey.

I should add, however, to never make your first survey question an open-ended one.  The format itself seems to make people close their browsers and run for the hills.  I always warm up the respondents with some easy closed-ended questions before they see an open-ended item.

Dillman et al. gave an additional technique for getting better responses to open-ended items: Asking follow-up questions.  Many online software packages now allow you to take a respondent’s verbatim answer and repeat it in a follow-up question.  For example, a follow-up question about a respondent’s suggestions for improving the library facility might look like this:

“You made this suggestion about how to improve the library facility: ‘The library should add more group study rooms.’ Do you have any other suggestions for improving the library facility?” [The bolded statement is the respondent’s verbatim written comment.]

Follow-up questions like this have been shown to increase the detail of respondents’ answers to open-ended questions.  If you are interested in testing out this format, search your survey software system for instructions on “piping.”
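Under the hood, piping is essentially string substitution: the software stores the verbatim answer and splices it into the wording of a later question. Here is a minimal sketch of the idea in Python, reusing the example wording above (the stored answer is, of course, hypothetical):

    # The respondent's verbatim answer to the earlier open-ended item.
    first_answer = "The library should add more group study rooms."

    # Follow-up question template; {answer} marks where the verbatim
    # comment is "piped" in.
    follow_up = (
        "You made this suggestion about how to improve the library "
        "facility: '{answer}' Do you have any other suggestions for "
        "improving the library facility?"
    ).format(answer=first_answer)

    print(follow_up)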

When possible, I like to use an Appreciative Inquiry approach for open-ended questions. The typical Appreciative Inquiry approach requires two boxes, for example:

  • Please tell us what you liked most about the proposal-writing workshop.
  • What could the instructors do to make this the best workshop possible on proposal writing?

People find it easier to give you an example rooted in experience.  We are storytellers at heart, and you are asking for a mini-story. Once they tell their story, they are better prepared to give you advice on how to improve that experience.  The Appreciative Inquiry structure also gives specific guidance on how you want them to structure their responses.  The format used for the second question is more likely to gather actionable suggestions.

So if you really want to hear from your respondents, put some thought into your comment box questions.  It lets them know that you want their thoughtful answers in return.

Source: The research findings reported in this post are from Internet, Phone, Mail and Mixed-Mode Surveys: The Tailored Design Method (4th ed.), by Dillman, Smyth, and Christian (Hoboken, NJ: John Wiley and Sons, Inc., 2014), pp. 128-134.

Simply Elegant Evaluation: GMR’s Pilot Assessment of a Chapter Exhibit

Friday, January 15th, 2016

If you spend any time with librarians who work for the National Network of Libraries of Medicine (NN/LM), you’ll likely hear about their adventures with conference exhibits. Exhibiting is a typical outreach activity for the NN/LM Regional Medical Libraries (RMLs), which are eight health sciences libraries that lead other libraries and organizations in their regions in promoting the fine health information resources of the National Library of Medicine (NLM) and the National Institutes of Health.  The partnering organizations are called “network members” and, together with the RMLs, are the NN/LM.

Jacqueline Leskovec

Exhibiting is quite an endeavor. It requires muscles for hauling equipment and supplies. You have to be friendly and outgoing when your feet hurt and you’re fighting jet lag. You need creative problem-solving skills when you’re in one state and your materials are stuck in another.

More than one RML outreach librarian has asked the question: Is exhibiting worth it?

Jacqueline Leskovec, Outreach, Planning, and Evaluation Coordinator at the NN/LM Greater Midwest Regional Medical Library (GMR), decided to investigate this question last October. She specifically chose to assess a particular type of NN/LM exhibit: those held at Medical Library Association chapter meetings.  The question had been raised at a GMR staff meeting about the value of exhibiting at a conference where most attendees are medical librarians, many of whom already know about NLM and NIH resources.

Jacqueline decided to look at the question from a different angle. Could they consider, instead, the networking potential of their exhibit? The NN/LM runs on relationships between regional medical library staff and other librarians in their respective regions. Possibly the booth’s value was that it provided an opportunity for the GMR staff to meet with librarians from both long-standing and potential member organizations of the GMR.

Collecting Feedback

Jacqueline decided to ask two simple evaluation questions.  First, did existing GMR users stop by the exhibit booth to visit with the GMR staff at the chapter meeting booth?  Second, did the booth provide the GMR staff with opportunities to meet librarians who were unaware of the NN/LM? In a nutshell, the questions focused on the booth’s potential to promote active participation in the NN/LM. This was a valid goal for an exhibit targeting this particular audience, where the GMR could find partners to support the network’s mission of promoting NLM resources.

She worked with the OERC to develop a point-of-contact questionnaire that she administered to visitors using an iPad. Her questionnaire had five items that people responded to via touch screen.  She chose the app Quick Tap Survey because it produced an attractive questionnaire, data could be collected without an Internet connection, and she could purchase a one-month subscription for the software.  The app also has a feature that allows the administrator to randomly pull a name for a door prize. Jacqueline used this feature to give away an NLM portfolio that was prominently displayed on the exhibit table. (Participation was voluntary, and the personally identifiable information was deleted after the drawing.)

Jacqueline stood in front of the booth to attract visitors, a practice she uses at all exhibits. She did not find that the questionnaire created any barriers to holding conversations with visitors. Quite the contrary, many were intrigued with the technology. Almost no one turned down her request to complete the form. Of the 120 conference attendees (the count reported by the Midwest MLA chapter), 38 (32%) visited the GMR booth and virtually all agreed to complete the questionnaire.

What Did GMR Learn?

Jacqueline learned that 50% of the visitors came to the booth specifically to visit with GMR staff, while 26% came to get NLM resources.  This confirmed that the visits were more related to networking than to information-seeking about NLM or NIH resources. She also learned that more than half were return visitors who had stopped by at past conferences, while 46% had never visited the booth before.  It appeared that the booth served equally as a way for GMR staff to talk with existing users and to meet potential new ones. Those who were return visitors also were the more likely users of the GMR: 68% said that the GMR was the first place they would seek answers to questions they had about NLM or NIH resources. (Although one added that she would first look online, then contact them if she couldn’t find the answer on her own.)  In contrast, 56% of new booth visitors said they usually sought help from a friend or colleague; only 26% would contact the GMR. The findings do not indicate that exhibits cause librarians to become more involved with the GMR. However, when the GMR offers opportunities for face-to-face interactions, its users take advantage of them.

Visitors also got an opportunity to voice their opinion about the continuation of GMR exhibits at chapter meetings. There was fairly universal agreement: 92% said they thought the GMR should continue. The other 8% said they weren’t sure, but no one said GMR should stop.

Lessons learned

Jacqueline found it was easy to get people to take her questionnaire, particularly with a smooth application like Quick Tap Survey. She also learned that, regardless of the care she took in developing her questions, she still had at least one item that could have been worded better. However, tweaks can easily be implemented for future exhibits.

Overall, she said this assessment project added depth to the booth assessments that GMR typically conducts. Previous assessments focused on describing booth traffic, such as the number of visitors, staff hours in the booth, or the number of promotional materials distributed. This project described the actual visitors and what they got out of the exhibit.

Epilogue: Why the OERC Loves This Project

We love this project because Jacqueline thought carefully about the outcomes of exhibiting to this particular audience and designed her questionnaire accordingly.  She recognizes that exhibits at chapter meetings are a specific type of event. The goals of NN/LM exhibits at other types of conferences are different, so the questionnaires would have to be adapted for those goals.

We also love this project because it shows that you can assess exhibits. Back in the day, point-of-contact assessment required paper-and-pencil methods.  It was a data collection approach that seemed likely to be self-defeating. Visitors would cut a wide path to avoid requests to fill out a form. Now that we have the technology (tablets and easy-to-use apps) that makes the task less daunting, the OERC has been promoting the idea of exhibit assessment.  Jacqueline’s project is proof that it can be done!

Boosting Response Rates with Invitation Letters

Friday, October 30th, 2015

"You've got mail" graphicwith mail spelled m@il

Today’s topic: The humble survey invitation letter.

I used to think of the invitation letter (or email) as a “questionnaire delivery device.”  You needed some way to get the URL to your prospective respondents, and the letter (or, more specifically, the email) was how you distributed the link. The invitation email was always an afterthought, hastily composed after the arduous process of developing the questionnaire itself.

Then I was introduced to Donald Dillman’s “Tailored Design Method” and learned that I needed to take as much care with the letter as I did the questionnaire. A carefully crafted invitation has been proven to boost response rates. And response rate is a key concern when conducting surveys, for reasons clearly articulated in this quote from the American Association of Public Opinion Research:

“A low cooperation or response rate does more damage in rendering a survey’s results questionable than a small sample, because there may be no valid way scientifically of inferring the characteristics of the population represented by the non-respondents.” (AAPOR, Best Practices for Research)

With response rate at stake, we need to pay attention to how we write and send out our invitation emails.

This blog post features my most-used tips for writing invitation emails, all of which are included in Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method by Dillman, Smyth, and Christian (2014). Now in its fourth edition, this book is the go-to resource for how to conduct all aspects of the survey process. It is evidence-based, drawing on an extensive body of research literature on survey practice.

Plan for Multiple Contacts

Don’t think “invitation email.”  Think “communication plan,” because Dillman et al. emphasize the need for multiple contacts with participants to elicit good response rates. The book outlines various mailing schedules, but you should plan for a minimum of four contacts (a timing sketch in code follows the list):

  • A preliminary email message to let your participants know you will be sending them a questionnaire. (Do not include the questionnaire link)
  • An invitation email with a link to your questionnaire (2-3 days after preliminary letter)
  • A reminder notice, preferably only to those who have not responded (one week after the invitation email)
  • A final reminder notice, also specifically to those who have not responded (one week after the first reminder).
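To make the timing concrete, here is a minimal sketch in Python that computes all four contact dates from a chosen invitation date. The launch date is hypothetical, and the offsets simply follow the schedule above; adjust them to fit your own plan.

    from datetime import date, timedelta

    invitation_date = date(2015, 11, 2)  # hypothetical launch date

    # Offsets follow the four-contact schedule described above.
    schedule = [
        ("Preliminary email (no link)",
         invitation_date - timedelta(days=3)),
        ("Invitation email with questionnaire link",
         invitation_date),
        ("First reminder, non-respondents only",
         invitation_date + timedelta(weeks=1)),
        ("Final reminder, non-respondents only",
         invitation_date + timedelta(weeks=2)),
    ]

    for contact, send_date in schedule:
        print(f"{send_date:%a %b %d}: {contact}")

Mapping the dates out ahead of time also helps you avoid sending a contact on a weekend or holiday, when it is easy to overlook.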

 Tell Them Why Their Feedback Matters

Emphasize how the participants’ feedback will help your organization improve services or programs. This simple request appeals to a common desire among humans to help others. If applicable, emphasize that you need their advice specifically because of their special experience or expertise. It is best to use mail merge to personalize your email messages, so that each participant is personally invited by name to submit their feedback.
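Most survey and email tools have mail merge built in, but the underlying idea is simple string templating. Here is a minimal sketch in Python; the names, addresses, URL, and wording are all hypothetical:

    # Hypothetical contact list and survey URL; a real mailing would
    # pull these from your contact database and email software.
    participants = [
        {"name": "Maria", "email": "maria@example.org"},
        {"name": "Devon", "email": "devon@example.org"},
    ]

    template = (
        "Dear {name},\n\n"
        "Because of your experience with our services, your advice "
        "will help us improve our programs. Please share your "
        "feedback at https://example.org/survey.\n"
    )

    for p in participants:
        print(f"To: {p['email']}\n" + template.format(name=p["name"]))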

If you are contacting people who have a relationship with your organization, such as your library users or members of your organization, play up that relationship. Also, make a commitment to share results with them at a later date. (And be sure to keep that commitment.)

Make Sure They Know Who’s Asking

With phishing and email scams abounding, people are leery about clicking on URLs if an email message seems “off” in any way. Make sure they know they can trust your invitation email and survey link. Take opportunities to publicize your institutional affiliation. Incorporate logos or letterhead into your emails, when possible.

Provide names, email addresses and phone numbers of one or two members of your evaluation team, so participants know who to contact with questions or to authenticate the source of the email request. You may never get a call, but they will feel better about answering questions if you give them convenient access to a member of the project team.

It is also helpful to get a public endorsement of your survey project from someone who is known and trusted by your participants.  You can ask someone influential in your organization to send out your preliminary letter on your behalf. Also, you or your champion can publicize your project over social media channels or through organizational newsletters or blogs.

And How You Will Protect Their Information

Be explicit about who will have access to individual-level data and thus will know how individuals answered specific questions. Be sure you know the difference between anonymity (where no one knows what any given participant said) and confidentiality (where identifiable responses are seen by only a few specific people). You can also let them know how you will protect their identity, but don’t go overboard: long explanations can cast doubt on the trustworthiness of your invitation.

Provide Status Updates

While this may seem “so high school,” most of us want to act in a manner consistent with our peer group. So if you casually mention in reminder emails that you are getting great feedback from other respondents, you may motivate the late responders who want to match the behavior of their peers.

Gifts Work Better Than Promises

The research consistently shows that sending a small gift to everyone, with your preliminary or invitation letter, is more effective than promising an incentive to those who complete your questionnaire. If you are bothered by the thought of rewarding those who may never follow through, keep in mind that small tokens (worth $2-3) sent to all participants are the most cost-effective incentive practice. More expensive gifts are generally no more influential than small gifts when it comes to response rates. Also, cash works better than gift cards or other nonmonetary incentives, even when the cash is of lesser value.

Beyond Invitation Letters

The emails in your survey projects are good tools for enhancing response rate, but questionnaire design also matters. Visual layout, item order, and wording also influence response rate. While questionnaire design is beyond the scope of today’s post, I recommend The Tailored Design Method to anyone who plans to conduct survey-based evaluation in the near future. The complete source is provided below.

Source: Dillman DA, Smyth JD, and Christian LM. Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method, 4th edition. Hoboken, NJ: Wiley; 2014.

 

 

 

Which Online Survey Tool Should I Use? A Review of Reviews

Friday, September 4th, 2015

Quality survey close-up with a thumbtack pointing to the word excellent

Recently we faced the realization that we would have to reevaluate the online survey tool that we have been using. We thought that we would share some of the things that we learn along the way.

First of all, finding a site that evaluates survey products (like Survey Monkey or Survey Gizmo) is not as easy as going to Consumer Reports or Amazon (or CNET, Epinions, or Buzzillions).  A number of sites on the internet provide reviews of survey tools, but their quality varies widely.  So for this week, our project has been to compare review websites to see what we can learn from and about them.

Here are the best ones I could find that compare online survey tools:

Zapier.com’s Ultimate Guide to Forms and Surveys, Chapter 7 “The 20 Best Online Survey Builder Tools”

This resource compares 20 different online survey tools. There is a chart with a brief statement of what each survey tool is best for, what you get for free, and the lowest plan cost. Additionally, there is a paragraph description of each tool and what it does best.  Note: this is part of an eBook published in 2015 which includes chapters like “The Best Online Form Builders for Every Task.”

Appstorm.net’s “18 Awesome Survey & Poll Apps”

This review was posted on May 27, 2015, which reassures us that the information is most likely up to date.  While the descriptions are very brief, it is good for a quick comparison of the survey products. Each review notes whether there is a free account, whether the surveys can be customized, and whether there are ready-made templates.

Capterra.com’s “Top Survey Software Products”

Check boxes showing the features of the different products

This resource appears to be almost too good to be true. Alas, no date is shown, which means the specifics in the comparisons may no longer be accurate.  Nevertheless, this website lists over 200 survey software products, has a separate profile page for each product (with varying amounts of detail), and lists the features each product offers.  You can even narrow down the products you are looking for by filtering by feature.  Hopefully the features in Capterra’s database are kept updated for each product.  One thing to point out: at least two fairly well-known survey products (that I know of) are not on the list.

AppAppeal.com’s “Top 31 Free Survey Apps”

Another review site with no date listed. This one compares 31 apps by popularity, presumably in the year the article was written. One thing that is unique about this review site is that each in-depth review includes the history and popularity of the app, how it differs from other apps, and who the reviewers would recommend it to.  Many of the reviews include videos showing how to use the app.  Pretty cool.

TopTenReviews.com’s 2015 Best Survey Software Reviews and Comparisons

This website has the feel of Consumer Reports. It has a long article explaining why you would use survey software, how and what the reviewers tested, and the kinds of things that are important when selecting survey software. Also like Consumer Reports, it has ratings of each product (including the experiences of the business, the respondents, and the quality of the support), and individual reviews of each product showing pros and cons. Because the date is included in the review name, the information is fairly current.

This is a starting point. There are individual reviews of online survey products on a variety of websites and blogs, which are not included here.  Stay tuned for more information on online survey tools as we move forward.

 

Designing Questionnaires for the Mobile Age

Friday, July 10th, 2015

How does your web survey look on a handheld device?  Did you check?

The Pew Research Center reported that 27% of respondents to one of its recent surveys answered using a smartphone. Another 8% used a tablet. That means over one-third of participants used handheld devices to answer the questionnaire. Lesson learned: Unless you are absolutely sure your respondents will be using a computer, you need to design with mobile devices in mind.

As a public opinion polling organization, the Pew Center knows effective practices in survey research. It offers advice on developing questionnaires for handhelds in its article Tips for Creating Web Surveys for Completion on a Mobile Device. The top suggestion is to be sure your survey software is optimized for smartphones and tablets. The OERC uses SurveyMonkey, which meets this criterion. Many other popular Web survey applications do as well. Just be sure to check.

However, software alone will not automatically create surveys that are usable on handheld devices. You also need to follow effective design principles. As a rule of thumb, keep it simple. Use short question formats. Avoid matrix-style questions. Keep the length of your survey short. And don’t get fancy: questionnaires with logos and icons take longer to load on smartphones.

This article provides a great summary of tips to help you design mobile-device friendly questionnaires. My final word of advice? Pilot test questionnaires on computers, smartphones, and tablets. That way, you can make sure you are offering a smooth user experience to all of your respondents.

Many smart phones with application tiles on their touchscreens

 

Strategies for Improving Response Rate

Tuesday, January 3rd, 2012

The open-access December 2011 issue of Evaluation and the Health Professions includes several articles about strategies to improve survey response rates with health professionals.  Each explores variations on Dillman’s Tailored Design Method, also known as TDM (see this Issue Brief from the University of Massachusetts Medical School’s Center for Mental Health Services Research for a summary of TDM).

“Surveying Nurses: Identifying Strategies to Improve Participation” by J. VanGeest and T.P. Johnson (Evaluation and the Health Professions, 34(4):487-511)

The authors conducted a systematic review of efforts to improve response rates to nurse surveys, and found that small financial incentives were effective and nonmonetary incentives were not effective.  They also found that postal and telephone surveys were more successful than web-based approaches.

“Surveying Ourselves: Examining the Use of a Web-Based Approach for a Physician Survey” by K.A. Matteson; B.L. Anderson; S.B. Pinto; V. Lopes; J. Schulkin; and M.A. Clark (Eval Health Prof 34(4):448-463)

The authors distributed a survey via paper and the web to a national sample of obstetrician-gynecologists and found little systematic difference between responses using the two modes, except that university physicians were more likely to complete the web-based version than private practice physicians.  Data quality was also better for the web survey: fewer missing and inappropriate responses.  The authors speculate that university-based physicians may spend more time at computers than do private physicians.  However, given that response rate was good for both groups, the authors conclude that using web-based surveys is appropriate for physician populations and suggest controlling for practice type.

“Effects of Incentives and Prenotification on Response Rates and Costs in a National Web Survey of Physicians” by J. Dykema; J. Stevenson; B. Day; S.L. Sellers; and V.L. Bonham (Eval Health Prof 34(4):434-447, 2011)

The authors found that response rates were highest in groups that were entered into a $50 or $100 lottery.  They also found that postal prenotification letters increased response rates, even though the small $2 token included with the letter had no additional effect and was not cost-effective.  The authors conclude that larger promised incentives are more effective than nominal preincentives.

“A Randomized Trial of the Impact of Survey Design Characteristics on Response Rates among Nursing Home Providers” by M. Clark et al. (Eval Health Prof 34(4):464-486)

This article describes an experiment in maximizing participation by both the Director of Nursing and the Administrator of long-term care facilities.  One of the variables was incentive structure, in which the amount of incentive increased if both participated, and decreased if only one participated.  The authors found that there were no differences in the likelihood of both respondents participating by mode, questionnaire length, or incentive structure.
