ABP: Always Be Pilot-testing (some experiences with questionnaire design)
This week I have been working on a questionnaire for the Texas Library Association (TLA) on the cultural climate of TLA. Having just gone through this process, I will tell you that NEO’s Booklet #3: Collecting and Analyzing Evaluation Data has really useful tips on how to write questionnaires (p. 3-7). I thought it might be good to talk about some of the tips that turned out particularly useful for this project, but the theme of all of them is “always be pilot-testing.”
NEO’s #1 Tip: Always pilot test!
This questionnaire is still being pilot tested. So far I have thought the questionnaire was perfect at least 10 times, and we are still learning about important changes from the people who are pilot testing it for our committee. One important part of this tip is to include stakeholders in the pilot testing. Stakeholders have points of view that may not be represented among the people creating the survey. After we have what we think is a final version, our questionnaire will be tested by the TLA Executive Board. While this process sounds exhausting, every single change that has been made (to the questionnaire that I thought was finished) has fundamentally improved it.
There is a response for everyone who completes the question
Our questionnaire asks questions about openness and inclusiveness toward people of diverse races, ethnicities, nationalities, ages, gender identities, sexual identities, cognitive and physical disabilities, perceived socioeconomic status, etc. We are hoping to get personal opinions from all kinds of librarians who live all over Texas. By definition this means that many of the questions are potentially sensitive, and may be hot-button issues for some people.
In addition to wording the questions carefully, it’s important that every question has a response for everyone who completes it. We would hate for someone not to find the response that best works for them and then leave the question unanswered, or even worse, get their feelings hurt or feel insulted. For example, we have a question about whether our respondents feel that their populations are represented in TLA’s different groups (membership, leadership, staff, etc.). At first the answers were just “yes” or “no.” But then (from responses in the pilot testing) we realized that a person may feel that they belong to more than one population. For example, what if someone is both Asian and has a physical disability? Perhaps they feel that one group is well represented and the other group not represented at all. How would they answer the question? Without creating a complicated response, we decided to change our response options to “yes,” “some are,” and “no.”
“Don’t Know” or “Not Applicable”
In a similar vein, sometimes people do not know the answer to the question you are asking. They can feel pressured to make a choice among the responses rather than skip the question (and if they do skip the question, the data will not show why). For example, we are asking a question about whether people feel that TLA is inclusive, exclusionary, or neither. Originally I thought those three choices covered all the bases. But during discussions with Cindy (who was pilot testing the questionnaire), we realized that if someone simply didn’t know, they wouldn’t feel comfortable saying that TLA was neither inclusive nor exclusionary. So we added a “Don’t know” option.
Speaking from experience, the most important thing is keeping an open mind. Remember that the people taking your questionnaire will be seeing it through different eyes than yours, and they are the ones you are hoping to get information from. So, while I recommend following all the tips in Booklet 3, to get the best results, make sure that you test your questionnaire with a wide variety of people who represent those who will be taking it.