SOME ISSUES IN QUESTION CONSTRUCTION
EDF 5481 METHODS OF EDUCATIONAL RESEARCH
INSTRUCTOR: DR. SUSAN CAROL LOSH
We distinguish between open questions, in which respondents answer in their own words, and closed questions, which provide pre-written response categories.
CLOSED QUESTIONS:
Closed questions should be unidimensional: they ask about one and only one topic at a time. Questions which use more than one dimension are often called “double-barreled”.
Rule 1: Avoid double-barreled questions. We cannot disentangle which of the embedded questions the respondent actually answered. These items are confounded variables. Double-barreled questions are usually quickly recognized by their use of “and” or “or”. But any question that simultaneously asks about at least two topics is double-barreled.
TWO BAD DOUBLE-BARRELED EXAMPLES:
Do you agree that we should lower property taxes AND provide more county services?
Do you approve or disapprove of abortion in cases of incest OR threats to the mother’s health?
I have seen questions so convoluted that they were actually quadruple-barreled items, asking FOUR questions in one!
Rule 2: PROVIDE ALL RESPONSES TO A CLOSED QUESTION.
EXAMPLE:
For marital status, people are married, widowed, divorced, separated, living together, single (NEVER MARRIED) or something else (you’d be surprised), not just married or single. These “unmarried” categories are omitted from many closed questions, and that can be a BIG mistake. It took a couple of centuries before the US Census Bureau asked unmarried women about their children!
Rule 4: Avoid condensing numerical responses into grouped categories.
The exception: income categories. We use grouped categories to provide greater confidentiality in the answers and because people generally only know their incomes around April 15.
*****Rule 5: Use a mix of question formats to avoid format response sets or response effects.*****
For example, one of the most popular formats is the Likert* item: people are asked their degree of agreement or disagreement (typically whether they strongly agree, agree, are undecided, disagree, or strongly disagree with an attitude statement, e.g., "I really enjoy jury duty")*. Many survey researchers love Likert items because you can administer them so quickly (a well-trained interviewer can do five per minute) and they are easy to code. BUT:
Don’t use ONLY Likert agree-disagree questions. This encourages acquiescence response set. And, acquiescence response set is negatively related to education. College educated respondents are "nay-sayers."
I don't think I can emphasize this one enough. The research on acquiescent response set dates back to 1948. It is not new research.
*THIS IS THE ONLY LIKERT FORMAT. Do not use the phrase “Likert Scale” or “Likert type” for any other type of ordinal question format. Unfortunately you will see poorly educated researchers do so all the time. Beware of such individuals.
Response sets reflect a tendency to respond to the form, instead of the content, of a question.
In acquiescence response set, people tend to generally agree or disagree with nearly every sweeping statement put before them in a questionnaire. Well-educated persons have been taught "never say 'always' or 'never,' " so "nay-saying," or disagreement, rises with education, typically regardless of content. We are less clear about which factors nudge people to gravitate toward extreme or polarized responses ("strongly agree" or "strongly disagree") or toward the middle, but we know that happens too.
The tendency to simply respond "I don't know" is also linked to education. In analyses I did for a recent article, I found that people who had not graduated high school were three times as likely to give "I don't know" responses to "science quiz" questions as people who had graduate school experience. Women were nearly twice as likely as men to say "I don't know." In such cases, you need to reflect on whether a "don't know" response truly represents lack of knowledge or some other factor (such as low self-efficacy). Other researchers have reported similar results for political knowledge.
WHAT THE RESEARCHER CAN DO: Vary the item format that is used. Keep Likert items to a minimum. Try to include reversed items, or the same concept using different formats, in your questionnaire. Pilot test the questionnaire ALOUD even if it will be self-administered. See if respondents consistently fall into response patterns ("yep..." "yep..." "yep..") and "clean up" the problem areas.
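To see how reversed items guard against acquiescence at the coding stage, here is a minimal Python sketch. The item wordings (beyond the jury-duty example above), the dictionary names, and the scoring scheme are all illustrative, not taken from any particular survey package:

```python
# Sketch: scoring a Likert battery that mixes positively and
# negatively worded items, so an acquiescent "agree with everything"
# respondent does not pile up a spuriously high scale score.

LIKERT = {"strongly agree": 5, "agree": 4, "undecided": 3,
          "disagree": 2, "strongly disagree": 1}

# Items flagged reversed=True are negatively worded versions of the
# concept; their codes are flipped (5 becomes 1, etc.) before summing.
ITEMS = [
    ("I really enjoy jury duty", False),        # item from the text above
    ("Jury duty is a waste of my time", True),  # hypothetical reversed item
]

def score_response(answer, reversed_item):
    """Code one verbatim answer, flipping the scale for reversed items."""
    code = LIKERT[answer.lower()]
    return 6 - code if reversed_item else code

def scale_score(answers):
    """answers: list of response strings, one per item in ITEMS."""
    return sum(score_response(a, rev)
               for a, (_, rev) in zip(answers, ITEMS))
```

With this setup, an acquiescent respondent who says "agree" to both items scores 4 + 2 = 6, the midpoint of the possible 2-10 range, rather than near the top of the scale.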
Using experimental "split-ballot" techniques, survey researchers also assess whether different forms of a question yield comparable responses and appear to be comparable in meaning. Subsamples from the total survey are selected at random to receive the "same item" with different question wording. We check not only to see whether the univariate frequency distributions are the same, but also whether question wording changes how one concept relates to other variables, such as educational level.
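The random-assignment step of a split-ballot experiment can be sketched in a few lines of Python; the two question wordings below are invented for illustration:

```python
import random

# Two wordings of the "same" question; each respondent sees exactly one.
FORM_A = "Do you favor or oppose more spending on assistance to the poor?"
FORM_B = "Do you favor or oppose more spending on welfare?"

def assign_forms(respondent_ids, seed=42):
    """Randomly split respondents into two subsamples, one per wording.

    A fixed seed makes the assignment reproducible for documentation;
    in a real survey you would typically not fix it.
    """
    rng = random.Random(seed)
    ids = list(respondent_ids)
    rng.shuffle(ids)                 # random order, so the split is random
    half = len(ids) // 2
    return {"A": set(ids[:half]), "B": set(ids[half:])}

groups = assign_forms(range(1000))   # 500 respondents per form
```

After fielding, you would compare the univariate distribution of answers to each form, and then cross-tabulate each form's answers against other variables such as educational level, exactly as the paragraph above describes.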
Rule 6: IN GENERAL: don’t use hypothetical situations. Don't ask respondents to guess how other people “would feel” or even how they themselves would feel under hypothetical conditions. The answers are generally unreliable because people have not thought about or experienced their responses. Most people would have a hard time telling you exactly how they would feel “if” they found out their spouse was unfaithful--except “bad” (and we don’t need a survey to figure that one out).
Rule 7: Try to keep the number of response alternatives that are read to respondents to a maximum of seven. (That's from George Miller's famous classic article on the magical number seven, plus or minus two.) Show cards with the set of response alternatives cannot be used in a telephone survey; the respondent must be able to hold the alternatives in memory, then select one. The fewer the response categories, the easier this is.
Rule 8: Use specific time frames when asking about behaviors, particularly regular or habitual behaviors. Don’t leave the time frame vague or undefined if at all possible.
“During the last month, how many times did you attend religious services?”
“During the last week, did you smoke any cigarettes at all?”
Rule 9: Similarly, use specific place frames. Do you want to know where someone was born? (In Detroit, in a hospital, in the United States, under the kitchen table.) If you want to know someone's country of birth, ask "In which country were you born?"
Rule 10: Make sure to make the question stem consistent with the provided responses.
EXAMPLE: If the stem reads “how often”, make sure the responses are in a time frame (times per month) or take a relative form such as “All of the time,” “Most of the time,” “Half the time,” “Seldom,” or “Never.”
Reserve “yes/no” questions (“...did you smoke any cigarettes at all?”) for specific actions or dimensions of an issue.
Rule 11: If there is a very complicated question stem, break the question into AT LEAST two questions. The respondent will have an easier time and the questionnaire will actually proceed faster with more short, simple questions than with fewer long, complicated questions.
BAD QUESTION: “What do you think should be done about the environment? Tell me all the actions that you approve: A. Recycling B. Start carpools C. Mandatory thermostat controls (etc.)”
BETTER SET OF QUESTIONS: “Do you approve or disapprove of recycling?” “Do you approve or disapprove of starting carpools?” “Do you approve or disapprove of mandatory thermostat controls?” (and so on, one short question per action).
REMEMBER! Approve-disapprove response categories rank responses from highest to lowest (approve is more in favor than disapprove) and are ORDINAL-level variables. Two category responses (yes-no) very often are also ordinal variables if answering "yes" means the respondent did it more, even if only once (smoked a cigarette, visited a friend, played the Lottery), than answering "no."
Rule 12: Avoid jargon or technical terms. The respondent probably won't know what "trait anxiety" is, even if she or he has a lot of it. "ET" to most people means a very old movie about a quaint alien, not Educational Technology.
Rule 13: Use complete sentences. If the researcher is trying to ascertain respondent gender, and the "question" reads:
Sex?
They will get more than they bargained for. (And deserved what they got.)
Instead, phrase the question as a complete sentence: "Are you male or female?"
This is ESPECIALLY IMPORTANT when there is a self-administered questionnaire, for example:
PLEASE CHECK ONE: Are you [ ] Male or [ ] Female?
Rule 14: Avoid "red flag" words, that is, words with emotional connotations or that coincide with strongly-held values.
Everyone wants to be "fair"! NEVER use that word in a question unless it describes a civic event or a festival such as the North Florida Fair!
"Murder" is another red flag word (as in "Do you approve of the murder of unborn babies?").
Rule 15: If the questionnaire will be administered by an interviewer, be sure to read the entire questionnaire aloud in a pilot test. Many words that sound alike have different meanings. Consider homophones such as "weak" and "week": identical aloud, very different in print.
Rule 16: Similarly, beware of words that have multiple meanings, for example: kind; fair; item. Put your word processor's thesaurus to use!
Fair can mean a festival, someone pretty, or being equitable.
Rule 17: In fact, it is an excellent idea to pilot test the questionnaire aloud no matter what. This is one of the best ways to catch any problems with question wording.
Many novices love open-ended, unstructured questions. They are rich, they have depth. That is, novices love them until they must CODE them. Even with new computer programs, it is an art to code any unstructured material.
Rule 18: In an open-ended question, you want respondents to speak their minds. So ask open-ended questions in a way that encourages people to give a complete and full answer. Use phrases such as “What are...?” or “How do you feel about...?” Never ask a “yes/no” open-ended question!
Good: “What do you like best about high school cafeteria lunches?”
Bad: “Do you like high school cafeteria lunches?”
Good: “What do you think is the biggest issue with homeless people in Tallahassee?”
Bad: “Is homelessness a problem in Tallahassee?” (And what does “a problem” mean?)
Rule 19: With a complex question that could have multi-dimensional answers (“what do you see as the top priorities for the Florida legislature this year?”), use an open-ended format. There will be so many possible responses the investigator probably will not be able to specify all the response categories in advance.
Rule 20: DO NOT abbreviate. Abbreviations mean different things to different people. To me "IT" means information technology but to others it may mean sex appeal.
Rule 21: Consider the nuances in each question and what these might imply. For example, I have found sex differences in responses to questions about science knowledge (men higher), basic medical knowledge (women higher), support for creationism and astrology (women higher), and the existence of space aliens (men higher). It is clear that the questions the researcher selects can make either sex look really stupid just by varying the mix of items. I'm not sure what the answer is here and I don't think there ARE any easy answers (modern IQ tests have been formulated for DECADES to toss many items that distinguish between the sexes because the researchers assumed that men and women have equal IQs...). I do think that when you construct a questionnaire, you need to be alert to nuances of gender, social class and ethnicity and what the implications of the results might be. You then have to think long and hard CONCEPTUALLY about what you meant to measure in the first place. If reading professional literature, consider these nuances in light of the population that was studied.