METHODS READINGS AND ASSIGNMENTS
GUIDE 5: A SURVEY RESEARCH TIMETABLE
SUSAN CAROL LOSH
OVERVIEW
KEY TAKEAWAYS:
In survey research, the investigator asks verbal questions (either aloud or on paper or screen) and either an interviewer or the respondent records the answers.
If you do these two steps, you are conducting a survey, even if you call it a "standardized test," a language assessment, a "personality profile," or some other high-falutin' name, or if you embed a questionnaire inside your study. Public opinion surveys or polls are simply one special kind of survey: a survey that seeks to generalize its results to a well-defined population. This means that no matter what name the research goes by, if you ask questions and record the answers, the same rules for good question construction apply, and the data are subject to the same methodological problems as most other surveys.
Be aware that surveys "try to hit a moving target." People change their knowledge, beliefs, and attitude responses, in part because of the reactivity that occurs as a result of being studied. In terms of reactivity, surveys typically rank below experiments but above field research, such as ethnographies, and secondary collections, such as content analysis. And, of course, people change their beliefs and attitudes as they learn new things, obtain new information, or as a result of persuasion campaigns. Think how sad it would be if we did not.
Everyone must be concerned about sampling issues. I elaborate upon them here because excellent sampling has been a hallmark potential strength of general public opinion surveys in the recent past. However, sampling directly addresses external validity, and thus is a feature of any kind of study design. Who wants to spend months doing a study, only to conclude that generalizations can only be made to a particular group of students taking an online course and no one else?
Most scholars are more ambitious than that.
Because it typically deals with naturalistic variables, a survey has less internal validity than an experiment, but possibly as much as a quasi-experiment. Statistical control, in which one measures as many intervening and confounding variables as possible and then statistically controls for them, typically substitutes for experimental control. Public opinion surveys generally define their populations very specifically and take careful samples from those populations; thus they usually have had very good, sometimes excellent, external validity, although internal validity, or causal connections, can be difficult to establish.
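To make "statistical control" concrete, here is a minimal sketch in Python. The file name and variable names (survey_data.csv, science_knowledge, education_years, age, gender) are hypothetical; the point is only that measured confounders are entered into the model alongside the independent variable of interest, so their influence is held constant statistically rather than by random assignment.

    # A minimal sketch of statistical control (hypothetical file and variable names).
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("survey_data.csv")   # hypothetical survey data set

    # No statistical control: the bivariate association only
    naive = smf.ols("science_knowledge ~ education_years", data=df).fit()

    # Statistical control: measured confounders entered alongside education
    controlled = smf.ols("science_knowledge ~ education_years + age + C(gender)",
                         data=df).fit()

    # Compare how the education coefficient changes once confounders are controlled
    print(naive.params["education_years"], controlled.params["education_years"])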
Unfortunately, response rates have fallen sharply in recent years. Survey researchers have gone from face to face surveys (very expensive) to random digit dial surveys, to including cell phones to online surveys and probably everything in between. It is often unknown how respondents differ from non-respondents and external validity can be jeopardized accordingly.
Sampling error, which we hope is truly random, and which lies behind most attempts to measure variation from survey to survey, is only ONE form of error. Instrumentation presents many problems, as can administration errors.
Review the rules on causal inference and establishing independent or dependent variables in nonexperimental research HERE. Think about these rules when you study variables included as independent variables in a report that you read. Be sure that these independent variables are plausible.
Of course, a scholar can embed an experiment within a survey, and randomly assign who gets which questionnaire version. Professional survey researchers do so all the time. Often we call these "split-ballot" studies.
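As an illustration, split-ballot assignment can be as simple as the sketch below, which randomly assigns each sampled respondent to one of two questionnaire versions. The respondent IDs and form names are made up.

    # A minimal sketch of random assignment for a split-ballot survey experiment.
    import random
    from collections import Counter

    random.seed(42)  # fixed seed so the assignment can be reproduced and audited

    respondent_ids = [f"R{i:04d}" for i in range(1, 501)]   # hypothetical sample of 500
    assignments = {rid: random.choice(["Form A", "Form B"]) for rid in respondent_ids}

    # Roughly half the sample should receive each questionnaire version
    print(Counter(assignments.values()))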
Consider what demographics "stand for" in survey research. Variables such as "age," "gender," or "ethnicity" are useful because they condition life experiences. By the same token, these are sometimes confounded variables that are "proxies" for multiple sets of experiences. Older people have different options, have had different life experiences, and are treated differently than younger people. People with different educational levels have had differential access to knowledge, and college students have experienced a whole different culture than those who never went to college. Thus, think about what each demographic stands for and see whether the researcher could have directly included those variables in the study. (For example, if you believe that a key component of "age" is experience handling money, such as wages, checking accounts, or credit, were questions about wages, checking accounts, and credit included in the survey in addition to "years of age"?)
FOR EXAMPLE: I've worked with several surveys aimed at disentangling confounded effects. OPTIONAL: Check out the study I co-authored with my students in the September 2003 Skeptical Inquirer ("Sam Savage" wrote the synopsis; reprinted several places online). We found that most of the variation due to "level of education" was due to something else entirely: either correlates of education level (e.g., age or gender) or specific products of education (e.g., the total number of science classes).
To repeat: "Educational level" is a highly confounded variable.
A second study compared age and generational effects, which are totally confounded in a cross-sectional survey taken at one point in time. Virtually all the knowledge effects typically assumed to be due to age are due to generation (birth cohort) instead.* This study, which examines science knowledge in the American adult general public, was published in Martin W. Bauer, Rajesh Shukla, and Nick Allum (eds.) (2012), The Culture of Science: How Does the Public Relate to Science Across the Globe? New York: Routledge, pp. 55-75. OPTIONAL: it can be accessed online HERE. In yet another study, I found that generation influences access to and use of information technology but age generally does not.
*It is, of course, nice to know as one ages that generation plays an important role and we can hold on to our "minds" as we grow older.
There are several steps to conducting a survey. The steps related to collecting data and the study design receive the most elaboration (see Guide 1), leaving data management and statistics to your other methods courses.
REVIEW: ASSIGNMENT 1: CONSTRUCTING A RESEARCH PROBLEM
1. Define the population. Make the definition clear enough so that there is absolutely no question about who is or is not in the population.
3. Locate or construct as thorough a list of the population as possible. This will become the "sampling frame." If a complete list is not possible, see if sampling stages can be created, and then completely enumerate all elements at each stage.
4. At about the same time, the researcher must decide on the type of administration for the survey: self-administered (mail, group, or, increasingly, Internet) or interviewer-administered. Some types of samples, such as simple random samples, can be done with straightforward logistics by telephone (Random Digit Dial or RDD) or online but are often difficult to do in other circumstances (such as area in-person surveys). The increased use of cell phones rather than landlines has complicated RDD. Except for single institutions (a business or a university, for example), it is often difficult to get a good list of email addresses. Remember that virtually all WEB surveys are self-administered. If the population is not very literate or not accustomed to being online regularly, an Internet survey is a very bad idea.
My recent experiences suggest that online survey response rates tend to be poor, even with a prestigious sponsor or incentives (e.g., a "lottery" for a gift card). However, in some instances they can be better than random digit dialing.
Furthermore, incentives such as lottery-type drawings, e.g., for a gift card, generally do not increase response rate although a small guaranteed incentive (even a "bright, shiny quarter") can.
5. Make the final decision on the type of sample to draw. Try for a probability sample. Accept that under certain circumstances this may not be possible. Decide whether in fact the researcher is taking the entirety of a small population instead (a census).
6. Draw the sample. If appropriate, send out an introductory letter or email. Include inducements (shiny new quarter, Barnes and Noble gift certificate!) if feasible.
Human subjects regulations at FSU specify either that everyone sampled receives the incentive or is entered into some kind of lottery to be awarded the incentive.
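As one concrete illustration of steps 3 through 6, the sketch below draws a simple random sample without replacement from a complete sampling frame stored as a file. The file name, field name, and sample size are hypothetical; stratified, cluster, or multistage designs would require more machinery.

    # A minimal sketch: a simple random sample drawn from a sampling frame (hypothetical names).
    import csv
    import random

    # Step 3: the sampling frame -- the most thorough available list of the population
    with open("sampling_frame.csv", newline="") as f:
        frame = [row["email"] for row in csv.DictReader(f)]

    # Step 6: draw the sample without replacement (here n = 400)
    random.seed(2017)
    sample = random.sample(frame, k=400)

    # These addresses would then get the introductory email and any promised incentive
    print(len(sample), sample[:3])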
SAMPLING COUNTS IN EXPERIMENTS TOO! EVEN IF THERE IS HIGH INTERNAL VALIDITY, ONE CAN'T LEGITIMATELY GENERALIZE IF THE SAMPLE IS POOR (LOW EXTERNAL VALIDITY). A BIG PROBLEM NOW IS VERY LOW RESPONSE RATES IN SOME SAMPLES.
SOME CONSIDERATIONS

MAIL QUESTIONNAIRES | GROUP-ADMINISTERED QUESTIONNAIRES | TELEPHONE INTERVIEWS | IN-PERSON INTERVIEWS
Inexpensive cost is over-rated. Postage, stationery, repeat mail-outs, and inducements raise the cost. | Cost is low but external validity may be low too. | Still the predominant mode for administering general public surveys (especially quickly), but online is gaining ground. Cheaper than in-person surveys. Cell phones create complications such as reimbursing respondents for the minutes used in the survey. Response rates are dropping! | Became prohibitively expensive due to more single-person households and employed women, leading to far more "not-at-homes." Still used sparingly.
Response rates are better if respondents are motivated about the topic and an endorsement cover letter is present. Remember those attractive (even if small) incentives. | REMEMBER! Classes are clusters that underestimate population variability (unless corrected). One way to compensate is to increase n (see the sketch after this table). | Callbacks typically needed. Overnight polls have low response rates. Response rates in ALL telephone surveys have fallen considerably. Recent research also indicates lower response rates among cell phone than landline users. | Callbacks needed; cluster samples cut costs, THEN sample size is increased to compensate for the cluster design effect on variance!
Once repeat mail-outs are included, can take several weeks (minimum). | Can be speedy, but what about absences? Could they introduce biases? | Generally faster, even including callbacks. | Takes the longest to gather the same sample size, especially with callbacks.
Is your population literate? Can they easily read and respond? Holds for online too. | Is your population literate? Can they easily read and respond? | CATI (Computer Assisted Telephone Interviewing) makes field work faster, easier, and more accurate, and skips the data entry stage. | Visual aids possible: pictures, show-cards with responses printed on them for respondents to hold. Laptops add further aids to the interview here.
May make it easier to answer sensitive questions. | May make it easier to answer sensitive questions. | You can use more open-ended questions with interviewers. Can obtain more sensitive information than face-to-face. | You can use more open-ended questions with interviewers.
Respondents can look up records. | Administrator can clarify problems and help with survey directions. | Proliferation of machines (answering, fax, etc.) increases contact problems. | Respondents LIKE in-person surveys the best.
Average response rate for one wave with a general population (e.g., all FSU students) runs 20 percent. We once considered this awful! | Makes a larger case base possible; this helps with later analyses. | Increased use of mobile phones as the only household phone complicates sampling. Some response rates below 10 percent! | Interview can be longer than a telephone or self-administered survey.
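The "cluster design effect" mentioned in the table can be quantified with the standard approximation deff = 1 + (m - 1) * rho, where m is the average cluster size and rho is the intraclass correlation. The sketch below uses made-up numbers purely to show why a clustered sample of 1,000 students can behave like a much smaller simple random sample.

    # A minimal sketch of the cluster design effect (illustrative numbers only).
    def design_effect(avg_cluster_size: float, intraclass_corr: float) -> float:
        return 1 + (avg_cluster_size - 1) * intraclass_corr

    m, rho, n = 25, 0.05, 1000          # e.g., 40 classes of 25 students, modest clustering
    deff = design_effect(m, rho)        # 1 + 24 * 0.05 = 2.2
    effective_n = n / deff              # about 455 "independent" cases out of 1,000 collected
    required_n = n * deff               # about 2,200 needed to match an SRS of 1,000

    print(round(deff, 2), round(effective_n), round(required_n))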
I bet you receive requests at least once a week in your email: Internet surveys are proliferating. Currently, well over 95 percent of the United States population is "online" from home (counting smartphone and tablet access). This access is NOT randomly distributed. The "digital divide" still exists: single women, minorities, and the less educated all have lower online access. Wealthy agencies, such as Stanford, can afford to give respondents selected in other ways their own Internet connections and laptops so that the whole sample can respond. Web surveys clearly work better with highly literate and motivated respondents. Thus, they become similar to other self-administered questionnaires, such as mail surveys.
Response rates are also similar to mail surveys and are generally low, but they can be high (see above on motivation). These days they can be higher than for telephone surveys. However, response bias is still relatively unknown.
Build in safeguards so that respondents can only answer the survey once. I usually try this with any online survey I receive to assess how professional the survey is. Imagine my shock when the FSU Presidential Search Committee constructed both its surveys and its "Comments" submissions in 2014 so that the same person could submit multiples--and judging from the responses, many people did.
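One common safeguard is a single-use access token issued to each sampled respondent, so a completed token cannot be reused. The sketch below shows the idea only; in a real web survey the check would run on the survey server, and the names here are hypothetical.

    # A minimal sketch of single-use survey tokens (hypothetical; server-side in practice).
    import secrets

    # One token generated per sampled respondent and sent in the invitation
    tokens = {secrets.token_urlsafe(16): "unused" for _ in range(500)}

    def accept_submission(token: str) -> bool:
        """Accept a submission only the first time a valid token is presented."""
        if tokens.get(token) == "unused":
            tokens[token] = "used"
            return True
        return False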
Also remember that the researcher should choose the respondents. If she or he "advertises" and takes whoever answers, there's not a whole lot of difference between that survey and dialing in to one of Entertainment Tonight's 900 numbers. Both are self-selected samples.
Presently, WEB surveys appear to work best with subsamples of the population who are (1) already online (preferably from a computer rather than a smart mobile phone, since a computer is slightly easier to write on), (2) interested in the topic, and (3) groups with whom the researchers generally stay in touch (e.g., a class, a club) so they can be reminded and prompted.
Doing a telephone survey? It's tough to keep respondents on the phone for more than 20 minutes. Long questionnaires work better with an in-person survey.
FOR SOME RULES OF GOOD QUESTION CONSTRUCTION, CLICK HERE!
FOR A QUESTIONNAIRE EXAMPLE, CLICK HERE!
The same is true for mail or online surveys. These should be short--and pilot tested to see just how long they take. Long surveys depress response rates and introduce probably unmeasurable biases.
Young, mobile adults are more likely to be "cell phone only." Until the last few years, researchers could not [knowingly] random-dial cell phone exchanges because many plans charge subscribers for incoming minutes, and it is currently illegal to charge people to take part in a survey. We are now getting more data (and surveys) from cell phone users. Whether the individual is on a landline or a cell phone, the researcher faces a thicket of answering machines, caller ID, voice mail, and other privacy-oriented options. According to the Director of the UVA Survey, these depress response rates, especially caller ID. Concerns about "hacking" and identity theft are adding to response rate problems.
With telephone surveys, try everything possible to increase the response rates.
People dislike writing by hand. This includes people with doctorates. If the questionnaire has many open-ended questions, use an interviewer. Otherwise the sample response rate will be low and the open-ended item response even lower.
Concentrate on closed questions for self-administered surveys when possible.
General public participants don't like doing a LOT of typing online either and most aren't used to writing long papers that way (even if the investigator is).
If using a self-administered survey, keep page clutter to a minimum. Use a lot of white space. Don't use complicated charts for people to complete.
ALWAYS PILOT TEST THE QUESTIONNAIRE ALOUD!
Among other things, it is always a good idea to see whether people know ANYTHING about (or have even heard of) the topic.
Do this even if the questionnaire will be self-administered.
Pilot test the questionnaire on respondents who are comparable to the selected population on key issues such as education, gender, and ethnicity.
Revise the questionnaire. If needed, do a second pilot test. This is no place to cut corners!
I come out of the University of Michigan Survey Research Center training. Our questionnaires tend to have a more casual, "chatty" feel to them, which we believe encourages responses and relaxes respondents.
The very worst example of the University of Michigan style that I have ever heard--which was only partially presented as a joke--went as follows:
"A lot of people think about suicide. How about you? How many times have YOU thought about killing yourself?" (Not at all was not a response category.)The toughest question I ever asked on a survey--with the highest nonresponse: "Without shoes, what is your weight in pounds?" Over 10 percent of women would not answer. We made this the last question in the survey, naturally.OUCH. The idea of introducing the "A lot of people..." is so that the respondent will not feel deviant and to legitimate both the question and the respondent's answer. This is supposed to help with questions in sensitive areas, such as drinking, illegal drug use, sexuality, birth control, and failing to keep to your diet.
And remember, little kids (e.g., 5th grade on down) are not very good at writing, and that includes self-administered surveys...
IN THE LITERATURE YOU READ, MAKE SURE THE AUTHOR DESCRIBES THE QUESTIONS ASKED IN THE STUDY, PERHAPS IN AN APPENDIX.
STEP FIVE: INTERVIEWER SELECTION AND TRAINING (IF APPROPRIATE)
There are several advantages to using interviewers. Interviewers motivate respondents simply by being interested in what respondents have to say. Interviewers can help clarify directions (but NEVER change question wording), can report questions that give problems to the study directors, and can probe for more detail.
When was the last time someone asked you what you really thought, cared about what you said, and LISTENED to the answer?
However, as you might guess, a survey that uses interviewers is more expensive. The researcher must also budget time to train interviewers, and interviewers should be periodically monitored to ensure that they do a good job. Don't use ROBO-CALLS or ROBO-INTERVIEWERS: their response rates are the worst.
The General Social Survey conducts in-person (usually in household) interviews with at least 3000 people every two years. Each (recent) survey costs about $3,000,000.
Nearly all professional telephone surveys these days are done via CATI: Computer Assisted Telephone Interviewing. CATI offers many improvements over the old paper-and-pencil questionnaires. In-person interviews are now often done with a laptop that has the questionnaire loaded on it.
Newer hardware/software packages include voice recognition software, although the current quality varies enormously across companies.
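One reason CATI and laptop-based interviewing can be more accurate is that the program can check answers as the interviewer types them. The sketch below is a generic, assumed illustration of such an in-range check, not a description of any particular CATI package.

    # A minimal, assumed illustration of a CATI-style range check on a single item.
    from typing import Optional

    def record_age(raw_answer: str) -> Optional[int]:
        """Accept only plausible adult ages; anything else is flagged for re-asking."""
        try:
            age = int(raw_answer)
        except ValueError:
            return None
        return age if 18 <= age <= 110 else None

    print(record_age("43"))    # 43
    print(record_age("430"))   # None -- caught immediately, not at a separate data entry stage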
To develop good interviewers:
Have a solid interviewer manual--and have interviewers read it in advance.
Have interviewers practice in teams and practice role-playing respondents.
Have interviewers practice saying in front of a mirror (for in-person interviews): "I'd like to come in and talk with you about this" OR (for telephone interviews): "The interview will only take about 10 minutes; here's the first question."
MONITOR ALL INTERVIEWS! Check-call on in-person interviews. Spot-monitor on telephone interviews.
THESE STEPS ARE FOR LATER COURSES:
STEP SIX: CODING AND DATA ENTRY
Not necessary for most CATI programs; the researcher receives a USB or other storage device with the data. If data must be manually entered, select a random sample of cases and re-enter them to validate. If large numbers of errors (over 5 percent) are found, consider checking the entire database.
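A minimal sketch of that re-entry check appears below, assuming the data sit in a dictionary keyed by case ID and that a hypothetical rekey() function performs the second, independent keying; it returns the share of fields that disagree so the 5 percent rule can be applied.

    # A minimal sketch of double-entry validation (hypothetical data structures).
    import random

    def entry_error_rate(database: dict, case_ids: list, rekey) -> float:
        """Re-key the selected cases and return the share of fields that disagree."""
        mismatches = total = 0
        for case_id in case_ids:
            fresh = rekey(case_id)                  # second, independent keying
            for field, value in database[case_id].items():
                total += 1
                mismatches += (fresh.get(field) != value)
        return mismatches / total if total else 0.0

    # Example use, assuming `database` and `rekey` exist:
    # sample_ids = random.sample(list(database), k=50)
    # if entry_error_rate(database, sample_ids, rekey) > 0.05:
    #     print("Over 5 percent errors -- consider checking the entire database.")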
STEP SEVEN: INDEX CONSTRUCTION (if appropriate)
STEP EIGHT: UNIVARIATE DESCRIPTIVE STATISTICS
STEP NINE: MORE COMPLEX MULTIVARIATE STATISTICS
STEP TEN: REPORT WRITING AND DISSEMINATION
Susan Carol Losh
October 3 2017