METHODS READINGS AND ASSIGNMENTS

OVERVIEW

EDF 5481 METHODS OF EDUCATIONAL RESEARCH
INSTRUCTOR: DR. SUSAN CAROL LOSH
FALL 2017
KEY TAKEAWAYS:
A refresher from our overview:
|
PERSONAL PREAMBLE
To start (my opinion), I don't like the terminology distinction "quantitative-qualitative."
It confuses the level of the variables (such as nominal or interval) with the way you conducted your study (e.g., experiment, ethnography), and that is just plain inaccurate.
Many historical studies, for example, are highly quantitative. Field studies may gather information on quantitative variables. The computer programs used for content analyses (see Creswell) can produce highly quantitative results. Similarly, you might have a categorical independent or dependent variable in an experiment, a quasi-experiment, or a survey.
Furthermore, the level of the variables, whether quantitative or qualitative, also has absolutely nothing to do with causality or internal validity. For example, gender, a nominal variable, is typically an independent variable in many studies. Causality depends on the threats to internal validity or possibly the construct validity of your variables, but not at all on whether a variable is nominal, ordinal, interval, or ratio.
Yet, we suspect that differences exist among methods called "quantitative" and those called "qualitative." What is the key to some of those differences?
I believe that key is STRUCTURE. From the very beginning, the methods we call "quantitative" are much more structured: how they are conceptualized; how scholars measure the variables concerned (typically in advance); how the data are gathered; how the data are analyzed; and how reports are written. Contrast, for example, the typical structured survey with the far less structured focus group.
Less structured ("qualitative") research is much more fluid. The timetable for these kinds of studies is more flexible. The scholar may alternate among data collection, data analysis, and theorizing, or start in one area (religious congregation directives on religion, science, and politics) and end up somewhere else entirely different (the role of group cohesion).
A huge reason for this fluidity is that once you enter "the field," whether this is an organization or a set of records, you will almost certainly encounter many unanticipated events, patterns or sequences. To capture knowledge about these, you often revise your research design.
To researchers accustomed to more structured methods, such as experiments or surveys with their a priori hypotheses, less structured methods often seem sloppy, over-intuitive, and even downright mystical. When does data collection stop? Many unstructured researchers give answers more structured researchers consider vague, such as: "when the data repeat themselves"; "when no new information is uncovered"; "when no more new insights occur"; or, most aggravating of all to experimenters or survey researchers, "you just know."
If your goal is to draw tight cause-effect connections, clearly less structured research designs are not the best way to proceed because of the typical lack of standardization and the less precisely defined variables. However, you may have different goals in mind when you choose a less structured design (i.e., qualitative):
- You want to describe a culture or a subculture, particularly from the point of view of its population.
- You want to describe observed relationships among variables but not, at this point in time, to make definitive causal statements about these relationships.
- You want to generate rather than to test hypotheses.
- You don't exactly understand all the parameters of the situation under study and you want to know more about them.
- You want to conduct your study in a natural setting, rather than in a laboratory.
- You want the study participants to define their lives for you.
- You want to examine SOCIAL INTERACTION among people who know each other.
SOME COMPARISONS BETWEEN MORE AND LESS STRUCTURED RESEARCH DESIGNS |
"QUANTITATIVE" | "QUALITATIVE" |
Data are CREATED via experimental manipulation or the design of survey research questionnaires | Data are COLLECTED or patterns are DISCOVERED as they exist in nature |
Research questions are clearly specified in advance (otherwise, how could you design experimental treatments or questionnaire items?). There are preliminary conceptual and/or null hypotheses about what is expected. | The investigator starts with "working hypotheses" that typically are revised (often many times) during the research and "foreshadowed problems" that follow from the general research statement. |
Measuring instruments are structured IN ADVANCE of data collection | Measuring instruments often are created DURING OR AFTER data collection |
Standardized measurement procedures | Idiosyncratic measurement procedures |
Precoded instruments (e.g., closed survey questions) | Coding often emerges through data collection |
More breadth | More depth |
CONTROL: over experimental treatments, questionnaire items, research setting (experimental participant is "guest"), interviewer training | Very little control over variables, setting or anything else. RESEARCHER is guest. Causality must be inferred (review "THE RULES" site in Guide 3) |
Causal statements | Relational statements; possible inferred causality |
Often deductive: hypotheses derived from theory and a body of prior research. Hypotheses generally exist PRIOR to data collection. | Inductive: hypotheses are grounded in and generated from data. Hypotheses generated from data may be tested in a preliminary manner late in the data collection process. |
Often contrived setting | Often natural setting |
Objectivity about research participants or respondents | Empathy with persons or groups who are studied (view from their perspective) |
May study group or individual, often individual | Often study group or organization (context) |
Less structured designs often become more structured as the study continues and the research team learns more about the situation. Descriptions of situations may give way to more structured field observation codes, in which observers are able to check off behaviors and themes that regularly occur.
Eventually, the researcher may be able to specify when an event occurs or when it does not. This increase in precision means that occurrences could be coded as dichotomous 0-1 or "dummy" variables: "yes" or 1 for an occurrence and "no" or 0 for its absence. If understanding proceeds to this stage, the analyst can then create countable or ratio-level variables for the number of instances of an event. The analyses then begin to resemble those from more "quantitative" methods, such as analysis of variance.
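The move from dummy coding to countable variables can be sketched in a few lines of code. This is only an illustration with hypothetical observation data (the event names and session records are invented, not from any real study):

```python
# Hypothetical coded observation sessions: for each session, an event is
# recorded as a 0-1 "dummy" variable (1 = observed, 0 = absent).
sessions = [
    {"praise": 1, "interruption": 0},
    {"praise": 1, "interruption": 1},
    {"praise": 0, "interruption": 1},
    {"praise": 1, "interruption": 0},
]

# Aggregate the dummies into ratio-level counts: the number of sessions
# in which each event occurred. These counts could then feed analyses
# such as analysis of variance.
counts = {}
for session in sessions:
    for event, occurred in session.items():
        counts[event] = counts.get(event, 0) + occurred

print(counts)  # {'praise': 3, 'interruption': 2}
```

Summing 0-1 codes is exactly what turns qualitative "did it happen?" judgments into the countable variables that structured statistical methods require.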
Thus, the analysis for a less structured study can use any applicable and appropriate statistic. For analyses such as analysis of variance, t-tests, or regression, you need an interval-ratio dependent variable and you need to be able to specify cause and effect. If your data allow you to meet these (and the required other statistical assumptions), you can use these analytic methods. They are not particular to any particular method of collecting data or conducting studies.
Statistical analyses, of course, can be hampered if only one group or organization is studied, or a number of case histories are selected in a haphazard or "grab" manner. Probability sampling is more the exception than the rule in "qualitative" designs. This is partly because most occur in a naturalistic field and site access can be a problem.
The analyses and reports for ethnographies or case histories often resemble those done for focus groups. A few descriptive statistics such as means or percentages may be provided. A lot of the data presentation is anecdotal. Much of it describes situations in the study setting. Causality is suggested, not assured. The end point of the analyses and discussion may be to suggest variables and hypotheses for new studies.
Historical research often deals with events of the past or threads of the present in natural settings.
Before I married an historian, I must confess to considerable naïveté when it came to historical data. I imagined history as a study of facts, well known to historians, to be altered only if new facts came to light. I WAS WRONG! First, most historians are sticklers for examining the original documents and want to draw their own conclusions from them. Second, interpretation and reinterpretation constantly occur within history. The same facts are often interpreted in many different ways. Were the immigrant schools of New York City about 100 years ago a way to "Americanize" immigrants, or did they serve as propaganda to support existing elite structures in the Eastern United States (or both)? What were the goals of immigrant schools? How were these goals achieved? Historians pore over documents, examining them first one way, then another. It is not so much what the facts are, but what the facts mean, that is important.
Historians examine many types of written records and record searches: diaries; schedules; newspapers and magazines; census data; television programs; letters; and cultural artifacts of various kinds (artwork; music and song).
Oral Histories and Narrative Research
Historians often interview individuals who may be connected with the events under study, such as European Holocaust survivors, children and grandchildren of the Rosewood, Florida racial attacks, or early members of a group or organization. These studies use in-depth (qualitative) interviews. Questions are open-ended. As in focus groups, the researcher starts with several unstructured questions. Which questions are asked, and their order, depend on the information that is uncovered during the interviews. This information may lead to the formulation of new research questions.
Ethnographies
Ethnographies address describing a culture and/or subculture. Ethnographers often collect records and field notes, do oral histories, and utilize field and participant observation to collect data. Typically one group or organization (a case history) is studied in considerable detail. The emphasis is holistic and contextual. Ethnographies occur in natural settings. Frequently ethnographies utilize participant observation, a methodology in which individuals act as data-collecting researchers while participating in the daily events of the group or organization. Participant observers detail the routine rounds of daily life, conduct in-depth interviews, describe leaders, and summarize group activities.
Compared with other types of research, a major focus of many ethnographies is interaction among individuals or group interaction. Life in groups often receives scanty notice in experimental work, which often uses "groups" comprised entirely of strangers (this can make a BIG difference in results). Surveys, of course, typically utilize individuals as the source of information, even if individuals report about a group, such as a household. Individual reports about groups, of course, only reflect that respondent's perspective. Ethnographies, with their focus on "who said what or did what to whom" provide a refreshing contrast. On the other hand, it is very difficult to observe groups because typically considerable amounts of action simultaneously occur. Access to the group may also be difficult. Taking field notes can be difficult too.
Content Analysis
Content analysis often involves studying content in media or records, typically using a framework that the researcher provides. The data typically pre-exist the particular study and often were gathered for original purposes unrelated to the purposes of the present study. Thus content analysis is one form of secondary analysis. Content analyses use naturalistic data, very often generated by a group or formal organization.
Old magazines are often stored in library archives. Old reruns of situation comedies apparently "never die." The researcher who wants to trace idealized fictitious images of the American family can start with "I Love Lucy" television films from the 1950s and sample from every decade through to "Everybody Loves Raymond," "Two and a Half Men," "Blue Bloods," or "Modern Family" in the 21st century (or even teenage single moms!). The minutes of a corporation may stretch back hundreds of years.
The researcher provides the themes for the content analyses, which are often unrelated to the original purpose that generated the data (for example, early television producers were unaware of the gender composition or gender stratification in their situation comedies). The researcher may provide a coding grid or transfer data to one of several computer programs (see below) that will test for themes, do word counts, theme counts, and other analyses.
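The word and theme counting that such programs perform can be sketched very simply. In this illustration the transcript and the theme-to-keyword coding frame are entirely hypothetical:

```python
from collections import Counter
import re

# Hypothetical transcript of a classroom observation.
transcript = (
    "The teacher praised the group. The group discussed the lesson. "
    "The teacher then praised individual students."
)

# Researcher-supplied coding frame: theme -> indicator words.
themes = {
    "praise": {"praised", "praise"},
    "group_work": {"group", "discussed"},
}

# Word counts: how often each word appears in the transcript.
words = re.findall(r"[a-z]+", transcript.lower())
word_counts = Counter(words)

# Theme counts: total occurrences of each theme's indicator words.
theme_counts = {
    theme: sum(word_counts[w] for w in indicators)
    for theme, indicators in themes.items()
}

print(theme_counts)  # {'praise': 2, 'group_work': 3}
```

Real content-analysis packages add stemming, context windows, and inter-coder checks, but the core logic, counting researcher-defined indicators in naturalistic text, is the same.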
Gleaning
In the course of our daily lives, we leave a lot of debris behind us. Consider the contents of our garbage cans. The Project du Garbage at the University of Arizona certainly has! Wrappings from the foods that we eat, as well as leftovers; the packages for the cosmetics and colognes that we apply; discarded magazines and pamphlets; tossed-out solicitations for credit cards and charitable contributions; mail-order catalogues; newspapers. Take a good look at the contents of your can before you toss things into the trash and consider what kind of information your trash gives about you. Because garbage can be toxic and infectious, the Arizona researchers "suit up" in biohazard suits. Do NOT try this at home! But you can look at the "gleanings" of everyday life without going through the trash. Gleaning can be particularly helpful in telling us about culture. Do teenagers claim they are interested in "higher culture," new art or music, while records of the books, magazines, audio or video tapes, and CDs checked out, rented, downloaded, or sold tell a different story? Examine best-seller book and music lists (which typically are now computer generated from bar codes and thus much more accurate than the subjective impressions of store owners and managers that were used as measures in the past). Frequent second-hand shops to see which articles are discarded.
There are more subtle indicators of the gleanings of everyday life. Consider the fingerprints and noseprints left on glass display windows as clues to the popularity of exhibits and the age of visitors. Tiles that are worn on a floor or paths worn in the grass indicate popular routes.
Secondary Analysis
We will have an entire class section on analyzing data that originally were collected for other purposes. In particular, we will examine a variety of online databases and archives. Content analysis is one type of secondary analysis. So are online analyses of data archives.
The "funnel approach"
The funnel approach (so-called because the research endeavor begins in a very broad way, then narrows as more information is accumulated) is common in less structured designs. Typically, the researcher begins with very general questions ("what are the major themes in religious school classes?"). These serve to begin delineating the type of sites to study (e.g., a variety of Judeo-Christian religious denominations, to be expanded to Islamic denominations at a second stage of research) and the initial type of data collection (ethnographies and participant observation). As research progresses, the research questions are refined, the sites are narrowed, and the type of data to be gathered becomes more precise (content analyses of religious school lessons; congregant participation in religious school lessons.)
Holistic
Rather than defining and measuring individual variables, holistic research attempts to examine the entire situation. Thus, it may describe a contextual situation in considerable detail: who the actors are; how they interact; the "scenery" of the setting, without delineating separate variables and the relationships among them. Holistic research is often contextual; that is, it insists that data must be examined in situ, in the context in which they exist or occur. For example, lessons in religious school would be interpreted in the context of the religious congregation, its general denomination, its epistemology, its sermons, and its religious literature.
Participant Observation terms
Participant observation tends to have one of the more colorful terminologies in research methods. For example, observers who do not reveal their research goals to participants are said to be "under cover." Researchers who empathize so closely with the individuals and situations that they study are said to "go native," borrowing a term from anthropology. Group members who establish a mutual trust with the researcher and who often provide in-depth information are called "key informants."
Participant observation also presents ethical issues that are less prominent in some other types of research. The researcher may fear that revealing his or her true purpose in the situation will be obtrusive and cause reactivity. Participants may become self-conscious or alter their normal modes of behavior if they are aware their behavior is under scrutiny. Some researchers study behavior that is illegal (e.g., drug use) or of dubious morality. Signed "consent statements" might implicate participants as criminals or hurt them professionally. The researcher must balance their ethics with data collection purposes.
Phenomenological
There are several aspects to the phenomenological approach in research. Phenomenological research tends to be holistic. It also tends to be interpretive, i.e., not just concerned with facts and findings, but the meaning of these facts and findings in the larger context of the individual life, group or organization. This kind of research is often empathetic, taking the view of the participants in the situation.
Unobtrusive Measurement
Many researchers are concerned that their presence in the field will be obtrusive, or noticeable. This is particularly problematic with research that occurs in the natural field, such as case histories or ethnographies. Obtrusive measurement may lead to reactivity, particularly social desirability. People may want to appear more organized, more competent, or more tolerant than they "really" are. You will recall that this is one major problem with video recording. People become self-conscious about their appearance or what they say in a manner that does not seem to happen with audio recordings.
Of course, some research, such as content analysis, may contain elements of social desirability when the data are laid down in the first place (e.g., death certificates may understate suicides; individuals "resign" instead of being fired).
There are two main ways we address obtrusive measures in less structured work. One way is how researchers behave. An attempt is made to be low key and non-judgmental in oral histories or in-depth interviews. Fear of reactivity is a major reason why participant observers go "under cover" and race to lavatories to scribble field notes. The second is in the measures that are used. "Gleanings" for example, which use the debris of everyday life, do not intrude on the lives of those whom we study. We may be able to study organizational records without bothering study participants.
Collecting data rather than Creating data
Working or emergent design
This is your preliminary plan for data collection. These methods are LESS structured, but that does not mean UNstructured or no structure. When you enter the field for the first time, you are likely to be overwhelmed, especially because you typically have no control over who enters or leaves the scene, or the events that occur within it. It is literally impossible for you to see everything (all at once OR in sequence), or to record everything.
Thus, you need some focal points to get started. Your design may designate key individuals or activities to study. It may include a sequence of the areas of the setting that will be studied.
Over time, you probably will gain a better idea of variables and sequences of behavior. You may be able to formulate hypotheses and define conceptual and operational variables. At that point the study design may change. For example, you may be able to create grid sheets of behaviors or occurrences to check off as you observe them.
Working hypotheses
Working hypotheses are much less precise than the relatively structured conceptual, operational, or null hypotheses that you constructed earlier this semester. You may not know exactly which variables you will study. Indeed, you may not even be interested in variables per se at this stage. Working hypotheses are more likely to take the form of questions about phenomena rather than relationships among well-defined variables.
Notice how working hypotheses direct your attention (e.g., to sermons or Sunday School classes), but do not tell you precisely what to observe.
External validity is typically much less of an issue in less structured research.
Triangulation means that you used at least two different kinds of methods to measure a phenomenon. It does NOT mean you used two or more instances of the same kind of measure, such as two self-report surveys.
Subjectivity is a constant problem in less structured research designs.
We can think of statistical significance testing, random assignment to experimental treatments, replication, or probability sampling as safeguards against our own biases and what we want to believe. All of us start a research project with hunches and our own beliefs about how we think cause and effect operate in the situations that we study. The problem is that our subjective beliefs can easily influence which phenomena we choose to observe, the categories that we construct, and the interpretation that we place on our observations or collected records. In participant observation, we may become so empathetic with those whom we study that we lose our own objectivity.
It is a delicate balance that we need to try to preserve between empathy and interpretation and objectivity. I think this balance is what can make less structured research so difficult. It helps to have multiple observers. It helps to have regular meetings among all those involved with the research project. These meetings should follow a semi-structured agenda that allows input from all participants on the projects, sharing ideas and interpretations and receiving feedback from other participants.
By now, you are getting the idea that less structured designs are far from easy. You are absolutely right! Conducting a less structured study is not an excuse to toss everything you have learned about research methods thus far. The researcher must be far more careful to try to specify concepts as clearly as possible (none of those "my concept is whatever this standardized test says" statements will work here). Because creating and testing hypotheses on the same set of data can produce tautologies, the researcher must bend over backwards to provide rigorous and varied evidence for his or her assertions.
Recall that because so much of less structured designs is interpretive and holistic, objectivity can be a definite problem. No matter what one's empathy about the people who are studied, the researcher must remember that she or he IS a researcher.
On the other hand, when it is possible, try to imitate the clarity and objectivity of more structured designs. If you can draw a probability sample of records or organizations, do so! If you are able to create a patterned grid to measure at least some of the behaviors that you observe and count their occurrence early in the research process, do it! If either prior theory or experience in the field allows you to create more precise hypotheses, create them!
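Drawing a probability sample of records is straightforward once the records are enumerated. Here is a minimal sketch using simple random sampling; the record identifiers and sample size are hypothetical:

```python
import random

# Hypothetical sampling frame: 200 archived case records, enumerated so
# that every record has a known, equal chance of selection.
record_ids = [f"case-{i:03d}" for i in range(1, 201)]

# A fixed seed makes the sample reproducible, so another researcher can
# verify exactly which records were drawn.
rng = random.Random(2017)

# Simple random sample of 20 records, drawn without replacement.
sample = rng.sample(record_ids, k=20)

print(len(sample))  # 20
```

The hard part in less structured research is rarely the draw itself; it is constructing the sampling frame, that is, the complete list of records or organizations from which to sample.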
For example, many computer programs are now available to assist you with coding data from content searches or ethnographic studies. Examples include AskSam and NVivo, among others.
PLEASE SEND ME AN EMAIL TO LET ME KNOW ABOUT ANY OTHER SITES OF THIS TYPE THAT YOU RUN ACROSS! I'll add them to the list.
And see Creswell designated chapters.
Thanks.
Susan Carol Losh
November 10 2017