EDF 5481 METHODS OF EDUCATIONAL RESEARCH
FALL 2017
This is the first of several guides that
will be published online for EDF 5481 this semester.
KEY TAKEAWAYS:
- This course mostly (but not entirely) addresses quantitative methods
- We do have a substantial "qualitative" section
- We also examine use of existing databases
- I take a consumer perspective as well as a research perspective
- Many students here will be clinicians and practitioners in addition to or instead of researchers
- Causal issues in all kinds of studies (more on this one later this semester)
- Several methodological stages
- Developing a research problem: some how-tos
WHAT
METHODS DO
In Methods of Educational
Research, we study several "quantitative" and "qualitative" methods--or,
in my preferred terminology, more or less
structured research designs for
collecting data. In this course, we deal with empirical research, i.e., research based on tangible data assessed through the evidence of our senses. While
these kinds of data collection methods are not the only way in which we
"know" things, they have particular utility for testing research hunches
and hypotheses in a variety of fields.
Research Methods
are most often used for two major purposes:
(1) To
establish "facts" or recurring regularities
in the environment. Examples of facts include:
- The incidence of violence on high school campuses.
- Dimensions of eating disorders among adolescents.
- The percentage of adult South Koreans with access to the Internet at least once a week.
- How many American adults over 18 engage in physical exercise.
- The percentage of people in different countries who speak a second language.
(2) To
test (and, more surreptitiously, establish) causal explanations for established
facts. Most theories address explanations
for factual material. Explanations typically
assert causal relationships among variables of interest.
For example:
- Students who engage in explosive violence on high school campuses have been bullied at that school.
- Science or technology professionals have greater access to the Internet at work than other workers do.
Consumers are constantly being bombarded with information: the Internet, journals, books, TV, newspapers, and much more.
You need to know
about research methods (in your discipline and outside of it) to be able
to evaluate information.
Either of these two
assertions immediately above could be WRONG.
Establishing facts
is hard enough! The measures we use may be contaminated by response bias
(e.g., many people tend to agree with any general statement). As a result,
you may not know whether the presented questions in a survey or a "personality
test" measure the desired construct--or instead "agreement response set."
The population that was studied may be relatively small and harbor unique
characteristics that are not typical of the true population you are interested
in. For example, it is risky to generalize from studies of college undergraduates
to corporate workers. You may have measured the wrong dimensions or omitted
key facets of your topic (example: you thought you were measuring
positive attitudes toward performance--but
instead you measured
emotions about competition).
METHODS AND CAUSE: A PRELIMINARY
STATEMENT
As soon as we try
to establish causal precedence, things become even more difficult. For
every pair of factors that we see locked in a causal relationship:
-
We could mistake
the direction of causality. For
example, research on parents who physically discipline (spank!) their children
found that nearly 19 out of every 20 parents use some form of physical
discipline. The rare children whose parents never spanked them were found
to have exemplary behavior. The assumption was that parental discipline
patterns influenced children's behavior. BUT,
isn't it possible that "exemplary children" (who must exist, somewhere)
never even tempted their parents to use physical discipline in the first
place? That is, the true causal variable here was the behavior of CHILDREN,
rather than parents.
-
Any apparent causal
relationship occurs because a third factor caused both the original "cause"
and also the "effect." In other words, the relationship is "spurious,"
and not a "true" or "real" causal relationship. For
example, several decades ago, researchers found that American high school
students who smoked cigarettes had lower grades. Their conclusion was that
something about smoking caused lower grades. Leaving aside the reversed
causal possibility (your grades were so awful, you began smoking to relieve
the stress), later scholars found that the "true cause" was parental social
class. High school students who came from poorer backgrounds were both
more likely to smoke cigarettes and also had lower average grades.
Once parental background was controlled, student cigarette smoking no longer
predicted grade point average (a small illustrative sketch of this kind of statistical "control" appears after this section). Spurious relationships appear in experimental
studies too; for example, experimental results may be due to anxiety aroused
by being in a testing situation or an artifact of a particular treatment
manipulation.
One important example of misapplied causal inference involves Hormone Replacement Therapy (HRT) in postmenopausal women. Early studies reported that women
taking estrogen/progesterone hormone supplements following menopause had
lower rates of heart attacks or strokes and lower odds of osteoporosis
than women who did not take these hormones. The data appeared so impressive
that many doctors did not wait for more conclusive experimental results
in their recommendations, so that by early 2002, over 16 million U.S. women
were on HRT. However, in the early 2000s, a massive experimental study
was begun. Half of the menopausal women in the study received HRT and
the other half received a sugar pill placebo. The women, from all walks
of life and different social classes, in public clinics and with private
doctors, were followed over time. To the researchers' shock, the experimental
data indicated that women on HRT, in fact, had HIGHER rates of heart attacks
and strokes. Although the incidence was still low, the data were convincing
enough that the experiment was immediately terminated and new warnings were
posted on hormonal supplements.
How could this happen?
Women who took very good care of themselves (A) were more likely to see their doctors and thus receive HRT in the observational studies, and (B) had a lower incidence of heart attacks in general. The TRUE causal factor, apparently, is self-selection,
in this case the level of responsibility that individual women take for
their physical well-being. Although the data are still far from all in,
it appears that this is one case where incorrect causal inferences in observational
data were literally lethal.
-
The results were
caused by alternative causal variables, leaving your original causal explanation
suspect. For example, I have found that
the level of basic science knowledge in American adults was somewhat higher
among men than among women. People who have read this material conclude
that women are just less knowledgeable about science. HOWEVER,
much of this difference occurred because women gave more "I don't know"
responses than men did. Issues such as self-efficacy may matter more in producing "I don't know" responses than in producing incorrect ones.
A considerable amount
of scholarship consists of formulating and testing alternative causal explanations
for "factual material," that is, teasing out how and why regularities occur.
Methodology is critical in the research enterprise. Some alternative explanations
are methodological artifacts: for example, a limited population; an unrepresentative
sample; biased questionnaire items or tests; or incomplete experimental
treatments. Others are conceptual issues that can only be tested using
thorough methods of data collection.
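To make the idea of "controlling" for a third variable concrete, here is a minimal sketch in Python (not part of the course materials). The numbers are simulated and purely hypothetical, not data from the studies described above: family background drives both smoking and grades, so the raw smoking-grades gap vanishes once students are compared within the same social-class group.

import numpy as np

# Simulated, made-up data: parental social class drives BOTH smoking and grades;
# smoking itself has no effect on grades in this toy example.
rng = np.random.default_rng(0)
n = 10_000
ses = rng.integers(0, 2, n)                                # 1 = higher parental social class
smokes = rng.random(n) < np.where(ses == 1, 0.15, 0.35)    # poorer students smoke more often
gpa = 2.4 + 0.6 * ses + rng.normal(0, 0.3, n)              # grades depend on social class only

# Raw comparison: smokers appear to earn lower grades...
print("smokers:", round(gpa[smokes].mean(), 2),
      "  non-smokers:", round(gpa[~smokes].mean(), 2))

# ...but within each social-class group the apparent "effect" of smoking disappears.
for level in (0, 1):
    grp = ses == level
    print("social class", level,
          "  smokers:", round(gpa[grp & smokes].mean(), 2),
          "  non-smokers:", round(gpa[grp & ~smokes].mean(), 2))

In the raw comparison the smokers' average GPA looks lower, yet within each social-class group the difference is essentially zero: the original association was spurious.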
STAGES OF METHODOLOGIES
In this course, we will study several different
types of research designs. However, all these designs share common features and a well-planned sequence of activities. I will mention
some basic ones now, and we will study these issues in more depth over
the semester:
- Being able to develop a clear research problem
- Deciding on the unit of analysis (individuals? groups? organizations?) and taking measures that are consistent with that unit of analysis
- Deciding how to sample one's chosen units
- Deciding how to measure one's concepts via choice of method (experiment? survey questionnaire, including "tests"? archival search? etc.)
- Once a method has been chosen, deciding on actual measures and procedures, such as questionnaire items or experimental treatments
- Pilot testing one's measures and double-checking the results
- Moving "into the field" to collect data
- Making contact with subjects and respondents, as well as with Institutional Review Boards (Human Subjects Committees) and any organizational representatives
- Training field staff
- Supervising or conducting data collection
- Reducing the collected data to manageable size by selecting coding categories and coding the data
DEVELOPING A RESEARCH
PROBLEM
Most of us, when
we begin to write up professional research, like to start writing our papers
like storytellers. We discuss an interesting recent research finding. We
describe a compelling personal or social problem. Very often, the "meat"
of our study does not even emerge until the fifth typewritten page. Besides making it very difficult for the reader, who must scrutinize several pages of this vivid--and lengthy--prose just to learn what the investigator wants to know and what the research topic actually is, this written procrastination signals that the author is not really sure what their research is about!
When I work with
students on research projects, I am adamant that somewhere on the first
page of writing, a student must tell me:
What
the project is about. Anxiety
and testing results? Hormone fluctuations and sports participation? Motivation
tools and sports team performance?
Why
this project is important. Why it is a subject worthy of study. Will
it cure a social problem? Will it diagnose a learning disability? Will
it help individuals achieve higher performance? Will it extend scholarship
in the discipline?
What
specifically
will be done in this study. An
examination of how gender and educational type and level influence science
knowledge in survey data? An experiment with social identity threat and
pain tolerance? An observational study of group dynamics on football teams?
Or succinctly put:
- What's the study topic?
- Why should we care?
- What, specifically, will this paper address about the topic?
This combination
of three elements (the BIG 3) constitutes your
research problem statement: the general area
of
your research, why this research area is important,
and what specifically you will study.
Your research
problem statement will also address:
Your key conceptual variables
and definitions of these variables (see below).
Postulated
causal relationships (if any) among these variables
(or, conceptual hypotheses).
Writing a research
problem statement will be THE MOST DIFFICULT ASSIGNMENT you will have all
semester.
It is typically the most difficult part of the entire research process, even for experienced professionals.
HOW
TO GET STARTED
If you are having
trouble conceptualizing a research problem, you are not alone. This is
the most difficult stage of conducting research. Further, in less structured
research, you may constantly revise the research problem as you collect
data, and you may do so in any kind of research if you encounter surprising
and unanticipated results. Nevertheless, here are several "tried and true"
ways to begin.
CONCEPTUAL
AND DEDUCTIVE APPROACH. You
are thoroughly familiar with the literature in your area (say, self-regulated
learning) and you are aware of gaps where theory has not yet been tested,
or where theoretical predictions contradict one another, or you derive
your research problem from some basic theoretical assumptions. For example,
perhaps you compare the reading assessment scores of elementary school
children taught via "whole language learning" (remember that one?) versus "phonics".
CURIOSITY.
Intrigued
by regularly occurring "facts," you wish to know more about why and how
those facts occur. You may be dissatisfied with previous explanations.
For example,
why does educational level affect basic science knowledge?
Is it the type of college major? Stimulating an interest in science? "Weeding
out" the less intelligent? Holding a scientific or technical job?
You may encounter
a surprising, unanticipated "serendipitous" finding that begs for an explanation.
Your
guesses about why this anomalous result occurred become the basis of defining
your research problem. For example, several decades ago, researchers on achievement
motivation discarded women subjects because female results did not "fit"
the researchers' paradigm. Encountering this unexplained quirk in a footnote in the textbook assigned for the course, I have been examining gender issues ever since.
IT'S
THE MONEY, HONEY. Your
major professor or your client defines the research problem and you conduct
the study. In my experience, working for a client can be the most difficult
way to begin because the client often has a very fuzzy idea at best of
what they want to know or do. You often end up defining, or at the least,
clarifying and refining the research problem for the client. Alternatively,
you are looking for grant support and write a proposal conforming to the
grant parameter descriptions. In my reviewing experience, "doing it for
the money" produces research no better and no worse on the average than
research conducted for more altruistic reasons.
STILL STUCK? CHECK OUT THIS SITE ON RESEARCH PROBLEM SELECTION HINTS!
KNOW
YOUR TOPIC
There is no substitute
for knowing your topic well. Most methods textbooks have excellent chapters
that describe literature searches. Online search engines and journal or
abstract services cut the time involved tremendously and alert you to new
sources of information. Check out the links to various organizations (many
of them sponsor journals) in the RESOURCES section of our Blackboard course.
Collect as many relevant
study designs as you can. I have a file cabinet filled with survey research
questionnaires on all kinds of different topics.
Talk with your clients,
consult your major professor, speak with members of your proposed participant
pool. Find out which aspects of your research problem are the most important
to them (and to you).
DOES
A PROBLEM REALLY NEED TO BE "A PROBLEM"? We
call it a "research problem" in that the investigator proposes some kind
of conundrum to be resolved. But this could be a conceptual statement (e.g.,
how do inter-cultural experiences lead to greater language acquisition?)
and not any kind of psychological or social problem at all.
One way to continue
working on your research project is to start a flow chart. Diagram your
key variables and the types of relationships among variables that you expect
to find. Such a chart will alert you to the concepts you need to measure.
Each global concept,
such as "reading assessment", "eating disorder," or "instructional design
plan" has a number of variable components and alternative definitions.
Be alert to this multiplicity of definitions and make clear what your
definition
is, what your key variables are, and what is or is not an instance of your
definition. See if the materials you read logically follow this kind of
progression.
Don't be surprised
if they don't.
A variable
is a characteristic or factor that has values that vary, for example, levels
of education, intelligence, or physical endurance.
A variable has
at least two different categories or values. If all cases have the
same score or value, we call that characteristic a constant,
not a variable.
CONCEPTUAL
VARIABLES are what you think the entity really
is or what it means. YOU DO NOT DISCUSS MEASUREMENT AT THIS STAGE!
Examples include "achievement motivation" or "endurance" or "group cohesion".
You are describing a concept.
On the other hand, OPERATIONAL
VARIABLES (sometimes called "operational definitions") are
how
you actually measure this entity, or the concrete operations, measures
or procedures that you use to measure the variable.
You usually begin your research problem
with CONCEPTUAL VARIABLES and the relationships among them. One of the
few
exceptions is if your actual purpose
is to study a particular operational variable, for example, perhaps you
want to study the validity of the "SATs", or Scholastic Aptitude Test.
We will spend considerable time in the
next week examining causal issues. For right now, you need to know about
independent and dependent variables.
Causes
are called
INDEPENDENT VARIABLES.
If one variable truly causes a second,
the
cause is the independent variable.
Speaking more statistically, variation in the independent variables comes
from sources outside our causal system or is "explained" by these sources.
Independent variables
are often also called explanatory variables
or predictors.
Effects
are called
DEPENDENT VARIABLES.
Statistically
speaking, we "explain" the variation in our dependent variable.
Dependent variables
are also sometimes called outcome
or criterion
variables.
A research problem
often describes the causal relationships between independent and dependent
variables and explains how these relationships come to be.
What we will
learn over the course of this semester is that some designs are able to
make stronger causal statements about the study results than others. In
some cases, the causal direction simply is not clear.
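As a concrete (and purely hypothetical) illustration of the statistical language above, here is a minimal Python sketch: hours of study serve as the independent (explanatory) variable and test score as the dependent (outcome) variable, and a simple regression line "explains" variation in the scores. The numbers are invented for illustration only.

import numpy as np

# Hypothetical, invented data for eight students.
hours = np.array([1, 2, 3, 4, 5, 6, 7, 8])           # independent variable (the presumed "cause")
score = np.array([55, 58, 62, 61, 70, 73, 75, 80])   # dependent variable (the "effect" we explain)

# Fit a straight line: score = intercept + slope * hours.
slope, intercept = np.polyfit(hours, score, 1)
print(f"score is approximately {intercept:.1f} + {slope:.1f} * hours")

The slope summarizes how much of the variation in the dependent variable is accounted for by variation in the independent variable; whether that relationship is truly causal is a design question, not a statistical one.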
A
DOZEN METHODOLOGICAL CLICHÉS TO GET US STARTED
1. Good research takes time. "Overnight polls" have bad
response rates and typically dubious generalizability. Experiments must
be pilot tested, for example, to see if participants even noticed your
treatment manipulations
(manipulation checks).
Ethnographies can take months, or even years, studying a particular group. No method can
be done in a hurry. Even if you are only in the field a short time,
allow enough time for planning and pilot testing in your research. Using
data originally collected for other purposes ("secondary analysis") doesn't
save any time, because you need to learn the database.
2. No one study disproves (or worse yet, "proves") anything. While
we like to think of the "definitive experiment," each study has strengths
and weaknesses. Perhaps one cannot generalize well to a known population
of individuals (groups) or situations (NOTE: this is EXTERNAL
VALIDITY). Perhaps there are alternative causal explanations
about what caused the outcomes (NOTE: this addresses INTERNAL
VALIDITY). An aggregate of studies is usually needed to make
strong assertions about the phenomena under study. Beware
the studies that you read that make grandiose claims for their results.
This includes studies that may have thousands of participants. The size
of the sample or population studied has almost nothing to do with causality.
3. Always,
ALWAYS pilot test before you go into
the field. This way you will catch
problems with the experimental manipulations, difficulties with field observational
categories, strange ways that respondents interpret your survey research
or "personality test" questions and much more. If you are using any type
of questionnaire, be sure that you pilot test at least once by reading
questions aloud (even if the questionnaire will be self-administered).
4. Try to measure your variables as many ways as practicably possible. You
want to rule out alternative explanations for your results. Do you want
your survey results to represent acquiescence response set instead of substance?
Of course not! Do you want your experimental findings to reflect experimenter
demand effects instead of treatment effects? Of course not! Beware
of the studies that truncate how they measure their concepts.
The process of measuring the same concept
in different ways is sometimes called "TRIANGULATION."
It is one way to try to ascertain CONSTRUCT VALIDITY
(i.e., whether your operationalized variables really measure the construct
you envisioned--or something else entirely).
5. Trust participants and respondents. Listen to what they
are trying to tell you. Your respondents may be trying to tell you
(subtly, nicely) that they can't understand your questions (your colleagues
had no trouble with the professional jargon). Your participants may see
the experimental task as ridiculous although they will try to "help out"
by completing it anyway. Debrief. You are bound by ethics to do so anyway.
Ask your participants what they thought was the purpose of your
experiment. Ask a random subsample of respondents to explain, in their own words, what they understood the question to mean or why they answered the way they did (Schuman's "RANDOM PROBE" technique).
6. Watch your defined population. Who does it represent?
Undergraduate psychology students only? Undergraduate education majors?
All undergraduate college students? High school Spanish students at an
upper income facility? Football coaches at AA universities? Graduate students
enrolled in distance learning courses? You almost certainly will want to
make generalizations later on if you gather quantitative data (or even
inappropriately if you gather qualitative data...) Assess the studies that
you read in terms of who they studied and who those individuals represent.
7. Try to avoid dichotomies in your measurements whenever possible. Likewise,
don't collapse an interval level variable (e.g., years of education) into
an ordinal one (unequal educational categories) if at all possible. The
computer can aggregate several categories into one category in a matter
of seconds (e.g., 9 through 11 years of formal education can be recoded
as "some high school"). However, you cannot go the other way: "some high
school" cannot be turned into a definitive number of years of education.
Try learning to think in conceptual continuums: degrees of "more" or "less" rather than "either/or." Although your manipulated treatments in an experiment may be categorical, even the manipulations can be "levels" or degrees of a treatment. (A short recoding sketch appears after this list.)
8. Consider how you will analyze your data once you have collected it--or, hopefully, even BEFORE you have collected it. I know that some
of you have not yet taken a course in statistics. Therefore--consult a
statistician or a friend/student who has elected several statistics courses.
If you do an experiment and want to use something called analysis of variance,
you will need interval level (or "sort of numeric" anyway) measures for
your dependent variables. Regression typically requires interval dependent
variables (see your statistics instructor for variations on this theme).
If all your variables are nominal you will be more limited in the analytic
methods you can use. One thing to consider when
you read a study is whether the statistical analyses are appropriate for
the type of data in the study.
9. It is very difficult, and sometimes impossible, for sophisticated means
of data analysis to compensate for poor data collection. If
the response rate is poor, the results probably cannot be generalized to
any known population. Some behavioral scientists engage in elaborate weighting
schemes so their data appear to be "representative" but the problem is
that we seldom know how those who responded differed from those who refused
or could not be located. If the measures are contaminated by some kind of systematic response bias, causal effects cannot be disentangled without gathering more data. If it turns out later that important variables were not measured, the investigator may no longer have access to their population to collect more information.
10. There is no such thing as "value-free" research. Researchers
are human beings who are the captives of their culture and time period.
This includes considering only research that produces "statistically significant"
differences as "important" (consider for a moment what this perspective
did to the field of "sex differences" when ONLY
research finding differences got published).
So what is there to do? Try to understand
your own values, those of study investigators, and how these might introduce
biases into research. Safeguard against your own biases in your own work.
Don't do your own interviewing, don't open code your own data, and don't
serve as your own experimenter. Trade off with another graduate student
or seek funding to hire people for these positions.
Talk with men (if you are female) or women
(if you are male). Show your research design to friends from other cultural
backgrounds to see if your ideas--or your treatments--or your questionnaire
items--might be misconstrued.
11. Your research will take at least twice as long and be at least three times
as much trouble as you ever thought it would before you got started. Trust
me on this one! Experimental participants
don't show up and must be rescheduled. The survey research lab goes broke
while you are in the field (this one really happened to me). You must locate
someone who speaks the language fluently. Allow time for the Human Subjects
Committee to examine your design. You had to go out of town and the client
pilot-tested on the wrong population (I had this one happen, too). MORAL: try to troubleshoot as much as you can at the very beginning!
12. DO THE BEST YOU
CAN WITH WHAT YOU GOT. No study (including mine or yours)
will be perfect. You almost certainly will have a less than ideal level
of funding (bake sale level, maybe?). You will have less than ideal assistance
and your time will be constrained. If we all waited for perfection, we
would never study anything. So, relax and enjoy!
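Here is the short recoding sketch promised in cliché #7. It is a minimal, hypothetical Python example (the category labels and cut points are only illustrative): collapsing an interval-level variable such as years of education into broad categories takes seconds, but the reverse is impossible.

def recode_education(years: int) -> str:
    """Collapse years of formal education into broad categories (illustrative cut points)."""
    if years <= 8:
        return "grade school or less"
    elif years <= 11:
        return "some high school"          # 9 through 11 years, as in the example above
    elif years == 12:
        return "high school graduate"
    elif years <= 15:
        return "some college"
    else:
        return "college graduate or more"

years_of_education = [7, 10, 12, 14, 16, 18]     # hypothetical interval-level responses
print([recode_education(y) for y in years_of_education])

Going the other way--turning "some high school" back into an exact number of years--cannot be done, which is why it pays to collect the more detailed measure in the first place.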
Susan Carol Losh
August 26 2017