EDF 5481 METHODS OF EDUCATIONAL RESEARCH
FALL 2017

 

READINGS AND ASSIGNMENTS
GUIDE 1: INTRODUCTION
GUIDE 2: VARIABLES AND HYPOTHESES
GUIDE 3: RELIABILITY, VALIDITY, CAUSALITY, AND EXPERIMENTS
GUIDE 4: EXPERIMENTS & QUASI-EXPERIMENTS
GUIDE 5: A SURVEY RESEARCH PRIMER
GUIDE 6: FOCUS GROUP BASICS
GUIDE 7: LESS STRUCTURED METHODS
GUIDE 8: ARCHIVES AND DATABASES

OVERVIEW
GUIDE 1: INTRODUCTION
USING METHODS
WHAT THEY HAVE IN COMMON
DEVELOPING A RESEARCH PROBLEM
TYPES OF VARIABLES
SOME STARTING CLICHÉS

This is the first of several guides that will be published online for EDF 5481 this semester.
 
     KEY TAKEAWAYS:
  • This course mostly (but not entirely) addresses quantitative methods
  • We do have a substantial "qualitative" section
  • We also examine use of existing databases
  • I take a consumer perspective as well as a research perspective
    • Many students here will be clinicians and practitioners in addition to or instead of researchers
  • Causal issues in all kinds of studies (more on this one later this semester)
  • Several methodological stages
  • Developing a research problem: some how-tos



USING METHODS
WHAT METHODS DO

In Methods of Educational Research, we study several "quantitative" and "qualitative" methods--or, in my preferred terminology, more or less structured research designs for collecting data. In this course, we deal with empirical research, i.e., research based on tangible data that are assessed using the evidence of our senses. While these kinds of data collection methods are not the only way in which we "know" things, they have particular utility for testing research hunches and hypotheses in a variety of fields.

Research Methods are most often used for two major purposes:

(1) To establish "facts," or recurring regularities in the environment.

(2) To test (and, more surreptitiously, establish) causal explanations for established facts. Most theories address explanations for factual material. Explanations typically assert causal relationships among variables of interest.

Consumers are constantly being bombarded with information: the Internet; journals; books; TV; newspapers; and much more. You need to know about research methods (in your discipline and outside of it) to be able to evaluate this information.

Any claimed "fact" or explanation that you encounter could be WRONG.

Establishing facts is hard enough! The measures we use may be contaminated by response bias (e.g., many people tend to agree with any general statement). As a result, you may not know whether the presented questions in a survey or a "personality test" measure the desired construct--or instead an "agreement response set." The sample that was studied may be relatively small and harbor unique characteristics that are not typical of the true population you are interested in. For example, it is risky to generalize from studies of college undergraduates to corporate workers. You may have measured the wrong dimensions or omitted key facets of your topic (example: you thought you were measuring positive attitudes toward performance--but instead you measured emotions about competition).

METHODS AND CAUSE: A PRELIMINARY STATEMENT

As soon as we try to establish causal precedence, things become even more difficult. For every pair of factors that we see locked in a causal relationship, there are several possibilities: A may cause B; B may cause A; the two may influence each other reciprocally; or some third factor may cause both, creating a spurious association.

 
One important example of misapplied causal inference is that of Hormone Replacement Therapy (HRT) in postmenopausal women. Early observational studies reported that women taking estrogen/progesterone hormone supplements following menopause had lower rates of heart attacks or strokes and lower odds of osteoporosis than women who did not take these hormones. The data appeared so impressive that many doctors did not wait for more conclusive experimental results before making their recommendations, so that by early 2002, over 16 million U.S. women were on HRT. However, a massive experimental study was eventually conducted. Half of the menopausal women in the study received HRT and the other half received a sugar-pill placebo. The women, from all walks of life and different social classes, in public clinics and with private doctors, were followed over time. To the researchers' shock, the experimental data indicated that women on HRT in fact had HIGHER rates of heart attacks and strokes. Although the incidence was still low, the data were convincing enough that the experiment was immediately terminated and new warnings were posted on hormonal supplements.

How could this happen? Women who took very good care of themselves (A) were more likely to see their doctors and thus to receive HRT in the observational studies, and (B) had a lower incidence of heart attacks in general. The TRUE causal factor, apparently, is self-selection, in this case the level of responsibility that individual women take for their physical well-being. Although the data are still far from all in, it appears that this is one case where incorrect causal inferences from observational data were literally lethal.
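
To see how self-selection can flip an observed relationship, here is a minimal simulation sketch in Python, with made-up numbers--NOT the actual study data. A hidden "self-care" factor makes women both more likely to receive HRT and less likely to have heart attacks, while HRT itself is assumed to be slightly harmful:

    import random

    random.seed(0)

    def simulate(n=100_000, randomized=False):
        """Return heart attack rates for the HRT and no-HRT groups."""
        attacks = {True: 0, False: 0}
        counts = {True: 0, False: 0}
        for _ in range(n):
            self_care = random.random() < 0.5
            if randomized:
                hrt = random.random() < 0.5                          # coin-flip assignment
            else:
                hrt = random.random() < (0.8 if self_care else 0.2)  # self-selection
            risk = 0.05                    # baseline heart attack risk (assumed)
            if self_care:
                risk -= 0.03               # self-care lowers risk
            if hrt:
                risk += 0.01               # assumed small harm from HRT
            counts[hrt] += 1
            attacks[hrt] += random.random() < risk
        return attacks[True] / counts[True], attacks[False] / counts[False]

    print("observational:", simulate(randomized=False))  # HRT group LOOKS safer
    print("experimental: ", simulate(randomized=True))   # HRT group is riskier

In the observational run, the HRT group shows fewer heart attacks even though HRT is harmful in this toy world; randomization equalizes self-care across the two groups and uncovers the true effect.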
 

A considerable amount of scholarship consists of formulating and testing alternative causal explanations for "factual material," that is, teasing out how and why regularities occur. Methodology is critical in the research enterprise. Some alternative explanations are methodological artifacts: for example, a limited population; an unrepresentative sample; biased questionnaire items or tests; or incomplete experimental treatments. Others are conceptual issues that can only be tested using thorough methods of data collection.


TAKE A DEEP BREATH!

STAGES OF METHODOLOGIES

In this course, we will study several different types of research designs. However, all these designs share some common features and a well-planned sequence of activities. I will mention some basic ones now, and we will study these issues in more depth over the semester:

DEVELOPING A RESEARCH PROBLEM

Most of us, when we begin to write up professional research, like to begin our papers like storytellers. We discuss an interesting recent research finding. We describe a compelling personal or social problem. Very often, the "meat" of our study does not even emerge until the fifth typewritten page. Besides making it very difficult for the reader, who must scrutinize this vivid--and lengthy--prose for several pages just to learn what the investigator wants to know and what the research topic actually is, this written procrastination signals that the author is not really sure what their research is about!

When I work with students on research projects, I am adamant that somewhere on the first page of writing, a student must tell me:

What the project is about.  Anxiety and testing results? Hormone fluctuations and sports participation? Motivation tools and sports team performance?

Why this project is important. Why it is a subject worthy of study.  Will it cure a social problem? Will it diagnose a learning disability? Will it help individuals achieve higher performance? Will it extend scholarship in the discipline?

What specifically will be done in this study.  An examination of how gender and educational type and level influence science knowledge in survey data? An experiment with social identity threat and pain tolerance? An observational study of group dynamics on football teams?

Or succinctly put:

  • What's the study topic?
  • Why should we care?
  • What, specifically, will this paper address about the topic?


This combination of three elements (the BIG 3) constitutes your research problem statement: the general area of your research, why this research area is important, and what specifically you will study.

Your research problem statement will also address:

  • Your key conceptual variables and definitions of these variables (see below).
  • Postulated causal relationships (if any) among these variables (or, conceptual hypotheses).

Writing a research problem statement will be THE MOST DIFFICULT ASSIGNMENT you will have all semester.
It is typically the most difficult task in the entire research process, even for experienced professionals.

HOW TO GET STARTED

If you are having trouble conceptualizing a research problem, you are not alone. This is the most difficult stage of conducting research. Further, in less structured research, you may constantly revise the research problem as you collect data, and you may do so in any kind of research if you encounter surprising and unanticipated results. Nevertheless, here are several "tried and true" ways to begin.

CONCEPTUAL AND DEDUCTIVE APPROACH. You are thoroughly familiar with the literature in your area (say, self-regulated learning) and you are aware of gaps where theory has not yet been tested, or where theoretical predictions contradict one another, or you derive your research problem from some basic theoretical assumptions. For example, perhaps you compare the reading assessment scores of elementary school children taught via "whole language learning" (remember that one?) versus "phonics".

CURIOSITY. Intrigued by regularly occurring "facts," you wish to know more about why and how those facts occur. You may be dissatisfied with previous explanations. For example, why does educational level affect basic science knowledge? Is it the type of college major? Stimulating an interest in science? "Weeding out" the less intelligent? Holding a scientific or technical job?

You may encounter a surprising, unanticipated "serendipitous" finding that begs for an explanation. Your guesses about why this anomalous result occurred become the basis of defining your research problem. For example, several decades ago, researchers on achievement motivation discarded women subjects because female results did not "fit" the researchers' paradigm. Encountering this unexplained quirk in a footnote in the course's assigned textbook, I have been examining issues in gender ever since.

IT'S THE MONEY, HONEY. Your major professor or your client defines the research problem and you conduct the study. In my experience, working for a client can be the most difficult way to begin because the client often has a very fuzzy idea at best of what they want to know or do. You often end up defining, or at the least clarifying and refining, the research problem for the client. Alternatively, you are looking for grant support and write a proposal conforming to the grant's parameters. In my reviewing experience, "doing it for the money" produces research no better and no worse on the average than research conducted for more altruistic reasons.

STILL STUCK? CHECK OUT THIS SITE FOR RESEARCH PROBLEM SELECTION HINTS.

KNOW YOUR TOPIC

There is no substitute for knowing your topic well. Most methods textbooks have excellent chapters that describe literature searches. Online search engines and journal or abstract services cut the time involved tremendously and alert you to new sources of information. Check out the links to various organizations (many of them sponsor journals) in the RESOURCES section of our Blackboard course.

Collect as many relevant study designs as you can. I have a file cabinet filled with survey research questionnaires on all kinds of different topics.

Talk with your clients, consult your major professor, speak with members of your proposed participant pool. Find out which aspects of your research problem are the most important to them (and to you).

DOES A PROBLEM REALLY NEED TO BE "A PROBLEM"? We call it a "research problem" in that the investigator proposes some kind of conundrum to be resolved. But this could be a conceptual statement (e.g., how do inter-cultural experiences lead to greater language acquisition?) and not any kind of psychological or social problem at all.

TYPES OF VARIABLES

One way to continue working on your research project is to start a flow chart. Diagram your key variables and the types of relationships among variables that you expect to find. Such a chart will alert you to the concepts you need to measure.
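
A flow chart need not be fancy. Here is a minimal sketch in Python, with hypothetical variables, that records your expected causal arrows as (cause, effect) pairs and then lists every concept you would need to measure:

    # Hypothetical causal diagram: each pair is (cause, effect).
    expected_links = [
        ("parental pressure", "test anxiety"),
        ("test anxiety", "test performance"),
        ("study hours", "test performance"),
    ]

    for cause, effect in expected_links:
        print(f"{cause} --> {effect}")

    # Every variable in the diagram is a concept you will need to measure.
    concepts = sorted({name for link in expected_links for name in link})
    print("Concepts to measure:", ", ".join(concepts))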

Each global concept, such as "reading assessment," "eating disorder," or "instructional design plan," has a number of variable components and alternative definitions. Be alert to this multiplicity of definitions and make clear what your definition is, what your key variables are, and what is or is not an instance of your definition. See if the materials you read logically follow this kind of progression.

Don't be surprised if they don't.

A variable is a characteristic or factor that has values that vary, for example, levels of education, intelligence, or physical endurance.

A variable has at least two different categories or values. If all cases have the same score or value, we call that characteristic a constant, not a variable.

CONCEPTUAL VARIABLES are what you think the entity really is or what it means. YOU DO NOT DISCUSS MEASUREMENT AT THIS STAGE! Examples include "achievement motivation" or "endurance" or "group cohesion". You are describing a concept.

On the other hand, OPERATIONAL VARIABLES (sometimes called "operational definitions") are how you actually measure this entity, or the concrete operations, measures or procedures that you use to measure the variable.
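
For illustration, here is a minimal sketch in Python, with invented items and numbers, of the distance between the two. The conceptual variable is "test anxiety"; the assumed operational variable is the mean of five 1-to-5 agreement items, one of them reverse-scored:

    import statistics

    # One respondent's answers to five hypothetical 1-5 Likert items.
    responses = {"q1": 4, "q2": 5, "q3": 2, "q4": 4, "q5": 3}

    # q3 is worded in the opposite direction, so reverse-score it (1<->5, 2<->4).
    responses["q3"] = 6 - responses["q3"]

    # The operational variable: this person's mean item score = "test anxiety."
    test_anxiety = statistics.mean(responses.values())
    print(f"test anxiety score: {test_anxiety:.1f}")

A different researcher might operationalize the same concept with physiological measures or observer ratings instead; that is exactly why you must state your definitions.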

You usually begin your research problem with CONCEPTUAL VARIABLES and the relationships among them. One of the few exceptions is if your actual purpose is to study a particular operational variable; for example, perhaps you want to study the validity of the SAT (the Scholastic Aptitude Test).

We will spend considerable time in the next week examining causal issues. For right now, you need to know about independent and dependent variables.

Causes are called INDEPENDENT VARIABLES.

If one variable truly causes a second, the cause is the independent variable. Speaking more statistically, variation in the independent variables comes from sources outside our causal system or is "explained" by these sources.

Independent variables are often also called explanatory variables or predictors.

Effects are called DEPENDENT VARIABLES.

Statistically speaking, we "explain" the variation in our dependent variable.

Dependent variables are also sometimes called outcome or criterion variables.
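
Here is a minimal worked sketch in Python, with fabricated data, of what "explaining variation" means. Hours of study is the independent variable, exam score is the dependent variable, and the squared correlation tells us what proportion of the variation in scores the predictor explains:

    import random

    random.seed(1)
    n = 200
    hours = [random.uniform(0, 10) for _ in range(n)]           # IV (predictor)
    scores = [50 + 3 * h + random.gauss(0, 8) for h in hours]   # DV (assumed effect plus noise)

    mean_h = sum(hours) / n
    mean_s = sum(scores) / n
    cov = sum((h - mean_h) * (s - mean_s) for h, s in zip(hours, scores)) / n
    var_h = sum((h - mean_h) ** 2 for h in hours) / n
    var_s = sum((s - mean_s) ** 2 for s in scores) / n
    r = cov / (var_h * var_s) ** 0.5

    print(f"r = {r:.2f}; r squared (variance explained) = {r * r:.2f}")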

A research problem often describes the causal relationships between independent and dependent variables and explains how these relationships come to be.
What we will learn over the course of this semester is that some designs are able to make stronger causal statements about the study results than others. In some cases, the causal direction simply is not clear.

A DOZEN METHODOLOGICAL CLICHÉS TO GET US STARTED

Susan Carol Losh
August 26, 2017
 
 
