SYP 5105-01           FALL 2017




  • Diverse learning theories are still prominent in social psychology, contribute to psychological therapy and training (such as "self-regulation"), and have changed public attitudes in many ways.
  • Formal and informal learning perspectives hold out the promise to us of lifetime learning and the always present possibility of change.
  • Similarities among classical conditioning, instrumental conditioning and modeling
    • Ideally, observable stimuli and responses (S-R)
    • Habit/practice/contiguity of S-R important, especially for learning
    • Motivation/reinforcement important, especially for performance*
    • Theories become increasingly more complex, cognitive and social
    • Generalizing from one situation to another; discriminating between one situation and another
    • Extinction (reinforcement schedules and frequency can influence)
  • Classical conditioning
    • Begins with and builds on existing S-R connections
    • Historically lays out the basics (e.g., extinction)
  • Instrumental (operant) conditioning-->applied behavioral analysis, desensitization training
    • Shaping behavior in steps (dog tolerates hoop, approaches hoop, sniffs hoop, sticks head through etc)
  • Social learning/modeling/"no-trial" learning/vicarious learning (lots of different names for the same general perspective!)
    • Clearly distinguishes between learning and performance
    • Both can be instilled through observation of others
    • Rewarding others can suffice for both learning and performance
  • Effects of punishment controversial; may create avoidance (behavior continues), trauma, modeling aggression as an adult
    • OR be exceptionally "effective"
    • Reward may be intrinsic or extrinsic--with different behavioral results ("oversufficient justification")
  • "Socio-cultural learning" Vygotsky
    • Several parallels with George Herbert Mead and symbolic interactionism
    • Learning from the outside in
    • Quite possibly describes how internalization of society proceeds, and maybe even "superego"-type development
  • Other influences: age and gender and social status "appropriateness" of behavior
*I'm not saying I agree with these equivalences (e.g., does motivation really = observable reward necessarily?) but only how they often occur in the literature (with, of course, room for disagreement)



Many of you have seen dog agility trials that often appear on sports networks, or on Animal Planet. In these competitions, dogs race through tunnels, swerve back and forth through "weave poles", walk tightropes, and jump through hoops. The dog who performs these behaviors correctly in sequence in the least amount of time wins! Very few dogs learn and perform these behaviors on their own; they are carefully taught. Nevertheless, once trained, most agility dogs not only seem to enjoy the applause but enjoy completing the actual behaviors.

Since most dogs are not exactly what we would call "self-actualizing" or "self-regulating," how do they learn such complex skills in sequence? Read on for a variety of learning theories that can explain "the agility dog."



Formal learning theories, heavily influenced by Russian and American research, were a product of the late nineteenth and early twentieth centuries. Behavioral scientists felt that in the stimulus-response connection, they had found the equivalent of "the atom" in physics or "the cell" in the life sciences: they envisioned the "S-R" connection as the basic building block of all behavior. Learning theories, or "behaviorism," became the dominant thrust from the early twentieth century to the early 1960s. These theories still form a prominent paradigm in social psychology today, contribute to psychological therapy, and have changed public attitudes in many ways.

In the past, human behavior was largely seen as "instinctual" or biologically based; with this paradigmatic shift, our behavior was now seen as learned and heavily influenced by the environment. This change was tremendous: after all, if behavior is learned, previous theories of race- or ethnicity-based behavior, grounded in biology or "instinct," made no sense. Perhaps the last bastion to fall was that of biologically based "sex differences" as "destiny." Social scientists no longer viewed national differences as genetically ingrained but due, instead, to cultural influences. (Of course, with more evolutionary theories, we have returned to issues of "temperament," but this is NOT the old instinct theory. Biology is NOT seen as destiny but as one of many inputs into the human outcome.)

All organisms are considered subject to stimulus and response connections. And all behavior is subject to "shaping." From the progression of babbling baby to speaking child, to teaching tricks to your dog, to Unabomber victim psychologist James McConnell, who shaped the behavior of planaria (primitive flatworms), all living matter learns.

The practical and political implications meant that national and cultural viewpoints could change, and individuals would change with them. "The sky was the limit" for women or ethnic groups previously seen as immutably hampered by gender or color. With the view of lifetime socialization sponsored by a learning theory perspective, even an old dog could learn new tricks. And, of course, the whole concept of "adult education" took on entirely new meanings.

In virtually all countries exposed to these perspectives, socialization (at home and elsewhere) and education became the keys to change. In the early twentieth century United States, many public schools taught adult immigrants English and "American ways" in night school. U.S. government agencies sponsored pamphlets and training programs for "modern childrearing." At least two generations of American children were raised "by the clock," conditioned to be obedient to schedules created by behavioral specialists. The profession of social work became a government-sponsored industry. By the late 1960s, Head Start was created to ready young children for elementary school. By that time, children from upper- and middle-class families already attended nursery school, many of them being drilled in foreign languages and reading. New fields, such as adult education and instructional design, became industries, as government and industry began to realize that education never stops. Extension of the behaviorist model to gender meant that girls received opportunities in sports and education that previously were nonexistent.

Learning theory models were consistent with and boosted the emerging fields of anthropology and sociology, which saw individual behavior as determined by "social forces." It was not difficult to translate cultural mandates and interactional scripts into chains of learned stimuli-response connections.

Did these dramatic changes engender "backlash" or reaction? They did. Developmental theorists pointed out that children had to be at particular maturational stages to perform activities (such as toilet training) that behaviorists were trying to instill at "too early" an age. By the 1980s, biological determinism and Social Darwinism re-emerged in new guises, buoyed by breakthroughs in genetics, although many sociobiology assertions (e.g., "an altruism gene") have never been verified, and the end goal appeared similar: to make stratified social divisions appear "functional," natural, inevitable, and immutable. These streams of thought continue, sometimes financially bolstered by those who would fund agenda-driven scholarship. The book The Bell Curve, for example, which asserts biological differences in "intelligence" and "motivation" between Blacks and Whites, was largely funded by grants to the authors (one a specialist in pigeon learning, the other a political scientist, neither a specialist in genetics or IQ) through a very politically conservative foundation.

At the same time, cognitive specialists questioned the passive perspective reinforcement theories imposed on human behavior. People (and other organisms) could learn and react, all right, but where were originality, initiative, leadership, or the importance of values? About the best behaviorists could muster were vague allusions to randomly emerging behavior, which, if reinforced, became the springboard for original responses.

Meanwhile, during the 1930s in the Soviet Union, Lev Vygotsky continued to refine his "socio-cultural" theory of learning. Like many symbolic interactionists, Vygotsky believed that (1) learning is inherently social and (2) learning occurs "from the outside in." His theories have become influential in the United States (see below for more detail).


Similarities include:

Stimulus and response should both be observable. You must know when the behavior occurs.

Stimulus and response are more often linked if they are contiguous or associated in some way. Association can be temporal (happening close together in time) or geographical (linked to the same place), but stimuli and responses can also be linked by symbolism or meaning. Contiguity is a key in virtually all learning theories and is especially important in classical conditioning theories.

Stimulus and response become linked through processes of reinforcement. Rewarded behavior is more often repeated under the same or similar stimulus conditions to those which contained the reward. (The effects of punishment are conceptually more controversial and are described later.)

Virtually all learning theories consider the role of habit, frequency, or practice. Psychologist Clark Hull was a major exponent of this view. Stimulus-response connections that occur more often are seen as higher in the habit or response hierarchy. Thus, a well-practiced response is more likely to be evoked and performed in a similar stimulus situation. The veterans of the 1993 attempted World Trade Center bombing in New York City had a practiced repertoire or script when the buildings again came under attack in 2001. Others had had rehearsals or fire drills held by their companies, a life-saving effort because stairwells were not always easily spotted on all floors and some were blocked by furniture.

Learning theories also incorporate motivation and drive. These are sometimes ascertained through deprivation (e.g., not feeding your dog her breakfast) and other times by goals with a high positive valence. Self-regulation is one type of motivation whereby the individual internally prioritizes goals and undertakes steps to achieve them. Skinner's operant conditioning theory was one of the early perspectives to clearly distinguish motivation from habit or practice.

Both stimuli and responses generalize, i.e., there are a range of similar stimuli which evoke roughly equivalent responses and there are roughly similar responses (a smile, a giggle, a guffaw) that can occur to the same stimulus. So, for example, if you have a bad experience with a red-headed teacher, you might invoke stereotypes about redheads and apply them to other people with red hair.

Individuals also discriminate, i.e., they treat stimuli that are different enough as distinct. Generalization and discrimination characterize even Pavlov's early theory of classical conditioning.

Over the twentieth century, learning theories also became:

More complex, often leading to several stimulus-response chains, or adding additional variables such as generalization gradients.

More social, in particular, acknowledging that organisms could learn through watching others perform or through watching others receive rewards or punishments for their behavior. This is especially true for Albert Bandura's theories of social cognition and learning (for the past 50 years!)

More cognitive, adding cognitive mediators and information organizers that were not immediately observable. For example, self-regulation is highly internal and cognitive.

They distinguish more between learning and performance. Contiguity and habit are often more associated with learning, while drive, motivation, and reward are more associated with performance. That is, learning can occur through frequent, contiguous presentations, while performance may require an incentive. In classic learning theories, incentives are seen as external. As theories became more sophisticated, internal incentives, such as self-identity, meaning, or anticipatory goals, were added.


Classical conditioning is associated with Ivan Pavlov, who often worked with dogs in Russia at the turn of the twentieth century. His terminology is still current today.

Pavlov would deprive dogs of food for about a day, then present the dog with meat powder. Upon smelling the meat powder, the dog would begin to salivate. Pavlov measured saliva output in drops. (It was pretty boring to read about...)

He termed the meat powder the unconditional stimulus and salivating as the unconditional response. Pavlov treated unconditional stimuli and responses as a given, an already established stimulus-response connection, possibly due to reflex or to unmeasured earlier conditioning.

Pavlov then presented a tone or a light (the conditional stimulus) prior to presenting the meat powder. After a few trials, the dog would salivate when the tone was sounded or the light turned on. Then Pavlov omitted the meat powder, but the dog continued to salivate in response to the light or tone. Note that the conditional stimulus must be presented prior to the unconditional stimulus. Pavlov termed this response to the tone or light the conditional response--it was typically not exactly the same as the unconditional response (for example, less saliva) but was similar in form.  Thus, previously neutral stimuli become associated with other stimuli or with rewards and punishment.

Notice in Pavlov's research that the unconditional stimulus was fused, or confounded, with the reward or reinforcement. Disentangling these two (the stimulus versus reward) was a refinement in later learning theories.

Pavlov could not permanently remove the meat powder, or eventually he would sound the tone or activate the light and nothing happened at all. The conditional stimulus-response connection was now broken. Pavlov termed this process extinction. The unconditional stimulus now had to be replaced in the sequence to reinstill the conditional response. Learning about extinction stimulated Pavlov's interest in different kinds of reinforcement schedules:

Reinforcement could be given on a fixed time basis (say, every five minutes); this is a fixed-interval schedule.

Reinforcement could be given at a fixed ratio (say, every fifth time the organism made the response).

Reinforcement could also be irregular or variable.

Regular reinforcement is often used at the beginning of a learning task to "stamp in" the response.

After that, irregular (variable) reinforcement schedules are typically the most resistant to extinction. Later research suggests that consistent use of a regular (interval or ratio) schedule sensitizes the organism to changes in the reinforcement schedule, so that it notices immediately when the reinforcement is missing. It is harder to detect changes when a variable schedule is used because the reinforcement is unpredictable. Notice the cognitive approach to extinction here.
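To make the three kinds of schedules concrete, here is a minimal Python sketch. This is my own illustration, not anything from the classic literature; the particular parameter values (every fifth response, every 60 seconds, a mean of five responses per reward) are invented for the example.

```python
import random

def fixed_ratio(n):
    """Reinforce every nth response (a fixed-ratio schedule)."""
    def schedule(response_count, elapsed_seconds):
        return response_count % n == 0
    return schedule

def fixed_interval(seconds):
    """Reinforce the first response after each interval of `seconds`
    (a fixed-interval schedule)."""
    state = {"next_time": seconds}
    def schedule(response_count, elapsed_seconds):
        if elapsed_seconds >= state["next_time"]:
            state["next_time"] += seconds
            return True
        return False
    return schedule

def variable_ratio(mean_n, rng=random.Random(0)):
    """Reinforce each response with probability 1/mean_n, so the payoff
    is unpredictable (a variable-ratio schedule)."""
    def schedule(response_count, elapsed_seconds):
        return rng.random() < 1.0 / mean_n
    return schedule

# A fixed-ratio-5 schedule rewards exactly every fifth response:
fr5 = fixed_ratio(5)
rewards = [fr5(i, 0) for i in range(1, 11)]
print(rewards.count(True))  # 2 rewards in 10 responses
```

The cognitive point about extinction falls out of the structure: under the fixed schedules a missing reward is detectable on the very next expected trial, while under the variable schedule no single unrewarded response signals that the schedule has changed.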

Pavlov's work also gave rise to the concepts of generalization and discrimination (or differentiation).

His work had enormous influence in the United States due to adaptations by psychologist John Watson. Watson felt that humans started with few reflexes, such as a startle reflex or a fear of falling. All else was conditioned in. In his research with "Little Albert," Watson demonstrated how a toddler, originally interested in a furry white rat, became afraid of it after Watson repeatedly clashed cymbals behind Little Albert's head when the animal was presented (they didn't show the cymbals in a movie clip of Watson and Little Albert--no wonder!). Little Albert quickly generalized his fear to any white furry animal, such as a rabbit, and (so legend has it) eventually to anything white or furry.

Thus, much of mental illness (or, at least phobias) was seen as explained by conditioned responses and generalization to an initial incident.

Watson extended his applications to the socialization process (those rigid schedules for feeding--and even for holding an infant) and to education. He believed that anyone could become anything--any occupation--with the proper training. His ideas (particularly on schedules) influenced child-rearing until Dr. Spock and other, more development-oriented perspectives gained credence following World War Two.

The picture of people under these perspectives was passive; people were "bundles of reflexes" and conditioned responses.

There was, however, a relatively unnoticed "fly in the ointment": Pavlov noted that if someone new entered the testing room, dogs would show an alerting response. All their other responses stopped as the dog "oriented" to the new stimulus in the environment.


The late B.F. Skinner had a more active picture of humanity (and of other animals and birds). Skinner saw organisms as active, constantly emitting responses in a random manner. Some responses were rewarded. Those that were rewarded would remain (following Thorndike's "law of effect") in the organism's behavioral repertoire.
Skinner saw learning as incremental. The organism began with a random response which bore some, often very slight, resemblance to the end desired response. For example, simply to get a dog to come near a hoop may be an initial triumph if you are trying to teach your dog to jump through it.

At first the initial response would be strongly rewarded, almost regardless of what it was. As the organism continued to emit responses (now, the dog sniffs the hoop), only those that came closer to the desired response would be rewarded. Step by step, the organism is successively rewarded for responses that more greatly resemble the goal behavior. Rewarding these successive approximations is called shaping. Skinner saw shaping at work in all forms of human behavior, including language. Eventually, the dog sniffs the hoop, then sticks its head through it, walks through the hoop, then jumps as the hoop is lifted off the ground.
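The shaping loop described above can be written as a toy simulation. This is my own sketch for illustration: the "learner" model, the 0-100 behavior scale, and all the numbers are invented, but the logic is the successive-approximations idea itself: reward anything meeting the current criterion, then raise the criterion.

```python
import random

def shape(target=100.0, increment=10.0, seed=42):
    """Toy shaping loop.  The learner emits responses on a 0-100 scale
    (100 = the goal behavior, e.g. jumping through the hoop).  Only
    responses meeting the current criterion are rewarded; each reward
    raises the criterion, so ever-closer approximations are required."""
    rng = random.Random(seed)
    criterion = 0.0        # how good a response must be to earn reward
    learner_level = 0.0    # best behavior the learner reliably emits
    trials = 0
    while criterion < target:
        trials += 1
        # the learner's responses vary around its current level
        response = learner_level + rng.uniform(-5.0, 15.0)
        if response >= criterion:              # close enough: reward it
            learner_level = max(learner_level, min(response, target))
            criterion = min(target, criterion + increment)  # raise the bar
    return trials, learner_level

trials, level = shape()
print(trials, round(level, 1))
```

Note the built-in gradualism: because each response varies only a little around the learner's current level, a criterion set at the goal behavior from the start would almost never be met, which is exactly why shaping proceeds in steps.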

Skinner trained pigeons for the military during World War Two. He also developed the "Skinner Box," a cube with a bar for a rat to press and a tube that dispensed food pellets, and even a "total environment," a kind of Skinner Box for an infant, with climate control (no diapers or clothes, just a paper sheet on rollers) and visual stimulation. His novel, Walden Two (still fun reading after nearly 70 years), portrayed an ideal society based on operant conditioning principles. For example, young children learned to resist temptation by wearing a lollipop dipped in powdered sugar on a string around their necks. Hungry children came to the table and donned their lollipop, which would show tongue tracks in the powdered sugar. The length of time the child had to wait before licking the lollipop was gradually lengthened from a few minutes to as long as an hour. I used exactly this approach to quit smoking, and I haven't had a cigarette since 1994.

Never smoked, don't care, find "the personal stuff" boring? OK, this section's optional; skip ahead to the section on phobias.
Over 23 years and counting...

I remembered Skinner's lollipop training when I made the seventh, and it turned out, my final, attempt to quit smoking. I was as addicted to cigarettes as it gets (including waking to smoke in the middle of the night) and got the nicotine "high" or "buzz" that many smokers mention. By the mid-1980s following several stressful events, I was smoking three packs of cigarettes a day (that's 60 daily cigarettes for those who never smoked), a quantity that shocked even me.

In previous attempts to quit "cold turkey," I could not even last 24 hours. At one point I was smoking and chewing nicotine gum simultaneously. Obviously, none of this did my sense of self-efficacy much good. And, in one of life's little ironies, it turns out that I am allergic to tobacco.

On the seventh try, I remembered the one time I had temporarily quit for five days, during a two-week bout of measles in my last semester of graduate school. I was so sick, and the Health Service infirmary forbade smoking. Thus, at my next bad cold, I cut my consumption in half, to 1.5 packs per day. This was not without repercussions: I gained 25 pounds of fluid in two weeks (and I didn't eat more, either); my hands shook; and I woke constantly with nightmares for a month. When my doctor told me I should have just quit cold turkey, I reminded him that this was the equivalent of a pack-and-a-half-a-day smoker quitting cold turkey.

Following that, I invoked the "lollipop method," gradually extending the time between cigarettes. At first this was incredibly difficult. I set timers for the allotted minutes; I took up embroidery (and within a year generated 13 pairs of embroidered pillowcases, which virtually everyone I knew received as gifts); I did isometric exercises when the craving hit (which was plenty). I counted out my allotted cigarettes per day. I rewarded myself with a book or other goody when I achieved intermediate goals. I used my calculator to estimate how much I spent on cigarettes in a year and how much I had saved (tax-free!) with my cigarette reductions.

I began to condition my cigarette use to time. Previously I had lit a cigarette in response to any minor stress such as being stuck in traffic or to meals (one before, one after). Clearly I had conditioned myself to smoke in response to various events. By conditioning to time, I broke part of the psychological hold that kept me smoking. For example, whenever possible, I would stop whatever I was doing to have a cigarette every 30 minutes on the half hour. If that was impossible I had to wait until the next free half hour (cigarettes in my method could not be banked; an opportunity lost was not made up.)

People who came to my office who heard the timer ticking and saw the cigarettes neatly laid out in labelled rows couldn't stop laughing (although many of them are still smoking.)
(In those days, one could smoke in the office.)

And every time I got sick, I cut back as much as I could. There were many times I hit a plateau, but I never increased my consumption.

By August 4, 1994, I was down to one cigarette every five days. To show you just how hooked I was, I refused to use the "Q word" and was contemplating "cutting back" to one cigarette per week. Although I didn't know it then, that was my last cigarette. I flew to Los Angeles for a conference and contracted a terrible case of food poisoning while there. By the time my temperature dropped to normal, that was it. No more cigarettes. For months afterwards, if I smelled tobacco, or even saw a cigarette billboard advertisement, I literally went weak at the knees as a physical craving hit me from head to toe.

So that's how I finally quit smoking, and I feel I owe it all to operant conditioning.

Maybe you have something you want to change: diet, exercise, or something else habitual. Give it a try!
Put Psychology to work for you!

Instrumental conditioning has become the favored treatment for phobias. Phobias are a strong, unpleasant set of physiological reactions, with accompanying anxious ideation, to an object or circumstance that most people find innocuous. The fact that the cat or the balcony or going outside really won't hurt you doesn't help. Most people with phobias already know that these entities are basically harmless; that knowledge doesn't make them feel better. In fact, it makes them feel worse.

Phobic anxiety reactions can be terrifying. I cannot emphasize enough that anxiety isn't just "psychological"; anxiety has a very strong physical component. The person might feel "a band around their chest," feel dizzy, nauseated, sweat profusely, and feel unable to breathe. Full-blown agoraphobics literally cannot leave home by themselves. Some end up in a hospital emergency room convinced they are experiencing a heart attack. Less severe cases may go to great lengths to avoid flying, heights, dogs, cats, or whatever is the subject of the phobia. Others stop driving. As you can see, phobias exert an enormous toll in work days lost, inconvenience, and just plain old human misery.

More traditional, psychodynamically oriented psychologists tended to see phobias as resulting from inner conflicts and psychological defenses. They believe in substitution, that is, if one phobia is "cured," another will replace it until the inner conflict is addressed and resolved. As you might guess, it can take months or even years to unearth unconscious psychological conflicts and resolve them.

Behaviorists believe that phobias are noxious habitual responses to a set of stimuli. The first anxiety reaction may be accidental and the setting probably a coincidence (e.g., an attack of dizziness at a shopping center perhaps in response to sniffing tobacco, a fragrance, or something else that might be an allergen for the individual). Through generalization, the person now begins to have the same reactions under somewhat similar circumstances, such as in a grocery store or pharmacy. Very rapidly, the individual may now produce phobic responses in any type of shopping in a large enclosed area. The preliminary response of most people is then to avoid circumstances that set off these unpleasant phobic reactions. Unfortunately, generalization often means that phobic reactions continue to occur, only in new settings, which, in a chain reaction, are then avoided. The individual's world becomes smaller and smaller.

Desensitization training, a variant of operant conditioning, is much faster and more effective than traditional "talk therapy." Further, new phobias do not seem to arise. In desensitization, the individual starts learning relaxation responses in a non-threatening environment. The person learns to contract and release muscles from head to toe. At the end of this series, the body is very relaxed. The muscle exercises are often coupled with visualizing a relaxing scene, which is usually person-specific (favorites are being near the ocean and listening to the surf, or sitting in a quiet forest; others imagine aural stimuli, such as a favorite piece of music.)

Then, shaping begins. The person is exposed to the least threatening version of the phobia (for example, if you have a feline phobia, you might be shown drawings of kittens). Relaxing techniques are practiced until the individual becomes comfortable with the situation. Then, the next level of the phobic situation is invoked (for example, photographs of adult cats). Step by step, the person learns to make relaxation responses in the presence of a phobic situation (for example, you might have a cat in a carrier placed across the room, which is then moved closer; eventually you might pet the cat.) Other desensitization tactics may use "virtual reality" goggles.
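The step-by-step logic of working through a fear hierarchy can be sketched in a few lines of Python. This is a hypothetical illustration: the hierarchy items follow the cat-phobia example above, but the `relax_trial` callback and the pacing logic are invented stand-ins for the therapist's judgment that the client stayed relaxed.

```python
# Hypothetical fear hierarchy for a feline phobia, mildest version first.
CAT_HIERARCHY = [
    "drawings of kittens",
    "photographs of adult cats",
    "cat in a carrier across the room",
    "carrier moved closer",
    "petting the cat",
]

def desensitize(hierarchy, relax_trial, max_trials=100):
    """Advance through the hierarchy one step at a time.  A step is
    passed only when relax_trial(step) reports the client stayed
    relaxed; until then, relaxation practice repeats at that step.
    Returns the number of trials each step took."""
    trials_per_step = []
    for step in hierarchy:
        trials = 0
        while trials < max_trials:
            trials += 1
            if relax_trial(step):   # client relaxed: criterion met
                break
        trials_per_step.append(trials)
    return trials_per_step
```

Because the client's state is passed in as a callback, the same loop could drive a paper session log, a simulation, or a "virtual reality" exposure program; the essential design is simply shaping applied to relaxation responses.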

Operant conditioning has often been used to explain internalization, or how the individual comes to accept society's values and regulations. For example, Vygotsky's discovery of private speech, in which children talk to themselves aloud before learning to silently think to themselves, is one possible mechanism. By speaking to themselves, children reinforce these values and rules, and they become part of the self.

Operant conditioning has several drawbacks:

It poorly predicts the acquisition of more complex behaviors, such as generative grammar in language, as compared with the learning of individual words. On the other hand, sounds that infants make that are not part of their culture's language do begin to drop from the infant repertoire--consistent with Skinner's predictions.

It is slow and cumbersome. As Albert Bandura put it, God forbid you should learn to drive a car this way! It also took me eight years to quit smoking.

As an undergraduate psychology major, my friends and I trained rats in Skinner boxes. Poor lab rats! It took them so long to connect pressing the bar with its click noise to receiving a food pellet. In fact, it took forever for them to learn to even depress the bar, so we speeded up the process. We crumbled the pellets and sprinkled them on the bar. That brought the rat right over to the bar. In nibbling the pellet, the rat would press on the bar, it would click, and a pellet shot down the tube of the Skinner box. Using this procedure, lab rats quickly learned to press the bar to receive food pellets.

And operant conditioning STILL doesn't explain the generation of original complex responses.

MODELING (among other names)
This type of learning is also often called:

Social Learning (I don't use that term here because the DeLamater et al book uses a different definition of Social Learning, which derives from a symbolic interactionism perspective)


Observational Learning (because this is what happens)

Vicarious Learning (because what happens to the model has a vicarious effect on the observer; if the model is rewarded, the observer expects to be rewarded for the same behavior)

"No-trial" Learning because the behavior can be learned from one observation without practice.

Social cognition (oh, ouch!) most recently. (You can imagine how the use of that term aggravates those of us who study the import of social factors on basic cognition, as well as "person perception" of others.)

I will use the term "modeling" here. Going back to John Dollard and Neal Miller's work in the late 1930s, the assumption is that people (and often animals) imitate or "model" responses. Why does imitation occur? That is never really explained; it is just something we typically begin to do by the end of our first year of life.  Perhaps it has a biological base; perhaps it is a response to stimulation...The original organism emitting a response is called "the model."

Not all responses or all models are imitated. Physically aggressive responses, at least for young boys, are often imitated. Physically arousing behavior in general is more often imitated.

Models who are warm and friendly are imitated more often. So are higher status or more powerful models.

What happens to the model is critical. Behavior is far more often imitated if the model's behavior is rewarded. It is less often imitated if the model's behavior is punished. Thus, this research demonstrates the influence of vicarious reinforcement.

What happens to the observer is perhaps even more critical. In Bandura's early work, he showed that sex differences between young girls and boys who imitated a filmed aggressive model virtually disappeared if both sexes were directly rewarded for imitation. Boys were much more likely than girls to imitate an aggressive model if neither the child nor the model received any reward or punishment. Thus, one thing these results illustrate is that behavior can be learned but not performed if the observer or the model is not rewarded.

Modeling was not only a more social and more cognitive form of learning, it added several advantages to previous theories:

It illustrates that behavior can be learned, but not performed, unless the circumstances are auspicious.

It shows that complex behavior can be rapidly acquired, sometimes within the context of a single observation.

Behavior can be acquired in "large chunks" or sequences, rather than laboriously step by step. Interactional sequences (scripts) can be modeled too.

Consequences to the model are very important.

Both direct and vicarious reward can be effective. So can direct and vicarious punishment.

Behaving in role-appropriate ways (gender, age, occupational) may be intrinsically rewarding. Some role theorists, social identity theorists, and, in addition, developmental theorist Lawrence Kohlberg speculate that people deliberately and selectively seek out and perform role-related behaviors. For example, studies of nursery-school-age children show that at that age many want to act "like a girl or boy should."

Modeling results from literally thousands of studies have changed how we view responses to filmed physical aggression. Prior to these studies, a prominent view was the catharsis or "hydraulic model." The basic idea was that physical aggression was one response to stress, which built up like steam in a closed boiler. Watching filmed violence supposedly had a "cathartic effect" by allowing a stressed individual to "blow off steam" by watching someone else do the punching. Now, we realize that a steady diet of filmed violence has negative effects and aggravates, not alleviates, aggressive responses. Even the American Medical Association has labelled watching violent TV a medical risk for children.

One chilling finding is that aggression increases after watching filmed violence, even if people have not been frustrated or are not under stress. In fact, Leonard Berkowitz' research on the cue theory of aggression suggests that all that is needed are aggressive cues. In a simple and ingenious experiment, half the students entered a laboratory room where a gun was placed in a jumble of papers, books, a tennis racket, and assorted academic junk on a table. The gun was removed for the other random half of the students. The gun was never mentioned by the experimenter. Nevertheless, students shocked an experimental confederate more often and more severely when the gun was present on the table than when it was absent. Boys in Bandura's original studies imitated filmed aggression even if they were not frustrated or even rewarded for their imitation (see below).

Unfortunately, the way most violent villains are punished in films and television programs is through violence delivered by the heroes. The heroes are then usually rewarded for violently bringing the villains to justice.

People who watch a lot of TV (which shows several murders per hour) overestimate the incidence of violent crimes in their area and are more likely to report being afraid to go out at night. (This "mean world syndrome" occurs despite controls for actual crime rates in the person's area and their own experiences of being victimized by crime.) The late George Gerbner's "cultivation theory" has been tested largely through surveys of adults in the general public.

In experiments, boys don't need to see a model rewarded to imitate filmed aggression. As long as the model isn't punished, boys will imitate the behavior. Typically, American girls need to be directly rewarded to imitate filmed aggression. Of course, both boys and girls watch long hours of TV (the worst are cartoons) in which they see boys far more likely to initiate aggression and be rewarded for it than girls are. Public TV shows for children generally show very little physical aggression and currently provide one choice for parents or guardians who want children exposed to more prosocial behavior.

Many children's movies and TV shows repeatedly show children as far smarter than adults, who are often portrayed as stupid and oblivious. Bumbling behavior and ignorance are ascribed to most adults in authority over children, e.g., parents, teachers, principals, and camp counselors. Children are the heroes who solve the problems that adults cannot. Walt Disney features are often the worst offenders. In fact, in the Disney movie cartoon 101 Dalmatians, not only are the dogs smarter than adult humans, the puppies are the smartest of all. And then teachers and parents, who tout Disney movies as so terrific because they don't have sex or violence, wonder why kids disrespect authority...

One famous TV series, The Power Rangers, which was once wildly popular with preteens, sold millions of dollars of merchandise annually: doll figures, posters, books, and so on. The Power Rangers are "mutant" high school students who "metamorph" into strange alien creatures who fight inter-galactic battles. It is clear that the producers have taken at least some behavioral science strictures to heart because the Rangers are a mix of male and female, White, Black, Hispanic, and Asian. Unfortunately, all the Power Rangers are also physically gorgeous, so fit that some are gymnasts. Their enemies are fat, badly groomed, poorly dressed, and have pimples. What do you think is being learned and modeled here?

More recently, attention has turned to the modeling of prosocial behavior. Individuals often "follow the lead" of those who act in an altruistic manner as well.

On a more positive note, my students and I found that elementary school girls drawing teachers, veterinarians and "scientists" in a school cafeteria more often drew black, brown or yellow figures if adults of color were seated nearby (there were no comparable results for boys).


It may seem straightforward to you: behavior that is rewarded is repeated under similar circumstances while behavior that is punished is extinguished. At least that's how the early behaviorists thought, but we now know the reality is more complex.


Skinner felt punishment was counter-productive for several reasons:

punishment may itself become a reward, as when children receive attention for misbehavior

punishment doesn't get rid of environmental temptations

punishment is highly situation-specific so you might get Johnny to stay in his seat in math but his behavior doesn't improve elsewhere

the punisher becomes an aversive stimulus (as in the dads who are seen as more distant and less close than moms)

punishment requires surveillance, turning the situation into one of forced compliance. Hence, it is externally rather than internally controlled. A compliance situation for a group or an organization is very costly in terms of resources and personnel.

Skinner suggested simply ignoring unwanted behavior.

As we already know, however, this doesn't always work (remember those boys who imitated filmed aggression if nothing happened to the model?) Or what if your toddler runs into the street? It's tough to reason with a two year old and "time out" may not be dramatic enough for your kid to understand and change his or her behavior.

In his experiments with dogs, Solomon found that severe punishment can be effective. In fact, he was unable to extinguish escape behavior in dogs who were conditioned to escape a severe shock. Well, you don't want to be abusive in any regard, but more recent research indicates that punishment can be effective. The watchwords are:

swift,

sure, and

[relatively] severe

directly tie the punishment to the transgression

Swift, because your kid and your dog aren't going to remember why you put them in a corner hours after the event; sure so they don't learn you simply threaten; and severe enough to be remembered.

You can see why the old "wait until your father gets home" tactic doesn't work well, besides the fact that it makes Dad an aversive stimulus. Punishment, if it occurs at all, happens long after the transgression, and Dad may even say, "I don't think he needs to be spanked about that" (so the child isn't punished at all on that occasion). When fathers do engage in physical punishment, they often inflict more serious physical harm than mothers.

In surveys of the adult general public, over 96 percent of American parents say they spank their kids. This mostly means small kids (10 or under) and a quick potch to the rear end. Right or wrong, most parents use at least some corporal punishment. Middle class American parents tend to punish "bad intentions" and use less physical punishment than working class parents do.

On an adult level, punishment has been studied in the form of "fear appeals." The initial studies in the 1950s tended to find that strong fear appeals were ineffective. Although people paid attention, the messages were usually so threatening that individuals failed to comprehend them and avoided the situation when possible. However, more recent research indicates that fear appeals can be extremely effective if you suggest specific remedies to alleviate the fearful situation. Brush your teeth and see your dentist, and gory teeth will not happen to you. Wash your hands and you will get sick less often.


To my shock, I read a recent (10-21-2016) article in the Los Angeles Times. The ostensible motivation for the story was the U.S. federal government's Consumer Expenditure Survey "rewarding" respondents with five dollars for completing often lengthy surveys. Among other things, survey responses are used to calculate inflation rates. One expert pointed out that the "norm of reciprocity" (help those who helped you) might increase response rates. However, a second academic, a marketing professor at Northwestern University, disagreed and said:

...the opposite may be true.

“Lots of academic research has identified situations where giving people small amounts of money for completing the task can actually reduce their likelihood of complying because it undermines their intrinsic motivation,” she said.

“When I see myself as doing something for a small amount of money, I infer that I’m not really interested in doing the task for its own sake, and this can ultimately decrease the odds that I’m willing to do it at all.”

OUCH! OUCH! OUCH! What are they teaching at colleges of business these days? Wrong!!

Remember cognitive dissonance? Smaller rewards cause MORE internal change, not less.
The professor is actually incorrectly citing a stream of research often called "oversufficient justification." Read on for more information.

OPTIONAL: You can access this "Gemini Project" L.A. Times article here: http://touch.latimes.com/#section/-1/article/p2p-91721484/

In most cases, research supports the idea that reward directly influences repeating the behavior. It is true that SOMETIMES the greater the reward, the greater the commitment to the new behavior.

But we have to be careful. Rewards can be idiosyncratic and specific to a particular individual. Sometimes praise actually does work better than a raise (although it's certainly nice to get both).

In particular, we should examine issues in behavior that is intrinsically, or internally, motivated as opposed to that which is external or extrinsic. We perform internally motivated behavior because we find the behavior itself is satisfying.

There is research indicating that expected large rewards for initially internally motivated behavior, delivered regardless of performance, can be counterproductive. Oversufficient justification theories describe such instances, in which reward can lower internal motivation and subsequent performance (remember some of my earlier comments about cognitive dissonance studies and the D. Bem response to them). When reward is proportional to performance or unexpected (that surprise bonus), oversufficient justification effects do not typically appear.

The oversufficient justification literature dovetails with two other independent streams of research: (1) cognitive dissonance effects, in which more attitude change can occur with smaller rewards, and (2) equity effects (which we'll examine later), in which proportionate rewards are typically the most satisfying (rewards that are too low make people angry, and rewards that are too high can make us feel guilty). The key in all three appears to be cognitive consistency or proportionality.

Thus, even reward isn't as simple as it appears.

And don't believe that Los Angeles Times article either.


Parents and siblings influence an infant who is helpless, dependent, and has few significant others. No wonder the effects of training may depend on developmental level and parents have such a strong influence on young children.

As we grow, we attend school and form peer groups. While parent and teacher interactions are important, they are hierarchical relationships; peer relationships are voluntary and equal in status. Thus, by adolescence peer groups often challenge parents in importance. Nevertheless, surveys of college freshmen for over 30 years indicate that on average parents remain the most important influence in a teen's life.

Most societies have social age grading and normative life stages, i.e., we are age-segregated and have specific goals and challenges that are defined as age-appropriate. Age can interact with socially defined sex roles to define expected life events. Social and historical factors influence how gender and age are perceived. For example, "old old age" is a twentieth century phenomenon, and currently most married mothers of infants hold jobs. Current thinking on socialization, you recall, is that it is a lifetime phenomenon. Remember adult education and instructional design? Many people pack several careers--and even more than one successive family--into one lifetime. Some people take early retirement to train for a totally new career, a process facilitated by labor benefits won in industrialized countries during the twentieth century.

Although both symbolic interactionists, with the concept of the self, and various learning theorists seem well poised to explain adult socialization, we need more research on maturational processes in adult development. To name just a few areas:

How experiences during early adulthood, such as marriage, having children, or divorce contribute to adult development

A lot of socialization occurs formally and informally on the job or prior to it (e.g., in college)

Many experiences present opportunities for informal learning. For example, studies of individuals active in their religious congregations indicate that they learn skills that are later put to use in civic and political engagement. Skill transfer is very important among adults.

Many physical changes in middle and late adulthood undoubtedly influence the self-concept. From losing one's hair to acquiring stretch marks, to hormonal changes, how we feel and our sense of efficacy are probably affected too. And recent research consistently suggests that self-discipline and mental activity can ward off symptoms of Alzheimer's--even when the elderly adult has brain lesions similar to those of people suffering from Alzheimer's symptoms. Widespread and habitual use of the Internet and Internet searches apparently facilitates brain activity in middle-aged and older individuals.

Unfortunately, recent research also indicates that most of the recent "train your brain" books, [often expensive] courses, and gadgets may increase your skills on tasks very similar to the course or gadget, but these gains do not generalize to other types of tasks. In other words, better to save your money!


Many psychologists suggest that the overwhelming proportion of our daily behavior is habitual, following established routines and scripts of everyday life. As the attacks on the World Trade Center showed, responses at the top of our habit hierarchies may even save our lives. That's why there are recommendations to practice safety evacuation procedures at home or at work in the case of natural or manmade disasters.

Does it have to stay that way? We gain insight from studies of "self-regulated learners," who set goals and subgoals, and proceed to meet them systematically, re-evaluating and revising along the way. Self-regulated learners apparently echo that classic Asian proverb: "A journey of a thousand miles begins with a single step." Why not examine your dreams? Or create new ones? They don't have to be dramatic changes; perhaps you want to stop smoking or exercise more. Or train for a new job. Put your mind to it and you can "beat the habit hierarchy" at its own game.


Lev Vygotsky, a talented young Russian Soviet psychologist, only lived into his 30s. It's impossible to know what his further contributions would have been, although I suspect they would have been considerable. Like Piaget, Vygotsky was an insightful observer of human behavior. He was also clearly influenced by the Chicago school of "social behaviorists." Vygotsky's major contributions (my view!) were in learning and in how socialization proceeds. His view of learning was truly "social cognition" and differed considerably from the behaviorist views of the time.

Vygotsky believed that learning progresses in stages, with one's zone of proximal development being a bit more advanced than the learner's current skill level, thereby forcing him or her to "stretch" to acquire new skills. New skills were acquired with the assistance of coaching and scaffolding, i.e., starting at the learner's level of knowledge, even if this level contained some inaccuracies, and proceeding from there.

Vygotsky's concept of private speech provides an invaluable mechanism to inform us how children inculcate and internalize their particular culture's dominant values and mores. He noticed that around age 3 children seem to talk to themselves quite a bit, a phenomenon that typically continues for the next two years. Upon listening closely, Vygotsky realized that children often "gave themselves instructions," using the words and sometimes even imitating the voice of a parent or guardian. For example, a child might provide directions on putting herself to bed: "First I've got to brush my teeth and wash my face. Then I'll comb my hair. Then I will put on my 'jammies' and climb into bed. Sweet dreams!"

Private speech typically peaks around age 5 and gradually fades away externally. However, it continues internally (becoming "private"), giving us pep talks before an exam or castigating ourselves for some undesired behavior (oh, why did I eat the whole thing?). In this way, we gain insight into what George Herbert Mead called "the generalized other" or what Freud called "the superego."

Susan Carol Losh October 16 2017.