Russ, No sore point. Just stuff I am somewhat prepared to
talk about, as I prepare the final exam for my advanced behaviorism class ;-)
You say "Again, the circular answer that something is a reinforcer
for some entity type if it works as a reinforcer for that entity type is
not acceptable." Again, there are many attempts to talk
about what "exactly" a reinforcer is. There is also talk about "why" it is.
Step one, I assert, is to distinguish the two types of talk. Also, I am mostly
representing Skinner's position, which is currently the norm, but there are many
behaviorists who tried to define reinforcement in other ways. -- I think you
come to the right point about the computer and the pigeon, and to a good set of
questions at the end. I suppose, unless we are going to really start doing
class here, that the only succinct answer is that in 100 years behaviorists
have found: You get at those answers faster and more efficiently if you don't
muddle them with mind-talk. On the negative side: It is NOT true that
all reinforcers "feel good". It is NOT true that all reinforcers are things
people would tell you they "want". It is NOT true that people are very good at
being aware of what is reinforcing their behavior (actually, Freud taught us
that; the behaviorists just ran further with it than he was willing to). It is
NOT true that all reinforcers activate specific parts of the brain (although
some areas of the brain are active under exposure to several varieties of
reinforcers). On the positive side: It IS true that we know a lot about
how past experience results in some stimuli being reinforcers in the future,
when they were not in the past. There is an awful lot of research showing how
you make a neutral stimulus into a reinforcer. The easiest way: repeated pairing
with something that is already a reinforcer. "But Wait!" you object, "now you
are begging the question." Not quite, but I'll admit that such answers only
hold for a certain class of reinforcers... "secondary reinforcers"... which
happens to be a special class containing the vast majority of reinforcers. As
for the others... "primary reinforcers"... It IS also true that we know a lot
about how evolutionary pressures produce species, the members of which
typically find certain things reinforcing. Here there are still complex stories
about how organisms develop from fertilized eggs into reinforceable actors, but
conditioning talk doesn't help us much. This is what behavioral geneticists and
developmental psycho-biologists study, and it gets into the full complexity of
developmental dynamic systems. Getting
better? Eric

On Tue, May 4, 2010 01:08 PM, Russ Abbott <[hidden email]> wrote:

Looks like I hit a sore point. Eric, I don't think you addressed my question. I have
no problem with your defining "reinforcer" any way you like. What I am asking
is what are the mechanisms that reinforcers exploit. Forget about
memory. Let's assume you have two entities, e.g., a computer and a
pigeon. My guess is that no matter how many times you reinforce a
particular computer behavior, that behavior won't increase in frequency.
Yet if you do the same thing for a pigeon, the frequency will increase.
So here we have some good empirical evidence that there is some difference
between a computer and a pigeon. The obvious question is what is it about these
two entities that leads to these different results? The answer that reinforcers
work on one but not on the other is not good enough. Having posed this
question, I can imagine you saying that in the experiment you haven't
reinforced the computer because if you had, the behavior frequency would have
increased -- by the definition of reinforcer. I guess that's fair, but it
leads to the question of what makes a reinforcer effective. Presumably there
are no computer reinforcers, but there are pigeon reinforcers. Why is that?
What is it about pigeons -- but not about computers -- that makes it possible to
craft pigeon reinforcers but not computer reinforcers? Furthermore, since not
everything I do to a pigeon works as a reinforcer, why do some actions serve as
reinforcers while others fail? Also, why do some (probably most) pigeon
reinforcers fail to work as reinforcers for earthworms and vice
versa? Again, the circular answer that something is a reinforcer for
some entity type if it works as a reinforcer for that entity type is not
acceptable. As you can see, there is a whole raft of basic empirical
questions that the definition of reinforcer brings up. It seems to me that any
good scientist would want to find answers to them. These seem to me to be
fundamental questions. What answers have behaviorists come up with in the century or
so that it's been around? -- Russ

On Tue, May 4, 2010 at 7:24 AM, ERIC P. CHARLES <epc2@...> wrote:
Damn, these are longer than I intend.
Sorry. The question "How can you be satisfied without explaining
WHY X is a
reinforcer?" Sloppy answer: Look, buddy. One thing at a time. Yes, in
some sense a scientist is never satisfied - as each answer leads
to new questions. On the other hand, to make that logic work, you need to
accept that some things ARE answers. So, only if you are satisfied with my
definition of what a reinforcer is, do we get to go on to ask why it is.
Wherever that quest takes us, it doesn't change the fact that we have a
perfectly good definition for a reinforcer, and that the definition is at a
behavioral level. So, for example, were we to find
that all reinforcers activate a certain part of the brain (WHICH THEY DON'T, but
we'll pretend), we would NOT redefine reinforcers as "things that activate
that part of the brain". We might let "activates that part of the brain" be
part of a particular kind of explanation for reinforcement (the mechanistic or
material cause) -- well, some might. Others would point out, as I have done
before, that when you ask "why is X a reinforcer?" you would need to explain,
among other things, why X activated that part of the brain. Hence, there are
lots of interesting brain questions, but brain answers don't really explain
psychological phenomena. Take the imprinted
gosling. We are tempted to
say that its following its mother is "instinctive", but really, if
anything is an instinct, it is that, for the gosling, close proximity to
the mother reinforces the gosling's behavior. How do we know? Well, you
do some experiments. If you build a box where the gosling has to press
(peck) a key to bring mom closer, then it does. In fact, you can even
build a box where the gosling has to walk AWAY from the mother for the
mother to come closer. The gosling learns to do so pretty quickly. We
can see that the mother approaching reinforces the gosling's behavior,
and all we have done is provided a description of the above events. That's it.
Done. Simple. Straightforward. Scientific.
All that said, the question of why X is a
reinforcer is best answered in one of two ways: by reference to an
evolutionary past or a developmental past. Typically, the best answers are
found in the past history of the
organism. After Pavlov has trained his dog to drool when it hears the bell, I
can use the bell to reinforce the dog. The bell IS a reinforcer, because it
increases the rate of behaviors which it follows. Why is the bell a reinforcer?
Because it
was paired with the food in the past of the animal. Again, that not only
explains the origins of the behavioral phenomenon, but all concurrent neural
phenomena as well. As for the need of memory to make that work, that's
just crazy talk. Descartes (and others before and since) told us that we need
to take memories of the past and apply them to current stimulation to make
sense of it. That is just plain wrong. What we need is a system modified over
time. You complexity people should be all about this! Nowhere in a neural
network model, for example, is to be found a memory (a remembrance) of the past
events which shaped the system. The system changes as a result of past events,
and now it does something differently than it did before. The neuronal
structure of the rat in the Skinner box has been changed as a result of past
events. At one point, it was a rat that did not discriminate with its lever
pressing behavior whether a light was on or off, and now it is a rat that does
discriminate with its lever pressing behavior whether a light is on or off.
Surely you are not suggesting that the rat need remember the past history of
reinforcement for that to be true?!? Eric P.S. Relative
to Roger's story about the Farmer...
yes, we often self-stimulate (cough, cough) - our behaviors sometimes chain,
with the end of one behavior being a part of the causal chain leading to the
next behavior. As such, the chain of "thoughts" "in the farmer's head" need not
be treated as any different than the chain of "actions" "of the mechanics body"
as they change a tire.
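To make the "system modified over time" point concrete, here is a minimal sketch (the class name, the action list, and the numbers are all invented for illustration, not taken from any actual model): the only persistent state is a propensity per action, which a reinforcing consequence nudges upward. Nothing resembling a remembrance of individual trials is stored, yet the reinforced behavior's frequency rises.

```python
import random

class ModifiedSystem:
    """A system changed by past events, with no record of those events."""
    def __init__(self, actions):
        # one propensity per action; this is the ONLY state that persists
        self.propensity = {a: 1.0 for a in actions}

    def act(self):
        # each action is emitted with probability proportional to its propensity
        total = sum(self.propensity.values())
        r = random.uniform(0, total)
        for a, w in self.propensity.items():
            r -= w
            if r <= 0:
                return a
        return a  # floating-point fallback

    def consequence(self, action, reinforced):
        # the event changes the system; the event itself is not stored anywhere
        if reinforced:
            self.propensity[action] *= 1.2

pigeon = ModifiedSystem(["peck_key", "turn_circle", "preen"])
for _ in range(200):
    behavior = pigeon.act()
    pigeon.consequence(behavior, reinforced=(behavior == "peck_key"))  # grain follows pecks

print(pigeon.propensity)  # pecking now dominates; no trial-by-trial history exists
```

Run it and the pecking propensity dwarfs the others, but ask the object for its training history and there is simply nothing there to report.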
Thanks for your answers Eric. I like your answer to Q1
that the
unit of observation is at the functional level -- where functional refers to an
act that changes the relationship between an entity and its environment.
Since almost every change can be divided into smaller changes, that
doesn't really solve the problem. The answer also depends on the ability to
characterize what an entity is. But for now, I'm satisfied.
With regard to the issue of reinforcement, I'm not so willing to go
along. You said,
X is a
reinforcer if behavior Y increases in future frequency when it
is followed by X.
The problem I have with it is that it doesn't tell me why X is
a reinforcer
of Y. It makes the being-a-reinforcer-of a primitive relation. As you
said,
In that sense, "reinforcer" is a description of the
effect,
not an explanation for the effect.
Do you really want to leave it at that? Science is definitely
happy to come
up with empirically establishable relationships. But it never stops there. It
always attempts to ask why that relationship holds. Are you really saying that
behaviorists refuse to ask that question?
For example, consider the
implications of the fact that the reinforcer occurs after the
thing being reinforced. How can that possibly be? It seems to imply that the
entity being reinforced has a mechanism that enables it to relate the
reinforcement to the action being reinforced. Otherwise how could the
reinforcement have any effect at all, since it follows the act being reinforced?
So right there one seems to be postulating some sort of internal mechanisms
that are both (a) able to remember (understood loosely and not necessarily
conceptually) the act that was done so that the subsequent reinforcement can be
related to it as well as (b) change the frequency or conditions under which
that act is performed. One should presumably then ask how those internal
mechanisms work.
-- Russ
On Mon, May 3, 2010 at
6:27 AM, ERIC P. CHARLES <epc2@...> wrote:
Russ, Good questions. These are indeed "obvious vulnerabilities" that
behaviorists are familiar with. Of course, if I just said what I thought, then
the answers would seem more solid, but I will try to give a flavor of the
broader reality of the field, not just my opinion. Your Q1: How big (or
small) is a behavior? -- This is a major historic difference between different
people's behaviorisms (a phrasing I assume is equivalent to saying that X is a
major historic difference between different people's versions of quantum
theory). Watson, for example, the sort-of founder of behaviorism, really wanted
to talk about things like the flexing or not flexing of individual muscles.
This was criticized, even by most behaviorists, as "muscle twitchism", though a
few still like it. The modern analysis, following from Skinner, uses the
"operant" as the level of analysis. An operant is, roughly, a set of movements
that do something, like "press a lever". The justification of this level of
analysis is largely that regularity seems to appear at that level. We can
predict and control the rate of lever pressing. In fact, as will appeal to the
complexity crowd, we can predict and control the rate of lever pressing even
when there is quite extensive variation in the underlying muscle movements! I
guess it is a bit of a pragmatist thing - you do science where you see that
science can be done. Your Q1b: What about conceptual stuff? -- It is
just another thing about behavior. How do you know when someone else
understands a concept? You get them to behave in certain ways. How do you know
when you understand a concept? You get yourself to behave in certain ways. We
can quibble about exactly what ways, but ultimately typing the word "right" is
no different than any other five-part behavior, and so your typing 'h' is,
presumably, not qualitatively different than a rat pressing a bar labeled "h"
if I have reinforced it in the past for pressing the pattern "r" "i" "g" "h"
"t". Pigeons can tell the difference between different cubists, between early
and late Picasso, between pictures with people in them and pictures without
people in them (famously, sometimes better than the experimenter who selected
the slides), etc. So, to the extent that 'cubist' vs. 'impressionist' is a
concept, behaviorists can explain how people get concepts perfectly
well. Your Q2: How do you define reinforcement? -- Again, there are
several methods. The biggest problem is that it is easy to slip into
circularity. Skinner's solution is the most popular today, and has certain
virtues over the alternative. Skinner wants to define reinforcers by their
effects. X is a reinforcer if behavior Y increases in future frequency when it
is followed by X. In that sense, "reinforcer" is a description of the effect,
not an explanation for the effect. Thus, in applied behavior analysis, it is
common to be stuck trying to fix a problem behavior you know nothing about the
origin of. In a "functional analysis" you would try removing consequences of
that behavior one at a time until you identified the factor(s) that reinforced
the behavior. Thus the reinforcer is identified empirically, rather than
theoretically. -- to appeal to physics again, Einstein would tell us that
gravity is not some separate thing that causes objects in a falling elevator to
converge; gravity is simply the observable fact that objects in a falling elevator
converge. Your Q2b: How do you distinguish Organism from Environment --
This question can get very deep very fast in ambiguous cases. Let's face it
though, most cases are not very ambiguous. When I am studying a rat in a
Skinner Box: The organism is that fleshy and bony thing that I pick up out of its
cage and walk over to the Skinner box. The Environment is the inside of a
Skinner box. Most cases we deal with on a practical basis are similarly well
defined. I don't need a verbal self-report by the rat to know that food
reinforces its lever presses. I similarly do not need at any point to look
inside the rat. The question of whether or not food reinforces lever presses
(under such and such conditions) is a simple and straightforward scientific
question about behavior. Keep 'em coming if you've got more, this is
fun, Eric
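A hedged illustration of the two ideas above, Skinner's effect-based definition and the "functional analysis" of removing consequences one at a time: the simulated subject below is rigged (my assumption, purely for the example) so that attention is the consequence actually doing the reinforcing, and the analysis recovers that fact from observed rate changes alone.

```python
import random

def observed_rate(active_consequences, trials=1000):
    """Simulate a subject whose problem behavior is (secretly) reinforced by attention."""
    propensity = 1.0
    count = 0
    for _ in range(trials):
        if random.random() < min(propensity / 10.0, 0.9):
            count += 1
            if "attention" in active_consequences:
                propensity *= 1.02  # behavior followed by attention grows more frequent
    return count / trials

candidates = {"attention", "candy", "escape_from_task"}
baseline = observed_rate(candidates)

# Functional analysis: withhold one candidate consequence at a time and watch the rate.
for removed in sorted(candidates):
    rate = observed_rate(candidates - {removed})
    verdict = "functions as the reinforcer" if rate < baseline * 0.5 else "is not doing the work"
    print(f"without {removed}: rate {rate:.2f} -> {removed} {verdict}")
```

The reinforcer is identified empirically, exactly as described: whichever withheld consequence collapses the behavior's rate is the one that was maintaining it.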
I have two
problems with it.
- Turning left may happen to be a low level aspect of some more significant
action. For example, I suspect it would be difficult to say why I am about to
move my right index finger to the left and then down. (It had to do with the
letter "h", which I was striking because it was in the word "right", which I
was typing because ... . Not only that, I knew that striking the letter "h"
would include it in the message I was constructing ... . How can you be
behaviorist about things that are that conceptual?)
- More important (or at least equally important), all those explanations seem
to depend on the notion of reward or reinforcement. How is reward/reinforcement
defined within a behaviorist framework? I can't immediately think of a way to
talk about what reward or reinforcement means without going "inside" the
subject.
Both of these would seem like obvious vulnerabilities in behaviorist
thinking. They must have been raised many times. Behaviorists must have
answers. It's hard for me to imagine what they are.
-- Russ
On Sun, May 2, 2010 at 7:08 PM,
ERIC P. CHARLES <epc2@...>
wrote:
Russ, The behaviorist's point, first and foremost, would involve
comparing the questions "Why did he choose to turn left?" with the
question "Why did he turn left?" They are the same question, the behaviorist
claims,
except perhaps, at best, that the first question does a little bit more to
specify a reference group of alternative circumstances
and behaviors that our explanation needs to distinguish between. For example,
we might read it as saying "Why did he turn left under circumstances in which
other people turn right?" As for the acceptable answer to such a
question... well... there are several varieties of behaviorism, and the
acceptable answers would vary. The stereotypical villain of Intro Psych stories
is a behaviorist who insists on explanations only in terms of immediate
stimuli. However, those are mostly mythical creatures. Most behaviorists have a
bias towards developmental explanations, and as members of this list will
readily recognize, behavioral development is complex. The most standard forms
of behaviorism make heavy use of drooling dogs and lever-pressing rats as their
primary metaphors. In this case we might prefer a maze-running rat. Upon his
first encounter with choice point C, which has vertical stripes, rat 152 turns
left. Why? Because, in the past, this rat has been reinforced (i.e., got to the
peanut butter faster) when it turned left (the critical behavior) at
intersections that have vertical stripes (a discriminative stimulus). The nature
of the contingencies and stimuli can be quite complex, but we have lots of data
about how those complexities reliably produce certain patterns of behavior.
Similarly, we might argue that the person turns left because in his past, under
circumstances such as these, he has been reinforced for turning left. What
circumstances? Well, I would have to elaborate the example a lot more: when
following directions which state "turn left at the next light" and at "an
intersection with a light". The individual's choice is thus explained by his
long-term membership in a verbal community that rewards people for doing
certain things (turning left) under certain conditions consisting of a
combination of discriminative stimuli (which are complex, but clearly possible
to define in sufficient detail for these purposes). This training started very
young, for my daughter it formally started at about 1 and a half with my saying
"look right, look left, look right again" at street corners. --
Admitted, the above explanation for the person's behavior is a bit hand-wavy
and post-hoc, but the explanation for the rat's behavior is neither.
Behaviorism, loosely speaking, consists of two parts: a philosophy of
behaviorism and an application of behaviorism (applied behavior analysis). We
could take our person, subject him to empirical study, and determine the
conditions under which he turns left. This would reveal the critical variables
making up a circumstance under
which such turns happen. The science also allows us to determine some aspects
of the past-history based on the subject's present responses. Further, just as
we built our rat, we could build a person who would turn left under such
circumstances, and for that person, all the conditions would be known and no
hand-waving or further investigation would be necessary. The fact that we
usually cannot perform these kinds of investigation is no excuse for
pretending we wouldn't get concrete results if we did. -- In the
absence of an observed past history, the behaviorist would rather speculate
about concrete past events than about imaginary happenings in a dualistic
other-realm. Personally, I think many behaviorists overdo the role of
conditioning. I have a bias for a more inclusive view of development, such as
that advanced by the epigeneticists of the 60's and 70's, the kind that leads
directly into modern dynamic systems work. Those behaviorists I most like
recognize the limitations of conditioning as an explanation, but argue that
conditioning is particularly important in the development of many behaviors
that humans particularly care about. Pairing the verbal command "left" with the
contingency of reinforcement for turning left, they argue, is just as arbitrary
as pairing the visual stimulus "red light" with the contingency of reinforcement
for pressing the left lever in the Skinner box. A reasonable position, but I've
never been completely sold. I will admit though that conditioning is important
in unexpected places - you cannot even explain why baby geese follow the object
they imprint on without operant (Skinner-box style) conditioning. How
was that? Eric P.S. Returning to Robert's query: It should be
obvious that the above explanation, if accepted as an explanation for the
behavior, is also an explanation for all concurrent neural happenings.
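For what it's worth, the maze story above reduces to a few lines of simulation. Everything numeric here is invented (a toy, not data): left turns are reinforced only at striped choice points, and the left-turn probability comes under control of that discriminative stimulus while staying at chance elsewhere.

```python
import random

# propensity of each (stimulus, turn) pair; starts unbiased
weights = {(s, t): 1.0 for s in ("stripes", "plain") for t in ("left", "right")}

def choose(stimulus):
    left, right = weights[(stimulus, "left")], weights[(stimulus, "right")]
    return "left" if random.random() < left / (left + right) else "right"

for trial in range(500):
    stimulus = random.choice(["stripes", "plain"])
    turn = choose(stimulus)
    # peanut butter (the reinforcer) follows left turns at striped choice points only
    if stimulus == "stripes" and turn == "left":
        weights[(stimulus, turn)] *= 1.05

for s in ("stripes", "plain"):
    p = weights[(s, "left")] / (weights[(s, "left")] + weights[(s, "right")])
    print(f"P(turn left | {s}) = {p:.2f}")  # near 1.0 for stripes, near 0.5 for plain
```

The "explanation" for the left turn at the striped choice point is then exactly the kind given above: a description of the past contingencies that shaped the system, with nothing stored "about" those past trials.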
Eric, Can
you provide an example of an acceptable behaviorist answer to your question
about why a person turned left instead of right. By example, I'm looking for
something more concrete than "the explanation for the choice
must reference conditions in our protagonist's past that built him into
the type
of person who would turn left under the current conditions." What might such an
explanation look like?
-- Russ
Abbott
______________________________________
Professor, Computer Science
California State University, Los Angeles
cell: 310-621-3805
blog: http://russabbott.blogspot.com/
vita: http://sites.google.com/site/russabbott/
______________________________________
On Sun, May 2, 2010 at 4:03 PM, ERIC P. CHARLES <epc2@...>
wrote:
Robert, You accuse Nick of talking about "the brain", when he was
talking about "the mind". The most basic tenant of behaviorism is that
all questions about the mind are ultimately a question about behavior. Thus,
while some behaviorists deny the existence of mental things, that is not a
necessary part of behaviorism. On the other hand, the behaviorist must deny
that the mind is made up of any special substance, and they must deny that the
mental things are somehow inside the person (hence the comparison with soul,
auras, etc.). If the behaviorist does not deny tout court that mental things
happen, what is he to do? One option is to claim that mental things are
behavioral things, analyzed at some higher level of analysis, just as
biological things are chemical things analyzed at some higher level of
analysis. So, to answer your question: There IS a brain, and the brain does all
sorts of things, but it does not do mental things. Mental things happen, but
they do not happen "in the brain". As Skinner would put it, the question is:
What DOES go on in the skull, and what is an intelligible way to talk about it?
The obvious answer is that the only things going on in the skull are
physiological. For example, if one asks why someone chose to go left
instead of right at a stop sign, one might get an answer in terms of the brain:
"He turned left because his frontal cortex activated in such and such a way."
However, that is no answer at all, because the firing of those neurons is a
component part of the turning left! Ultimately, the explanation for the choice
must reference conditions in our protagonist's past that built him into the type
of person who would turn left under the current conditions. In doing so, our
explanation will necessarily give the conditions that lead to a person whose
brain activates in such and such a way under the conditions in question.
Put another way: To say that he chose to turn left because a part of
his brain chose to turn left misses the point. It anthropomorphizes your innards
in a weird way, suggests homunculi, and introduces all sorts of other ugly
problems. Further, it takes the quite tractable problem of understanding the
origins of behavior and transforms it into the still intractable problem of
understanding the origins of organization in the nervous system. Neuroscience
is a great field of study, and it is thriving. Thus, people hold out hope that
one day we will know enough about nerve growth, etc., that the origin of
neuronal organization will become tractable. One day it will, but when that
day comes it will not tell us much about behavior that we didn't already know;
hence it won't tell us much about the mind we didn't already know. Or
at least, so sayeth some behaviorists, Eric

On Sun, May 2, 2010 05:09 PM, "Robert J. Cordingley" <robert@...> wrote:
Nick
Let me try this on(e)... it's because the brain is the physical
structure within which our thinking processes occur, and collectively
we call those processes the 'mind'. I don't see a way to say the same
thing or anything remotely parallel, about soul, aura, the Great
Unknown and such. Is there an argument to say that the brain, or the
thinking processes don't exist in the same way we can argue that the
others don't (or might not)?
Thanks
Robert
On 5/2/10 12:52 PM, Nicholas Thompson wrote:
</snipped>
How is banging on about mind any
different from banging on about soul, or aura, or the Great Unknown?
Nick
Eric Charles
Professional Student and Assistant Professor of Psychology
Penn State University, Altoona, PA 16601
============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org