
Re: boundary permeability (was Behaviorism)

Posted by Eric Charles on May 05, 2010; 9:43pm
URL: http://friam.383.s1.nabble.com/Behaviorism-tp5003979p5011287.html

Ooooh, that is a much more specific question than it initially seemed!

I suppose there is a practical answer and a philosophical answer. The philosophical answer would set out some criterion that would be correct in some global sense. I fear that would get us back to ethical stuff, and keep things muddled. The practical answer is that such decisions are based on a) the availability of money and time, and b) the individual behavior analyst's interest in the problem. I can tell you that regardless of the level of plaque, the shocking backpack, if set correctly WILL reduce (though probably not completely eliminate) the rate of spitting and bedpan throwing. On the other hand, other behaviors are likely to arise that are equally annoying to you (hence Skinner's dislike of punishment). The other behaviors will arise because the contingencies controlling the spitting and bedpan throwing are still available to other behaviors.

In my lower-level behaviorism class (which focuses on application rather than theory), we end the semester with a list of techniques that can be used to reduce undesirable behaviors:

Punishment (aka 'positive punishment')
Penalty (aka 'negative punishment')
Punishment by prevention (of access to other contingencies)
Differential punishment of high rates
Extinction
Differential reinforcement of low rates
Differential reinforcement of incompatible behavior
Differential reinforcement of alternative behavior
The establishing operation

We could write an almost parallel list of methods for increasing the rates of desirable behaviors. Such techniques are routinely used with people, even with severe Alzheimer's, to positive effect. Of course, whether you think increasing the rate of coherent sentences from 30% to 60% is a miracle or just an okay job depends on your perspective. Probably the rate of your grandmother's offensive behaviors could have been cut in half with a pretty simple plan. Unfortunately, getting every single person who goes into her room to follow the 'pretty simple plan' can be quite difficult. For example, if we try extinction, we might need to let her spit on us without reacting, and good luck getting the night-shift worker to follow that plan at 3 in the morning.

Returning to the original question, I think that if I were an applied behavior analyst, I would keep working with the patient as long as there was money for me to keep working with the patient. At some point we will stop getting a return on the investment of our time, and at that point we might switch from trying to innovate new strategies to focusing on maintenance of the strategies that worked (i.e., getting people to stick with the plan even when I am not there). Maintenance will take fewer hours of my time than trying new things, but I will never completely leave the situation. How little improvement do I need to see before I switch from active investigation to maintenance? Well, it depends on the behavior and the customer. Honestly, I wouldn't care much about getting the rate of spitting from once a week to once a month. In the case of the bedpan, was it empty or full when it was thrown? If empty, then a small improvement matters less than if full. On the other hand, if you are rich and really would like to see your dad go from 60% coherent sentences to 62%, well, then I might keep at it (and you bet your life I'll keep good data, because I don't trust you to tell the difference between those percentages).
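As an aside on why "good data" matters here: distinguishing 60% from 62% coherent sentences by eye is hopeless, because the sample sizes required are enormous. A minimal sketch of the arithmetic, using a standard two-proportion z-test power calculation (the test choice and the alpha/power values are my illustrative assumptions, not anything from the thread):

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate observations per condition needed for a
    two-proportion z-test to detect the difference p1 vs p2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_b = NormalDist().inv_cdf(power)          # quantile for desired power
    p_bar = (p1 + p2) / 2                      # pooled proportion
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Detecting a 60% -> 62% change at alpha = 0.05 with 80% power:
print(n_per_group(0.60, 0.62))
```

The answer comes out in the thousands of observed sentences per condition, which is exactly why a careful behavioral record beats a relative's impression for differences this small.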

If I am a pure researcher... well, I guess I would need a more exact criterion. That is, I wouldn't ask for government money unless I expected the improvement to be at least X amount. Still though, the size of X would vary based on the population and the problem. To test strategies to reduce the rate of violent outbursts in a prison population, maybe only a small effect size would justify a massive study.

I still feel like I haven't fully answered your question, but I think that is a solid start.

Eric


On Wed, May 5, 2010 11:16 AM, "glen e. p. ropella" <[hidden email]> wrote:
Excellent!  Thanks, Eric.  But I still wonder why you (and Nick) have
inferred that I'm talking about ethics.  I'm not really interested in
ethics.  I'm interested in the differences between treatment that works
and treatment that fails.

That was my point about the JRC story.  Forget the accusations that the
JRC is being "cruel" or other mental hoo-ha.  Focus on the JRC's
response that some of their treatments have been shown to work, emphasis
on the word _some_.  It seems to me that there ought to be a percentage
threshold where the treatment is determined not to be effective enough
to continue using.  E.g. let's say skin shock treatment works in 60% of
cases.  Then perhaps that's above the threshold and skin shock treatment
should be tried, "cruel" or not.  But if it only works in 20% of cases,
then perhaps it shouldn't be used... or extra explicit consent has to be
acquired... or whatever.

THAT'S the interesting part of the story and that's why I'd be grateful
for a behaviorist response.  I don't care about the ethics of the
treatment.  I care about the efficacy of the treatment, which is why I
tried to use the word "mistreatment".

You approached this with your answer to my question about Alzheimer's.
But it didn't really target my question like I wanted it to.  Let me try
again.

My experience has been that AD patients will often lash out at family
members and caregivers for no obvious reason.  E.g. My grandma would
sometimes spit and scratch at her children when they were talking to
her.  Now, it's not clear to me that _any_ behaviorist technique will
change this behavior.  Perhaps it would, though.  We could mount a skin
shock backpack to someone like my grandma and shock her every time she
threw her bedpan or spit on someone to see if it would work.  I don't know.

If we decided to do some research on AD patients to find out, at what
point would we decide that some particular treatment worked?  And at
what point would we decide that it fails to work?

When does the behaviorist "give up" and hand the problem completely over
to the biologists who work on amyloid plaques?


ERIC P. CHARLES wrote circa 05/04/2010 10:07 PM:
> These are tough questions for many reasons. One is that a behaviorist's
> first instinct would be to wrestle with you over several of the terms.
> The most explicit ethical stance I have seen a behaviorist take as a
> behaviorist is Skinner's dislike of the use of punishment, which was at
> least partially justified by the evidence that reinforcement works
> better at shaping behavior. That's not much, but it's something. Ethics
> is a tough business, and I'm not sure there has been much progress in
> the last 3-4,000 years, let alone the last 100.
> 
> I will say that behaviorist methods have been shown to be effective at
> treating "thoughts" and "feelings". The behaviorist conceives of what
> they are doing in such cases in ways most will find unintuitive, but the
> techniques work irrespective (the whole philosophy vs. science
> distinction). Behaviorists CAN do things for pain management, in no
> small part because behavioral control is often important in pain
> control. Aside from that, nothing about behaviorism bars giving drugs,
> so it's not like they would say "I'm a behaviorist, I don't believe in
> morphine drips." (Of course, being a behaviorist leads one to think
> there are often better alternatives to drugs, but that is a different
> point.)
> 
> Overall though, I think that the distinction between mentalist and
> behaviorist does not place one with specific ethical obligations any
> more than a distinction between string-theorist and quantum-mechanist
> has ethical implications. Sure, there are people who write as if quantum
> mechanics has ethical implications (inherent uncertainty, blah, blah,
> blah), but I'm not convinced it does. I suspect that it just so happens
> that the same person is interested in both subjects.
> 
> --explanation (sort-of)--
> The question of what we think people are doing when they verbally
> self-report does not tell us what to do after getting the self reports,
> unless we throw in lots of other rules and assumptions. When we get all
> that other stuff figured out, we are likely to find that the first part
> isn't as important as it initially appeared.
> 
> For example, I like to point out to my class that the result of
> introspection is what it obviously is: When you attend to the things you
> say to yourself, <drum roll> you find out what types of things you say
> to yourself. So, the guy at the Thai restaurant asks, "How spicy do you
> want it?"  You think for a second and say "As high as you can go!" All I
> learn from that (at best) is that you are the type of person who tells
> yourself you want it as spicy as possible - I don't learn whether or not
> you are ACTUALLY the type of person who likes it as spicy as possible.
> If it is your first time at a Thai restaurant, you might well learn
> something new about yourself.
> 
> Transport to the Alzheimer's patient. You ask "Do you know where you
> are?" The patient thinks for a second and says "Yes." I assert that we
> learned nothing more than that he is the type of person who tells
> himself he knows where he is. In this case, I have evidence that others
> agree with me. The typical follow-up question is "Where are you?" Often
> it is answered incorrectly. We, as outside observers of the patient's
> behavior, declare that he does not know where he is, despite his
> insistence otherwise.
> 
> Again, I can think of ways to take that, add other stuff, and create
> ethical implicature... but on its own, I'm not sure it has much. If we
> decide, for example, that we have an obligation to care for people so
> damaged that they don't even know where they are... well, behaviorists
> and mentalists might argue over how to tell if people know where they
> are, but the eventual ethical course of action has already been laid out.


-- 
glen e. p. ropella, 971-222-9095, http://agent-based-modeling.com


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org


Eric Charles

Professional Student and
Assistant Professor of Psychology
Penn State University
Altoona, PA 16601


