Do computers "try"?

7 messages
Do computers "try"?

Nick Thompson
All,

I have talked to a couple of you about the ontological question of when a simulation is the thing it simulates. For instance, when does a system cease to simulate motivation and actually become motivated? I am suspicious about the extension of intentional language to non-animate systems, not because I am a vitalistic crypto-creationist, but because my intuition tells me that inanimate systems do not usually take the sorts of actions that are required for the use of mentalistic predicates like "motivated". But talking to you folks is making me uneasy. If you are curious how I come by my quandary, please have a look at the article "On the Use of Mental Terms in Behavioral Ecology and Sociobiology", which appears at

 http://home.earthlink.net/~nickthompson/ 

The closest I have ever come to conceding this sort of view is in a BBS commentary entitled, "Why would we ever doubt that species were intelligent?", which I will post later in the day.  I guess I am going to have to argue that the definitional strictures for applying intelligence are less stringent than those for motivation.  

This could get ugly.

Thanks everybody,

Nick


Nicholas S. Thompson
Professor of Psychology and Ethology
Clark University
[hidden email]
http://home.earthlink.net/~nickthompson/
[hidden email]

Do computers "try"?

Robert Holmes
Y'know, this kind of reminds me of that Jorge Luis Borges story about the
'ideal' map that ends up being as big as the thing it is mapping (and still
isn't as good as the real thing). In the same way that 'good' cartography is
all about deciding what not to represent, 'good' simulation is all about
deciding what not to simulate. And if a simulation is always less than the
thing it simulates, that suggests it can't ever be the thing it simulates.
 
Robert (or a reasonable simulacrum thereof)
 



Do computers "try"?

Roger Critchlow-2
In reply to this post by Nick Thompson
Now, what this reminds me of is that animists ascribe intentionality to
everything in the world.  And back when I was learning chemistry in
college, we used to ascribe intentionality to electrons when explaining
organic reaction mechanisms, and to all the thermodynamic state
variables involved when working out applications of Le Chatelier's Principle.

So I would wonder whether this is a purely philosophical question, or
just a way we've always found convenient when talking about how things
happen, or some unreflective mixture of the two.

-- rec --



Do computers "try"?

Stephen Guerin
In reply to this post by Nick Thompson
Nick writes:
> I am suspicious about the extension of intentional language to
> non-animate systems, not because I am a vitalistic crypto-creationist,
> but because my intuition tells me that inanimate systems do not usually
> take the sorts of actions that are required for the use of mentalistic
> predicates like "motivated".

Discussions of intention can be mapped to Aristotelian final cause. A few
writers out there argue (and I think I agree) that self-organizing systems
arise *for the purpose* of dissipating a gradient and in that sense have a
final cause. There was a small discussion thread last March on FRIAM about
whether this idea is legitimate or not. I don't think it was adequately
resolved. You can review the thread at:
http://www.redfish.com/pipermail/friam_redfish.com/2003-March/000180.html

Since then, Stanley Salthe has published a paper that's worth at least
skimming, particularly the final causality section:
http://www.mdpi.org/entropy/papers/e6030327.pdf

I think your question about when we can use intentional language in
describing a physical or computational system is related to the question of
when a physical system might become autonomous or "acting on its own
behalf".

Stu has a lot to say about this with respect to his theory of Autonomous
Agents. See Investigations, or the webcast at Rice (bottom of page at
http://www.friam.org).

____________________________________________________
http://www.redfish.com    [hidden email]
624 Agua Fria Street      office: (505)995-0206
Santa Fe, NM 87501        mobile: (505)577-5828



Do computers "try"?

Roger Critchlow-2
In reply to this post by Roger Critchlow-2
Roger E Critchlow Jr wrote:

> Now, what this reminds me of is that animists ascribe intentionality to
> everything in the world.  And back when I was learning chemistry in
> college, we used to ascribe intentionality to electrons when explaining
> organic reaction mechanisms, and to all the thermodynamic state
> variables involved when working out applications of Le Chatelier's
> Principle.
>
> So I would wonder whether this is a purely philosophical question, or
> just a way we've always found convenient when talking about how things
> happen, or some unreflective mixture of the two.

Not only do we do it all the time, but we're hard-wired for it. There's
a neurological lesion that turns it off. From the PNAS of six months
ago today: "Impaired spontaneous anthropomorphizing despite intact
perception and social knowledge."

        http://www.pnas.org/cgi/content/full/101/19/7487

-- rec --


Do computers "try"?

Stephen Guerin
REC writes:
> Not only do we do it all the time, but we're hard wired for it.  There's
> a neurological lesion which turns it off.  From the PNAS of six months
> ago today:  Impaired spontaneous anthropomorphizing despite intact
> perception and social knowledge.
> http://www.pnas.org/cgi/content/full/101/19/7487

Ah, yeah, I remember the initial anthropomorphizing experiments back in my
school daze. A quick google got me a version of the Heider and Simmel video.
Pretty interesting:

        http://www.cs.brown.edu/people/black/Movies/heider.mov

-Steve

____________________________________________________
http://www.redfish.com    [hidden email]
624 Agua Fria Street      office: (505)995-0206
Santa Fe, NM 87501        mobile: (505)577-5828



Do computers "try"?

Gus Koehler
Anthropologists have looked at the issue too. For a very interesting
discussion of "artificial life" vs. biological life, check out Stefan
Helmreich, Silicon Second Nature, 1998.

Gus

Gus Koehler, Ph.D.
Principal
Time Structures
1545 University Ave.
Sacramento, CA 95826
916-564-8683
Fax: 916-564-7895
www.timestructures.com



============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9AM @ Jane's Cafe
Lecture schedule, archives, unsubscribe, etc.:
http://www.friam.org