Do computers "try"?


Frank Wimberly
Nick,

This reminds me of an ongoing argument I used to have with Hans Moravec. Starting in around 1980 we were both junior faculty members in the newly founded Robotics Institute at Carnegie Mellon University. He is now a very senior faculty member there and I am here.

Anyway, Hans and I used to argue about the limits of artificial intelligence. I said, among other things, that machines could not be conscious. Hans said that the Earth was going to be destroyed by an asteroid, the sun going supernova, or some other catastrophe. He said that the sooner we downloaded our minds into machines and launched them into outer space, the better. I said that you might be able to make a machine that behaved like me to everyone else's satisfaction, but that I would be gone. Etc., etc.

One of the places that this discussion led was to my quoting Searle or Dreyfus (I can't remember which) to the effect that if someone writes a computer program that simulates problem solving or artistic judgment or whatever, people say "It thinks!" but if someone writes a program that simulates a fire, no one calls the fire department. Hans said, "If the fire simulation were detailed enough, they might. If a simulation has enough detail, there's no difference between it and what it's simulating." That's the first time I had ever heard that point of view.

Frank


---
Frank C. Wimberly     140 Calle Ojo Feliz     Santa Fe, NM 87505
Phone: 505 995-8715 or 505 670-9918 (cell)
[hidden email] or [hidden email]

-----Original Message-----
From: [hidden email] [mailto:[hidden email]] On Behalf
Of Robert Holmes
Sent: Tuesday, November 09, 2004 4:23 PM
To: [hidden email]; 'The Friday Morning Applied Complexity
Coffee Group'
Subject: RE: [FRIAM] Do computers "try"?

Y'know, this kind of reminds me of that Jorge Luis Borges story about the 'ideal' map that ends up being as big as the thing it is mapping (and still isn't as good as the real thing). In the same way that 'good' cartography is all about deciding what not to represent, 'good' simulation is all about deciding what not to simulate. And if a simulation is always less than the thing it simulates, that suggests it can't ever be the thing it simulates.

Robert (or a reasonable simulacrum thereof)


From: Nicholas Thompson [mailto:[hidden email]]
Sent: Tuesday, November 09, 2004 1:32 PM
To: Friam
Subject: [FRIAM] Do computers "try"?

All,

I have talked to a couple of you about the ontological question of when a simulation is the thing it simulates. For instance, when does a system cease to simulate motivation and actually become motivated? I am suspicious about the extension of intentional language to non-animate systems, not because I am a vitalistic crypto-creationist, but because my intuition tells me that inanimate systems do not usually take the sorts of actions that are required for the use of mentalistic predicates like "motivated". But talking to you folks is making me uneasy. If you are curious how I come by my quandary, please have a look at the article "On the Use of Mental Terms in Behavioral Ecology and Sociobiology," which appears at

http://home.earthlink.net/~nickthompson/

The closest I have ever come to conceding this sort of view is in a BBS commentary entitled "Why would we ever doubt that species were intelligent?", which I will post later in the day. I guess I am going to have to argue that the definitional strictures for applying intelligence are less stringent than those for motivation.

This could get ugly.

Thanks everybody,

Nick

Nicholas S. Thompson
Professor of Psychology and Ethology
Clark University
[hidden email]
http://home.earthlink.net/~nickthompson/
[hidden email]