Posted by
Pamela McCorduck on
Jul 21, 2006; 7:49pm
URL: http://friam.383.s1.nabble.com/computer-models-of-the-mind-tp522183p522192.html
It's hard for me to imagine what's meant by the phrase "a real
thinking machine." Human level and human versatility? We can get
those the old-fashioned way.
What we already have are programs that think better (deeper, faster,
more imaginatively--whatever that means) in certain narrow domains.
One of those is the far from negligible domain of molecular biology.
Such programs cannot get themselves to the airport, or enjoy
strawberries, but they really don't need to, do they? Contemporary
molecular biology would be unthinkable (ahem) without such programs.
Likewise, chess is now something machines do better than humans, and
Kasparov, at least, says he is learning a great deal from how
programs play chess.
Some confusion has arisen because, historically, the field of
artificial intelligence both tried to model human thought and tried
to solve certain problems by hook or by crook (without reference to
how humans do them). These were two distinct efforts. Cognitive
psychologists were grateful to have in the computer a laboratory
instrument that would allow them to move beyond rats running mazes
(yes, folks, this is where cognitive psychology was in the 1950s).
People interested in solving problems that humans are inept at
solving were glad to have a machine that could process symbols.
I'm just now reading Eric Kandel's graceful memoir, "In Search of
Memory." Kandel, a Nobel laureate and biologist, has devoted his
life to understanding human memory, which he believes is one of the
great puzzles whose solution would lead directly to understanding
human thought. He hasn't the least doubt that these seemingly
intractable problems will someday be cracked. I don't either. And
we won't go crazy doing it.
Pamela McCorduck
On Jul 21, 2006, at 11:23 AM, James Steiner wrote:
> I suspect that we won't ever get a real thinking machine by
> deliberately trying to model thought. I suspect that the approach that
> will ultimately work is one of two: One: a "sufficiently complex"
> evolutionary simulation system, or rather set of competing systems,
> will create a conscious-seeming intelligence all by itself (though that
> intelligence will be non-human, and not modeled after human thought,
> and we might not understand each other well--how do you instill an AI
> with human concepts of morality?) or two, someone will create a
> super-complex physics simulation that can take hyper-detailed 3D brain
> CAT/PET/etc scan data as input then simply simulate the goings on at
> the atomic level, the "mind" being an emergent property of the
> "matter." Of course, the mind will probably instantly go insane, even
> if provided with sufficient quantity and types of virtual senses and
> body.
>
> And we *still* won't know how the mind happens.
>
> ;)
> ~~James
>
> ============================================================
> FRIAM Applied Complexity Group listserv
> Meets Fridays 9a-11:30 at cafe at St. John's College
> lectures, archives, unsubscribe, maps at
> http://www.friam.org
"The amount of money one needs is terrifying ..."
-Ludwig van Beethoven