Re: Friam Digest, Vol 61, Issue 21


Re: Friam Digest, Vol 61, Issue 21

Nick Thompson
Joachim,

Thanks for what I reproduce below.  This is the sort of thing I mean to
preserve in the Noodlers' Corner when I get some time. Just too damn good
to be allowed to be trampled down in the midden of FRIAM posts.

You could, of course, do so yourself at
www.sfcomplex.org/wiki/ComplexityNoodlersCorner which can also be
approached via
www.sfcomplex.org/wiki/NoodlersIndex.  I would recommend that you start a
new page ..., say http://www.sfcomplex.org/wiki/TheSelf... If you just type
that into a browser window, the page will be created and then you can copy
this stuff into a window and we're off.  

I am going to add a few comments below, IN CAPS, to distinguish them from
your fine words.  Owen says my caps will be interpreted as shouting.  Think
of them not as shouted but as written in my Amurrican Accent.

Nicholas S. Thompson
Emeritus Professor of Psychology and Ethology,
Clark University ([hidden email])





 
> If you were to go about programming a computer
> to think about itself, how would you do it?
 
Even if we program a computer to think about
itself, the computer would be extremely bored,
because it is about as intelligent as a cash register
or a washing machine. It just follows commands,
only extremely fast.

SORRY ABOUT "THINK ABOUT";  VERY IMPRECISE.  


 
You can program a computer to behave like a
complex adaptive system which acts, reacts and
learns. Such a system or agent is able to act
flexibly, adapting itself to the environment,
choosing the right action. It has a kind of
"free will", because it can choose the action
it likes. Here it makes more sense to develop
software that thinks about itself, but if the
system can only recognize a few categories, a
sense of itself is no more than a faint emotion.
To reach human intelligence, you need a vast
number of computers, because the brain is
obviously a huge distributed system. Then
the interesting question is: can the system
be aware of itself?

YES.  THE VERY QUESTION.  THANKS FOR THE MORE PRECISE STATEMENT.
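
One possible minimal sketch of the kind of adaptive agent described above,
assuming a toy two-action world and a simple learned preference per action
(the names, rewards and numbers here are illustrative only):

import random

class AdaptiveAgent:
    def __init__(self, actions, learning_rate=0.1, exploration=0.1):
        # Learned preference ("how much the agent likes") each action.
        self.values = {a: 0.0 for a in actions}
        self.learning_rate = learning_rate
        self.exploration = exploration

    def choose(self):
        # Mostly pick the currently preferred action, occasionally explore.
        if random.random() < self.exploration:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def learn(self, action, reward):
        # Nudge the preference for the chosen action toward the reward received.
        self.values[action] += self.learning_rate * (reward - self.values[action])

# Toy environment: action "b" pays off more often than "a".
agent = AdaptiveAgent(["a", "b"])
for _ in range(1000):
    act = agent.choose()
    reward = 1.0 if (act == "b" and random.random() < 0.8) else 0.0
    agent.learn(act, reward)
print(agent.values)   # the preference for "b" should come to dominate

The agent acts, reacts and learns in the sense above, but of course nothing
in it amounts to a sense of itself; that is exactly the gap the rest of the
post is about.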
 
It sounds paradoxical, but if we want to enable
a system of computers to think about itself,
we must prevent any detailed self-knowledge.
If we could perceive how our minds work on
the microscopic level of neurons, we would
notice that there is no central organizer or
controller. The illusion of the self would
probably break down if a brain were
conscious of the distributed nature of its
own processing.

EXACTLY.  THANK YOU.

In this sense, self-
consciousness is only possible because the
true nature of the self remains hidden
from us.

SO SELF-CONSCIOUSNESS IS LIKE PERCEPTION OF ANYTHING: A CONSTRUCTION
BASED ON CUES AND CONTROLLED BY ONE OR MORE A PRIORI THEORIES CONCERNING THE
NATURE OF THE WORLD.
 
The complex adaptive system in question is
aware of what it is doing only indirectly, through
and with the help of the external world. To be
more precise, the system can only watch its own
activity on a certain level: on the macroscopic
level it can recognize macroscopic things, and
on the microscopic level it can recognize other
microscopic things - a neuron can recognize and
react to other neurons - but there is no
"level-crossing" awareness of its own activity.
 
So you have to build a giant system which
consists of a huge number of computers, and
only if it doesn't have the slightest
idea how it works can it develop a form
of self-consciousness. And only with a vast
number of items - neurons, computers
or servers - is the system complex enough to
get the impression that a single item is in
charge.
 
Quite paradoxical, isn't it? But there is something
else we need: the idea of the self must have
a base, a single item to identify oneself with.

I HAVE ALWAYS FELT THAT THE NOTION OF SELF IS AT ROOT A LEGAL NOTION HAVING
NOTHING TO DO WITH THE WAY BEHAVIOR WORKS.  OTHERS DIRECT TOWARD US THE
NOTION THAT WE ARE A SELF FOR THEIR OWN SOCIAL AND LEGAL CONVENIENCE, AND
SO OUR BEHAVIOR BECOMES SHAPED.  BUT BEYOND THAT, THE SELF HAS NO REALITY.
AND PARTICULARLY THE "INNER" SELF HAS NO REALITY.
 
Thus we need two worlds: one "mental" world
where the thinking - the complex information
processing - takes place, and where the system
is a large distributed network of nodes, and one
"physical" world where a single "self" walks around
and where the system appears to be a single,
individual item: a person. This "physical" world
could also be any virtual world which is complex
enough to support AI. Each of these worlds could
be realized by a number of advanced data centers.

RIGHT.
 
There are a number of conditions for both worlds:
the hidden, "mental" world must be grounded in the
visible, "physical" world, it must be complex enough
to mirror it, and it must be fast enough to react
instantly. Grounded means we need a "1:infinite"
connection between the two worlds. The collective action
of the "hidden" system must result in a single
action of an item in the "visible" system,
and a single event in the "visible" system must
in turn trigger a collective activity of the
"hidden" system during perception. Every perception
and action of the system must pass through a single
point in the visible, physical world. If both
worlds are complex enough, then this is the
point where true self-consciousness can emerge.

CAN YOU EXPLAIN THE PREVIOUS PARAGRAPH?  ARE YOU BEHIND IT, OR IS IT A
REDUCTIO AD ABSURDUM?
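
One possible reading of that "1:infinite" paragraph, as a toy sketch rather
than anything Jochen actually specified: a hidden population of many nodes
and a single visible body, where every sensed event fans out to all nodes
and every action collapses their collective vote into one move. All class
names and numbers below are made up for illustration.

import random

class HiddenNode:
    """One of many nodes in the hidden, 'mental' world."""
    def __init__(self):
        self.bias = random.uniform(-1.0, 1.0)

    def perceive(self, event):
        self.bias += 0.01 * event          # every node is touched by the one event

    def vote(self):
        return 1 if self.bias > 0 else -1

class VisibleBody:
    """The single item in the visible, 'physical' world."""
    def __init__(self, nodes):
        self.nodes = nodes

    def sense(self, event):
        # 1 -> infinite: one perceived event triggers collective activity.
        for node in self.nodes:
            node.perceive(event)

    def act(self):
        # infinite -> 1: the collective vote collapses into a single action.
        return 1 if sum(node.vote() for node in self.nodes) > 0 else -1

body = VisibleBody([HiddenNode() for _ in range(10000)])
body.sense(+1.0)     # a single event in the "physical" world
print(body.act())    # a single action, produced by the whole hidden network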
 
To summarize, in order to build a computer
system which is able to think about itself,
we need to separate the "thinking" from the
"self":
 
(a) a prevention of detailed self-knowledge,
which enables self-awareness

(b) a "1:infinite" connection between two
very complex worlds which coincide
with each other
 
When we think, certain patterns are brought
into existence. Since a brain contains more
than 100 billion neurons, each pattern is a
vast collection of nearly invisible little
things or processes. When we think of ourselves,
a pattern is brought into existence, too. It
is the identification of a vast collection
of nearly invisible little items with a
single thing: yourself.

PART OF WHY THIS ISN'T MAKING SENSE TO ME IS THAT I NEED YOU TO UNPACK THE
METAPHOR IN THE WORD "VISIBLE".  YOU AREN'T REFERRING TO VISION.  SO WHAT ARE
YOU REFERRING TO?
 
Except for the abstract idea, there is no immaterial
"self" hovering over a hundred billion flickering
neurons. The idea of a self or soul as the originator
of one's own thoughts is an illusion - but you may
ask, "if the self is unreal, then who is reading
this?" So maybe it is more precise to say that
the self is a confusing insight or an insightful
confusion. The essence of self-consciousness
seems to be this strange combination of insight
and confusion.
 
Self-consciousness is both: the strange, short-lived
feeling associated with intricate patterns of
feedback loops which arise when inconsistent items
are related to each other: everything is related to
nothing, real to unreal, inside to outside, material
to immaterial, important to unimportant, etc.
And it is the surprising insight associated with
the continuous identification of the self in the
ever-changing environment.
 
-J.
 
 
>
> End of Friam Digest, Vol 61, Issue 21
> *************************************




Re: Friam Digest, Vol 61, Issue 21

Jochen Fromm-4
My name is Jochen, not Joachim. A Wiki is always a nice idea.
Maybe we can copy some of the contents of the Wiki we started
in the research group where I was for two years, before the content
gets lost. Nobody seems to use it anymore; it can still be found at
http://www.vs.uni-kassel.de/systems/index.php/Main_Page
I have written most of the interesting pages, for example the following ones
http://www.vs.uni-kassel.de/systems/index.php/Basic_System_Theory

Maybe I will find some time at the weekend to do it.
And yes, maybe we can set up a page about the "self" and
about self-consciousness, why not? It is a good idea
to collect the best of the FRIAM posts, but is the Noodlers'
Corner the right place to do it? Sounds more like a place
for elusive thoughts about flying spaghetti monsters.

Kind regards,

Jochen

----- Original Message -----
From: "Nicholas Thompson" <[hidden email]>
To: <[hidden email]>
Sent: Monday, July 21, 2008 3:20 AM
Subject: Re: [FRIAM] Friam Digest, Vol 61, Issue 21



============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org