
Re: REPOST: The meaning of "inner".

Posted by Jochen Fromm-4 on Jul 20, 2008; 1:04pm
URL: http://friam.383.s1.nabble.com/REPOST-The-meaning-of-inner-tp573374p573935.html

> If you were to go about programming a computer
> to think about itself, how would you do it?

Even if we program a computer to think about
itself, the computer would be extremely bored,
because it is about as intelligent as a cash
register or a washing machine. It just follows
commands, only extremely fast.

You can program a computer to behave like a
complex adaptive system which acts, reacts and
learns. Such a system or agent is able to act
flexibly, adapting itself to its environment
and choosing the right action. It has a kind of
"free will", because it can choose the action
it likes. Here it makes more sense to develop
software that thinks about itself, but if the
system can only recognize a few categories, its
sense of itself is no more than a faint emotion.
To reach human intelligence, you need a vast
number of computers, because the brain is
obviously a huge distributed system. Then
the interesting question is: can the system
be aware of itself?
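An agent of the kind described above - one that acts, observes the result, and adapts toward the actions it "likes" - can be sketched in a few lines of Python. This is only a toy illustration of such a learning loop, not anything from the original discussion; the action names, rewards, and parameters are all invented.

```python
import random

random.seed(0)  # make the toy run reproducible

class AdaptiveAgent:
    """A minimal learning agent: it acts, receives a reward,
    and shifts its future choices toward what worked before."""

    def __init__(self, actions, epsilon=0.1, alpha=0.2):
        self.values = {a: 0.0 for a in actions}  # estimated value per action
        self.epsilon = epsilon  # exploration rate
        self.alpha = alpha      # learning rate

    def act(self):
        # Mostly pick the action it "likes" (highest estimated value),
        # but occasionally explore something else.
        if random.random() < self.epsilon:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def learn(self, action, reward):
        # Nudge the value estimate toward the observed reward.
        self.values[action] += self.alpha * (reward - self.values[action])

# A hypothetical environment that quietly rewards "forage" over "rest".
agent = AdaptiveAgent(["forage", "rest"])
for _ in range(500):
    a = agent.act()
    reward = 1.0 if a == "forage" else 0.2
    agent.learn(a, reward)

# After adaptation the estimate for "forage" ends up well above "rest".
print(agent.values)
```

The point of the sketch is that nothing in the loop "thinks": the apparent preference emerges from blind repetition of act, react, learn.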

It sounds paradoxical, but if we want to enable
a system of computers to think about itself,
we must prevent any detailed self-knowledge.
If we could perceive how our minds work at
the microscopic level of neurons, we would
notice that there is no central organizer or
controller. The illusion of the self would
probably break down if a brain were conscious
of the distributed nature of its own
processing. In this sense, self-consciousness
is only possible because the true nature of
the self is hidden from us.

The complex adaptive system in question is
aware of what it is doing only indirectly,
through and with the help of the external
world. To be more precise, the system can only
watch its own activity on a certain level: on
the macroscopic level it can recognize
macroscopic things, and on the microscopic
level it can recognize other microscopic
things - a neuron can recognize and react to
other neurons - but there is no
"level-crossing" awareness of its own activity.

So you have to build a giant system which
consists of a huge number of computers, and
only if it doesn't have the slightest idea
how it works can it develop a form of
self-consciousness. And only if you take
a vast number of items - neurons, computers
or servers - is the system complex enough to
create the impression that a single item is
in charge.

Quite paradoxical, isn't it? But there is
something else we need: the idea of the self
must have a base, a single item to identify
itself with.

Thus we need two worlds: one "mental" world
where the thinking - the complex information
processing - takes place, and where the system
is a large distributed network of nodes, and one
"physical" world where a single "self" walks around
and where the system appears to be a single,
individual item: a person. This "physical" world
could also be any virtual world which is complex
enough to support AI. Each of these worlds could
be realized by a number of advanced data centers.

There are a number of conditions for both worlds:
The hidden, "mental" world must be grounded in the
visible, "physical" world, it must be complex enough
to mirror it, and it must be fast enough to react
instantly. Grounded means we need a "1:infinite"
connection between both worlds. The collective action
of the "hidden" system must result in a single
action of an item in the "visible" system.
And a single event in the "visible" system must
in turn trigger a collective activity of the
"hidden" system during perception. Every
perception and action of the system must pass
through a single point in the visible, physical
world. If both worlds are complex enough, then
this is the point where true self-consciousness
can emerge.
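This "1:infinite" connection can be sketched as a toy program: many hidden nodes collapse into one visible action, and one visible event fans back out to every hidden node. Everything here - node count, action names, the voting rule - is an invented illustration of the funnel, not a proposed architecture.

```python
import random

random.seed(0)  # reproducible toy run

NODES = 10_000
ACTIONS = ["step_left", "step_right"]

# Hidden "mental" world: each node holds a tiny preference per action.
prefs = [{a: random.random() for a in ACTIONS} for _ in range(NODES)]

def collective_action():
    # infinite -> 1: many node preferences collapse into a single act
    # of the one visible "body", here by simple majority vote.
    tally = {a: 0 for a in ACTIONS}
    for p in prefs:
        tally[max(p, key=p.get)] += 1
    return max(tally, key=tally.get)

def perceive(event, strength=0.01):
    # 1 -> infinite: one visible event nudges every hidden node.
    for p in prefs:
        p[event] += strength

act = collective_action()
perceive(act)  # the body's own act is fed back as a percept
print(act)     # one action emerges from 10,000 nodes
```

No single node chooses the action, yet from the outside exactly one thing happens - which is the impression, described above, that a single item is in charge.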

To summarize, in order to build a computer
system which is able to think about itself,
we need to separate the "thinking" from the
"self":

(a) a prevention of self-knowledge
    which enables self-awareness

(b) a "1:infinite" connection between two
    very complex worlds which are kept in
    correspondence with each other

When we think, certain patterns are brought
into existence. Since a brain contains more
than 100 billion neurons, each pattern is a
vast collection of nearly invisible little
things or processes. When we think of ourselves,
a pattern is brought into existence, too. It
is the identification of a vast collection
of nearly invisible little items with a
single thing: yourself.

Apart from this abstract idea, there is no
immaterial "self" hovering over a hundred
billion flickering neurons. The idea of a self
or soul as the originator of one's own thoughts
is an illusion - but you may ask: "if the self
is unreal, then who is reading this?" So maybe
it is more precise to say that
the self is a confusing insight or an insightful
confusion. The essence of self-consciousness
seems to be this strange combination of insight
and confusion.

Self-consciousness is both: the strange, short-lived
feeling associated with intricate patterns of
feedback loops which arise when inconsistent items
are related to each other: everything is related to
nothing, real to unreal, inside to outside, material
to immaterial, important to unimportant, etc.
And it is the surprising insight associated with
the continuous identification of the self in the
ever-changing environment.

-J.



============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org