Posted by Vladimyr Burachynsky on Feb 08, 2011; 10:06pm
URL: http://friam.383.s1.nabble.com/A-question-for-your-Roboteers-out-there-tp5996169p6005675.html
Jochen said" "information about the system itself" and "information about
other things" is the point where self-awareness begins "
Perhaps this thought is a little overly compacted. Information about the self does not require language; indeed, awareness of the outside world does not require language either. And even if both are in place, language does not arise automatically. It does seem that a model of the world, mapped out of perceptions, must exist, and that another symbolic map, linking all images of reality to meanings and to verbal symbols, must also be in place.
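To make the two-map idea concrete, here is a minimal sketch, assuming a toy dictionary-based design (every name below is hypothetical, not anyone's proposed architecture): perception fills a world model with no language involved, and a separate symbol map has to be built afterwards to link percepts to words and meanings.

# Minimal sketch (all names hypothetical): a mind holding the two
# separate structures described above -- a perceptual world model and
# a symbolic map grounding percepts in meanings and verbal symbols.
class Mind:
    def __init__(self):
        self.world_model = {}   # percept id -> perceptual features
        self.symbol_map = {}    # percept id -> (verbal symbol, meaning)

    def perceive(self, percept_id, features):
        """Update the world model from raw perception; no language needed."""
        self.world_model[percept_id] = features

    def ground_symbol(self, percept_id, word, meaning):
        """Link an already-perceived thing to a word and a meaning.
        Grounding only attaches to what the world model already holds,
        which is why perception alone never yields language automatically."""
        if percept_id in self.world_model:
            self.symbol_map[percept_id] = (word, meaning)

    def name(self, percept_id):
        """Return the verbal symbol for a percept, if one has been learned."""
        entry = self.symbol_map.get(percept_id)
        return entry[0] if entry else None

mind = Mind()
mind.perceive("obj-1", {"shape": "round", "color": "red"})
print(mind.name("obj-1"))    # None: perceived, but not yet named
mind.ground_symbol("obj-1", "apple", "an edible fruit")
print(mind.name("obj-1"))    # "apple"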
There is still a lot of wiggle room about when self-awareness emerges. I am going to assume that no human being is born knowing the language of its parents. That requires an individual to interact in order to begin learning the things in its environment and the symbolic sounds and their meanings. So the most complex brain on the planet spends some two or more decades learning languages bit by bit. Perhaps self-awareness is a continuum, not an actual object. Through language games the individual constantly redefines the state of its self-awareness.
The machine Mind we are hypothesizing apparently inherits the complete library of outside things as well as the libraries of symbols and meanings, and does not require the prolonged tutoring of humans. This is actually a very radical concept with some very peculiar consequences: an entity that requires no childhood or social connections yet is fully capable of communicating with every other member immediately. I suspect that such entities would not actually be social entities. They might be coldly indifferent to, or exploitative of, each other. These entities would also lack the ability to adapt should the environment change quickly.
If something is not already defined in all the relevant libraries, the Mind seems to have no means of extension, according to the preliminary model we are playing with.
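A rough sketch of that limitation, under the same toy assumptions as before (all names hypothetical): a mind whose library is frozen at construction can only look things up, whereas a tutored mind extends its library through interaction, which is exactly the slow childhood route the hypothesized Mind skips.

# Hypothetical contrast between an inherited, frozen library and a
# library extended by tutoring.
class LibraryMind:
    """Ships with a complete library; lookup only, no learning."""
    def __init__(self, inherited_library):
        self._library = dict(inherited_library)  # fixed at "birth"

    def interpret(self, thing):
        # Anything outside the inherited library is invisible to it.
        return self._library.get(thing)

class TutoredMind(LibraryMind):
    """Starts nearly empty but extends itself through interaction."""
    def learn(self, thing, meaning):
        self._library[thing] = meaning  # the decades-long route

machine = LibraryMind({"tree": "a plant", "rock": "a mineral"})
child = TutoredMind({})
print(machine.interpret("smartphone"))  # None: the environment changed
child.learn("smartphone", "a communication device")
print(child.interpret("smartphone"))    # "a communication device"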
That does not seem to be what any of us had in mind when the discussion started. It seems that to be what we call self-aware, an entity must exist in a society and be able to distinguish its thoughts from those of others. That difference between individuals must also be attached to some kind of motivation, such as curiosity, in order for them to exchange information. That in turn requires the natural learning method we had assumed was no longer useful. Given a requirement for information exchange and some socialization from childhood, the entities enjoy learning, or so it would appear. So why do humans resist learning after some period of time? Was a failure introduced by accident?
VIB
Vladimyr Ivan Burachynsky PhD
[hidden email]
120-1053 Beaverhill Blvd.
Winnipeg, Manitoba R2J 3R2
Canada
(204) 254-8321 (land)
(204) 801-6064 (cell)
-----Original Message-----
From: [hidden email] [mailto:[hidden email]] On Behalf Of Jochen Fromm
Sent: February-06-11 3:25 AM
To: The Friday Morning Applied Complexity Coffee Group
Subject: Re: [FRIAM] A question for your Roboteers out there
Hi Nick,
I would say language is the key; it is useful if the robot understands language. A robot usually cannot recognize or perceive itself if it is not able to understand language.
In animals, information about the system itself is so important that it is usually processed and controlled by a system of its own, the limbic system and the autonomic nervous system, or in other words, largely by emotions.
So "information about the system itself" is processed by the limbic system,
and "information about other things" by the cerebral cortex.
If robots are able to understand things through language, then the point where they start to distinguish "information about the system itself" and "information about other things" is the point where self-awareness begins. To know the self means to know where the self ends, and where the rest of the world begins.
-J.
----- Original Message -----
From: Nicholas Thompson
To: The Friday Morning Applied Complexity Coffee Group
Sent: Saturday, February 05, 2011 8:29 PM
Subject: [FRIAM] A question for your Roboteers out there
At what point in the complexity of a robot (or any other control system)
does it begin to seem useful to parse input into "information about the
system itself" and "information about other things"?
Nick
============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org