consciousness


consciousness

Nick Thompson

all,
 
One of the reasons I came to Santa Fe and hitched up with FRIAM is that I thought the people in this group were particularly well suited to help me solve the problem of self-knowledge: what it means to say that an entity knows about itself.  However, while we have had many interesting discussions, I have never quite managed to get that subject on the table.  So here goes.
 
Let us imagine that we want to program a robot to do stuff .... many of you have, I gather.  Now, I assume that any robot worth its salt will have a certain amount of self-knowledge.  It will know, for instance, where it is.  It will know the position of its effectors.  It may also know something about what it has done recently.
 
So how do roboteers provide their robots with such knowledge?  Now, I assume, such knowledge gathering is accomplished through sensors.  And while the sensors gather information sufficient for the knowledge in question given the context, the actual information that they supply is much more limited.  So, for instance, the sensor that senses "the position of the forelimb" actually measures a current coming through a resistor attached to the joints in the limb, and a small onboard computer calculates the position of the limb based on a bunch of reasonable assumptions about the shape of the robot and the configuration of the world it is operating in.  So even the knowledge of robots is intentional, in the sense that it is incomplete, based on assumptions, and from a definite point of view.
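To make this concrete, here is a minimal sketch in Python of how such a reading might become "knowledge" of the limb's position; every constant and function name is invented for illustration, not taken from any particular robot:

    # Sketch: turning a raw potentiometer reading into "knowledge" of a limb's
    # position. Every constant below is an assumption the designer has built in.
    import math

    ADC_MAX = 1023            # assumed 10-bit analog-to-digital converter
    JOINT_RANGE_DEG = 180.0   # assumed mechanical travel of the joint
    UPPER_ARM_LEN = 0.30      # assumed link length, in metres
    SHOULDER_XY = (0.0, 0.0)  # assumed location of the shoulder in the body frame

    def adc_to_joint_angle(adc_reading):
        """The sensor only reports a number 0..1023; the angle is inferred."""
        return (adc_reading / ADC_MAX) * JOINT_RANGE_DEG

    def forelimb_tip_position(adc_reading):
        """'Where the forelimb is' follows from the angle plus a body model."""
        angle = math.radians(adc_to_joint_angle(adc_reading))
        x = SHOULDER_XY[0] + UPPER_ARM_LEN * math.cos(angle)
        y = SHOULDER_XY[1] + UPPER_ARM_LEN * math.sin(angle)
        return (x, y)

    print(forelimb_tip_position(512))  # the "knowledge", derived from one current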
 
There are other questions I want to ask, but let me stop here for the moment and see what The List has to say.
 
Nick
 
 
Nicholas S. Thompson
Emeritus Professor of Psychology and Ethology,
Clark University ([hidden email])
 
 
 


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org

Re: consciousness

Robert J. Cordingley
Based on my experience with AI/Expert Systems using Inference Engines, I think one of the problems is deciding what 'knowing' means.

Let's start with this axiom:  Data (D), Information (I) and Knowledge (K) are different things.

"451 deg F" is data (perhaps from a sensor)
"The temperature of my newspaper is 451 deg F." is information.
"When paper reaches the temperature of 451 deg F it auto-ignites." is knowledge.

So I can infer that my newspaper is bursting into flames (new knowledge). [Thank goodness for fireplaces.]

An expert system designed to capture this inference and draw the conclusion that my newspaper is catching fire maintains the D, I and K in memory, somewhat symbolically.  AI experts told us that the most interesting things in life are symbolic rather than numeric (e.g. who your father is).  I wouldn't say my PC running the example expert system really 'knew' anything.  Colloquially we'd often talk about the systems knowing something, but I think we all felt it an illusion - although a useful one.  Later we built systems that 'knew' when there was a major incident requiring operator action, such as an equipment freeze-up.
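A minimal sketch of that sort of symbolic inference, written here in Python rather than a real inference engine, just to show D, I and K held separately and a new conclusion derived (the names and values are illustrative only):

    # D: a raw reading; I: that reading bound to an object; K: a general rule.
    AUTOIGNITION_F = 451                 # K: paper auto-ignites at 451 deg F
    facts = {"newspaper_temp_F": 451}    # I: information about my newspaper

    def infer(facts):
        """Forward-chain the rule over the facts to derive new knowledge."""
        derived = {}
        if facts.get("newspaper_temp_F", 0) >= AUTOIGNITION_F:
            derived["newspaper_on_fire"] = True
        return derived

    print(infer(facts))  # {'newspaper_on_fire': True} - the inferred conclusion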

To continue with this type of knowing, a roboteer can use many of the AI/Expert System tools to 'know' stuff at this level, including where it is and so on, and even qualify that knowing with a degree of uncertainty.  The quality of the 'knowing' depends on the ability to build the expert system, the representational system used within it, and the quality and reliability of the sensors.  We assumed all of the components worked in order to draw the conclusion.  Sometimes we deduced that the sensors weren't reliable and therefore rejected the conclusions that could be drawn from their readings - a sort of meta-knowledge.
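That meta-knowledge step can be sketched the same way; the sensor names and threshold below are invented for illustration:

    # Sketch: accept a conclusion only if the sensor behind it is judged reliable.
    SENSOR_RELIABILITY = {"temp_probe_1": 0.95, "temp_probe_2": 0.40}
    MIN_RELIABILITY = 0.8  # invented acceptance threshold

    def accept_conclusion(sensor_id, conclusion):
        """Reject conclusions drawn from sensors we no longer trust."""
        if SENSOR_RELIABILITY.get(sensor_id, 0.0) < MIN_RELIABILITY:
            return None  # meta-knowledge: this reading cannot support the claim
        return conclusion

    print(accept_conclusion("temp_probe_1", "newspaper_on_fire"))  # accepted
    print(accept_conclusion("temp_probe_2", "newspaper_on_fire"))  # rejected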

Since doing this work, I don't think there has been any real advance in how computers know anything, so I hope this might answer your question.

Thanks
Robert C




Re: consciousness

glen e. p. ropella-2
In reply to this post by Nick Thompson
Nicholas Thompson emitted this, circa 09-06-15 10:25 AM:
> Let us imagine that we want to program a robot to do stuff .... many of
> you have, I gather.  Now, I assume that any robot worth its salt will
> have a certain amount of self-knowledge.  It will know, for instance,
> where it is.  It will know the position of its effectors.  It may also
> know something about what it has done recently.
>
> So how do roboteers provide their robots with such knowledge?

This depends entirely on the robot.  Most robots are controlled with a
microcontroller.  And in that, they are nothing more than general
purpose computers with sensors attached.  But some robots (e.g. BEAM)
are (basically) just coupled oscillators with sensors attached.

Perhaps there are other types; but these will serve for your question.
For a microcontroller-based robot, the "knowledge" is programmed in by
the programmer.  All such knowledge is explicitly grounded by the
programmer to the I/O data for the programs.  For example, if the
program is: read magnetic field -> turn motor, then the robot's
"knowledge" is about magnetic fields and motors.

In contrast, however, for a BEAM robot, what the robot "knows" can be a
bit of a mystery to the roboteer because the oscillators conflate to
create systemic properties that may not have been obvious to (or
amenable to a closed form solution developed by) the roboteer.  If it's
simple enough (e.g. builds charge from a solar panel into a capacitor
that eventually discharges), the robot "knows" how to store energy
slowly and expend it rapidly.  The knowledge here is implicitly grounded
directly to the sensors and actuators.
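A rough numerical sketch of that solar-engine behavior (the component values are invented; this is a simulation of the idea, not a circuit design):

    # A capacitor charges slowly from a solar panel and dumps its charge into a
    # motor in short bursts; slow-store / fast-spend emerges from the dynamics.
    CHARGE_PER_STEP = 0.02  # invented: energy gained per time step
    TRIGGER_LEVEL = 1.0     # invented: level at which the circuit fires
    DUMP_FRACTION = 0.9     # invented: fraction of stored charge spent per burst

    charge = 0.0
    for step in range(200):
        charge += CHARGE_PER_STEP           # slow accumulation
        if charge >= TRIGGER_LEVEL:
            burst = charge * DUMP_FRACTION  # rapid expenditure into the motor
            charge -= burst
            print("step", step, "- burst of", round(burst, 2), "units to the motor")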


Obviously, any sophisticated robot will be a hybrid of these two
extremes.  So, to answer your question as directly as possible,
roboteers do NOT provide the robots with their "knowledge".  The
roboteers construct a machine, that's all.  If there is any knowledge in
a robot, it will remain a mystery to the roboteer until the robots are
intelligent enough to report back what they know. [grin]

--
glen e. p. ropella, 971-222-9095, http://agent-based-modeling.com



Re: consciousness

Nick Thompson
In reply to this post by Nick Thompson
I have to think about this.  

It appears to me that while I am very stingy about human
consciousness, I am, relative to you, generous about robot awareness.

It feels like I am trying to close a gap that you are trying to open.

Nick

Nicholas S. Thompson
Emeritus Professor of Psychology and Ethology,
Clark University ([hidden email])
http://home.earthlink.net/~nickthompson/naturaldesigns/








Re: consciousness

Jochen Fromm-4
In reply to this post by Nick Thompson
How do roboteers provide their robots with self-knowledge? It depends on how you build the robot; most robots today are very, very dumb. As Glen says, they are nothing more than general-purpose computers with sensors attached. And most have little or no self-knowledge, like ordinary computers, which have absolutely none. Most machines and computers do just fine without it.

I think self-knowledge becomes important if you want to build truly autonomous systems; the systems we use today are largely machines without intentions, completely controlled by us. This self-knowledge can also be implicit.

Animals have emotions, which contain an implicit self-knowledge: as part of the limbic system they signal the state of the body and control it at the same time. And there is the autonomic nervous system, which acts below the level of consciousness. None of these systems has a real counterpart in current computers and robots.

Even for humans, explicit self-knowledge in the form of self-consciousness is relatively new. Julian Jaynes argued in "The Origin of Consciousness in the Breakdown of the Bicameral Mind" that it emerged around 2000-3000 B.C. And the Greek aphorism gnothi seauton, "Know thyself", from the oracle at Delphi sounds a bit like an encouragement to become conscious of oneself. And here the problems seem to begin...

-J.





