A question for your Roboteers out there

A question for your Roboteers out there

Nick Thompson

At what point in the complexity of a robot (or any other control system) does it begin to seem useful to parse input into “information about the system itself” and “information about other things”?

 

Nick

Nicholas S. Thompson
Emeritus Professor of Psychology and Biology
Clark University
http://home.earthlink.net/~nickthompson/naturaldesigns/
http://www.cusf.org


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org

Re: A question for your Roboteers out there

lrudolph
On 5 Feb 2011 at 12:29, Nicholas  Thompson wrote:

> At what point in the complexity of a robot (or any other control system)
> does it begin to seem useful to parse input into "information about the
> system itself" and "information about other things"?

From the beginning, it's useful to parse input into
"information about what you know how to modify"
and "information about what you can't modify or don't
know how to".  It seems to me that self/other builds
(possibly destructively) on that prior distinction,
at least among the robot(icist)s I've seen.  Nick,
did I ever tell you about the "emergence" of
"pointing behavior" in a robot they run out at
UMass?  
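
In code, that first distinction might look like this minimal sketch
(Python; read_sensors() and set_motor() are invented stand-ins for
whatever interface the robot actually exposes): a channel counts as
"something you know how to modify" if it reliably co-varies with the
robot's own motor babble.

import random

def classify_channels(read_sensors, set_motor, trials=200, threshold=0.5):
    """Label a sensor channel 'modifiable' if wiggling the motor command
    reliably changes its reading; everything else is 'other'.  All names
    here are hypothetical, for illustration only."""
    counts = {}
    for _ in range(trials):
        before = read_sensors()                # dict: channel -> value
        set_motor(random.uniform(-1.0, 1.0))   # random motor "babble"
        after = read_sensors()
        for ch, v in before.items():
            counts[ch] = counts.get(ch, 0) + (abs(after[ch] - v) > 1e-6)
    modifiable = {ch for ch, n in counts.items() if n / trials > threshold}
    return modifiable, set(counts) - modifiable

Whether a channel lands on the "self" side then depends only on the
statistics of action and consequence, not on any built-in body map.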

Now I *will* run.  More later no doubt.


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org

Re: A question for your Roboteers out there

Nick Thompson
No you haven't and I wish you would!

-----Original Message-----
From: [hidden email] [mailto:[hidden email]] On Behalf
Of [hidden email]
Sent: Saturday, February 05, 2011 1:54 PM
To: The Friday Morning Applied Complexity Coffee Group
Subject: Re: [FRIAM] A question for your Roboteers out there

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org

Re: A question for your Roboteers out there

Alfredo Covaleda Vélez
In reply to this post by Nick Thompson
You'll be delighted to attend this conference:


Call for Papers - Call for Tutorials and Special Sessions

IEEE CONFERENCE ON DEVELOPMENT AND LEARNING, AND EPIGENETIC ROBOTICS
IEEE ICDL-EPIROB 2011

Frankfurt am Main, Germany
August 24-27, 2011
www.icdl-epirob.org

Conference description
The past decade has seen the emergence of a new scientific field that studies how intelligent biological and artificial systems develop sensorimotor, cognitive and social abilities, over extended periods of time, through dynamic interactions of their brain and body with their physical and social environments. This field lies at the intersection of a number of scientific and engineering disciplines including Neuroscience, Developmental Psychology, Developmental Linguistics, Cognitive Science, Computational Neuroscience, Artificial Intelligence, Machine Learning, Robotics, and Philosophy. Various terms have been associated with this new field, such as Autonomous Mental Development, Epigenetic Robotics, Developmental Robotics, etc., and several scientific meetings have been established. The two most prominent conferences of this field, the International Conference on Development and Learning (ICDL) and the International Conference on Epigenetic Robotics (EpiRob), are now joining forces and invite submissions for a joint meeting in 2011, to explore and extend the interdisciplinary boundaries of this field.

Keynote speakers
Andrew Barto, University of Massachusetts Amherst,
Jean Mandler (overview talk), University of California, San Diego
Erin Schuman, Max Planck Institute for Brain Research, Frankfurt am Main
Michael Tomasello, Max Planck Institute for Evolutionary Anthropology, Leipzig

Call for papers
We invite submissions for this exciting window into the future of developmental sciences. Submissions which establish novel links between brain, behavior and computation are particularly encouraged.

Topics of interest include - but are not limited to:
- The development and emergence of perceptual, motor, cognitive, emotional, social, and communicational skills in biological systems and robots
- General principles of development and learning
- Neural and behavioral plasticity
- Grounding of knowledge and development of representations
- Biologically inspired architectures for cognitive development and open-ended development
- Models of emotionally driven behavior
- Mechanisms of intrinsic motivation, exploration and play
- Embodied cognition: Foundations and applications
- Social development in humans and robots
- Use of robots in applied settings such as autism therapy
- Epistemological approaches to Epigenetic / Developmental Robotics

Submissions will be accepted in two categories:
Full six-page papers: Accepted manuscripts will be included in the conference proceedings published by IEEE. They will be selected for either an oral presentation or a featured poster presentation at the conference; featured posters will have a 1 minute "teaser" presentation as part of the main conference session. For articles requiring more than six pages, up to two additional pages may be submitted at an extra charge.

Two-page poster abstracts: The aim of this format is to encourage dissemination of late-breaking results or work that is not sufficiently mature for a full paper. These submissions will NOT be included in the conference proceedings (but the short abstracts will appear online at Frontiers in Neurorobotics http://www.frontiersin.org/neurorobotics/about). Accepted abstracts will be presented during the evening poster sessions.

Manuscripts should be submitted through the online conference management system, available at the conference website www.icdl-epirob.org. For the paper preparation, follow the instructions at the conference website.

Call for tutorials
We invite experts in different areas to organize a 3-hour tutorial, which will be held on the first day of the conference. Participants in tutorials are asked to register for the main conference as well. Tutorials are meant to provide insights into specific topics as well as overviews that will inform the interdisciplinary audience about the state-of-the-art in child development, neuroscience, robotics, or any of the other disciplines represented at the conference.

Submissions (max. two pages) should be sent no later than March 15th to Katharina Rohlfing ([hidden email]) and Ian Fasel ([hidden email]) including:
- Title of tutorial
- Tutorial speaker(s), including short CVs;
- Concept of the tutorial; target audience or prerequisites.

All proposals submitted will be subjected to a review process.

Call for special sessions
A special session will be an opportunity to present a topic in depth; a slot of 1.5 hours will be offered for this format. Special session organizers are invited to submit (1) a summary (250 words) describing the topic, purpose and target audience of the session, as well as (2) abstracts of the papers (each 250 words) that will constitute the group of presentations. It is suggested that a special session include three oral presentations to allow for sufficient presentation and discussion time. A discussant (from another discipline) may be added to the special session.

Tutorial and Special Session proposals should be sent no later than March 15th to Katharina Rohlfing ([hidden email]) and Ian Fasel ([hidden email]).

All proposals submitted will be subjected to a review process.

Abstract and Paper Submission Deadline: March 28, 2011
Notification Due: May 16, 2011
Final Version Due: June 20, 2011
Conference: August, 24-27, 2011

Child-care
For families, child-care services will be provided. Please contact Katharina Rohlfing ([hidden email]) about your interest in child-care services by the end of May. The detailed organization will be planned according to need.

Yukie Nagai
Publicity Chair of ICDL-EpiRob2011 


2011/2/5 Nicholas Thompson <[hidden email]>

At what point in the complexity of a robot (or any other control system) does it begin to seem useful to parse input into “information about the system itself” and “information about other things”?

 


--
Alfredo

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org

Re: A question for your Roboteers out there

Nick Thompson

What a kind offer!

 

However, I can’t wait.  Could you forward the original question to anyone you think it might interest?

 

===> At what point in the complexity of a robot (or any other control system) does it begin to seem useful to parse input into “information about the system itself” and “information about other things”? <===

I am happy to be told it’s a stupid question … and why. 

 

Thanks,

Nicholas S. Thompson
Emeritus Professor of Psychology and Biology
Clark University
http://home.earthlink.net/~nickthompson/naturaldesigns/
http://www.cusf.org

From: [hidden email] [mailto:[hidden email]] On Behalf Of Alfredo Covaleda
Sent: Saturday, February 05, 2011 6:59 PM
To: The Friday Morning Applied Complexity Coffee Group
Subject: Re: [FRIAM] A question for your Roboteers out there
============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org

Re: A question for your Roboteers out there

Jochen Fromm-5
In reply to this post by Nick Thompson
Hi Nick,

I would say language is the key: the distinction
becomes useful once the robot understands language.
A robot usually cannot recognize or perceive itself
if it is not able to understand language.

In animals, information about the system itself
is so important that it is usually processed and
controlled by a system of its own, the limbic
system and the autonomic nervous system, that is,
largely by emotions. So "information about the
system itself" is processed by the limbic system,
and "information about other things" by the
cerebral cortex.

If robots are able to understand things
through language, then the point where
they start to distinguish "information about
the system itself" and "information about
other things" is the point where self-awareness
begins. To know the self means to know where
the self ends, and where the rest of the world
begins.
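
As a hedged sketch of that two-channel architecture (the channel names
and both handlers below are invented for illustration), the routing
itself is almost trivial once the split is made:

# Invented labels: which readings count as "about the system itself".
SELF_CHANNELS = {"battery_level", "motor_temperature", "joint_load"}

def route(readings, homeostat, planner):
    """Send interoceptive readings to a fast homeostatic loop (the
    'limbic system' of the sketch) and exteroceptive readings to a
    slower deliberative layer (the 'cortex')."""
    about_self = {k: v for k, v in readings.items() if k in SELF_CHANNELS}
    about_world = {k: v for k, v in readings.items() if k not in SELF_CHANNELS}
    homeostat(about_self)   # e.g. throttle motors when temperature is high
    planner(about_world)    # e.g. update a map, pick the next action

The hard part, of course, is the membership of SELF_CHANNELS, which is
exactly what the original question asks about.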

-J.

----- Original Message -----
From: Nicholas Thompson
To: The Friday Morning Applied Complexity Coffee Group
Sent: Saturday, February 05, 2011 8:29 PM
Subject: [FRIAM] A question for your Roboteers out there

At what point in the complexity of a robot (or any other control system)
does it begin to seem useful to parse input into "information about the
system itself" and "information about other things"?

Nick


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org

Re: A question for your Roboteers out there

David Eric Smith
In reply to this post by Nick Thompson
Nick, hi,

Been meaning to send this for a couple of days.  There is a paper on
the role of models in control theory, which is perhaps profound or
perhaps a tautology (Mike Spivak comments that the two naturally go
together):  

Conant, Roger C. and W. Ross Ashby. 1970. Every Good Regulator of a
System Must be a Model of That System. International Journal of
Systems Science 1 (2): 89-97.

This should be available for download from a link "Foundations of
Complexity" on the SFI website.  

Presumably it's like a room with mirrors at both ends, which isn't a
true infinite regress, because the images get less resolved at each
reflection.  

One models oneself, presumably, not with the intent that the model be
realistic, but only that it serve some particular purpose.  So we
don't encounter Turing-completeness paradoxes, since an internal model
is not required to be a model of itself, but only a model of some
aspect of itself, or even of that self's interaction in some
contexts.  
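
A toy numerical instance of the Conant-Ashby claim (the scalar plant
and all numbers below are my own invention, not anything from the
paper): a regulator holding the output at zero against a known
disturbance succeeds exactly when its internal parameter matches the
plant's, i.e., when it carries a model of the system inside itself.

def plant_step(x, u, d, a=0.9):
    """One step of a scalar plant: x_next = a*x + u + d."""
    return a * x + u + d

def regulator(x, d, a_model):
    """Compensate using an internal copy of the plant parameter."""
    return -a_model * x - d

for a_model in (0.9, 2.0):            # matched model vs. mismatched model
    x = 1.0
    for step in range(5):
        d = 0.3 * (-1) ** step        # a known disturbance sequence
        u = regulator(x, d, a_model)
        x = plant_step(x, u, d)
    print(a_model, round(x, 6))       # 0.9 pins x at 0; 2.0 blows up

Nothing here reproduces the theorem, of course; it only makes the
slogan concrete: regulation quality is bounded by model quality.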

The recursive character does indeed make me think of language, as
Jochen says, though not necessarily that the two are "the same" thing.
For the model to be a part of the self, and in that sense, an object
in its own right, and also to serve as a referent to something else
through a suitable system for interpretation, reminds me of the way a
word is both an object subject to manipulation, and a referent to
other objects.  But somehow words are easier.  They are objects with
respect to syntax, morphology, phonology, etc., and referents with
respect to semantics, though I doubt that those distinctions are as
clean as we might carelessly suppose.  Is it right, then, to say as
counterpart, that internal models, as parts of the self, are objects
under some explicit grammar for handling them, and referents with
respect to a semantics for which that model-language provides
addressing?

It would be interesting if there is a common structure of recursion,
and a "syntactic" sort of cognitive primitive, which underlies many
forms of internal modeling, of which only one is the use of a
grammatical language.  In other words (and replacing what Dennett does
say with what I wish he would say), it is not that language enables
internal modeling, but rather that, in certain cognitive domains, both
build from recursive functionality that we find expressed in the use
of internal models and also in the use of grammatical language.  (I
say "certain cognitive domains" to avoid the Pinker/Fitch/Chomsky
assertion that recursion is exclusively human and exclusively
linguistic-within-human.  That seems a conclusion one can reach only
by selectively ignoring almost everything we know about the world.)  

I suppose that extending some of Russell's thoughts on "proper names"
to deal with other parts of speech would be a way to try to constrain
our thinking empirically.  Maybe a lot of this has already been done.
It's not an area I have had time to learn about.

Eric



============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org

Re: A question for your Roboteers out there

Russell Standish
On Sun, Feb 06, 2011 at 07:27:24AM -0700, Eric Smith wrote:

> Nick, hi,
>
> Been meaning to send this for a couple of days.  There is a paper on
> the role of models in control theory, which is perhaps profound or
> perhaps a tautology (Mike Spivak comments that the two naturally go
> together):  
>
> Conant, Roger C. and W. Ross Ashby. 1970. Every Good Regulator of a
> System Must
> be a Model of That System. International Journal of Systems Science 1
> (2):89-97.
>

Thanks for this reference! That looks very interesting. I find myself
woefully ignorant of the early systems engineering stuff.


--

----------------------------------------------------------------------------
Prof Russell Standish                  Phone 0425 253119 (mobile)
Mathematics                        
UNSW SYDNEY 2052                 [hidden email]
Australia                                http://www.hpcoders.com.au
----------------------------------------------------------------------------

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org

Re: A question for your Roboteers out there

Parks, Raymond
In reply to this post by Nick Thompson
Nick,

  I can't answer for robots but I can answer for control systems.  Basic
control involves measuring a process and acting upon those measurements.
 Advanced control involves measuring how the process responds while the
actions upon it are varied.  This is done by step testing, and it is
primarily intended to compensate for the physical degradation of the
equipment being controlled (not of the process).
hydro-cracking distillation tower, there are nominal settings from when
the tower is new.  Once the tower and the heaters begin to wear and
accumulate gunk, those nominal settings are no longer nominal - thus the
advanced optimization which measures all the sensors and the quality of
the output while varying the actuators over a range of values.  I think
this meets the sense of "information about other things".
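
For concreteness, a minimal sketch of those two layers (the first-order
process and every number below are invented): basic control acts on a
measurement through a gain estimate; a step test perturbs the actuator,
watches the response, and refreshes that estimate as the equipment
degrades.

def process(u, gain):
    """Stand-in for the real plant: output is gain times actuator input."""
    return gain * u

def step_test(u0, delta, gain):
    """Step the actuator by delta and infer process gain from the response."""
    return (process(u0 + delta, gain) - process(u0, gain)) / delta

true_gain = 2.0 * 0.7         # nominal gain of the new tower, after fouling
estimate = step_test(u0=1.0, delta=0.1, gain=true_gain)

setpoint = 5.0
u = setpoint / estimate       # a basic control move through the fresh estimate
print(process(u, true_gain))  # ~5.0; the stale nominal gain would miss

With the stale nominal gain (2.0) the same move computes u = 2.5 and the
worn tower delivers only 3.5, which is the gap the advanced optimization
layer exists to close.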

Ray Parks                   [hidden email]
Consilient Heuristician     Voice: 505-844-4024
ATA Department              Mobile: 505-238-9359
http://www.sandia.gov/scada Fax: 505-844-9641
http://www.sandia.gov/idart Pager: 505-951-6084

On 2/5/11 12:29 PM, Nicholas Thompson wrote:
> At what point in the complexity of a robot (or any other control system)
> does it begin to seem useful to parse input into “information about the
> system itself” and “information about other things”?



============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org

Re: A question for your Roboteers out there

Grant Holland
In reply to this post by David Eric Smith
Eric,

Would love to read the Ashby/Conant article. I don't see a download link on the page at the SFI website (http://www.santafe.edu/library/foundational-papers-complexity-science/), however. Any other suggestions how I can download it?

Thanks,
Grant

On 2/6/11 7:27 AM, Eric Smith wrote:
Conant, Roger C. and W. Ross Ashby. 1970. Every Good Regulator of a
System Must be a Model of That System. International Journal of
Systems Science 1 (2): 89-97.

This should be available for download from a link "Foundations of
Complexity" on the SFI website.

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org

Re: A question for your Roboteers out there

Nick Thompson

Grant,

The article was collected in an edited volume by Ashby and is available on Google Scholar.  I am rushing now, but if you don't find it easily, please get back to me and I will find it for you.

Nick

From: [hidden email] [mailto:[hidden email]] On Behalf Of Grant Holland
Sent: Monday, February 07, 2011 9:57 AM
To: The Friday Morning Applied Complexity Coffee Group; [hidden email]
Subject: Re: [FRIAM] A question for your Roboteers out there

 




Eric,

Would love to read the Ashby/Conant article. I don't see a download link on the page at the SFI website (http://www.santafe.edu/library/foundational-papers-complexity-science/), however. Any other suggestions how I can download it?

Thanks,
Grant


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org

Re: A question for your Roboteers out there

David Eric Smith
In reply to this post by Grant Holland
Thank you Grant (and also Nick),

Yes, it looks as if the website has changed.  This used to be (I think) under the complexity blog, which I no longer find as a link or in the searches.  Apologies for that.

In any case, the article should be attached here:



All best,

Eric


On Feb 7, 2011, at 9:57 AM, Grant Holland wrote:

Eric,

Would love to read the Ashby/Conant article. I don't see a download link on the page at the SFI website (http://www.santafe.edu/library/foundational-papers-complexity-science/), however. Any other suggestions how I can download it?

Thanks,
Grant


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org

Conant_Ashby.pdf (260K) Download Attachment

Re: A question for your Roboteers out there

Grant Holland
In reply to this post by Nick Thompson
Thanks, Nick.

Grant

On 2/7/11 10:12 AM, Nicholas Thompson wrote:

Grant,

The article was collected in an edited volume by Ashby and is available on Google Scholar.  I am rushing now, but if you don't find it easily, please get back to me and I will find it for you.

Nick

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org

Re: A question for your Roboteers out there

glen ep ropella

http://content.wuala.com/contents/gepr/public/every-good-regulator-must-model-Conant_Ashby_(1970).pdf

> On 2/7/11 10:12 AM, Nicholas Thompson wrote:
>>
>> Grant,
>>
>> The article was collected in an edited volume by Ashby and is available on
>> Google Scholar. I am rushing now, but if you don’t find it easily, please get
>> back to me and I will find it for you.
>>
>> Nick

--
glen e. p. ropella, 971-222-9095, http://tempusdictum.com


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org

Re: A question for your Roboteers out there

Grant Holland
Thanks, Glen. I was able to find the article from Nick's suggestion -
and ran into lots of other good stuff too.

Grant

On 2/7/11 5:31 PM, glen e. p. ropella wrote:

> http://content.wuala.com/contents/gepr/public/every-good-regulator-must-model-Conant_Ashby_(1970).pdf
>
>> On 2/7/11 10:12 AM, Nicholas Thompson wrote:
>>> Grant,
>>>
>>> The article was collected in an edited volume by Ashby and is available on
>>> Google Scholar. I am rushing now, but if you don’t find it easily, please get
>>> back to me and I will find it for you.
>>>
>>> Nick

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org

Re: A question for your Roboteers out there

Vladimyr Burachynsky
In reply to this post by Jochen Fromm-5
Jochen said"  "information about the system itself" and "information about
other things" is the point where self-awareness begins "

Perhaps this thought is a little overly compacted. Information about
self does not require language; indeed, awareness of the outside world
does not require language. If both are in place, language does not arise
automatically.  It does seem that a model of the world mapped out of
perceptions must exist, and another symbolic map linking all images of
reality to meanings and to verbal symbols must also be in place.

There is still a lot of wiggle room about when self-awareness emerges.
I am going to assume no human being is born knowing the language of its
parents. That requires that an individual interact to begin learning the
things in its environment and the symbolic sounds and meanings. So the
most complex brain on the planet spends some two or more decades learning
languages bit by bit. Perhaps self-awareness is a continuum, not an actual
object. Through language games the individual constantly redefines the
state of self-awareness.

That machine Mind we are hypothesizing apparently inherits the complete
library of outside things as well as the libraries of symbols and meanings,
and does not require the prolonged tutoring of humans. This is actually a
very radical concept with some very peculiar consequences, i.e., an entity
that requires no childhood or social connections yet is fully capable of
communicating with every other member immediately. I suspect that such
entities would not actually be social entities. They may be coldly
indifferent or exploitative of each other. Also, these entities would not
have the ability to adapt should the environment change quickly.
If something is not already defined in all the relevant libraries, it seems
to have no means of extension according to the preliminary model we are
playing with.

That does not seem to be what any of us had in mind when the discussion
started. It seems that to be what we call self-aware, it must exist in a
society and also be able to distinguish its thoughts from those of others.
That difference between individuals must also be attached to some kind of
motivation, such as curiosity, in order for them to exchange information.
That requires the natural learning method that was assumed to be no longer
useful. With a requirement for information exchange and some socialization
from childhood, the entities enjoy learning, or so it would appear. So why
do humans resist learning after some period of time? Was there a failure
introduced by accident?


VIB
Vladimyr Ivan Burachynsky PhD


[hidden email]

120-1053 Beaverhill Blvd.
Winnipeg, Manitoba, R2J 3R2
Canada
 (204) 2548321 Land
(204) 8016064  Cell






-----Original Message-----
From: [hidden email] [mailto:[hidden email]] On Behalf
Of Jochen Fromm
Sent: February-06-11 3:25 AM
To: The Friday Morning Applied Complexity Coffee Group
Subject: Re: [FRIAM] A question for your Roboteers out there



============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org

Re: A question for your Roboteers out there

Eric Charles
In reply to this post by Jochen Fromm-5
I'm not sure whether it matters to this discussion, but James Gibson (famous perceptual researcher) claimed there was no information about the world that was not information about the self (or, in psych-parlance, that there was no easy distinction between exteroception and proprioception). Perception of "the orientation of a surface," for example, is always perception of "where I am"; similarly, perception of "me falling" is also the perception of "the ground moving towards my head."

Eric


On Tue, Feb 8, 2011 05:06 PM, "Vladimyr Burachynsky" <[hidden email]> wrote:
Jochen said"  "information about the system itself" and
"information about
other things" is the point where self-awareness begins "

Perhaps this thought is perhaps a little overly compacted. Information about
self does not require language, indeed awareness of the outside world does
not require language. If both are in place language does not arise
automatically.  It does seem that a model of the world mapped out of
perceptions must exist and another symbolic map linking all images of
reality to meanings and to verbal symbols most also be in place. 

There is still a lot of wiggle room about when self awareness  emerges.  I
am going to assume no human being is born knowing the language of it's
parents. That requires that an individual interact to begin learning the
things in its environment and the symbolic sounds and meanings. So the most
complex brain on the planet spends some 2 or more decades learning languages
bit by bit. Perhaps self awareness is a continuum not an actual object.
Through language games the individual constantly redefines the state of self
awareness. 

That machine Mind we are hypothesizing apparently inherits the complete
library of outside things as well as the libraries of symbols and meanings
and does not require the prolonged tutoring of humans. This is actually a
very radical concept with some very peculiar consequences i.e. An entity
that requires no childhood or social connections yet is fully capable of
communicating with every other member immediately. I suspect that such
entities would not actually be social entities. They may be coldly
indifferent or exploitative of each other. Also these entities would not
have the ability to adapt should the environment change quickly. 
If it is not already defined in all the relevant libraries , It seems to
have no means of extension according to the preliminary model we are playing
with.  

\That does not seem to be what any of us had in mind when the discussion
started. It seems that to be what we call self aware it must exist in a
society and be able to also distinguish its thoughts from those of others.
That difference in individuals must also be attached to some kind of
motivation such as curiosity in order for them to exchange information. That
requires the Natural learning method that was assumed no longer useful?
 with a requirement for information exchange and some socialization from
childhood the entities enjoy learning or so it would appear. So why do
humans resist Learning after some period of time.? Was there a failure
introduced by accident?


VIB
Vladimyr Ivan Burachynsky PhD


[hidden email]

120-1053 Beaverhill Blvd.
Winnipeg,Manitoba, R2J3R2
Canada 
 (204) 2548321 Land
(204) 8016064  Cell






-----Original Message-----
From: [hidden email] [mailto:[hidden email]] On Behalf
Of Jochen Fromm
Sent: February-06-11 3:25 AM
To: The Friday Morning Applied Complexity Coffee Group
Subject: Re: [FRIAM] A question for your Roboteers out there

Hi Nick,

I would say language is the key, it is useful if the robot understands
language. A robot usually cannot recognize or perceive itself, if it is not
able to understand language.

In animals, information about the system itself is so important that it is
usually processed and controlled by an own system, the limbic system and the
autonomic nervous system, or in other words, largely by emotions.
So "information about the system itself" is processed by the limbic
system,
and "information about other things" by the cerebral cortex.

If robots are able to understand things
through language, then the point where
they start to distinguish "information about the system itself" and
"information about other things" is the point where self-awareness
begins.
To know the self means to know where the self ends, and where the rest of
the world begins.

-J.

----- Original Message -----
From: Nicholas Thompson
To: The Friday Morning Applied Complexity Coffee Group
Sent: Saturday, February 05, 2011 8:29 PM
Subject: [FRIAM] A question for your Roboteers out there

At what point in the complexity of a robot (or any other control
system)
does it begin to seem useful to parse input into "information about the
system itself" and "information about other things"?

Nick


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College lectures, archives,
unsubscribe, maps at http://www.friam.org


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org


Eric Charles

Professional Student and
Assistant Professor of Psychology
Penn State University
Altoona, PA 16601



============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org

Re: A question for your Roboteers out there

Nick Thompson

Eric, You wrote, paraphrasing Gibson,

that there was no easy distinction between exteroception and proprioception.

Yes BUT….

Some of that information from the world is more useful for predicting what I am going to do, and other information is more useful for predicting what other things are going to do.  I agree with JimL's point that simple navigation at sea can be pursued in an egocentric manner, but as the Hutchins book makes clear, precious little in navy navigation is actually done that way.

Nick

 

From: [hidden email] [mailto:[hidden email]] On Behalf Of ERIC P. CHARLES
Sent: Tuesday, February 08, 2011 5:32 PM
To: Vladimyr Burachynsky
Cc: 'The Friday Morning Applied Complexity Coffee Group'
Subject: Re: [FRIAM] A question for your Roboteers out there

 

I'm not sure whether it matters to this discussion, but James Gibson (the famous perceptual researcher) claimed there was no information about the world that was not also information about the self (or, in psych-parlance, that there was no easy distinction between exteroception and proprioception). Perception of "the orientation of a surface," for example, is always perception of "where I am"; similarly, perception of "me falling" is also perception of "the ground moving towards my head."
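
A concrete case in the spirit of Gibson's ecological optics (the numbers below are invented; the "tau" idea itself comes from that literature): the time remaining before contact with an approaching surface can be read directly from the optical expansion of its image, so the very same optical quantity is information about the world and information about the self at once.

# Sketch: time-to-contact ("tau") from optical expansion. theta is the
# visual angle subtended by an approaching surface; tau = theta / (d theta / dt).
# Numbers are made up for illustration.

def tau(theta_now, theta_prev, dt):
    dtheta = (theta_now - theta_prev) / dt
    return theta_now / dtheta if dtheta > 0 else float("inf")

# the image of a wall grows from 0.10 to 0.11 radians over 0.1 s:
print(round(tau(0.11, 0.10, 0.1), 2), "seconds to contact")  # -> 1.1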

Eric


 
 

Eric Charles

Professional Student and
Assistant Professor of Psychology
Penn State University
Altoona, PA 16601


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
Reply | Threaded
Open this post in threaded view
|

Re: A question for your Roboteers out there

Vladimyr Burachynsky

Well Gentlemen,

 

I must pause and read more of the Gombrich/Gibson dispute. Jochen started all this… I learned long ago to be fearful of little ideas and silly questions.

Curious that the principals all seem to have some experience with aerial imaging. I was also an aerial photo interpreter for a spell and used both aerial and, later, false-color satellite images in creating maps.

Life never progresses in straight lines as we childishly expected.

 

It is not uncommon to learn to do something and be unable to explain how you were accomplishing the task.  Perhaps this is rather more common than not.

Thank you all.

 

Curiosity is still a problem, even after all the investment in disabling it.

 

 

Vladimyr Ivan Burachynsky PhD

[hidden email]

 

120-1053 Beaverhill Blvd.

Winnipeg, Manitoba R2J 3R2

Canada

(204) 254-8321 Land

(204) 801-6064 Cell

 

 

 

 

 

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
Reply | Threaded
Open this post in threaded view
|

Re: A question for your Roboteers out there

Vladimyr Burachynsky
In reply to this post by Nick Thompson

Gentlemen,

 

There may be another consideration to include regarding the Mind, its internal mental models, and the components we refer to as images.

 

The fovea of the eye has recently been described as having an inner, densely packed region of cone cells very much smaller than previously described (a microfovea).

The eyes are capable of very rapid microscopic scanning, perching an edge just over these cells. The high-speed twitch was discovered while investigating certain reading disorders. Apparently accomplished readers could scan a sentence using the fine twitch (I'll dig up the reference in a few days): the expert reader was able to scan an entire sentence in one pass and input the entire string. The eye moves from left to right but is simultaneously twitching at a very high frequency while still steadily progressing to the right, so that a single letter or an entire word can be balanced upon the twitching microfovea and more pixels, so to speak, are collecting data. Some would call this a form of adaptive optics, as used in astronomy. Those who were unable to do this were deliberately scanning one letter at a time, attempting to build up each word letter by letter and each sentence word by word: a horribly tedious task, and not one I can imagine facilitating the reading of novels. These people pass all typical vision testing.

The point of this anecdote is that mental images are not simply two-dimensional, and that the observer is performing some other task besides opening his eyes. The observer is adjusting his eyes and stance to collect a multiplicity of images, at different focal lengths and from different references. The mental image of the object is extraordinarily complex, so simple image comparison may not be adequate when describing thought at its preliminary stages. The multiple focal lengths and apertures, and the fact that images are scanned across the retina in a number of patterns, suggest that the observer is indeed sampling the environment, which was a key point on one side or the other of the cognitivists' debate. The way the brain evaluates each image and elicits further sequences of images with slight adjustments is unlike simple photography.
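
A toy sketch of that sampling idea in Python (my own illustration with invented numbers, not the reading study's method): jittering a coarse sensor by sub-pixel amounts and pooling the samples yields an effective resolution finer than the sensor spacing.

import numpy as np

# Toy illustration (invented numbers): a coarse "retina" samples a fine
# signal at several sub-pixel jitters; pooling the jittered samples recovers
# detail finer than the sensor spacing, loosely like the micro-twitch above.
fine = np.sin(np.linspace(0, 4 * np.pi, 400))      # the "scene"
coarse_step = 20                                    # retina spacing: every 20th point

single = fine[::coarse_step]                        # one fixation: 20 samples
jitters = range(0, coarse_step, 5)                  # four micro-shifts of the eye
pooled = np.stack([fine[j::coarse_step][:20] for j in jitters])

print("one fixation:", single.shape)                       # (20,)
print("pooled micro-scans:", pooled.T.reshape(-1).shape)   # (80,) finer effective sampling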

 

The collection of visual data is probably more complex than smell or touch, and it may be that the act of collecting visual data is itself the earliest evidence of a mind in operation. I suspect that the need to study an image is the need to find patterns within it that match those already in memory: pattern recognition, hunting for edges and shadows, perhaps geometric primitives as well. So the thinking process starts at the first moment of observation. The later forms of thinking seem more like reflection; they are less active and require less physical participation. Not many individuals are aware of what their eyes are doing during simple tasks, but the eyes are highly engaged.
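
For the edge-hunting step, a minimal sketch (an illustration only, not a claim about how the visual system actually computes): convolving the sampled image with a difference kernel makes intensity edges stand out as large responses.

import numpy as np

# Minimal sketch of "hunting for edges": a difference-of-neighbors kernel
# responds strongly where intensity changes abruptly. Illustration only.
row = np.array([0, 0, 0, 9, 9, 9, 0, 0], dtype=float)   # a bright bar on a dark field
kernel = np.array([-1, 0, 1])                            # crude derivative filter

edges = np.convolve(row, kernel, mode="same")
print(edges)   # large magnitudes at the bar's two edges, near zero elsewhere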

 

If some meaning is associated with individual images (where the edges lie next to each other, the edges become some "it"), then the cascade of images may in some manner be building a small narrative: I am here, it is there, the sun is over there, the wind comes from there, and my dog is running after the white rabbit. (Simple propositions for a collection of "its" and whether or not they are moving.) Later reflection adds more detail: the breed of dog, the species of rabbit, north and south, the name of the mountain range, the state or province (perhaps causality is introduced at this point, correctly or incorrectly). The narrator of his experience requires language at some later stage to put all the ancillary information in order for sharing with others. Perhaps he builds the primitive narrative simply to store it for easier recall (he may well be a scientific witness or an emotional hedonist). Reflection seems pretty far down the line and may only be required to update minor details, as the narrative must stay open and available for amendment or combination with other such holiday experiences. The date and time, for instance, would be added later so that it can be sequenced with other narratives, in order to be an engaging guest at a beer festival. The narrative may be the only choice for storage. It seems that the specific language of the speaker enters only after the simple propositions are created. The simple propositions come closest to being modelled with notation; the complex narrative requires considerably more elaboration and then introduces ambiguity.
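
Those simple propositions are the one piece of this that models cleanly in notation. A minimal sketch (the field names and the enrichment step are hypothetical): store the scene as subject-relation-object triples and let later reflection amend them without disturbing the primitive narrative.

from dataclasses import dataclass, field

# Hypothetical sketch: a scene as simple subject-relation-object propositions,
# amended later by "reflection" without rewriting the original narrative.
@dataclass
class Proposition:
    subject: str
    relation: str
    obj: str
    detail: dict = field(default_factory=dict)   # later amendments land here

scene = [
    Proposition("I", "am-at", "here"),
    Proposition("sun", "is-over", "there"),
    Proposition("dog", "chases", "white rabbit"),
]

# reflection, later: enrich without disturbing the primitive propositions
scene[2].detail.update(breed="collie", species="Lepus", when="2011-02-08")

for p in scene:
    print(p.subject, p.relation, p.obj, p.detail)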

 

Correct me where I stray off, but it seems that the Mind we wish to construct has much to do with concepts from cinematography. That implies much editing.

 

Vladimyr Ivan Burachynsky PhD

 

 

[hidden email]

 

120-1053 Beaverhill Blvd.

Winnipeg, Manitoba R2J 3R2

Canada

(204) 254-8321 Land

(204) 801-6064 Cell

 

 

 



============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
12