Robert Rosen


Robert Rosen

Marcus G. Daniels
Glen wrote:
> And if you tell it that
> there are only, say, 10 possible answers, it will _merely_ produce one
> of those prescribed 10 possible answers.  
>  
You could say that about an employee, too, but that doesn't give much
insight into what that person might actually be able to do.
> (I live for the day when I ask
> a computer:  "Is this true or false?"  And it answers:  "Neither, it's
> _blue_!"  ;-)
Computers typically don't do that, except in paraphrasing/concept
extraction expert systems (e.g. Cyc), because people don't typically
want them to do that.  For example, it's clear when this Java program is
compiled (and rejected) that the compiler knows what `color' really is.

enum Color { Blue, Red }

public class Test {

  public static void main (String args[]) {

    Color color = Color.Blue;

    if (color == true) {   // rejected at compile time: incomparable types Color and boolean
      System.out.println ("true!");
    }
  }
}

One easy way to let that go is to switch to a dynamically typed
language, where logical inconsistencies are dealt with on a case-by-case
basis by the programmer.  (Presumably until the programmer can `see' how
things should fit together.)
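A rough sketch of that case-by-case handling, simulated in Java via `Object` since the example above is Java (the class name, method, and strings here are all invented for illustration):

```java
// The static check from the example above is deferred to runtime, and the
// programmer decides each case individually -- including the `blue' answer.
public class DynamicColor {
    enum Color { Blue, Red }

    static String classify(Object value) {
        if (value instanceof Boolean) {            // the true/false cases
            return ((Boolean) value) ? "true!" : "false!";
        }
        if (value instanceof Color) {              // the case the type system forbade
            return "neither, it's " + value + "!";
        }
        return "unknown";                          // everything not yet `seen'
    }

    public static void main(String[] args) {
        System.out.println(classify(Color.Blue));   // neither, it's Blue!
        System.out.println(classify(Boolean.TRUE)); // true!
    }
}
```

The inconsistency isn't gone; it has just moved from the compiler's jurisdiction into the programmer's chain of runtime checks.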

As far as detecting (supposedly) ill-posed questions goes, if you are
willing to put aside the complex matter of natural language processing,
it seems to me it's a matter of similarity search against a set of
propositions, and then engaging in a dialog of generalization and
precisification with the user to identify an unambiguous and agreeable
form for the question that has appropriate answers.
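The first step, the similarity search against stored propositions, can be sketched crudely with word-overlap (Jaccard) scoring; the class name, the sample propositions, and the choice of Jaccard are all mine, purely for illustration:

```java
import java.util.*;

// Toy version of the similarity-search step: score a user question against
// a set of stored propositions by Jaccard overlap of their word sets.  The
// dialog of generalization/precisification would start from the best match.
public class PropositionMatch {
    static Set<String> tokens(String s) {
        return new HashSet<>(Arrays.asList(s.toLowerCase().split("\\W+")));
    }

    static double jaccard(String a, String b) {
        Set<String> ta = tokens(a), tb = tokens(b);
        Set<String> inter = new HashSet<>(ta);
        inter.retainAll(tb);                        // shared words
        Set<String> union = new HashSet<>(ta);
        union.addAll(tb);                           // all words
        return union.isEmpty() ? 0.0 : (double) inter.size() / union.size();
    }

    static String bestMatch(String question, List<String> propositions) {
        return propositions.stream()
            .max(Comparator.comparingDouble(p -> jaccard(question, p)))
            .orElse(null);
    }

    public static void main(String[] args) {
        List<String> props = Arrays.asList(
            "the sky is blue",
            "grass is green",
            "snow is white");
        System.out.println(bestMatch("is the sky blue or not", props)); // the sky is blue
    }
}
```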

Marcus




Robert Rosen

glen ep ropella
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Marcus G. Daniels on 01/08/2008 08:49 AM:
> As far as detecting (supposedly) ill-posed questions goes, if you are
> willing to put aside the complex matter of natural language processing,
> it seems to me it's a matter of similarity search against a set of
> propositions, and then engaging in a dialog of generalization and
> precisification with the user to identify an unambiguous and agreeable
> form for the question that has appropriate answers.  

But the issue isn't about handling ill-posed questions on a case-by-case
basis.  In fact, the hypothesis is that ill- versus well- posed
questions is an unrealistic dichotomy.  It's just another form of the
"excluded middle".

A primary point made by RR is that living systems can handle ambiguity
where "machines" cannot.

Of course, it's true that if a programmer pre-scribed a method for
detecting and handling some particular ambiguity, then the machine will
_seem_ like it handles that ambiguity.  But, programmers haven't yet
found a way to handle all ambiguity a computer program may or may not
come across in the far-flung future.  That's in contrast to a living
system, which we _presume_ can handle any ambiguity presented to it (or,
in a softer sense, many many more ambiguities than a computer program
can handle).

- --
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
Almost nobody dances sober, unless they happen to be insane. -- H. P.
Lovecraft

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.6 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFHg7G4ZeB+vOTnLkoRAjTtAKCu0nimkhWcQdIYDn8Uy05N6jwaUACfUzUc
g6rWx3ZPlmAaayG7qqJHJ1g=
=kWTj
-----END PGP SIGNATURE-----



Robert Rosen

Marcus G. Daniels
Glen E. P. Ropella wrote:
> But, programmers haven't yet
> found a way to handle all ambiguity a computer program may or may not
> come across in the far-flung future.  That's in contrast to a living
> system, which we _presume_ can handle any ambiguity presented to it (or,
> in a softer sense, many many more ambiguities than a computer program
> can handle).
>  
Perception, locomotion, and signaling are capabilities that animals have
evolved for millions of years.  It's not fair to compare a learning
algorithm to the learning capabilities of a living system without
factoring in the fact that robots aren't disposable for the sake of
realizing evolutionary selection and search.  And even if they were,
would you want to drive over robots on the highway to make it so?
Anything that requires significant short-term memory and integration of
broad but scarce evidence is probably something a computer will be
better at than a human.  It may be that a `programmer' implements a
self-organized neural net, or a kernel eigensystem solver, but that only
concerns the large classes of signals that can be extracted.  It's not
like some giant if/then statement for all possible cases that a
programmer would keep tweaking.
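As a minimal illustration of the contrast with a giant if/then statement: a perceptron, where the programmer writes only the update rule and the decision boundary is learned from examples.  (This is a far simpler learner than the neural net or kernel solver mentioned; the data and learning rate are invented.)

```java
// The programmer supplies an update rule, not the decision logic.  The
// perceptron learns logical AND from examples; no case table appears here.
public class TinyPerceptron {
    double w0, w1, bias;                       // learned parameters

    int predict(int a, int b) {
        return (w0 * a + w1 * b + bias) > 0 ? 1 : 0;
    }

    void train(int[][] inputs, int[] labels, int epochs, double rate) {
        for (int e = 0; e < epochs; e++)
            for (int i = 0; i < inputs.length; i++) {
                int error = labels[i] - predict(inputs[i][0], inputs[i][1]);
                w0   += rate * error * inputs[i][0];   // classic perceptron update
                w1   += rate * error * inputs[i][1];
                bias += rate * error;
            }
    }

    public static void main(String[] args) {
        int[][] x = {{0,0},{0,1},{1,0},{1,1}};
        int[]   y = {0, 0, 0, 1};              // logical AND
        TinyPerceptron p = new TinyPerceptron();
        p.train(x, y, 25, 0.1);
        System.out.println(p.predict(1, 1) + " " + p.predict(0, 1)); // 1 0
    }
}
```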

My assertion remains that the things computers do are primarily limited
by the desire of humans to 1) understand what was learned, and then 2)
use it.   If those two conditions are removed, then we are talking about
a very different scenario.  There's little incentive to develop control
systems for robots to keep them stumbling around as long as possible,
with no limits on the actions they can take.

Marcus



Robert Rosen

Phil Henshaw-2
In reply to this post by glen ep ropella
I thought the implication was that the organization of life is an
inherently ill-posed question from an observer's perspective.  To me
that either means you accept 'bad answers' or 'better and better
answers', and the difference is methodological.



Phil Henshaw
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
680 Ft. Washington Ave
NY NY 10040
tel: 212-795-4844
e-mail: pfh at synapse9.com
explorations: www.synapse9.com
-- "it's not finding what people say interesting, but finding what's
interesting in what they say" --






Robert Rosen

glen ep ropella
In reply to this post by Marcus G. Daniels
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Marcus G. Daniels on 01/08/2008 10:44 AM:
> Perception, locomotion, and signaling are capabilities that animals
> have evolved for millions of years.   It's not fair to compare a
> learning algorithm to the learning capabilities of a living system
> without factoring in the fact that robots aren't disposable for the
> sake of realizing evolutionary selection and search.

Well, first off, you're not arguing with me.  [grin]  I'm doing my best
to explain what I understand of RR's ideas.  That's all.  Feel free to
go over to one of the rosen mailing lists to see how they mostly dislike
me because I disagree with them (... or in their opinion because I'm a
stubborn jerk who doesn't listen to them or because I'm just too lazy to
dig really deep into RR's ideas ... or whatever ;-).

> Anything that requires significant short
> term memory and integration of broad but scarce evidence is probably
> something a computer will be better at than a human.

That's just plain silly in terms of RR's ideas because _humans_ program
the computer.  Until/unless we come up with a computer that programs
itself, or a computer that programs another computer, or something of
that sort, computers will _never_ be better at any task than humans.

I.e. in RR terms, humans are THE canalizing "efficient cause" for any
computer system.

> My assertion remains that the things computers do are primarily
> limited by the desire of humans to 1) understand what was learned,
> and then 2) use it.   If those two conditions are removed, then we
> are talking about a very different scenario.  There's little
> incentive to develop control systems for robots to keep them
> stumbling around as long as possible, with no limits on the actions
> they can take.

Computers don't _do_ anything.  Humans _do_ things using computers as tools.

I believe that will change.  But for the time being, it's the case.
Anything else is speculation.  RR speculates (albeit with significant
rhetoric and build-up) that computers, as we now know them, are
incapable of doing everything living systems do.  I would speculate that
computers, as we now know them (but with manymanymanymany more of them
and radically different software), can do everything living systems do.
 But, just because my opinion differs from RR's doesn't mean both
opinions are anything more than speculation.

- --
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
The Government of the United States is in no sense founded on the
Christian religion. -- John Adams

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.6 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFHg8xOZeB+vOTnLkoRAuAEAKCDue88MCLn77MZv/riWMkqE6l0cwCgn50L
izuRo5hXA/ySB2u83GdBUWA=
=YuBQ
-----END PGP SIGNATURE-----



enough of Robert Rosen

Marcus G. Daniels
Glen E. P. Ropella wrote:

>> Anything that requires significant short
>> term memory and integration of broad but scarce evidence is probably
>> something a computer will be better at than a human.
>>    
>
> That's just plain silly in terms of RR's ideas because _humans_ program
> the computer.  Until/unless we come up with a computer that programs
> itself, or a computer that programs another computer, or something of
> that sort, computers will _never_ be better at any task than humans.
>  
As you know, one form is Genetic Programming.  
> I.e. in RR terms, humans are THE canalizing "efficient cause" for any
> computer system.
>  
Fine, so let's move on from RR terms.   It seems to be a dead end!

Marcus



enough of Robert Rosen

glen ep ropella
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Marcus G. Daniels on 01/08/2008 11:44 AM:
> Fine, so let's move on from RR terms.   It seems to be a dead end!

No, it's not a dead-end.  It's just a body of theoretical work that we
may or may not need as yet.  I fully support the development of theory
prior to needing that theory.

What if we plug along in the "computationalist" paradigm for the next
100 years and _finally_ realize that, hey! we could have used that
gobbledygook Robert Rosen generated?  Or, worse yet, what if we _forget_
about it completely and end up reinventing it?

No, I don't think we should categorize RR terms as a dead-end... not
yet, anyway.

However, if you don't like speculative theoretical discussions, then
feel free to avoid them! [grin]

- --
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
If people never did silly things, nothing intelligent would ever get
done. -- Ludwig Wittgenstein

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.6 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFHg9k+ZeB+vOTnLkoRAgo8AJ48T7cnRvaK7aDoOEMYsBgBHynYpgCg2p9/
7jpoMl79OW5SuwfoGQqNUVI=
=zuRw
-----END PGP SIGNATURE-----



Robert Rosen

glen ep ropella
In reply to this post by Phil Henshaw-2
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Phil Henshaw on 01/08/2008 11:14 AM:
> I thought the implication was that the organization of life is an
> inherently ill-posed question from an observer's perspective.  To me
> that either means you accept 'bad answers' or 'better and better
> answers', and the difference is methodological.

Sure, if by "ill-posed" you mean "our formalisms can't handle the
truth".  If you must use a binary categorization of accepting bad
answers or better and better answers, then RR's work would fall into the
latter category.

And he was trying to help us rigorously determine if and how our
formalisms are inadequate.... i.e. if we get to the point where we can't
accept the poor expressiveness of our formalisms, then what do we do?
Well, develop a more powerful formalism ... hence all that hoo-ha about
category theory in "Life Itself".

- --
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
Moderation in temper is always a virtue; but moderation in principle is
always a vice. -- Thomas Paine

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.6 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFHg9pCZeB+vOTnLkoRAkuaAKC92EpmCOuX7YGG03aPOaAC+h1GawCgtsq9
Z1ggeFcamjQAM3cwKLxElPo=
=CQXD
-----END PGP SIGNATURE-----



enough of Robert Rosen

Marcus G. Daniels
In reply to this post by glen ep ropella
Glen E. P. Ropella wrote:
> It's just a body of theoretical work that we
> may or may not need as yet.  I fully support the development of theory
> prior to needing that theory.
Fine, and I fully support the deconstruction of theory prior to using it!
In what way does Genetic Programming not provide an efficient cause?  
Having a stochastic aspect, and the possibility to define new
instructions, it seems to me to provide an escape from anything a human
might have intended.   This learning algorithm could escape the
constraints of being a `tool' by being used in a robot with similar
senses as ours and interacting with the conditions of the `real' world.
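A bare-bones sketch of that stochastic escape, stripped to a (1+1) hill climber rather than full GP for brevity (the target string, alphabet, and class name are invented): the programmer supplies a fitness measure and a mutation operator, but never writes the solution itself.

```java
import java.util.Random;

// Minimal (1+1) evolutionary loop: only the fitness function and the
// mutation operator are hand-written; the answer emerges stochastically.
public class TinyEvolver {
    static final String TARGET = "blue";
    static final String ALPHABET = "abcdefghijklmnopqrstuvwxyz";

    static int fitness(String s) {                 // count matching characters
        int score = 0;
        for (int i = 0; i < TARGET.length(); i++)
            if (s.charAt(i) == TARGET.charAt(i)) score++;
        return score;
    }

    static String mutate(String s, Random rng) {   // flip one random position
        char[] c = s.toCharArray();
        c[rng.nextInt(c.length)] = ALPHABET.charAt(rng.nextInt(ALPHABET.length()));
        return new String(c);
    }

    static String evolve(long seed) {
        Random rng = new Random(seed);
        String current = "aaaa";
        while (fitness(current) < TARGET.length()) {
            String child = mutate(current, rng);
            if (fitness(child) >= fitness(current)) current = child;  // selection
        }
        return current;                            // returns only once fitness is maximal
    }

    public static void main(String[] args) {
        System.out.println(evolve(42));            // converges to "blue"
    }
}
```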

Marcus



enough of Robert Rosen

glen ep ropella
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Marcus G. Daniels on 01/08/2008 12:46 PM:
> Fine, and I fully support the deconstruction of theory prior to using it!

That's the spirit!

> In what way does Genetic Programming not provide an efficient cause?  
> Having a stochastic aspect, and the possibility to define new
> instructions, it seems to me to provide an escape from anything a human
> might have intended.   This learning algorithm could escape the
> constraints of being a `tool' by being used in a robot with similar
> senses as ours and interacting with the conditions of the `real' world.

Well, correct me if I'm wrong, but GP currently requires a human to set
up the objective function.  And even in the cases where a system is
created so that the objective function is dynamically (and/or
implicitly) evolved, my suspicion is that the GP would soon find a
computational exploit that would result in either an infinite loop
(and/or deadlock), crash, or some sort of "exception".

As for the robot, you're just begging the question.  A robot is a tool
built and programmed by us.  Or, positing a regression to where we are
currently, a robot_N that is built by robot_(N-1), that is built by
robot_(N-2), ..., is built by a living system.

RR's position might be that such a chain from 1 .. N is more fragile
than a lineage of living systems.  Namely, the efficient cause (humans
in this case) cannot be removed even with a large but finite N _because_
machines are not closed to efficient cause.

Whether or not RR's rhetoric is _sound_ is one thing.  We can prove his
rhetoric unsound by creating such a robot lineage.  But to prove his
rhetoric invalid, we'll have to show that computation is not fragile to
ambiguity.  And as far as I can tell, such a proof (that RR's rhetoric
is invalid) would involve a constructive proof that sets up a holarchy
of formal systems that, together, are not fragile in the way GP systems
are fragile.

Somehow we would have to build a set of (sufficiently complicated, as in
modern mathematics) formal systems and prove ([meta-]mathematically)
that this set is robust to ambiguity.  I.e. it will never go into an
infinite (null) loop, crash, or trigger some exception.


Of course, we could take the _easier_ tack and point out a technical
flaw in RR's rhetoric (as the largely ineffective criticism of Penrose's
argument does).  My choice for such a cheap shot criticism lies at the
heart of "closure to efficient cause".  And my criticism is basically
that nothing is really closed to efficient cause.  Everything is
embedded in a dynamically generated and evolving goo that is
holistically dependent on everything else in the goo.  But even if such
cheap shots are successful in getting people to ignore RR, it still
doesn't make any progress on RR's main question:  "can we devise better
formalisms that more accurately describe living systems?"

- --
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
We are drowning in information, while starving for wisdom. -- E.O. Wilson

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.6 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFHg+xkZeB+vOTnLkoRApPWAKCgotysX3Ooh36zeYj7Ipg4Mm59hACdFX+x
krJqxKFwyGGc8q99ePPb9X8=
=c1fa
-----END PGP SIGNATURE-----



enough of Robert Rosen

Albert Moore & Associates
In reply to this post by Marcus G. Daniels

-----Original Message-----
From: [hidden email] [mailto:[hidden email]] On Behalf Of Marcus G. Daniels
Sent: Tuesday, January 08, 2008 1:47 PM
To: The Friday Morning Applied Complexity Coffee Group
Subject: Re: [FRIAM] enough of Robert Rosen

> Glen E. P. Ropella wrote:
>> It's just a body of theoretical work that we
>> may or may not need as yet.  I fully support the development of theory
>> prior to needing that theory.
> Fine, and I fully support the deconstruction of theory prior to using it!
> In what way does Genetic Programming not provide an efficient cause?

Because you need to look beyond the physical plane of existence into the
spiritual to see that the soul is the real administrator of life's path.

Genetics is simply the hardware.  Genetics is not actually causal and
nothing physical can ever be causal.  See The Kybalion, Principles of
Hermes.

> Having a stochastic aspect, and the possibility to define new
> instructions, it seems to me to provide an escape from anything a human
> might have intended.   This learning algorithm could escape the
> constraints of being a `tool' by being used in a robot with similar
> senses as ours and interacting with the conditions of the `real' world.
>
> Marcus

 

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org


not enough of Robert Rosen

Joost Rekveld
In reply to this post by glen ep ropella

On Jan 8, 2008, at 10:34 PM, Glen E. P. Ropella wrote:

>
>> In what way does Genetic Programming not provide an efficient cause?
>> Having a stochastic aspect, and the possibility to define new
>> instructions, it seems to me to provide an escape from anything a  
>> human
>> might have intended.   This learning algorithm could escape the
>> constraints of being a `tool' by being used in a robot with similar
>> senses as ours and interacting with the conditions of the `real'  
>> world.
>
> Well, correct me if I'm wrong, but GP currently requires a human to  
> set
> up the objective function.  And even in the cases where a system is
> created so that the objective function is dynamically (and/or
> implicitly) evolved, my suspicion is that the GP would soon find a
> computational exploit that would result in either an infinite loop
> (and/or deadlock), crash, or some sort of "exception".

This is certainly a good point, but from what I understand of Rosen's
theories, another limitation of GP is that the language in which the
programming is done cannot evolve.  The syntax will always be
circumscribed by a subset of the programming language used to set up the
GP, and the semantics of what the symbols represent in terms of
real-world measurements or actions will be fixed by the robot's senses
and actuators.
The only novelty possible in such a system is new arrangements of
already existing primitives (which in a computationalist view could be
enough, actually, if you think like Zuse/Toffoli etc. that the universe
really is a big computer anyway, which means that an isomorphism could
be possible).  But from what I understand from Rosen, Pattee, Pask and
Cariani, novelty in a real, non-platonic (let's say Aristotelian?) world
has to do with the appearance of new primitives: new symbols with new
meanings in a new syntax.  The construction of symbols in the real world
is an open-ended process, which is why no isomorphism with a closed,
formal system is possible.

Right, that's my two cents' worth, and no doubt a garbled version of
what I've been reading lately.  (I'm reading the thesis by Peter Cariani
right now, who was a student of Pattee and sort of a follower of Rosen.)
I'm not schooled in these matters in any way, so criticism, yes please,
and I will probably have no other reply than pointing to the writings
of Cariani and Rosen.

ciao,

Joost.


-------------------------------------------

                             Joost Rekveld
-----------    http://www.lumen.nu/rekveld

-------------------------------------------

"This alone I ask you, O reader, that when you peruse the
account of these marvels that you do not set up for yourself
as a standard human intellectual pride, but rather the great
size and vastness of earth and sky; and, comparing with
that Infinity these slender shadows in which miserably and
anxiously we are enveloped, you will easily know that I have
related nothing which is beyond belief."
(Girolamo Cardano)

-------------------------------------------








enough of Robert Rosen

Marcus G. Daniels
In reply to this post by glen ep ropella
Glen E. P. Ropella wrote:

>> In what way does Genetic Programming not provide an efficient cause?  
>> Having a stochastic aspect, and the possibility to define new
>> instructions, it seems to me to provide an escape from anything a human
>> might have intended.   This learning algorithm could escape the
>> constraints of being a `tool' by being used in a robot with similar
>> senses as ours and interacting with the conditions of the `real' world.
>>    
>
> Well, correct me if I'm wrong, but GP currently requires a human to set
> up the objective function.  And even in the cases where a system is
> created so that the objective function is dynamically (and/or
> implicitly) evolved, my suspicion is that the GP would soon find a
> computational exploit that would result in either an infinite loop
> (and/or deadlock), crash, or some sort of "exception".
>  
The objective function can be to an extent arbitrary and self-defined by
the agent, but there must be a large implicit emphasis on avoiding
death.  In a simulated world, a way to deal with exceptions is to trap
them, and then reflect that in the objective function.  Existing memory
management hardware, operating systems and programming languages have
good facilities for trapping exceptions.

Imagine you have some program evolving in a process on a Linux system.
Yes, a program could [try] to allocate so much memory that the system
would crash, or find a stack exploit (e.g. to get root and compromise
the kernel), but by and large the way a broken program would die is
because the memory management hardware trapped illegal memory requests.
If a process actually succeeds in killing the whole system, it's a
security bug in the operating system (or secure programming language,
etc.).  As for infinite loops or deadlocks, these are things that a
management process can readily detect.  For the worst case, satellites
typically have independent monitoring hardware that reboots the main
operating system should it become unresponsive.  But normally you could
just have one software process monitoring the performance of the others.
And here I mean performance in the sense of CPU utilization (e.g. is it
cycling through the same program counter range over and over) and
wall-clock runtime.  This is all in the context of a simulated
environment, of course.  In the robot example, the robots would just
slump on the ground or jump up and down or whatever until their energy
supplies were exhausted.
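The trap-and-penalize scheme can be sketched in one harness: run each candidate with a wall-clock budget and convert any exception or overrun into a fitness penalty.  (The class name, penalty value, and timeouts are invented; this is a sketch of the idea, not anyone's actual system.)

```java
import java.util.concurrent.*;

// Guarded fitness evaluation: a candidate that throws, or that exceeds its
// wall-clock budget, is assigned a death penalty instead of killing the host.
public class GuardedFitness {
    static final double DEATH_PENALTY = -1.0;

    static double evaluate(Callable<Double> candidate, long timeoutMillis) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            return pool.submit(candidate).get(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {      // looping too long: treat as dead
            return DEATH_PENALTY;
        } catch (Exception e) {             // threw or crashed: treat as dead
            return DEATH_PENALTY;
        } finally {
            pool.shutdownNow();             // reclaim the candidate's thread
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(evaluate(() -> 0.75, 100));         // healthy candidate
        System.out.println(evaluate(() -> 1 / 0 + 0.0, 100));  // divides by zero
        System.out.println(evaluate(() -> {                    // overruns its budget
            Thread.sleep(10_000); return 1.0;
        }, 100));
    }
}
```

The same pattern scales up: the "management process" watching CPU and wall-clock time is just this harness running outside the evolving population instead of inside one JVM.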

Marcus



enough of Robert Rosen

Marcus G. Daniels
In reply to this post by glen ep ropella
Glen E. P. Ropella wrote:
> As for the robot, you're just begging the question.  A robot is a tool
> built and programmed by us.  Or, positing a regression to where we are
> currently, a robot_N that is built by robot_(N-1), that is built by
> robot_(N-2), ..., is built by a living system.
>  
I'm imagining that the program the robot executes is also a genetic
program, but one that benefits from richer and more dynamic perceptual
data than in a purely simulated world.  The genetic code can be
inherited by the robot (a rusty old robot transfers its instructions to
a new shiny robot), or the robot can evolve its own programs during its
lifetime using simulation or experiment.  The GP candidates are random
perturbations of things that sort of work, so the random noise
eventually gets it, or one of its millions of peers, out of local minima
(relative to its objectives).  There's also effectively noise in
addition to the signals from the dynamically generated and evolving goo
of the environment.

Marcus



not enough of Robert Rosen

Marcus G. Daniels
In reply to this post by Joost Rekveld
Joost Rekveld wrote:
> This is certainly a good point, but from what I understand of Rosen's  
> theories another limitation of GP has to do with the fact that the  
> language in which the programming is done can not evolve.
I don't see why this must be so.  One could imagine that a robot had a
field-programmable gate array that could, in effect, burn an all-new
processor and bring it online.  But usually when new computer
architectures are being developed, the developers just write a software
simulator for it in the initial stages (one that mimics the intended
physics of the hardware design).
Even the adiabatic quantum computer people at DWave are using existing
silicon process technologies to design circuits.

> The syntax  
> will always be circumscribed by a subset of the programming language  
> that is used to set up the GP, and the semantics of what the symbols  
> represent in terms of real-world measurements or actions will be  
> fixed by the robot's senses and actuators.
Biotech, nanotech... ?



not enough of Robert Rosen

Marcus G. Daniels
In reply to this post by Joost Rekveld
Joost Rekveld wrote:
> This is certainly a good point, but from what I understand of Rosen's  
> theories another limitation of GP has to do with the fact that the  
> language in which the programming is done can not evolve.
20 amino acids seem to go a long way...  :-)



not enough of Robert Rosen

Joost Rekveld
In reply to this post by Marcus G. Daniels

On Jan 8, 2008, at 11:52 PM, Marcus G. Daniels wrote:

> Joost Rekveld wrote:
>> This is certainly a good point, but from what I understand of Rosen's
>> theories another limitation of GP has to do with the fact that the
>> language in which the programming is done can not evolve.
> I don't see why this must be so.   One could imagine that a robot  
> had a
> field programmable gate array that could, in effect, burn an all new
> processor and bring it online.

sure, but can a robot develop representations for other operations  
than those already in its specifications ?
can it design a processor that has some novel feature that is not  
already possible in the robots current architecture ?

> But, usually when new computer
> architectures are being developed, the developers just write a  
> software
> simulator for it in initial stages (that mimics the intended  
> physics of
> the hardware design).
> Even the adiabatic quantum computer people at DWave are using existing
> silicon process technologies to design circuits..

I guess the main creative factor in these examples is the people
involved in designing new specifications and defining symbols
representing aspects of the new hardware they are developing...


>
>> The syntax
>> will always be circumscribed by a subset of the programming language
>> that is used to set up the GP, and the semantics of what the symbols
>> represent in terms of real-world measurements or actions will be
>> fixed by the robot's senses and actuators.
> Biotech, nanotech... ?

yes, I guess so.
In this Cariani thesis I mentioned he posits some kind of real-world  
assembly process enabling the construction of new senses and actuators.
( see "On the design of devices with emergent semantic functions",  
<http://homepage.mac.com/cariani/CarianiWebsite/Cariani89.pdf> )

I guess the crucial difference is that such a self-constructing robot  
would be grounded in the real world and not in a prespecified  
computed universe. It would be able to evolve its own computed universe.
I'm not sure what to think of all this, but I like Cariani's ideas a  
lot and so far I haven't found any basic flaw in them.
But, as I said, since I'm not schooled in these matters that doesn't
necessarily mean very much.



On Jan 8, 2008, at 11:56 PM, Marcus G. Daniels wrote:
> Joost Rekveld wrote:
>> This is certainly a good point, but from what I understand of Rosen's
>> theories another limitation of GP has to do with the fact that the
>> language in which the programming is done can not evolve.
> 20 amino acids seem to go a long way...  :-)
>

characters make no language...


cheers,

Joost.

-------------------------------------------

                             Joost Rekveld
-----------    http://www.lumen.nu/rekveld

-------------------------------------------

"This alone I ask you, O reader, that when you peruse the
account of these marvels that you do not set up for yourself
as a standard human intellectual pride, but rather the great
size and vastness of earth and sky; and, comparing with
that Infinity these slender shadows in which miserably and
anxiously we are enveloped, you will easily know that I have
related nothing which is beyond belief."
(Girolamo Cardano)

-------------------------------------------








not enough of Robert Rosen

Marcus G. Daniels
Joost Rekveld wrote:
> sure, but can a robot develop representations for other operations  
> than those already in its specifications ?
> can it design a processor that has some novel feature that is not  
> already possible in the robots current architecture ?
>  
The main capability it would offer would be to make certain kinds of
calculations more feasible, or more accurate.
That could be important for a robot to specialize to new kinds of
physical environments, for example.
Another reason might be to capture chaotic effects that would only be
witnessed with parallel execution.
But I'm hard pressed to think of many things that can't be simulated, at
least in principle.   To say something can't be simulated is to say it
can't be described or modeled, which is to say that conversation about
it is ridiculous!  If not, it's up to the modeler to say in what specific
ways a set of primitives is inadequate.

>> But, usually when new computer
>> architectures are being developed, the developers just write a  
>> software
>> simulator for it in initial stages (that mimics the intended  
>> physics of
>> the hardware design).
>> Even the adiabatic quantum computer people at DWave are using existing
>> silicon process technologies to design circuits..
>>    
>
> I guess the main creative factor in these examples are the people  
> involved in designing new specifications and defining symbols  
> representing aspects of the new hardware they are developing...
>  
But in this context, those symbols can be represented with the old
symbols, provided the old symbols were from a Turing complete system
(and they are).    The notion of introducing a symbol or verb to a
computational system is no big deal.   It's a primitive in programming
languages like Lisp.
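
For what it's worth, here's a minimal sketch (in Python rather than
Lisp, with hypothetical names) of what "introducing a symbol or verb to
a computational system" at runtime can look like in a dynamically typed
language:

```python
# A tiny "computational system": a symbol table mapping names to operations.
system = {
    "add": lambda a, b: a + b,
    "mul": lambda a, b: a * b,
}

def introduce(name, definition):
    """Bind a brand-new symbol at runtime -- no recompilation needed."""
    system[name] = definition

def evaluate(symbol, *args):
    """Look the symbol up and apply it, much like a Lisp eval/apply loop."""
    return system[symbol](*args)

# The system starts out with no notion of "square"; we introduce it,
# defined in terms of the symbols it already has.
introduce("square", lambda x: evaluate("mul", x, x))

print(evaluate("square", 7))  # -> 49
```

Of course, this only shows that new symbols are cheap to add; whether
their *semantics* can be anything other than combinations of the old
primitives is exactly the point under dispute.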

>> 20 amino acids seem to go a long way...  :-)
>>    
>
> characters make no language...
>  
It seems to me it's the language that's important, and how suitable that
language is to the environment at hand.
That's not to say there aren't new useful primitives to be discovered.

Marcus



not enough of Robert Rosen

glen ep ropella
In reply to this post by Joost Rekveld
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1


I'm going to violate the bottom-post rule because all 3 of the following
excerpts focus on the point I made (in response to Günther) that there's
a difference between "computation" as the software that runs on a
machine and the machine, itself.

When we talk about "computation", are we talking about a concrete
_thing_ that exists out there in the world?  Or are we talking about an
abstract machine that exists only in our minds (or software as the case
may be)?

Marcus' comments show that he's talking about the former... computers
are real machines that can avail themselves of the full machinery of
reality.  Hence, that type of "computation" isn't limited in the way RR
suggests because that's not what "computability" refers to.

A robot that can change itself based on sensory-motor interactions with
the real world is not a computer in the same sense as a universal Turing
machine.

This distinction provides plenty of fodder for long arguments and
confusion between Rosenites.  Some even say that an extant, concrete
machine in the real world actually is complex_rr in the same sense that
a rock or a mountain is (but not a tree or a cat).  Others vehemently
deny that.  The former seem to submit to degrees of complexity_rr
whereas the others seem to think it's bivalent.

So, I already asked this; but the conversation really needs a clear
understanding of what we mean by "computation".  Perhaps we could split
it into two categories:  computation_c would indicate the activities of
a concrete machine and computation_a would indicate the (supposed)
activities of a universal Turing machine.

Joost Rekveld on 01/08/2008 02:13 PM:
> isomorphism could be possible), but from what I understand from
> Rosen, Pattee, Pask and Cariani is that novelty in a real, non-
> platonic (let's say Aristotelic ?) world has to do with the
> appearance of new primitives: new symbols with new meanings in a new
>  syntax. The construction of symbols in the real world is an open-
> ended process, which is why no isomorphism with a closed, formal
> system is possible.

Marcus G. Daniels on 01/08/2008 02:52 PM:
> I don't see why this must be so.   One could imagine that a robot had
> a field programmable gate array that could, in effect, burn an all
> new processor and bring it online.  But, usually when new computer
> architectures are being developed, the developers just write a
> software simulator for it in initial stages (that mimics the intended
> physics of the hardware design). Even the adiabatic quantum computer
> people at DWave are using existing silicon process technologies to
> design circuits..

Joost Rekveld on 01/08/2008 03:24 PM:
> I guess the crucial difference is that such a self-constructing robot
>  would be grounded in the real world and not in a prespecified
> computed universe. It would be able to evolve its own computed
> universe. I'm not sure what to think of all this, but I like
> Cariani's ideas a lot and so far I haven't found any basic flaw in
> them. But, as said, being non-schooled in these matters that doesn't
> necessarily mean very much.

- --
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
Government never furthered any enterprise but by the alacrity with which
it got out of its way. -- Henry David Thoreau

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.6 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFHhB7PZeB+vOTnLkoRAkxPAJkBFRfqeFx/UOEwqm05yJOZ8WHO9gCfTefY
HYWqQsjEqLVI5D13iIW0zoc=
=WGg8
-----END PGP SIGNATURE-----



enough of Robert Rosen

glen ep ropella
In reply to this post by Marcus G. Daniels
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Marcus G. Daniels on 01/08/2008 02:18 PM:

> Glen E. P. Ropella wrote:
>> Well, correct me if I'm wrong, but GP currently requires a human to set
>> up the objective function.  And even in the cases where a system is
>> created so that the objective function is dynamically (and/or
>> implicitly) evolved, my suspicion is that the GP would soon find a
>> computational exploit that would result in either an infinite loop
>> (and/or deadlock), crash, or some sort of "exception".
>>  
> The objective function can be to an extent arbitrary and self-defined by
> the agent, but there must be a large implicit emphasis on avoiding
> death.  In a simulated world, a way to deal with exceptions is to trap
> them, and then reflect that in the objective function.  Existing memory
> management hardware, operating systems and programming languages have
> good facilities for trapping exceptions.

Aha!  What you're implicitly referring to, here, is an assemblage
(though not a holarchy) of formal systems ... just like I suggested. [grin]

If inference within one of the formal systems (e.g. memory allocation)
reaches an impasse, the machine hops out of that system and into one
that has a different semantic grounding (e.g. the OS or a hardware
driver), which takes over and "plugs the hole".  After the hole is
plugged, it hops back inside the prior formal system and continues on.

_Or_ the latter formal system, through its inference, modifies the former
formal system (new axiom, new alphabet, whatever) such that the previous
exception can no longer obtain.  Of course, in that case, it's probably
true that the inference in the former system has to be re-run from the
start rather than picking up where it left off... but, hey, c'est la vie.
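
A toy illustration of that hop-out-and-patch loop, under the assumption
that the "formal systems" are just Python functions and the "holes" are
ordinary exceptions (all names here are hypothetical):

```python
def run_with_repair(program, environment, repair, max_attempts=3):
    """Run `program` inside its formal system; on an impasse (exception),
    hop out to `repair`, let it modify the environment (new axiom, new
    binding, whatever), then re-run the program from the start."""
    for _ in range(max_attempts):
        try:
            return program(environment)
        except Exception as hole:
            repair(environment, hole)  # the outer system "plugs the hole"
    raise RuntimeError("no repair plugged the hole")

# Inner system: tries to use a symbol its environment may not define yet.
def program(env):
    return env["greet"]("world")

# Outer system: different semantic grounding -- it can add missing symbols.
def repair(env, hole):
    if isinstance(hole, KeyError):
        env[hole.args[0]] = lambda who: "hello, " + who

print(run_with_repair(program, {}, repair))  # -> hello, world
```

Note that `program` really is re-run from scratch after each repair,
which is the "c'est la vie" caveat above; and the handlers are designed
in advance, which is exactly why an ad hoc assemblage like this can't
cover every hole.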

The reason I suggested a holarchy rather than just an ad hoc assemblage
of systems, however, is that it's unlikely we'd be able to design an
assemblage of formal systems to handle every exception.
(Sorry for repeating myself...)  So, what's necessary is either the
on-the-fly generation of new systems along with on-the-fly
re-architecting of the assemblage OR a holarchy where every sub-system,
regardless of what level it's at, is further composed of sub-sub-systems.

> runtime.  This is all in the context of a simulated environment, of
> course.  In the robot example, the robots would just slump on the ground
> or jump up and down or whatever until its energy supplies were exhausted.

Well, this is another example of fragility to ambiguity and, to some
extent, is the heart of my cheap-shot criticism of RR's concept.  The
robot should fail gracefully (like living systems do).  A robot endowed
with the holarchy of formal systems would do everything in its power to
avoid slumping on the ground or doing something over and over with no
discernible effect.  I.e. it would _explore_ not only its own repertoire
(determined by the formal systems) but also the repertoire of its
environment.  Hence, a robot would find ways to harness things in its
environment to plug any holes (resolve any ambiguities) it couldn't
otherwise plug.

E.g. a troubled robot may well find itself replacing its aging
transistor-based "computer" with, say, a bag of wet fat/meat it harvests
from that annoying human who lives in the apartment next door.

- --
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
Do not fear to be eccentric in opinion, for every opinion now accepted
was once eccentric. -- Bertrand Russell

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.6 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFHhCONZeB+vOTnLkoRAh5RAJ9fqcffe75m7axl9b1u8z1Rvbq/gACgkaTS
FTBfh0LyX/ibYot7lIgitN8=
=cUMT
-----END PGP SIGNATURE-----

