(no subject)

Nick Thompson
Phil,

Answering your post will require some heavy thinking.  I have a garden that is in danger of dying of the heat and a brain, likewise, so it may be a few days.  Probably most of the problems you are having with me are because I am by instinct (if not ideology) a behaviorist.  For me, mind talk, and even talk like "my" brain, has to be justified in terms of patterns in what we can see, feel, and touch.  Let me pick on the "my" brain notion.  Perhaps this is one of the problems with computer analogies, BUT ... would we ever say that a program "has" the hardware it runs on?  We would be more likely to say, I should think, that the computer has the software.  Why, then, do we say that I have a brain?  MY BODY has a brain for sure, but I am not my body.  Who is the I that has my brain?  Is "I" a proper loop?

Anyway, I said I wasn't going to and then I did.  Sorry.

Nick

Nicholas Thompson
nickthompson at earthlink.net
http://home.earthlink.net/~nickthompson


----- Original Message -----
From: Phil Henshaw
To: nickthompson at earthlink.net; friam at redfish.com
Cc: echarles
Sent: 7/16/2006 11:21:20 AM
Subject: RE:


[ph] great questions!  tried to keep short & as html...
isn't it weird how html 'fixed' the old email discussion thread format?!

Nick wrote:
All
This is becoming bloody interesting.  Although I am not sure I agree with Phil's comments, the  way in which they are posed sets the stage, I think, for huge advances in clarity.  So let's hang in here and see where we can get.  I am prepared to hang in either until we agree in detail or understand our differences in detail.  

For reasons that will become clear, I am going to respond first to Phil's message three, and then go back.  I will use different type faces for my comments and his.  For those of you who don't have your HTML turned on, this might be confusing.

PHIL'S MESSAGE THREE

Can we agree that trying to describe the characteristics of whole system
behavior, relating the simple to the complex, is a difficult challenge
that prompts each of us to stretch the meanings of words in ways others
feel uncomfortable with?

OH, Lordy Lordy yes.....  We sure can.


 I think the simplest property all ordinary
whole systems have, air currents to orangutans, is loops of organization
which give operational meaning to inside & outside. I know how to find
lots of them, but can't figure out what to call them.

(Italics mine!)  If we can cash out the italicized words in a manner that others find comfortable and can adopt, we might get a Nobel Prize.  I am serious.  Nothing has been so tortured and confusing and misleading as the deployment of the terms "outside" and "inside" in psychology (let alone biology).  This conversation is very worth having.  But let me highlight a problem with the project.  Is a system definable except insofar as it is a subsystem of a larger system?
[PH] there's a problem with the word 'definable' since the category of systems which we're trying to describe are real and all real things can be defined only by a form of pointing...  The frequent problem is separating the discussion of one thing being inside another as a mental category, from discussion of those which may refer to things inside others as links in their process.  

 Are we in one of those "turtles all the way down" situations?
[PH] I'm not sure where the question of systems needing to be subsystems comes up.  I think systems and levels can be organizationally independent in large part.  There are always going to be layer-upon-layer patterns and structures, disappearing into the distance up and down, I suppose.  I see no necessity that all that works as a whole, though.  To me the simple model is that anything that acts as a whole 'treats' the rest of the universe as a junk yard for making pick-ups and drop-offs.  That there's lots of independence seems to be a reliable general observation, though you do need to push back and forth to find out what 'lots' means there.

I don't mind, if we are, but I think we need to know.  If so, then inside and outside will refer to different levels in a multilevel system: i.e., what is inside when seen from "above" is outside when seen from "below".  (Do I have that backwards?)  To me the confusion of levels of organization is the Great Sin of psychological and biological talk, and if we can avoid it, we will be WAY AHEAD in the game.
[PH] I only meant the in-out distinction as inside the loop or not, as internal & external to an ostensibly circular process.   It might be much the same as some corresponding physical boundary, or not.    It's a stretch but maybe there are things physically inside the boundary of your skin, but not inside the process loops of your biology.   I recently heard one of those "isn't science amazing" observations, that there may be 10-20 times the number of independent organisms living inside each of us as we have cells.    I don't think they're necessarily all there for some essential purpose to us.   Some may be purely freeloaders.    I also think Stan Salthe's idea (I think that's where I got the word for it) that independent systems interpenetrate seems to fit well with observation.   Things, say a street corner, can participate in multiple systems on multiple levels at the same time.   Things in one hierarchy may also participate in other hierarchies, like all the different personal, social and intellectual hierarchies each of us belong to.   Maybe my notion that whole systems are independent is a wishful choice to make the overwhelming complexity go away, maybe it's the only way things could possibly work, maybe it's a somewhat testable observation using a system's growth process as the implicit map of its interior...


PHIL'S MESSAGE TWO:

On the suggestion I went back to Nick & Pat [Derr]'s paper, but don't find the
departure from a normal sifting of categories (philosophical
conversation) that science broke from ages ago, and that Nick's own
sharp questions about things with homing trajectories represent

I think you have a point I should listen to here, but I am not sure I understand it.  I think you are saying that what I have been saying about homing mechanisms (that they are intentional systems, etc.) is out of keeping with what Pat and I say in the paper.  I obviously have to be concerned with such an inconsistency.  Inconsistencies are usually growth points.
[PH] a little different, I'm just saying I got a charge out of the idea of 'homing' as a system phenomenon to be accounted for, which to me is good raw science concerning a physical observable.   For any number of possible reasons I didn't see the connection to observables in the other piece.   The biggest problem I've had in systems theory is getting people to talk about observables as the subjects and tests of their theories...  I don't think you can do 'virtual science'.   There's plenty of solid work to be done figuring out how to refer to confusing undefined things more reliably.

 I'm a
designer, among other things, and while I understand the need to stretch
the meanings of common words to raise new issues, I think 'design' and
'intentionality' can't be done by systems that don't make 'images'.

Here, precisely, is the nub of the issue.  What a joy to get here so quickly and pointedly.  My position is that EITHER (1) the thermostat's bimetallic thermocoil is an IMAGE of the room temperature OR (2) the statement that humans have internal working models (images) of the world is nonsense.
[PH] hmmm, why draw that conclusion?  To me images are found in version 1.0 (some kind of physical imprint), up to 3.0 (mental projections of rules, like my mental pictures of George Washington mapped onto the surfaces of electrons...).  The latter are made by physical processes but are not physical things.  A thermostat is a type 1.0 image of room temperature but I'd ask for a little more to go on before looking for whether it has type 3.0 images somewhere.  I don't see how our choice of descriptors for thermostats restricts humans, though.  Taking a middle ground, a mouse with version 2.0 images (heading for a mouse hole with a clear intent where to go) apparently has a mental map that he can detour from and return to in adapting the process of getting to safety.  Mouse thinking seems sort of one-dimensional though.  I tend to think there's a long stepwise continuum, with maybe ants working on version 1.2 and maybe chimps and bonobos working on version 2.9, though observation and discussion might change that.

I am happy with either view but to argue that there is anything special about humans in this regard seems just plain metaphysical.  
[PH] why?  People have brains, and enjoy having multiple virtual experiences at the same time occasionally.  Thermostats don't do that.  Even if there's a little logical window through which one can find a thermostat more intelligent than things that are entirely unresponsive, there are many great differences of both degree and kind.  I'd be happier including thermostats among things having some technical aspect of intelligence if they evolved on their own, though.  I don't think much of anything man-made (in the usual sense) is responsive and behaves as a whole the way natural systems do.

What IS demonstrably different about humans is their capacity to make material images and pass them around.  To say that one cannot be a designer without having an actual physical plan would be an interesting assertion, but one which humans could not always meet on all the occasions when they are planning their worlds.
[PH] right, 'being a designer' wouldn't seem to always require a picture in the brain of some desired future paired with a method of changing the world to approximate that image, but I think that's the baseline meaning of 'design'.   The other interesting version is how a craftsperson works, organically discovering the work in the process of responding to his or her materials, sort of a collaborative thing where the designer blurs the lines of the roles and the finished design is discovered in the construction rather than pre-conceived.

You might stretch it to say that natural systems and their behaviors ARE
images of their environments (though that's not what we mean when we
refer to the mental state the word refers to), but I don't think it's
right to say natural systems or their behaviors HAVE images [of? ], or designs
on, their environments. That just seems to take us back to teleology.

I have written on this to the point of obsession, but as you predict, have not had a lot of luck convincing people.  I don't think extending the descriptors intentional and image-having leads to teleology (which is, after all, a form of circular reasoning) unless the descriptor is treated as an explainer of the property it explains.  There is no harm in claiming that a rat looking for food looks like a man holding a map of the city and looking for a restaurant (if it is true); the crime comes in claiming that it is a map that causes the mappish behavior.  As with the golden goose, when we get inside and look around, we are likely to find only pink mush.
[PH] I'm not sure if the problem with teleology is its circularity, like the problem with 'fitness' in evolution theory.  Isn't the question of teleology whether systems have maps of some kind that tell them, or are, what their purpose is?  It's something more than having a kind of programmed 'destiny'.  I think there are properties of systems that might make people want to go out on that limb, but without identifying corresponding physical structures of control some other explanation is probably better.

If 'intentionality' is to be read into complex systems that probably
don't have vast reflective worlds of projected imagery seated in a
central control structure, I think it'll disagree with the natural
meaning of the word and be confusing. We grope for how to describe how
things without brains can act as a whole, but I vote we not use
'intentionality'.
[PH] maybe the natural meaning of the word can be a little broader than restricting it to human intent (v 1.0 to 3.0 as above), but I still think there are lots of associations with the word that just don't fit a general description of systems.

But don't we have exactly the same problem with "the brain as a whole"?  There is nothing there that foresees the organization of the brain, is there?  Nor anything that foresees what the brain is going to do.
[PH] Not for me.  It's true we can't explain how brains work, or what our next thoughts will be.   Still, we have lots of perfectly good words that are defined only by reference to recognizable patterns and things we don't understand very well.  My observation is that most such definitions are properly attached to specific natural forms and though language needs a lot of flexibility it's better to keep meanings sorted by what they refer to, their natural structures, and choose not to find 'apples' on 'bushes', for example.    For me no general explanation of 'brain' is needed to differentiate it from 'thermostat', just the evidence of the natural word associations we use.

To say that non-cognitive systems don't 'have designs on' other things
isn't to say that the 'designs of nature' aren't real. They seem to be
the main source of human 'invention'. Maybe the disconnect, that nature
has design, but doesn't do design, is telling though.

Is a farmer who sets about selecting the best pigeon a designer?
[PH] In the normal sense sure, if he's following his own design.  If he's selecting a pigeon for reproduction (assuming that's what you meant) but according to his cousin's design, or for some reason other than to influence the next generation, then I think not.

I think it's one
of the most widespread errors of thought, that we use the same words for
things in nature and in our minds, and don't distinguish...

We precisely disagree on this point.  So precisely, in fact, that I think we are going to be able to work it out.  Exciting.  
[PH] ??    But don't languages need both internal structure and references to things outside their structure?

To think
the world is in your mind does give you a peculiarly enthralling feeling
of power, no doubt, but as a lifestyle it actually leaves you quite
powerless.

What is the heuristic basis for maintaining a dualism a priori?  Or if the dualism isn't a priori, what is its empirical base?  Given, for instance, that human beings have only twice as many genes as fruit flies, and given that most of the most important gene sequences are nearly the same in fruit flies and humans (and, for all intents and purposes, ARE the same in chimps and humans), where does one stand to say that humans are FUNDAMENTALLY different?
[PH] every change of state is a fundamental difference, and there seem to have been long chains of interesting ones in the successions of species over time, including ours.   That's how I read the evidence of punctuated equilibrium, that speciation is a change of state in a stepwise development process.   The big changes in kind that humans went through in the past 4 million years are clearly remarkable.   What I find most intriguing is that they occurred for some reason other than facilitating verbal language, culture and arts.    Those things definitely came later I think.    I think maybe the reason we care about whether humans are fundamentally different is that we feel guilty about building ourselves up so much when there's no one to disagree with us, along with perhaps a deep suspicion that it's just a handicap that keeps us from seeing what is fundamentally different about everything else.

Given some of the major misunderstandings of modern man, (our global economic plan, for example, to make ever more complicated changes to the earth at ever more rapid rates, forever)  I think it's possible that we have a lot of other things mixed up too.   This conversation is where I'm first beginning to wonder if perhaps some of the most primitive traits of living systems are among those we think exist for us alone, for example.    That remark I made about when your mind goes blank in a kiss was about a common shared moment when we are intensely conscious and alert but often briefly stop thinking.   Which can you say is the higher state of being at that moment, maintaining conscious thought or letting it go?     The example occurred to me in thinking of all those many times when we're acting competently without conscious perception at all, and my puzzling over why we still tend to credit all our behavior to thought.  

I think the big difference with humans is our virtual worlds, these amazing projected things that are so much more persuasive, entertaining and self-sufficient than our tenuous grip on the messy realities around us, that I expect we'll continue to have some use for.

Cheers!


Nicholas Thompson
nickthompson at earthlink.net
http://home.earthlink.net/~nickthompson

Phil Henshaw  
sy at synapse9.com
www.synapse9.com


(no subject)

James Steiner
On 7/16/06, Nicholas Thompson <nickthompson at earthlink.net> wrote:
>  Who is the I that has my brain?  Is "I" a proper loop?

Tangentially, this question is part of the reason I am very disturbed
by the concept of the "singularity"--the point in history when the
mind/brain can be downloaded/backed-up, then later restored/uploaded
into, for example, a clone--thus creating practical immortality. I
don't believe that the being that results, while itself thinking that
it is the same "I" as me, will be. It can't be, as a discontinuity has
occurred--"I" might even have lived a while after the "copy" was
stored. So, even though my thoughts might go on, and something that
thinks it's me goes on as if it was me, the truth is that "I"--the me
that is behind my eyes--is dead.

E.g.: "Down and Out in the Magic Kingdom" by Cory Doctorow
Now available for free download: http://www.craphound.com/down/download.php

~~James
_____________________
http://www.turtlezero.com


Reply | Threaded
Open this post in threaded view
|

(no subject)

Steve Smith
>  I
> don't believe that the being that results, while itself thinking that
> it is the same "I" as me, will be. It can't be, as a discontinuity has
> occurred--"I" might even have lived a while after the "copy" was
> stored. So, even though my thoughts might go on, and something that
> thinks it's me goes on as if it was me, the truth is that "I"--the me
> that is behind my eyes-- is dead.
>

My favorite thought experiment on this topic is the one of the
malfunctioning Star Trek transporter... your original body/brain gets
scanned, transmitted and reconstituted.  The original fails to be
destroyed (assuming a non-destructive read/scan)... now there are two...
neither is necessarily aware of the other... in theory, both feel like
the original...

I don't believe this technology is likely to occur before we become
(somehow) sophisticated enough not to want to, or care to, do such
things... for other reasons...





singularity

Carlos Gershenson
In reply to this post by James Steiner
>
> Tangentially, this question is part of the reason I am very disturbed
> by the concept of the "singularity"

Yesterday I made a blog entry about the singularity:
http://complexes.blogspot.com/2006/07/limits-of-moores-law.html

Best regards,

     Carlos Gershenson...
     Centrum Leo Apostel, Vrije Universiteit Brussel
     Krijgskundestraat 33. B-1160 Brussels, Belgium
     http://homepages.vub.ac.be/~cgershen/

   "Winning or losing does not matter as much as what you learn from it"





singularity

Bill Eldridge
Carlos Gershenson wrote:
>> Tangentially, this question is part of the reason I am very disturbed
>> by the concept of the "singularity"
>>    
>
> I made yesterday a blog entry about the singularity:
> http://complexes.blogspot.com/2006/07/limits-of-moores-law.html
>
>  
Well, you note, "How the hell do you program a human mind in there???
It takes us several years just to learn to talk!"

Part of the answer is, "We just copy the ability from computer to computer".
Humans are difficult to clone. Machines much less so. Is it because machines
are so much less complex, or that the method nature chose was the best
available at the time, or that human replication serves other purposes as
well, not satisfied by an in-depth copy? A bit of all 3. Humans have not
evolved terribly quickly, but this model has had a relatively long shelf
life - much longer than an ENIAC, say.

As we digitize data, information and knowledge, it becomes easier to load
up a machine with it all. Obviously accessing and integrating this is more
important than just having it stored on relatively fast disk, but it's hard
to deny that the ability to store tons of knowledge is an advantage.

Machines have much faster data transfer internal-to-external.

Where humans do seem to win is in internal communications and the software
programming. I'm sure as we go to molecular computers we'll pick up some
speed on the internal bus bandwidth as well. Not that cognition is all
about speed - slow filtering is very useful in places.

Regarding the software, well, human development is a little bit stupid.
Yes, it takes us years to learn how to talk, and then we spend years
learning "Row row row your boat" and other time-intensive
learn-by-repetition-and-rote tasks just so we can be relatively
self-sufficient for 50 years, which means we hold meaningless jobs so we
can find time to head to the bar. But our external knowledge and technology
cumulate, so in 2016 we'll be able to organize computer knowledge much
better than we do now, and presumably as the machines get smarter and
smarter, they can play a larger role in programming their descendants.

The relevancy of Google answers is much better than we had 10 years ago,
in large part due to commingling requests and answers over millions of
nodes and requests, as well as the algorithms that go into the responses.
Where will this approach be in 10 years? What new insights, what new
applications? Computers will be more capable of aggregating insights from
a billion more nodes and applying the insights to new problems. While
humans are getting better at programming the ability to have these
insights, we are not getting much better at having the insights ourselves.
Our creative thinking more and more depends on the machine for its
completion.

That doesn't mean all computer questions are tackled with ease. There were
big linguistics/machine-learning setbacks in the 1980s, AI was overhyped,
etc. But these efforts don't so much disappear as they recur as
technological and societal environments become more prepared to utilize
them. Whether this all leads to a singularity followed by the Cyberiad, or
simply continues as a long-term symbiotic relationship (man and dog,
computer and man), I don't know - I favor the latter. But not because we
can't program, only that the relationship will continue to evolve in ways
we find useful, and we've already made great progress on what we'd like
machines to do even in the short span of computer science. I don't expect
a single algorithm or insight to change everything - I imagine there will
be a number of evolving, slightly incompatible approaches, from which a
few will gain sway and slowly be replaced.

In any case, just because software evolution has historically gone slower
than hardware, I don't think it's inherent in programming that that has to
be true forever. For one thing, we've used hardware as a stable datum -
program new tasks, but leave the hardware design consistent and backwards
compatible. So while the goal of the hardware is higher
performance/efficiency, software has to have better performance and more
features. And attempts to improve automatic programming have had poor
results. But the state of programming is much improved over 1991, and my
guess is that it's only a matter of time before all of our efforts in
different approaches hit something that pays off more exponentially.

Okay, I didn't address the one question - if you copy my mind out to disk
and back into another body, will it have identity as "Bill",
self-knowledge, consciousness, etc.? I think Lem answered that in the
Cyberiad, but I'll have to re-read it; I don't store data that well.



singularity

Carlos Gershenson
Dear Bill,
>
> Well, you note, " How the hell do you program a human mind in there???
> It takes us several years just to learn to talk!"
>
> Part of the answer is, "We just copy the ability from computer to  
> computer".

I agree, but we are still very far from doing it for even one machine!


>  But our external knowledge and technology cumulate,
> so in 2016 we'll be able to organize computer knowledge much better  
> than we do now,

Also agree, but what I claim is that maybe the evolution of software  
is not exponential, as it is with hardware, so there would be no  
singularity in sight...

>
> Whether this all leads to a singularity followed by the Cyberiad,
> or simply continues as a long-term symbiotic relationship (man and  
> dog, computer and man),
> I don't know - I favor the latter.

Me too.

Best regards,

     Carlos Gershenson...
     Centrum Leo Apostel, Vrije Universiteit Brussel
     Krijgskundestraat 33. B-1160 Brussels, Belgium
     http://homepages.vub.ac.be/~cgershen/

   "Tendencies tend to change..."




singularity

Russell Standish
On Wed, Jul 19, 2006 at 02:07:58PM +0200, Carlos Gershenson wrote:
>
> Also agree, but what I claim is that maybe the evolution of software  
> is not exponential, as it is with hardware, so there would be no  
> singularity in sight...

I wouldn't be surprised if software development was actually
exponential, however it is harder to measure improvement, and the
improvement is not as smooth as hardware improvement.

During my 25 years of programming computers, I have seen several
revolutionary "jumps" in software: vectorisation, parallelisation,
object-oriented programming, higher-level scripting (Perl, Python et
al), evolutionary algorithms ...

Each of these software techniques has brought orders of magnitude of
increased functionality, but in each case the effect is different
(generally not across the board), and hard to quantify. During the
same period we have seen approximately 8 generations of Intel
processors or 5 orders of magnitude in processor performance, measured
on the same scale.
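
A quick back-of-the-envelope check of that hardware figure (an editorial
sketch, not part of the original message): assuming a Moore's-law doubling
every ~18 months, 25 years gives 2^(25/1.5), which is indeed about 10^5.

    import math

    years = 25
    doubling_years = 1.5   # assumed doubling period, ~18 months
    speedup = 2 ** (years / doubling_years)
    # prints "~1.04e+05x, i.e. 10^5.0"
    print(f"~{speedup:.3g}x, i.e. 10^{math.log10(speedup):.1f}")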

Cheers

--
*PS: A number of people ask me about the attachment to my email, which
is of type "application/pgp-signature". Don't worry, it is not a
virus. It is an electronic signature, that may be used to verify this
email came from me if you have PGP or GPG installed. Otherwise, you
may safely ignore this attachment.

----------------------------------------------------------------------------
A/Prof Russell Standish                  Phone 8308 3119 (mobile)
Mathematics                               0425 253119 (")
UNSW SYDNEY 2052                 R.Standish at unsw.edu.au            
Australia                                http://parallel.hpc.unsw.edu.au/rks
            International prefix  +612, Interstate prefix 02
----------------------------------------------------------------------------




singularity

Bill Eldridge
In reply to this post by Carlos Gershenson
Carlos Gershenson wrote:

> Dear Bill,
>>
>> Well, you note, " How the hell do you program a human mind in there???
>> It takes us several years just to learn to talk!"
>>
>> Part of the answer is, "We just copy the ability from computer to
>> computer".
>
> I agree, but we are still very far from doing it for even one machine!
>
>
>>  But our external knowledge and technology cumulate,
>> so in 2016 we'll be able to organize computer knowledge much better
>> than we do now,
>
> Also agree, but what I claim is that maybe the evolution of software
> is not exponential, as it is with hardware, so there would be no
> singularity in sight...

Well, software is funny, so this statement isn't exactly true.
With the Internet, there can be more than exponential distribution - the
spread of Google maps being an example - making the evolution and adoption
of algorithms and techniques very quick.
With the Internet, that distribution has essentially 0 cost.
With the Internet, we're able to scale software development, adding more
programmers to a problem.
Where we haven't lived up to the hype is programmer productivity -
exponential growth of productivity for a program.
Again, we're caught relying on the exponential effects of network
interaction rather than individual gains.
But there are still ways we're using software, such as SOA, that start
to impact software efficiency/utility in a much more productive fashion,
even if lines of code/day don't drastically improve.
I'm not convinced that software programming productivity itself can't
hit some exponential capability, but it's evaded us so far. So yes, no
software singularity in sight.

On the physical side, Moore's law hasn't applied to bus bandwidth, only
to storage amounts and processing speed, giving us a great discrepancy in
how much we can store vs. how quickly we can access it. This is a
bottleneck for memory-CPU transfers and disk-memory transfers, as well as
getting things out to the network.

Anyway, I'm not a big fan of future messianic implosion dates. In 100
years we'll still be screaming at kids to brush their teeth before bed
and arguing about who has the best beer, regardless of what our computers
are doing.


singularity

Jochen Fromm-3
In reply to this post by Carlos Gershenson

I don't agree with you on this topic.
In your blog you say "just imagine that you have
all that computational power right now [..] how
the hell do you program a human mind in there".
It is perhaps easier than we think: the brain
delegates its own construction largely to the
environment. All you need is therefore a suitable
environment (which is complex enough) and an
adaptive system with a high capability to learn,
including advanced learning rules (something as
simple as the delta rule for neural networks,
Oja's learning rule or Hebb's rule). Then connect
it to the real world (through a robot) or a realistic
virtual world (through an agent) and start learning.
The drawback is of course that you might end up
with an artificial mind you don't understand
anymore. It is for instance very hard to understand
even simple neural networks that have been
trained with backpropagation and the delta rule.

-J.
________________________________

From: Carlos Gershenson
Sent: Wednesday, July 19, 2006 2:08 PM
To: The Friday Morning Applied Complexity Coffee Group
Subject: Re: [FRIAM] singularity

"How the hell do you program a human mind in there???
It takes us several years just to learn to talk!"
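
The learning rules Jochen names are simple enough to state in a few lines.
A toy sketch (an editorial illustration, not from the thread; the data,
sizes, and learning rate are all invented) of Hebb's rule, the delta rule,
and Oja's rule applied to a single linear unit:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 4))                         # 200 toy input patterns
    t = (X @ np.array([1.0, -1.0, 0.5, 0.0]) > 0) * 1.0   # invented targets

    w_hebb = np.zeros(4)
    w_delta = np.zeros(4)
    w_oja = 0.1 * rng.normal(size=4)
    eta = 0.01                                            # learning rate (assumed)

    for x, ti in zip(X, t):
        w_hebb += eta * ti * x              # Hebb: strengthen co-active pairs
        y = w_delta @ x                     # linear unit's current output
        w_delta += eta * (ti - y) * x       # delta rule: error-driven correction
        y = w_oja @ x
        w_oja += eta * y * (x - y * w_oja)  # Oja: Hebb plus decay; converges to
                                            # the input's first principal component

    print(w_hebb, w_delta, w_oja)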
       



Reply | Threaded
Open this post in threaded view
|

singularity

Carlos Gershenson

On 19 Jul 2006, at 15:16, Jochen Fromm wrote:

>
> I don't agree with you on this topic.
> In your blog you say "just imagine that you have
> all that computational power right now [..] how
> the hell do you program a human mind in there"
> It is perhaps easier than we think: the brain
> delegates its own construction largely to the
> environment. All you need is therefore a suitable
> environment (which is complex enough)

how the hell do you program such a complex environment in there?
OK, you can connect a robot to the real world, but then you would
need to develop something similar to our sensors, which are very far
from being cameras and IR sensors... Just for moving, we use
proprioception, that is, sensors of the position of our limbs...
Cognition is not only about computational power, but also about
(en)action and embodiment...

I think it is just that the human mind is not platform independent...

> and an
> adaptive system with a high capability to learn,
> including advanced learning rules (something as
> simple as the delta-rule for neural networks,
> Oja's learning rule or Hebb's rule).

I'm afraid it's not as easy as that. All these learning algorithms
are good models, but we are VEEERY far from understanding how we
really do it...
Again, if we had all that computing power now, we wouldn't be able to
know which learning methods to use so that the machine would develop
human-like intelligence.

I'm not saying it's not possible, I'm saying I don't see it coming
soon... 15 years ago we were at the insect level, and we are still
there. Well, no, maybe we are even a bit farther behind, since we've
learned much more about the complexity of insects that we didn't know
about... basically, insects got much smarter in the last 15 years than
our robots...

Best regards,

     Carlos Gershenson...
     Centrum Leo Apostel, Vrije Universiteit Brussel
     Krijgskundestraat 33. B-1160 Brussels, Belgium
     http://homepages.vub.ac.be/~cgershen/

   "Tendencies tend to change..."





singularity

Robert Holmes
In reply to this post by Russell Standish
Indeed. I'm certainly capable of misapplying statistical techniques
several orders of magnitude faster than I could a decade ago.

Robert

On 7/19/06, Russell Standish <r.standish at unsw.edu.au> wrote:

>
> On Wed, Jul 19, 2006 at 02:07:58PM +0200, Carlos Gershenson wrote:
> >
> > Also agree, but what I claim is that maybe the evolution of software
> > is not exponential, as it is with hardware, so there would be no
> > singularity in sight...
>
> I wouldn't be surprised if software development was actually
> exponential, however it is harder to measure improvement, and the
> improvement is not as smooth as hardware improvement.
>
> During my 25 years of programming computers, I have seen several
> revolutionary "jumps" in software: vectorisation, parallelisation,
> object-oriented programming, higher-level scripting (Perl, Python et
> al), evolutionary algorithms ...
>
> Each of these software techniques has brought orders of magnitude of
> increased functionality, but in each case the effect is different
> (generally not across the board), and hard to quantify. During the
> same period we have seen approximately 8 generations of Intel
> processors or 5 orders of magnitude in processor performance, measured
> on the same scale
>


singularity

Carlos Gershenson
In reply to this post by Russell Standish
> I wouldn't be surprised if software development was actually
> exponential, however it is harder to measure improvement, and the
> improvement is not as smooth as hardware improvement.

I guess that we would like to have a general measure of the growth of
software complexity, but I don't know if there is anything like that,
nor how easy it would be to develop, let alone to check... where
could we get the data on, e.g., number of lines of code, or source code
size in KB, of software for the last 20 years or so???

A rough and naive way would be to check e.g. the size in KB of the
installation files of a certain piece of software, e.g. Linux, Windows,
MS Office, Corel Draw, AutoCAD...
(with Linux it's quite difficult, because a minimal version of it can
fit on a couple of floppies; all the rest are add-ons...)
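
A minimal sketch of that kind of crude measure (an editorial illustration,
not from the thread; the path and extension list are hypothetical): total
the source-file bytes under a code tree as a stand-in for its "complexity".

    import os

    def source_kb(root, exts=(".c", ".h", ".cc", ".py")):
        # Walk the tree and sum the sizes of files with source extensions.
        total = 0
        for dirpath, _, files in os.walk(root):
            for name in files:
                if name.endswith(exts):
                    total += os.path.getsize(os.path.join(dirpath, name))
        return total / 1024

    print(f"{source_kb('/usr/src/some-project'):.0f} KB")  # hypothetical path

Russell's caveat in the next message, that crude quantitative measures
mislead, of course applies to this too.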

Best regards,

     Carlos Gershenson...
     Centrum Leo Apostel, Vrije Universiteit Brussel
     Krijgskundestraat 33. B-1160 Brussels, Belgium
     http://homepages.vub.ac.be/~cgershen/

   "Tendencies tend to change..."





singularity

Russell Standish
Crude quantitative measures are no good. For instance, the intro of OO
techniques can increase functionality, sometimes with a decrease in the
number of lines of code. An example close to home for me was the
change from EcoLab 3 to EcoLab 4. The number of lines halved, but
functionality was increased maybe tenfold (**subjective measure warning**).



On Thu, Jul 20, 2006 at 12:22:20PM +0200, Carlos Gershenson wrote:

> > I wouldn't be surprised if software development was actually
> > exponential, however it is harder to measure improvement, and the
> > improvement is not as smooth as hardware improvement.
>
> I guess that we would like to have a general measure of the growth of
> software complexity, but I don't know if there is anything like that,
> nor how easy it would be to develop, let alone to check... where
> could we get the data on, e.g., number of lines of code, or source code
> size in KB, of software for the last 20 years or so???
>
> A rough and naive way would be to check e.g. the size in KB of the
> installation files of a certain piece of software, e.g. Linux, Windows,
> MS Office, Corel Draw, AutoCAD...
> (with Linux it's quite difficult, because a minimal version of it can
> fit on a couple of floppies; all the rest are add-ons...)




singularity

Carlos Gershenson
> Crude quantitative measures are no good. For instance, the intro of OO
> techniques can increase functionality, sometimes with a decrease in the
> number of lines of code. An example close to home for me was the
> change from EcoLab 3 to EcoLab 4. The number of lines halved, but
> functionality was increased maybe tenfold (**subjective measure
> warning**).

Then maybe a measure could be the length of the manuals
+ documentation, which reflects the functionality of a particular program?
(Well, Francis just switched to MacOS X from MacOS 9, and the one
thing he complained about was that there was no manual... he didn't
like the amount of help files.)

If that were reasonable, I don't see that these have increased
too much, since the size of books hasn't increased noticeably... in
Unix/Linux you could measure it better with the size of man and
how-to pages.
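
For the Unix/Linux case, a hypothetical sketch of the same idea - total
the installed man-page bytes as a crude proxy for documented functionality
(the man directory path varies by system and is an assumption here):

    import os

    total = sum(
        os.path.getsize(os.path.join(dirpath, name))
        for dirpath, _, files in os.walk("/usr/share/man")
        for name in files
    )
    print(f"{total / 1024:.0f} KB of man pages")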

Best regards,

     Carlos Gershenson...
     Centrum Leo Apostel, Vrije Universiteit Brussel
     Krijgskundestraat 33. B-1160 Brussels, Belgium
     http://homepages.vub.ac.be/~cgershen/

   "Tendencies tend to change..."





singularity

Russell Standish
Like weighing Stroustrup versus Kernighan & Ritchie?? I think the C++
book weighs 4 times as much as the C book, but I'm sure C++ is more
than 4 times as powerful...


Cheers

On Thu, Jul 20, 2006 at 01:36:00PM +0200, Carlos Gershenson wrote:

> > Crude quantitative measures are no good. For instance, the intro of OO
> > techniques can increase functionality, sometimes with a decrease in the
> > number of lines of code. An example close to home for me was the
> > change from EcoLab 3 to EcoLab 4. The number of lines halved, but
> > functionality was increased maybe tenfold (**subjective measure
> > warning**).
>
> Then maybe a measure could be the length of the manuals
> + documentation, which reflects the functionality of a particular program?
> (Well, Francis just switched to MacOS X from MacOS 9, and the one
> thing he complained about was that there was no manual... he didn't
> like the amount of help files.)
>
> If that were reasonable, I don't see that these have increased
> too much, since the size of books hasn't increased noticeably... in
> Unix/Linux you could measure it better with the size of man and
> how-to pages.




singularity

Bill Eldridge
In reply to this post by Carlos Gershenson
Carlos Gershenson wrote:

>> Crude quantitative measures are no good. For instance, the intro of OO
>> techniques can increase functionality, sometimes with a decrease in the
>> number of lines of code. An example close to home for me was the
>> change from EcoLab 3 to EcoLab 4. The number of lines halved, but
>> functionality was increased maybe tenfold (**subjective measure
>> warning**).
>>    
>
> Then maybe a measure could be the length of the manuals
> + documentation, which reflects the functionality of a particular program?
>  
Here we're swaying between measuring bloatware and measuring
documentation, the dreaded Rubicon of the software programmer. Actually,
automated documentation tools have improved. But a package where
programmers hate documenting (or document in-line), or are busier adding
features than doing documentation, will be regarded as less progressive.
Some software packages realistically have less need for intricate
documentation.

Also, improvements in software can be a more intuitive interface
(decreasing needed documentation), improved speed, improved modularity,
maintainability, integratability, etc. This is true of Moore's Law as
well - it's a very one-dimensional measure of computing progress. Many,
many people are more impressed with the ability to plug their video
camera into a PC than with the ability of Excel to open 60% faster. We
simply have little way to evaluate software progress across the board
with a measure like Moore's Law. But if we examine tasks and explore
speed, functionality and cost for handling tasks, we get some
comparisons. Offshoring of software programming becomes easier because
it's easier to turn some types of software into a commodity task. Other
field-specific tasks are much more involved, and may become actually
doable thanks to software progress, but still may be much slower tasks
to do than trivial already-done-a-million-times tasks. But both types
are necessary.

