The what is AI question


The what is AI question

Phil Henshaw-2
Rob,
It threw me off just a bit that the first two sentences of mine quoted
below were taken from a larger paragraph and placed in reverse order...
but I think you're saying I appear to be struggling with the root
problem of self reference you go on to discuss.   Sometimes I come up
with wacky analogies, and this one reminds me of our minds being like
flypaper, but with only one sticky side, the other side once having been
sticky but now with its tendrils all curled up and hidden like the
missing dimensions the physicists say are needed to explain the unity of
the world....
 
Marketing may have some role in pre-chewing any new point of view for
people who like or need that.  I think the root problem is that people
develop their ideas independent of the world and tend not to change them
till they have to.   Thus, the real issue for finding good choices in
unfamiliar situations (like our present state of change) is learning how
to see where reality is different from what we imagine it to be.   The
methods available are plentiful, but seem to be useless until one
realizes that all images, *however useful they may be to us internally*,
are actually just projections of our own root values and *all* of them
are infinitely different from the physical things they are images of.
Yea it's a jump, but well worth taking.    
 
If we use our root values to judge things, and the world changes, how do
we then judge our values???   That's the way I'd state the
self-referential dilemma you mention other examples of.   As far as I
can tell they can only be answered by finding a way to honestly look at
the whole physical meaning of your values, and then reaching for even
deeper values when there's a conflict.   Presently there's a conflict
between the earth and growth, for example, that I think you can only
resolve by suspending all usual judgments and looking at the whole
effect of what's happening, drawing on values deeper than all the ones
you need to question.    All that takes is work I think.
 
The reason I say "once having been sticky" in the analogy above is that
I think there were actually two major breaks with reality that occurred
in the evolution of thought, one at around 50-60k years ago, when
building our imaginary worlds took precedence over directly observing
the real one, and the other around 5-10k years ago, when the event
called the 'fall of man' blinded the creative center of human culture to
seeing that anything but man had anything inside.  When you actually
look for other things built and operated from the inside, you find the
universe is chock full of them.    I think being blinded to that is
where we got our obsession with control and our belief there's nothing at
stake in unlimited growth.    Pretty nuts?    These are indeed sort of
educated guesses phrased in experimental terms, but the evidence is
pretty clear to me that events of this kind did occur at real places and
times, and that it's reasonable to say it's more a matter of narrowing
down what actually happened.
 

Phil Henshaw
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
680 Ft. Washington Ave
NY NY 10040                      
tel: 212-795-4844                
e-mail: pfh at synapse9.com          
explorations: www.synapse9.com <http://www.synapse9.com/>    

-----Original Message-----
From: [hidden email] [mailto:[hidden email]] On
Behalf Of Rob Howard
Sent: Wednesday, December 27, 2006 12:30 AM
To: 'The Friday Morning Applied Complexity Coffee Group'
Subject: Re: [FRIAM] The what is AI question



[ph] An intelligent person will predictably come up with unexpected
points of view and solutions for new problems. That seems to display
something of a tendency to accept mystification in place of explanation,
but I don't see the root meaning of intelligence in it.    

 

[ph] This is a great issue.  Making a world model in our minds the way
we do does seem to require that the qualities of things are those we
bestow on them by changing our images of them.  All we have to guide us
is our world model, so when we change our own or each other's world
models, it displays our ultimate control over the world in which we
operate.

 

[ph] Another view is that things are what a scientific study would tell
you, their web of relationships with other things and the nest of
internal structures with which those other related things connect.  This
refers to an extensive group of physical systems, most of which are and
will remain largely or entirely unknown.  It's hard to have it both
ways, and the former surely seems to dominate, but getting rid of the
latter altogether seems dangerous, don't you think?

 

 

[rh] It's like the 100-year history of "quantum" in six words:
"Discrete... Duality... Unpredictable... Small... Mystical...
Chaotic...". I wonder what marketing will bring next decade. My guess is
that, like quantum mechanics, where the definitions only make sense when
we realize that we're part of the measurement process and not
independently isolated from it, a definition for intelligence and
consciousness will likewise only make sense when we realize that we are
part of the measurement process too. We define ourselves as
"intelligent" and "conscious" with respect to ourselves. Then we attempt
to project that reflexive definition onto outside things and act
surprised when the reflexivity instantly disappears. We are the baseline
because we have said so!

 

Earlier examples of this defining process applied to the definition of
"life". We are alive because we say so! So we look for these
characteristics in other animals. Early theocracy associated life with a
soul, only to deduce that insects must not really be alive because they
"of course, have no soul". Later science, wanting to include this
taxonomy, starts looking at dynamic qualities, such as burning food and
exhaling the byproducts. Along come the industrial-age fuel-burning
machines, and then "self replication" becomes the quintessential element
of life. Then certain crystallization processes are observed to satisfy
the definition. Wanting to exclude them (I suppose for aesthetic
reasons), science then adds mutation, fitness functions and other
evolutionary terms to the definition. Someone takes a closer look at
RNA viruses isolated from their hosts, and... well, it's back to the
drawing board.

 

Are we not playing the same self-referential definitions game with
intelligence and consciousness as we once did, and are still doing, with
life?

 

 

Robert Howard

Phoenix, Arizona

 

 

Well, how about the ability to respond to unexpected situations with

useful choices?   Is that low or high on the tests of intelligence?

 

Phil Henshaw

 

 

 

-----Original Message-----

From:  Rob Howard

Sent: Monday, December 25, 2006 12:32 PM

To: 'The Friday Morning Applied Complexity Coffee Group'

Subject: Re: [FRIAM] The what is AI question

 

>What if the analogy of intelligence is unexpected predictability?

>I can roll a pair of dice, and that is unpredictable; but it's not

>unexpected. I expect a Gaussian curve of totals.

 

[ph] I think you're saying that people have frequently bestowed
'intelligence' on things that were merely predictable.  That seems to
display something of a tendency to accept mystification in place of
explanation, but I don't see the root meaning of intelligence in it.  An
intelligent person will predictably come up with unexpected points of

view and solutions for new problems.   It's the aspect of invention

there, not the mystery of the process, that displays the intelligence
involved, I think. ph

 

>A few thousand years ago, the states of the moon were unpredictable

>(eclipses, elevation, and to some extent, phases). Humans consequently

>animated it with intelligence by calling it Luna, the moon goddess.

>All deities have intelligence. The same occurred with the planets,

>weather, and even social conditions like love and war. Only when these

>things became expectedly predictable did they lose their intelligence. You

>all remember ELIZA! At least for the first five minutes of play, the game

>did take on intelligence. However, only after a review of the actual code

>did the game instantly lose its mystery. Kasparov bestowed intelligence on

>Deep Blue, which I'm sure the programmers did not.

 

>In this sense, intelligence is not a property that external things

>have. It's something that we bestow upon, or perceive in, external

>things. Is not one of the all-time greatest insults to one's

>intelligence the accusation of being predictable?

 

[ph] This is a great issue.  Making a world model in our minds the way
we do does seem to require that the qualities of things are those we
bestow on them by changing our images of them.  All we have to guide us
is our world model, so when we change our own or each other's world
models, it displays our ultimate control over the world in which we
operate.

 

[ph] Another view is that things are what a scientific study would tell
you, their web of relationships with other things and the nest of
internal structures with which those other related things connect.  This
refers to an extensive group of physical systems, most of which are and
will remain largely or entirely unknown.  It's hard to have it both
ways, and the former surely seems to dominate, but getting rid of the
latter altogether seems dangerous, don't you think?

 

>I suspect that any measure of intelligence will be relative to the

>observer's ability to predict expected causal effects and be pleasantly


>surprised, not too unlike the Turing Test.

 

 

 

>Robert Howard

>Phoenix, Arizona

 

 


  _____  


From: [hidden email] [mailto:[hidden email]] On
Behalf Of Phil Henshaw
Sent: Tuesday, December 26, 2006 5:33 AM
To: rolandthompson at mindspring.com; 'The Friday Morning Applied
Complexity Coffee Group'
Subject: Re: [FRIAM] The what is AI question

 

I checked the description of Turing's test again...
[http://www.google.com/search?hl=en&q=touring+test]   Doesn't it
actually say "a good fake is the real thing"?     I always thought the
identifying characteristics of 'real thing' included having aspects that
make a real difference that can't be faked, there for anyone to see if
they look for them?  

 


Phil Henshaw
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
680 Ft. Washington Ave
NY NY 10040                      
tel: 212-795-4844                
e-mail: pfh at synapse9.com          
explorations: www.synapse9.com <http://www.synapse9.com/>    

-----Original Message-----
From: [hidden email] [mailto:[hidden email]] On
Behalf Of rolandthompson at mindspring.com
Sent: Monday, December 25, 2006 8:51 PM
To: The Friday Morning Applied Complexity Coffee Group
Subject: Re: [FRIAM] The what is AI question

In a reverse Turing test, if a human could convince other humans that he was
a machine/computer, would he then be unintelligent?    From "Fooled by
Randomness", if memory serves.





The what is AI question + recursive languages

Robert Howard-2-3
Phil,

 

This has been a very interesting thread with you. I just drove to California
for Christmas and bought an audio tape for the road titled "Before the Dawn"
by Nicholas Wade. It's a very interesting account of how much information
archeologists can recover just from the DNA of living humans today. In part
of this lecture, Wade argues that complex combinatorial grammatical sentences
evolved almost abruptly around 50-60K years ago. The new idea I learned is
the suggestion that the part of our mammalian brains that was once used for
navigation, no longer so important in the last 100,000 years, contained
built-in "recursion" processing. For example, I need to get from A to D, but
there's B and C in between, and if C is blocked, I must backtrack to B along
a different path. And that this navigation part of our brains evolved away
from navigation toward the grammatical speech parts of our brain. It's no
wonder recursion, which we all take for granted, is so deeply built into our
language.
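
As a rough sketch of that backtracking idea in code (the waypoint map and the
blocked route below are invented purely to mirror the A-to-D example, not
anything from Wade):

# Recursive backtracking over a small, hypothetical map.
# "blocked" marks impassable waypoints, like the blocked C above.

def find_route(graph, blocked, current, goal, path=None):
    """Depth-first search that backtracks when a branch dead-ends."""
    if path is None:
        path = [current]
    if current == goal:
        return path
    for neighbor in graph.get(current, []):
        if neighbor in blocked or neighbor in path:
            continue  # skip blocked or already-visited waypoints
        route = find_route(graph, blocked, neighbor, goal, path + [neighbor])
        if route is not None:
            return route  # this branch reached the goal
    return None  # dead end: the caller backtracks and tries another neighbor

waypoints = {"A": ["B"], "B": ["C", "E"], "C": ["D"], "E": ["D"]}
print(find_route(waypoints, blocked={"C"}, current="A", goal="D"))
# -> ['A', 'B', 'E', 'D']  (the C route is skipped, so B's other path is taken)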

 

This all parallels your mentioning that "building our imaginary worlds took
precedence over directly observing the real one" at about this same time. I'm
not sure what the relation is there, or why this would cause imaginary worlds
to take precedence over real ones, but if so, perhaps it was caused by the
chaotic transition period from navigation use to language use, which might
have introduced enough contradictions to cause humans to just give up in
favor of the imaginary world. Later, as evolution worked out the bugs,
reality started to become more favorable. NOTE: I'm just going along with
your interesting idea.

 

Anyway, this has some interesting thought branches along the lines of Owen's
request for mathematical parsing and searching. Just about all mathematics is
recursive. So are modern programming languages. And it becomes an interesting
question whether this "recursion" is really out there, or just in our heads,
the results of evolution. And in no way does this lessen its "realness".
Humans are going to explain things, and model things, and see things in terms
of recursive reduction, whether we like it or not. So defining "intelligence"
or encoding mathematical expressions for optimal searching is going to be
recursive.
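
As one tiny illustration of that recursive structure (my own sketch, not
Owen's actual parsing problem): a nested arithmetic expression is defined
recursively, so evaluating it is naturally recursive too.

import operator

# A nested expression is either a number or a tuple (operator, left, right).
OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}

def evaluate(expr):
    """Evaluate a nested expression by first evaluating its sub-expressions."""
    if isinstance(expr, (int, float)):
        return expr                                  # base case: a bare number
    op, left, right = expr
    return OPS[op](evaluate(left), evaluate(right))  # recursive case

print(evaluate(("*", ("+", 2, 3), ("-", 10, 4))))    # (2 + 3) * (10 - 4) -> 30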

 

There are two major problems with anything recursive: (1) when to stop; and
(2) detecting cycles. Reality solves these problems for us. The world is
finite. Therefore any search across a composite tree created from real things
will eventually exhaust its supply of things. Thus reality stops recursion. In
addition, cycles are impossible in reality because everything is
cause-and-effect, in that temporal order. But inside our imaginations, which
include definitions, modeling, mathematics and computer programming, cycles
can happen. We do sometimes have circular definitions. For example, "what is
an object?" Answer: "It's anything that...". CYCLE! But I like your flypaper
analogy better.
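
Both problems show up even in a toy sketch. In the made-up glossary below,
definitions are expanded recursively: words with no definition give the
recursion a natural place to stop, and a "seen" set catches the object/thing
cycle instead of recursing forever.

glossary = {
    "object": "anything that is a thing",
    "thing": "an object you can point to",   # circular with "object"
    "point": "to indicate",
}

def expand(term, definitions, seen=None):
    """Recursively expand a term; stop at undefined words or on a cycle."""
    if seen is None:
        seen = set()
    if term in seen:
        return "<cycle: %s>" % term        # cycle detected: stop here
    if term not in definitions:
        return term                        # undefined word: a natural stop
    seen = seen | {term}
    return " ".join(expand(word, definitions, seen)
                    for word in definitions[term].split())

print(expand("object", glossary))
# prints: anything that is a an <cycle: object> you can to indicate to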

 

So when I said "self-referential definitions game with intelligence and
consciousness", I was referring to the recursion that happens inside our
brains as we develop models of things.

 

As far as your "growth of the world" problem, I wonder if the solutions will
only form when the population gets to about 10 billion or so people. Since
we're not there yet, no solution is in sight.

 

Here's part of the transcript from Wade: "Early Voices: The Leap to Language"
<http://query.nytimes.com/gst/fullpage.html?sec=health&res=9503E0DF173CF936A25754C0A9659C8B63>

 

Robert Howard
Phoenix, Arizona
