Crutchfield's "Is anything ever new?"


Crutchfield's "Is anything ever new?"

Nick Thompson
All,
 
Over the years I can remember many animated conversations among psychologists about whether it is possible to see something new, since there is no way for the cognitive machinery to recognize something for which it does not already have a template.  Often cited in those discussions was the reported experience of people who had congenital cataracts removed and could not, for a time, see anything. 
 
The answer to this cocktail-party conundrum has always seemed to me an emphatic YES and NO.  No, we cannot see anything entirely new; however, nothing that we encounter is ever entirely new.  So, for instance, let it be the case that you had never heard of unicorns, never seen an illustration of a unicorn, etc., and a unicorn were to trot into the St. John's Cafe tomorrow.  Would you see it?  Well, if you knew about horses and narwhals, I would say yes, because while you would not immediately see a unicorn, you would see a horse with a narwhal tusk in the middle of its forehead. 
 
Now, it seems to me that Crutchfield's essay (in the Emergence book, for those of you who have it) is asking the scientific version of that question: do we ever actually discover anything new?  His explicit answer, in the last paragraph of the essay, would seem to be "yes," but the argument seems in many places to lead in the opposite direction.  Discovery, he seems to argue, consists of shifting from one form of computation to another, where forms of computation are defined by a short list of machine-types. 
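 
A minimal sketch of how I read that claim, with the caveat that the "machine-types" here are my stand-ins (the classic automata classes: finite-state, counter/pushdown, and so on), not necessarily Crutchfield's own list.  A finite-state recognizer can check the order of symbols in a string like "aaabbb," but with only finitely many states it cannot check that the counts match; recognizing a^n b^n forces a shift to a machine class with memory, and on this reading the shift between classes, not more states within a class, is what counts as discovery:

    def accepts_finite_state(s):
        # A finite-state recognizer for a*b*: it can check the ORDER of
        # the symbols, but with finitely many states it cannot check
        # that the counts of a's and b's match.
        state = "A"
        for ch in s:
            if state == "A" and ch == "a":
                state = "A"
            elif ch == "b":
                state = "B"
            else:
                return False
        return True

    def accepts_counter(s):
        # One counter -- a step up the machine hierarchy -- is enough
        # to recognize a^n b^n exactly.
        count, seen_b = 0, False
        for ch in s:
            if ch == "a":
                if seen_b:
                    return False
                count += 1
            elif ch == "b":
                seen_b = True
                count -= 1
                if count < 0:
                    return False
            else:
                return False
        return count == 0

    print(accepts_finite_state("aabb"), accepts_finite_state("aab"))  # True True
    print(accepts_counter("aabb"), accepts_counter("aab"))            # True False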
 
Has anybody out there read the article, and does anyone have an opinion on this matter? 
 
Popper's falsificationism would seem to imply that scientists never DISCOVER anything new; they IMAGINE new things and then, having imagined them, find them.  "Bold conjectures," he called them.  This seems to go along with Kubie's idea of the preconscious as a place where pieces of experience get scrambled into new combinations. 
 
Nick
 
 
 
 
Nicholas S. Thompson
Emeritus Professor of Psychology and Ethology,
Clark University ([hidden email])
 
 
 


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org

Re: Crutchfield's "Is anything ever new?"

Russ Abbott
This seems to me to be asking a version of the question whether one can ever think something for which one does not already have a word--i.e., whether one's language determines and limits one's possible thoughts.

I think that's wrong.  A simple argument would be that if it were true, then we would never have thought anything, since we evolved from single-celled organisms that had no language.

I tend to agree with Nick that most, if not all, of our new thoughts are combinations and mutations of existing thoughts.  But that seems to be good enough.

Of course, single-celled organisms didn't have thoughts either.  But how thought started is another question.  I don't think it started with abstract concepts.  How did we (animals) first manage to convert perceptions into concepts that could be stored and manipulated?  To tell that story clearly would be a very nice bit of science.  But it certainly happened.

-- Russ A



Re: Crutchfield's "Is anything ever new?"

Owen Densmore
In reply to this post by Nick Thompson
Jim's UC Davis site has the paper, along with the longer technical paper:
   http://cse.ucdavis.edu/~cmg/papers/EverNew.pdf
   http://cse.ucdavis.edu/~cmg/compmech/pubs/EverNewTitlePage.htm

     -- Owen



Re: Crutchfield's "Is anything ever new?"

Ted Carmichael
In reply to this post by Russ Abbott
Have you seen all those commercials for Windows 7?  Microsoft's "new" operating system?

It isn't new at all.  Just the same old ones and zeros.

-Ted


Re: Crutchfield's "Is anything ever new?"

Nick Thompson
In reply to this post by Nick Thompson
Ted,
 
If I understand Crutchfield aright, he would not regard Windows 7 as something "new," since it is written on the same kind of "machine" as XP. 
 
Do I understand him correctly? 
 
 
Nicholas S. Thompson
Emeritus Professor of Psychology and Ethology,
Clark University ([hidden email])
 
 
 
 

Re: Crutchfield's "Is anything ever new?"

glen e. p. ropella-2
In reply to this post by Nick Thompson

Of course, I agree wholeheartedly with Crutchfield's conjecture that
some sort of "pattern closure" is necessary for emergence to make any
ontological sense.  (I'm sure everyone agrees I've beaten that horse to
death, though my distinction between phenomenon and property hit the
list with a very elastic wet thud. ;-)

And although I don't particularly like his mechanism-phenomena
construction that allows novel structures (as well as discovery) --
i.e., his "computational mechanics" -- he's certainly in the right
ballpark.

But when you take this sort of thing out of its context (which you do by
asking your question in the context of _mind_ and personal experience),
you seem to miss Popper's point entirely.  And, since Crutchfield's
point requires explicit consideration of the observer (measurements),
you seem to have missed his point, too.

Re Popper: falsificationism has nothing to do with imagining and then
falsifying the imaginings.  It is the proposition that reality is
unattainable and only approachable in the limit.  You can't look
directly at it.  You can only look _around_ it.  Further, the "looking"
is behavior-based, not thought-based at all.  It literally does not
matter what crazy thoughts are going through the minds of the
experimenters.  What matters is that the experimenters are all _doing_
the same thing to get the same result.  That's because _doing_ (actions
... I know, I've beaten this horse dead, too...) translates from person
to person (though perhaps not from person to horse, or from elephant to
bacterium), whereas thoughts do NOT translate from person to person.
Justificationism is the enemy in this proposition.

So, scientists do not imagine then find novelty via critical
rationalism.  They construct and select amongst various _methods_ of
interacting with the world, whatever that world may contain, new or not.

Relating it back to Crutchfield's computational mechanics, one thing a
critical rationalist might do is design an experiment that _constructs_
a mechanism-phenomenon that seems always to work.  Whether that
mechanism-phenomenon had ever existed in the universe before the first
time that scientist constructed it, we'll never know, nor can we know.
Those questions are metaphysical, spiritual, philosophical.  What we
_can_ know, however, is that every time we go through the same motions
that guy went through, we get the same result.

So, again, I have to argue against naive realism and assert that
questions like this ("Is anything really 'new'?") are useless and their
answers don't matter except in the spiritual, metaphysical sense.  If it
makes you a better cook or truck driver to believe one way or the other,
fine.  But attempts to accurately transfer these unscientific beliefs to
other humans will always fail (and thank the gods for the interesting
ways in which they fail!), because only actions transfer.



--
glen e. p. ropella, 971-222-9095, http://agent-based-modeling.com



Re: Crutchfield's "Is anything ever new?"

Ted Carmichael
In reply to this post by Nick Thompson
Hi, Nick.

I'll have to read it more carefully, but at first blush I would say: not necessarily.

My tongue-in-cheek email was really a comment about scale.  All computer programs, no matter how different, can be mapped at the most basic level to one type of binary value and three types of computer operations: the three basic logic gates.  It is the particular number of these, and their combination, that makes Word different from XP, which is different from Halo, etc.

These gates - one level higher - are combined into a 'grammar' that describes a low-level programming language, such as assembly.  So the basic operations in machine code are more varied and numerous than just three gates.  The next jump in scale - perhaps a high-level programming language, such as Java - has an even larger grammar.
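
To make the layering concrete, here's a throwaway sketch - Python standing in for hardware, and with the caveat that I've used a single NAND primitive instead of three gates (NAND alone happens to be functionally complete):

    def NAND(a, b):
        # The lone primitive; everything below is a rewiring of it.
        return 0 if (a and b) else 1

    # Each "new" gate is just a new combination of the same old NAND.
    def NOT(a):     return NAND(a, a)
    def AND(a, b):  return NOT(NAND(a, b))
    def OR(a, b):   return NAND(NOT(a), NOT(b))
    def XOR(a, b):  return AND(OR(a, b), NAND(a, b))

    # One level up, a half adder is a combination of those combinations.
    def half_adder(a, b):
        return XOR(a, b), AND(a, b)  # (sum bit, carry bit)

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", half_adder(a, b))

Nothing at the bottom ever changes; what is "new" at each layer is only the combination.  That is the sense in which Windows 7 both is and isn't new.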

To address your question, I would point out that the division between software and hardware (and firmware) is arbitrary, in terms of which scale is the delimiter.  It's been decided based on what is most useful for manufacturing, etc.  Likewise, the scale (grammar) of Java was chosen so that programmers could ignore everything below it and write the same code on 'any' computer.  This is similar to how we ignore the low-level stuff when writing a document and just concentrate on the words and - to a lesser extent - the letters we use.

Two different database packages may have the exact same functionality at the scale we care about but be completely different "under the hood" ... say, if one was written for the Mac and one for a PC.

So to answer your question directly, I would have to know what scale Crutchfield uses to determine when something is new.  Presumably, Windows 7 has two types of innovation: the functionality presented to the user is different, as are the efficiencies in the underlying architecture.  Windows XP famously used a different kernel than Vista, making it more robust and stable, in addition to the changes that the user could see.

But if Crutchfield goes down a level, he could claim that it isn't new at all, since it uses the same, more fundamental, architecture of a PC's hardware, rather than a Mac's hardware, or some other configuration (grammar) of hardware.

In response to Glen's comments, I would say that his differentiation between thoughts and actions is also a somewhat arbitrary choice of scale.  I agree that how two people shoot a basketball is usually more easily translated between them than how they calculate the product of two numbers.  When I shoot a basketball, I follow the same general procedure (knees bent, one hand on the side of the ball and one hand behind it, etc) that other people do.  But my physical structure is still different than another person's, so I have refined the general procedure to better match my physical structure.  (Or not, since I usually miss the basket.)

Two different people calculating a product, however, may use two totally different methods.  One person may even have a larger grammar for this, utilizing more methods for more types of numbers than the second person.  (In effect, he has more of his brain dedicated to these types of tasks, which gives him the power to have a larger "math" grammar.)  So it's probably more precise to say: at a certain scale, 'actions' can be mapped between two people but 'thoughts' cannot be.

If you go down to the lower level processes, all of our neurons behave in approximately the same ways.  So at this scale they can be mapped, one person to another.  I.e., when thinking, one of my neurons is just as easily mapped to one of your neurons as my actions are to your similar actions.

Anyway, as I said, I'll have to read Crutchfield more carefully to know exactly what he is complaining about.  But I think it is interesting to point out the parallel between "emergence" and "innovation."  A new emergent property can arise when the underlying elements are configured/connected in a new way.  Likewise, something that is innovative can also be considered a new configuration of pieces we already had access to.

Cheers,

Ted


Re: Crutchfield's "Is anything ever new?"

Carl Tollander
In reply to this post by Nick Thompson
I believe that anyone who has played jazz, in any of its forms, could easily say yes.  Those who have not played jazz, but have imagined themselves doing so, might say no.


Re: Crutchfield's "Is anything ever new?"

glen e. p. ropella-2
In reply to this post by Ted Carmichael
Thus spake Ted Carmichael circa 10/30/2009 03:33 PM:
> In response to Glen's comments, I would say that his differentiation between
> thoughts and actions is also a somewhat arbitrary choice of scale.  I agree
> that how two people shoot a basketball is usually more easily translated
> between them than how they calculate the product of two numbers.  When I
> shoot a basketball, I follow the same general procedure (knees bent, one
> hand on the side of the ball and one hand behind it, etc) that other people
> do.  But my physical structure is still different than another person's, so
> I have refined the general procedure to better match my physical structure.
>  (Or not, since I usually miss the basket.)

Yes, you're onto something, here.  But I wouldn't consider it a matter
of general vs. specific for throwing a basketball.  Any general method
you may think exists is an illusion.  Let's say you're learning how to
do it from a coach and several fellow players.  For each other person
you watch do it, their method is particular to _them_.  In such a case,
there is no general method.  You may _imagine_ some illusory general
method in your head.  But when the method is executed, it is always
particular.

Now consider the coach's _description_ or model of the method.  Even in
that case, the description, the words, the actions the coach executes
with his mouth and hands in an attempt to communicate an idea are
particular to him.  The descriptive actions are particular to him.  Even
in that case, there is no general method.  Any general method you may
think exists is pure fiction.  What matters is the particular actions.

Induction is a myth. [*]

It's not general vs. specific.  It is abstract vs. concrete.  Your
observation of either the coach's description or your fellow players'
methods is chock full of errors and noise.  In order to cope with such
noise and translate from their actions to your actions, you have to fill
in the blanks.  You are totally ignorant of, say, how fast to twitch
your eyes while you're maintaining focus on the basket... or how fast to
twitch your hand/finger muscles while holding the ball.  You can't
observe those parts of the method when watching your fellow players.
And such information is totally absent from the coach's description.
So, you have to make that stuff up yourself.

And you make it up based on your _particular_ concrete ontogenetic
history.  And, hence, when you execute the method, it is also particular
to you.

However, because your hands, fingers, and eye muscles are almost
identical to those of your fellow players and your coach, the method is
transferable despite the huge HUGE _HUGE_ number of errors and amount of
noise in your observations.

> Two different people calculating a product, however, may use two totally
> different methods.  One person may even have a larger grammar for this,
> utilizing more methods for more types of numbers than the second person.
>  (In effect, he has more of his brain dedicated to these types of tasks,
> which give him the power to have a larger "math" grammar.)  So it's probably
> more precise to say: at a certain scale 'actions' can be mapped between two
> people but 'thoughts' cannot be.

It's less a matter of scale than it is of noise and error.  When
calculating a product (or doing any of the more _mechanical_ -- what
used to be called "effective" -- methods), the amount of noise and error
in the transmission from one to another is minimized to a huge extent.
Math is transferable from person to person for precisely this reason.
It is _formal_, syntactic.  Every effort of every mathematician goes
toward making math exact, precise, and unambiguous.

So, my argument is that you may _think_ that you have different methods
for calculating any product, and indeed, they may be slightly different.
 But the amount of variance between, say, two people adding 1+1 and two
people throwing a basketball is huge, HUGE, _HUGE_. [grin]  OK.  I'll
stop that.  Because (some) math is crisp, it's easier to fill in the
blanks after watching someone do it.

Now, contrast arithmetic with, for example, coinductive proofs.  While
it's very easy to watch a fellow mathematician add numbers and then go
add numbers yourself, it's quite difficult to demonstrate the existence
of a corecursive set after watching another person do it.  (At least in
my own personal math-challenged context, it's difficult. ;-)  You can't
just quickly fill in the blanks unless you have a lot... and I mean a
LOT of mathematical experience lying about in your ontogenic history.
Typically, you have to reduce the error and noise by lots of back and
forth... "What did you do there?" ... "Why did you do that?" ... "What's
that mean?"  Etc.

Hence, it's not a matter of scale.  It's a matter of the amount of
error, noise, and ignorance in the observation of the method.  And it's
not about the transfer of the fictitious flying spaghetti monsters in
your head.  It's a matter of transferring the actions, whatever the
symbols may mean.

> If you go down to the lower level processes, all of our neurons behave in
> approximately the same ways.  So at this scale they can be mapped, one
> person to another.  I.e., when thinking, one of my neurons is just as easily
> mapped to one of your neurons as my actions are to your similar actions.

Right.  But similarity at various scales is only relevant because it
helps determine the amount of error, noise, variance, and uncertainty at
whatever layer the abstraction (abstracted from the concrete) occurs.
Note I said "layer", not "level".  The whole concept of levels is a red
herring and should be totally PURGED from the conversation of emergence,
in my not so humble opinion. ;-)


* I have what I think are strong arguments _against_ the position I'm
taking, here.  But I'm trying to present the argument in a pure form so
that it's clear.  I'm sure at some point in the future when I finally
get a chance to pull out those arguments, someone will accuse me of
contradicting myself. [sigh]

--
glen e. p. ropella, 971-222-9095, http://agent-based-modeling.com



Re: Crutchfield's "Is anything ever new?"

Ted Carmichael
I'm actually fine with re-defining 'scale' to mean something along the lines of the amount of error in the mapping.  That is mostly, I think, what I was trying to say.  Let me see if I can clarify my points a little.

There is definitely a large number of differences between two people using the same method to shoot a basket.  All the things you mentioned - eye movement, exact combination of muscles, etc.  I was trying to say that this is a different scale (a wider range of error, perhaps) when compared to two shooters using different methods ... e.g., one person shoots in the traditional way and one person makes a 'granny shot.'

I agree that two people using the same method is an illusion.  But it is a useful illusion, when differentiating between the traditional method and the granny method.  Similarly, when Kareem Abdul-Jabbar used the hook shot, it was an innovative (hence: new) method for the NBA.  In this way I would say there are different levels of abstraction available ... one simply picks the level of abstraction that is useful for analysis.

I tried to use the mathematical example of calculating a product to illustrate this same idea.  When calculating 49 * 12, one might use the common method of working first the ones column, then the tens column, and adding the results, etc.  Another person may invent a new method, noticing that 49 is one less than 50 and that half of 12 is 6 (so 50 * 12 = 100 * 6 = 600), and say the answer is 600 - (12 * 1) = 588.  Still another may say that 490 + 100 - 2 is the answer.
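
Just to show the three routes really are the same function differently composed, here's a quick check (the function names are mine):

    def column_method(a, b):
        # Schoolbook multiplication: accumulate partial products
        # column by column.
        total, shift = 0, 1
        while b:
            total += a * (b % 10) * shift
            b //= 10
            shift *= 10
        return total

    def round_up_method():
        # 49 is one less than 50, so take 50 * 12 = 100 * 6 = 600
        # and give back one 12.
        return 50 * 12 - 12

    def split_method():
        # 49 * 12 = 49*10 + 49*2 = 490 + 98 = 490 + 100 - 2.
        return 490 + 100 - 2

    assert column_method(49, 12) == round_up_method() == split_method() == 588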

What is innovative about these new methods is not that they ignore the common operations of adding, multiplying, and subtracting.  It's that these basic operations are combined in an innovative way.  If Crutchfield asks: is this really something new?  I would say "yes."  If he points out that all three methods use the same old operations, I would say that doesn't matter ... those operations are used in an innovative way; in a new combination.

In a slightly different vein, Java is a "new" programming language even if it is only used to implement the same old algorithms.  The implementation is new, even if the algorithm - the method - is the same.  This is analogous to two mathematicians using the same "trick" to get a product, even if the respective neural networks each person possesses to implement this method are slightly different.

I do admit the term "level" or "scope" can exhibit ambiguities.  But I still find that "level" is a useful distinction.  It does imply varying degrees of complexity, and I think that is a valid implication, even if it is hard to nail down.

I also find it hard to define a counter-example to the proposition that emergent features of a system are always produced from the interactions of elements "one level down."  When we look at a marketplace, we assume the "invisible hand" is the result of human interaction.  There doesn't seem to be much use in jumping from the level of neurons - or even worse, quarks - straight to the marketplace.

Of course, depending on the scope of the "market" being studied, individual businesses and other multi-person entities may be the most basic element of this system.  There may even be entities defined as "one person" within this system, depending on how much heterogeneity you allow between individual elements.  

But, however you define the elements, this essentially means the same as saying "one level down," when talking about the emergent properties of that system.  If you want to talk about the emergent properties of a corporation, then you have redefined your system, and hence redefined your elements.

Anyway, the larger point is that innovation happens by combining elements in a new way, however those elements are defined.  A RISC processor is innovative in how it combines basic computer operations.  Java is innovative in the instructions sent to the processor, and in the package of common tools that comes with it.  A new algorithm is innovative in how it uses these tools at a different level of abstraction.  And a software package may be new in how it combines many existing algorithms and other elements of visualization and human-computer interaction.

If you don't like "levels" and prefer "layers," then I'm okay with that.  But I don't really see the distinction.  Can you expand on that?

Cheers,

Ted


Re: Crutchfield's "Is anything ever new?"

Nick Thompson
In reply to this post by Nick Thompson
Well, to the extent that this is a discussion of Crutchfield, I don't see how the hook shot would be something new in his terms.  He seems to mean something quite narrow by "new," and it seems to have something to do with a new type of computational "machine."  Since "computational machine" is an intuitional black hole for me, I cannot say whether the hook shot is a new sort of computational machine or not, but I am inclined to doubt it.
 
Nick
 
Nicholas S. Thompson
Emeritus Professor of Psychology and Ethology,
Clark University ([hidden email])
 
 
 
 
----- Original Message -----
Sent: 11/1/2009 6:54:44 PM
Subject: Re: [FRIAM] Crutchfield 's "Is anything ever new?"

I'm actually fine with re-defining 'scale' to mean something along the lines of the amount of error in the mapping.  That is mostly, I think, what I was trying to say.  Let me see if I can clarify my points a little.

There is definitely a large number of differences between two people using the same method to shoot a basket.  All the things you mentioned - eye movement, exact combination of muscles, etc.  I was trying to say that this is a different scale (a wider range of error, perhaps) when compared to two shooters using different methods ... e.g., one person shoots in the traditional way and one person makes a 'granny shot.'

I agree that two people using the same method is an illusion.  But it is a useful illusion, when differentiating between the traditional method and the granny method.  Similarly, when Kareem Abdul-Jabbar used the hook shot, it was an innovative (hence: new) method for the NBA.  In this way I would say there are different levels of abstraction available ... one simply picks the level of abstraction that is useful for analysis.

I tried to use the mathematical example of calculating a product to illustrate this same idea.  When calculating 49 * 12, one might use the common method of using first the one's column, then the ten's column, and adding the results, etc.  Another person may invent a new method, noticing that 49 is one less than 50, and that half of 12 is 6, and say the answer is 600 - (12 * 1) = 588.  Still another may say that 490 + 100 - 2 is the answer.

What is innovative about these new methods is not that they ignore the common operations of adding, multiplying, and subtracting.  It's that these basic operations are combined in an innovative way.  If Crutchfield asks: is this really something new?  I would say "yes."  If he points out that all three methods use the same old operations, I would say that doesn't matter ... those operations are used in an innovative way; in a new combination.

In a slightly different vein, Java is a "new" programming language even if it is only used to implement the same old algorithms.  The implementation is new, even if the algorithm - the method - is the same.  This is analogous to two mathematicians using the same "trick" to get a product, even if the respective neuron networks each person possesses to implement this method are slightly different.

I do admit the term "level" or "scope" can exhibit ambiguities.  But I still find that "level" is a useful distinction.  It does imply varying degrees of complexity, and I think that is a valid implication, even if it is hard to nail down.

I also find it hard to define a counter-example to the proposition that emergent features of a system are always produced from the interactions of elements "one level down."  When we look at a marketplace, we assume the "invisible hand" is the result of human interaction.  There doesn't seem to be much use in jumping from the level of neurons - or even worse, quarks - straight to the marketplace.

Of course, depending on the scope of the "market" being studied, individual businesses and other multi-person entities may be the most basic element of this system.  There may even be entities defined as "one person" within this system, depending on how much heterogeneity you allow between individual elements.  

But, however you define the elements, this essentially means the same as saying "one level down," when talking about the emergent properties of that system.  If you want to talk about the emergent properties of a corporation, then you have redefined your system, and hence redefined your elements.

Anyway, the larger point is that innovation happens by combining elements in a new way, however those elements are defined.  A RISK processor is innovative in how it combines basic computer operations.  Java is innovative in the instructions sent to the processor, and the package of common tools that comes with it.  A new algorithm is innovative in how it uses these tools at a different level of abstraction.  And a software package may be new in how it combines many existing algorithms and other elements of visualization and human-computer interaction.

If you don't like "levels" and prefer "layers," then I'm okay with that.  But I don't really see the distinction.  Can you expand on that?

Cheers,

Ted

On Sun, Nov 1, 2009 at 11:43 AM, glen e. p. ropella <[hidden email]> wrote:
Thus spake Ted Carmichael circa 10/30/2009 03:33 PM:
> In response to Glen's comments, I would say that his differentiation between
> thoughts and actions is also a somewhat arbitrary choice of scale.  I agree
> that how two people shoot a basketball is usually more easily translated
> between them than how they calculate the product of two numbers.  When I
> shoot a basketball, I follow the same general procedure (knees bent, one
> hand on the side of the ball and one hand behind it, etc) that other people
> do.  But my physical structure is still different than another person's, so
> I have refined the general procedure to better match my physical structure.
>  (Or not, since I usually miss the basket.)

Yes, you're onto something, here.  But I wouldn't consider it a matter
of general vs. specific for throwing a basketball.  Any general method
you may think exists is an illusion.  Let's say you're learning how to
do it from a coach and several fellow players.  For each other person
you watch do it, their method is particular to _them_.  In such a case,
there is no general method.  You may _imagine_ some illusory general
method in your head.  But when the method is executed, it is always
particular.

Now consider the coach's _description_ or model of the method.  Even in
that case, the description, the words, the actions the coach executes
with his mouth and hands in an attempt to communicate an idea are
particular to him.  The descriptive actions are particular to him.  Even
in that case, there is no general method.  Any general method you may
think exists is pure fiction.  What matters is the particular actions.

Induction is a myth. [*]

It's not general vs. specific.  It is abstract vs. concrete.  Your
observation of either the coach's description or your fellow players'
methods is chock full of errors and noise.  In order to cope with such
noise and translate from their actions to your actions, you have to fill
in the blanks.  You are totally ignorant of, say, how fast to twitch
your eyes while you're maintaining focus on the basket... or how fast to
twitch your hand/finger muscles while holding the ball.  You can't
observe those parts of the method when watching your fellow players.
And such information is totally absent from the coach's description.
So, you have to make that stuff up yourself.

And you make it up based on your _particular_ concrete ontogenetic
history.  And, hence, when you execute the method, it is also particular
to you.

However, because your hands, fingers, and eye muscles are almost
identical to those of your fellow players and your coach, the method is
transferable despite the huge HUGE _HUGE_ number of errors and amount of
noise in your observations.

> Two different people calculating a product, however, may use two totally
> different methods.  One person may even have a larger grammar for this,
> utilizing more methods for more types of numbers than the second person.
>  (In effect, he has more of his brain dedicated to these types of tasks,
> which give him the power to have a larger "math" grammar.)  So it's probably
> more precise to say: at a certain scale 'actions' can be mapped between two
> people but 'thoughts' cannot be.

It's less a matter of scale than it is of noise and error.  When
calculating a product (or doing any of the more _mechanical_ -- what
used to be called "effective" -- methods), the amount of noise and error
in the transmission from one to another is minimized to a huge extent.
Math is transferable from person to person for precisely this reason.
It is _formal_, syntactic.  Every effort of every mathematician goes
toward making math exact, precise, and unambiguous.

So, my argument is that you may _think_ that you have different methods
for calculating any product, and indeed, they may be slightly different.
 But the amount of variance between, say, two people adding 1+1 and two
people throwing a basketball is huge, HUGE, _HUGE_. [grin]  OK.  I'll
stop that.  Because (some) math is crisp, it's easier to fill in the
blanks after watching someone do it.

Now, contrast arithmetic with, for example, coinductive proofs.  While
it's very easy to watch a fellow mathematician add numbers and then go
add numbers yourself, it's quite difficult to demonstrate the existence
of a corecursive set after watching another person do it.  (At least in
my own personal math-challenged context, it's difficult. ;-)  You can't
just quickly fill in the blanks unless you have a lot... and I mean a
LOT of mathematical experience lying about in your ontogenic history.
Typically, you have to reduce the error and noise by lots of back and
forth... "What did you do there?" ... "Why did you do that?" ... "What's
that mean?"  Etc.

Hence, it's not a matter of scale.  It's a matter of the amount of
error, noise, and ignorance in the observation of the method.  And it's
not about the transfer of the fictitious flying spaghetti monsters in
your head.  It's a matter of transferring the actions, whatever the
symbols may mean.

> If you go down to the lower level processes, all of our neurons behave in
> approximately the same ways.  So at this scale they can be mapped, one
> person to another.  I.e., when thinking, one of my neurons is just as easily
> mapped to one of your neurons as my actions are to your similar actions.

Right.  But similarity at various scales is only relevant because it
helps determine the amount of error, noise, variance, and uncertainty at
whatever layer the abstraction (abstracted from the concrete) occurs.
Note I said "layer", not "level".  The whole concept of levels is a red
herring and should be totally PURGED from the conversation of emergence,
in my not so humble opinion. ;-)


* I have what I think are strong arguments _against_ the position I'm
taking, here.  But I'm trying to present the argument in a pure form so
that it's clear.  I'm sure at some point in the future when I finally
get a chance to pull out those arguments, someone will accuse me of
contradicting myself. [sigh]

--
glen e. p. ropella, 971-222-9095,
http://agent-based-modeling.com


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
Reply | Threaded
Open this post in threaded view
|

Re: Crutchfield 's "Is anything ever new?"

Nick Thompson
In reply to this post by Nick Thompson
What kind of levels are we talking about, here?

I guess I think that levels are logical.  So a black bird is one level, a
bunch of black birds is another level, and a warehouse full of bunches of
black birds another level.  Ditto, a black bird, a flock of black birds,
and a sky full of flocks of blackbirds.  What makes the individual-to-flock
level shift interesting is the manner in which the flock takes form
because the behavior of the birds varies with their positions within the
aggregate.  Ditto the form of a pile of sand, for that matter.  

Reading Glen concerning SCALE, I am thinking that scale and variance are
necessarily interrelated -- unless scale is just how big something is in
relation to the distance between the pole and the equator of the earth or
the length of the King's stride.  

Nick





Nicholas S. Thompson
Emeritus Professor of Psychology and Ethology,
Clark University ([hidden email])
http://home.earthlink.net/~nickthompson/naturaldesigns/






============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
Reply | Threaded
Open this post in threaded view
|

Re: Crutchfield 's "Is anything ever new?"

Ted Carmichael
In reply to this post by Nick Thompson
It's a good point.  It seems Crutchfield is using the idea of a computational machine - the parts necessary to construct a model of the environment - to define innovation.  The innovation is a leap from one model class to another ... from one type of machine to another.

If one defines the computational machine for a jump-shot as a model of how to get the ball in the basket, then the hook shot is a leap to a new model class, fundamentally different from the "jump-shot" model class.  The person who developed the hook shot has developed a new model that performs better under a certain environment.

If this is correct, then how narrowly he defines a model class would determine how broadly he defines "new."  I think.
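
One way to make the "leap" concrete -- with the caveat that this is my loose reading, using toy machines rather than Crutchfield's actual formalism -- is to contrast a finite-state model with a counter model in Python.  The counter machine captures a pattern (a^n b^n) that no parameter-tweaking within the finite-state class ever will:

    def finite_state_model(s, max_n=3):
        # A finite-state machine can only check a^n b^n up to some fixed
        # n baked into its (finitely many) states.
        return s in {"a" * n + "b" * n for n in range(max_n + 1)}

    def counter_model(s):
        # One unbounded counter handles every n: a different model class.
        count, seen_b = 0, False
        for ch in s:
            if ch == "a" and not seen_b:
                count += 1
            elif ch == "b" and count > 0:
                seen_b, count = True, count - 1
            else:
                return False
        return count == 0

    print(finite_state_model("aaaabbbb"))   # False: beyond the baked-in n
    print(counter_model("aaaabbbb"))        # True: the new class covers it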

-Ted

On Sun, Nov 1, 2009 at 9:54 PM, Nicholas Thompson <[hidden email]> wrote:
Well, to the extent that this is a discussion of Crutchfield, I don't see how the hook shot would be something new in his terms.  He seems to mean something quite narrow by "new" and it seems to have something to do with a new type of computational "machine".  Since "computational machine" is an intuitional black hole for me, I cannot say whether the hook shot is a new sort of computational machine or not, but I am inclined to doubt it.
 
Nick
 
Nicholas S. Thompson
Emeritus Professor of Psychology and Ethology,
Clark University ([hidden email])
 
 
 
 


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
Reply | Threaded
Open this post in threaded view
|

levels vs. layers (forked from Crutchfield 's "Is anything ever new?")

glen e. p. ropella-2
In reply to this post by Ted Carmichael
Your post deserves more attention than I have at the moment.  If it's
not covered by someone else, I'll get to it in the next few days.  But
since I can respond to this one quickly, I will.

Thus spake Ted Carmichael circa 11/01/2009 05:53 PM:
> If you don't like "levels" and prefer "layers," then I'm okay with that.
>  But I don't really see the distinction.  Can you expand on that?

Levels require hierarchy.  Layers don't.  Of course, in the most
oft-used example of layers -- the onion -- a hierarchy is there.  But
it's not necessary.  Think of something like a scarf scrunched up at the
middle but fully spread out on the ends.  In the scrunched up regions,
there are many folds.  If you view that region at a small enough scale,
those folds don't look like folds, they look like layers.  In fact, they
are layers, without hierarchy.

Now, if you're prejudiced or biased and you naturally prefer one side of
the small region of scrunched up scarf, then you might say that's the
"top" and, as you burrow through the layers, you approach the "bottom".
 If that's your adopted bias, then it's reasonable to call them
"levels".  But using the term embeds the bias.  It's more accurate to
just stick with calling them layers and avoid the bias if possible.
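
In data-structure terms -- my gloss, not a formal definition -- levels are a directed relation ("below" points one way) while layers are merely adjacent, an undirected relation.  Same regions, but the undirected version embeds no up/down bias:

    levels = {("neurons", "minds")}               # directed: neurons "below" minds

    adjacent = {frozenset(("neurons", "minds"))}  # undirected: just neighbors

    # With layers you can still ask "is A a mechanism for B?" in either
    # direction; with levels, one direction is presupposed.
    print(("minds", "neurons") in levels)               # False: direction baked in
    print(frozenset(("minds", "neurons")) in adjacent)  # True: either way around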

I agree that it always SEEMS reasonable enough to talk about how a lower
level mechanism generates a higher level phenomenon.  But you can never
separate out any potential bias if you automatically begin _every_ study
assuming that one layer is somehow lower than another layer.  So, it's
best to make every attempt to remove as much potential bias from the
language as possible.  If we stop using level and stick to using layer,
we are open to the idea that what we used to think of as the higher
level might actually be the mechanism for what we used to think of as
the lower level.  Swapping perspective becomes easier; and that opens
the door to more rigorously defined and, ultimately, scientific
descriptions.

I tend to view it a bit like the reversibility of time.  If we'd never
expressed dynamics in terms of time reversible equations, we never would
have been able to clearly articulate time IRreversible processes.
Similarly, if, indeed, there really are things like "downward
causation", then we'll never be able to clearly articulate it if we
_always_ embed the assumption of upward and downward in all our
language.  Hence, a clear discussion of emergence has to avoid embedding
that assumption... It has to avoid the word "level", at least until we
can rebuild it from the more general term "layer".

--
glen e. p. ropella, 971-222-9095, http://agent-based-modeling.com


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
Reply | Threaded
Open this post in threaded view
|

Re: Crutchfield 's "Is anything ever new?"

glen e. p. ropella-2
In reply to this post by Ted Carmichael

First, I pick a few nits just to be sure we're communicating.  Please
note that I almost didn't send this because too much of what I say is
just distracting nit-picking.  But then I decided that's OK because the
people who don't want to read it can just hit the delete key. ;-)

Thus spake Ted Carmichael circa 09-11-01 05:53 PM:
> I'm actually fine with re-defining 'scale' to mean something along the lines
> of the amount of error in the mapping.  That is mostly, I think, what I was
> trying to say.

Well, I couldn't redefine 'scale' that way.  For me, the word "scale" is
really a synonym for the word "measure" (noun).  It sets the granularity
at which one can distinguish parts.  That means it's an aspect or
perspective taken when looking at some phenomena.

Now, it's true that indistinguishability or "wiggle room" is the dual of
 scale.  So, if I choose scale X, that implies a granularity below which
I can't distinguish things.  So, error, noise, and uncertainty are
related to the dual of scale.  But just because one cannot measure at a
finer grain does NOT imply that there are finer grained things wiggling
about down below the chosen scale, only that there COULD be.
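
A tiny sketch of that usage (the numbers are arbitrary): fixing a scale fixes which differences are observable, and says nothing about what may be wiggling below it.

    def observe(x, scale):
        # Measuring with a stick of length `scale`: report values only
        # to the nearest whole stick.
        return round(x / scale)

    a, b = 1.92, 2.04
    print(observe(a, 0.5) == observe(b, 0.5))  # True: indistinguishable at this scale
    print(observe(a, 0.1) == observe(b, 0.1))  # False: a finer scale separates them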

Translating methods from one person to another involves scale to the
extent that the scale chosen for observing is capable of precisely
mapping measurements of the other guy's actions to controls for your
own.  As such, it's not arbitrary, at all.  In some contexts, scale must
be carefully chosen and in others scale is irrelevant.  We can often
translate methods from human to human because regardless of what scale
is chosen, we are similar all the way up and down all the scales.  And
this is also what allows us to trust the false concept of translating
ideas from human to human, which was what my original criticism was
about: Ideas should not be a part of this conversation of novelty.

>  Let me see if I can clarify my points a little.
>
> There is definitely a large number of differences between two people using
> the same method to shoot a basket.  All the things you mentioned - eye
> movement, exact combination of muscles, etc.  
>
> [...]
>
> I agree that two people using the same method is an illusion.

Actually, I was arguing the opposite, that two people (as long as they
have the same physical, chemical, biological, anatomical, etc.
structure) _can_ use the same method.  They do NOT because there's
always a mismatch between measurement and reconstruction (sensors and
effectors).  But they could if the sensors were precise and complete enough.

But what is an illusion is the generic method.  No such thing exists.
If, for example, you try to generalize a method from, say, 20
chimpanzees and 20 humans accomplishing the same objective... let's say
eating something, then the generalization is an illusion.  And, I agree
that it's a useful illusion.


> I was trying to say that this
> is a different scale (a wider range of error, perhaps) when compared to two
> shooters using different methods ... e.g., one person shoots in the
> traditional way and one person makes a 'granny shot.'
>
> [...]
>
> But it is a
> useful illusion, when differentiating between the traditional method and the
> granny method.  Similarly, when Kareem Abdul-Jabbar used the hook shot, it
> was an innovative (hence: new) method for the NBA.  In this way I would say
> there are different levels of abstraction available ... one simply picks the
> level of abstraction that is useful for analysis.
>
> I tried to use the mathematical example of calculating a product to
> illustrate this same idea.  When calculating 49 * 12, one might use the
> common method of using first the one's column, then the ten's column, and
> adding the results, etc.  Another person may invent a new method, noticing
> that 49 is one less than 50, and that half of 12 is 6, and say the answer is
> 600 - (12 * 1) = 588.  Still another may say that 490 + 100 - 2 is the
> answer.

OK.  I don't think methods can be tacitly distinguished by choice of
scale.  To be clear, measurements (state) can be distinguished by choice
of scale; but actions (functions, methods) can't.  So, if we choose the
coarsest scale for the basketball example, we have two states: 1) ball
at point A and 2) ball in hoop.  At that scale, you're right that you
can't distinguish the measurements from the jump, hook, or granny shots.
 Then add more states, let's say: 1) ball at point A, 2) ball at point
B, and 3) ball in hoop.  Between the 3 methods, state (2) will be
different.  So, again, you're right that you can distinguish the TRACE
of the methods.

And you can then argue (by the duality of congruence and bisimilarity)
that a distinction between the measurements implies a distinction
between the methods.  But you can't distinguish between methods directly.
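
In code, the point looks like this (the state descriptions are invented stand-ins): we never compare the methods themselves, only their traces, and whether the traces differ depends on how many intermediate states the chosen scale records.

    def jump_shot():   return ["ball at A", "released overhead", "ball in hoop"]
    def hook_shot():   return ["ball at A", "released side-arm", "ball in hoop"]
    def granny_shot(): return ["ball at A", "released underhand", "ball in hoop"]

    def trace(method, states):
        full = method()
        return [full[0], full[-1]] if states == 2 else full

    # Coarsest scale, two states: all three methods leave the same trace.
    print(trace(jump_shot, 2) == trace(hook_shot, 2) == trace(granny_shot, 2))

    # Record one intermediate state and the traces come apart.
    print(trace(jump_shot, 3) == trace(hook_shot, 3))   # False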

What I was arguing with, however, was your statement that the
distinction between thought and action was a somewhat arbitrary choice
of scale.  The scale is not at all arbitrary.  To distinguish between
the traces of the hook and jump shot, you need a smaller scale than that
required for the distinction between the traces of the hook or jump and
the granny shot.

All of which goes back to what I tried to say before.  The
transferability of methods isn't really about scale but about the
mismatch between the measurements and the actions you have to take to
execute your particular method.  I.e. the distinction between thoughts
and actions is NOT a matter of (even a somewhat) arbitrary choice of
scale.  It's about whether the twitching we do as part of all our
methods is commensurate with the twitching others do as part of all
their methods.  When tracing the method of someone very similar to us,
our methods of interpolating between states are similar enough to allow
us to execute a different method that has the same trace.

> What is innovative about these new methods is not that they ignore the
> common operations of adding, multiplying, and subtracting.  It's that these
> basic operations are combined in an innovative way.  If Crutchfield asks: is
> this really something new?  I would say "yes."  If he points out that all
> three methods use the same old operations, I would say that doesn't matter
> ... those operations are used in an innovative way; in a new combination.
>
> In a slightly different vein, Java is a "new" programming language even if
> it is only used to implement the same old algorithms.  The implementation is
> new, even if the algorithm - the method - is the same.  This is analogous to
> two mathematicians using the same "trick" to get a product, even if the
> respective neuron networks each person possesses to implement this method
> are slightly different.

I don't think Crutchfield's framework would classify the hook shot or
Java as novel because they aren't examples of movement to a more
expressive class of models.  Languages of equivalent power are used to
express the jump and hook shots.  Granted, perhaps the internal models
of the individual players use impoverished languages, in which case,
after seeing their first hook shot, they may realize that there's a more
expressive language they _could_ be using.  But ontologically, the hook
shot is not really (for real, actually) new.

By analogy, imagine a 2 dimensional real (pun intended) plane.  We
already know of all the functions like x^2, x^3, x+y, etc.  Then when I
take my pencil and draw some squiggly line across the plane, is my new
"function" really new?  If not, then how is the hook shot new?

Similarly, Java, as a language, is no more powerful than C.  Granted,
perhaps the individuals who use C may actually use impoverished models
of C and when they see someone implement something cool, they may revise
their own internal model.  But, ontologically, C and Java are just as
expressive as languages.  (This argument depends on the distinction
between the language Java and the many libraries accessible to it and
the language C and the many libraries accessible to it.)

However, we can consider parallel systems more expressive than serial
systems.  So, I think parallel computation would meet Crutchfield's
definition of novel.

> I do admit the term "level" or "scope" can exhibit ambiguities.  But I still
> find that "level" is a useful distinction.  It does imply varying degrees of
> complexity, and I think that is a valid implication, even if it is hard to
> nail down.
>
> I also find it hard to define a counter-example to the proposition that
> emergent features of a system are always produced from the interactions of
> elements "one level down."  

Well, it would be hard to construct a counter example because "emergent
feature" is ambiguous, as is "produced", "interaction", and "element".
[grin]  So, it's no surprise that it's difficult to construct such a
counter example.  No matter what you come up with, all you need to do is
subtly redefine any of those words to fit the context.

I'm not being snarky, here, either.  I truly believe the language you're
using to talk about this is hopelessly self-fulfilling ... and perhaps
even degenerate.  Of course emergent features emerge up one level from
the level just below them!  That sounds tautological to me.  You can't
construct a counter example because it's like saying ( x == y ) implies
(x == y).

Perhaps it's my own personal mental disorder; but I see this sort of
thing everywhere people talk about "emergence".  (Remember, though, that
Nick introduced critical rationalism into the conversation and, in that
context, _all_ rhetoric is degenerate ... all deduction is tautological.)

> Anyway, the larger point is that innovation happens by combining elements in
> a new way, however those elements are defined.  A RISC processor is
> innovative in how it combines basic computer operations.  Java is innovative
> in the instructions sent to the processor, and the package of common tools
> that comes with it.  A new algorithm is innovative in how it uses these
> tools at a different level of abstraction.  And a software package may be
> new in how it combines many existing algorithms and other elements of
> visualization and human-computer interaction.

I don't disagree with you, personally.  But I think Crutchfield's
criteria are a bit stiffer.  I think he's saying something's new only if
the _class_ or family changes.  I.e. as RussellS might say, when the
successor language expresses something the predecessor language can't
express.  Since Java and C are equivalent and RISC and CISC are
equivalent, they're not novel with respect to each other.

Of course, I may be totally wrong.  And I'd be grateful to anyone who
shows me where.

--
glen e. p. ropella, 971-222-9095, http://agent-based-modeling.com


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
Reply | Threaded
Open this post in threaded view
|

flat sets and scale (was Crutchfield 's "Is anything ever new?")

glen e. p. ropella-2
In reply to this post by Nick Thompson
I think I explained all this in the past 2 e-mails; but let me repeat
just to waste everyone's bandwidth before net neutrality fails. [grin]
And I changed the subject since we're not talking about Crutchfield's
model classes anymore.

Thus spake Nicholas Thompson circa 09-11-01 07:27 PM:
> What kind of levels are we talking about, here.  
>
> I guess I think that levels are logical.   So  A black bird is one level, a
> bunch of black birds is another level, and warehouse full of bunches of
> black birds another level.  Ditto, a black bird, a flock of black birds,
> and a sky full of flocks of blackbirds.  What makes the individual-to-flock
> level shift interesting is the manner in which the flock takes form
> because the behavior of the birds varies with their positions within the
> aggregate.  Ditto the form of a pile of sand, for that matter.  

I don't have a problem with using the word "level" in general.  I only
have a problem trying to define (or construct) emergence while using it.
 It's not the circularity that bothers me so much as the sloppiness.

For example when you say "a black bird is one level, a bunch is another
level", that sort of thing drives me batty.  The only sense I can make
of it is in the mathematical induction sense where we construct a set
using the successor function.  And, in that sense, there's no difference
between 2 black birds and a bazillion black birds.  And "level" just
fails to capture whatever intuition you're after between 1 black bird
and 2 black birds.

The same is true for a warehouse full of bunches of black birds.  All
you've done is redefine the unit.  There's no fundamental difference
between a warehouse of bunches and a bunch of birds.  No level has been
crossed.  All you're doing is drawing lines around things in different
ways and claiming that's important in some mysterious way.

Now, if you took the route of Rosen and said that drawing lines that way
is one thing but drawing lines with non-well-founded sets is quite
another thing, then I'd understand because sets that cannot be members
of themselves are very different from sets that can be members of
themselves.  Imagine a warehouse full of bunches of black birds, each of
which contains a warehouse containing bunches of black birds, ...  Now
_that's_ interesting and different.
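
Both cases fit in a few lines of Python (toy data, obviously).  The well-founded nesting flattens away -- element-for-element, the warehouse is just birds -- while the self-membered version never bottoms out:

    bunch     = ["bird"] * 3
    warehouse = [bunch, bunch, bunch]      # lines drawn around the same units

    def flatten(x):
        if not isinstance(x, list):
            return [x]
        return [leaf for item in x for leaf in flatten(item)]

    print(flatten(warehouse) == ["bird"] * 9)   # True: no level was crossed

    strange = ["bird"]
    strange.append(strange)                # a list that is a member of itself
    # flatten(strange) would recurse forever: this one doesn't flatten away.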

> Reading Glen concerning SCALE, I am thinking that scale and variance are
> necessarily interrelated -- unless scale is just how big something is in
> relation to the distance between the pole and the equator of the earth or
> the length of the King's stride.  

Yes, indistinguishability is the dual of scale.  If you fix the scale,
any variance in the thing being observed will either be observable or
not.  Scale, as I'm using it, isn't about how big something is.  It's
about the measuring stick you hold up against that something.

--
glen e. p. ropella, 971-222-9095, http://agent-based-modeling.com


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
Reply | Threaded
Open this post in threaded view
|

Re: Crutchfield 's "Is anything ever new?"

Ted Carmichael
In reply to this post by glen e. p. ropella-2
Yes; I will now call you "Glen the pedant." ;-)

On Mon, Nov 2, 2009 at 8:07 PM, glen e. p. ropella <[hidden email]> wrote:

First, I pick a few nits just to be sure we're communicating.  Please
note that I almost didn't send this because too much of what I say is
just distracting nit-picking.  But then I decided that's OK because the
people who don't want to read it can just hit the delete key. ;-)

Thus spake Ted Carmichael circa 09-11-01 05:53 PM:
> I'm actually fine with re-defining 'scale' to mean something along the lines
> of the amount of error in the mapping.  That is mostly, I think, what I was
> trying to say.

Well, I couldn't redefine 'scale' that way.  For me, the word "scale" is
really a synonym for the word "measure" (noun).  It sets the granularity
at which one can distinguish parts.  That means it's an aspect or
perspective taken when looking at some phenomena.

Now, it's true that indistinguishability or "wiggle room" is the dual of
 scale.  So, if I choose scale X, that implies a granularity below which
I can't distinguish things.  

Well ... no.  If you choose a particular scale, that implies that you are unconcerned with using a finer or coarser grain to distinguish things ... that you choose to differentiate at one level and not another.  It says nothing of ability.

I suppose now you'll say, "Well, once you have chosen scale X, then you are limited in your ability ... limited by scale X," and then I'll say, "I thought you meant 'measure X' even though 'measure' is a verb," and then you'll say, "Measure can also be a noun," and then the pies will come out and hilarity will ensue and someone will have to clean up the mess.
 
Translating methods from one person to another involves scale to the
extent that the scale chosen for observing is capable of precisely
mapping measurements of the other guy's actions to controls for your
own.  As such, it's not arbitrary, at all.  In some contexts, scale must
be carefully chosen and in others scale is irrelevant.  We can often
translate methods from human to human because regardless of what scale
is chosen, we are similar all the way up and down all the scales.  

?? I don't get this part.  I'm 6'5", which means there is a ~99% chance I am taller than you.  As such, my jump shot will differ from yours in many subtle ways.  

We need a coarser scale in order to equate them.  Assume a scale that doesn't distinguish between your jump shot and mine, but one that is still fine enough to distinguish between a jump shot and a hook shot.  If I use this scale, then a hook shot is something new, i.e. something different than a jump shot.  If, however, I use an even coarser scale - say, your two-state solution (ha!) - then these two methods of shooting a ball are no longer distinguishable.
 
And
this is also what allows us to trust the false concept of translating
ideas from human to human, which was what my original criticism was
about: Ideas should not be a part of this conversation of novelty.

You have to prove that, I think.  Occam's razor and all.  The null hypothesis would be that similar ideas spring from similar mental processes.
 
But what is an illusion is the generic method.  No such thing exists.
If, for example, you try to generalize a method from, say, 20
chimpanzees and 20 humans accomplishing the same objective... let's say
eating something, then the generalization is an illusion.  And, I agree
that it's a useful illusion.


Yay!  We agree!  Now let me tell you what I really meant...
 
OK.  I don't think methods can be tacitly distinguished by choice of
scale.  To be clear, measurements (state) can be distinguished by choice
of scale; but actions (functions, methods) can't.  So, if we choose the
coarsest scale for the basketball example, we have two states: 1) ball
at point A and 2) ball in hoop.  At that scale, you're right that you
can't distinguish the measurements from the jump, hook, or granny shots.
 Then add more states, let's say: 1) ball at point A, 2) ball at point
B, and 3) ball in hoop.  Between the 3 methods, state (2) will be
different.  So, again, you're right that you can distinguish the TRACE
of the methods.

And you can then argue (by the duality of congruence and bisimilarity)
that a distinction between the measurements implies a distinction
between the methods.  But you can't distinguish between methods directly.

I'm not sure what you are getting at here.  If you can watch someone playing basketball, and you know when to say "That was a jump shot" and when to say "That was a hook shot," then you are able to distinguish between the methods.  If you aren't able to see the difference, then you are probably using the wrong scale for your analysis.
 
What I was arguing with, however, was your statement that the
distinction between thought and action was a somewhat arbitrary choice
of scale.  The scale is not at all arbitrary.

Perhaps "artificial" is a better word.  The scale and the type of the analysis is a choice that we make.  We determine what the threshold is for saying one thing is different than another.  We try to make these thresholds useful, but they are artificially imposed by our desire to categorize.  That's all I meant.
 

All of which goes back to what I tried to say before.  The
transferability of methods isn't really about scale but about the
mismatch between the measurements and the actions you have to take to
execute your particular method.  I.e. the distinction between thoughts
and actions is NOT a matter of (even a somewhat) arbitrary choice of
scale.  It's about whether the twitching we do as part of all our
methods is commensurate with the twitching others do as part of all
their methods.  When tracing the method of someone very similar to us,
our methods of interpolating between states are similar enough to allow
us to execute a different method that has the same trace.

I don't see how comparing two methods of shooting a ball is any different than comparing two methods of mental calculation.  I've already conceded that each of these requires a different scale of analysis.  Yet some scale that can equate two methods of mental calculation does exist.

But I think you got that last bit backwards.  When you say two different methods are capable of producing the same trace, then your scale is very coarse and limited ... you're only using 2 or 3 states of where the ball is.  Which is fine, if that's how you chose to analyze the action.  But given that scale, the similarity between the two people that produce the trace doesn't matter.  You're not even looking at the people.  (Inferred by the fact that you don't care to differentiate between the methods.)
 

> What is innovative about these new methods is not that they ignore the
> common operations of adding, multiplying, and subtracting.  It's that these
> basic operations are combined in an innovative way.  If Crutchfield asks: is
> this really something new?  I would say "yes."  If he points out that all
> three methods use the same old operations, I would say that doesn't matter
> ... those operations are used in an innovative way; in a new combination.

I don't think Crutchfield's framework would classify the hook shot or
Java as novel because they aren't examples of movement to a more
expressive class of models.  

Is that how he defined innovation?  That the new model class is necessarily more expressive?  If so, I missed it.  I thought he just said innovation is a jump to a different model class.

But if he says the new model class must be more expressive, then I disagree.  If he says the hook shot is nothing new - that it is functionally the same as the jump shot, and hence in the same model class  - then his granularity of analysis is too coarse.  His "model class" is too broad.  The hook shot was new, and it was innovative, as these things are commonly understood.  
 
By analogy, imagine a 2 dimensional real (pun intended) plane.  We
already know of all the functions like x^2, x^3, x+y, etc.  Then when I
take my pencil and draw some squiggly line across the plane, is my new
"function" really new?

Yes!  That's the beauty of it.  The elements are already defined, and the number and type of squiggly lines are limited by these elements.  But your line is (presumably) a new combination of these elements never seen before.  Just like Windows 7 is a new OS, combining 1's and 0's, and using logical NAND gates, in a new way.
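
Here's that point in miniature, in Python (the gate constructions are standard, the framing is mine): one old element, NAND, and every other Boolean operation is just NAND wired together in a new combination.

    def nand(a, b): return not (a and b)

    def not_(a):    return nand(a, a)
    def and_(a, b): return not_(nand(a, b))
    def or_(a, b):  return nand(not_(a), not_(b))

    bits = [False, True]
    assert all(and_(a, b) == (a and b) for a in bits for b in bits)
    assert all(or_(a, b)  == (a or b)  for a in bits for b in bits)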
 
Well, it would be hard to construct a counter example because "emergent
feature" is ambiguous, as is "produced", "interaction", and "element".
[grin]  So, it's no surprise that it's difficult to construct such a
counter example.  No matter what you come up with, all you need to do is
subtly redefine any of those words to fit the context.

I'm not being snarky, here, either.  I truly believe the language you're
using to talk about this is hopelessly self-fulfilling ... and perhaps
even degenerate.  Of course emergent features emerge up one level from
the level just below them!  That sounds tautological to me.  You can't
construct a counter example because it's like saying ( x == y ) implies
(x == y).

Two balls are floating in space.  You say ball A is above ball B, and this implies that ball B is below ball A.  This is tautological, I agree.  But that doesn't mean you haven't said something useful about the two balls.  And note: you have also said something useful about the two balls' relationship to other things in the environment, by defining "above" and "below."

Well, I've stayed up WAY too late writing this.  

Cheers,

Ted


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
Reply | Threaded
Open this post in threaded view
|

Re: Crutchfield 's "Is anything ever new?"

Owen Densmore
Administrator
Does the enthusiastic response to Nick's seminar suggest having it  
held on-line for those unable to show up at the Santa Fe Library?

I'm sure it could easily be done with Skype or some similar technology.

I ask because we are exploring ways to address "higher education" in  
Santa Fe.  Santa Fe is pretty rural, so does not have a university to  
call its own.  It *does* have several schools, profs, PhDs, think-
tanks and so on, but not organized yet into access to higher  
education.  (i.e. upper undergraduate through graduate studies).

Nick is doing good work in this area .. he can tell you more if you  
ask him.

     -- Owen



============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
Reply | Threaded
Open this post in threaded view
|

Re: Crutchfield 's "Is anything ever new?"

glen e. p. ropella-2
In reply to this post by Ted Carmichael
Thus spake Ted Carmichael circa 11/02/2009 11:59 PM:
> Yes; I will now call you "Glen the pedant." ;-)

That's not near good enough, since I'm poorly educated and an
anti-intellectual... from Texas no less... You'd have to include
something about hypocrisy or hubris, too... hypocritical hubristic
pedant? [grin]

OK.  No nits this time.

>
> On Mon, Nov 2, 2009 at 8:07 PM, glen e. p. ropella <
> [hidden email]> wrote:
>>
>> Translating methods from one person to another involves scale to the
>> extent that the scale chosen for observing is capable of precisely
>> mapping measurements of the other guy's actions to controls for your
>> own.  As such, it's not arbitrary, at all.  In some contexts, scale must
>> be carefully chosen and in others scale is irrelevant.  We can often
>> translate methods from human to human because regardless of what scale
>> is chosen, we are similar all the way up and down all the scales.
>
>
> ?? I don't get this part.  I'm 6'5", which means there is a ~99% chance I am
> taller than you.  As such, my jump shot will differ from yours in many
> subtle ways.

I disagree.  The actions can be the same.  The states are necessarily
different.  The actions will scale (i.e. be invariant to changes in
scale) from my height to yours, primarily because we're working in a
metric space.  I.e. the state space is very well behaved.  The method
can be the same but a trace of either method will show different values
in the same space.

>> And
>> this is also what allows us to trust the false concept of translating
>> ideas from human to human, which was what my original criticism was
>> about: Ideas should not be a part of this conversation of novelty.
>>
>
> You have to prove that, I think.  Occam's razor and all.  The
> null-hypothesis would be that similar ideas spring from similar mental
> processes.

Oh no, no, no, no.  In order to apply a principle of parsimony for a
_theory_, we need the theory.  And "idea" is an ill-defined (actually
undefined) construct.  So, if you include "idea" at all, you've already
thrown Occam out the window.  We can't measure ideas.  We can't point to
ideas.  We can't ship them around the country via UPS.  Etc.  Ideas are
ephemeral and, I argue, fictitious.  Theories (especially testable ones)
don't include ideas at all.  Theories involve sentences, statements,
rhetoric, etc., but not ideas.

Going back to Crutchfield, this is part of what he's recognizing.  The 4
types of mechanics he talks about are not about ideas at all.  They are
about descriptions, languages, model classes, real stuff outside the mind.

>> OK.  I don't think methods can be tacitly distinguished by choice of
>> scale.  To be clear, measurements (state) can be distinguished by choice
>> of scale; but actions (functions, methods) can't.  So, if we choose the
>> coarsest scale for the basketball example, we have two states: 1) ball
>> at point A and 2) ball in hoop.  At that scale, you're right that you
>> can't distinguish the measurements from the jump, hook, or granny shots.
>>  Then add more states, let's say: 1) ball at point A, 2) ball at point
>> B, and 3) ball in hoop.  Between the 3 methods, state (2) will be
>> different.  So, again, you're right that you can distinguish the TRACE
>> of the methods.
>>
>> And you can then argue (by the duality of congruence and bisimilarity)
>> that a distinction between the measurements implies a distinction
>> between the methods.  But you can't distinguish between methods directly.
>>
>
> I'm not sure what you are getting at here.  If you can watch someone playing
> basketball, and you know when to say "That was a jump shot" and when to say
> "That was a hook shot," then you are able to distinguish between the
> methods.  If you aren't able to see the difference, then you are probably
> using the wrong scale for your analysis.

Sorry if I've been vague.  I'm saying that you cannot measure the
processes.  You can only measure the results of the processes, the
states of the system.  For example, let's say we have a machine with
states A, B, and C and transition functions f, g, and h such that f(a in
A) = b in B, g(b in B) = c in C, and h(c in C) = a in A.  You cannot
measure f, g, and h, the functions, the methods.  You can only measure
the states a, b, and c.
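
To make that concrete, here is a minimal sketch in Python (the states
A, B, C and the functions f, g, h are from the example above; the
observer loop is my own illustration):

    # Hidden transition functions -- the "methods" an observer can
    # never measure directly.
    def f(state): return "B"   # f maps a state in A to a state in B
    def g(state): return "C"   # g maps a state in B to a state in C
    def h(state): return "A"   # h maps a state in C to a state in A

    transitions = {"A": f, "B": g, "C": h}

    def observe(start, steps):
        """All the observer ever gets: a trace of states, never f, g, or h."""
        state, trace = start, [start]
        for _ in range(steps):
            state = transitions[state](state)
            trace.append(state)
        return trace

    print(observe("A", 6))   # ['A', 'B', 'C', 'A', 'B', 'C', 'A']

Any number of different f's could have produced that same trace; the
trace alone never pins down the function.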

So, when you watch someone play basketball and you _measure_ one
player's approach to the net and attempt to put the ball through the
net, you're not observing the actions, the functions, the method(s),
you're observing the various states of the system (at a very high
sampling rate).

The point being that there are states through which the system travels
that you cannot observe, even at that high sampling rate.  And, most
importantly, you can NEVER directly observe the actions, functions,
methods whose results are the states you do observe.

You need a hypothetical construct like the duality between congruence
between states and bisimilarity between actions in order to conclude
that your observation of the states is good enough.

That's what I'm saying... [grin]  Of course, you may well ask why I'm
saying that... to which I'd reply: because the transferability of
methods from one particular to another particular depends fundamentally
on the bisimilarity of the two machines, not the sampling rate at which
the measurements are taken.

> When you say two different
> methods are capable of producing the same trace, then your scale is very
> coarse and limited ... you're only using 2 or 3 states of where the ball is.
>  Which is fine, if that's how you chose to analyze the action.  But given
> that scale, the *similarity* between the two people that produce the trace
> doesn't matter.  You're not even looking at the people.  (Inferred by the
> fact that you don't care to differentiate between the methods.)

I'm saying something much stronger.  Methods are transferable when the
source machine and the target machine are bisimilar, i.e. their
transition functions (actions, movements, processes, not state) are the
same.  And we cannot measure transition functions, no matter what the
scale or granularity or sampling frequency.
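
For what it's worth, here is a toy Python check of that claim (the two
machines, their state names, and the action labels are invented for
illustration; real bisimulation checkers use partition refinement
rather than this naive recursion):

    # Two "players" with different states but the same available actions.
    tall  = {("up", "shoot"): "release_high", ("release_high", "arc"): "score"}
    short = {("crouch", "shoot"): "release_low", ("release_low", "arc"): "score"}

    def bisimilar(m1, s1, m2, s2, depth=10):
        """Naive check: from s1 and s2 the same actions must be available,
        and matching actions must lead to pairwise bisimilar successors.
        Only actions are compared; the state names (the 'measurements')
        are allowed to differ."""
        if depth == 0:
            return True
        acts1 = {a for (s, a) in m1 if s == s1}
        acts2 = {a for (s, a) in m2 if s == s2}
        if acts1 != acts2:
            return False
        return all(bisimilar(m1, m1[(s1, a)], m2, m2[(s2, a)], depth - 1)
                   for a in acts1)

    # Same method, different states: the machines are bisimilar.
    print(bisimilar(tall, "up", short, "crouch"))   # True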

Now, to the degree we can fill in the blanks when we see the trace of
some other machine's actions, then we can reconstruct a congruent trace
with our own machine.  But that reconstruction is made more difficult if
the other machine is very much different from our own.

Sampling frequency (aka scale) does matter.  But it's orthogonal.  What
really matters is the ability to fill in the blanks.  And that is
governed by the similarity between the functions of the two machines.

I hope that's clearer.

> Is that how he defined innovation?  That the new model class is necessarily
> more expressive?  If so, I missed it.  I thought he just said innovation is
> jump to a *different *model class.

Well, there is an _implication_ (Nick's right that Crutchfield is
ambiguous) that the modeler changes model classes because the previous
one is somehow inadequate and the new one captures the referent better.
 That doesn't necessarily imply a superset of expressive power.  But
it's damn close.  Movement to a more expressive model class will always
be innovative by this measure.  But a kind of "lateral" movement where
the new language is equally expressive but a better natural fit to the
referent would probably also qualify.  Perhaps even a "downward"
movement to a class that is less powerful might be innovative in the
sense that _if_ you can use a less powerful (and hence less complicated)
class to adequately represent the referent, then that's a good thing.
But, really, if the less expressive class can represent the referent
efficiently, then so can the more expressive class.  All that would be
required is a different model within the same class.

So, I suspect Crutchfield would require at least "lateral" movement in
the model class hierarchy, if not "upward" movement.
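
One way to see the difference between movement within a class and
movement up the hierarchy is the textbook language a^n b^n, which no
finite-state machine recognizes but which a single counter handles.  A
hedged Python sketch (my example, not Crutchfield's):

    def finite_state_guess(s, max_n=3):
        """A finite machine can only memorize a^n b^n up to some fixed n;
        no model *within* the finite-state class removes that bound."""
        table = {"a" * n + "b" * n for n in range(1, max_n + 1)}
        return s in table

    def counter_machine(s):
        """One counter -- the extra expressive power of the next class
        up -- recognizes every a^n b^n."""
        count, seen_b = 0, False
        for ch in s:
            if ch == "a":
                if seen_b:
                    return False
                count += 1
            elif ch == "b":
                seen_b = True
                count -= 1
                if count < 0:
                    return False
            else:
                return False
        return seen_b and count == 0

    s = "a" * 50 + "b" * 50
    print(finite_state_guess(s), counter_machine(s))   # False True

Jumping from the first recognizer to the second is "upward" movement in
the model class hierarchy; merely tuning max_n is a different model
within the same class.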

>> By analogy, imagine a 2 dimensional real (pun intended) plane.  We
>> already know of all the functions like x^2, x^3, x+y, etc.  Then when I
>> take my pencil and draw some squiggly line across the plane, is my new
>> "function" really new?
>>
>
> Yes!  That's the beauty of it.  The elements are already defined, and the
> number and type of squiggly lines are limited by these elements.  But your
> line is (presumably) a new combination of these elements never seen before.
>  Just like Windows 7 is a new OS, combining 1's and 0's, and using logical
> NAND gates, in a new way.

I just have to flatly disagree.  I don't think arbitrary new squiggles
on a 2D real plane or Windows 7 are novel according to Crutchfield's
criteria in this paper.  The model class hasn't changed.  The squiggly
line is just like any other curve and Windows 7 is just like any other
of our present operating systems.  (Now, one might argue that Plan 9 or
the Hurd are novel, because -- according to my ignorant understanding --
they are fundamentally different from their brethren; but not Windows 7.
Ugh!  Why are we talking about a Microsoft product? ;-)
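
A quick numerical sketch of that point, assuming numpy and taking
polynomials as the already-known model class (both choices are mine,
purely for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-1, 1, 50)
    squiggle = rng.normal(size=50).cumsum()   # a "new" hand-drawn-ish curve

    # Fit the squiggle inside the class we already had: polynomials.
    coeffs = np.polyfit(x, squiggle, deg=9)
    approx = np.polyval(coeffs, x)

    print(float(np.max(np.abs(squiggle - approx))))  # residual shrinks with deg

However novel the squiggle looks, it is captured to any tolerance you
like by raising the degree -- a different model, but the same model
class.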

--
glen e. p. ropella, 971-222-9095, http://tempusdictum.com


--
glen e. p. ropella, 971-222-9095, http://agent-based-modeling.com


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
Reply | Threaded
Open this post in threaded view
|

Re: Crutchfield 's "Is anything ever new?"

Nick Thompson
In reply to this post by Nick Thompson
Owen, and all,
 
This is in HTML.

I have had some experience with meetings on camera and have found them strained and unproductive.  What has seemed to work a little bit here is for distant people who have read or who are reading the same book to comment on their reading electronically as we go along.  But it ONLY works if people stick close to the text.  If people just use the text as a place to tee off from, very soon the author becomes irrelevant and commentators talk past one another.
 
If focus on authors' meanings can be maintained, it might be possible to create some convergent writing task which could be worked at by locals and distant people equally ... a joint book review for JASSS or a white paper on "intrinsic computation".  This would provide a crucial element that our Emergence Seminar now lacks to make it fully equivalent to the best graduate seminars in major universities: a WRITTEN product of the work of the seminar.

Just to get the feel of what that would be like, I have been working on my own summary of and commentary on Crutchfield.  I attach the summary and also copy it in below.  Note that I have made every effort to represent the original intent of the author as faithfully as I could.  Commentary to follow.  It would be interesting to see if others might contribute to such a document. 
 
Oh, and Owen, as to your previous comment.  It is no defense of the ambiguity of a document to claim that the document will be clear only to readers of three other documents  written by the same author ... well, unless the paper's introduction makes clear that the author intends the paper only for such a narrow audience. 
 
 
Nick
 
TEXT OF CRUTCHFIELD SUMMARY FOLLOWS:
 

Crutchfield, James P. (2008). Is Anything Ever New? Considering Emergence. In Bedau, M. and Humphreys, P. (eds.), Emergence: Contemporary Readings in Philosophy and Science. Cambridge, MA: MIT Press.

 

Accepting the notion that emergence is the coming-into-being of something new, Crutchfield interprets novelty in computational terms.  His desire to make such a re-interpretation is justified by the observer-dependency of the criteria commonly used to support the assertion that some classically “emergent” phenomena, such as the BZ reaction and Bénard cycles, are “new.”  In these cases, the newness is defined by the theorist’s failure to anticipate the outcome.  To escape the arbitrariness of defining emergence in terms of the weak theories of its describers, Crutchfield suggests that properties should only be regarded as new if they are “intrinsic”: i.e., new from the point of view of the system of which they are part, and new in ways that increase the functionality of that system.  For example, he writes

 

Competitive agents in an efficient capital market control their individual production-investment and stock-ownership strategies based on the optimal pricing that has emerged from their collective behavior.  (p 271)

 

and

 

What is distinctive about intrinsic emergence is that the patterns formed confer additional functionality which supports global information processing. (p. 272). 

 

In intrinsic emergence, the system itself, or a subsystem within it, forms a model of the system, and it is by reference to changes in this “internal” model that the system is judged new.  Such internal models are prone to the same tradeoff between verisimilitude and completeness that afflicts any external scientific model.  The best compromise in this tradeoff can, according to Crutchfield, be taken as the best description of the actual structure of the system.

 

            But in what terms do we evaluate this outcome?  One solution is to employ “ideas from the theory of discrete computation,” since all a scientist can ever know is his data stream, and since analyzing structure in streams of data is what computation theory understands best.  Computational theory answers these sorts of questions in terms of the classes of machines it can recognize in the data stream.

 

…the architecture of the machines themselves represents the organization of the information processing, that is, the intrinsic computation.  (p 276)

 

 He thus provides the following definition of emergence:   

 

A process undergoes emergence if at some time the architecture of information processing has changed in such a way that a distinct and more powerful level of intrinsic computation has appeared that was not present in earlier conditions. (p. 279)

 

 

 

The most promising area for the application of these ideas is in resolving the “contemporary debate on the dominant mechanisms operating in biological evolution” (p. 279).  None of the protagonists in the argument among biological Selectionist, Historicist, and Structuralist approaches to evolution has an adequate theory of biological structure.  Crutchfield proposes a computational mechanics to explain evolutionary changes in structure, in which innovation occurs via hierarchical machine reconstruction.
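
For readers who want the flavor of machine reconstruction, here is a
very loose Python sketch (my own construction, not Crutchfield's actual
algorithm): group the length-k histories of a data stream by the
distribution of symbols that follows them, and let the groups play the
role of inferred states.

    from collections import defaultdict

    def reconstruct_states(stream, k=2):
        # Count, for each length-k history, which symbol follows it.
        futures = defaultdict(lambda: defaultdict(int))
        for i in range(len(stream) - k):
            futures[stream[i:i + k]][stream[i + k]] += 1
        # Histories with the same (normalized) future distribution are
        # merged into one inferred state.
        states = defaultdict(list)
        for history, counts in futures.items():
            total = sum(counts.values())
            signature = tuple(sorted((sym, round(n / total, 2))
                                     for sym, n in counts.items()))
            states[signature].append(history)
        return dict(states)

    # A period-2 process collapses into two inferred states.
    print(reconstruct_states("01" * 50, k=2))

On Crutchfield's picture, innovation would correspond to the point
where reconstruction within the current machine class fails and a jump
to a richer class is forced.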

 

His conclusion is that

 

With careful attention to the location of the observer and the system-under-study, with detailed accounting of intrinsic computation, quantitative measures of complexity, we can analyze the patterns, structures, and novel information processing architectures that emerge in nonlinear processes.  In this way, we demonstrate that something new has appeared.  [p 284]  



Nicholas S. Thompson
Emeritus Professor of Psychology and Ethology,
Clark University ([hidden email])
http://home.earthlink.net/~nickthompson/naturaldesigns/
http://www.cusf.org [City University of Santa Fe]




> [Original Message]
> From: Owen Densmore <[hidden email]>
> To: The Friday Morning Applied Complexity Coffee Group <[hidden email]>
> Date: 11/3/2009 10:06:28 AM
> Subject: Re: [FRIAM] Crutchfield 's "Is anything ever new?"
>
> Does the enthusiastic response to Nick's seminar suggest having it 
> held on-line for those unable to show up at the Santa Fe Library?
>
> I'm sure it could easily be done with skype or some similar technology.
>
> I ask because we are exploring ways to address "higher education" in 
> Santa Fe.  Santa Fe is pretty rural, so does not have a university to 
> call its own.  It *does* have several schools, profs, PhDs, think-
> tanks and so on, but not organized yet into access to higher 
> education.  (i.e. upper undergraduate through graduate studies).
>
> Nick is doing good work in this area .. he can tell you more if you 
> ask him.
>
>      -- Owen
>
>
>
> ============================================================
> FRIAM Applied Complexity Group listserv
> Meets Fridays 9a-11:30 at cafe at St. John's College
> lectures, archives, unsubscribe, maps at http://www.friam.org

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org

Crutchfield.doc (38K) Download Attachment