Phil,
My lifetime passion concerns the world's love affair with circular explanation. An explanation is circular if the terms in which a thing is identified are the same terms as those by which it is explained. The complex-random distinction is one of those, isn't it? I.e., we cannot know whether a pattern is random or complex until we know exactly how that pattern was made.

I am hoping I am wrong about this.

Nick

> [Original Message]
> From: <friam-request at redfish.com>
> To: <friam at redfish.com>
> Date: 8/25/2006 12:00:37 PM
> Subject: Friam Digest, Vol 38, Issue 53
>
> Today's Topics:
>
>   1. lecture Wed August 30 - David Stout: 100 Monkey Garden - Interactive Ecosystem (Stephen Guerin)
>   2. Is disorder harder to describe than order? (Phil Henshaw)
>   3. Re: Is disorder harder to describe than order? (Jochen Fromm)
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Thu, 24 Aug 2006 17:15:58 -0600
> From: "Stephen Guerin" <stephen.guerin at redfish.com>
> Subject: [FRIAM] lecture Wed August 30 - David Stout: 100 Monkey Garden - Interactive Ecosystem
> To: <friam at redfish.com>, <discuss at nmvis.org>
> Cc: io at csf.edu
>
> *** note that this lecture will be hosted at College of Santa Fe ***
>
> SPEAKER:
> David Stout, Cory Metcalf and Luke DuBois
> College of Santa Fe
>
> TITLE:
> 100 Monkey Garden - Interactive Ecosystem
>
> LOCATION:
> MOV-iN Gallery
> College of Santa Fe
> 1600 St. Michaels Dr.
> Located at Moving Image Arts Department (same building as THE SCREEN)
> map: http://mov-in.org/aboutus.php
>
> TIME: Wed, August 30, 12:30p
>
> Lunch will be available for purchase.
>
> ABSTRACT:
> Video/sound artist and Moving Image Arts professor David Stout will give a personal tour of this highly immersive interactive ecosystem and digital art space. Using multiple projectors and computer monitors, as well as an array of sound and motion sensing devices, David enables the 100 Monkey spectator to witness a digitally imagined world creating and recreating itself, its rules and dimensions. Accompanying art pieces fill the space and complement this particular style of digital art.
>
> Check out past incarnations of this piece online:
> http://nfold.csf.edu/Pages/100MonkeyGarden.htm
>
> Other works by David Stout:
> http://nfold.csf.edu/
>
> Santa Fe's THE Magazine review of this installation:
> http://mov-in.org/Dstout100MonkeyMOV_iN_CDE05-1.pdf
>
> BIOS:
> David Stout is an interactive video-sound artist and one of the world's leading laptop performers exploring real-time cross-synthesis of sound and image. He is the recipient of the Harvestworks Interactive Technology Award and the Sun Microsystems Award for Academic Excellence (2004), and a nominee for both the WTN World Technology Award (2003) and the International Media Art Prize (2004).
> His work in interactive media includes electro-acoustic scores for stage and screen, live cinema, video-dance, database narrative, noise performance, and telematic video events that emphasize multi-screen projection as an extension of performer, audience and environment. David currently lives and works in Santa Fe, New Mexico.
>
> Cory Metcalf is a moving image and sound artist who lives in Santa Fe, NM. His work explores the intersection of human performance, real-time media systems and responsive installation environments. His interests range from the field of bio-mimicry to the practices of aerial theater, extended vocal techniques and instrumental noise-music performance. As a seminal member of the interactive performance group i2O, Metcalf developed dynamic diffusion sound designs for live acoustics and video performance instruments. Metcalf's interest in physical computing is evidenced in works such as Sensor Swarm, a hybrid interactive performance-installation that employs sensing technology to blur the distinction between the audience and the performance, foregrounding the normally unconscious influence that humans impose on their environment. Currently Cory is working with real-time 3D simulation and complex data-feedback programs to model synthetic ecologies based on genetic and behavioral processes found in living systems.
>
> R. Luke DuBois is a composer, programmer, and video artist living in New York City. He holds a doctorate in music composition from Columbia University, and teaches interactive sound and video performance at Columbia's Computer Music Center and at the Interactive Telecommunications Program at New York University. He has collaborated on interactive performance, installation and music production work with many artists, most recently Toni Dove, Todd Reynolds, Michael Joaquin Grey, Elliott Sharp, and Michael Gordon, and was a staff programming consultant for Engine27 for the 2003 season. He is a co-author of Jitter, a software suite developed by Cycling '74 for real-time manipulation of matrix data. His music (with or without his band, the Freight Elevator Quartet) is available on Caipirinha/Sire, Cycling '74, and Cantaloupe Music, and his artwork is represented by Bitforms Gallery in New York City.
>
> ------------------------------
>
> Message: 2
> Date: Thu, 24 Aug 2006 23:13:54 -0400
> From: "Phil Henshaw" <sy at synapse9.com>
> Subject: [FRIAM] Is disorder harder to describe than order?
> To: "'The Friday Morning Applied Complexity Coffee Group'" <friam at redfish.com>
>
> I was reading Yaneer Bar-Yam's construction of systems theory from Shannon's information theory and couldn't help noticing that I disagree that disorder is harder to describe. Yes, it's useful to have a theory that helps you design efficient use of bandwidth, but maybe that doesn't have to do with the real difference between order and disorder.
>
> A random distribution of data looks to me like a very complicated question with a very simple answer, and a patterned distribution a somewhat simpler question with an impossible answer (at least any way we've agreed to describe natural systems so far). The material evidence is that science has made great progress with the former, the phenomena of the world based on random processes, in that they can be reliably described.
> Could it be that there's a flaw in Shannon, or was he maybe talking about data (questions) rather than information (answers)?
>
> Phil Henshaw
> 680 Ft. Washington Ave
> NY NY 10040
> tel: 212-795-4844
> e-mail: pfh at synapse9.com
> explorations: www.synapse9.com
>
> ------------------------------
>
> Message: 3
> Date: Fri, 25 Aug 2006 10:36:11 +0200
> From: "Jochen Fromm" <fromm at vs.uni-kassel.de>
> Subject: Re: [FRIAM] Is disorder harder to describe than order?
> To: "'The Friday Morning Applied Complexity Coffee Group'" <friam at redfish.com>
>
> Something is hard to describe if it is complex. Neither pure disorder nor pure order in the form of simple regularities is very hard to describe. Complexity is characterized by order in disorder or order in chaos: regularity in irregularity, predictability in unpredictability, and unity in diversity.
>
> Murray Gell-Mann argues that the effective complexity of both completely regular and completely random systems is very low, because you cannot find many regularities in the system which can be expressed by a suitable schema, description or rule (see the end of chapter 5 in his book "The Quark and the Jaguar").
>
> -J.
>
> End of Friam Digest, Vol 38, Issue 53
Well, what's the phrase? "All models are wrong" I think it is. Not
all models are useless, of course. I guess the models I was talking about were the ones that compared all knowledge to a two-dimensional image of either random dots or a pattern in which you could see the shape of a person. The latter is clearly simpler because you can ignore some of the dots and still get the picture. What's damnably hard to describe in it, though, is what order it is that makes the image. That's what I'd call the 'information' you're looking for. For that you either need a lot of 'prior knowledge', or to examine the full history of life on earth, among other things.

It's that difference between data and information that I often find skipped over. It's not cut and dried, and it seems odd that 'information theory' always describes it as cut and dried. Then when data-compacting algorithms, which are a great boon but nothing more, get used as causal explanations for complex organization in nature, I think the distinction between our tools and our subjects is getting lost (a toy version of the dot-picture comparison is sketched below).

The clue to me is that they compare translating between different languages to lengths of dots. To translate good prose from English to Japanese you have to teach Japanese how to speak English, because the concepts are different. It's a real art. I think information theory assumes all concepts are the same, and that's probably inaccurate, even if data density is a fascinating and very important concept.

I'm not sure I see the circular relation you describe, though. There are things left out, as you suggest, like not knowing what to call a pattern without knowing what the pattern is supposed to tell you (i.e. providing no analysis method whatever but snap judgment). I think that's what's finessed with using a picture of a person. Lots of images pop up without a question. The image itself doesn't tell you much, actually. Maybe that's what you mean, that the sweeping generalities rely on your automatic judgments of the image before you ask where any judgments would come from?

Jochen is suggesting that it's really the mixture of order and disorder that's hard to describe. Of course I don't disagree; mixing things makes it quite difficult, toward impossible, to separate noise from your signal, for example. I think the real issue, though, is how very hard it is to describe the signal itself. That's what most of our information sources seem encumbered with: sketchy data reflecting complex, highly organized things, mixed with worse data on other things that conveys nothing recognizable at all. We call it 'random' but that's not where it comes from, just all we can understand of what it means. The actual problem is how to describe the organized stuff.

I don't think the complexly organized things are often recurrent patterns in a pervasive disorder, but usually independent and cohesive real things, pushed into the background behind the appearance of disorder because the disorder is distracting. Maybe the reason complex order is tantalizing is that there is an answer somewhere. Some will want to treat the mixed data we get as purely a mathematical analysis puzzle, but I think of it as a thing-out-there puzzle, which the analysis can be quite useful for.
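Here's the toy version of the dot-picture comparison: nothing from Shannon or Bar-Yam, just Python's zlib with compressed length standing in (crudely) for description length, a made-up field of random dots, and a plain disc standing in for the picture of a person.

import random
import zlib

random.seed(0)
N = 100  # a 100 x 100 grid of dots, 1 = dot present

# "Random dots": every cell on or off independently.
random_dots = bytes(random.choice((0, 1)) for _ in range(N * N))

# "Picture" stand-in: a filled disc, so most of the dots are redundant.
figure_dots = bytes(
    1 if (x - 50) ** 2 + (y - 35) ** 2 < 400 else 0
    for y in range(N) for x in range(N)
)

for name, dots in (("random dots", random_dots), ("figure", figure_dots)):
    print(name, "-> compressed size:", len(zlib.compress(dots, 9)), "bytes")

# The random field costs several times more bytes to reproduce exactly than
# the figure does, so by this yardstick the noise is the "richer" picture.
# That's the oddity: the yardstick prices the cost of repeating the dots
# (the question), not what the picture is of (the answer).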
Phil Henshaw
680 Ft. Washington Ave
NY NY 10040
tel: 212-795-4844
e-mail: pfh at synapse9.com
explorations: www.synapse9.com
Phil, et al -
> It's that difference between data and information that I often find skipped over. It's not cut and dried, and it seems odd that 'information theory' always describes it as cut and dried. Then when data-compacting algorithms, which are a great boon but nothing more, get used as causal explanations for complex organization in nature, I think the distinction between our tools and our subjects is getting lost.

When I first started seriously contemplating things like information theory, I had the (dis?)advantage of not being schooled in it directly but instead having a lot of the necessary tools to contemplate it, to try to reinvent some of the ideas others had already put out for us. Specifically, I had a grounding in statistical physics and Markov models, and a grounding in computer logic and programming, but not specifically in information theory. That's what degrees in math and physics and a strong interest in computers got you back in the 70's, I guess.

What that led me to contemplate a *lot* was "relative entropy"... or the simple notion that the amount of effective entropy in a string of bits (or an ensemble of physical states) is highly dependent on your knowledge of the string of bits (physical system). If you assume a string of bits has some order in it, and especially if you have external knowledge (a model of that order) to predict with, then the "entropy" is effectively less. This would be why, for example, jpeg's simple cosine model for "predicting" bits in a string (along a line of an image) works so well on certain types of images (3D objects with shaded surfaces...) where run-length encoding did not.
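A toy version of what I mean, in Python (none of it is jpeg or any real codec; a noisy cosine stands in for a shaded surface, a previous-sample predictor stands in for the model, and the empirical symbol entropy stands in for "how hard to describe"):

import math
import random
from collections import Counter

def empirical_entropy(symbols):
    """Shannon entropy (bits per symbol) of the observed distribution."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

random.seed(1)
# made-up "scan line": a slow cosine plus a little sensor noise, quantized
samples = [round(100 * math.cos(t / 40) + random.gauss(0, 1)) for t in range(4000)]

# the same data, after a trivial model: predict each sample from the last one
residuals = [b - a for a, b in zip(samples, samples[1:])]

print("raw samples:      %.2f bits/sample" % empirical_entropy(samples))
print("after predictor:  %.2f bits/sample" % empirical_entropy(residuals))

# The residuals bunch up near zero, so their entropy comes out well below
# the raw figure: how much "disorder" the data has depends on what you
# already know (or assume) about how it was put together.

Swap the one-step predictor for a cosine model and you get the flavor of the jpeg case.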
> The clue to me is that they compare translating between different languages to lengths of dots. To translate good prose from English to Japanese you have to teach Japanese how to speak English, because the concepts are different. It's a real art. I think information theory assumes all concepts are the same, and that's probably inaccurate, even if data density is a fascinating and very important concept.

I've also thought a bit about this in terms of basis spaces or basis vectors. One might suggest that each natural language (say English or Japanese) represents a basis space for meaning, ideas, thoughts, creative expression, etc. And any meaningful utterance (a single word, a sentence, a dialog, a book, an encyclopedia, a library) in that language is a vector (I suppose non-meaningful ones are too, whatever the Jabberwocky in the Slithy Toves that means!). And it is not clear if these two basis spaces truly "cover" the same territory.

My variation on your observation is to note that to learn a language fully requires learning the culture of the language fully, which most (all by definition?) members of a given culture never even achieve. We revere the OED because it gives us first-known uses of words and their context and so forth... most of us are amazed half of the time when we look up a word, to discover its (apparent) origins and/or multiply nuanced uses, etc.

> I'm not sure I see the circular relation you describe, though. There are things left out, as you suggest, like not knowing what to call a pattern without knowing what the pattern is supposed to tell you (i.e. providing no analysis method whatever but snap judgment). I think that's what's finessed with using a picture of a person. Lots of images pop up without a question. The image itself doesn't tell you much, actually. Maybe that's what you mean, that the sweeping generalities rely on your automatic judgments of the image before you ask where any judgments would come from?

I don't imagine that we only have one or two levels of pattern matching going on... I think we have many levels, and not all of them coplanar or parallel. I've had experiences where at a glance I saw a "thing" which caused me to think I had seen some other "thing", and on careful review (looking more carefully at "thing one" and considering all of the ways I might have extracted the image or idea of "thing two" from it) I could see lots of levels of patterning. A series of "dots" can suggest a line... several of these "lines" can suggest an arc or an edge or a boundary, and each of these can suggest some negative or positive space, which can suggest an area or an object, which can suggest higher orders of objects like an animal or a vehicle or a person, which can suggest a relationship or a scene (fight or flight!) or ...

> Jochen is suggesting that it's really the mixture of order and disorder that's hard to describe. Of course I don't disagree; mixing things makes it quite difficult, toward impossible, to separate noise from your signal, for example.

One man's noise is another man's signal!?

> I don't think the complexly organized things are often recurrent patterns in a pervasive disorder, but usually independent and cohesive real things, pushed into the background behind the appearance of disorder because the disorder is distracting. Maybe the reason complex order is tantalizing is that there is an answer somewhere. Some will want to treat the mixed data we get as purely a mathematical analysis puzzle, but I think of it as a thing-out-there puzzle, which the analysis can be quite useful for.

I think "noise" is a bogus (or at least relative) concept. Noise is what we wish to ignore for a given purpose. At LANL, for example, we model lightning very thoroughly because we want to remove it: lightning constitutes virtually all of the "background noise" when you are listening for the EMP from a nuclear explosion... This "noise" is hugely signal to meteorologists...

- Steve
Steve
> Phil, et al -
>
> > It's that difference between data and information that I often find skipped over. It's not cut and dried, and it seems odd that 'information theory' always describes it as cut and dried. Then when data-compacting algorithms, which are a great boon but nothing more, get used as causal explanations for complex organization in nature, I think the distinction between our tools and our subjects is getting lost.
>
> When I first started seriously contemplating things like information theory, I had the (dis?)advantage of not being schooled in it directly but instead having a lot of the necessary tools to contemplate it, to try to reinvent some of the ideas others had already put out for us. Specifically, I had a grounding in statistical physics and Markov models, and a grounding in computer logic and programming, but not specifically in information theory. That's what degrees in math and physics and a strong interest in computers got you back in the 70's, I guess.
>
> What that led me to contemplate a *lot* was "relative entropy"... or the simple notion that the amount of effective entropy in a string of bits (or an ensemble of physical states) is highly dependent on your knowledge of the string of bits (physical system). If you assume a string of bits has some order in it, and especially if you have external knowledge (a model of that order) to predict with, then the "entropy" is effectively less. This would be why, for example, jpeg's simple cosine model for "predicting" bits in a string (along a line of an image) works so well on certain types of images (3D objects with shaded surfaces...) where run-length encoding did not.

Great! A cosine model is a good, useful, incorrect universal model of shape in data. There may be better ones, based on the same implied principle that in regions of continuity a small number of points gives you a simple rule for all the points in between. The one I use is even more naturalistic and easier to calculate: the rule that the 2nd or 3rd derivative at a point, or something, is the same approached from either direction (a toy version is sketched below)... If anyone knows anyone, I'd like to talk to people interested in generalizing this and the related issues. My math isn't really strong enough.

Still, isn't the basic question when to make the jump from recognizing patterns in the data to recognizing things in the world? Huge steps have been made in pattern screening, fingerprints & text searches and other complicated things. Isn't the 'holy grail' to do the same for complex systems? What would you look for? Continuities and breaks, periods when shapes have higher derivatives all of the same sign, etc. 'Relative entropy' sounds a little like the concept of 'random with respect to' the local pattern discontinuities in organizational hierarchies. The behavior of materials is exactly the larger scales of the behavior of their molecules. The question is whether the behavior of the whole arises from individually orderly molecules behaving 'randomly with respect to' the whole.
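The toy version promised above, with a made-up quadratic for the data and plain finite differences, nothing more:

def extrapolate_next(y):
    """Guess the next point by holding the local 2nd difference constant."""
    d1 = y[-1] - y[-2]                       # local first difference
    d2 = (y[-1] - y[-2]) - (y[-2] - y[-3])   # local second difference
    return y[-1] + d1 + d2

# a made-up smooth "curve of dots", sampled at whole numbers
curve = [0.5 * t ** 2 - 3.0 * t + 7.0 for t in range(10)]

known, actual = curve[:-1], curve[-1]
print("read-ahead guess:", extrapolate_next(known), "  actual:", actual)

# Wherever the data is locally smooth the guess lands on top of the real
# point; where it suddenly doesn't, you've probably hit a genuine break in
# the thing being measured rather than noise, which is the continuities-
# and-breaks reading described above.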
> > The clue to me is that they compare translating between different languages to lengths of dots. To translate good prose from English to Japanese you have to teach Japanese how to speak English, because the concepts are different. It's a real art. I think information theory assumes all concepts are the same, and that's probably inaccurate, even if data density is a fascinating and very important concept.
>
> I've also thought a bit about this in terms of basis spaces or basis vectors. One might suggest that each natural language (say English or Japanese) represents a basis space for meaning, ideas, thoughts, creative expression, etc. And any meaningful utterance (a single word, a sentence, a dialog, a book, an encyclopedia, a library) in that language is a vector (I suppose non-meaningful ones are too, whatever the Jabberwocky in the Slithy Toves that means!). And it is not clear if these two basis spaces truly "cover" the same territory.
>
> My variation on your observation is to note that to learn a language fully requires learning the culture of the language fully, which most (all by definition?) members of a given culture never even achieve. We revere the OED because it gives us first-known uses of words and their context and so forth... most of us are amazed half of the time when we look up a word, to discover its (apparent) origins and/or multiply nuanced uses, etc.

Yes, the same idea. Maybe the most useful word for it is 'nuance', those faint and powerful paths of association. Not much nuance to data! (unless you read between the lines, of course)

> > I'm not sure I see the circular relation you describe, though. There are things left out, as you suggest, like not knowing what to call a pattern without knowing what the pattern is supposed to tell you (i.e. providing no analysis method whatever but snap judgment). I think that's what's finessed with using a picture of a person. Lots of images pop up without a question. The image itself doesn't tell you much, actually. Maybe that's what you mean, that the sweeping generalities rely on your automatic judgments of the image before you ask where any judgments would come from?
>
> I don't imagine that we only have one or two levels of pattern matching going on... I think we have many levels, and not all of them coplanar or parallel. I've had experiences where at a glance I saw a "thing" which caused me to think I had seen some other "thing", and on careful review (looking more carefully at "thing one" and considering all of the ways I might have extracted the image or idea of "thing two" from it) I could see lots of levels of patterning. A series of "dots" can suggest a line... several of these "lines" can suggest an arc or an edge or a boundary, and each of these can suggest some negative or positive space, which can suggest an area or an object, which can suggest higher orders of objects like an animal or a vehicle or a person, which can suggest a relationship or a scene (fight or flight!) or ...

Sorting out the 'powers of suggestion' in any data is definitely not easy. The closest information theory would seem to come is with the algorithms (which I have no real understanding of, but can see how well they work) for making up rules of association between patterns and then skimming matches from huge sets of alternates. The one thing in the natural world that seems to do something similar, by a different means perhaps, is human thought. I don't think thought is either digital or analog, but the outside appearance is that people have a similar amazing facility at word puzzles as Google has on the web, and neither have a proportionate grasp on other kinds of meaning. Isn't there something similar in the disproportionate performance levels on very similar tasks?

There are lots of interpretation tasks neither man nor machine seems likely to ever master, but another one that might be mastered by either, by different means perhaps, is reading curves from dots. It's one of those natural navigation tasks, to read as far ahead on the curves as possible to minimize the steering necessary. It's the core problem for 'homing systems', I think, which the world produces in abundance and variety. Basic thermostats only respond to the set point crossings, above or below, but could reasonably be engineered to respond to the system's implied thermal mass (past responsiveness) and the rate of approach of the set point (implied energy flux), just reading the dynamics of the curve.
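A bare-bones sketch of the thermostat point (every number is invented, and the one-step slope is the crudest possible reading of the curve; a real controller would estimate the thermal lag from more history):

SETPOINT = 21.0    # degrees C
LOOKAHEAD = 5.0    # minutes to project ahead; a stand-in for thermal lag

def heater_should_switch_off(readings, minutes_apart=1.0):
    """Switch off when the projected temperature, not the current one,
    reaches the set point."""
    if len(readings) < 2:
        return readings[-1] >= SETPOINT
    slope = (readings[-1] - readings[-2]) / minutes_apart  # rate of approach
    return readings[-1] + slope * LOOKAHEAD >= SETPOINT

room = [18.0, 18.6, 19.1, 19.5, 19.9]   # still below 21, but climbing fast
print(heater_should_switch_off(room))    # True: it acts on the curve, not the crossing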
> > Jochen is suggesting that it's really the mixture of order and disorder that's hard to describe. Of course I don't disagree; mixing things makes it quite difficult, toward impossible, to separate noise from your signal, for example.
>
> One man's noise is another man's signal!?

Well, sure. A man looks at a basket of apples as something to buy and his son looks at it as something to eat! That discrepancy might not depend on choosing which source noise to ignore, or it might. They might both be overlooking the dirt and worm holes the mom would notice right off, giving her a much bigger picture, and leading her to quickly scurry the two boys away from that stall in the market!

> > I don't think the complexly organized things are often recurrent patterns in a pervasive disorder, but usually independent and cohesive real things, pushed into the background behind the appearance of disorder because the disorder is distracting. Maybe the reason complex order is tantalizing is that there is an answer somewhere. Some will want to treat the mixed data we get as purely a mathematical analysis puzzle, but I think of it as a thing-out-there puzzle, which the analysis can be quite useful for.
>
> I think "noise" is a bogus (or at least relative) concept. Noise is what we wish to ignore for a given purpose. At LANL, for example, we model lightning very thoroughly because we want to remove it: lightning constitutes virtually all of the "background noise" when you are listening for the EMP from a nuclear explosion... This "noise" is hugely signal to meteorologists...

As you were mentioning before, there are many kinds and layers of signal and noise. That's maybe my main objection to what I was taught in data analysis, essentially to treat data as if all the pattern you didn't understand was made by the same universal noise generator. It ain't so. In studying changes in fossil shape over time there's a basic choice at the beginning. Is the irregularity in the data produced by sampling changes clustered around a smooth curve that has multiple scales of long- and short-term fluctuation, or is it produced by a single random jumping machine that takes off from each point to land precisely at your next point? The two starting assumptions look almost the same, even to careful analysis sometimes. I say, try 'em all, see if anything works.
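Here's the choice in toy form, with fabricated data and a deliberately crude test (just the correlation between successive jumps, nothing fancier):

import math
import random

random.seed(2)
T = 2000

# Assumption A: a smooth underlying curve plus independent sampling noise.
smooth_plus_noise = [math.sin(t / 150) + random.gauss(0, 0.3) for t in range(T)]

# Assumption B: each point jumps at random from the previous one.
walk = [0.0]
for _ in range(T - 1):
    walk.append(walk[-1] + random.gauss(0, 0.3))

def diffs(xs):
    return [b - a for a, b in zip(xs, xs[1:])]

def lag1_autocorr(xs):
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs)
    return sum((a - mean) * (b - mean) for a, b in zip(xs, xs[1:])) / var

for name, series in (("smooth + noise", smooth_plus_noise), ("random walk", walk)):
    print(name, "lag-1 correlation of the jumps:",
          round(lag1_autocorr(diffs(series)), 2))

# Noise scattered around a smooth curve makes successive jumps anti-correlated
# (about -0.5 here); a genuine random walk's jumps are uncorrelated (about 0).
# Crude, but it's one way of trying them all.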
When you first find one that works it may give you a major component you can subtract out to more clearly see the others!

Phil