phil henshaw sy at synapse9.com Tue Dec 11 13:34:36 EST 2007:

> What I actually find to be the rewarding part of it, though breaking
> the theoretical boundaries to allow the indefinable things outside
> our models into the discussion is quite necessary, is to then develop
> confidence with exploring them.
>
> [...]
>
> Human designs have long tended to be abstractions 'in a box', like
> equations and machines, which have no capability themselves of
> exploring or adapting to their environments. _If_ people are paying
> attention, models evolve by people making new ones. When you look to
> see why that is, you find it is achieved by building the box and
> essentially defining away whatever is outside the structures of the
> models. It introduces bias. Learning to do the opposite, exploring
> the complex world around our models and asking other questions,
> needs the aid of methods though.
>
> [...]
>
> I also, for measuring total environmental impacts, use the tried and
> true way to look outside any box... "follow the money". Some people
> are even responding to how very effective it is as a measure!
>
> gtg
>
> Do you have a theory or method for visualizing or exploring the stuff
> outside the box?

To some extent, yes. A complete method requires two sub-methods: construction and evaluation. My construction method is still just mine, and I haven't made any (public) attempts to formalize or communicate it. And one can make a reasonable argument (based on critical rationalism) that construction should NOT be rigorous or formalized. Indeed, we accept any model, regardless of its source or structure.

But our evaluation method, which is more formalized, imposes 4 basic requirements for modeling any given "functional unit":

R1: co-simulation of 3 models:
    M1: a synthetic model,
    M2: a reference/pedigreed model, and
    M3: a data model
R2: inter-aspect influences are discrete
R3: models are designed to be transpersonal
R4: well-defined similarity measure(s)

In the most degenerate case, we end up exploring the behaviors _between_ the three models. Each model type has its flaws (the shape of the "box", what's left out, bugs or artifacts within, etc.); but requiring that they be fundamentally different types of models _forces_ the modeler to explore the stuff outside each of the three boxes.

I can say a lot more; but I'll stop there and await criticism.

--
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
I have the heart of a child. I keep it in a jar on my shelf. -- Robert Bloch
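[Editor's note: a minimal sketch of what R1 and R4 might look like in code. The toy logistic models, the made-up data points, and RMSE as the similarity measure are all illustrative assumptions, not the formalized method from the papers Glen mentions.]

```python
# Sketch of R1 + R4: co-simulate three fundamentally different models of
# one "functional unit" and score them against a well-defined similarity
# measure. All model forms and numbers here are illustrative assumptions.
import math

def m1_synthetic(t):
    # M1: a synthetic (mechanistic, stepwise) model -- toy discrete logistic
    x = 0.1
    for _ in range(t):
        x += 0.5 * x * (1.0 - x)
    return x

def m2_reference(t):
    # M2: a reference/pedigreed model -- closed-form logistic curve
    return 1.0 / (1.0 + 9.0 * math.exp(-0.45 * t))

# M3: a data model -- made-up observations of the same aspect
M3_DATA = {0: 0.12, 5: 0.45, 10: 0.83, 15: 0.96, 20: 0.99}

def rmse(pairs):
    # R4: a well-defined (here: root-mean-square) similarity measure
    return math.sqrt(sum((a - b) ** 2 for a, b in pairs) / len(pairs))

times = sorted(M3_DATA)
print("M1 vs M3:", rmse([(m1_synthetic(t), M3_DATA[t]) for t in times]))
print("M2 vs M3:", rmse([(m2_reference(t), M3_DATA[t]) for t in times]))
print("M1 vs M2:", rmse([(m1_synthetic(t), m2_reference(t)) for t in times]))
```

The disagreements between the three scores are exactly the "behaviors between the models" that the degenerate case explores.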
Glen E. P. Ropella wrote:
> R1: co-simulation of 3 models:
>     M1: a synthetic model,
>     M2: a reference/pedigreed model, and
>     M3: a data model
> Each model type has its flaws (the shape of the "box", what's left
> out, bugs or artifacts within, etc.); but, requiring that they are
> fundamentally different types of models _forces_ the modeler to
> explore the stuff outside each of the three boxes.

It's stretching it to say this is triangulation, and it is stretching it to say they are fundamentally different. That being said, it's hard to see any alternative.
Marcus G. Daniels on 12/11/2007 05:26 PM:
> It's stretching it to say this is triangulation

That's why I used quotes around the word. And that's why I said it's not really triangulation. Why would you reiterate my point? Perhaps to build rhetorical momentum for your next assertion, which is false. [grin]

> and it is stretching it to say they are fundamentally different.

No. It's not stretching to say the three models are fundamentally different. They _could_ be similar if one so chose. But, in GENERAL, they will be fundamentally different if we don't put any constraints on the source or construction of the models.

BTW, this evaluation method is outlined in our papers if anyone wants any more detail.

--
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
Fear is the main source of superstition, and one of the main sources of cruelty. To conquer fear is the beginning of wisdom. -- Bertrand Russell
Glen E. P. Ropella wrote:
> No. It's not stretching to say the three models are fundamentally
> different. They _could_ be similar if one so chose. But, in GENERAL,
> they will be fundamentally different if we don't put any constraints
> on the source or construction of the models.

Sure, if you manage to invent two entirely new ways of looking at a problem [the data collection plan/model and a design for a synthetic model]. Theoretical frameworks rarely come out of thin air -- new models come from extensions and tweaks to a reference model, and the finite gestalt of a scientific community. That's inevitable, I think, unless you happen to have a topic for study that has a very rich set of data available (one that wasn't collected motivated by some hypothesis).
Marcus G. Daniels on 12/11/2007 06:53 PM:
> Sure, if you manage to invent two entirely new ways of looking at a
> problem [the data collection plan/model and a design for a synthetic
> model]. Theoretical frameworks rarely come out of thin air -- new
> models come from extensions and tweaks to a reference model, and the
> finite gestalt of a scientific community.

But you don't need a new theoretical framework for a model to be fundamentally different from another model. As we've seen on this list alone, there is already a wide variety of theoretical frameworks that are _never_ directly compared. For the most part, I think this is because people prematurely decide that two frameworks are incommensurate and that it doesn't make sense to target the same referent with models in the two frameworks.

A great example is hybrid (discrete + continuous) systems. For some reason, we feel the need to call such systems "hybrid" even though they're not really that difficult to combine. The trick is that the _theoretical_ tools used to reason about them are different. But we can pull together lots of different things and run them in co-simulation without requiring theoretical commensurability.

Likewise, analogs come from the weirdest places. For example, we can compare the models for the "meter": the metal rod is fundamentally different from the distance light travels in a vacuum. These are fundamentally different models of the meter. Another example is an RC plane versus a balsa wood plane as models of a life-size plane. The models are fundamentally different. All that's required is a common aspect ("lift").
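[Editor's note: the hybrid (discrete + continuous) point can be sketched as a lockstep co-simulation. The thermostat example and every constant below are illustrative assumptions, not anything from the thread.]

```python
# Sketch of a hybrid co-simulation: a continuous cooling law (Euler-
# integrated) stepped in lockstep with a discrete on/off controller.
# The two pieces come from different theoretical frameworks (ODEs vs.
# event logic) yet combine with no difficulty in simulation.
dt = 0.1
temp = 25.0          # continuous state (degrees)
heater_on = False    # discrete state

history = []
for step in range(600):
    # discrete model: event-driven switching with hysteresis
    if temp < 19.0:
        heater_on = True
    elif temp > 21.0:
        heater_on = False
    # continuous model: Euler step of Newton-style cooling plus heat input
    dtemp = -0.1 * (temp - 15.0) + (0.8 if heater_on else 0.0)
    temp += dt * dtemp
    history.append(temp)

print("final temp: %.2f" % temp)
```

No theoretical commensurability is required: the ODE and the switching logic only share the state variable `temp`, the "common aspect" of the two models.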
Granted, when multi-modeling becomes standard practice, we will (probably) eventually consolidate our model construction methods, which will constrain such model construction (all rooted in physics, no doubt). And _then_ it will be reasonable to say that the various models are NOT fundamentally different. But right now, in the immature modeling and simulation discipline we have, any two models are very likely to be very different. In fact, part of our purpose in publishing our functional unit representation method is to help push for the development of multi-modeling methodology, so that we can make models with incommensurate structure phenomenally more commensurate through aspects and co-simulation. The idea being to construct/select populations of structures to find those that best generate the targeted behavior.

--
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
The assertion that our ego consists of protein molecules seems to me one of the most ridiculous ever made. -- Kurt Gödel
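[Editor's note: the construct/select idea in the last sentence can be sketched as a crude search over a small population of structurally different candidate models, each scored against a targeted behavior. The candidate forms, the square-root target, and the grid search are illustrative assumptions, not the functional-unit method itself.]

```python
# Sketch of selecting among a population of structurally different model
# "structures" by how well each generates a targeted behavior.
import math

TARGET = [math.sqrt(t) for t in range(1, 11)]  # the targeted behavior

# Three structurally different candidate "boxes", each with one knob `a`
candidates = {
    "linear": lambda t, a: a * t,
    "log":    lambda t, a: a * math.log(t + 1),
    "power":  lambda t, a: t ** a,
}

def score(f, a):
    # sum-of-squares dissimilarity from the targeted behavior
    return sum((f(t, a) - y) ** 2 for t, y in zip(range(1, 11), TARGET))

best = None
for name, f in candidates.items():
    for i in range(201):             # crude grid search over the knob
        a = i / 100.0
        s = score(f, a)
        if best is None or s < best[0]:
            best = (s, name, a)

print("best structure: %s (a=%.2f)" % (best[1], best[2]))
```

Here the power-law structure (at a = 0.50) reproduces the target exactly, so selection picks it out of the population; the point is that the winner's *structure*, not just its parameters, is what gets selected.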
Glen E. P. Ropella wrote:
> Another example is an RC plane versus a balsa wood plane as models of
> a life-size plane. All that's required is a common aspect ("lift").

Yes, I agree that it's better to have many models of something than just one, as that will tend to explore at least some 'interstitial space'. No, I don't agree that just because there are multiple models, those models won't be prone to correlation. People are copycats, in subtle and not-so-subtle ways. And as Eric points out, even a pure statistical inference approach is prone to correlation that will require further investigation to get at causality. To my way of thinking, data mining is just a first step to get into the right space at all. My concern is simply the one that Phil raises from time to time: looking for your keys under the street light when in fact they are in a dark alley.
Here's a good example. I'm at the fed conference on sustainable design metrics and strategies, hearing about high-quality metrics for the energy content of building decisions and the complex redesigns for reducing our impacts by 50% by 2030. The only problem is, the best metrics are showing the problem is more complicated: the trends in performance gains are actually slowing down, not speeding up, and the measures of the total energy content of our decisions that people have spent the most time on miss literally 90% of the total. Apparently the distribution of energy content our decisions are responsible for has a 'fat tail', which leaves 90% of it unaccounted for. That seems to mean that all the strategies are missing the main target. Given problematic indicators like that, it seems to me we should 'look around' for the hidden lists of things not represented....

Phil Henshaw
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
680 Ft. Washington Ave NY NY 10040
tel: 212-795-4844
e-mail: pfh at synapse9.com
explorations: www.synapse9.com
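[Editor's note: Phil's 'fat tail' claim can be illustrated numerically. When per-decision energy content follows a Zipf-like distribution with exponent below 1, the tail dominates the sum, so tracking only the biggest, most-studied items covers a small fraction of the total. The exponent and counts below are illustrative assumptions, not the conference's data.]

```python
# Sketch of a fat-tailed distribution of energy content across decisions:
# metrics that track only the largest items miss most of the total.
N_DECISIONS = 10000
TRACKED = 10                      # the few decision types metrics cover

# Zipf-like weights with a heavy tail (exponent < 1, so the tail dominates)
weights = [n ** -0.5 for n in range(1, N_DECISIONS + 1)]

covered = sum(weights[:TRACKED]) / sum(weights)
print("fraction of total covered by tracked items: %.1f%%" % (100 * covered))
```

With these (assumed) numbers the ten tracked items cover only a few percent of the total, the shape of problem Phil describes.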
Which federal conference is this, and how about some citations for your 90% fat-tail comment?

Gus Koehler, Ph.D.
President and Principal
Time Structures, Inc.
1545 University Ave.
Sacramento, CA 95825
916-564-8683, Fax: 916-564-7895
Cell: 916-716-1740
www.timestructures.com

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
Marcus G. Daniels on 12/12/2007 08:39 AM:
> Yes, I agree that it's better to have many models of something than
> just one, as that will tend to explore at least some 'interstitial
> space'. No, I don't agree that just because there are multiple models
> that those models won't be prone to correlation.

No one is claiming that, given any set of models, there is a guarantee of zero correlation between them. I never made any such claim. But the point I _am_ making is that multi-modeling facilitates the exploration and finding of things one didn't consider to begin with... a.k.a. looking in places other than just under the street light. As multi-modeling methodology matures, adding extra constraints for model construction will be natural and expected.

--
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
What luck for rulers that men do not think. -- Adolf Hitler