FRIAM and causality


FRIAM and causality

glen ep ropella


Sorry for breaking the threading.

Phil wrote:
> But isn't the shape of our varying ability to fit our models a direct
> image of 'nature itself', in fact, and our main mistake to discard
> them all but the 'best' one and so lose the shape of what they are
> all unable to describe?  That's why I like to go back and forth
> studying alternate models for their discrepancies and their fit,
> using models as learning tools rather than answers.  I think the
> notable thing you find that way is independent whole systems...i

Yes!

Sheesh, your prose is so hard to parse it feels good when I finally do
parse it. [grin]

Anyway, I definitely agree that it's a "mistake" in some sense to
discard all but the best projections.  However, in cases where a limit
_exists_ (and it is reasonable to believe it exists), then it's not a
mistake at all.  Preserving an erroneous model when much more accurate
models are at hand would be perverse (or evidence that one should be a
historian rather than a scientist).  I'm not talking about the type of
preservation that allows us to think back and learn from previous
events.  I'm talking about someone _sticking_ to and/or regularly
relying on a "bad" model even when they know it's wrong.

However, in most cases, we have no idea if the limit even exists and it
is often just psychological bias or delusion that makes us believe in
such a limit.  And in _those_ cases (MOST cases) it is definitely a
mistake to discard any model that is reasonably effective.  (Notice my
shift from "erroneous" or "accurate" to "effective".)

Personally, I believe this is the fundamental point of critical
rationalism and _open_ science where we allow and seriously consider
_any_ hypothesis, no matter how bizarre or offensive.  Only when a
hypothesis is falsified should it be demoted to secondary consideration
or the history books.

--
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
We must respect the other fellow's religion, but only in the sense and
to the extent that we respect his theory that his wife is beautiful and
his children smart. -- H.L. Mencken




FRIAM and causality

Russell Standish
All scientific models/theories tend to lie on a plane with the axes
"accuracy" and "ease of use". Explicability is also there, roughly
aligned with "ease of use".

Basically, we should keep only those theories/models that lie on the
Pareto front, and discard those that are dominated.  This is why we
still keep Newtonian gravity, even though it is less accurate than
GR (i.e., falsified), but discard the Ptolemaic system.
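
A minimal sketch of that Pareto-front rule, in Python; the accuracy and
ease-of-use scores below are invented purely for illustration:

# Keep only models on the Pareto front of (accuracy, ease of use).
# The models and scores are made up for the example.
models = {
    "Ptolemaic": {"accuracy": 0.70, "ease_of_use": 0.20},
    "Newtonian": {"accuracy": 0.95, "ease_of_use": 0.90},
    "GR":        {"accuracy": 0.999, "ease_of_use": 0.30},
}

def dominates(a, b):
    """a dominates b if it is at least as good on both axes and strictly better on one."""
    return (a["accuracy"] >= b["accuracy"] and a["ease_of_use"] >= b["ease_of_use"]
            and (a["accuracy"] > b["accuracy"] or a["ease_of_use"] > b["ease_of_use"]))

pareto_front = [
    name for name, m in models.items()
    if not any(dominates(other, m) for oname, other in models.items() if oname != name)
]

print(pareto_front)  # ['Newtonian', 'GR'] -- the Ptolemaic system is dominated and dropped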

Cheers.

On Wed, Nov 28, 2007 at 03:42:19PM -0800, Glen E. P. Ropella wrote:

>
> Anyway, I definitely agree that it's a "mistake" in some sense to
> discard all but the best projections.  However, in cases where a limit
> _exists_ (and it is reasonable to believe it exists), then it's not a
> mistake at all.  Preserving an erroneous model when much more accurate
> models are at hand would be perverse (or evidence that one should be a
> historian rather than a scientist).  I'm not talking about the type of
> preservation that allows us to think back and learn from previous
> events.  I'm talking about someone _sticking_ to and/or regularly
> relying on a "bad" model even when they know it's wrong.
>

--

----------------------------------------------------------------------------
A/Prof Russell Standish                  Phone 0425 253119 (mobile)
Mathematics                        
UNSW SYDNEY 2052                 hpcoder at hpcoders.com.au
Australia                                http://www.hpcoders.com.au
----------------------------------------------------------------------------



FRIAM and causality

Phil Henshaw-2
Russell,
That's a sound way to choose the most valuable model of the moment, but
it won't help you with what models can't show.   You need to study the
space between the models.  If you use optimal models and study the
discrepancy between them and the continually changing systems they
imperfectly reflect, you have a chance of seeing and engaging with the
real thing.  

Models are inherently lifeless, and quite unlike the inventive
independent networks we find in the complex physical world.  Using the
'best' model to represent nature is like putting a high-resolution
picture of a frog in your son's terrarium.  Very nice, but not the real
thing.  The assumption that all behavior is deterministic, just waiting
for us to find the formula, still lingers.  It blocks learning about what
we can't write formulas for, though, so I think it should be among the
first things to go.


Phil Henshaw
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
680 Ft. Washington Ave
NY NY 10040                      
tel: 212-795-4844                
e-mail: pfh at synapse9.com          
explorations: www.synapse9.com    



FRIAM and causality

Phil Henshaw-2
In reply to this post by glen ep ropella
Glen

>
> Sorry for breaking the threading.
>
> Phil wrote:
> > But isn't the shape of our varying ability to fit our
> models a direct
> > image of 'nature itself', in fact, and our main mistake to discard
> > them all but the 'best' one and so lose the shape of what they are
> > all unable to describe?  That's why I like to go back and forth
> > studying alternate models for their discrepancies and their
> fit, using
> > models as learning tools rather than answers.  I think the notable
> > thing you find that way is independent whole systems...i
>
> Yes!
>
> Sheesh, your prose is so hard to parse it feels good when I
> finally do parse it. [grin]

Well, partly that's from my mad editing scheme... :-,) repeated word
substitution looking for ways to suggest difficult ideas.  I too find on
rereading that it can get disjointed...    Glad it occasionally works!

>
> Anyway, I definitely agree that it's a "mistake" in some
> sense to discard all but the best projections.  However, in
> cases where a limit _exists_ (and it is reasonable to believe
> it exists), then it's not a mistake at all.  

Yes, like the data showing that economies are approaching a
thermodynamic limit in using energy to create wealth.   I use
approaching limits to narrow down my definitions of natural structures
and categories all the time.  My preference is using them like an
envelope with a space in-between, though.  When you use upper and lower
bounds to home in on a natural subject, fitting to it with a matching
shape, like a ball in a catcher's mitt, your image of any prominence in
the natural structure is self-centering.  By 'pointing around' it rather
than 'pointing at' it you're also both more likely to capture where the
natural structure is located and less likely to represent it as being
your model.  

> Preserving an
> erroneous model when much more accurate models are at hand
> would be perverse (or evidence that one should be a historian
> rather than a scientist).  I'm not talking about the type of
> preservation that allows us to think back and learn from
> previous events.  I'm talking about someone _sticking_ to
> and/or regularly relying on a "bad" model even when they know
> it's wrong.

A historical view is a fine way to gain more perspective on how our
images really fit with nature.  The complex world is far too complicated
and full of independently behaving things, and any way to begin to
appreciate that seems fine.  Then as we begin using models as learning
tools rather than representational tools, looking for which models are
the 'most help for learning' rather than the 'least wrong for
representation', I think the focus moves toward modeling as a learning
process.

>
> However, in most cases, we have no idea if the limit even
> exists and it is often just psychological bias or delusion
> that makes us believe in such a limit.  And in _those_ cases
> (MOST cases) it is definitely a mistake to discard any model
> that is reasonably effective.  (Notice my shift from
> "erroneous" or "accurate" to "effective".)

Yes!  They help you recall your own thought process and its
branchings.  Another way to make discarded models more useful can be to
break them up.  Turning well-made things into a clutter of probably
useless parts might seem confusing, but like compost they may contain
very useful parts for some unforeseen purpose.  That can even be a
specific strategy for evolving complex systems sometimes.  Economies use
that method of creative reinvention fairly often, capitalizing on the
fortuitous design of waste products and byproducts.

>
> Personally, I believe this is the fundamental point of
> critical rationalism and _open_ science where we allow and
> seriously consider _any_ hypothesis, no matter how bizarre or
> offensive.  Only when a hypothesis is falsified should it be
> demoted to secondary consideration or the history books.

Sure, while not discarding too much, and we should still keep the word
'falsified'.  False theories, say like those of Freud or Lamarck, the
flat earth or idealized determinism, can offer fruitful ground for
asking what made them so compelling.  

...does this go anywhere you think?

Best,

Phil
 

FRIAM and causality

Russell Standish
In reply to this post by Phil Henshaw-2
This is a very "Phil Henshaw" response - it's a bit hard to know how to
respond to this.

On Thu, Nov 29, 2007 at 10:14:41AM -0500, Phil Henshaw wrote:
> Russell,
> That's a sound way to choose the most valuable model of the moment, but
> it won't help you with what models can't show.   You need to study the
> space between the models.  If you use optimal models and study the
> discrepancy between them and the continually changing systems they
> imperfectly reflect, you have a chance of seeing and engaging with the
> real thing.  
>

So you're just saying we should be performing crossover operations
between successful models? But this is exactly what happens when
multidisciplinary teams form, leading to cross-pollination of ideas. The
results are often quite interesting and advance the field.

> Models are inherently lifeless, and quite unlike the inventive
> independent networks we find in the complex physical world.

As a long-time ALife practitioner, I don't really believe this at
all. I have often been surprised at the behaviour of my models, even
lifelike behaviour.

> Using the
> 'best' model to represent nature is like putting a high resolution
> picture of a frog in your son's terrarium.  Very nice, but not the real
> thing.  

Nice metaphor, but I don't understand how it relates... What about
replacing the frog with a detailed robotic imitation that has been
evolved to imitate frog behaviour using artificial life techniques?

> Assuming that all behavior is deterministic, just waiting for us
> to find the formula, still lingers.

What do you think of stochastic descriptions of nature then (starting with
Boltzmann's statistical physics)?

> It blocks learning about what we
> can't write formulas for, though, so I think it should be among the
> first things to go.
>

What we cannot "write formulas for" (by which I mean "find
compressible descriptions for"), we cannot learn. For that is the very
nature of learning - being able to generalise from the specific.
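
A toy illustration of "compressible description", with invented numbers:
a handful of observations that happen to follow a simple rule compress
down to the rule's two parameters, which is the generalisation.

# The data are invented; they follow y = 2x + 1 exactly, so six
# observations compress to two numbers (slope and intercept).
observations = [(x, 2 * x + 1) for x in range(6)]   # [(0, 1), (1, 3), ..., (5, 11)]

# "Learn" the rule by a least-squares fit of a straight line.
n = len(observations)
sx = sum(x for x, _ in observations)
sy = sum(y for _, y in observations)
sxx = sum(x * x for x, _ in observations)
sxy = sum(x * y for x, y in observations)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

print(slope, intercept)   # 2.0 1.0 -- the whole table generalises to two parameters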





FRIAM and causality

Douglas Roberts-2
Biting my lip over here. [Don't respond...DON'T RESPOND!]

--
Doug Roberts, RTI International
droberts at rti.org
doug at parrot-farm.net
505-455-7333 - Office
505-670-8195 - Cell



FRIAM and causality

glen ep ropella
In reply to this post by glen ep ropella


OK.  I hope this is the last time I have to break the threading.  My
upgrade is stalled, so my exim4 should work fine for now. [grin]

Phil on Thu Nov 29 at 11:46:09 EST 2007 wrote:
> Sure, while not discarding too much, and we should still keep the word
> 'falsified'.  False theories, say like those of Freud or Lamarck, the
> flat earth or idealized determinism, can offer fruitful ground for
> asking what made them so compelling.  
>
> ...does this go anywhere you think?

Well, going back to your original objection to my use of the word "any",
I think it does go somewhere.  My statement was that any actual (a.k.a.
realized, "real"... whatever word you use) system can be projected onto
any ordering (or any measure in general).  The resulting projection may
be a gross distortion of the system or a relatively accurate representation.

My point was simply that multiple models are necessary.  But, taking
your point that even gross distortions are useful for learning, we might
posit that not only are multiple models necessary, but the
_distribution_ of those models must have a certain character.  E.g.
perhaps really "bad" models _must_ be included in order to understand
the system.  I'd say that "goes somewhere".

--
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
See, in my line of work you got to keep repeating things over and over
and over again for the truth to sink in, to kind of catapult the
propaganda. -- George W. Bush




FRIAM and causality

Phil Henshaw-2
Glen,
Excellent!  If they're honestly derived from physical things, like
network maps, say, every model is going to be both a 'bad' model and a
helpful one.  The principle comes to this complex statement, yes, but I
think also to a simple one: to understand anything you need multiple
measures.  A measure is a sort of simplistic model.

You could say, "He's 6'1"." and have a model of a person that has some
helpful and some bad features.  A measure is informative, but it's also a
reductive projection of only one dimension or set of relationships from
the subject.  It's like taking Lincoln's quip "A tree is best measured
when it is down" and turning it backwards, to say 'it takes many kinds
of measure to begin getting the whole picture'...

The hard part seems to be taking the first dark step toward accepting that there
might be a shape of another form that the measures are missing (like the
whole tree or person).  It means looking for how to best extend and
complete your image based on the limited cast of the measures at hand.
Interpolation gone wild?? Free form projection perhaps??  Sort of...
You just gotta do something to make sense of the larger continuities
that develop in natural complex systems.  What I think we can see
clearly is that our measures and models are highly incomplete.

Phil






FRIAM and causality

glen ep ropella
In reply to this post by Russell Standish

Russell Standish on 12/05/2007 04:14 PM:
>> It blocks learning about what we
>> can't write formulas for, though, so I think it should be among the
>> first things to go.
>>
>
> What we cannot "write formulas for" (by which I mean "find
> compressible descriptions for"), we cannot learn. For that is the very
> nature of learning - being able to generalise from the specific.

I don't really disagree with you.  But, given Douglas' self-restraint,
someone must jump in... and since I'm not afraid of making an ass out of
myself, it may as well be me. [grin]

I don't think it's completely true that we cannot learn something that
won't submit to a "good" compressible description.  The difference
between tacit experience and explicit knowledge highlights that some
things (which seem to resist "good" compressed description) are
_learnable_.  The catch lies in the definition of "learning".

Granted, personal (particular) experience can't be completely
transpersonal.  Hence, all tacit experience has some element(s) that
cannot be formulated as explicit knowledge and transmitted.  But, we can
_appeal_ to (or rely upon) some psychologically projected (imputed)
commonalities between us.  For example, if I see you wearing shoes with
laces, I can assume that you know what it's like to tie your shoes.  (No
smart-aleck remarks about being dressed by one's mother!)  And in that
sense, even if I can't write a formula for "tying one's shoes", I can
still _learn_ how to tie shoes.  Further, I can use the inaccurate
("bad") formulas for how to tie one's shoes as a way to actually learn
how to tie shoes.  Even further, I can _teach_ others how to tie their
shoes based on these "bad" models.

Hence, we can use "bad" models to learn something that has no "good" model.

One might even go so far as to say _that's_ the very nature of learning,
not as you characterize it above.  Note, however, that this is
pre-scientific.  Science (and all externalized, transpersonal methods)
relies wholeheartedly on making as much knowledge as explicit as
possible.  If one went that far, it would be reasonable to assume that
the more autistic and rational of us learn best through the development
of compressed descriptions, whereas those of us addicted to getting our
hands dirty learn best through the use of "bad" models and direct
experience of applying those "bad" models.

--
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
Morality cannot exist one minute without freedom... Only a free man can
possibly be moral. Unless a good deed is voluntary, it has no moral
significance. -- Everett Martin




FRIAM and causality

Marcus G. Daniels
>
> And in that
> sense, even if I can't write a formula for "tying one's shoes", I can
> still _learn_ how to tie shoes.  Further, I can use the inaccurate
> ("bad") formulas for how to tie one's shoes as a way to actually learn
> how to tie shoes.  Even further, I can _teach_ others how to tie their
> shoes based on these "bad" models.
What's the metric you're using for good and bad here? That one person
looked it up on Wikipedia and another person learned it from their mom,
i.e. formal vs. informal description?   Or ability to stay tied vs. ease
in shoe removal, or??  Or some mixture of these features?  Who decides
the relative weights for goodness?



FRIAM and causality

glen ep ropella

Marcus G. Daniels on 12/06/2007 12:23 PM:

>> And in that
>> sense, even if I can't write a formula for "tying one's shoes", I can
>> still _learn_ how to tie shoes.  Further, I can use the inaccurate
>> ("bad") formulas for how to tie one's shoes as a way to actually learn
>> how to tie shoes.  Even further, I can _teach_ others how to tie their
>> shoes based on these "bad" models.
>
> What's the metric you're using for good and bad here? That one person
> looked it up on Wikipedia and another person learned it from their mom,
> i.e. formal vs. informal description?   Or ability to stay tied vs. ease
> in shoe removal, or??  Or some mixture of these features?  Who decides
> the relative weights for goodness?

I'm not using a measure ("metric" is the wrong word) at all.  My
statements are measure-independent.  _All_ measures provide an
incomplete description of any system.  "The map is not the territory."

If one can find a measure that is complete, then that measure _is_ the
system.

How one determines whether a given measure is better than another
depends entirely on one's purpose at the time.
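
A toy sketch of that incompleteness, with made-up numbers: two quite
different "systems" can agree under one measure and only be separated by
another, so no single measure recovers the territory from the map.

# Two invented "systems" (just number lists) that share one measure.
system_a = [1, 2, 3, 4, 5, 6]
system_b = [3.5] * 6          # a very different system with the same mean

def mean(xs):
    return sum(xs) / len(xs)

def spread(xs):               # a second measure: max - min
    return max(xs) - min(xs)

print(mean(system_a) == mean(system_b))      # True: this measure cannot tell them apart
print(spread(system_a) == spread(system_b))  # False: a second measure reveals the difference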

--
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
With or without religion, you would have good people doing good things
and evil people doing evil things. But for good people to do evil
things, that takes religion. -- Steven Weinberg




FRIAM and causality

Russell Standish
In reply to this post by Marcus G. Daniels
On Thu, Dec 06, 2007 at 01:23:00PM -0700, Marcus G. Daniels wrote:

> >
> > And in that
> > sense, even if I can't write a formula for "tying one's shoes", I can
> > still _learn_ how to tie shoes.  Further, I can use the inaccurate
> > ("bad") formulas for how to tie one's shoes as a way to actually learn
> > how to tie shoes.  Even further, I can _teach_ others how to tie their
> > shoes based on these "bad" models.
> What's the metric you're using for good and bad here? That one person
> looked it up on Wikipedia and another person learned it from their mom,
> i.e. formal vs. informal description?   Or ability to stay tied vs. ease
> in shoe removal, or??  Or some mixture of these features?  Who decides
> the relative weights for goodness?
>

There is no formalised metric, and it may not even be
formalisable. However, a good model is one that has utility: it can
predict stuff, or explain stuff, or a mixture of the two. A better
model is one that can both explain and predict stuff better than the
other model. Otherwise, models that can better predict or explain
stuff are just other good models.

Obviously there is a certain amount of subjectivity here - some folk
think that the model "God did it" explains stuff, but explaining stuff
in terms of a mysterious, unexplainable, all-powerful entity doesn't
work for me, nor for most scientists.

Cheers




FRIAM and causality

Russell Standish
In reply to this post by glen ep ropella
On Thu, Dec 06, 2007 at 11:30:00AM -0800, Glen E. P. Ropella wrote:

> Russell Standish on 12/05/2007 04:14 PM:
> >> It blocks learning about what we
> >> can't write formulas for, though, so I think it should be among the
> >> first things to go.
> >>
> >
> > What we cannot "write formulas for" (by which I mean "find
> > compressible descriptions for"), we cannot learn. For that is the very
> > nature of learning - being able to generalise from the specific.
>
> I don't really disagree with you.  But, given Douglas' self-restraint,
> someone must jump in... and since I'm not afraid of making an ass out of
> myself, it may as well be me. [grin]
>
> I don't think it's completely true that we cannot learn something that
> won't submit to a "good" compressible description.  The difference
> between tacit experience and explicit knowledge highlights that some
> things (which seem to resist "good" compressed description) are
> _learnable_.  The catch lies in the definition of "learning".
>
> Granted, personal (particular) experience can't be completely
> transpersonal.  Hence, all tacit experience has some element(s) that
> cannot be formulated as explicit knowledge and transmitted.  But, we can
> _appeal_ to (or rely upon) some psychologically projected (imputed)
> commonalities between us.  For example, if I see you wearing shoes with
> laces, I can assume that you know what it's like to tie your shoes.  (No
> smart-alack remarks about being dressed by one's mother!)  And in that
> sense, even if I can't write a formula for "tying one's shoes", I can
> still _learn_ how to tie shoes.  Further, I can use the inaccurate
> ("bad") formulas for how to tie one's shoes as a way to actually learn
> how to tie shoes.  Even further, I can _teach_ others how to tie their
> shoes based on these "bad" models.

Well, I didn't have in mind cerebellum-type learning, but rather
cerebral learning. I'm not sure whether the cerebellum involves
compressible descriptions or not.

I had never come across this notion of tacit knowledge before - I've
just read the Wikipedia article on it. It certainly wasn't what I had
in mind when discussing learning, but I wasn't explicit about that
(excuse the pun!).

Note that inaccurate is not necessarily bad. Newtonian gravity is
inaccurate (it gets Mercury's orbit wrong in a detectable way), but it
is still a good model (you can fly spacecraft using it). Bad just
means lack of any utility.

>
> Hence, we can use "bad" models to learn something that has no "good" model.
>
> One might even go so far as to say _that's_ the very nature of learning,
> not as you characterize it above.  Note, however, that this is
> pre-scientific.  Science (and all externalized, transpersonal methods)
> relies wholeheartedly on making as much knowledge as explicit as
> possible.  If one went that far, it would be reasonable to assume that
> the more autistic and rational of us learn best through the development
> of compressed descriptions, whereas those of us addicted to getting our
> hands dirty learn best through the use of "bad" models and direct
> experience of applying those "bad" models.
>

If you are learning, the models you use must be getting better, and
don't remain bad. And yes, these models are compressed descriptions of
reality.

However, whether tacit learning involves models, or not I simply don't
know. It is possible that I have some form of model of how my bicycle
behaves buried in cerebellum which is used as part of a
predictor-corrector loop. It is also possible that this is just an
interpretation of what is a tangle of neurons and synaptic weights,
just as we might say an artificial neural network is a predictive model (but
not at all explanatory), whereas a fuzzy logic controller is (at least
partially) explanatory.





FRIAM and causality

Marcus G. Daniels
In reply to this post by Russell Standish
Russell Standish wrote:
> There is no formalised metric, and it may not even be
> formalisable. However, a good model is one that has utility, it can
> predict stuff, or explain stuff, or a mixture of the two.
Well, it seems to me that once there is a question, bad and good become
grounded (if not found) and we can stop going in circles.  The answer
may be loafers, or it may be a square knot, or it may involve some
funny-looking lambda calculus stuff that compresses the how-to-tie-shoes
story to some minimal number of characters.



FRIAM and causality

glen ep ropella
In reply to this post by Phil Henshaw-2

Phil Henshaw on 12/06/2007 10:53 AM:
> The hard part seems to be to take the first dark step to accepting there
> might be a shape of another form that the measures are missing (like the
> whole tree or person).  It means looking for how to best extend and
> complete your image based on the limited cast of the measures at hand.
> Interpolation gone wild?? Free form projection perhaps??  Sort of...
> You just gotta do something to make sense of the larger continuities
> that develop in natural complex systems.  What I think we can see
> clearly is that our measures and models are highly incomplete.

I think we agree, which normally means there's nothing to talk about!
[grin]  But, I thought I'd throw out my term for what you're describing:
 "triangulation".

It's not really triangulation, of course.  But it's certainly more like
triangulation than, say, population sampling.  Perhaps we could call it
"tuple-angulation"???  [grin]

Here's a paper in which "we" (i.e. my outrageous rhetoric is reined in
and made coherent by the authors of the paper ;-) try to describe it:

   http://www.biomedcentral.com/1752-0509/1/14/abstract

See Figure 1.  This particular example is just one sub-type of the
general method we're talking about, here, though.

--
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
If this were a dictatorship, it would be a heck of a lot easier, just so
long as I'm the dictator. -- George W. Bush




FRIAM and causality

Marcus G. Daniels
Phil Henshaw wrote:
> The hard part seems to be to take the first dark step to accepting there
> might be a shape of another form that the measures are missing (like the
> whole tree or person).
Glen E. P. Ropella wrote:
> See Figure 1.  This particular example is just one sub-type of the
> general method we're talking about, here, though.
>  
Figure 1 concerns using behavioral distributions estimated from in vitro
data to constrain the choice of parameters/tuples/object
composition/etc. in an agent model -- model fitting.  Phil seems to be
talking about the situation where it isn't yet clear what to measure --
theory driving experiment, e.g. the development of general relativity
preceding experiments to find gravitational waves.




FRIAM and causality

glen ep ropella

Marcus G. Daniels on 12/07/2007 10:37 AM:
> Figure 1 concerns using behavioral distributions estimated from in vitro
> data to constrain the choice of parameters/tuples/object
> composition/etc. in an agent model -- model fitting.

No.  It concerns the iterative construction of new models (and new
measures) which behave (according to the chosen measures) more like the
reference model.  It is not model fitting in the sense of merely tuning
parameters so that a model reproduces the data.

In this sense, it is not about model fitting.  It is about triangulating
around the actual system in an effort to gain a better understanding of
(and a better way to characterize) the behavior of the actual system.  The fact
that the in vitro model is not iterated is just a consequence of
practical matters.  Both the in vitro model and the in silico model are
modified to generate the vague phenotype of the actual system.

>  Phil seems to be
> talking about the situation where it isn't yet clear what to measure --
> theory driving experiment, e.g. the development of general relativity
> preceding experiments to find gravitational waves.

It's the same thing.  It's just the case that our models are filled with
nitty gritty (and obfuscating) particular details.  It may be tempting
to think that we already _know_ precisely what to measure.  But we do
NOT.  We use prior models (including measures) in order to triangulate
toward better models.

--
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
Think not those faithful who praise all thy words and actions; but those
who kindly reprove thy faults. -- Socrates




FRIAM and causality

Phil Henshaw-2
In reply to this post by glen ep ropella
Glen,
I think I missed some replies there somehow, glad you picked it up.
 

> Russell Standish on 12/05/2007 04:14 PM:
> >> It blocks learning about what we
> >> can't write formulas for, though, so I think it should be
> among the
> >> first things to go.
> >>
> >
> > What we cannot "write formulas for" (by which I mean "find
> > compressible descriptions for"), we cannot learn. For that
> is the very
> > nature of learning - being able to generalise from the specific.
>
> I don't really disagree with you.  But, given Douglas'
> self-restraint, someone must jump in... and since I'm not
> afraid of making an ass out of myself, it may as well be me. [grin]
>
> I don't think it's completely true that we cannot learn
> something that won't submit to a "good" compressible
> description.  The difference between tacit experience and
> explicit knowledge highlights that some things (which seem to
> resist "good" compressed description) are _learnable_.  The
> catch lies in the definition of "learning".

PH: I wouldn't disagree that there are some kinds of generalities that
are so solid that they seem like they're real things.  They're certainly
real for us, but I think, they are still physically located in our
minds.  The conservation laws, for example, are highly reliable, but
don't seem to have a physical presence except in our minds.   Some
learning tasks are sort of the opposite, and can't result in capturing
what is outside our minds in a reliable form to hold inside our minds.
Complex systems are one of those, just not reducible.   I don't see it
as an all-or-nothing kind of thing, though.   When I see a growth curve
I usually find confirming evidence that what produced it contained an
evolving network of relationships.  

I can usually extract some descriptors from the records of it for
analysis.  That analysis usually directs my attention to hidden features
of the evolving network that I hadn't thought of looking for.   I'm also
usually able to confirm those things are there and learn new things
about them with a little more study.   I don't require a picture in my
mind of what I don't see, and accept I may never understand the whole
system but.  I still learn useful things by looking for what gives it
continuity.  From experience I learn that a directed search for
continuities not yet seen will keep revealing more.

One good example of exploring continuities to discover new structures
is the set of discoveries in physics that came from equations whose
denominators tended to zero.  Those revealed parts of nature that
violated our equations, and that was very useful.

>
> Granted, personal (particular) experience can't be completely
> transpersonal.  Hence, all tacit experience has some
> element(s) that cannot be formulated as explicit knowledge
> and transmitted.  But, we can _appeal_ to (or rely upon) some
> psychologically projected (imputed) commonalities between us.
>  For example, if I see you wearing shoes with laces, I can
> assume that you know what it's like to tie your shoes.  (No
> smart-alack remarks about being dressed by one's mother!)  
> And in that sense, even if I can't write a formula for "tying
> one's shoes", I can still _learn_ how to tie shoes.  Further,
> I can use the inaccurate
> ("bad") formulas for how to tie one's shoes as a way to
> actually learn how to tie shoes.  Even further, I can _teach_
> others how to tie their shoes based on these "bad" models.
>
> Hence, we can use "bad" models to learn something that has no
> "good" model.
>
> One might even go so far as to say _that's_ the very nature
> of learning, not as you characterize it above.  Note,
> however, that this is pre-scientific.  Science (and all
> externalized, transpersonal methods) relies wholeheartedly on
> making as much knowledge as explicit as possible.  If one
> went that far, it would be reasonable to assume that the more
> autistic and rational of us learn best through the
> development of compressed descriptions, whereas those of us
> addicted to getting our hands dirty learn best through the
> use of "bad" models and direct experience of applying those
> "bad" models.

There are lots of ways 'bad' models can be used.  I think the one I was
thinking of first is that they might be good for sort of 'reverse
engineering' the thought process from which they  came.  I often look at
a 'bad' model and ask what was the valid part of the question that led
to it.   What we then got to was an image of considering a set of bad
models, built from various generating questions, and the complementary
spaces they describe and shapes of realities they hint at beyond the
models.  

I'm not sure how to use that in a direct analytical way with any real
models of things at all.  I just like it as an image for responding to
the dilemma that all models are simplistic and we need to look at how
they misrepresent reality for other hints.  Perhaps the simplest practical
interpretation is that models need to change over time: we need a process
of examining the divergence between models and reality, changing the
models over and over to keep up.  That leaves a trail of models as a
record of either a thought process or a natural system process, or both.


Best,

Phil






FRIAM and causality

Phil Henshaw-2
In reply to this post by glen ep ropella
Glen,

>
> Phil Henshaw on 12/06/2007 10:53 AM:
> > The hard part seems to be to take the first dark step to accepting
> > there might be a shape of another form that the measures
> are missing
> > (like the whole tree or person).  It means looking for how to best
> > extend and complete your image based on the limited cast of the
> > measures at hand. Interpolation gone wild?? Free form projection
> > perhaps??  Sort of... You just gotta do something to make
> sense of the
> > larger continuities that develop in natural complex
> systems.  What I
> > think we can see clearly is that our measures and models are highly
> > incomplete.
>
> I think we agree, which normally means there's nothing to
> talk about! [grin]  But, I thought I'd throw out my term for
> what you're describing:  "triangulation".
>
> It's not really triangulation, of course.  But it's certainly
> more like triangulation than, say, population sampling.  
> Perhaps we could call it "tuple-angulation"???  [grin]

PH: I guess I just call it filling in the gaps, understanding that as a
combination of analysis and synthesis.  So, if 'gaps' then become a raw
material for systems science, part of what makes a model 'good' is whether
you can see how it is also interestingly 'bad', since without having some
interest in the 'bad' you can't be tracking the usually moving and
significantly misrepresented targets of the physical system. :-,)

I do come close to 'triangulation' in my derivative reconstruction
method, except I use 4 points to find a 5th rather than 2 points to find
a 3rd.  Given 5 points in time sequence, it imputes a new value for the
middle one, based on making the implied 3rd derivatives from right
and left the same (going forward and back in separate passes and
averaging).  If each point is considered a separate "bad" model for the
system, one could impute an average value and a system having a single
fixed average state.  Using derivative reconstruction imputes a
continuous complex process without fixed definition instead.  That
seems to be a less distorting way of data smoothing, and more useful for
raising questions about the turning points within the changing
mechanisms producing it.
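
A rough sketch of one way to read that description (an interpretation of
the prose above, not Phil's actual code): setting the third difference
taken from the left of a 5-point window equal to the third difference
taken from the right and solving for the middle value gives the cubic
through the four neighbours, and the forward/backward passes are applied
naively and averaged.

# Derivative-reconstruction style smoother, as interpreted above.
# Matching left and right third differences over y[i-2..i+2] gives
#   y[i] = (4*(y[i-1] + y[i+1]) - y[i-2] - y[i+2]) / 6
def _impute_middle(y, i):
    return (4.0 * (y[i - 1] + y[i + 1]) - y[i - 2] - y[i + 2]) / 6.0

def derivative_reconstruction(series):
    """One forward and one backward pass over the interior points, then average."""
    forward = list(series)
    for i in range(2, len(forward) - 2):          # left-to-right sweep
        forward[i] = _impute_middle(forward, i)

    backward = list(series)
    for i in range(len(backward) - 3, 1, -1):     # right-to-left sweep
        backward[i] = _impute_middle(backward, i)

    return [(f + b) / 2.0 for f, b in zip(forward, backward)]

# Example with made-up noisy data:
data = [0.0, 1.1, 1.9, 3.2, 3.8, 5.1, 6.0, 6.9]
print(derivative_reconstruction(data))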



> Here's a paper in which "we" (i.e. my outrageous rhetoric is
> reined in and made coherent by the authors of the paper ;-)
> try to describe it:
>
>    http://www.biomedcentral.com/1752-0509/1/14/abstract
>
> See Figure 1.  This particular example is just one sub-type of the
> general method we're talking about, here, though.

PH:  I was impressed with the clarity of the abstract and their not
confusing biology, lab chemistry and computer model references.  Figure
1 puzzles me though.  I get your suggestion that this shows a way people
are using new visualization techniques to compare models.   I don't
understand how highly complex comparisons of test tube and computer
based things would make them look so very much alike unless both are
parametric data displays of a sort not described, though.   Comparing
hugely complicated systems does need visualization help, certainly, but
if that's what makes the images look so much alike it should be
mentioned.  Still, what I get from the picture is that they give
themselves an A+.  I don't see how their model recreates some features
of the natural process and interestingly leaves others out.  It's
importantly that art of making what you've failed to account for
interesting, rather than hiding it, that I find missing in lots of
studies.

So, here's to all 'bad' models...!  may we survive them...:-)

Best,

Phil






FRIAM and causality

glen ep ropella

Phil Henshaw on 12/07/2007 01:42 PM:

> PH:  I was impressed with the clarity of the abstract and their not
> confusing biology, lab chemistry and computer model references.  Figure
> 1 puzzles me though.  I get your suggestion that this shows a way people
> are using new visualization techniques to compare models.   I don't
> understand how highly complex comparisons of test tube and computer
> based things would make them look so very much alike unless both are
> parametric data displays of a sort not described, though.   Comparing
> hugely complicated systems does need visualization help, certainly, but
> if that's what makes the images look so much alike it should be
> mentioned.  Still, what I get from the picture is that they give
> themselves an A+.  I don't see how their model recreates some features
> of the natural process and interestingly leaves others out.  It's
> importantly that art of making what you've failed to account for
> interesting, rather than hiding it, that I find missing in lots of
> studies.

Just for clarity, it's a cartoon and not a visualization.  The diagram
is merely intended to give a visual impression of the iterative process
being used.  The gray smudges and spots representing targeted attributes
do not map to particular behaviors of the in vitro or in silico models.

So, it's not that they're giving themselves an A+; they're just trying
to say that the first model (the gray circle in A) is falsified because
it doesn't exhibit the behavior indicated by the spot labeled "a" even
though it exhibits the behaviors labeled "t".  The second model (not
just the same model with different parameter values), pointed to by "2"
in B is _also_ falsified because it does not exhibit "a".  However,
there are indications that model 2 is "better" than model 1 because it
exhibits those two behaviors indicated by the spots that are closer in
the behavior space to "a".  The subsequent model 3 is _validated_
because it exhibits behaviors "t" and "a".
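
A minimal sketch of that falsify-or-revise loop; the behaviour labels and
candidate models below are invented stand-ins for the "t" and "a" spots
in the cartoon, not the paper's actual protocol.

# Iterate over proposed models until one exhibits all targeted attributes.
TARGETED = {"t1", "t2", "a"}   # behaviours the analogue must exhibit

# Hypothetical sequence of models, each summarised by the behaviours it exhibits.
candidate_models = [
    {"name": "model 1", "behaviours": {"t1", "t2"}},            # falsified: missing "a"
    {"name": "model 2", "behaviours": {"t1", "t2", "near_a"}},  # falsified, but closer
    {"name": "model 3", "behaviours": {"t1", "t2", "a"}},       # exhibits everything targeted
]

def validate(model):
    missing = TARGETED - model["behaviours"]
    return (len(missing) == 0), missing

for model in candidate_models:
    ok, missing = validate(model)
    if ok:
        print(f"{model['name']} validated against the targeted attributes")
        break
    print(f"{model['name']} falsified; missing {sorted(missing)} -- revise and try again")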

> So, here's to all 'bad' models...!  may we survive them...:-)

Perfect!  I'll make that toast over my next pint.

--
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
The only good is knowledge and the only evil is ignorance. -- Socrates


