recap on Rosen

Russell Standish
On Thu, Apr 24, 2008 at 07:06:34AM -0700, glen e. p. ropella wrote:

> Russell Standish wrote:
> > On Wed, Apr 23, 2008 at 08:55:29PM +0200, Günther Greindl wrote:
> >> But, as said above, it seems that RR defines mechanism differently. This
> >> is of course very unfortunate, as it will have people talking past each
> >> other. Unfortunate also because mechanism is indeed a word which can be
> >> given a precise, mathematical meaning.
> >>
> >
> > Is this in fact the case?
>
> Which part?  Are you asking whether it's true that Rosen uses a peculiar
> definition of "mechanism"?  Or are you asking whether or not the normal
>  use of "mechanism" can be given a precise, mathematical meaning?

The former, and I mean whether his definition is inequivalent to the
usual computer science meaning, not that he expresses the usual concept
in a peculiar way.

>
> > When I read "What is Life",
>
> Surely you mean "Life Itself"?  "What is Life" is a book by Schrödinger.
>

Yes, sorry - it's been a few years :(

> --
> glen e. p. ropella, 971-219-3846, http://tempusdictum.com
> And therefore the victories won by a master of war gain him neither
> reputation for wisdom nor merit for valour.   -- Sun Tzu, "The Art of War"
> ============================================================
> FRIAM Applied Complexity Group listserv
> Meets Fridays 9a-11:30 at cafe at St. John's College
> lectures, archives, unsubscribe, maps at http://www.friam.org

--

----------------------------------------------------------------------------
A/Prof Russell Standish                  Phone 0425 253119 (mobile)
Mathematics                        
UNSW SYDNEY 2052                 hpcoder at hpcoders.com.au
Australia                                http://www.hpcoders.com.au
----------------------------------------------------------------------------



recap on Rosen

Phil Henshaw-2
In reply to this post by glen ep ropella
The question arises from how using self-consistent models for physical
systems full of undefined parts forces us to leave the undefined parts out
of the model, as for the individual behavior of any natural system.

There is a kind of 'common physics' displayed by the users of mismatched
models and subjects of this type.   The environmental movement spent 30
years developing ways to turn agricultural land into fuel before they found
out that someone else was already using the same land for something else.
It tipped a balance in a natural system and triggered a world food crisis
we're beginning to see can't be solved by increasing the food supply.   The
whole idea had been to correct the harm being done to the earth by other
people who had made almost exactly the same mistake by over using our energy
supplies in the first place.  

The environmentalists used a massive network of activists and decades of
well funded governmental and industrial research, and they all failed to ask
what in nature the new land use might run into.  To me it looks like they
were using a simple self-consistent model for their purpose and never
questioned whether it contained living things that might react in an
unexpected way not represented in their model.  If you don't 'see the life'
you don't get wonder what it'll do when you interfere with it.   That's what
always seems to be the problem.  We don't know how independent things will
react when we run into them, importantly because we have a habit of using
models that conceal the presence of the things that'll get in our way.  

Self-consistent models represent environments very well, just omitting their
living parts, "mind without matter".

Would any of the things you guys suggested fix that?

Phil Henshaw
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
680 Ft. Washington Ave, NY NY 10040   tel: 212-795-4844
e-mail: pfh at synapse9.com   explorations: www.synapse9.com
"in the last 200 years the amount of change that once needed a century of
thought now takes just five weeks"


> -----Original Message-----
> From: friam-bounces at redfish.com [mailto:friam-bounces at redfish.com] On
> Behalf Of glen e. p. ropella
> Sent: Thursday, April 24, 2008 6:02 PM
> To: The Friday Morning Applied Complexity Coffee Group
> Subject: Re: [FRIAM] recap on Rosen
>
> Marcus G. Daniels wrote:
> > phil henshaw wrote:
> >> Can a self-consistent model have independently behaving parts, like
> >> environments do?
> >>
> > If the independently behaving parts don't have some underlying common
> > physics (e.g. they could in principle become different from time to
> time
> > according to some simple rules, but generally are the same), then
> there
> > will be so many degrees of freedom from the independently behaving
> parts
> > that arguments about why a system does what it does will be
> > quantitatively as good as any other.
>
> I don't think that's quite true.  It's close to true, but not quite
> true. [grin]
>
> Even if the parts don't have a common, underlying physics
> (Truth/Reality), as long as they can interact _somehow_ and if they
> interact a lot (highly connected), then a common "physics" may cohere
> after a time so that a forcing structure limits the degrees of freedom.
>
> In such a case (perhaps physical symmetry breaking is one example?),
> some arguments about why a system does what it does will be more
> accurate and precise than others, namely the ones that capture the
> emergent "physics".
>
> This could be true even if the "physics" that emerges is completely
> abstracted from the original medium of interaction (the actual
> physics).
>  Ultimately, whether such a "ladder of abstraction" is _completely_
> closed or not is a matter of faith or philosophy.  Is there a bottom
> turtle or not?
>
> --
> glen e. p. ropella, 971-219-3846, http://tempusdictum.com
> The ultimate result of shielding men from the effects of folly, is to
> fill the world with fools. -- Herbert Spencer






recap on Rosen

Günther Greindl
In reply to this post by glen ep ropella

> OK.  So RR makes a prohibitive claim ... something like "living systems
> cannot be accurately modeled with a UTM because MR systems cannot be
> realized".  And you are refuting that claim by a counter-claim that MR
> systems _can_ be realized, emphasizing that the recursion theorem is
> crucial to such a realization.
>
> Do I have it right?

Yes, that's basically my claim. RR also mentions his closed efficient
cause; that's where the recursion theorem comes in: you can code whatever
behaviour you like and then replicate it indefinitely.

What is _not_ addressed in the (M,R) model is how it comes up in the
first place (= origin of life); that is where evolution comes in, and a
machine model is at no disadvantage here, again.
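The appeal to the recursion theorem can be made concrete with a quine, a program whose output is its own source code. This is my own illustrative sketch, not anything from Rosen or the thread; the variable names are invented:

```python
import io
import contextlib

# A quine: a program whose output is its own source code.  This is the
# constructive core of Kleene's recursion theorem -- a program can obtain
# its own description and act on it (here it merely prints it; in general
# it could copy or transform it, the "replicate it indefinitely" step).
src = 'src = %r\nprint(src %% src)'
program = src % src          # the full two-line program text

buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    exec(program)            # run the program, capturing its output

assert buf.getvalue().rstrip("\n") == program   # output equals own source
print("program reproduced itself exactly")
```

The recursion theorem guarantees that any computable transformation can be performed by a program on its own description; the quine is the degenerate case where the transformation is simply "output it".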

Cheers,
Günther



recap on Rosen

glen ep ropella
In reply to this post by Phil Henshaw-2

phil henshaw wrote:
> Self-consistent models represent environments very well, just omitting their
> living parts, "mind without matter".
>
> Would any of the things you guys suggested fix that?

I believe so.  At least 1/2 of the solution to any problem lies in a
good formulation of the problem.  And in that sense, being able to state
(as precisely as possible) which closures are maintained in which
context and which closures are broken in which context, therefore,
contributes immensely to the solution.

I.e. if the problem is that our modeling methods only capture isolable
(separable, "linear", analytic, etc.) systems _well_, then we need other
modeling methods to capture holistic ("nonlinear", non-analytic)
systems.  As I understand it, this is the basic conception behind the
"sustainability movement", somehow capturing or understanding
externalities and engineering organizations so that their waste is more
useful to other organizations.

What Rosen tried to do (in my _opinion_) is help us specify what parts
of our modeling methods are inadequate to the task of capturing certain
broken closures.  I.e. I think he tried to explain _why_ so many of our
models are so fragile, namely, because they cannot capture the closure
of efficient cause (agency).  That concept requires no mathematics (à la
category theory).  But he tried to communicate the concept using
mathematics and logic via the discussions of Poincaré's
"impredicativity" and rhetorical vs. causal loops.

So, yes, I think these things can help with our understanding of the
fragility of _simple_ models ("mechanism" in Rosen's peculiar
terminology).  Even if Rosen's MR-systems or his "closure to efficient
cause" are inadequate to the task (which I think they _are_), at least
considering those attempts and how/where they may fail facilitates our
progress toward other, hopefully more successful, solutions.

--
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
Ours is a world of nuclear giants and ethical infants. -- Omar N. Bradley




recap on Rosen

glen ep ropella
In reply to this post by Günther Greindl

Günther Greindl wrote:

>> OK.  So RR makes a prohibitive claim ... something like "living
>> systems cannot be accurately modeled with a UTM because MR systems
>> cannot be realized".  And you are refuting that claim by a
>> counter-claim that MR systems _can_ be realized, emphasizing that
>> the recursion theorem is crucial to such a realization.
>>
>> Do I have it right?
>
> Yes that's basically my claim - RR also mentions his closed efficient
>  cause, that's where the rec. theorem comes in: you can code whatever
>  behaviour you like and then replicate it indefinitely.

OK.  But you must realize that this is not really a _refutation_ or
disproof.  It's just one guy (Rosen) arguing with another guy (Günther).
 For an actual refutation (proof that Rosen's claim is false), you'd
have to provide an explicit (effective) construction of a computational
living system.

And you haven't done that. [grin] Hence, you haven't proven Rosen wrong
... yet.  ALifers across the planet are working on this constructive
proof feverishly, of course.

Or, you could show us specifically where Rosen's claim contradicts the
recursion theorem.  But to my knowledge nobody has formalized Rosen's
work to the degree of specificity we'd need to show such a
contradiction.  I could easily be wrong about that, of course.  So, if
you'll point to such a rigorous formulation of Rosen's claim and
precisely how it contradicts the recursion theorem, then we could say
that one or the other (Rosen's or the recursion theorem) is refuted.

> What is _not_ addressed in the (M,R) model is how it comes up in the
>  first place (= origin of life);

Nobody (including the most zealous Rosenite, I think) would disagree
with that.

> that is where evolution comes in, and a machine model is at no
> disadvantage here, again.

It would be interesting to augment MR systems with some reasonably
accurate formulation of evolution.

--
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
Almost nobody dances sober, unless they happen to be insane. -- H. P.
Lovecraft




recap on Rosen

Phil Henshaw-2
In reply to this post by glen ep ropella
How does that

>
> phil henshaw wrote:
> > Self-consistent models represent environments very well, just
> omitting their
> > living parts, "mind without matter".
> >
> > Would any of the things you guys suggested fix that?
>
> I believe so.  At least 1/2 of the solution to any problem lies in a
> good formulation of the problem.  And in that sense, being able to
> state
> (as precisely as possible) which closures are maintained in which
> context and which closures are broken in which context, therefore,
> contributes immensely to the solution.

[ph] the requirement is that your model describe new behavior of independent
organisms or communities, things you have no information about because they
never occurred before.  What's the modeling strategy for that?

 

> I.e. if the problem is that our modeling methods only capture isolable
> (separable, "linear", analytic, etc.) systems _well_, then we need
> other
> modeling methods to capture holistic ("nonlinear", non-analytic)
> systems.  As I understand it, this is the basic conception behind the
> "sustainability movement", somehow capturing or understanding
> externalities and engineering organizations so that their waste is more
> useful to other organizations.
>
> What Rosen tried to do (in my _opinion_) is help us specify what parts
> of our modeling methods are inadequate to the task of capturing certain
> broken closures.  I.e. I think he tried to explain _why_ so many of our
> models are so fragile, namely, because they cannot capture the closure
> of efficient cause (agency).  That concept requires no mathematics (ala
> category theory).  But he tried to communicate the concept using
> mathematics and logic via the discussions of Poincare's
> "impredicativity" and rhetorical vs. causal loops.

[ph] I haven't studied Rosen enough to really know if he's pointing to the
same conflict between living things and machines that I am, but there
clearly is a conflict.  Machines are the product of a self-consistent model
in the mind of the inventor; cities and technologies are complex learning
processes that grow out of their own environments like all other natural
systems, etc.

Phil

>
> So, yes, I think these things can help with our understanding of the
> fragility of _simple_ models ("mechanism" in Rosen's peculiar
> terminology).  Even if Rosen's MR-systems or his "closure to efficient
> cause" are inadequate to the task (which I think they _are_), at least
> considering those attempts and how/where they may fail facilitates our
> progress toward other, hopefully more successful, solutions.
>
> --
> glen e. p. ropella, 971-219-3846, http://tempusdictum.com
> Ours is a world of nuclear giants and ethical infants. -- Omar N.
> Bradley






recap on Rosen

Russell Standish
On Fri, Apr 25, 2008 at 10:21:59PM -0400, phil henshaw wrote:
>
> [ph] I haven't studied Rosen enough to really know if he's pointing to the
> same conflict between living things and machines that I am, but there
> clearly is a conflict.  Machines are the product of a self-consistent model
> in the mind of the inventor; cities and technologies are complex learning
> processes that grow out of their own environments like all other natural
> systems, etc.
>
> Phil

Only simple machines. More complex machines (e.g. the Intel Pentium
processor) show definite signs of evolutionary accretion, as no one
person can design such a complex thing from scratch; rather, previous
designs are reused and optimised.

--

----------------------------------------------------------------------------
A/Prof Russell Standish                  Phone 0425 253119 (mobile)
Mathematics                        
UNSW SYDNEY 2052                 hpcoder at hpcoders.com.au
Australia                                http://www.hpcoders.com.au
----------------------------------------------------------------------------



recap on Rosen

Phil Henshaw-2
>
> Only simple machines. More complex machines (eg the Intel Pentium
> processor) show definite signs of evolutionary accretion, as no one
> person can design such a complex thing from scratch, but rather
> previous designs are used and optimised.

[ph] Right!  Layered design is sort of a universal signature of learning
processes, in this case the chip designers resourcefully adapting pieces of
the old design in making new designs for new problems.  Eventually any
direction of development or learning runs into diminishing returns, either
inherent in the design, or relative to competition with some other.  

I understand there's also a great deal of arguably creative machine design
in chip design too, still accumulative in nature.  But I don't think we have
processors that 'design themselves', nor would they do very well
with multiple disconnected parts with different operating systems that only
communicated by dumping their waste products on each other... :-)  That's
the trick that organisms do so nicely and that our way of explaining them
misses when we describe their functions and relationships in a
self-consistent way.  Unlike a logical medium, a physical medium tolerates
inconsistently designed and behaving things and allows them to capitalize on
each other's unintended side behavior and effects.

Phil







recap on Rosen

Marcus G. Daniels
In reply to this post by Phil Henshaw-2
phil henshaw wrote:

> Glen wrote:
>  
>> I believe so.  At least 1/2 of the solution to any problem lies in a
>> good formulation of the problem.  And in that sense, being able to
>> state
>> (as precisely as possible) which closures are maintained in which
>> context and which closures are broken in which context, therefore,
>> contributes immensely to the solution.
>>    
>
> [ph] the requirement is that your model describe new behavior of independent
> organisms or communities, things you have no information about because they
> never occurred before.  What's the modeling strategy for that?
>  
Find a function that well describes a state of a thing or aggregate
measurement of interest at t - 2 that gives the state at t - 1 that
gives a state at t.  Then prediction is a matter of applying the
function more times.  Add more functions to describe more individual
things or aggregates and note when there are shared functions in those
definitions (e.g. the food web fundamentally depends on photosynthesis).
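A minimal sketch of this iterate-the-transition-function idea, with made-up numbers and a linear one-step map as the assumed "function" (nothing below comes from the thread itself):

```python
# Fit a one-step transition function on past states, then predict by
# applying it repeatedly.  The "function" here is a linear map
# x_t = a*x_{t-1} + b, estimated from two observed transitions; the
# data and names are illustrative assumptions.

def fit_linear_step(history):
    """Estimate x_t = a*x_{t-1} + b from the last three observations."""
    x0, x1, x2 = history[-3:]
    a = (x2 - x1) / (x1 - x0)   # ratio of successive increments
    b = x2 - a * x1
    return lambda x: a * x + b

history = [2.0, 3.0, 4.5]       # state grows by a factor 1.5 each step
step = fit_linear_step(history)

x = history[-1]
for _ in range(2):              # prediction = applying the function more times
    x = step(x)
print(x)                        # -> 10.125 (i.e. 4.5 * 1.5 * 1.5)
```

More state variables mean more such functions, coupled wherever they share terms, which is where the food-web/photosynthesis dependence above would show up.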

If you want to define all things to be independent, then there is no
point in talking about interactions -- you've already defined away the
possibility of that!    Covariance is zero.

Marcus



recap on Rosen

Phil Henshaw-2
Ok, 'find a function' assumes there is one to find, but the problem set is
running into behavior which has already had major consequences (like
starvation for 100 million people because of an unexpected world food price
level shift), and the question is what 'function' would you use to not be
caught flat-footed like that.  Is there some general function to use in
cases where you have no function and don't even know what the problem
definition will be?

I actually have a very good one, but you won't like it because it means
using the models to understand what they fail to describe rather than the
usual method of using them to represent other things.

Phil Henshaw
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
680 Ft. Washington Ave, NY NY 10040   tel: 212-795-4844
e-mail: pfh at synapse9.com   explorations: www.synapse9.com
"in the last 200 years the amount of change that once needed a century of
thought now takes just five weeks"


> -----Original Message-----
> From: friam-bounces at redfish.com [mailto:friam-bounces at redfish.com] On
> Behalf Of Marcus G. Daniels
> Sent: Saturday, April 26, 2008 12:36 AM
> To: The Friday Morning Applied Complexity Coffee Group
> Subject: Re: [FRIAM] recap on Rosen
>
> phil henshaw wrote:
> > Glen wrote:
> >
> >> I believe so.  At least 1/2 of the solution to any problem lies in a
> >> good formulation of the problem.  And in that sense, being able to
> >> state
> >> (as precisely as possible) which closures are maintained in which
> >> context and which closures are broken in which context, therefore,
> >> contributes immensely to the solution.
> >>
> >
> > [ph] the requirement is that your model describe new behavior of
> independent
> > organisms or communities things you have no information about because
> they
> > never occurred before.  What's the modeling strategy for that?
> >
> Find a function that well describes a state of a thing or aggregate
> measurement of interest at t - 2 that gives the state at t - 1 that
> gives a state at t.  Then prediction is a matter of applying the
> function more times.  Add more functions to describe more individual
> things or aggregates and note when there are shared functions in those
> definitions (e.g. food web fundamentally depends photosynthesis).
>
> If you want to define all things to be independent, then there is no
> point in talking about interactions -- you've already defined away the
> possibility of that!    Covariance is zero.
>
> Marcus
>





Reply | Threaded
Open this post in threaded view
|

recap on Rosen

Marcus G. Daniels
phil henshaw wrote:
> Ok, 'find a function' assumes there is one to find, but the problem set is
> running into behavior which has already had major consequences (like
> starvation for 100million people because of an unexpected world food price
> level shift) and the question is what 'function' would you use to not be
> caught flat footed like that.
The caloric requirements of a person are autocorrelated, but probably
for a lot of models a constant will suffice -- a certain amount of body
weight decrease, and then the probability of death goes up.  As for
price fluctuations, that's a matter of modeling the natural resources
that go into food, the costs and benefits that motivate farmers, the
commodity markets, and so on.  Certainly we can try to understand how
each of these works, and then do what-if scenarios when one or more
components are perturbed (or destroyed).  It's still a matter of
finding stories (functions) to fit observables.  The availability and
accuracy of those observables may be poor, and sometimes all that is
possible is to imagine worst and best cases, run the numbers, and see how
the result changes.
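One way to sketch the worst/best-case exercise: push the bounds of an uncertain input through a toy model and compare outcomes. The harvest model and all numbers below are illustrative assumptions, not data from the thread:

```python
# "Imagine worst and best cases, run the numbers, and see how the result
# changes": propagate the bounds of an uncertain yield through a toy
# food-supply model.  Everything here is a made-up illustration.

def food_supply(yield_per_ha, hectares=100.0, diverted_to_fuel=0.3):
    """Tonnes of food produced after a share of land goes to fuel crops."""
    return yield_per_ha * hectares * (1.0 - diverted_to_fuel)

# yield bounds in tonnes/ha: a bad year vs. a good year
for label, y in [("worst", 2.0), ("best", 4.0)]:
    print(label, round(food_supply(y), 1))   # worst 140.0, best 280.0
```

Even when the observables are poor, the spread between the two runs shows how sensitive the outcome is to the unknown input, which is the point of the exercise.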
> Is there some general function to use in
> cases where you have no function and don't even know what the problem
> definition will be?  
>  
I think you do know what the problem could look like, but most details
remain unspecified.   If you can construct an example that has
catastrophes of the kind you often talk about, and spell out all of the
details of your work of fiction (that even may happen to resemble
reality), such that the what-if scenarios can be reproduced in
simulations, then others can study the sensitivities.   If there is a
`forcing structure' that will occur in many, many variant forms, then
you can demonstrate that.
> I actually have a very good one, but you won't like it because it means
> using the models to understand what they fail to describe rather than the
> usual method of using them to represent other things.
Right.  A model predicts something; it turns out to have some error
structure, and that structure suggests ways to improve the model or make
a new one.  Paper published.  Meanwhile another guy makes a different
model of the same phenomena and publishes a paper.  A third person reads
the two papers and has an idea that accounts for problems in both.  So she
makes a new model!

Marcus




recap on Rosen

Phil Henshaw-2
No, that does not work at all.  Patching together a model to suit a symptom
in retrospect does not help you with being ready for unexpected eventfulness
in nature that you previously had no idea you should be looking for.

Phil Henshaw
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
680 Ft. Washington Ave, NY NY 10040   tel: 212-795-4844
e-mail: pfh at synapse9.com   explorations: www.synapse9.com
"in the last 200 years the amount of change that once needed a century of
thought now takes just five weeks"


> -----Original Message-----
> From: friam-bounces at redfish.com [mailto:friam-bounces at redfish.com] On
> Behalf Of Marcus G. Daniels
> Sent: Saturday, April 26, 2008 10:45 AM
> To: The Friday Morning Applied Complexity Coffee Group
> Subject: Re: [FRIAM] recap on Rosen
>
> phil henshaw wrote:
> > Ok, 'find a function' assumes there is one to find, but the problem
> set is
> > running into behavior which has already had major consequences (like
> > starvation for 100million people because of an unexpected world food
> price
> > level shift) and the question is what 'function' would you use to not
> be
> > caught flat footed like that.
> The caloric requirements of a person are autocorrelated, but probably
> for a lot of models a constant will suffice -- a certain amount of body
> weight decrease, and then the probability of death goes up.   As for
> price fluctuations, that's a matter of modeling the natural resources
> that go in to food, the costs and benefits to motivate farmers, the
> commodity markets, and so on.   Certainly we can try to understand how
> each of these work, and then do what-if scenarios when one or more
> components are perturbed (or destroyed).   It's still a matter of
> finding stories (functions) to fit observables.  The availability and
> accuracy of those observables may be poor, and sometimes all that is
> possible to imagine worst and best cases, run the numbers, and see how
> the result changes.
> > Is there some general function to use in
> > cases where you have no function and don't even know what the problem
> > definition will be?
> >
> I think you do know what the problem could look like, but most details
> remain unspecified.   If you can construct an example that has
> catastrophes of the kind you often talk about, and spell out all of the
> details of your work of fiction (that even may happen to resemble
> reality), such that the what-if scenarios can be reproduced in
> simulations, then others can study the sensitivities.   If there is a
> `forcing structure' that will occur in many, many variant forms, then
> you can demonstrate that.
> > I actually have a very good one, but you won't like it because it
> means
> > using the models to understand what they fail to describe rather than
> the
> > usual method of using them to represent other things.
> Right.  Model predicts something, it turns out to have some error
> structure and that structure suggests ways to improve the model or make
> a new one.  Paper published. Meanwhile another guy makes a different
> model on the same phenomena and publishes a paper.   Third person reads
> the two papers and has idea that accounts for problems in both.   So
> she
> makes a new model!
>
> Marcus
>
>






recap on Rosen

Marcus G. Daniels
phil henshaw wrote:
> No, that does not work at all.  Patching together a model to suit a symptom
> in retrospect does not help you with being ready for unexpected eventfulness
> in nature that you previously had no idea you should be looking for.
>  
Never said anything about symptoms.  I did suggest maybe you ought to
plan on measuring something in particular to see if models (whether your
own or those you are interpreting) are consistent with reality in a
statistically meaningful way.  You can posit whatever driving events or
processes you want in silico: a comet striking the earth, people selling
their organs to increase the profit margins of companies, the
importance of prophets in collective decision making, or whatever.



recap on Rosen

Günther Greindl
In reply to this post by glen ep ropella
Dear Glen,

> OK.  But you must realize that this is not really a _refutation_ or
> disproof.  It's just one guy (Rosen) arguing with another guy (Günther).
>  For an actual refutation (proof that Rosen's claim is false), you'd
> have to provide an explicit (effective) construction of a computational
> living system.

It is neither a mathematically rigorous nor an empirically grounded
refutation, I agree, but rather in the sense of Occam's razor/Laplacean
"I do not need this hypothesis".

> And you haven't done that. [grin] Hence, you haven't proven Rosen wrong
> ... yet.  ALifers across the planet are working on this constructive
> proof feverishly, of course.

That proof would then be rigorous, agreed.

Have you perchance read

Wells, A. J. (2006). In Defense of Mechanism. Ecological Psychology, 18, 39-65.

? He takes on Rosen's claims. I have queued the paper for reading, will
probably get there in July (have a lot to do at the moment ;-)), and
would be glad to continue the conversation.

> Or, you could show us specifically where Rosen's claim contradicts the
> recursion theorem.  But to my knowledge nobody has formalized Rosen's
> work to the degree of specificity we'd need to show such a
> contradiction.  I could easily be wrong about that, of course.  So, if
> you'll point to such a rigorous formulation of Rosen's claim and
> precisely how it contradicts the recursion theorem, then we could say
> that one or the other (Rosen's or the recursion theorem) is refuted.

Ack, I also think that the problem is that Rosen's ideas are not
formalized enough to present a contradiction.
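The recursion theorem under discussion guarantees that a program can obtain and operate on its own description. That is not a formalization of Rosen's claim, but a quine gives a minimal concrete witness of the kind of computational self-reference at stake (illustration only):

```python
# Kleene's recursion theorem guarantees programs that can use their own
# description.  A quine -- a program whose output is exactly its own
# source -- is the classic concrete witness of that self-reference.
s = 's = %r\nprint(s %% s, end="")'
print(s % s, end="")
```

Running it prints the two lines above verbatim; feeding the output back in reproduces itself indefinitely.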

Cheers,
Günther


--
Günther Greindl
Department of Philosophy of Science
University of Vienna
guenther.greindl at univie.ac.at
http://www.univie.ac.at/Wissenschaftstheorie/

Blog: http://dao.complexitystudies.org/
Site: http://www.complexitystudies.org



recap on Rosen

Phil Henshaw-2
In reply to this post by Marcus G. Daniels
Marcus,
The 'symptom' I was referring to was being caught flat footed without a
model to warn you about the approach of major environmental change.  You
offered the solution of developing a model that you should have had before
you knew it was needed.  It appears to violate the direction of time??

I guess what I'm talking about is that the 'bubbles in our minds' are
different from the 'bubbles in the world'...  For the physical systems we
fail to understand there is nothing for us to 'see'.    That's always a
problem.   I think that closely watching for the classic patterns of
discrepancy between the world full of otherwise invisible bubbles and our
models is possible.   It may be imperfect but a vast improvement on not
looking at all.  What do you think of that?

Phil

> Behalf Of Marcus G. Daniels
> Sent: Saturday, April 26, 2008 3:08 PM
> To: The Friday Morning Applied Complexity Coffee Group
> Subject: Re: [FRIAM] recap on Rosen
>
> phil henshaw wrote:
> > No, that does not work at all.  Patching together a model to suit a
> symptom
> > in retrospect does not help you with being ready for unexpected
> eventfulness
> > in nature that you previously had no idea that you should be looking
> for.
> >
> Never said anything about symptoms.   I did suggest maybe you ought to
> plan on measuring something in particular to see if models (whether
> your
> own or those you are interpreting) are consistent with reality in a
> statistically meaningful way.  You can posit whatever driving events or
> processes you want in silico.  A comet striking the earth, people
> selling
> their organs to increase the profit margins of the companies, the
> importance of prophets in collective decision making, or whatever..
>






recap on Rosen

Marcus G. Daniels
phil henshaw wrote:
> I guess what I'm talking about is that the 'bubbles in our minds' are
> different from the 'bubbles in the world'...
The `bubbles in our minds' must come from the world we witness and say
something about the world that will be witnessed.
They certainly don't need to be a literal interpretation.   Of course,
in social matters, there's a question of art imitating life vs. life
imitating art..



recap on Rosen

glen ep ropella
In reply to this post by Günther Greindl

Günther Greindl wrote:
> It is neither a mathematically rigorous nor an empirically grounded
> refutation, I agree, but rather in the sense of Occam's razor/Laplacean
> "I do not need this hypothesis".

Excellent!  We pretty much agree.  The only area where I might disagree
is in attempts to develop measures of complexity.  Forget the whole
"life <=> non-life" red herring.  The simple <=> complex spectrum,
however, can be useful.

And, in that sense, Rosen's attempt to formalize simple systems as
"defined from the outside" versus complex systems as "defined from the
inside" is interesting.  Albeit, we may not NEED such a theorem because
we have plenty of measures of complexity which work to greater or lesser
extent in different contexts.  (I'm fond of "logical depth" myself,
though I admit that I haven't used it successfully.)

But I can imagine that certain concepts that are currently used all the
time in complexity circles, and which are always horribly vague despite
the credentials of the users, ... I can imagine that these concepts will
never become clear and concrete until we have such a theorem.

And that's where non-well-founded set theory seems useful.  What is the
ultimate difference between formalisms (models) requiring the foundation
axiom and those that do NOT require it?

It seems to me that formalisms built without the foundation axiom will
lack some of the definiteness we find and expect in our mathematics.
And, surprise, we also see a lack of definiteness in complex systems.
Now, I'm not just trying to combine two unknowns in an attempt to use
one to explain the other. [grin]  My point is that this circularity
Rosen points out is fundamentally related to cycles in non-well-founded
set theory.  And it also seems related to the rampant abuse of concepts
like iteration (e.g. recursion).
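For a concrete, if loose, sense of what dropping the foundation axiom permits: foundation forbids any set from containing itself, whereas Aczel's anti-foundation axiom admits a solution to x = {x}. Ordinary Python object graphs are non-well-founded in exactly this way; this is an analogy, not set theory proper:

```python
# Under the foundation axiom, every membership chain bottoms out and no
# set contains itself.  Aczel's anti-foundation axiom instead admits a
# solution to x = {x}.  A Python container can hold itself, giving a
# rough analogue of that membership cycle.
omega = []
omega.append(omega)            # omega = [omega], analogue of x = {x}

print(omega[0] is omega)       # the membership cycle: True
print(omega[0][0][0] is omega) # descending never bottoms out: True
```

The cycle is exactly the kind of structure that well-founded formalisms rule out and that Rosen's closed causal loops seem to demand.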

Anyway, my thoughts are a jumble of unjustified nonsense at this stage.
 I need a sugar-momma to pay me to sit around and think.  Any takers? [grin]

> Have you perchance read
>
> Wells, A. J. In Defense of Mechanism Ecological Psychology, 2006, 18, 39-65

Nope.  It sure sounds familiar, though.

> ? He takes on Rosen's claims, I have queued the paper for reading, will
> probably get there in July (have a lot to do at the moment ;-)); and
> would be glad to continue the conversation.

I'll add it to my queue, too, though I'm way beyond being able to commit
to it or estimate when I would ever read it.  I've always been a slow
reader ... though when I do read something, I usually remember it.
It'll help if you spontaneously re-start the conversation when you get
to Wells' paper.  Then make fun of me if I haven't read it, yet.
That'll coerce me into reading it.

- --
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
The fear of death follows from the fear of life. A man who lives fully
is prepared to die at any time. -- Mark Twain




recap on Rosen

glen ep ropella
In reply to this post by Phil Henshaw-2

phil henshaw wrote:
> The 'symptom' I was referring to was being caught flat footed without a
> model to warn you about the approach of major environmental change.

It's not clear to me what you and Marcus are arguing about... But I'll
offer the only real insight I've gained over the past few years. [grin]

There is only one way to prepare for potentially catastrophic change:
agility.  We can, post hoc, find examples where an entity (lineage,
organization, organism, etc) is pre-adapted for some change such that it
_seemed_ like that entity somehow predicted the change.  But this isn't
an effective tactic.  Complex systems are unpredictable (by definition)
in the concrete.

The only way to be prepared for some unspecified, truly novel,
abstractly named "change" is to be as agile as possible.  And the best
way to develop agility is to rapidly swap out "vignettes" (scenarios,
use cases, aspects, stories, models) on a regular basis.  The point is
not to make attempts to ensure that your suite of vignettes contains a
semblance of the coming change, however.  The point is to smear the risk
by practicing/training in as many different vignettes as possible.

And the only way to do this is by continually maintaining multiple
models of reality, all the while staying agnostic about the meaning and
usefulness of any of those models.  You don't commit to any one model as
the Truth if you want to remain agile.

Of course, in stable times, exploitation (commitment) is the rule and
exploration is the exception.  But in unstable times, exploration is the
rule and exploitation is the exception.  The trick is to be willing to
sacrifice your exploitative efforts when the landscape starts to
destabilize.  The committed end up dying because their once-true-enough
convictions are no longer true enough.
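The exploration/exploitation trade-off described here has a standard algorithmic form in the multi-armed bandit literature. A minimal epsilon-greedy sketch, where "unstable times" are modeled (my framing, not glen's) simply as a larger exploration rate:

```python
import random

def epsilon_greedy(estimates, epsilon, rng):
    """Pick an arm: exploit the best-looking one with prob 1 - epsilon,
    otherwise explore a uniformly random one.  Instability -> raise epsilon."""
    if rng.random() < epsilon:
        return rng.randrange(len(estimates))                      # explore
    return max(range(len(estimates)), key=estimates.__getitem__)  # exploit

rng = random.Random(0)
estimates = [0.2, 0.9, 0.5]      # current beliefs about three "niches"
stable = [epsilon_greedy(estimates, 0.05, rng) for _ in range(1000)]
unstable = [epsilon_greedy(estimates, 0.8, rng) for _ in range(1000)]
print(stable.count(1) > unstable.count(1))  # stable times commit more: True
```

Raising epsilon when the landscape destabilizes is the algorithmic analogue of sacrificing exploitative commitments in favor of wandering.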

This is why small businesses are the heart and soul of
capitalism/liberalism and why it's more agile than other organizational
strategies.  The high attrition rate of small businesses allows us to
balance exploration and exploitation.  When times are stable we grow big
behemoth exploiters.  When times become more chaotic, those behemoths
come crashing down and us little guys scramble and wander like ants,
with all our various deviant models and expectations of the world,
exploring the dynamic landscape and hoping to stumble into a niche and
become the next behemoth exploiter.  Then we hope to hoard enough
resources to skate through the next period of instability.

The trouble with applying this to "sustainability" is that we define
"sustainable" in terms of human comforts, wants, and needs.  What I
think Rosen would try to justify is the idea that we _cannot_ engineer a
world that sustains _human_ comforts, wants, and needs.  A sustainable
("living") system can only be designed holistically, from the inside.
Any design based on external or sliced up and extracted aspects/purposes
will eventually fail (or grow out of "control").  "Humanity" is an
abstract and pitifully impoverished _slice_ of Gaia (for lack of a
better term).  So any design we put in place to preserve the system from
the perspective of the human slice will eventually fail or mutate into
something not so human friendly.

Note that I'm _merely_ arguing from that perspective.  I don't
personally believe it wholeheartedly.  The only part I do believe is
that agility is the key to handling novelty and multi-modeling is the
key to maintaining agility (as well as _generating_ novelty).

- --
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
A government which robs Peter to pay Paul, can always count on the
support of Paul -- George Bernard Shaw




recap on Rosen

Phil Henshaw-2
That's closer, I think.  There's little point to agility for a little fish after it has been swallowed.  All that helps then is making excuses... briefly.  Agility only helps if you sense the 'disturbance' and avoid the attack entirely.  Derivatives are long-range indicators of out-of-model events approaching.

Phil
Sent from my Verizon Wireless BlackBerry

-----Original Message-----
From: "glen e. p. ropella" <[hidden email]>
Date: Mon, 28 Apr 2008 10:32:51
To: The Friday Morning Applied Complexity Coffee Group <friam at redfish.com>
Subject: Re: [FRIAM] recap on Rosen

[quoted text clipped: verbatim duplicate of glen's post above]



recap on Rosen

Marcus G. Daniels
Glen wrote:
> We can, post hoc, find examples where an entity (lineage,
> organization, organism, etc) is pre-adapted for some change such that it
> _seemed_ like that entity somehow predicted the change.  But this isn't
> an effective tactic.
It's very effective if the population is large enough.   6.6 billion
humans is quite a few.

