FRIAM and causality

FRIAM and causality

Phil Henshaw-2
Glen & Marcus,
Well, hopefully returning to the main thread.   The question seems to
concern an observation that information can be 'misused', letting people
capitalize on the interesting ways in which 'bad models' don't fit, to
display a 'reality' beyond the information which is both verifiably
present and verifiably explorable.   To me that seems to have a bearing
on the sort of opposite principle of Niels Bohr.  I believe Bohr's idea
was that, because science works only with information, a fundamental
assumption of science must be that nothing exists which cannot be
represented with information, ...and so only immature thinkers could
possibly doubt that, at the most fundamental level, the structure of the
universe is that "God rolls dice", or so I think it goes.

Do you see that connection or any bits and pieces of it?  Or are these
durable shapes in the fog between the models something different?

>
> Phil Henshaw on 12/06/2007 10:53 AM:
> > The hard part seems to be to take the first dark step to accepting
> > there might be a shape of another form that the measures
> are missing
> > (like the whole tree or person).  It means looking for how to best
> > extend and complete your image based on the limited cast of the
> > measures at hand. Interpolation gone wild?? Free form projection
> > perhaps??  Sort of... You just gotta do something to make
> sense of the
> > larger continuities that develop in natural complex
> systems.  What I
> > think we can see clearly is that our measures and models are highly
> > incomplete.
>
> I think we agree, which normally means there's nothing to
> talk about! [grin]  But, I thought I'd throw out my term for
> what you're describing:  "triangulation".
>
> It's not really triangulation, of course.  But it's certainly
> more like triangulation than, say, population sampling.  
> Perhaps we could call it "tuple-angulation"???  [grin]
>
> Here's a paper in which "we" (i.e. my outrageous rhetoric is
> reined in and made coherent by the authors of the paper ;-)
> try to describe it:
>
   http://www.biomedcentral.com/1752-0509/1/14/abstract

See Figure 1.  This particular example is just one sub-type of the
general method we're talking about here, though.

- --
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
If this were a dictatorship, it would be a heck of a lot easier, just so
long as I'm the dictator. -- George W. Bush


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org






complexity and emergence (was: FRIAM and causality)

glen ep ropella

Phil Henshaw on 12/09/2007 01:13 PM:

> Well, hopefully returning to the main thread.   The question seems to
> concern an observation that information can be 'misused', letting people
> capitalize on the interesting ways in which 'bad models' don't fit, to
> display a 'reality' beyond the information which is both verifiably
> present and verifiably explorable.   To me that seems to have a bearing
> on the sort of opposite principle of Niels Bohr.  I believe Bohr's idea
> was that because science works only with information that a fundamental
> assumption of science must be that nothing exists which can not be
> represented with information, ...and so, only immature thinkers could
> possibly doubt that at the most fundamental level the structure of the
> universe is that "God rolls dice", I think it goes.
>
> Do you see that connection or any bits and pieces of it?  Or are these
> durable shapes in the fog between the models something different?

I definitely see a connection.  The "interstitial spaces" or
"interactions" that are the primary subject of complexity studies fall
(to my mind) squarely in the category of "implicit" or "not clearly
identified, named, or described".

To me, much of the controversy around both "complexity" and "emergence"
lies in this very sense of the "unnameable".  It's not so much that the
words are meaningless, abused, or reflect subjective phenomena, as it is
that these are words intended to refer to un-identified, un-named, or
un-described things.  Once a phenomenon is identified, named, and
described explicitly, it ceases to be "emergent" or "complex" in some
(non-technical) uses of those terms.

I don't particularly relate it to Bohr's principle (as you've described
it), though.  I'm a fan of _naive_ approaches to understanding and
manipulating things because a naive perspective can help one escape
infinite regress ("rat holes") and paradox set up by historical trends.
 So, when convenient, it's a good thing to just assume reality is as it's
portrayed in our (always false) models.  But, like all perspectives,
it's useful to be able to don and doff them in order to achieve some end.

In the end, most of the "shapes in the fog" _can_ be identified, named,
and described.  But, some of them resist.  It's tough to tell whether
such "shapes in the fog" are real or just an artifact of the models
through which we look.  In the end, given the tools we have available,
we can't state, definitively, that some thing we cannot identify, name,
or describe clearly is a thing at all.  We are left with falsification
as the only reliable method.  We can never say:  "Bob's description is
true."  We can only say: "Bob's description has not yet been shown
false."  Likewise, we can't say "that shape in the fog _is_ merely bias
resulting from millennia of bad language".  We can only say "models 1-n
fail to capture that shape in the fog".
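That falsificationist stance, never "true", only "not yet shown false", can be sketched roughly in code (the models and observations below are invented toy examples, not anything from this thread):

```python
# Sketch: falsification as the only reliable verdict.  A "model" here is just
# a predicate over observations; we never mark one "true", only "falsified"
# or "not yet shown false".

def audit(models, observations):
    """Return the current status of each model against the observations."""
    status = {}
    for name, predict in models.items():
        # A single failing observation falsifies the model outright.
        failed = any(not predict(obs) for obs in observations)
        status[name] = "falsified" if failed else "not yet shown false"
    return status

# Hypothetical toy models of a numeric signal.
models = {
    "always positive": lambda x: x > 0,
    "under ten":       lambda x: x < 10,
}
print(audit(models, [1, 4, 12]))
# "under ten" is falsified by the observation 12; "always positive"
# merely survives, so far.
```

Note the asymmetry: a surviving model has earned nothing permanent; the next observation can still falsify it.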

- --
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
There is all the difference in the world between treating people equally
and attempting to make them equal. -- F.A. Hayek




complexity and emergence (was: FRIAM and causality)

Phil Henshaw-2
Glen,
What I actually find to be the rewarding part of it, though breaking
the theoretical boundaries to allow the indefinable things outside our
models into the discussion is quite necessary, is then developing
confidence in exploring them.   I'm at a conference on sustainable
design methods this week and the standard problem-solving models are
still very much in place.  They still view the task as solving a
problem inside a box, trying to get the 'correct' stuff into the box
first.   The failure of that, of course, comes when the box, floating
along in its environment, bumps into things actually built in a rather
different way.

Human designs have long tended to be abstractions 'in a box', like
equations and machines, which have no capability themselves of
exploring or adapting to their environments.  _If_ people are paying
attention, models evolve by people making new ones.  When you look to see
why that is, you find it is achieved by building the box and essentially
defining away whatever is outside the structures of the models.   That
introduces bias.   Learning to do the opposite, exploring the complex
world around our models and asking other questions, needs the aid of
methods, though.  I sort of approach it as a 'mining' exercise, looking
for certain 'veins of silver' in the mountain of information flowing
by.   Having a way to identify the time and place where complex systems
are developing their organization saves a lot more than time.  It seems
to lead to better questions.   I also, for measuring total environmental
impacts, use the tried and true way to look outside any box... "follow
the money".  Some people are even responding to how very effective it is
as a measure!

gtg

Do you have theory or method for visualizing or exploring the stuff
outside the box?


Phil Henshaw
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
680 Ft. Washington Ave
NY NY 10040                      
tel: 212-795-4844                
e-mail: pfh at synapse9.com          
explorations: www.synapse9.com  


> -----Original Message-----
> From: friam-bounces at redfish.com
> [mailto:friam-bounces at redfish.com] On Behalf Of Glen E. P. Ropella
> Sent: Monday, December 10, 2007 4:38 PM
> To: The Friday Morning Applied Complexity Coffee Group
> Subject: [FRIAM] complexity and emergence (was: FRIAM and causality)
>
> [...]





complexity and emergence (was: FRIAM and causality)

Phil Henshaw-2
In reply to this post by glen ep ropella
**Glen,
from your 12/10 post "In the end, most of the "shapes in the fog" _can_
be identified, named, and described.  But, some of them resist. "  

   I'd say it backward, sort of: "there are things beyond our
knowledge that are worth exploring", or "it seems worth the trouble
to cast mistrust on one's own operating assumptions in order to puzzle
over 'shapes in the fog', since some lead to wonderful discoveries and
they all test and hone our method of finding them."  I would technically
disagree with "most ... _can_ be identified, named and described".  To
even count them, I think we need to have already IND'd them.  The key
moment in that process is when we 'get our hooks on' something, when our
grasp of something new has reached the point where we see that more effort
will produce a positive return.  For me that often coincides with the
time when I can identify the telltale signs of a natural system
beginning or ending.

**Nick,
from your 12/11 post on returning to discuss causality "What we DO know
about, because our brains are very good at detecting patterns, is
patterns amongst events in the past.  So to speak of causality in the
particular instance in the PRESENT is ACTUALLY to speak of patterns in
the past."

    Yes this clearly identifies the flaw in the traditional model of
causality.   When we use the rule 'what happened before will happen
again' we lose sight of nature's more dominant pattern of 'what
happened before never happens again'.  When you put the two together,
though, you get a new sort of tool that's really powerful, 'what didn't
continue before won't continue again'.   What that forces you to do is
look to the future and see how the present is producing new conditions
that will result in completely new things emerging.  

    For example, if you have a system for responding to change that
requires ever more rapid and complex response, that's something that
'didn't continue before won't continue again'(i.e. all growth processes
end by upsetting their own growth mechanisms).  It may then be worth
looking into how the future will alter the past, if perhaps our survival
depends on it...etc.  The problem, of course, is all human thinking
seems culture bound, making it highly difficult to question the
assumptions that are built-in, i.e. our guiding purposes and meaning of
'good'.  So, inasmuch as we use 'necessity is the mother of
invention'(holding to our assumptions above all), I think we should also
use 'impossibility is the mother of invention'(testing our assumptions
against physical possibility).

**Glen,
taking your four modeling principles,
R1: co-simulation of 3 models:
    M1: a synthetic model,
    M2: a reference/pedigreed model, and
    M3: a data model
R2: inter-aspect influences are discrete
R3: models are designed to be transpersonal
R4: well-defined similarity measure(s)

  I see outward search in R1, but more traditional problem solving
toward a deterministic result in R2 R3 & R4, i.e. first 'searching out
of the box' followed by 'working in the box'.  I think it's good to
explicitly focus on a process of alternating 'search' and 'work' tasks,
continually asking, am I asking the right questions, etc.

   For an example, you might extract a network of nodes and interactions
from a complex system. That projects (reduces) the natural physical
object onto a certain plane of definable relationships suitable for
analysis.  Then make your 3 types of models.  Once you do the analysis
you might try seeing how it fits back in the subject, what's happening
in the physical system to make the identified features of the network
possible, and what's changing in the physical system that makes the
features impossible, etc.  
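The projection step above can be sketched roughly (the interaction log and node names below are invented for illustration, not from any system discussed in the thread):

```python
# Minimal sketch: project a "complex system" -- here just a raw log of
# observed interactions -- onto a plane of definable relationships,
# i.e. a node/edge network suitable for analysis.

from collections import defaultdict

# Hypothetical observation log of (source, target) interactions.
interactions = [
    ("A", "B"), ("B", "C"), ("A", "C"), ("C", "A"),
]

# The projection: keep only who-touches-whom, discarding everything else
# about the physical system (timing, magnitude, context).
neighbors = defaultdict(set)
for src, dst in interactions:
    neighbors[src].add(dst)
    neighbors[dst].add(src)

# One analyzable feature of the reduced object: connectivity per node.
degree = {node: len(adj) for node, adj in neighbors.items()}
print(degree)   # {'A': 2, 'B': 2, 'C': 2}
```

The point of the sketch is the loss: everything the box excludes, such as whatever in the physical system makes those interactions possible at all, has to be checked separately by fitting the analysis back onto the subject.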

   My approaches to this 'multi-modeling methodology' tend to be more
focused on the search methodology.  There's my 'bump on a curve for
dummies' and related physics principles for reading continuity for
organizational change.  In teams doing sustainable design for
development projects (where there are billions of different independent
relationships to include..) I use R1: focus on the methods of search and
documentation of the paths people take as a discovery/design
cycle..(many stakeholders, many models, many alternates + ranging
exploratory study), R2: a focus on 'bright green spots' where the team
decides to go deeper, and R3: have an inclusive (if imperfect)
accounting of Total Balance... an inventory of how the project changes
the earth.

**Marcus,
From your 12/12 post, you say: "Yes, I agree that it's better to have
many models of something than just one, as that will tend to explore at
least some `interstitial space'.  No, I don't agree that just because
there are multiple models that those models won't be prone to
correlation"

   One of the problems is that we don't know why our thinking is biased,
because, well.., of our thinking being biased.  It's not a small effect!
Consider that the world sustainability movement is putting all our eggs in
the basket of continued economic growth with radically improving
efficiency that gets ever easier to do and relieves all the
environmental crises.   That's the consensus policy even though it
describes a physical phenomenon never before seen, and, the data clearly
shows that throughout the economies efficiency improvement is getting
harder and harder (i.e. more effort & less progress) and is just plain
leveling off.  

    You'd think... a bunch of smart people could discuss that.   Our
thinking is biased, though, and questioning what is 'good' is just out
of the question it seems!   For myself I know what to do when I'm
trapped, I just smash everything I've made to bits, knowing it's only in
my mind anyway,  mix the fragments that remain with other left-overs and
shake them all up to see where they might lead.  That's not a big seller
in the open market though...

**Robert,
   In your 12/12 post you say: "Correct me if I'm wrong Nick, but isn't
this all simply a case of hard scientists (physicists, chemists etc.)
understanding causality and attributing it appropriately and soft
scientists (biologists, ethologists etc.) not?"

I think the advantage 'hard scientists' have had is that their
original focus was on the things of nature that seemed to never change.
The 'soft sciences' were mainly interested in the things that are always
changing.   I approach that as a 'hard science' problem because of the
shift in interest of 'hard science' to studying change.  Rules don't
work for that because that's what's changing...  The shift in method, I
think, will turn on switching from studying global generalities to real
individual things.  Most people in the 'hard sciences' don't realize
this shift might result in producing an almost entirely new method.
'The equation' is dead not because it's useless, but because we're
turning our attention away from abstractions as the end point of
science, and toward using abstractions to help us learn about the
complex, real and ever changing things of our world.


All the best,


Phil Henshaw
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
680 Ft. Washington Ave
NY NY 10040                      
tel: 212-795-4844                
e-mail: sy at synapse9.com          
explorations: www.synapse9.com    






complexity and emergence (was: FRIAM and causality)

glen ep ropella

Phil Henshaw on 12/16/2007 10:24 AM:

> from your 12/10 post "In the end, most of the "shapes in the fog" _can_
> be identified, named, and described.  But, some of them resist. "  
>
>    I'd say it backward, sort of, that "there are things beyond our
> knowledge that are worth exploring", or "That it seems worth the trouble
> to cast mistrust on one's own operating assumptions in order to puzzle
> over 'shapes in the fog' since some lead to wonderful discoveries and
> they all test and hone our method of finding them."  I would technically
> disagree with "most ... _can_ be identified, named and described".  To
> even count them I think we need to have already IND'd them.  The key
> moment in that process is when we 'get our hooks on' something, when our
> grasp of something new has reached a point where we see that more effort
> will produce a positive return.  For me that often coincides with the
> time when I can identify the telltale signs of a natural system
> beginning or ending.

Well, I'm not going to vehemently disagree with you about the IND before
counting; but, I am going to point out your reliance on sequentiality
because it relates below as well.

First, though, I want to defend my statement that the "shapes in the
fog" (SOF) _can_ be IND&C.  The primary thread, here, has been that all
models are always false.  And that means that it is not only possible
but common and reasonable to identify, name, describe, and count things
of which we're partly (or mostly) ignorant.

So, you may be right if you said that we may not be able to IND&C all
the SOF perfectly accurately.  But, we _can_ IND&C all the SOF
inaccurately and imperfectly.

But, more importantly, the IND&C is not a sequential process.  We don't
IND a shape and _then_ count it.  We're always doing some subset of the
4 on some subset of the SOF concurrently and at varying degrees of accuracy.

I know I'm preaching to the choir, here.  But, I wanted to make that
form of methodological non-linearity clear.

> taking your four modeling principles,
> R1: co-simulation of 3 models:
>     M1: a synthetic model,
>     M2: a reference/pedigreed model, and
>     M3: a data model
> R2: inter-aspect influences are discrete
> R3: models are designed to be transpersonal
> R4: well-defined similarity measure(s)
>

Well, to start out, these are not sequential but concurrent principles.

>   I see outward search in R1, but more traditional problem solving
> toward a deterministic result in R2 R3 & R4, i.e. first 'searching out
> of the box' followed by 'working in the box'.  I think it's good to
> explicitly focus on a process of alternating 'search' and 'work' tasks,
> continually asking, am I asking the right questions, etc.
>
>    For an example, you might extract a network of nodes and interactions
> from a complex system. That projects (reduces) the natural physical
> object onto a certain plane of definable relationships suitable for
> analysis.  Then make your 3 types of models.  Once you do the analysis
> you might try seeing how it fits back in the subject, what's happening
> in the physical system to make the identified features of the network
> possible, and what's changing in the physical system that makes the
> features impossible, etc.  

I object to the sequentiality.  These principles don't really work if
you apply 1 _then_ apply the other.  They should all apply, in varying
degrees under varying circumstances, concurrently.  So, it is weak to
adhere to R2-R4 and _then_ coerce your methods so that they adhere to
R1.  _Weak_ but not misguided.

But your response above makes me think I've done a bad job outlining
these principles.  R1-R4 are not constructive.  You seem to be using
them as if they were constructive methods rather than selective
principles.  It doesn't really matter where the models come from, how
they were constructed, or whether they even match R1-R4 in the first
place.  But, a good multi-modeling selection methodology will use all of
R1-R4 (if not more, as Marcus haphazardly points out).

It helps to use the word "apply" rather than the word "use"... as if you
were applying a predicate to pre-existing models.
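That distinction, a selective predicate over pre-existing model sets rather than a constructive recipe, can be sketched roughly as follows (the field names are my invention for illustration; they are not from the post or the paper):

```python
# Sketch: R1-R4 as a *selective* predicate.  It doesn't matter where the
# models came from or how they were built; the predicate only asks whether
# a candidate multi-model set satisfies the four principles.

def satisfies_r1_r4(model_set):
    """True if a candidate set of models passes all four selective principles."""
    kinds = {m["kind"] for m in model_set}
    r1 = {"synthetic", "reference", "data"} <= kinds            # R1: all 3 model types present
    r2 = all(m.get("discrete_influences") for m in model_set)   # R2: discrete inter-aspect influences
    r3 = all(m.get("transpersonal") for m in model_set)         # R3: transpersonal by design
    r4 = all("similarity_measure" in m for m in model_set)      # R4: well-defined similarity measure
    return r1 and r2 and r3 and r4

# A hypothetical candidate set (however it was constructed).
candidates = [
    {"kind": "synthetic", "discrete_influences": True,
     "transpersonal": True, "similarity_measure": "rmse"},
    {"kind": "reference", "discrete_influences": True,
     "transpersonal": True, "similarity_measure": "rmse"},
    {"kind": "data", "discrete_influences": True,
     "transpersonal": True, "similarity_measure": "rmse"},
]
print(satisfies_r1_r4(candidates))   # True: the set passes all four checks
```

Dropping, say, the data model from the set makes the predicate fail on R1, which is the sense in which the principles select among sets rather than construct any single model.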

- --
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
There is nothing so bad that politics cannot make it worse. -- Thomas Sowell


