complexity and emergence (was: FRIAM and causality)

Posted by Phil Henshaw-2 on
URL: http://friam.383.s1.nabble.com/FRIAM-and-causality-tp525337p525424.html

**Glen,
from your 12/10 post "In the end, most of the "shapes in the fog" _can_
be identified, named, and described.  But, some of them resist. "  

   I'd put it the other way around, more or less: "there are things
beyond our knowledge that are worth exploring," or "it seems worth the
trouble of casting mistrust on one's own operating assumptions in order
to puzzle over 'shapes in the fog,' since some lead to wonderful
discoveries and they all test and hone our method of finding them."  I
would technically disagree with "most ... _can_ be identified, named and
described."  To even count them, I think we need to have already
identified, named, and described them.  The key moment in that process
is when we 'get our hooks on' something, when our grasp of something new
has reached the point where we see that more effort will produce a
positive return.  For me that often coincides with the time when I can
identify the telltale signs of a natural system beginning or ending.

**Nick,
from your 12/11 post on returning to discuss causality "What we DO know
about, because our brains are very good at detecting patterns, is
patterns amongst events in the past.  So to speak of causality in the
particular instance in the PRESENT is ACTUALLY to speak of patterns in
the past."

    Yes, this clearly identifies the flaw in the traditional model of
causality.  When we use the rule 'what happened before will happen
again,' we lose sight of nature's more dominant pattern: 'what happened
before never happens again.'  Put the two together, though, and you get
a new sort of tool that's really powerful: 'what didn't continue before
won't continue again.'  That forces you to look to the future and see
how the present is producing new conditions that will result in
completely new things emerging.

    For example, if you have a system for responding to change that
requires ever more rapid and complex responses, that's something that
'didn't continue before and won't continue again' (i.e., all growth
processes end by upsetting their own growth mechanisms).  It may then be
worth looking into how the future will alter the past, if perhaps our
survival depends on it... etc.  The problem, of course, is that all
human thinking seems culture-bound, making it highly difficult to
question the assumptions that are built in, i.e., our guiding purposes
and our meaning of 'good.'  So, inasmuch as we use 'necessity is the
mother of invention' (holding to our assumptions above all), I think we
should also use 'impossibility is the mother of invention' (testing our
assumptions against physical possibility).
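The growth-limits point above can be illustrated with a standard textbook model (my choice, not Phil's): the discrete logistic map, in which the growth process itself erodes the conditions that sustain it, so the early compounding gains "don't continue again."

```python
# A minimal sketch, assuming logistic growth as a stand-in for "all
# growth processes end by upsetting their own growth mechanisms."
# Per-step gains first accelerate (compounding), then shrink as the
# state approaches the carrying capacity K that the growth consumes.

def logistic_steps(x0, r, K, n):
    """Iterate x += r*x*(1 - x/K) for n steps; return the trajectory."""
    xs = [x0]
    for _ in range(n):
        x = xs[-1]
        xs.append(x + r * x * (1 - x / K))
    return xs

traj = logistic_steps(x0=1.0, r=0.5, K=100.0, n=60)
gains = [b - a for a, b in zip(traj, traj[1:])]
# Early on, each step's gain is larger than the last; near the limit,
# gains collapse -- 'what didn't continue before won't continue again.'
assert gains[5] > gains[0]       # accelerating phase
assert gains[-1] < max(gains)    # decelerating phase near K
```

The interesting reading, in Phil's terms, is not the equilibrium itself but the turning point where the gains stop growing: that is the "telltale sign" that the system's own growth mechanism is being upset.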

**Glen,
taking your four modeling principles,
R1: co-simulation of 3 models:
    M1: a synthetic model,
    M2: a reference/pedigreed model, and
    M3: a data model
R2: inter-aspect influences are discrete
R3: models are designed to be transpersonal
R4: well-defined similarity measure(s)

  I see outward search in R1, but more traditional problem solving
toward a deterministic result in R2, R3 & R4, i.e., first 'searching out
of the box' followed by 'working in the box.'  I think it's good to
explicitly focus on a process of alternating 'search' and 'work' tasks,
continually asking: am I asking the right questions? etc.

   For an example, you might extract a network of nodes and interactions
from a complex system.  That projects (reduces) the natural physical
object onto a certain plane of definable relationships suitable for
analysis.  Then make your three types of models.  Once you do the
analysis, you might try seeing how it fits back into the subject:
what's happening in the physical system to make the identified features
of the network possible, and what's changing in the physical system that
makes those features impossible, etc.
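One reading of R1 plus R4 can be sketched in code. This is my illustration only, not Glen's implementation: the model names, the toy data, and the choice of root-mean-square error as the similarity measure are all hypothetical stand-ins.

```python
# A minimal sketch of co-simulation (R1) scored by a well-defined
# similarity measure (R4).  synthetic_model is a toy M1; the 'data'
# list plays the role of a data model M3.  All specifics are assumed
# for illustration, not taken from the original post.

import math

def synthetic_model(n, rate=0.1):
    """M1: an exponential-growth toy -- one hypothesis about the system."""
    return [math.exp(rate * t) for t in range(n)]

def similarity(a, b):
    """R4: root-mean-square error as one explicit similarity measure."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

# M3: an 'observed' series (fabricated here purely for illustration).
data = [1.0, 1.12, 1.23, 1.34, 1.50, 1.66]

score = similarity(synthetic_model(len(data)), data)
# Lower score = M1 tracks M3 more closely; ranking candidate models by
# this score is the comparison step that co-simulation enables.
```

The point of making the measure explicit (R4) is that the 'search' and 'work' phases stay honest: a model can only be said to fit the data relative to a stated measure, not by impression.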

   My approaches to this 'multi-modeling methodology' tend to be more
focused on the search methodology.  There's my 'bump on a curve for
dummies' and related physics principles for reading continuity in
organizational change.  In teams doing sustainable design for
development projects (where there are billions of different independent
relationships to include...), I use R1: a focus on the methods of search
and the documentation of the paths people take as a discovery/design
cycle (many stakeholders, many models, many alternates, plus ranging
exploratory study); R2: a focus on 'bright green spots' where the team
decides to go deeper; and R3: an inclusive (if imperfect) accounting of
Total Balance... an inventory of how the project changes the earth.

**Marcus,
From your 12/12 post, you say: "Yes, I agree that it's better to have
many models of something than just one, as that will tend to explore at
least some `interstitial space'.  No, I don't agree that just because
there are multiple models that those models won't be prone to
correlation"

   One of the problems is that we don't know why our thinking is biased,
because, well... our thinking is biased.  It's not a small effect!
Consider that the world sustainability movement is putting all our eggs
in the basket of continued economic growth with radically improving
efficiency, efficiency that gets ever easier to achieve and relieves all
the environmental crises.  That's the consensus policy even though it
describes a physical phenomenon never before seen, and the data clearly
show that throughout the economies, efficiency improvement is getting
harder and harder (i.e., more effort and less progress) and is just
plain leveling off.

    You'd think... a bunch of smart people could discuss that.  Our
thinking is biased, though, and questioning what is 'good' seems to be
just out of the question!  For myself, I know what to do when I'm
trapped: I smash everything I've made to bits, knowing it's only in my
mind anyway, mix the fragments that remain with other leftovers, and
shake them all up to see where they might lead.  That's not a big seller
in the open market, though...

**Robert,
   In your 12/12 post you say: "Correct me if I'm wrong Nick, but isn't
this all simply a case of hard scientists (physicists, chemists etc.)
understanding causality and attributing it appropriately and soft
scientists (biologists, ethologists etc.) not?"

I think the advantage the 'hard scientists' have had is that their
original focus was on the things of nature that seemed to never change.
The 'soft sciences' were mainly interested in the things that are always
changing.  I approach that as a 'hard science' problem because of the
shift in interest of 'hard science' toward studying change.  Rules don't
work for that, because the rules are what's changing...  The shift in
method, I think, will turn on switching from studying global
generalities to studying real individual things.  Most people in the
'hard sciences' don't realize this shift might result in an almost
entirely new method.  'The equation' is dead not because it's useless,
but because we're turning our attention away from abstractions as the
end point of science, and toward using abstractions to help us learn
about the complex, real, and ever-changing things of our world.


All the best,


Phil Henshaw
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
680 Ft. Washington Ave
NY NY 10040                      
tel: 212-795-4844                
e-mail: sy at synapse9.com          
explorations: www.synapse9.com