
recap on Rosen

glen ep ropella
sy at synapse9.com wrote:
> That's closer I think.  There's little point to agility for a little
> fish after it has been swallowed.  All that helps then is making
> excuses... briefly.  Agility only helps if you sense the
> 'disturbance' and avoid the attack entirely.  Derivatives are long
> range indicators of out of model events approaching.

No, there's much point to agility even if the little fish is
_eventually_ swallowed.  Agility allows the little fish to avoid being
swallowed for a longer time than her clumsy siblings.  More time means
more chances to mate, which is the whole point of the exercise.

As for sensing the disturbance, agility helps no matter _when_ you sense
the disturbance.  (You _always_ sense the disturbance, even if it's only
after the teeth sink into your flesh.)  The point of being agile is to
allow you a larger window and more options between the time of sensing
the disturbance and your subsequent action.

The larger point is that the best methods for handling potentially
catastrophic change derive from a tight feedback loop with one's
environment.  Abstraction is the enemy.  Embeddedness and high
interactivity are key.  Agility is an ability that comes from being
deeply embedded in the context.

It's true that abstraction allows one to estimate long-range patterns
and long-term trends.  But commitment to those abstract patterns and
trends does NOT help one survive potentially catastrophic change.  It
can only help one avoid such change.  And when the change is waaaay too
big to avoid?  Well, then agility is the key.

--
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
There is nothing as permanent as a temporary government program. --
Milton Friedman




recap on Rosen

glen ep ropella
In reply to this post by Marcus G. Daniels
Marcus G. Daniels wrote:
> Glen wrote:
>> We can, post hoc, find examples where an entity (lineage,
>> organization, organism, etc) is pre-adapted for some change such that it
>> _seemed_ like that entity somehow predicted the change.  But this isn't
>> an effective tactic.
>
> It's very effective if the population is large enough.   6.6 billion
> humans is quite a few.

No, a suite of trials is an effective strategy for a multifarious
composite (e.g. an army or a species); but pre-adaptation is an
ineffective tactic for a small unit -- limited resources -- with an
explicit objective.
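
To put rough numbers on that distinction (a toy calculation with made-up
odds, not data): assume each individual independently has some tiny chance
p of happening to be pre-adapted to an unforeseen change.

    p = 1e-6  # assumed per-individual chance of being pre-adapted

    def odds_any_preadapted(population):
        # chance that at least one member happens to be pre-adapted
        return 1 - (1 - p) ** population

    print(odds_any_preadapted(6600000000))  # ~1.0: near-certain for a species
    print(odds_any_preadapted(10))          # ~1e-5: hopeless for a small unit

The suite of trials wins at the composite scale and loses at the unit
scale; that's the difference between a strategy and a tactic.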

--
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
Everything that is really great and inspiring is created by the
individual who can labor in freedom. -- Albert Einstein




recap on Rosen

Marcus G. Daniels
glen e. p. ropella wrote:

> Marcus G. Daniels wrote:
>  
>> Glen wrote:
>>    
>>> We can, post hoc, find examples where an entity (lineage,
>>> organization, organism, etc) is pre-adapted for some change such that it
>>> _seemed_ like that entity somehow predicted the change.  But this isn't
>>> an effective tactic.
>>>      
>> It's very effective if the population is large enough.   6.6 billion
>> humans is quite a few.
>>    
>
> No, a suite of trials is an effective strategy for a multifarious
> composite (e.g. an army or a species); but pre-adaptation is an
> ineffective tactic for a small unit -- limited resources -- with an
> explicit objective.
>  
I thought we were sort of talking about large units, e.g. sustainability
efforts as they relate to the survival of governments or even the human
species?   It seems to me a government or large company can be agile
through the use of non-agile specialists, and be more powerful than small
but agile groups -- economies of scale.    The exploitation phase also
comes with the benefit of the diversification of those exploitable
specialists.




recap on Rosen

glen ep ropella
Marcus G. Daniels wrote:
> I thought we were sort of talking about large units, e.g. sustainability
> efforts as they relate to the survival of governments or even the human
> species?

Yes, we were.  But, you cut out the context of my original comment,
which was that:  It's true _some_ entities can seem, post hoc, to have
been pre-adapted to some context.  I.e. Some entities may seem to have
successfully used _commitment_ to a single model (or small set of
models).  But commitment and pre-adaptation are not an effective _tactic_.

Then you said that "it" can be effective, wherein you conflated tactics
and strategy.  Pre-adaptation and commitment to a single model (or small
set of models) is NOT an effective tactic for achieving an explicit
objective.  On the contrary, however, agnostic multi-modeling can be a
strategy for achieving vague, abstract, or implicit objectives.

"Sustainability" is, as yet, vague and abstract.  And if we buy Rosen's
argument, it must be implicit.

> It seems to me a government or large company can be agile through the
> use of non-agile specialists, and be more powerful than small but agile
> groups -- economies of scale.

Only _if_ the overwhelming majority of those specialists are sacrificed
(or "re-used").  And only _if_ there are plenty of those specialists.
Which means pre-adaptation is not an effective tactic for an
overwhelming majority of those specialists.

You're talking about a strategy, not a tactic.  And, at that composite
(army, population, collective) level, you're also NOT talking about a
strategy of pre-adaptation/commitment.  You're talking about a strategy
of agnosticism and multi-modeling.

At the individual unit level (even if the unit is composite), the most
relevant tactic for surviving potentially catastrophic change is
maximized agility, not commitment to a given model.

If you want to draw a _metaphor_ between "collective agility" and
agnostic multi-modeling, then go ahead.  But be clear that it's a
metaphor.  Agility comes from embeddedness and a tight feedback loop
with the environment.  Large collectives cannot both be a very abstract
unit/entity _and_ be tightly coupled to the environment.  Hence, saying
something like "Intel is an agile multi-national corporation" is either
a self-contradiction or an equivocation on the word "agile".

--
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
The most merciful thing in the world, I think, is the inability of the
human mind to correlate all its contents. -- H. P. Lovecraft




recap on Rosen

Phil Henshaw-2
In reply to this post by Marcus G. Daniels

> -----Original Message-----
> From: friam-bounces at redfish.com [mailto:friam-bounces at redfish.com] On
> Behalf Of Marcus G. Daniels
> Sent: Monday, April 28, 2008 12:37 PM
> To: The Friday Morning Applied Complexity Coffee Group
> Subject: Re: [FRIAM] recap on Rosen
>
> phil henshaw wrote:
> > I guess what I'm talking about is that the 'bubbles in our minds' are
> > different from the 'bubbles in the world'...
> The `bubbles in our minds' must come from the world we witness and say
> something about the world that will be witnessed.
> They certainly don't need to be a literal interpretation.   Of course,
> in social matters, there's a question of art imitating life vs. life
> imitating art..

[ph] A couple of the big differences are that the 'bubbles in our minds' are
stitched together by personal and cultural values, and they represent lots
of continually changing things of the world with fixed images or
definitions.   The 'bubbles in the world' are organized around local
physical processes, with lots of separate learning-system parts, which
learn by exploring pathways THEY find.  The natural assumption then would be
that their design is always changing in ways we can't see at all without
some hints of where to look.  It's one of the deep problems of knowledge.
Acknowledging it is mainly just a cure for denying it, but it also allows
one to get a little warning about the systems of the world that are
behaving independent of our models for them.

Does that help?








recap on Rosen

Marcus G. Daniels
In reply to this post by glen ep ropella
glen e. p. ropella wrote:
> Large collectives cannot both be a very abstract
> unit/entity _and_ be tightly coupled to the environment.  Hence, saying
> something like "Intel is an agile multi-national corporation" is either
> a self-contradiction or an equivocation on the word "agile".
>  
Given the fast and impressive beating that AMD just got at Intel's hand,
that example strikes me as weird!



recap on Rosen

Phil Henshaw-2
In reply to this post by glen ep ropella

Glen,

> sy at synapse9.com wrote:
> > That's closer I think.  There's little point to agility for a little
> > fish after it has been swallowed.  All that helps then is making
> > excuses... briefly.  Agility only helps if you sense the
> > 'disturbance' and avoid the attack entirely.  Derivatives are long
> > range indicators of out of model events approaching.
>
> No, there's much point to agility even if the little fish is
> _eventually_ swallowed.  Agility allows the little fish to avoid being
> swallowed for a longer time than her clumsy siblings.  More time means
> more chances to mate, which is the whole point of the exercise.
>
> As for sensing the disturbance, agility helps no matter _when_ you sense
> the disturbance.  (You _always_ sense the disturbance, even if it's only
> after the teeth sink into your flesh.)  The point of being agile is to
> allow you a larger window and more options between the time of sensing
> the disturbance and your subsequent action.

[ph] why make it so complicated?  You don't need to explain why it's good to
survive. It's good to survive.  Agility only makes a difference *before*
being swallowed, when you have an ability to respond to information about
*approaching danger*.   No info, no avoidance of danger.

> The larger point is that the best methods for handling potentially
> catastrophic change derive from a tight feedback loop with one's
> environment.  Abstraction is the enemy.  Embeddedness and high
> interactivity are key.  Agility is an ability that comes from being
> deeply embedded in the context.
>
[ph] Yes, the apparent reason people are constantly walking blindly into
conflict is a lack of information on its approach.  The clear evidence,
like the whole environmental movement spending 30 years promoting energy
solutions that would trigger a world food crisis, is that we are missing the
signals of approaching danger.  We read 'disturbances in the force' (i.e.
alien derivatives like diminishing returns) very skillfully in one
circumstance and miss them entirely in others.  We constantly walk smack
into trouble because we do something that selectively blocks that kind of
information.   The evidence seems to closely fit the 'functional fixation'
of using fixed representations for changing things in our models.

> It's true that abstraction allows one to estimate long-range patterns
> and long-term trends.  But commitment to those abstract patterns and
> trends does NOT help one survive potentially catastrophic change.  It
> can only help one avoid such change.  And when the change is waaaay too
> big to avoid?  Well, then agility is the key.

[ph] again, agility only helps avoid the catastrophe *before* the
catastrophe.  Here you're saying it mainly helps after, and that seems to be
incorrect.

Phil







recap on Rosen

Kenneth Lloyd
In reply to this post by Phil Henshaw-2
Phil,

Thank you for acknowledging the Popper / Penrose "Three Worlds" context.

Models exist in what Penrose refers to as the "Platonic world of
mathematical forms". Better models reflect both the spatio-temporal dynamics
of the context in which they exist and the mereology of their components -
meaning that examining localized model components often reveals little of
the nature of the system of the models.

While I am unqualified to address art imitating life, I can address models
of life imitating life. This is where the science of Compositional Pattern
Producing Networks holds an advantage over more traditional methods.  In
effect, we evolve a Platonic world which discovers the mathematical forms,
independent of our subjective interpretation.
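
For a concrete flavor (a minimal hand-wired sketch of mine, not production
code): a CPPN composes simple pattern functions over coordinates, where a
real CPPN would evolve both the topology and the choice of functions.

    import math

    def cppn(x, y):
        # one hand-wired composition; evolution would normally discover these nodes
        r = math.sqrt(x * x + y * y)        # radial-symmetry node
        a = math.sin(4.0 * x)               # periodic node
        b = math.exp(-r * r)                # gaussian node
        return math.tanh(a * b + 0.5 * y)   # composed output

    for j in range(5):                      # sample the pattern on a 5x5 grid
        row = "".join("#" if cppn(i / 2.0 - 1, j / 2.0 - 1) > 0 else "."
                      for i in range(5))
        print(row)

The composed functions supply the regularities -- symmetry, repetition --
that make the discovered forms look "Platonic".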

Ken





recap on Rosen

glen ep ropella
In reply to this post by Phil Henshaw-2
phil henshaw wrote:
> [ph] why make it so complicated?  You don't need to explain why it's good to
> survive. It's good to survive.  The agility only makes a difference in that
> *before* being swallowed, when you have an ability to respond to the
> information of *approaching danger*.   No info, no avoidance of danger.  

You're oversimplifying the problem and, I suspect, the solution.  The
problem of "unanticipated potentially catastrophic change" is handled by
the abilities derived from embeddedness.  One of those abilities is
agility and it has a sibling sensory ability that allows one to "feel"
fine-grained disturbances.  (I don't have a word for that sensory
ability.  It's like using peripheral vision when riding a motorcycle or
watching your opponent's eyes when fighting.  I'll use "sensitivity".)

You're right that agility helps one avoid an avoidable change ... e.g.
like a big fish snapping at a small fish.  And you're right that such
avoidable changes are only avoidable if one can sense the change coming.

But, what if the change is totally unavoidable?  I.e. it's going to get
you regardless of whether or not you sense it?  In such cases, the
canalizing ability is agility.  Its sibling sensory ability _helps_, for
sure.  But when the unavoidable change is in full swing and you cannot
predict the final context that will obtain, then agility is the key.

> The clear evidence, [...], is that we are missing the
> signals of approaching danger.  We read 'disturbances in the force' (i.e.
> alien derivatives like diminishing returns) very skillfully in one
> circumstance and miss them entirely in others.  We constantly walk smack
> into trouble because we do something that selectively blocks that kind of
> information.

I disagree.  We don't continually walk smack into trouble _because_ we
selectively block a kind of information.  Our trouble is two-fold: 1) we
are _abstracted_ from the environment and 2) we don't adopt a manifold,
agnostic, multi-modeling strategy.

If we were not abstracted, then we'd be something like hunter-gatherers,
destroying our local environments willy-nilly, but never deeply enough
in a single exploitation such that the environment (including other
humans) can't compensate for our exploitation.

But we _are_ abstracted.

If we were to adopt a manifold, agnostic, multi-modeling strategy to
integrating with the environment, then we'd be OK because most of our
models would fail but our overall suite would find some that work.

But we do NOT use such a strategy.

Instead, primarily because of cars, airplanes, the printing press, and
the internet, we succumb to rhetoric and justification of some
persuasive people, and we all jump on the same damned bandwagon time and
time again.  That commitment to a single (or small set of) model(s)
condemns us to failure, regardless of the particular model(s) to which
we commit.
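
To make that concrete, here's a toy sketch (made-up numbers; the point is
the shape of the result, not the values):

    import random

    def fits(model, env, tol=0.1):
        # a model "works" if it happens to land near the post-change environment
        return abs(model - env) < tol

    random.seed(1)
    trials = 10000
    committed = agnostic = 0
    for _ in range(trials):
        env = random.random()                         # the change, unknown in advance
        single = random.random()                      # one committed model
        suite = [random.random() for _ in range(50)]  # an agnostic suite of models
        committed += fits(single, env)
        agnostic += any(fits(m, env) for m in suite)

    print(committed / float(trials))  # ~0.19: commitment to one model usually fails
    print(agnostic / float(trials))   # ~1.0: some model in the suite almost always works

Most models in the suite die with every change; the suite survives anyway.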

> [ph] again, agility only helps avoid the catastrophe *before* the
> catastrophe.  Here you're saying it mainly helps after, and that seems to be
> incorrect.

Wrong.  Agility helps keep you in tune with your environment, which
percolates back up to how embedded you _can_ be, which flows back down
to how _aware_ you can be.  The more agile you are, the finer your
sensory abilities will be and vice versa, the more sensitive you are,
the more agile you will be.

You seem to be trying to linearize the problem and solution and say that
maximizing awareness, knowledge, and information is always the
canalizing method for avoiding unanticipated potentially catastrophic
change.  I'm saying that embeddedness is the general key and when the
coming change is totally unavoidable, agility is the specific key.

Further, the less avoidable the change, the more agility matters.  The
more avoidable the change, the more sensitivity matters.  But they are
not orthogonal by any stretch of the imagination.  So, I'm not "making
it complicated", I'm saying it is complex, intertwined.  You can't
_both_ separate/isolate your abstract self and be agile enough to handle
unanticipated potentially catastrophic change.

You _can_ separate/isolate your abstract self and handle unanticipated
potentially catastrophic change if you use a multi-modeling strategy so
that any change only kills off a subset of your models.  The problems
with that are: a) as technology advances, our minds are becoming more
homogenous, meaning it's increasingly difficult for _us_ to maintain a
multi-modeling strategy, and b) we really don't have the resources to
create and maintain lots of huge models.

Hence, agility is the key.

--
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
Philosophy is a battle against the bewitchment of our intelligence by
means of language. -- Ludwig Wittgenstein




recap on Rosen

Günther Greindl
In reply to this post by glen ep ropella
Dear Glen,

> And that's where non-well-founded set theory seems useful.  What is the
> ultimate difference between formalisms (models) requiring the foundation
> axiom and those that do NOT require it?

That is a very interesting question. Do you have some good references
which look at this?

> It seems to me that formalisms built without the foundation axiom will
> lack some of the definiteness we find and expect in our mathematics.
...
> set theory.  And it also seems related to the rampant abuse of concepts
> like iteration (e.g. recursion).

Could you give examples of abuses, I would be interested?

> to Wells' paper.  Then make fun of me if I haven't read it, yet.
> That'll coerce me into reading it.

OK :-))

Cheers,
Günther



--
Günther Greindl
Department of Philosophy of Science
University of Vienna
guenther.greindl at univie.ac.at
http://www.univie.ac.at/Wissenschaftstheorie/

Blog: http://dao.complexitystudies.org/
Site: http://www.complexitystudies.org



recap on Rosen

glen ep ropella
Günther Greindl wrote:
> That is a very interesting question. Do you have some good references
> which look at this?

No, not really.  My favorite reference is "Vicious Circles" by Barwise
and Moss.  But it doesn't talk too much about practical application,
which is necessary to get a handle on the effective differences.

As I said, I haven't been able to slice off the time I'd need to dig
deeper.  I've been wanting to dig deeper for about 3 years; but other
(revenue generating) tasks keep interfering.

>> It seems to me that formalisms built without the foundation axiom will
>> lack some of the definiteness we find and expect in our mathematics.
> ...
>> set theory.  And it also seems related to the rampant abuse of concepts
>> like iteration (e.g. recursion).
>
> Could you give examples of abuses, I would be interested?

Well, before I give any explicit examples, I'll want to make sure a)
that it's actually an abuse ... no sense accusing someone if there's
doubt and b) make them relevant to this e-mail list.  Give me some time
to do a little of that work and I'll get back to you.
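
In the meantime, here's the rawest flavor of the difference I can offer (a
toy of my own, not from Barwise and Moss):

    # The foundation axiom forbids a set from containing itself; ordinary
    # programming languages build such structures freely.
    x = []
    x.append(x)       # now x is a member of itself: (x in x) is True

    def depth(s):
        # naive structural recursion -- it silently assumes well-foundedness
        if not isinstance(s, list):
            return 0
        return 1 + max([depth(e) for e in s] or [0])

    print(depth([1, [2, [3]]]))  # 3: fine on a well-founded structure
    # print(depth(x))            # RecursionError: recursion presumes foundation

That's the kind of dependence I have in mind: the recursive definition is
only meaningful where foundation holds.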

--
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
Last year I went fishing with Salvador Dali. He was using a dotted line.
He caught every other fish. -- Steven Wright




recap on Rosen

Phil Henshaw-2
In reply to this post by glen ep ropella
Glen,
...

> You're right that agility helps one avoid an avoidable change ... e.g.
> like a big fish snapping at a small fish.  And you're right that such
> avoidable changes are only avoidable if one can sense the change
> coming.
>
> But, what if the change is totally unavoidable?  I.e. it's going to get
> you regardless of whether or not you sense it?  In such cases, the
> canalizing ability is agility.  Its sibling sensory ability _helps_, for
> sure.  But when the unavoidable change is in full swing and you cannot
> predict the final context that will obtain, then agility is the key.

[ph] trying to understand, if you're surprised and unable to respond to
change because you were not able to respond in time (or in kind), so the
circumstances exceeded the range of your agility, how did the agility become
the key?

> > The clear evidence, [...], is that we are missing the
> > signals of approaching danger.  We read 'disturbances in the force' (i.e.
> > alien derivatives like diminishing returns) very skillfully in one
> > circumstance and miss them entirely in others.  We constantly walk smack
> > into trouble because we do something that selectively blocks that kind of
> > information.
>
> I disagree.  We don't continually walk smack into trouble _because_ we
> selectively block a kind of information.  Our trouble is two-fold: 1) we
> are _abstracted_ from the environment and 2) we don't adopt a manifold,
> agnostic, multi-modeling strategy.

[ph] how is that not stated in more general terms in saying we're often
clueless and get caught flat footed?

>
> If we were not abstracted, then we'd be something like hunter-gatherers,
> destroying our local environments willy-nilly, but never deeply enough
> in a single exploitation such that the environment (including other
> humans) can't compensate for our exploitation.

[ph] I was sort of thinking you used _abstracted_ to refer to our use of an
artificial environment in our minds to guide us in navigating the real one.
All our troubles with the environment come from the ignorant design of our
abstractions it seems to me.  I can identify a number in particular having
to do with the inherent design of modeling, but I mean, it's tautological.
If our abstraction worked well we wouldn't be an endangered species.

>
> But we _are_ abstracted.
>
> If we were to adopt a manifold, agnostic, multi-modeling strategy to
> integrating with the environment, then we'd be OK because most of our
> models would fail but our overall suite would find some that work.
>
> But we do NOT use such a strategy.

[ph] well, and we also don't look where we're going.  That is actually the
first step in any strategy isn't it?  If we have functional fixations that
redefine what's in front of us as something that should never be looked at,
like saying that switching ever more land from making food to making fuel is
'renewable energy', say, then we run smack into things without any chance to
engage any strategy no matter how good our strategy might have been had we
developed one.

>
> Instead, primarily because of cars, airplanes, the printing press, and
> the internet, we succumb to rhetoric and justification of some
> persuasive people, and we all jump on the same damned bandwagon time and
> time again.  That commitment to a single (or small set of) model(s)
> condemns us to failure, regardless of the particular model(s) to which
> we commit.

[ph] and to correct a lack of models do you not first need to look around to
see what you might need a model for before making them?

>
> > [ph] again, agility only helps avoid the catastrophe *before* the
> > catastrophe.  Here you're saying it mainly helps after, and that seems
> > to be incorrect.
>
> Wrong.  Agility helps keep you in tune with your environment, which
> percolates back up to how embedded you _can_ be, which flows back down
> to how _aware_ you can be.  The more agile you are, the finer your
> sensory abilities will be and vice versa, the more sensitive you are,
> the more agile you will be.

[ph] agility is technically the versatility of your response to a signal,
not the listening for or recognition of the signal.   The listening part is
what's missing in our not knowing what models we'll need, and so having no
response.  A dog sleeping with one ear open is alert and listening, but not
displaying his agility.  He's sleeping.  The chain of events from alertness,
to recognizing a signal, to developing a response and then doing it, is
complex.   Maybe you mean to have that whole chain of different things as
'agility'?

The limits to growth signal is thermodynamic diminishing returns on
investment which started long ago... and then it proceeds to an ever steeper
learning curve on the way to system failure, which has now begun.  If people
saw that as something a model was needed for I could contribute a few of my
solutions to begin the full course correction version.  It seems the
intellectual community is not listening for the signal yet though... having
some functional fixation that says none will ever be needed.

>
> You seem to be trying to linearize the problem and solution and say that
> maximizing awareness, knowledge, and information is always the
> canalizing method for avoiding unanticipated potentially catastrophic
> change.  I'm saying that embeddedness is the general key and when the
> coming change is totally unavoidable, agility is the specific key.

[ph] You leave 'embeddedness' undescribed. How do you achieve it without
paying attention to the things in the world for which you have no model?
How would you know if there are things for which you have no model?

> Further, the less avoidable the change, the more agility matters.  The
> more avoidable the change, the more sensitivity matters.  But they are
> not orthogonal by any stretch of the imagination.  So, I'm not "making
> it complicated", I'm saying it is complex, intertwined.  You can't
> _both_ separate/isolate your abstract self and be agile enough to handle
> unanticipated potentially catastrophic change.
>
> You _can_ separate/isolate your abstract self and handle unanticipated
> potentially catastrophic change if you use a multi-modeling strategy so
> that any change only kills off a subset of your models.  The problems
> with that are: a) as technology advances, our minds are becoming more
> homogenous, meaning it's increasingly difficult for _us_ to maintain a
> multi-modeling strategy, and b) we really don't have the resources to
> create and maintain lots of huge models.
>
> Hence, agility is the key.

[ph] Maybe I'm being too practical.  You're not being at all clear how you'd
get models for things without a way of knowing you need to making them.
What in your system would signal you that the systems of the world your
models describe were developing new behavior?

Phil







recap on Rosen

glen ep ropella
phil henshaw wrote:
> [ph] trying to understand, if you're surprised and unable to respond to
> change because you were not able to respond in time (or in kind), so the
> circumstances exceeded the range of your agility, how did the agility become
> the key?

Sorry if I'm not being clear.  I'd like to leave out terms like
"surprised" and "unable to respond".  Think of senses and actions as a
continuum.  We _always_ sense a coming change.  Sometimes we sense it
early and sometimes late.  We _always_ have time to act.  Sometimes we
have enough time for complicated action and sometimes we can only
instinctively twitch (as we're being eaten).

_If_ we sense a coming event too late to perform a complicated action,
_then_ the more agile we are, the more likely we are to survive.

To be concrete, as the big fish snaps at the little fish, if the little
fish can wiggle fast enough (agility), he may only lose a small section
of his tail fin.  If the little fish cannot wiggle fast enough, he'll
end up halfway inside the big fish's mouth.

Or, more precisely, let's say an event will occur at time T and the
event is sensed delta_T before the event.  Then as delta_T decreases (to
zero), agility becomes more important than sensitivity.

_Yes_ sensitivity clued us in to the event in the first place; but we
continue to sense our environment all through delta_T.  Likewise, we
continue to _act_ all through delta_T.  These two abilities are not
disjoint or decoupled.  They are intertwined and (effectively) continuous.

My point is that _after_ we know the event is coming, as delta_T
shrinks, agility becomes most important.
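
A one-line toy to pin that down (my construction, arbitrary units): suppose
evading the event requires covering distance D, and you can move at most
'agility' units per unit time during the warning window delta_T.

    D = 1.0  # evasion distance required (arbitrary units)

    def survives(agility, delta_T):
        # you escape only if you can cover D within the warning window
        return agility * delta_T >= D

    for delta_T in (10.0, 1.0, 0.1):
        print(delta_T, D / delta_T)  # minimum agility needed: 0.1, 1.0, 10.0

The minimum agility needed is D/delta_T, which blows up as delta_T shrinks:
with a long warning almost any agility will do; with a short one, agility is
the whole game.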

>>> The clear evidence, [...], is that we are missing the signals of
>>> approaching danger. We read 'disturbances in the force' (i.e.
>>> alien derivatives like diminishing returns) very skillfully in
>>> one circumstance and miss them entirely in others. We constantly
>>> walk smack into trouble because we do something that selectively
>>> blocks that kind of information.
>>>
>> I disagree. We don't continually walk smack into trouble _because_
>> we selectively block a kind of information. Our trouble is
>> two-fold: 1) we are _abstracted_ from the environment and 2) we
>> don't adopt a manifold, agnostic, multi-modeling strategy.
>
> [ph] how is that not stated in more general terms in saying we're often
> clueless and get caught flat footed?

My statement is more precise.  Specifically, I _disagree_ with the idea
that this happens because, as you said, we "selectively block that kind
of information".  We do NOT selectively block that kind of information.
 Rather we are abstracted (removed from the concrete detail) from the
environment, which means we cannot be agile.

That's why we're often clueless and get caught flat footed.  It's not
because information is _blocked_.  It's because we're not even involved.
 That information is literally _below_ the level of sensitivity of our
sensors.  It's like not being able to see microscopic objects with our
naked eye or when we can't see people on the ground from an airplane
window at 50,000 feet.  We're flying way up here and the info we need in
order to be agile is way down there.  In order to be agile, we need to
be embedded, on the ground, where the rubber meets the road, as it were.

I'm really confused as to why this concept isn't clear. [grin]

> [ph] I was sort of thinking you used _abstracted_ to refer to our use of an
> artificial environment in our minds to guide us in navigating the real one.
> All our troubles with the environment come from the ignorant design of our
> abstractions it seems to me.  I can identify a number in particular having
> to do with the inherent design of modeling, but I mean, it's tautological.
> If our abstraction worked well we wouldn't be an endangered species.

Sorry.  By "abstracted", I mean: "taken away, removed, remote, ignorant
of particular or concrete detail".  This is the standard definition of
the word, I think.  Its antonym is "concrete" or "particular".

"Sustainability" is a _great_ example.  The word is often used in a very
abstract way.  Sustain what?  Sustain forests?  Sustain grasslands?
Sustain the current panoply of species?  Sustain low human population so
that we don't swamp the earth with humans?  Sustain our standard of
living?  Of course, in some sense "sustainability" means all of these
things and many more.  And that is what makes it abstract.

When you add the concrete detail, it shifts from being "sustainability"
into something like logistics, epidemiology, ecology, etc.  The term is
used not to mean a particular effort or method.  The term is used to
describe a meta-method (or even a strategy) that helps organize
particular efforts so that the whole outcome has some certain character
to it.

That's an example of what I mean by "abstracted".  I'm not saying it's
bad.  In fact, abstraction is good and necessary.  But one can not be
both embedded and abstracted at the same time.

> [ph] well, and we also don't look where we're going.  That is actually the
> first step in any strategy isn't it?

Not necessarily.  Often a strategy requires a reference point.  In such
cases, we often take some blind action _first_ and only _then_ can we
look at the effect of the blind action and refine things so that our
second action is more on target.  "Reconnaissance" might be a good term
for that first blind action, except there is an expertise to good
recon... it's largely an introspective expertise, though.  "What types
of patterns am I prepared to recognize?"

> [ph] and to correct a lack of models do you not first need to look around to
> see what you might need a model for before making them?

"To look" is an action, not a passive perception.  The two are
inextricably coupled.  You can't observe without _taking_ an
observation.  Chicken or egg?  All data is preceded by a model by which
the data was taken and all models are preceded by data from which the
model was inferred.

That's why I say that agility cannot be decoupled from sensitivity.
They are both abilities intertwined in what I'm calling embeddedness.

>>> [ph] again, agility only helps avoid the catastrophe *before* the
>>> catastrophe.  Here you're saying it mainly helps after, and that
>>> seems to be incorrect.
>>>
>> Wrong.  Agility helps keep you in tune with your environment, which
>> percolates back up to how embedded you _can_ be, which flows back down
>> to how _aware_ you can be.  The more agile you are, the finer your
>> sensory abilities will be and vice versa, the more sensitive you are,
>> the more agile you will be.
>
> [ph] agility is technically the versatility of your response to a signal,
> not the listening for or recognition of the signal.

You cannot decouple, isolate, linearize, simplify them like this.  Or, I
suppose you _can_... [grin] ... but you'd be _abstracting_ out the
concrete reality.

> Maybe you mean to have that whole chain of different things as
> 'agility'?

No.  You and I agree on the definition of "agility".  What we disagree
on is whether or not agility can be separated from sensitivity.  I claim
it cannot.  They are part and parcel of each other.

_However_, as delta_T shrinks, agility becomes canalizing.  Acting
without sensing or thinking is the key to surviving when delta_T is
small.  This is why we practice, practice, practice in things like
sports and music.  The idea is to push these actions down into our
lizard brain so that we can do them immediately without thinking (but
not without sensing, of course, _never_ without sensing because ... wait
for it ... sensing and acting are tightly coupled).

> The limits to growth signal is thermodynamic diminishing returns on
> investment which started long ago... and then it proceeds to an ever steeper
> learning curve on the way to system failure, which has now begun.  If people
> saw that as something a model was needed for I could contribute a few of my
> solutions to begin the full course correction version.  It seems the
> intellectual community is not listening for the signal yet though... having
> some functional fixation that says none will ever be needed.

You rightly identify functional fixation as a problem.  But I maintain
that it's a _symptom_ of abstraction.  To break the fixation, go dig in
the dirt, put your feet on the ground, embed yourself in the system, and
your fixations will dissipate and new ones will form and dissipate in
tight correlation with the changing context.

> [ph] You leave 'embeddedness' undescribed. How do you achieve it without
> paying attention to the things in the world for which you have no model?
> How would you know if there are things for which you have no model?

[sigh]  "To embed" means "To cause to be an integral part of a
surrounding whole".  "Embedded" means "the state of being an integral
part of a surrounding whole."  "Embeddedness" means "the property or
characteristic of being, or the degree to which something is, embedded".

If you are not embedded in some system and you want to embed yourself,
then you simply begin poking and peeking at that system.  And you
_continue_ to (and continually) poke and peek at the system.  You poke
and peek wherever and whenever you can for as long as you can.

One consequence of being embedded is that you can no longer "see the
forest" because you're too busy poking and peeking at the trees.  I.e.
you are no longer abstracted.  You become part of the forest.... just
another one of the many animals running around poking and peeking at the
other stuff in the forest.

And that means that you don't build a model of the _forest_ (or if you
do, you shelve it for later modification after you're finished poking
and peeking at the trees).  If you want to build an accurate model of
the forest, then you slowly, in a regimented way, abstract yourself out.  Go from
poking and peeking at the trees to poking and peeking at copses or
canopies, then perhaps to species of tree, then perhaps to the whole forest.

When you're finally fully abstracted away from the concrete details of
the forest, you can assemble your model of the forest.

> [ph] Maybe I'm being too practical.  You're not being at all clear how you'd
> get models for things without a way of knowing you need to making them.
> What in your system would signal you that the systems of the world your
> models describe were developing new behavior?

Sorry if I'm not being clear.  I just assumed this point was common
sense and fairly clear already.  I think I first learned it when
learning to ride a bicycle.  You act and sense _simultaneously_, not
separately.  Control is real-time.

The only way you're going to get a signal that you need a new model is
if you're embedded in some system that is evolving in a way that
discomforts (or stimulates) you.  And embedding means both sensitivity
and agility.  If delta_T is large, sensitivity is key.  If delta_T is
small, agility is key.

--
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
Communism doesn't work because people like to own stuff. -- Frank Zappa




recap on Rosen

Phil Henshaw-2
In reply to this post by Phil Henshaw-2

Glen,
It's interesting how you're approaching this whole thing, coming to many of
the same questions by different branches.  That's what independent learning
processes do.  If I mostly divide things into more pieces, like having
'sensing' before 'acting' as two steps in sequence, it's just because they
do start that way, and not because they don't sometimes become integrated,
as with becoming one with your world ('embedded' in it, as you'd say).
I also am quite concerned about the very major functional fixations that are
not being acknowledged.   Did you look a bit at either of my new short
papers on how to use our more visible fixations (blinders) to help us see
where the others are, and help reveal the amazing world that's been hidden
from us by them?    
Less formal http://www.synapse9.com/drafts/Hidden-Life.pdf 
More theory http://www.synapse9.com/drafts/SciMan-2Draft.pdf 

>
> phil henshaw wrote:
> > [ph] trying to understand, if you're surprised and unable to respond to
> > change because you were not able to respond in time (or in kind), so the
> > circumstances exceeded the range of your agility, how did the agility
> > become the key?
>
> Sorry if I'm not being clear.  I'd like to leave out terms like
> "surprised" and "unable to respond".  Think of senses and actions as a
> continuum.  We _always_ sense a coming change.  Sometimes we sense it
> early and sometimes late.  We _always_ have time to act.  Sometimes we
> have enough time for complicated action and sometimes we can only
> instinctively twitch (as we're being eaten).
>
> _If_ we sense a coming event too late to perform a complicated action,
> _then_ the more agile we are, the more likely we are to survive.

[ph] in sensing we start with a kind of radically impossible learning task
of matching unrecognized data to an infinite variety of possible models with
which to read it.  The hazard in taking unknown information as hints,
because people are *highly* suggestible, is that we almost always 'round up
the usual suspects' -- the universal police investigation method Captain
Renault in Casablanca is famous for demonstrating.

One of those 'usual suspects' is the functional fixation that we should push
harder when we meet a little resistance.   When a direction of progress runs
into increasing difficulty there are *two* choices, to increase effort or
look for a new path.  With paths of limitlessly increasing difficulty, like
the attempt to get ever increasing returns from steadily diminishing
resources, the evidence is somewhat similar to the 'hump' of difficulty that
problem solvers like to define every problem as being.   The error is fatal,
though.    Now that the little fish is essentially wiggling around in the
closing jaws of the big fish, how would you suggest we disgust the big fish
enough for it to spit us out and go off looking elsewhere?   Could we tickle
his gills or something??

We've had the terminal diminishing returns warning for natural resources for
50 years, and the price explosions are the clear message that we missed the
opportunity to hit the physical limits at less than the highest accelerating
rate we could muster.   The confirming evidence is how the signals to speed
up our learning rate as the curve gets steeper and steeper are all being
ignored.   We're in nearly a complete learning stall and the physical
systems are coming apart.  So... for a problem-solving exercise: how do we
learn fast when the symptom is that the problem we chose is slowing down
everyone's learning to a crawl?

>
> To be concrete, as the big fish snaps at the little fish, if the little
> fish can wiggle fast enough (agility), he may only lose a small section
> of his tail fin.  If the little fish cannot wiggle fast enough, he'll
> end up halfway inside the big fish's mouth.
>
> Or, more precisely, let's say an event will occur at time T and the
> event is sensed delta_T before the event.  Then as delta_T decreases
> (to
> zero), agility becomes more important than sensitivity.
>
> _Yes_ sensitivity clued us in to the event in the first place; but we
> continue to sense our environment all through delta_T.  Likewise, we
> continue to _act_ all through delta_T.  These two abilities are not
> disjoint or decoupled.  They are intertwined and (effectively)
> continuous.
>
> My point is that _after_ we know the event is coming, as delta_T
> shrinks, agility becomes most important.
>
> >>> The clear evidence, [...], is that we are missing the signals of
> >>> approaching danger. We read 'disturbances in the force' (i.e.
> >>> alien derivatives like diminishing returns) very skillfully in
> >>> one circumstance and miss them entirely in others. We constantly
> >>> walk smack into trouble because we do something that selectively
> >>> blocks that kind of information.
> >>>
> >> I disagree. We don't continually walk smack into trouble _because_
> >> we selectively block a kind of information. Our trouble is
> >> two-fold: 1) we are _abstracted_ from the environment and 2) we
> >> don't adopt a manifold, agnostic, multi-modeling strategy.
> >
> > [ph] how is that not stated in more general terms in saying we're often
> > clueless and get caught flat footed?
>
> My statement is more precise.  Specifically, I _disagree_ with the idea
> that this happens because, as you said, we "selectively block that kind
> of information".  We do NOT selectively block that kind of information.
>  Rather we are abstracted (removed from the concrete detail) from the
> environment, which means we cannot be agile.

[ph] I call that not knowing what's happening and so not being aware of the
choices.  I don't see that as having much to do with how vigorously we might
investigate new solutions if we had any idea what to solve.

>
> That's why we're often clueless and get caught flat footed.  It's not
> because information is _blocked_.  It's because we're not even
> involved.

[ph] What sort of 'dis-involvement' is not 'dis-information'?  I would have
thought you'd consider the physical world to be made of information, or at
least what it could mean to us to be.  I think the world is also full of
physical things that for many reasons are better observed for their own new
behavior rather than treated as abstractions.

>  That information is literally _below_ the level of sensitivity of our
> sensors.  It's like not being able to see microscopic objects with our
> naked eye or when we can't see people on the ground from an airplane
> window at 50,000 feet.  We're flying way up here and the info we need in
> order to be agile is way down there.  In order to be agile, we need to
> be embedded, on the ground, where the rubber meets the road, as it were.
>
> I'm really confused as to why this concept isn't clear. [grin]

[ph] That seems out of time sequence, as if you could recognize the model
before interpreting the signal.  When two things are 'in synch' perhaps, but
not in general.   Early recognition of new behavior is the thing that is
easiest for functional fixations to block.  It happens because they block
you from asking exploratory questions.

>
> > [ph] I was sort of thinking you used _abstracted_ to refer to our use of an
> > artificial environment in our minds to guide us in navigating the real one.
> > All our troubles with the environment come from the ignorant design of our
> > abstractions it seems to me.  I can identify a number in particular having
> > to do with the inherent design of modeling, but I mean, it's tautological.
> > If our abstraction worked well we wouldn't be an endangered species.
>
> Sorry.  By "abstracted", I mean: "taken away, removed, remote, ignorant
> of particular or concrete detail".  This is the standard definition of
> the word, I think.  Its antonym is "concrete" or "particular".

[ph] To me "abstracted" means thinking in terms of "abstractions" which my
dictionary has as "a general concept formed by extracting common features
from specific examples".  How it disconnects your thinking is by replacing
references to complex and possibly changing things with simple fixed ones.
That does disconnect, but as a consequence of the cognitive process, not the
environment.  Fixation is a 'do it yourself' thing...

> "Sustainability" is a _great_ example.  The word is often used in a very
> abstract way.  Sustain what?  Sustain forests?  Sustain grasslands?
> Sustain the current panoply of species?  Sustain low human population so
> that we don't swamp the earth with humans?  Sustain our standard of
> living?  Of course, in some sense "sustainability" means all of these
> things and many more.  And that is what makes it abstract.

[ph] right, precisely. And because people use it as a simple culture-laden
image instead of a learning task about complex relationships of change, we
get the great majority of users of the term to mean 'sustaining prosperity',
as the easy way to combine 'goodness' with 'goodness', not because it makes
the least bit of sense.

>
> When you add the concrete detail, it shifts from being "sustainability"
> into something like logistics, epidemiology, ecology, etc.  The term is
> used not to mean a particular effort or method.  The term is used to
> describe a meta-method (or even a strategy) that helps organize
> particular efforts so that the whole outcome has some certain character
> to it.

[ph] that the popular meaning is useless does not mean the term does not
have useful meanings if you actually use it to refer to reducing our impacts
on the earth and making it a good home (or as you'd say 'embedding' in the
earth).  Ever since the success of the word 'sustainability' the
acceleration of increasing impacts has been increasing, and coincidentally
the people leading the organizations involved have *all* been *fiercely*
resistant to discussing whether their measures showed the totals...

>
> That's an example of what I mean by "abstracted".  I'm not saying it's
> bad.  In fact, abstraction is good and necessary.  But one can not be
> both embedded and abstracted at the same time.

[ph] I'm slowly getting your word usage, and I'd be amazed if anyone else
would not have the same difficulty.  You seem to use 'embedded' to mean
'aware' and 'abstracted' to mean 'unaware'.   At least there are a great
many kinds and levels of 'awareness' not covered by the ends of only one
polarity.  

>
> > [ph] well, and we also don't look where we're going.  That is actually
> > the first step in any strategy isn't it?
>
> Not necessarily.  Often a strategy requires a reference point.  In such
> cases, we often take some blind action _first_ and only _then_ can we
> look at the effect of the blind action and refine things so that our
> second action is more on target.  "Reconnaissance" might be a good term
> for that first blind action, except there is an expertise to good
> recon... it's largely an introspective expertise, though.  "What types
> of patterns am I prepared to recognize?"

[ph] Oh sure, if you've gotten the signal that a complex process is
underway, and starting out 'blind' as to how to respond, and needing to
discover what to do, people can be highly creative in inventing unexpected
good solutions as you suggest.   If you're living an abstraction ('embedded'
in the joy of fanning flames like the economists) and denying the signal of
your house being on fire...  then you don't get the advantage of our natural
agility in discovering and inventing solutions.  

>
> > [ph] and to correct a lack of models do you not first need to look
> > around to see what you might need a model for before making them?
>
> "To look" is an action, not a passive perception.  The two are
> inextricably coupled.  You can't observe without _taking_ an
> observation.  Chicken or egg?  All data is preceded by a model by which
> the data was taken and all models are preceded by data from which the
> model was inferred.

[ph] well, observation invariably starts with not knowing what you're
looking for.  It's just dropping your pretenses, nothing more.  The object
of the observation may well get abstracted fairly quickly, or take a long
time and keep you in a quandary nearly forever.   People often take a long
time to recognize what they're seeing or hearing and the reaction may be
either to listen ever more intently, or just to check back now and then, or,
not able to place it right off, to dismiss it with no concern.   One of the
interesting things about elephant behavior is how they periodically all
'freeze' and stand stock still for minutes at a time. Ethologists puzzling
over this apparent group dysfunction discovered they were 'listening' to the
low frequency messages, apparently from other elephants over the horizon.

>
> That's why I say that agility cannot be decoupled from sensitivity.
> They are both abilities intertwined in what I'm calling embeddedness.

[ph] To de-abstract that would seem to make it more useful.  You mean
agility in problem solving.  Some would call becoming 'embedded' in a
problem 'engaged' or 'immersed' in the problem.  It would imply having
previously avoided being blocked by your abstractions.  I see that as a
state in which you have access to both the situation's complexities and
simplicities at the same time.  The kind of high level involvement with a
real problem and all its variables (the period of peak creative intensity)
is the culmination of meeting the problem, not the beginning.  The lead-up
to that when defining the problem and assigning resources to it is where the
problem solving research suggests all the big mistakes are made...  The
intense period of creative 'flow' is where the full realization of the
solution comes about.   I expect creative programming tasks are somewhat
similar to building design tasks in having that intense creative peak moment
just before the deadline, and the quality of early decisions being the real
determinants of success or failure.  

>
> >>> [ph] again, agility only helps avoid the catastrophe *before* the
> >>> catastrophe.  Here you're saying it mainly helps after, and that
> >>> seems to be incorrect.
> >>>
> >> Wrong.  Agility helps keep you in tune with your environment, which
> >> percolates back up to how embedded you _can_ be, which flows back down
> >> to how _aware_ you can be.  The more agile you are, the finer your
> >> sensory abilities will be and vice versa, the more sensitive you are,
> >> the more agile you will be.
> >
> > [ph] agility is technically the versatility of your response to a signal,
> > not the listening for or recognition of the signal.
>
> You cannot decouple, isolate, linearize, simplify them like this.  Or,
> I
> suppose you _can_... [grin] ... but you'd be _abstracting_ out the
> concrete reality.
>
> > Maybe you mean to have that whole chain of different things as
> > 'agility'?
>
> No.  You and I agree on the definition of "agility".  What we disagree
> on is whether or not agility can be separated from sensitivity.  I
> claim
> it cannot.  They are part and parcel of each other.

[ph] listening requires no plan or problem or anything.  It is the
beginning of raising the question of whether one might look for an
explanation of some unclear potential signal, then sorting through some
'usual suspects' among models to compare to the possible signal, and
deciding whether to drop it there if nothing looks suspicious.  I think the
term 'agility' applied to problem solving identifies the tail end of that
process, and leaves out the major error-creation period of problem sensing,
identification and resource allocation.

>
> _However_, as delta_T shrinks, agility becomes canalizing.  Acting
> without thinking is the key to surviving when delta_T is small.  This
> is why we practice, practice, practice in things like sports and music.
> The idea is to push these actions down into our lizard brain so that we
> can do them immediately without thinking (but not without sensing, of
> course, _never_ without sensing because ... wait for it ... sensing and
> acting are tightly coupled).

[ph] well, learning the high art of making things 'second nature',
becoming 'adept' at *acting without thinking* in a highly successful way,
also has a flip side.   That's learning to *think without acting*: becoming
adept in true unbiased observation, opening the full door of awareness.
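The delta_T point is easy to caricature in code.  A toy sketch, with
made-up latencies and a uniform sensing delay (nothing here models a real
organism): a response survives only if detection plus reaction fit inside
the window before impact, so as the window shrinks, only the practiced,
reflex-speed response has any chance at all.

    import random

    def survives(delta_t, reaction_latency, sensing_noise):
        # The disturbance hits delta_t after it first becomes detectable.
        # Detection is delayed by up to sensing_noise; the response must
        # still land before impact.
        detection_delay = random.uniform(0.0, sensing_noise)
        return detection_delay + reaction_latency < delta_t

    def survival_rate(delta_t, latency, noise=0.3, trials=10_000):
        return sum(survives(delta_t, latency, noise)
                   for _ in range(trials)) / trials

    random.seed(42)
    for delta_t in (5.0, 1.0, 0.2):
        print(delta_t,
              survival_rate(delta_t, latency=0.1),   # practiced reflex
              survival_rate(delta_t, latency=1.5))   # stop and deliberate

When delta_T is generous both strategies survive; at 0.2 even the reflex
only makes it when the sensing delay happens to be short, which is one way
of seeing why sensitivity and agility can't be pulled apart.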

>
> > The limits to growth signal is thermodynamic diminishing returns on
> > investment which started long ago... and then it proceeds to an ever
> > steeper learning curve on the way to system failure, which has now
> > begun.  If people saw that as something a model was needed for I could
> > contribute a few of my solutions to begin the full course correction
> > version.  It seems the intellectual community is not listening for the
> > signal yet though... having some functional fixation that says none
> > will ever be needed.
>
> You rightly identify functional fixation as a problem.  But I maintain
> that it's a _symptom_ of abstraction.  To break the fixation, go dig in
> the dirt, put your feet on the ground, embed yourself in the system,
> and your fixations will dissipate and new ones will form and dissipate
> in tight correlation with the changing context.

[ph] that seems to acknowledge the problem, but it is said as if there's a
way to see your own blind spots.  The evidence is that the problem is
rampant in everyone.   Every self-consistent representation of anything
necessarily contains the fault, as I hope I effectively describe in those
two short essays.

>
> > [ph] You leave 'embeddedness' undescribed.  How do you achieve it
> > without paying attention to the things in the world for which you
> > have no model?  How would you know if there are things for which you
> > have no model?
>
> [sigh]  "To embed" means "To cause to be an integral part of a
> surrounding whole".  "Embedded" means "the state of being an integral
> part of a surrounding whole."  "Embeddedness" means "the property or
> characteristic of being, or the degree to which something is,
> embedded".

[ph] I use the term 'engage' or 'engaged' for that, since the systems
you're connecting with have their own learning processes, and engaging with
them importantly means coordinating your learning with theirs.  Words come
to mean how they're used, of course.

>
> If you are not embedded in some system and you want to embed yourself,
> then you simply begin poking and peeking at that system.  And you
> _continue_ to (and continually) poke and peek at the system.  You poke
> and peek wherever and whenever you can for as long as you can.

[ph] there are observation methods designed for identifying the multiple
independent systems in an environment.  Yes, poking and peeking, what I
call search and explore, finding things that raise questions, is an
important part of the learning process.
>
> One consequence to being embedded is that you can no longer "see the
> forest" because you're too busy poking and peeking at the trees.  I.e.
> you are no longer abstracted.  You become part of the forest.... just
> another one of the many animals running around poking and peeking at
> the other stuff in the forest.

[ph] losing the blocks of your own abstractions, and the functional
deficits they give you, seems to be a quite endless task.   Our whole
culture developed in a most negligent way with respect to the many
independent kinds of learning systems in which we are physically embedded
but from which we are profoundly mentally detached.   It seriously looks
like we are truly blowing our chance to have a stable home on earth, you
know.  If we crash this civilization there are no cheap resources with
which to start up another one...

>
> And that means that you don't build a model of the _forest_ (or if you
> do, you shelve it for later modification after you're finished poking
> and peeking at the trees).  If you want to build an accurate model of
> the forest, then you slowly (regimented) abstract yourself out.  Go
> from poking and peeking at the trees to poking and peeking at copses
> or canopies, then perhaps to species of tree, then perhaps to the
> whole forest.

[ph] preventing models from becoming blinders can be done in two ways.  You
can put them aside, but then people become insecure and keep asking if they
can have their model back.   I prefer to make them mentally transparent and
look *through* them, so it's like having night vision that picks out all
the living things in view that have original behavior, the things that are
alien to the model...  That way models become headlights instead of blind
spots.
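That 'headlights' image has a direct computational analogue.  A minimal
sketch on invented data, using an ordinary least-squares fit as the
stand-in model: rather than letting the fit smooth the data away, use its
residuals to light up whatever behaves in a way the model can't account
for.  The threshold and the robust scale estimate are arbitrary
illustrative choices.

    import numpy as np

    def headlights(x, y, degree=1, threshold=3.0):
        # Fit a simple polynomial 'model', then flag the points whose
        # residuals exceed threshold * (robust scale): the behavior that
        # is alien to the model, highlighted rather than averaged away.
        coeffs = np.polyfit(x, y, degree)
        residuals = y - np.polyval(coeffs, x)
        scale = 1.4826 * np.median(np.abs(residuals - np.median(residuals)))
        return np.abs(residuals) > threshold * scale

    rng = np.random.default_rng(1)
    x = np.linspace(0, 10, 200)
    y = 2.0 * x + rng.normal(0.0, 0.5, x.size)  # the part the model expects
    y[120:130] += 5.0                           # 'original behavior', off-model

    print(x[headlights(x, y)])                  # lights up roughly x in [6.0, 6.5]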

>
> When you're finally fully abstracted away from the concrete details of
> the forest, you can assemble your model of the forest.
>
> > [ph] Maybe I'm being too practical.  You're not being at all clear
> > how you'd get models for things without a way of knowing you need to
> > make them.  What in your system would signal you that the systems of
> > the world your models describe were developing new behavior?
>
> Sorry if I'm not being clear.  I just assumed this point was common
> sense and fairly clear already.  I think I first learned it when
> learning to ride a bicycle.  You act and sense _simultaneously_, not
> separately.  Control is real-time.

[ph] yes, *once* something becomes second nature.  That's a process that
some do well for some things and some for others.  One of the things hardly
anyone has identified as a skill that could be made second nature is
finding the life around them.  That's importantly because our culture has a
functional fixity of looking for fixity, i.e. representing things with
models in order to control them instead of using models to shine a light on
the life...
>
> The only way you're going to get a signal that you need a new model is
> if you're embedded in some system that is evolving in a way that
> discomforts (or stimulates) you.  And embedding means both sensitivity
> and agility.  If delta_T is large, sensitivity is key.  If delta_T is
> small, agility is key.

[ph] one of the things science should look into is the curious phenomenon
that every experiment misbehaves.  It's a wide open field as far as I can
tell.  
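Both the 'signal that you need a new model' and the experiment that keeps
misbehaving come down to watching a model's errors over time.  A hedged
sketch (the window sizes, the three-sigma rule, and the ModelWatch name are
all arbitrary illustrative choices, not anyone's published method): keep
scoring the current model against incoming observations, and raise a flag
when the recent errors stop looking like the historical baseline.

    from collections import deque
    import statistics

    class ModelWatch:
        # Track a model's prediction errors; flag when recent misbehavior
        # drifts well outside the historical baseline.
        def __init__(self, baseline=200, recent=20, n_sigma=3.0):
            self.history = deque(maxlen=baseline)
            self.recent = deque(maxlen=recent)
            self.n_sigma = n_sigma

        def observe(self, predicted, actual):
            err = abs(actual - predicted)
            self.recent.append(err)
            self.history.append(err)
            if len(self.history) < self.history.maxlen:
                return False  # still calibrating a baseline
            mu = statistics.fmean(self.history)
            sigma = statistics.pstdev(self.history)
            recent_mu = statistics.fmean(self.recent)
            return recent_mu > mu + self.n_sigma * sigma

Feed it (prediction, observation) pairs as they arrive; a True return is
the 'discomfort' signal to go re-embed and re-model rather than keep
patching the old abstraction.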

Thanks for pushing my thinking and yours, but we should shorten a bit.

Phil

>
> --
> glen e. p. ropella, 971-219-3846, http://tempusdictum.com
> Communism doesn't work because people like to own stuff. -- Frank Zappa





Reply | Threaded
Open this post in threaded view
|

recap on Rosen

glen ep ropella
phil henshaw wrote:
> not being acknowledged.   Did you look a bit at either of my new short
> papers on how to use our more visible fixations (blinders) to help us see
> where the others are, and help reveal the amazing world that's been hidden
> from us by them?    
> Less formal http://www.synapse9.com/drafts/Hidden-Life.pdf 
> More theory http://www.synapse9.com/drafts/SciMan-2Draft.pdf 

No.  I will, though.

> Thanks for pushing my thinking and yours, but we should shorten a bit.

I agree.  Thanks for the dialog.  It's time to rest. [grin]

--
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
I have the heart of a child. I keep it in a jar on my shelf. -- Robert Bloch



