Seminal Papers in Complexity

Seminal Papers in Complexity

glen ep ropella

Phil Henshaw wrote:
> Na, I think even the most sophisticated math misses all the truly supple
> shape of natural form, and it's of huge significance in our
> misunderstanding of natural phenomena.

I _strongly_ disagree with that.  I talk to many people who say things
like "I'm not good at math" or "I don't understand math".  And I can't
help thinking that they must not be talking about the same thing I'm
talking about when I say "math".  It seems impossible for a person to
not understand math because math is pervasive in human activity.

For example, all musicians are mathematicians.  All brick layers are
mathematicians.  All lawyers are at least logicians if not
mathematicians.  Architects, nurses, truck drivers, corporate
bureaucrats, and some skate punks are mathematicians.

Now, it's true that most of these people don't know how to _describe_
what it is that they do.  They just _do_ it without trying to formalize
what they're doing.  But, as Wittgenstein, Tarski, Goedel, and many
others have shown us, math is _more_ than formalization.  The working
mathematician doesn't spend her days trying to demonstrate the
differences between ZF and PA.  The working mathematician spends her
days thinking about the world and trying to _intervene_ in the world to
make something happen (or to explain, predict, describe some thing).

Hence, math doesn't _miss_ the "supple shape of natural form", it is
derived directly from such natural form.  When/if it misses some
element, it is because the mathematician failed to capture that element,
usually on purpose.

p.s. My argument above does not make the word "mathematician" useless by
ascribing it to _everyone_ (as Bristol did when implying that every
thing is emergent).  It is only ascribed to those who attempt to form
rigorous conceptions of the things around them and use those conceptions
to interact with the world.  There are some (Paul Feyerabend comes to
mind) who make the case that such strict adherence to method can impede
understanding.  And that may be true.  (I believe it is.)  As such,
there are plenty of people out there who actively resist the development
and application of rigorous method.  Those are not (always) mathematicians.

--
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
... given any rule, however "fundamental" or "necessary" for science,
there are always circumstances when it is advisable not only to ignore
the rule, but to adopt its opposite. -- Paul Feyerabend




cognitive largess (was Re: reductionism)

glen ep ropella
In reply to this post by Phil Henshaw-2

Phil Henshaw wrote:
> Well, the 'fault' of considering things from multiple points of view is
> not contradiction, but confusing all those who don't!

Well, for us Discordians, it is certainly not a fault to confuse!  In
fact, it is our holy obligation.  Hail Eris!  =><=

> Speaking of your observation that "Internal loci of control are
> sometimes useful" wouldn't it be wise for us to switch the exponential
> growth of exploiting the earth to refining our uses of it before it's
> too late?

No.  It would not be _wise_ to switch because all we'd be doing is
switching from one misconception to another.  If you can:  a) clearly
describe the goal of the switch, b) clearly describe what's happening
now, c) clearly describe what _will_ happen if we switch, and d) show
how (c) leads to (a), _then_ I would be capable of determining the
wisdom of a switch.  Until then, it's just change for change's sake...
or perhaps it's a mild form of revolution just to wrest power from those
who currently have it (which, by the way, I'm all for if I'm one of the
ones that will come into power after the switch ;-).

> On whether this confusion we all experience between information and
> action is robust or not, I certainly accept and observe some
> inconsistency, but think most people remain hamstrung by generally not
> knowing when.   The simple case in point is how easily and confidently
> we understand some things going out of control, as with a singer's voice
> cracking, or a businessman not getting expert help until it's too late,
> or a party or a friendship erupting in thrill that changes to tragedy,
> and see nothing wrong at all with  the speed, complexity and magnitude
> of unknown impacts of decision-making about our permanent life support
> system, doubling, regularly, forever.   It's the inconsistency!    All
> forms of excess look to be much the same problem to me, and can be read
> with the same metrics.  It shouldn't be a tough problem, well except for
> confusing information and action.

The trouble is that we _all_ view only a tiny portion of what's out
there.  And that includes those who believe they have a more synoptic
view than other people.  Actually, those who believe they're smarter
than other people are usually _more_ stupid than the people they accuse
of myopia because they have arrogance convoluted in with their inherent
myopia.

The humble myopic is less myopic than the arrogant myopic.

--
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
Seek simplicity, and distrust it. -- Alfred North Whitehead




cognitive largess (was Re: reductionism)

Phil Henshaw-2
On 6/27/07, Glen E. P. Ropella <gepr at tempusdictum.com> wrote:

> Phil Henshaw wrote:
> > Well, the 'fault' of considering things from multiple points of view is
> > not contradiction, but confusing all those who don't!
>
> Well, for us Discordians, it is certainly not a fault to confuse!  In
> fact, it is our holy obligation.  Hail Eris!  =><=


I think I may have been mistakenly half serious in my above comment... A
consistent reality is always complex enough to have multiple points of view
that are really inconsistent only in the projections, the way a top view and a
side view are hopelessly contradictory unless you realize you're looking at a
physical object and not just some interesting projected 2D display.   It
does confuse that we seem to need to look at real systems with simplifying
projections that look different from each other.   The answer as to which
2D projection is the correct one is what seems most confusing.

> > Speaking of your observation that "Internal loci of control are
> > sometimes useful" wouldn't it be wise for us to switch the exponential
> > growth of exploiting the earth to refining our uses of it before it's
> > too late?
>
> No.  It would not be _wise_ to switch because all we'd be doing is
> switching from one misconception to another.  If you can:  a) clearly
> describe the goal of the switch, b) clearly describe what's happening
> now, c) clearly describe what _will_ happen if we switch, and d) show
> how (c) leads to (a), _then_ I would be capable of determining the
> wisdom of a switch.  Until then, it's just change for change's sake...
> or perhaps it's a mild form of revolution just to wrest power from those
> who currently have it (which, by the way, I'm all for if I'm one of the
> ones that will come into power after the switch ;-).


Same error on my part.   Just thinking at this point, considering where
things are headed, it might be better to look at cooling change down a
little rather than continuing to heat it up explosively as an obsession and
commitment to a steady exploding state...  Just maybe...    The interesting
thing about growth systems is that they're not composed of information, but
physically real things that we see as information because that's what we
view them with.   A useful principle is that growth systems always have a
recoil, like hitting the end of the rubber band and snapping back.   It's
one of the complete certainties of nature (100% +/- 0%).  You can't see it
in the curves (the information) until it hits, but you can learn to read it
early in the process, in the higher derivatives.   It's an extremely
consistent signal, and it allows time for collecting options for ready use
when the feedbacks switch and the information as to where they're going is
better.   I'm talking about predictive observation, watching things coming
by closely observing the processes of nature.
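To put a toe in the water on that claim, here's a toy numerical sketch
(Python with numpy; the curve, the cutoff, and the names are invented for
illustration, not data) of reading the slowdown in the derivatives before
it is visible in the values themselves:

# Toy illustration: the slowdown of a logistic curve shows up in its
# derivatives long before the curve itself visibly levels off.
# (The curve and the 95% cutoff are invented, just for illustration.)
import numpy as np

t = np.linspace(0, 20, 2001)
y = 1.0 / (1.0 + np.exp(-(t - 12)))         # looks exponential early, then recoils

dy = np.gradient(y, t)                      # first derivative: growth rate
inflection = t[np.argmax(dy)]               # where acceleration flips to deceleration
obvious = t[np.argmax(y > 0.95 * y.max())]  # where the leveling-off is visible in y itself

print("slowdown readable in the derivatives near t =", round(float(inflection), 2))
print("but obvious in the curve itself only near t =", round(float(obvious), 2))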

When I said "wouldn't it be wise", I certainly meant it loosely, and your
interpretation is entirely appropriate.    I'm only concerned that the
general discussions of otherwise competent thinking people in no way appear
to contemplate easing off on the exponential (% adding) acceleration of ever
more complex change until we actually do 'hit the wall' and completely
lose control of the shock waves of multiplying
repercussions.    It's generally better to turn the wheel before hitting the
wall.    Not all steering would be good steering, naturally, but you don't
need to have all knowledge to have useful clues under uncertainty, and things
that are 100% certain are useful clues.   It's better than no clue at all
anyway.

> > On whether this confusion we all experience between information and
> > action is robust or not, I certainly accept and observe some
> > inconsistency, but think most people remain hamstrung by generally not
> > knowing when.   The simple case in point is how easily and confidently
> > we understand some things going out of control, as with a singer's voice
> > cracking, or a businessman not getting expert help until it's too late,
> > or a party or a friendship erupting in thrill that changes to tragedy,
> > and see nothing wrong at all with  the speed, complexity and magnitude
> > of unknown impacts of decision-making about our permanent life support
> > system, doubling, regularly, forever.   It's the inconsistency!    All
> > forms of excess look to be much the same problem to me, and can be read
> > with the same metrics.  It shouldn't be a tough problem, well except for
> > confusing information and action.
>
> The trouble is that we _all_ view only a tiny portion of what's out
> there.  And that includes those who believe they have a more synoptic
> view than other people.  Actually, those who believe they're smarter
> than other people are usually _more_ stupid than the people they accuse
> of myopia because they have arrogance convoluted in with their inherent
> myopia.
>
> The humble myopic is less myopic than the arrogant myopic.



yep, always with a grain of salt.   Supreme confidence that there is nothing
to know wouldn't seem to fit that principle though....



cognitive largess (was Re: reductionism)

glen ep ropella

Phil Henshaw wrote:
> I think I may have been mistakenly half serious in my above
> comment... A consistent reality is always complex enough to have
> multiple points of view that are really inconsistent only in the
> projections, the way a top view and a side view are hopelessly
> contradictory unless you realize you're looking at a physical object
> and not just some interesting projected 2D display.   It does confuse
> that we seem to need to look at real systems with simplifying
> projections that look different from each other.   The answer as to
> which 2D projection is the correct one is what seems most confusing.

Ahhh.  Sorry.  I tend to be a bit flippant sometimes.  You're making a
very good point, here.  I can't tell you how many times I've argued with
biological modelers... well, actually many people from many different
domains ... where they seem hooked on the idea that there can only be 1
true model of some referent.

My most heated was with a guy who claimed that model selectors (e.g.
AIC) that rely on a posited "perfect" or "optimal" model can actually
help one get at the true generator of whatever data set is being
examined.  It took a lot of verbiage on my part to draw a detailed enough
picture for him to understand that the data were taken by a method that
presumed a model (because all observation requires a model and all
models require observations).  And the best a selector can do is find
that occult model by which the data were collected.  Compounding the
problem is that multiple data sets can _claim_ to be observations on the
same system; but since the experimental protocol is always subtly
different, it is difficult even to triangulate toward _the_ one
true system (or description of the system).  Hence, it's wisest just to
toss the idea of "truth" out the window completely (except while praying
or dredging up whatever personal motivation you need to keep spinning
the hamster wheel) and maintain explicit adherence to your concrete
objectives during each particular project/task.

> When I said "wouldn't it be wise", I certainly meant it loosely, and
> your interpretation is entirely appropriate.    I'm only concerned
> that the general discussions of otherwise competent thinking people
> in no way appear to contemplate easing off on the exponential (%
> adding) acceleration of ever more complex change until we actually
> do 'hit the wall' and completely lose control of the shock waves of
> multiplying repercussions.    It's generally better to turn the wheel
> before hitting the wall.    Not all steering would be good steering,
> naturally, but you don't need to have all knowledge to have useful
> clues under uncertainty, and things that are 100% certain are useful
> clues.   It's better than no clue at all anyway.

Well, in general, I'm conservative.  And I agree that, from a
conservative perspective, if one _must_ change without any specific
knowledge of what to change, the best change is to ratchet back on your
actions.  I.e. do less of everything.  But, there's no convincing reason
to believe that ratcheting back on our burn rate will have a positive
effect.

An added complication is that most _people_, including those who profess
themselves to be defenders of our current biosphere, are greedy and
self-interested above any other trait.  Even the ones that don't realize
that their words and actions are greedy and self-interested are usually
just trying to engineer/coerce the world into being more friendly to
them and their ilk.  For example, those who would have us stop burning
fossil fuels in order to keep the environment comfortable for humans.
They _say_ they care about life, the planet, whatever.  But what they
really want is to preserve their current way of life at the top of the
food chain.  Hence any argument they offer has that bias and must be
reformulated in order to determine the real value of whatever action
they suggest.

> yep, always with a grain of salt.   Supreme confidence that there is
>  nothing to know wouldn't seem to fit that principle though....

It's not that there's nothing to know.  It's just that there's no real
separation between blind action and wise action.  Wisdom, intelligence,
etc. are all ascribed _after_ one has acted and proven
successful in some sense.  Prior to the success, active people are often
labeled quacks or wackos.  The ultimately unsuccessful wackos stay
classified as wackos.  The ultimately successful wackos are
re-classified into geniuses.

The point is that we are what we _do_, not what we think.  And if my
actions cause people pain, then those people will call me a name
associated with that pain.  E.g. If I'm an Enron executive and I get
busted, they call me a criminal.  If I'm a self-aggrandizing biological
idealist ... ?

We see this developing now in farmers chopping down trees to farm for
bio-fuel, the corn shortage in the face of using corn to produce
ethanol, the German beer price increases, etc.  These are not bad ideas.
But, the ultimate consequences determine the value-classification.  An
individual farmer cannot wisely decide (based on his thoughts) whether
chopping down a few trees to farm the land is good or bad.

p.s. In case nobody notices, my ultimate _point_ in this dialog is that
simulation (particularly via concrete modeling methods like ABM) is
_necessary_ to wisdom.  These systems are not analytic/separable (and
functional relationships within them are not analytically soluble).  They
must be examined using simulation.  And to this topic of sustainability,
until we see some concrete, detailed, not-toy, not-abstract models
showing how a particular condition X leads to a particular condition Y,
it's all just mumbo-jumbo.
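
p.p.s. For the curious, the sort of thing I mean is as small as the
throwaway sketch below (plain Python; every agent rule and number is
invented, a cartoon rather than a model of anything in particular).  The
boom and crash at the aggregate level is observed by running the loop, not
solved for in closed form.

# A throwaway agent sketch: agents harvest a shared, regrowing resource,
# overshoot it, and crash.  The aggregate outcome is observed by running
# the loop, not derived analytically.  All rules and numbers are invented.
import random

random.seed(1)
resource = 1000.0
agents = [1.0] * 20                     # each agent's stored "energy"
history = []

for step in range(120):
    resource += 0.05 * resource * (1.0 - resource / 2000.0)   # logistic regrowth
    survivors = []
    for energy in agents:
        take = min(random.uniform(0.8, 1.8), resource)        # harvest attempt
        resource -= take
        energy += take - 1.0                                   # metabolic cost
        if energy > 2.0:                                       # split into two offspring
            survivors.extend([energy / 2.0, energy / 2.0])
        elif energy > 0.0:                                     # survive
            survivors.append(energy)
    agents = survivors
    history.append((step, len(agents), round(resource, 1)))

print("peak (step, agents, resource):", max(history, key=lambda rec: rec[1]))
print("end  (step, agents, resource):", history[-1])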

--
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
None are more hopelessly enslaved than those who falsely believe they
are free. -- Goethe




cognitive largess (was Re: reductionism)

Marcus G. Daniels
Phil Henshaw wrote:
> It does confuse that we seem
> to need to look at real systems with simplifying projections that
> look different from each other.    The answer as to which 2D
> projection is the correct one is what seems most confusing.
Glen E. P. Ropella wrote:
> My most heated was with a guy who claimed that model selectors (e.g.
> AIC) that rely on a posited "perfect" or "optimal" model can actually
> help one get at the true generator of whatever data set is being
> examined.  It took a lot of verbiage on my part to draw a detailed enough
> picture for him to understand that the data were taken by a method that
> presumed a model (because all observation requires a model and all
> models require observations).  And the best a selector can do is find
> that occult model by which the data was collected.
Fine, but more models won't help that problem.    The data is the
data.    In contrast, Phil's example would be addressed by AIC.






cognitive largess (was Re: reductionism)

glen ep ropella

Marcus G. Daniels wrote:
> Fine, but more models won't help that problem.    The data is the
> data.    In contrast, Phil's example would be addressed by AIC.

How so?

I'll reformulate Phil's statement as: "Because understanding a referent
requires multiple simplifying projections (models), the question of
which particular model is the correct one is confusing."  Phil, if that
isn't a good paraphrase, please correct me.

But, if it is a good paraphrase, selectors like the AIC that assume some
ideal perfect (largest) description of the referent will NOT help.  In
fact, they hinder understanding because they _imply_ that there is a
single, true, perfect, ideal, largest model, which is false.

Operationally, they do not help because the data are at least one
(probably many many more) modeling level(s) removed from the referent
system.

To be clear, the process works this way:

1) casual observation and psychological induction leads to a (usually
mental) model
2) an experiment is designed based on that model
3) data are taken from the experiment
4) a more rigorous model is derived from the data (perhaps regulated by
the prior model)
5) repeat as necessary

Each data set is derived from a prior model.  Hence, the best a
data-driven model selector can do is find the model(s) upon which the
data are based.  It only targets the referent to the extent that the
original (usually mental) models target the referent.

And if those original models were induced by a perverse person with
perverse thoughts, then the original model is probably way off and
false.  Hence, selectors like the AIC will only lead us to that false
model, not to the truth.  Worse yet, because they used a rigorous
(hermeneutic) mathematical technique to find that false model, they will
be strongly inclined to believe in that false model.  Just like the old
adage "Don't believe everything you read", we could state an analogous
adage in modeling and simulation:  "Don't believe everything described
mathematically."  Of course, those aphorisms are way too moderate.
[grin]  In fact, to quote Sturgeon "Sure, 90% of science fiction is
crud. That's because 90% of everything is crud."  So, we should change
the aphorism to "Don't believe 90% of the math you read."
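
To make the operational point concrete, here's a toy sketch (Python/numpy;
the candidate set, the sample range, and the noise level are all invented):
the AIC dutifully ranks the candidates we chose to offer it, fit to data
gathered where a prior model told us to look, and it never reaches past
those candidates to the occult generator.

# Toy model selection: AIC ranks the candidate models we chose to offer,
# fit to data gathered where a prior (mental) model told us to sample.
# It cannot point past the candidates or the sampling design.
# (Candidates, data, and noise level are all invented for illustration.)
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 3.0, 40)                 # where the "experiment" sampled
y = np.sin(x) + rng.normal(0.0, 0.1, x.size)  # the occult generator plus noise

def aic_for_degree(k):
    coeffs = np.polyfit(x, y, k)
    rss = np.sum((np.polyval(coeffs, x) - y) ** 2)
    n = x.size
    return n * np.log(rss / n) + 2 * (k + 1)  # Gaussian-likelihood AIC, up to a constant

scores = {k: aic_for_degree(k) for k in range(1, 6)}
best = min(scores, key=scores.get)
print("AIC scores:", {k: round(v, 1) for k, v in scores.items()})
print("selected polynomial degree:", best,
      "(a polynomial, not the sine that generated the data)")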

> Phil Henshaw wrote:
>> It does confuse that we seem
>> to need to look at real systems with simplifying projections that
>> look different from each other.    The answer as to which 2D
>> projection is the correct one is what seems most confusing.
>>
> Glen E. P. Ropella wrote:
>> My most heated was with a guy who claimed that model selectors (e.g.
>> AIC) that rely on a posited "perfect" or "optimal" model can actually
>> help one get at the true generator of whatever data set is being
>> examined.  It took a lot of verbiage on my part to draw a detailed enough
>> picture for him to understand that the data were taken by a method that
>> presumed a model (because all observation requires a model and all
>> models require observations).  And the best a selector can do is find
>> that occult model by which the data was collected.

--
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
Shallow men believe in luck ... Strong men believe in cause and effect.
-- Ralph Waldo Emerson




cognitive largess (was Re: reductionism)

Marcus G. Daniels
Glen E. P. Ropella wrote:
> To be clear, the process works this way:
>
> 1) casual observation and psychological induction leads to a (usually
> mental) model
> 2) an experiment is designed based on that model
> 3) data are taken from the experiment
> 4) a more rigorous model is derived from the data (perhaps regulated by
> the prior model)
> 5) repeat as necessary
That can happen, but it isn't necessary.   An extreme example is study
of financial instruments, where it is very clear how the `thing' works,
and the process of measurement has nothing to do with modelers that
might find the thing useful or interesting to study.  The data from
trading systems is complete and precise.   The psychology or motives of
the people using the system of course aren't directly measured, but at
least this is not an example of a pathological circularity of data
collection being biased by a mental model of the system.

In practice, I think often 1-3 are decoupled from 4, especially for `big
science' where a lot of competitive people are involved.

Even if they were often tightly coupled, it strikes me as absurd to
equate the value of multiple perspectives with experiment.  (Not that
you are..)  If a whole range of mechanisms or parameterizations can
reproduce an inadequate validation data set, and there is no way to
imagine a way to measure a better one, then that's a clue the modeling
exercise may lack utility.




cognitive largess (was Re: reductionism)

glen ep ropella

Marcus G. Daniels wrote:

> Glen E. P. Ropella wrote:
>> To be clear, the process works this way:
>>
>> 1) casual observation and psychological induction leads to a (usually
>> mental) model
>> 2) an experiment is designed based on that model
>> 3) data are taken from the experiment
>> 4) a more rigorous model is derived from the data (perhaps regulated by
>> the prior model)
>> 5) repeat as necessary
>
> That can happen, but it isn't necessary.   An extreme example is study
> of financial instruments, where it is very clear how the `thing' works,
> and the process of measurement has nothing to do with modelers that
> might find the thing useful or interesting to study.  The data from
> trading systems is complete and precise.   The psychology or motives of
> the people using the system of course aren't directly measured, but at
> least this is not an example of a pathological circularity of data
> collection being biased by a mental model of the system.

Well, I suppose that begs the question of what we mean by "system".  In
the case of the financial machinery, it is clear how that part of the
thing works.  But, it is not at all clear how the whole system works.
If it were, then predictive algorithms would be reasonably simple to
develop.  Even if they weren't analytically soluble, they would submit
to brute force and simulation.  To my knowledge nobody's gotten rich
with such algorithms, and not for lack of attention.  I'm too ignorant
of the domain to know the particular reasons for the failures.  But, I
suspect it has to do with the other part of the system (psychology or
motives of the people using the machinery) not having such a simple
(non-complex) model.

Your point folds us nicely back into the conversation about
"nonlinearity".  You can't cleanly separate the machinery from the users
of the machinery.  It's not a "system" if you do that. [grin] Just
because we have complete and precise understanding of some particular
(separable) aspect of a system doesn't mean we can predict the behavior
of the system.  It takes a complete and precise understanding of
multiple aspects (or a single aspect that accounts for some large
percentage of the system... say 80% ... but it's questionable whether
the concept of a "large aspect" even makes sense).

So, we end up at the same spot with a circularity of models modeling
models and the only real touch point is between psychological induction
and the referent (where the referent is the system being studied, not
some minor machinery within the system).

BTW, I don't think it's appropriate to call this circularity
"pathological".  "Pathology" generally means something like "abnormal"
or a maladaptive departure from a normal condition.  Because this
circularity is pervasive in the interaction between our thoughts and the
reality around us, I think it's safe to say the circularity is quite
normal.  Just like nonlinearity is the norm and linearity the exception,
circularity and paradox are the norm when humans interact with their
environment and acyclic processes are pathological.  Note that I'm not
suggesting that acyclicity (?) is maladaptive or bad... though it
certainly might be.

> In practice, I think often 1-3 are decoupled from 4, especially for `big
> science' where a lot of competitive people are involved.
>
> Even if they were often tightly coupled, it strikes me as absurd to
> equate the value of multiple perspectives with experiment.  (Not that
> you are..)  If a whole range of mechanisms or parameterizations can
> reproduce an inadequate validation data set, and there is no way to
> imagine a way to measure a better one, then that's a clue the modeling
> exercise may lack utility.

I agree completely, here, at least as a matter of attentional emphasis.
It's true that it is very important to emphasize experiment over
multi-modeling.  However, that's _only_ true because the multi-modeling
occurs as a natural consequence of the decentralized nature of
collective human science.  Because we're separate humans and have
separate perspectives, we will _always_ have multiple models.  Even in
the most extreme apprenticeship (parent to child), the apprentice always
comes out with a different model than the master.  So, there's no
need to emphasize multi-modeling.  It happens regardless.

But, in terms of methodology (the study of method), it can be important
to understand that multi-modeling is required for science.

--
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
You work three jobs?  Uniquely American, isn't it? I mean, that is
fantastic that you're doing that. -- George W. Bush




cognitive largess (was Re: reductionism)

Marcus G. Daniels
Glen E. P. Ropella wrote:
> Well, I suppose that begs the question of what we mean by "system".  In
> the case of the financial machinery, it is clear how that part of the
> thing works.  But, it is not at all clear how the whole system works.
> If it were, then predictive algorithms would be reasonably simple to
> develop.  
It's price and risk that matter to quantitative traders.  Price and
risk are measurable on any historical timescale that might be of
interest.   Bias in data collection takes the form of "How do I reduce
all this individual trading activity to something I can manage?"

That sort of `bias' can be greatly attenuated using terabyte RAM
multiprocessor systems.  No lie, it's feasible to correlate all sorts
of things with thousands of processors at your disposal.   Why model
when you can measure?
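
(Roughly what I mean, as a toy sketch -- Python/numpy, with random numbers
standing in for the real tick data: once the history fits in memory, the
whole pairwise correlation structure is just a computation, not a model.)

# Toy sketch: with the whole history in memory, the pairwise correlation
# structure is simply computed.  Random data stand in for real returns,
# so the "strongest" pair found here is spurious by construction.
import numpy as np

rng = np.random.default_rng(42)
n_instruments, n_ticks = 500, 20_000
returns = rng.normal(0.0, 1.0, (n_instruments, n_ticks))  # stand-in per-tick returns

corr = np.corrcoef(returns)                # full pairwise correlation matrix
upper = np.triu(np.abs(corr), k=1)         # ignore self-correlations
i, j = np.unravel_index(np.argmax(upper), corr.shape)
print("most correlated pair of instruments:", i, j, round(float(corr[i, j]), 4))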

I suppose a lot of `ideal experiments' are obvious enough to experts,
but perceived as too expensive.    In other situations, like in the
study of social phenomena, it may not be clear at all what to measure or
for that matter how to do it, so there's the risk that the conventional
wisdom comes to dominate the kinds of data that are sought.




cognitive largess (was Re: reductionism)

glen ep ropella

Marcus G. Daniels wrote:
> It's price and risk that matter to quantitative traders.  Price and
> risk are measurable on any historical timescale that might be of
> interest.   Bias in data collection takes the form of "How do I reduce
> all this individual trading activity to something I can manage?"
>
> That sort of `bias' can be greatly attenuated using terabyte RAM
> multiprocessor systems.  No lie, it's feasible to correlate all sorts
> of things with thousands of processors at your disposal.   Why model
> when you can measure?

But you're focusing on extrapolation, right?  It strikes me that you're
not talking about heuristic (a.k.a. explanatory) models but about
aggregative extrapolation.

In general, extrapolative models can be developed fairly easily.  All
one need do is run as much data as possible through the best fitting
algorithms available.  The result is a black-box machine that can inter-
and extra-polate, but that can't provide any [pre|retro]diction beyond
the tolerances used to fit the data.

And as I understand it, those tolerances are pretty tight.  Granted, if
you toss more resources at the problem (more data, more algorithms, more
hardware, more people, etc), you should be able to loosen up those
tolerances and make better [pre|retro]diction.  But, you will always be
limited by the data (and other resources) you chose.

(And, BTW, that data (and other resources... especially the people) are
built upon prior (always false) models.)
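
A trivial sketch of what that limitation looks like (Python/numpy; the
generator, the polynomial degree, and the ranges are all invented): a
flexible black-box fit that looks fine inside the data it was fit to
offers no warranty the moment you step outside that range.

# Toy sketch: a flexible black-box fit interpolates nicely but carries no
# warranty outside the range of the data it was fit to.
# (The generator, degree, and evaluation points are invented.)
import numpy as np

rng = np.random.default_rng(7)
x_train = np.linspace(0.0, 5.0, 30)
y_train = np.exp(0.3 * x_train) + rng.normal(0.0, 0.05, x_train.size)  # "measurements"

coeffs = np.polyfit(x_train, y_train, 9)    # flexible black-box fit to the data

for x in (2.5, 8.0):                        # inside, then outside, the fitted range
    truth = np.exp(0.3 * x)
    fit = np.polyval(coeffs, x)
    print(f"x={x}: truth={truth:.2f}  black-box fit={fit:.2f}")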

In contrast, if one were to develop heuristic models (with as many
concrete mechanistic details as possible), in principle, the ability to
[retro|pre]dict would show a manifold increase.

So, when you ask "why model when you can measure" the answer depends on
whether the model is a black box aggregative function induced from data
or a white box particular analog to the referent.  If the model is the
former type, then you're right.  More measurement will always lead to
better models of that type.  If, however, the model is a kind of schema
(as in databases) in which to capture and understand the data you've
taken so far (a heuristic model), then the answer is that modeling
allows you to understand the data you've taken to the degree that you
may not _need_ to do any more measurement!

And that's only _if_ the objective of the effort is pure
[pre|retro]diction regardless of understanding.  If understanding (i.e.
science) is an objective, then there are additional reasons to model
regardless of how much measurement you can afford.

p.s. I just re-read that and it's clear that I use too many
parentheses... Oh well, it is just a mailing list after all. [grin]
--
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
The most merciful thing in the world, I think, is the inability of the
human mind to correlate all its contents. -- H. P. Lovecraft




cognitive largess (was Re: reductionism)

Marcus G. Daniels
In reply to this post by glen ep ropella
Glen E. P. Ropella wrote:
> You can't cleanly separate the machinery from the users
> of the machinery.  It's not a "system" if you do that. [grin] Just
> because we have complete and precise understanding of some particular
> (separable) aspect of a system doesn't mean we can predict the behavior
> of the system.  It takes a complete and precise understanding of
> multiple aspects (or a single aspect that accounts for some large
> percentage of the system... say 80% ... but it's questionable whether
> the concept of a "large aspect" even makes sense).
>  
Right, in this case the trading book has a public interface and some
physics that come from its rules.
I'm not saying that understanding those physics ought to give the best
insight into the ways firms execute their strategies.  Rather,
simulation of the book, together with general financial indicators can
help separate expected changes from novel ones.  Some datasets even give
trader identities to the resolution of firms (e.g. Goldman Sachs, not
Joe Smith), so those signatures can potentially be pretty information
rich.



cognitive largess (was Re: reductionism)

Marcus G. Daniels
In reply to this post by glen ep ropella
Glen E. P. Ropella wrote:
> But you're focusing on extrapolation, right?  It strikes me that you're
> not talking about heuristic (a.k.a. explanatory) models but about
> aggregative extrapolation.
More like looking for exploitable, repeatable cause/effect
inefficiencies in an ocean of activity.    My objection to ABM in this
kind of context is that the individual `strategies' may range from super
smart to virtually random.    The data stream can tell you when someone
made money, and there are lots of these examples, each with a particular
state of the book (for that equity and maybe a bunch of related ones)
and of the real world when it happened.    Correlating and deconstructing
the two into a taxonomy seems much more direct than tweaking agent rules
which are just educated guesses to begin with.   One could think about
skipping the deconstruction/taxonomy stuff for an automated learning
procedure (e.g. a hidden Markov model) that reproduced the individual firm
trading patterns given certain signals, but since money is on the line
perhaps that's not a great plan...
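
(For concreteness, the rough shape of that automated-learning route -- a
toy sketch that assumes Python with numpy and the hmmlearn package, with
random numbers standing in for the signal/trade stream; nothing here
resembles a real strategy.)

# Toy sketch of the automated-learning route: fit a hidden Markov model to
# a stream of (signal, trade-size) observations and ask which hidden
# "regime" each tick belongs to.  The data are random stand-ins, and the
# hmmlearn package is assumed to be installed.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
obs = np.column_stack([rng.normal(0.0, 1.0, 5000),       # stand-in market signal per tick
                       rng.lognormal(0.0, 1.0, 5000)])   # stand-in trade size per tick

model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
model.fit(obs)                                           # learn 3 hidden "regimes"
regimes = model.predict(obs)                             # most likely regime per tick
print("ticks assigned to each inferred regime:", np.bincount(regimes))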




Seminal Papers in Complexity

Phil Henshaw-2
In reply to this post by glen ep ropella

Glen E. P. Ropella wrote:

> Sent: Wednesday, June 27, 2007 2:38 PM
> Phil Henshaw wrote:
> > Na, I think even the most sophisticated math misses all the truly
> > supple shape of natural form, and it's of huge significance in our
> > misunderstanding of natural phenomena.
>
> I _strongly_ disagree with that.  I talk to many people who
> say things like "I'm not good at math" or "I don't understand
> math".  And I can't help thinking that they must not be
> talking about the same thing I'm talking about when I say
> "math".  It seems impossible for a person to not understand
> math because math is pervasive in human activity.
oh for sure, people almost universally are quite confident that what
they imagine is what any person they listened to was intending to say,
and it mostly ain't so.


> For example, all musicians are mathematicians.  All brick
> layers are mathematicians.  All lawyers are at least
> logicians if not mathematicians.  Architects, nurses, truck
> drivers, corporate bureaucrats, and some skate punks are
> mathematicians.
Well, normal intuitions do in fact often appear to emulate the juggling
of exceedingly sophisticated differential equations, which we then
display little or no ability to trace and confirm.. etc.   It's part of
the 'fun'.


> Now, it's true that most of these people don't know how to
> _describe_ what it is that they do.  They just _do_ it
> without trying to formalize what they're doing.  But, as
> Wittgenstein, Tarski, Goedel, and many others have shown us,
> math is _more_ than formalization.  The working mathematician
> doesn't spend her days trying to demonstrate the differences
> between ZF and PA.  The working mathematician spends her days
> thinking about the world and trying to _intervene_ in the
> world to make something happen (or to explain, predict,
> describe some thing).
but a secondary reason for that phenomenon, it appears to me, is that
nature isn't 'doing' math either, exactly, but making up altogether new
math continually to fool us into thinking there must be some kind of
formula.   That provides a fairly concrete behavioral reason for why
intelligent life is quite inexplicable.  It's 'inexplicable' because
explanation is actually not how it works!   Neither nature nor normal
intelligence follows explanations.   That's got the cart before the
horse.

>
> Hence, math doesn't _miss_ the "supple shape of natural
> form", it is derived directly from such natural form.  
> When/if it misses some element, it is because the
> mathematician failed to capture that element, usually on purpose.
Perhaps you'll allow that there may be a 'grey' area created if the
mathematician creates such a good illusion of reality that an observer
has a hard time telling the difference.  Is there a difference then or
not?  Well, "can't tell" is the correct answer.   Then the question is
whether that difference ever makes a difference, and whether one might
be better off watching what's really happening and ignoring the model of
the mathematician (every third wink at least), which would lead you far
astray if you were not to pay attention to the widening discrepancy of
things "going wrong with the model"... QED.   "You just gotta pay
attention" is the one rule that one necessarily always needs to fall
back on.

>
> p.s. My argument above does not make the word "mathematician"
> useless by ascribing it to _everyone_ (as Bristol did when
> implying that every thing is emergent).  
well, I too think there is a very non-trivial understanding of 'every
thing is emergent', since in fact all physical phenomena do individually
emerge as evolving complex systems, whether our information makes that
entirely predictable or not.  Our information may miss it, so it may not
appear to be, but close observation demonstrates quite conclusively that
the physical stuff is.

> It is only ascribed
> to those who attempt to form rigorous conceptions of the
> things around them and use those conceptions to interact with
> the world.  
not in the sense I used above.  It's a practical, operational,
productivity-improving sense of 'everything emerges': learning how to
watch what's happening to get better kinds of clues.


> There are some (Paul Feyerabend comes to
> mind) who make the case that such strict adherence to method
> can impede understanding.  And that may be true.  (I believe
> it is.)  As such, there are plenty of people out there who
> actively resist the development of and application of
> rigorous method.  Those are not (always) mathematicians.
>
> --
> glen e. p. ropella, 971-219-3846, http://tempusdictum.com
> ... given any rule, however "fundamental" or "necessary" for
> science, there are always circumstances when it is advisable
> not only to ignore the rule, but to adopt its opposite. --
> Paul Feyerabend

Ah well, play is good, particularly good, but not always to take
entirely seriously itself, perhaps....

Phil



