cognitive largess (was Re: reductionism)

Posted by glen ep ropella on
URL: http://friam.383.s1.nabble.com/Seminal-Papers-in-Complexity-tp524047p524147.html

Marcus G. Daniels wrote:
> It's price and risk that matters to quantitative traders.  Price and
> risk are measurable on any historical timescale that might be of
> interest.   Bias in data collection takes the form of "How do I reduce
> all this individual trading activity to something I can manage?"
>
> That sort of `bias' can be greatly attenuated using terabyte RAM
> multiprocessor systems.  No lie, it's feasible to correlate all sorts
> of things with thousands of processors at your disposal.   Why model
> when you can measure?

But you're focusing on extrapolation, right?  It strikes me that you're
not talking about heuristic (a.k.a. explanatory) models but about
aggregative extrapolation.

In general, extrapolative models can be developed fairly easily.  All
one need do is run as much data as possible through the best fitting
algorithms available.  The result is a black-box machine that can inter-
and extra-polate, but that can't provide any [pre|retro]diction beyond
the tolerances used to fit the data.

And as I understand it, those tolerances are pretty tight.  Granted, if
you toss more resources at the problem (more data, more algorithms, more
hardware, more people, etc), you should be able to loosen up those
tolerances and make better [pre|retro]diction.  But, you will always be
limited by the data (and other resources) you chose.

(And, BTW, those data (and other resources... especially the people)
are built upon prior (always false) models.)
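
To make that concrete, here's a toy sketch (in Python; the decay
process, the noise level, and the polynomial degree are all made-up
illustrations, not anything Marcus described): a black-box polynomial
fit that tracks the data nicely inside the observed range and falls
apart the moment you ask it to extrapolate.

  import numpy as np

  rng = np.random.default_rng(0)

  def referent(t):
      # The hypothetical process the data actually came from.
      return np.exp(-0.5 * t)

  # Observations only cover t in [0, 4].
  t_obs = np.linspace(0.0, 4.0, 40)
  y_obs = referent(t_obs) + rng.normal(0.0, 0.01, t_obs.size)

  # Black-box fit: a degree-7 polynomial chosen purely to minimize residuals.
  blackbox = np.poly1d(np.polyfit(t_obs, y_obs, deg=7))

  # Inside the fitted range, the black box interpolates within tolerance...
  print("error at t=2 (interpolation):", abs(blackbox(2.0) - referent(2.0)))
  # ...but beyond it there's no [pre|retro]diction worth having.
  print("error at t=8 (extrapolation):", abs(blackbox(8.0) - referent(8.0)))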

In contrast, if one were to develop heuristic models (with as many
concrete mechanistic details as possible), in principle, the ability to
[retro|pre]dict would show a manifold increase.
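
Here's the matching toy sketch of the white-box alternative (again,
the first-order decay mechanism and its single rate parameter are my
illustrative assumptions): the data serve only to estimate a parameter
of a model whose form comes from the mechanism, and the extrapolation
holds up.

  import numpy as np

  rng = np.random.default_rng(0)
  t_obs = np.linspace(0.0, 4.0, 40)
  y_obs = np.exp(-0.5 * t_obs) + rng.normal(0.0, 0.01, t_obs.size)

  # Mechanistic commitment: first-order decay, y(t) = exp(-rate * t).
  # The data only estimate the one meaningful parameter, via a
  # log-linear least-squares fit.
  rate = -np.polyfit(t_obs, np.log(np.clip(y_obs, 1e-9, None)), deg=1)[0]

  def whitebox(t):
      # The fitted mechanism, not merely a fitted curve.
      return np.exp(-rate * t)

  print("estimated rate (true value 0.5):", rate)
  # Beyond the observed range, the mechanism keeps [pre|retro]dicting.
  print("error at t=8 (extrapolation):", abs(whitebox(8.0) - np.exp(-0.5 * 8.0)))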

So, when you ask "why model when you can measure?", the answer depends
on whether the model is a black-box aggregative function induced from
data or a white-box particular analog to the referent.  If the model is
the former type, then you're right.  More measurement will always lead
to better models of that type.  If, however, the model is a kind of
schema (as in databases) in which to capture and understand the data
you've taken so far (a heuristic model), then the answer is that
modeling allows you to understand the data you've taken to the degree
that you may not _need_ to do any more measurement!

And that's only _if_ the objective of the effort is pure
[pre|retro]diction regardless of understanding.  If understanding (i.e.
science) is an objective, then there are additional reasons to model
regardless of how much measurement you can afford.

p.s. I just re-read that and it's clear that I use too many
parentheses... Oh well, it is just a mailing list after all. [grin]
--
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
The most merciful thing in the world, I think, is the inability of the
human mind to correlate all its contents. -- H. P. Lovecraft
