Posted by glen e. p. ropella
URL: http://friam.383.s1.nabble.com/Seminal-Papers-in-Complexity-tp524047p524145.html
Marcus G. Daniels wrote:
> Glen E. P. Ropella wrote:
>> To be clear, the process works this way:
>>
>> 1) casual observation and psychological induction leads to a (usually
>> mental) model
>> 2) an experiment is designed based on that model
>> 3) data are taken from the experiment
>> 4) a more rigorous model is derived from the data (perhaps regulated by
>> the prior model)
>> 5) repeat as necessary
>
> That can happen, but it isn't necessary. An extreme example is study
> of financial instruments, where it is very clear how the `thing' works,
> and the process of measurement has nothing to do with modelers who
> might find the thing useful or interesting to study. The data from
> trading systems are complete and precise. The psychology or motives of
> the people using the system of course aren't directly measured, but at
> least this is not an example of a pathological circularity of data
> collection being biased by a mental model of the system.
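To make steps 1-5 concrete, here is a minimal sketch of the loop in
Python. Every piece of it is a hypothetical placeholder (the fake linear
"experiment", the averaging "fit", the prior-blending), chosen only to
show the shape of the iteration, not anything from the thread:

    # Steps 1-5 as a loop; all functions are made-up stand-ins.

    def casual_observation():
        # step 1: a rough mental model, here just a guessed slope
        return {"slope": 1.0}

    def design_experiment(model):
        # step 2: pick inputs the current model treats as informative
        return [0.5, 1.0, 2.0]

    def run_experiment(inputs):
        # step 3: take data (stubbed with a fake linear response)
        return [(x, 2.0 * x + 0.1) for x in inputs]

    def fit_model(data, prior):
        # step 4: derive a more rigorous model, regulated by the prior
        slopes = [y / x for x, y in data]
        slope = sum(slopes) / len(slopes)
        return {"slope": 0.9 * slope + 0.1 * prior["slope"]}

    model = casual_observation()
    for _ in range(3):  # step 5: repeat as necessary
        data = run_experiment(design_experiment(model))
        model = fit_model(data, prior=model)
    print(model)  # the slope settles near 2, biased a bit by the fake intercept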
Well, I suppose that raises the question of what we mean by "system". In
the case of the financial machinery, it is clear how that part of the
thing works. But, it is not at all clear how the whole system works.
If it were, then predictive algorithms would be reasonably simple to
develop. Even if the models weren't analytically soluble, they would
yield to brute force and simulation. To my knowledge nobody's gotten rich
with such algorithms, and not for lack of attention. I'm too ignorant
of the domain to know the particular reasons for the failures. But, I
suspect it has to do with the other part of the system (psychology or
motives of the people using the machinery) not having such a simple
(non-complex) model.
Your point folds us nicely back into the conversation about
"nonlinearity". You can't cleanly separate the machinery from the users
of the machinery. It's not a "system" if you do that. [grin] Just
because we have complete and precise understanding of some particular
(separable) aspect of a system doesn't mean we can predict the behavior
of the system. Prediction takes a complete and precise understanding of
multiple aspects (or of a single aspect that accounts for some large
percentage of the system... say 80%... though it's questionable whether
the concept of a "large aspect" even makes sense).
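That claim is easy to see in a toy setting. The sketch below (everything
about it assumed; it's only an illustration) couples two logistic maps:
one plays the perfectly-understood "machinery", the other the unmeasured
"users". A forecast built from the known half alone, ignoring the weak
coupling, drifts off the true trajectory within a handful of steps:

    # One map ("machinery") is known exactly; the other ("users") is not.
    # A weak coupling ties them together.

    def step(m, u, eps=0.02):
        m_next = 3.8 * m * (1.0 - m) + eps * (u - m)
        u_next = 3.7 * u * (1.0 - u) + eps * (m - u)
        return m_next, u_next

    m, u = 0.4, 0.3
    m_alone = m  # a forecast built from the known machinery only
    for t in range(20):
        m, u = step(m, u)
        m_alone = 3.8 * m_alone * (1.0 - m_alone)  # ignores the users
        print(t, round(m, 3), round(m_alone, 3), round(abs(m - m_alone), 3))

    # The gap grows to order one: precise knowledge of one separable
    # aspect does not buy prediction of the coupled whole.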
So, we end up at the same spot: a circularity of models modeling models,
where the only real touch point is between psychological induction and
the referent (the referent being the system under study, not some minor
machinery within it).
BTW, I don't think it's appropriate to call this circularity
"pathological". "Pathology" generally means something like "abnormal"
or a maladaptive departure from a normal condition. Because this
circularity is pervasive in the interaction between our thoughts and the
reality around us, I think it's safe to say the circularity is quite
normal. Just like nonlinearity is the norm and linearity the exception,
circularity and paradox are the norm when humans interact with their
environment, and acyclic processes are pathological. Note that I'm not
suggesting that acyclicity (?) is maladaptive or bad... though it
certainly might be.
> In practice, I think often 1-3 are decoupled from 4, especially for `big
> science' where a lot of competitive people are involved.
>
> Even if they were often tightly coupled, it strikes me as absurd to
> equate the value of multiple perspectives with experiment. (Not that
> you are.) If a whole range of mechanisms or parameterizations can
> reproduce an inadequate validation data set, and there is no way to
> imagine a way to measure a better one, then that's a clue the modeling
> exercise may lack utility.
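Marcus's point about a whole range of mechanisms reproducing an
inadequate validation set is also easy to make concrete. In the toy
sketch below (all of it assumed, purely for illustration), two different
"mechanisms" pass the same three-point validation exactly, then disagree
wildly away from the validated range:

    # Both models reproduce the validation set exactly...
    val = [(0.0, 0.0), (1.0, 1.0), (2.0, 4.0)]

    def model_a(x):
        return x * x

    def model_b(x):
        # same quadratic plus a term that vanishes at every validation point
        return x * x + 5.0 * x * (x - 1.0) * (x - 2.0)

    for x, y in val:
        assert model_a(x) == y and model_b(x) == y  # both "validate"

    # ...yet make very different predictions elsewhere.
    print(model_a(3.0), model_b(3.0))  # 9.0 vs 39.0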
I agree completely here, at least as a matter of attentional emphasis.
It's true that it is very important to emphasize experiment over
multi-modeling. However, that's _only_ true because the multi-modeling
occurs as a natural consequence of the decentralized nature of
collective human science. Because we're separate humans and have
separate perspectives, we will _always_ have multiple models. Even in
the most extreme apprenticeship (parent to child), the apprentice always
comes out with a different model than the master. So, there's no
need to emphasize multi-modeling. It happens regardless.
But, in terms of methodology (the study of method), it can be important
to understand that multi-modeling is required for science.
--
glen e. p. ropella, 971-219-3846, http://tempusdictum.com

You work three jobs? Uniquely American, isn't it? I mean, that is
fantastic that you're doing that. -- George W. Bush