Re: Reductionism - was: Young but distant galaxies
Posted by Kenneth Lloyd
URL: http://friam.383.s1.nabble.com/Young-but-distant-gallaxies-tp839193p1075494.html
Robert,
Are you referring to the Casimir effect in a
vacuum? I suppose it all depends on how you conceptualize a
void.
I would certainly agree that inequality and
irreversibility are forms of asymmetry. Asymmetry is what
happens in reflection and dispersion in recurrent networks. There are
temporal forms of asymmetry too, which were part of the original discussion about
distant galaxies, the speed of light, and what inferences we can make about
distant objects from light that is possibly billions of years old and is only
now reaching us.
Read: The Ransom of Red Shift - no, don't, I got
confused. That was The Ransom of Red Chief. Never mind.
Ken
I'm reading "The Book of Nothing" by John D.
Barrow, which begins with a history of the concepts of zero, nothing, 0 (the
placeholder) and the void, and moves smoothly on through sets and on to
quantum physics. The book raises lots of questions for me, and Ken's post
struck a chord. On page 235:
"Yet, despite the symmetry of the laws of
Nature, we observe the outcomes of those symmetrical laws to be
asymmetrical states and structures. Each of us is a complicated
asymmetrical outcome of the laws of electromagnetism and gravity. ... One of
Nature's deep secrets is the fact that the outcomes of the laws of Nature do
not have to possess the same symmetries as the laws themselves.... it is
possible to have a Universe governed by a very small number of simple
symmetrical laws (perhaps just a single law) yet manifesting a stupendous
array of complex, asymmetrical states and structures that might even be able
to think about themselves."
If physicists find that single law (a
Grand Unified Theory, perhaps?), isn't that the ultimate in reductionism?
Everything else is just playing in the resulting stardust.
So is the
study of complexity just another way of looking at the
asymmetries?
Apparently Descartes, too, denied that a vacuum could exist
(ibid. p. 119), let alone 0, but physicists'
current ideas of what a vacuum is make it something
other than a complete void: it possesses zero-point energy. So maybe Descartes
had a point?
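For what it's worth (my own gloss, not a quote from the book): the standard
quantum statement behind "zero-point energy" is that each field mode of angular
frequency omega, treated as a harmonic oscillator, keeps a ground-state energy
of half a quantum,

    E_0 = \tfrac{1}{2}\hbar\omega,

so even with no particles present the vacuum carries energy.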
Robert C
Kenneth Lloyd wrote:
Steve,
Good job on the defense of a reductionist
position. I use a five-phase approach to the study of complex
systems.
Definition - Analysis - Normalization - Synthesis -
Realization (DANSR)
Reductionism has its place in the analytical
phase at equilibrium. Analysis is normally a study of integrable,
often linear systems, but it can be accomplished on non-linear,
feed-forward systems as well. The synthesis phase puts
information re: complex behavior and emergence back into the integrated mix
and may be "analyzed" in non-linear, recurrent networks. This is
actually a probabilistic inversion of analysis as described in Inverse
Theory.
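As a concrete toy version of that forward/inverse pairing (my own Python/NumPy
illustration, not Ken's DANSR tooling): the forward problem maps model
parameters to predicted observations, and the inverse problem estimates the
parameters back from noisy data.

# Toy forward/inverse sketch (illustrative assumptions: linear model, Gaussian noise).
import numpy as np

rng = np.random.default_rng(0)
G = rng.normal(size=(20, 3))                     # forward operator: d = G @ m
m_true = np.array([1.0, -2.0, 0.5])              # "true" model parameters
d_obs = G @ m_true + 0.05 * rng.normal(size=20)  # forward problem: noisy observations

# Inverse problem: least-squares estimate of m from d_obs
# (the maximum-likelihood answer under Gaussian noise).
m_est, *_ = np.linalg.lstsq(G, d_obs, rcond=None)
print("estimated parameters:", np.round(m_est, 3))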
Bayesian refinement cycles (forward <-> inverse)
are applied to new information as one progresses through the
DANSR cycle. This refines the effect of new information on prior
information - which I hope folks see is not simply additive - and which
may be entirely disruptive (see the evolution of science itself).
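To make the "not simply additive" point concrete, here is a minimal Bayesian
update in plain Python (my sketch, not Ken's machinery): evidence is folded
into the prior by multiplication and renormalization, so the same datum moves
a confident prior far less than a vague one, and enough surprising data can
overturn the prior entirely.

# Minimal Bayesian refinement sketch (illustrative only).
def update(prior, likelihoods):
    # Posterior over hypotheses: prior * likelihood, renormalized.
    unnorm = [p * l for p, l in zip(prior, likelihoods)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Two hypotheses about a coin: fair vs. heavily heads-biased.
prior = [0.9, 0.1]          # strong prior belief in "fair"
lik_heads = [0.5, 0.9]      # P(heads | hypothesis)

belief = prior
for toss in range(5):       # observe five heads in a row
    belief = update(belief, lik_heads)
    print("after toss", toss + 1, "P(fair) =", round(belief[0], 3))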
The fact that this seems to work for complex systems is
philosophically uninteresting, and may be ignored - so the discussion can
continue.
Final point: Descartes ultimately rejected the
concept of zero because of historical religious orthodoxy - so he personally
never applied it to the continuum extension of negative numbers. All
his original Cartesian coordinates started with 1 on a finite bottom
left-hand boundary - according to Zero: The Biography of a Dangerous Idea,
by Charles Seife.
Ken
Orlando-
You
can find good references in Wikipedia on this topic, including the
Descartes references.
Reductionism
From Wikipedia, the
free encyclopedia
Descartes held that non-human animals could be
reductively explained as automata (De homine,
1662).
Reductionism can
either mean (a) an approach to understanding the nature of complex
things by reducing them to the interactions of their parts, or to
simpler or more fundamental things or (b) a philosophical position that
a complex system is nothing but the sum of its parts, and that an
account of it can be reduced to accounts of individual constituents.[1]
This can be said of objects, phenomena, explanations, theories, and
meanings.
All -
IMO,
Reductionism(a) is a highly
utilitarian approach to understanding complex problems, but in some
important cases insufficient. It applies well to easily observable
systems of distinct elements with obvious relations operating within the
regime they were designed, evolved, or selected for. It applies even
better to engineered systems which were designed, built and tested using
reductionist principles. I'm not sure how useful or apt it is
beyond that. Some might argue that this covers so much, who
cares about what is left over?... and this might distinguish the rest
of us from hard-core reductionists... we are interested in the phenomena,
systems, and regimes where such does not apply. This is perhaps what
defines Complexity Scientists and Practitioners.
Reductionism(b) is
a philosophical extension of (a) which has a nice feel to it for those who
operate in the regime where (a) holds well. To the extent that most
of the (non-social) problems we encounter in our man-made world tend to
lie (by design) in this regime, this is not a bad approach. To the
extent that much of science is done in the service of some kind of
engineering (ultimately to yield a better material, process or product),
it also works well.
Reductionism(b) might be
directly confronted by the "Halting Problem" in computability
theory. Reductionism in its strongest form would suggest that
the behaviour of any given system could ultimately be predicted by
studying the behaviour of its parts. There are certainly
large numbers of examples where this is at least approximately true (and
useful), otherwise we wouldn't have unit-testing in our software systems,
we wouldn't have interchangeable parts, we wouldn't be able to make any
useful predictions whatsoever about anything. But if it were fully
and literally true, it could be applied to programs in Turing-Complete
systems. My own argument here leads me to ponder what (if any)
range of interesting problems lie in the regime between the embarrassingly
reducible and the (non-)halting program.
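For what it's worth, the diagonal construction behind the Halting Problem is
short enough to sketch in Python (my illustration; "halts" below stands for a
hypothetical total decider, which is exactly the thing that cannot exist):

# Sketch of the halting-problem diagonal argument (illustrative only).
def diagonal(halts):
    # Given any claimed decider halts(f) -> bool for zero-argument
    # functions, build a function g that defeats it.
    def g():
        if halts(g):        # decider says "g halts" ...
            while True:     # ... then g loops forever, so the decider is wrong
                pass
        return "halted"     # decider says "g loops", yet g halts: wrong again
    return g

# Try a deliberately naive candidate decider; we never actually call g(),
# we just read off which branch it would take.
def optimistic(f):
    return True             # claims every function halts

g = diagonal(optimistic)
claim = optimistic(g)
print("decider claims g halts:", claim)
print("by construction g", "loops forever" if claim else "halts",
      "- the decider is refuted either way")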
But the suggestion (insistence)
that *all* systems and *all* phenomenology can be understood (and
predicted) simply by reductionism seems to have been dismissed by most
serious scientists some while ago. Complexity Science and
those who study Emergent Phenomena implicitly leave Reductionism behind
once they get into "truly" complex systems and emergent
phenomena.
I, myself, prefer (simple) reductionistic
simplifications over (complex) handwaving ones (see Occam's Razor) most of
the time, but when the going gets tough (or the systems get complex),
reductionism *becomes* nothing more than handwaving in my experience.
- Steve
============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
============================================================