Posted by
glen e. p. ropella-2 on
Nov 03, 2009; 1:07am
URL: http://friam.383.s1.nabble.com/Crutchfield-s-Is-anything-ever-new-tp3917261p3935901.html
First, I pick a few nits just to be sure we're communicating. Please
note that I almost didn't send this because too much of what I say is
just distracting nit-picking. But then I decided that's OK because the
people who don't want to read it can just hit the delete key. ;-)
Thus spake Ted Carmichael circa 09-11-01 05:53 PM:
> I'm actually fine with re-defining 'scale' to mean something along the lines
> of the amount of error in the mapping. That is mostly, I think, what I was
> trying to say.
Well, I couldn't redefine 'scale' that way. For me, the word "scale" is
really a synonym for the word "measure" (noun). It sets the granularity
at which one can distinguish parts. That means it's an aspect or
perspective taken when looking at some phenomena.
Now, it's true that indistinguishability or "wiggle room" is the dual of
scale. So, if I choose scale X, that implies a granularity below which
I can't distinguish things. So, error, noise, and uncertainty are
related to the dual of scale. But just because one cannot measure at a
finer grain does NOT imply that there are finer grained things wiggling
about down below the chosen scale, only that there COULD be.
Translating methods from one person to another involves scale to the
extent that the scale chosen for observing is capable of precisely
mapping measurements of the other guy's actions to controls for your
own. As such, it's not arbitrary, at all. In some contexts, scale must
be carefully chosen and in others scale is irrelevant. We can often
translate methods from human to human because regardless of what scale
is chosen, we are similar all the way up and down all the scales. And
this is also what allows us to trust the false concept of translating
ideas from human to human, which was what my original criticism was
about: Ideas should not be a part of this conversation of novelty.
> Let me see if I can clarify my points a little.
>
> There is definitely a large number of differences between two people using
> the same method to shoot a basket. All the things you mentioned - eye
> movement, exact combination of muscles, etc.
>
> [...]
>
> I agree that two people using the same method is an illusion.
Actually, I was arguing the opposite, that two people (as long as they
have the same physical, chemical, biological, anatomical, etc.
structure) _can_ use the same method. They do NOT because there's
always a mismatch between measurement and reconstruction (sensors and
effectors). But they could if the sensors were precise and complete enough.
But what is an illusion is the generic method. No such thing exists.
If, for example, you try to generalize a method from, say, 20
chimpanzees and 20 humans accomplishing the same objective... let's say
eating something, then the generalization is an illusion. And, I agree
that it's a useful illusion.
> I was trying to say that this
> is a different scale (a wider range of error, perhaps) when compared to two
> shooters using different methods ... e.g., one person shoots in the
> traditional way and one person makes a 'granny shot.'
>
> [...]
>
> But it is a
> useful illusion, when differentiating between the traditional method and the
> granny method. Similarly, when Kareem Abdul-Jabbar used the hook shot, it
> was an innovative (hence: new) method for the NBA. In this way I would say
> there are different levels of abstraction available ... one simply picks the
> level of abstraction that is useful for analysis.
>
> I tried to use the mathematical example of calculating a product to
> illustrate this same idea. When calculating 49 * 12, one might use the
> common method of using first the one's column, then the ten's column, and
> adding the results, etc. Another person may invent a new method, noticing
> that 49 is one less than 50, and that half of 12 is 6, and say the answer is
> 600 - (12 * 1) = 588. Still another may say that 490 + 100 - 2 is the
> answer.
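[Editor's note: the three quoted methods can be sketched in Python. The function names and the exact decomposition are illustrative, not from the original exchange; the point is that all three compute the same product by different routes.]

```python
# Three methods for computing 49 * 12, per the quoted example.

def columns(a, b):
    """Common method: multiply by each decimal digit of b, shift, and add."""
    total, shift = 0, 1
    while b:
        total += a * (b % 10) * shift  # one's column, then ten's column, ...
        b //= 10
        shift *= 10
    return total

def round_up():
    """Treat 49 as 50 - 1: (50 * 12) - (1 * 12) = 600 - 12."""
    return 600 - (12 * 1)

def split():
    """Treat 12 as 10 + 2: 49*10 + 49*2 = 490 + 98 = 490 + 100 - 2."""
    return 490 + 100 - 2

assert columns(49, 12) == round_up() == split() == 588
```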
OK. I don't think methods can be tacitly distinguished by choice of
scale. To be clear, measurements (state) can be distinguished by choice
of scale; but actions (functions, methods) can't. So, if we choose the
coarsest scale for the basketball example, we have two states: 1) ball
at point A and 2) ball in hoop. At that scale, you're right that you
can't distinguish the measurements from the jump, hook, or granny shots.
Then add more states, let's say: 1) ball at point A, 2) ball at point
B, and 3) ball in hoop. Between the 3 methods, state (2) will be
different. So, again, you're right that you can distinguish the TRACE
of the methods.
And you can then argue (by the duality of congruence and bisimilarity)
that a distinction between the measurements implies a distinction
between the methods. But you can't distinguish between methods directly.
What I was arguing with, however, was your statement that the
distinction between thought and action was a somewhat arbitrary choice
of scale. The scale is not at all arbitrary. To distinguish between
the traces of the hook and jump shot, you need a smaller scale than that
required for the distinction between the traces of the hook or jump and
the granny shot.
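[Editor's note: the trace argument above can be sketched concretely. This is a toy model; the trajectory formulas and sampling points are invented for illustration. Two "methods" (shot trajectories) share endpoints, so at the coarsest scale their traces are identical, but a finer sampling distinguishes them.]

```python
def jump_shot(t):
    """Toy trajectory: ball height over normalized time t in [0, 1]."""
    return t + 2 * t * (1 - t)

def granny_shot(t):
    """Toy trajectory with the same endpoints but a higher arc."""
    return t + 4 * t * (1 - t)

def trace(method, sample_points):
    """Measurements of a method at a chosen scale (the sampling grid)."""
    return [method(t) for t in sample_points]

# Coarsest scale: ball at point A, ball in hoop. Traces are identical.
assert trace(jump_shot, [0.0, 1.0]) == trace(granny_shot, [0.0, 1.0])

# Finer scale: add an intermediate state. Now the traces differ.
assert trace(jump_shot, [0.0, 0.5, 1.0]) != trace(granny_shot, [0.0, 0.5, 1.0])
```

Note that even here, only the traces (measurements) are compared; the methods themselves are never distinguished directly.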
All of which goes back to what I tried to say before. The
transferability of methods isn't really about scale but about the
mismatch between the measurements and the actions you have to take to
execute your particular method. I.e. the distinction between thoughts
and actions is NOT a matter of (even a somewhat) arbitrary choice of
scale. It's about whether the twitching we do as part of all our
methods is commensurate with the twitching others do as part of all
their methods. When tracing the method of someone very similar to us,
our methods of interpolating between states are similar enough to allow
us to execute a different method that has the same trace.
> What is innovative about these new methods is not that they ignore the
> common operations of adding, multiplying, and subtracting. It's that these
> basic operations are combined in an innovative way. If Crutchfield asks: is
> this really something new? I would say "yes." If he points out that all
> three methods use the same old operations, I would say that doesn't matter
> ... those operations are used in an innovative way; in a new combination.
>
> In a slightly different vein, Java is a "new" programming language even if
> it is only used to implement the same old algorithms. The implementation is
> new, even if the algorithm - the method - is the same. This is analogous to
> two mathematicians using the same "trick" to get a product, even if the
> respective neuron networks each person possesses to implement this method
> are slightly different.
I don't think Crutchfield's framework would classify the hook shot or
Java as novel because they aren't examples of movement to a more
expressive class of models. Languages of equivalent power are used to
express the jump and hook shots. Granted, perhaps the internal models
of the individual players use impoverished languages, in which case,
after seeing their first hook shot, they may realize that there's a more
expressive language they _could_ be using. But ontologically, the hook
shot is not really (for real, actually) new.
By analogy, imagine a 2 dimensional real (pun intended) plane. We
already know of all the functions like x^2, x^3, x+y, etc. Then when I
take my pencil and draw some squiggly line across the plane, is my new
"function" really new? If not, then how is the hook shot new?
Similarly, Java, as a language, is no more powerful than C. Granted,
perhaps the individuals who use C actually use impoverished models
of C and when they see someone implement something cool, they may revise
their own internal model. But, ontologically, C and Java are just as
expressive as languages. (This argument depends on the distinction
between the language Java and the many libraries accessible to it and
the language C and the many libraries accessible to it.)
However, we can consider parallel systems more expressive than serial
systems. So, I think parallel computation would meet Crutchfield's
definition of novel.
> I do admit the term "level" or "scope" can exhibit ambiguities. But I still
> find that "level" is a useful distinction. It does imply varying degrees of
> complexity, and I think that is a valid implication, even if it is hard to
> nail down.
>
> I also find it hard to define a counter-example to the proposition that
> emergent features of a system are always produced from the interactions of
> elements "one level down."
Well, it would be hard to construct a counter example because "emergent
feature" is ambiguous, as is "produced", "interaction", and "element".
[grin] So, it's no surprise that it's difficult to construct such a
counter example. No matter what you come up with, all you need to do is
subtly redefine any of those words to fit the context.
I'm not being snarky, here, either. I truly believe the language you're
using to talk about this is hopelessly self-fulfilling ... and perhaps
even degenerate. Of course emergent features emerge up one level from
the level just below them! That sounds tautological to me. You can't
construct a counter example because it's like saying ( x == y ) implies
(x == y).
Perhaps it's my own personal mental disorder; but I see this sort of
thing everywhere people talk about "emergence". (Remember, though, that
Nick introduced critical rationalism into the conversation and, in that
context, _all_ rhetoric is degenerate ... all deduction is tautological.)
> Anyway, the larger point is that innovation happens by combining elements in
> a new way, however those elements are defined. A RISC processor is
> innovative in how it combines basic computer operations. Java is innovative
> in the instructions sent to the processor, and the package of common tools
> that comes with it. A new algorithm is innovative in how it uses these
> tools at a different level of abstraction. And a software package may be
> new in how it combines many existing algorithms and other elements of
> visualization and human-computer interaction.
I don't disagree with you, personally. But I think Crutchfield's
criteria are a bit stiffer. I think he's saying something's new only if
the _class_ or family changes. I.e. as RussellS might say, when the
successor language expresses something the predecessor language can't
express. Since Java and C are equivalent and RISC and CISC are
equivalent, they're not novel with respect to each other.
Of course, I may be totally wrong. And I'd be grateful to anyone who
shows me where.
--
glen e. p. ropella, 971-222-9095,
http://agent-based-modeling.com

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at
http://www.friam.org