
Re: Google Reader and More: Google Abandoning of Apps/Services

Posted by Steve Smith on Mar 15, 2013; 5:15pm
URL: http://friam.383.s1.nabble.com/Google-Reader-and-More-Google-Abandoning-of-Apps-Services-tp7582001p7582033.html

Glen -

 > I think it was RA Wilson who claimed that all it took was 20 years to
 > turn a liberal into a conservative.
     Oddly, I spent about 15 years turning from a raging Conservative
into a Progressive (if not precisely Liberal).  The next 15 seem to be
sending me off toward the Anarchist (Anachronist?) horizon.

 > Perhaps it's natural that, as we grow older, we want a more stable
 > tools ecology?
     There does seem to be a positive correlation there in general.

 > But, in general, I reject that. I think it's mostly a matter of
 > focus. When I'm tightly focused on a single objective, interference like
 > a broken tool really frustrates me.
     Yes and no.  When I'm tightly focused, the most frustrating thing
is anything *unexpected*, such that I'd rather wield a familiar but
sub-optimal (possibly broken) tool than a shiny new one that I'm not
familiar with.   I feel that my peers (many right here on this list)
would *always* rather have a shiny new tool straight from the store (or
the magical commons where all free things come from?) even if they have
to spend hours/days/weeks figuring out how to operate it properly.

 > But being mostly a simulant, my focus goes tight-loose-tight-loose all
 > day long every day. So, perhaps it's my domain that prevents me from
 > becoming frustrated at the ability to predict the stability of my tools
 > ecology.
     You use "Simulant" in the same way Blade Runner has
"Replicants"...  is "Simulant" actually the preferred Subject in such a
sentence?  It sounds more like the Object?  As if you are a simulated
construct or the subject of a modeling-simulation project!  Perhaps we
all are?
>> In this case, C.
>> Elegans relative simplicity and ancient roots are roughly opposite
>> Google's complexity and very recent roots.
> I'm not convinced that the worm is relatively simple compared to Google.
I may have mis-stated my comparison: C. elegans compared to the rest of
biology, and Google compared to the rest of the high-tech and corporate
ecology.
>   The closure between layers for Google seems pretty clear: machines vs.
> humans vs. corporate structures.  While it's true that there is some
> fuzziness between the layers, it's nothing like the fuzziness between,
> say, the neuronal network and the vasculature in the worm.
Yes.
> So, I could say that while the complexity of the worm and Google are
> probably ontologically similar, the apparent complexity of the two will
> be quite stark depending on how they're measured.
Agreed.   I think my quibble (which went sideways anyway) had more to do
with Ontogeny than Ontology.

>> gepr said:
>>> Because of this, it strikes me that what you're expressing is some sort
>>> of deep seated pattern recognition bias towards centralized planning.
>>> You're looking for a homunculus inside a machine.
>> I'm not quite clear on this point.  It sounds as if you are identifying
>> corporations such as LockMart and Google as being more like evolved
>> organisms than machines?
> Sorry.  I'm asserting that organisms like Owen are pattern recognizing
> machines evolved to find patterns (even when there are none). I speak
> reflectively, here.  I'm arguably the most biased pattern recognizer I
> know, despite my Devil's Advocacy of arbitrary decision making within
> Google.  I find patterns everywhere, which is why I'm a fan of
> conspiracy theories.
Got it.  And as a sidenote, I transcended Conspiracy Theories early on,
filling the same niche with conspiracy theories *about* Conspiracy
Theories.  There is an Occam/anti-Occam argument that suggests that all
first-order conspiracy theories are way too pat and *have to be* some
sort of conspiracy of their own.  It is a slippery slope into the mouth
of a vortex, I fear... stay far back from the edge lest you be lost forever.
> To me, there's only one reason for frustration and that is when I hit a
> blockage I don't want (or didn't expect) to hit.  I wouldn't care if my
> home-made tires didn't work as well as tight tolerance, robot made
> tires.  I still might make and use them.  But I _would_ care if I
> couldn't find out how those robot made tires are made, even if just to
> satisfy my curiosity as to whether or not I should buy/steal my own
> robot ... or perhaps to be able to parse the gobbledygook coming out of
> the mouth of a professed tire robot maker.
Got it.  I share that.
> It's the lack of access that frustrates me, not the lack of any
> particular extant structure.  Hence, i don't care if Google Reader
> exists.  But I do care if I can't (pretend to) figure out how it works.
Your days must just be filled!  I share the sentiment, but I guess I gave
this up a few years (decades now?) back... following REC's recent
reference to Hamming and complexity and ignorance, it *feels* like the
(science/techno) universe has been growing more complex superlinearly
(I'm not ready to say geometrically or exponentially), but I'm pretty sure
that much of that experience is my (recognized) ignorance growing
superlinearly.

When we first learned to control fire, we noticed the ring of carnivore
eyes glowing at the edge of the light it shed.  We built the fire up
bigger to push the ring back, and all we did was attract more eyes
and make more room for them... making us feel the need to build the fire
yet bigger.   We are on our way to burning down the whole planet with
that bigger and bigger fire... and the eyes just keep coming...  (shit,
this sounds like a paranoid schizophrenic on dangerously strong
hallucinogens!)

- Steve


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com