I wrote:
> So I agree, in practice, to stop this sort of random growth of nonsense, it is necessary to have a strong argument against a policy from the perspective of the health of the organization (no agendas or idealistic motives allowed!) as well as a specific and relevant set of targets for blame, and to pursue it all at once.

On 7/26/13 11:18 AM, glen wrote:
> Internally negotiated truth is not a bug. It's a feature. The trick is that organizational truth is negotiated slower than individual truth. And societal truth is even more inertial.

A set of people ought to be able to falsify a proposition faster than one person, who may be prone to deluding themselves, among other things. This is the function of peer review, and of arguing on mailing lists. Identification of truth is something that should move slowly. I think `negotiated truth' occurs largely because people in organizations have different amounts of power, and the powerful ones may insist on something false or sub-optimal. The weak, the junior, and the followers are just fearful of getting swatted.

Marcus
Marcus G. Daniels wrote at 07/26/2013 10:42 AM:
> A set of people ought to be able to falsify a proposition faster than one person, who may be prone to deluding themselves, among other things. This is the function of peer review, and of arguing on mailing lists. Identification of truth is something that should move slowly. I think `negotiated truth' occurs largely because people in organizations have different amounts of power, and the powerful ones may insist on something false or sub-optimal. The weak, the junior, and the followers are just fearful of getting swatted.

Fantastic point. So, the (false or true) beliefs of the more powerful people are given more weight than the (true or false) beliefs of the less powerful. That would imply that the mechanism we need is a way to tie power to calibration, i.e. the more power you have, the smaller your error must be.

If an objective ground is impossible, we still have parallax ... a kind of continually updating centroid, like that pursued by decision markets. But a tight coupling between the most powerful and a consensual centroid would stultify an organization. It would destroy the ability to find truth in outliers, disruptive innovation. I suppose that can be handled by a healthy diversity of organizations (a scale-free network). But we see companies like Intel or Microsoft actively opposed to that... they seem to think such behemoths can be innovative.

So, it's not clear to me we can _design_ an artificial system where calibration (tight or loose) happens against a parallax ground for truth (including peer review or mailing lists). It still seems we need an objective ground in order to measure belief error. The only way around it is to rely on natural selection, wherein "problems" with organizations may well turn out to be the particular keys to their survival/success. So, that would fail to address the objective of this conversation, which I presume is how to reorganize orgs either before they die off naturally (because they cause so much harm) or without letting them die off at all. (Few sane people want, say, GM to die, or our government to shut down ... oh wait, many of our congressional reps _do_ want our govt to shut down.)

--
⇒⇐ glen e. p. ropella
Some of our guests are ... how shall I say? Hyperbolic V.I.P.
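To make the "tie power to calibration" idea concrete, here is a minimal sketch contrasting belief aggregation weighted by raw power with aggregation weighted by calibration (inverse historical error). The agents, the numbers, and the simple inverse-error weighting are all illustrative assumptions, not anything specified in the thread.

import numpy as np

def aggregate(beliefs, power, errors, by="calibration"):
    """Combine agents' numeric beliefs into one organizational estimate.

    beliefs : each agent's current estimate of some quantity
    power   : each agent's organizational power (rank, budget, ...)
    errors  : each agent's historical mean absolute prediction error
    by      : "power" weights the powerful voices; "calibration"
              weights agents by how small their past error has been.
    """
    beliefs = np.asarray(beliefs, dtype=float)
    if by == "power":
        weights = np.asarray(power, dtype=float)
    else:  # calibration: weight shrinks as historical error grows
        weights = 1.0 / (np.asarray(errors, dtype=float) + 1e-9)
    return float(np.average(beliefs, weights=weights))

# Toy case: the most powerful agent is also the worst calibrated.
beliefs = [0.9, 0.4, 0.45]   # estimates of some probability
power   = [10.0, 1.0, 1.0]   # one senior exec, two juniors
errors  = [0.5, 0.05, 0.1]   # historical error of each agent

print(aggregate(beliefs, power, errors, by="power"))        # ~0.82, pulled toward 0.9
print(aggregate(beliefs, power, errors, by="calibration"))  # ~0.45, near the well-calibrated juniors

In this toy setup the power-weighted estimate tracks the loudest voice, while the calibration-weighted one tracks the agents with the best track record; the unresolved problem glen raises is where the error scores come from if there is no objective ground to score against.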
Glen/Marcus -
Once again, lots of good back-and-forth here. I can't claim to follow each of the subthreads of your arguments and, in the interest of not flooding the airwaves with my nonsense, have been holding back a bit.

> I've been having lots of good conversations about the distinction between "identity" and "self" on other mailing lists lately. In particular, you are not who you _think_ you are. This type of internally negotiated truth seems to relate ... or, more likely, I'm just a muddy thinker.

I am reminded of the aphorism "I am who you think I think I am". This has to be unpacked thoroughly to be appreciated for its (fairly tautological) truth. I think these two levels of indirection are both the least and the most that is appropriate.

> Internally negotiated truth is not a bug. It's a feature. The trick is that organizational truth is negotiated slower than individual truth. And societal truth is even more inertial.

I think this is a very key concept... and while I whinge at the implied "moral relativism" in this talk-talk, I think it is not that. I think some of our discussions about "what is Science" a while back relate to this. To the uninitiated, it might sound as if Scientific Truth were a simple popularity contest. Were that true, I think the ratio of the circumference of a circle to its diameter *would be* precisely "3" inside the state of Kansas (and probably many of the Red States?). But this doesn't mean that truth isn't in some sense also negotiated... I don't have a clear way to express this, but I appreciate that this conversation is chipping away at the edges of this odd conundrum.

> In some cases (Manning and the Army, Snowden and CIA/NSA/BAH), individuals have a higher turnover (material as well as intellectual and emotional) than organizations, it makes complete sense to me that a ladder-climber would lose sight of their motivations by the time they reached the appropriate rung on the ladder. (I think this is very clear in Obama's climb from community organizer to president.) And, in that context, the slower organizational turnover should provide a stabilizer for the individual (and society should provide a stabilizer for the organizations).

"Truth" is like an encrypted or compressed symbol stream, which requires a certain amount of context to decompress and/or decrypt. If you don't have the proper codebook/keys/etc., you have either nonsense or at least a poorly rendered version. Obama's "truth" may have been highly adaptive in the community-organizing context but not so much as president (this was the best argument against his candidacy), but then we WERE looking for HOPE and CHANGE (well, something like 50% were), which *requires* injecting some new perspective into the context.

> The real trick is whether these negotiated truths have an objective ground, something to which they can be recalibrated if/when the error (distance between their negotiated truth and the ground) grows too large. I don't know if/how such a "compass" is related to the health of an organization. But it seems more actionable than health ... something metrics like financials or social responsibility might be more able to quantify.

I have a hard time imagining a fully objective ground, only one with a larger base perhaps? What is a negotiated/negotiable truth across a whole "tribe" might be better served by negotiating across a larger group (think federation), across a whole broad category of "culture" (e.g. Western, etc.), or even interspecies (primate/cetacean?)... but it isn't clear to me how to obtain this kind of "greater truth" outside of the context of those for/by whom it is to be experienced?

- Steve
On 7/26/13 1:30 PM, glen wrote:
> Marcus G. Daniels wrote at 07/26/2013 10:42 AM:
>> A set of people ought to be able to falsify a proposition faster than one person, who may be prone to deluding themselves, among other things. This is the function of peer review, and of arguing on mailing lists. Identification of truth is something that should move slowly. I think `negotiated truth' occurs largely because people in organizations have different amounts of power, and the powerful ones may insist on something false or sub-optimal. The weak, the junior, and the followers are just fearful of getting swatted.
>
> Fantastic point. So, the (false or true) beliefs of the more powerful people are given more weight than the (true or false) beliefs of the less powerful. That would imply that the mechanism we need is a way to tie power to calibration, i.e. the more power you have, the smaller your error must be.

Depending on domain. To some extent we are very bimodal about this... we hold our public officials both to higher standards and to lower ones at the same time.

> If an objective ground is impossible, we still have parallax ... a kind of continually updating centroid, like that pursued by decision markets.

Or a continually refining confidence distribution, for which we can hope for/seek a nice steep gaussianesque shape.

> But a tight coupling between the most powerful and a consensual centroid would stultify an organization. It would destroy the ability to find truth in outliers, disruptive innovation. I suppose that can be handled by a healthy diversity of organizations (a scale-free network). But we see companies like Intel or Microsoft actively opposed to that... they seem to think such behemoths can be innovative.

I think they *can* drive the consensual reality to some extent... to the point that counterpoint minority opinions polyp off (Apple vs. MS, Linux vs. Commercial, Debian vs. RedHat vs. Ubuntu, etc.).

> So, it's not clear to me we can _design_ an artificial system where calibration (tight or loose) happens against a parallax ground for truth (including peer review or mailing lists).

It seems intuitively obvious to me that such a system *can* be designed, and that most of it is about *specifying* the domain... but maybe we are talking about different things?

> It still seems we need an objective ground in order to measure belief error.

I think this is true by definition. In my work in this area, we instead sought measures of belief and plausibility at the atomic level, then composed those up into aggregations. Certainly, V&V is going to require an "objective ground", but it is only "relatively objective", if that even vaguely makes sense to you?

> The only way around it is to rely on natural selection, wherein "problems" with organizations may well turn out to be the particular keys to their survival/success. So, that would fail to address the objective of this conversation, which I presume is how to reorganize orgs either before they die off naturally (because they cause so much harm) or without letting them die off at all. (Few sane people want, say, GM to die, or our government to shut down ... oh wait, many of our congressional reps _do_ want our govt to shut down.)

<grin> I think we are really talking about "theories of life" here...

- Steve
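"Measures of belief and plausibility at the atomic level, composed up into aggregations" reads like Dempster-Shafer evidence theory; that reading, and the toy frame, sources, and mass numbers below, are assumptions for illustration, though the combination rule and the belief/plausibility bounds are the standard ones.

# Frame of discernment: two exhaustive hypotheses about some claim.
FRAME = frozenset({"true", "false"})

def combine(m1, m2):
    """Dempster's rule of combination: fuse two basic mass assignments,
    renormalizing away the mass that falls on the empty (conflicting) set."""
    fused, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                fused[inter] = fused.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    return {k: v / (1.0 - conflict) for k, v in fused.items()}

def belief(m, hypothesis):
    # Mass committed to subsets of the hypothesis: a lower bound.
    return sum(v for k, v in m.items() if k <= hypothesis)

def plausibility(m, hypothesis):
    # Mass not committed against the hypothesis: an upper bound.
    return sum(v for k, v in m.items() if k & hypothesis)

# Two "atomic" sources: one leans true, one is largely uncommitted.
m_reviewer = {frozenset({"true"}): 0.6, FRAME: 0.4}
m_listserv = {frozenset({"true"}): 0.3, frozenset({"false"}): 0.2, FRAME: 0.5}

m = combine(m_reviewer, m_listserv)
h = frozenset({"true"})
print(belief(m, h), plausibility(m, h))  # ~0.68 and ~0.91: the [Bel, Pl] interval

The width of the [Bel, Pl] interval is the residual ignorance; as more sources are fused it typically narrows, which is one way to picture a "continually refining confidence distribution" without first settling the question of an objective ground.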
Steve Smith wrote at 07/26/2013 01:27 PM:
> On 7/26/13 1:30 PM, glen wrote:
>> But a tight coupling between the most powerful and a consensual centroid would stultify an organization. It would destroy the ability to find truth in outliers, disruptive innovation. I suppose that can be handled by a healthy diversity of organizations (a scale-free network). But we see companies like Intel or Microsoft actively opposed to that... they seem to think such behemoths can be innovative.
>
> I think they *can* drive the consensual reality to some extent... to the point that counterpoint minority opinions polyp off (Apple vs. MS, Linux vs. Commercial, Debian vs. RedHat vs. Ubuntu, etc.).

Yeah, I agree behemoths can drive consensual reality. I just don't think they can be innovative at the same time. The innovation comes from outside, from much smaller actors. And when the innovation does come from inside a behemoth, I posit that some forensic analysis will show that it actually came from either a (headstrong/tortured) individual inside the behemoth, or from the behemoth's predation.

>> So, it's not clear to me we can _design_ an artificial system where calibration (tight or loose) happens against a parallax ground for truth (including peer review or mailing lists).
>
> It seems intuitively obvious to me that such a system *can* be designed, and that most of it is about *specifying* the domain... but maybe we are talking about different things?

I don't know what you're saying. 8^) Are you disagreeing with me? Are you saying that it seems obvious to you we _can_ design an artificial system which calibrates against a consensual truth? Superficially, I would agree that we can build one... after all, we already have one. But I don't think we can design one. I think such a design would either be useless _or_ self-contradictory.

>> It still seems we need an objective ground in order to measure belief error.
>
> I think this is true by definition. In my work in this area, we instead sought measures of belief and plausibility at the atomic level, then composed those up into aggregations. Certainly, V&V is going to require an "objective ground", but it is only "relatively objective", if that even vaguely makes sense to you?

Well, I take "relative objectivity" to mean (simply) locally true ... like, say, the temperature inside my fridge has one value and that outside my fridge has another value. But local truth usually has a reductive global truth behind it (except QM and gravity). So, I don't think "relative objectivity" really makes much sense. Scope and locality do make sense, though. You define a measure, which includes a domain and a co-domain. Part of consensual truth is settling on a small set of measures, despite the fact that there are other measures that would produce completely different output given the same input.

So, by "objective ground", I mean _the_ truth... the theory of everything. And, to date, the only access I think we have to _the_ truth is through natural selection. I.e., if it's right, it'll survive... but just because it survived doesn't mean it was right. ;-)

--
⇒⇐ glen e. p. ropella
The seven habits of the highly infected calf
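A small sketch of the point about measures: the projects, the two scoring functions, and the numbers below are hypothetical, chosen only to show that two well-defined measures over the same domain can rank the same inputs in opposite orders, which is why settling on the measure is itself part of the negotiated truth.

# Two hypothetical "measures" over the same domain (candidate projects),
# each a function from project -> score (the co-domain).
projects = {
    "A": {"revenue": 9.0, "risk": 8.0},
    "B": {"revenue": 4.0, "risk": 1.0},
    "C": {"revenue": 6.0, "risk": 3.0},
}

measure_growth = lambda p: p["revenue"]                  # rewards upside only
measure_caution = lambda p: p["revenue"] - 2 * p["risk"] # penalizes risk heavily

for name, measure in [("growth", measure_growth), ("caution", measure_caution)]:
    ranking = sorted(projects, key=lambda k: measure(projects[k]), reverse=True)
    print(name, ranking)

# growth  -> ['A', 'C', 'B']
# caution -> ['B', 'C', 'A']
# Same inputs, different measures, opposite "truths" about the best project.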
Glen -
> Yeah, I agree behemoths can drive consensual reality. I just don't think they can be innovative at the same time. The innovation comes from outside, from much smaller actors. And when the innovation does come from inside a behemoth, I posit that some forensic analysis will show that it actually came from either a (headstrong/tortured) individual inside the behemoth, or from the behemoth's predation.

I suppose the definition of "innovation" is part of the point. I agree that *radical* innovation is hard by definition when tied to the momentum vector of a behemoth... even a (headstrong/tortured) individual gets damped by this. See Steve Jobs.

>>> So, it's not clear to me we can _design_ an artificial system where calibration (tight or loose) happens against a parallax ground for truth (including peer review or mailing lists).
>>
>> It seems intuitively obvious to me that such a system *can* be designed, and that most of it is about *specifying* the domain... but maybe we are talking about different things?
>
> I don't know what you're saying. 8^) Are you disagreeing with me? Are you saying that it seems obvious to you we _can_ design an artificial system which calibrates against a consensual truth?

I think I am saying we can design one that *tries to* calibrate against a consensual truth. It is not clear to me that we can design one that succeeds. That proof is in the pudding. Of course, the definition and scope (geotemporal as well as sociopolitical) of *consensual* comes into play... which may provide a logical bound to what can be done (can a Sufi, a Taoist, a fundamentalist LDS member, and a Doug Roberts share a consensual truth bigger than the color of the sky on a clear day? If that?). (Sidenote: Doug and Ingrun came to the house for whiskey, burned flesh, and root vegetables just the other night, and he says he misses us but might stay on separate vacations anyway.)

> Superficially, I would agree that we can build one... after all, we already have one. But I don't think we can design one. I think such a design would either be useless _or_ self-contradictory.

I think there may be logical bounds to this (see above) based on the nature of human nature and what "consensual reality" might mean. We can define it as "whatever a group converges on", which is OK... but probably not exactly what we are striving for?

>>> It still seems we need an objective ground in order to measure belief error.
>>
>> I think this is true by definition. In my work in this area, we instead sought measures of belief and plausibility at the atomic level, then composed those up into aggregations. Certainly, V&V is going to require an "objective ground", but it is only "relatively objective", if that even vaguely makes sense to you?
>
> Well, I take "relative objectivity" to mean (simply) locally true ... like, say, the temperature inside my fridge has one value and that outside my fridge has another value. But local truth usually has a reductive global truth behind it (except QM and gravity). So, I don't think "relative objectivity" really makes much sense.

Stretch "locality" beyond the geotemporal... there is a sociopolitical/psychological/spiritual/religious/??? domain in which the idea of locality is also required for this definition.

> Scope and locality do make sense, though. You define a measure, which includes a domain and a co-domain. Part of consensual truth is settling on a small set of measures, despite the fact that there are other measures that would produce completely different output given the same input. So, by "objective ground", I mean _the_ truth... the theory of everything. And, to date, the only access I think we have to _the_ truth is through natural selection. I.e., if it's right, it'll survive... but just because it survived doesn't mean it was right. ;-)

I'm not holding my breath waiting for a "theory of everything". I'm pretty sure the likes of Gödel incompleteness already blew that concept right off the table... and that may be only the smallest of reasons for it. Right/Wrong are only relative to a given set of axioms, which we can (in principle) come to a consensual agreement on (the axioms and perhaps how well a given situation aligns with them). So far, I'm pretty happy with "do unto others" as an axiom of human intention and action and not much more. That leaves a lot of room for interpretation and may describe Genghis Khan's behavior (as moral) as easily as Gandhi's. The proverbial Ten Commandments tend to over- and underspecify, and by the time you get to the entire codex of the Abrahamic religions, it is definitely over/under specified. The I Ching doesn't represent axioms for human morality so much as a scaffolding for perception.

I gave up on the search for a GUT about the time I graduated from undergrad physics 35 years ago... I mean I gave up believing it would be achieved, not that the search and the infinitude of approximations and new formulations would be useful and entertaining. I continue to be entertained and continue to trust there is utility (at least to drive a powerful capitalistic entertainment/military/industrial society like our own).

Carry on!

- Steve
Steve Smith wrote at 07/27/2013 08:12 AM:
> I think I am saying we can design one that *tries to* calibrate against a consensual truth. It is not clear to me that we can design one that succeeds. That proof is in the pudding. Of course, the definition and scope (geotemporal as well as sociopolitical) of *consensual* comes into play...

So, the conversation was about how to reorganize organizations like the NSA (or the FISA court... or whatever) so that a problem with such an organization doesn't _always_ reduce to a problem with a human within that organization. In other words, when is a classified leak a systemic problem (e.g. the way things are classified) versus when is it reducible to a single cause/flaw?

In that context, I can agree with you that we _could_ arbitrarily throw solutions at the wall and hope one of them sticks. But, in the meantime, lots of well-intentioned and valuable people will have their lives destroyed merely for trying to serve their country. The point being that it's not clear to me that we can design an organizational accountability/calibration system using consensus reality. We need an objective ground. And if we can't agree that objective grounds exist, then we have to resort to natural selection: the orgs that behave badly will die off.

There is a middle ground, I suppose, in "directed evolution". But, writ large, it strikes me that this is more co-evolution between regulators and the regulated. Competent regulators reproduce, incompetent regulators die off. The only complication I see with designing that sort of system is that regulators are always seen as parasites, making their living off the regulated (through taxes). Or, in the NSA case, it's often argued that the civil liberties watchdogs have the liberties they have _because_ the NSA does what it does ... again, the watchdogs are considered parasites. The relationship can't be purely parasitic. Symbiosis requires that each class depend on the other classes ... feedback. I don't see much feedback between the NSA and the press... mostly, I see stories about Snowden or other individuals like him. Hence, the press is a parasite, not a symbiote.

> I'm not holding my breath waiting for a "theory of everything". I'm pretty sure the likes of Gödel incompleteness already blew that concept right off the table... and that may be only the smallest of reasons for it. Right/Wrong are only relative to a given set of axioms, which we can (in principle) come to a consensual agreement on (the axioms and perhaps how well a given situation aligns with them).

Naa. I think that's a [mis|over]application of Gödel's results. E.g., there are arithmetic systems that are both complete and consistent. And semantic grounding is always possible by enlarging the language. But an org/reorg method based on calibration against a consensual truth does lead to inconsistency, I think. The calibration is supposed to keep the org _open_, preserve a puncture in its membrane. If it bases this calibration on its own opinions, then that defeats the purpose of calibrating at all ... it would be useless overhead, a navel-gazing closure. You may as well let the org run free and thrive or die efficiently.

--
⇒⇐ glen e. p. ropella
They will make us strong