Thus spake Marcus G. Daniels circa 10/06/2008 03:28 PM:
> If a community doesn't have access to people with the skills to
> effectively solve a problem, then the problem won't get solved.
> Management is just one skill set.

But this is precisely the problem, not the solution. This abstraction away from the fully embedded _human_ to idealistic "skill sets" is the problem. It's what leads us to hire "experts" and then remove them from their proper context and place them in positions where they do unimaginable and unforeseen harm (or good). E.g. GW Bush, who, in his context, is probably a great asset. But taken out of his context, he's extremely dangerous.

I am not a set of skills. [grin] (To be spoken in the same tone as "I am not a number! I am a free man!")

> I could have a brain tumor that's just a lump of harmless gunk, or one
> that was likely to kill me, or one that would be likely to kill me but
> where intervention will only kill me sooner. The `management' decision
> I can make is basically limited to how many opinions I can get or how
> much research it's feasible for me to do in a short amount of time.
> It's parameterized by my desire for quality of life over a certain
> amount of time and tolerance for risk. The medical advice drives the
> decision, and in this sense, the decision is made for me.

No. The decision is _never_ made for you. If it is ... well, if you give up that responsibility ... hand it over to someone else, particularly an algorithm ... well, then you deserve what you get, I suppose.

>> And it is best to have someone from the space-shuttle-affected
>> regions decide the when/where/who of building a space shuttle.
>
> Here again, the benefits of developing a space program are intangible
> to many, yet hugely valuable in the end. The car salesman who didn't
> want his taxes going to (frivolously) send a man to the moon doesn't
> connect the fact that 45 years later he is watching DirectTV thanks to
> that leadership and the national aggregation of wealth that
> facilitated it.

That's true. However, you seem to be implying that DirectTV is a good thing. I agree that unforeseen consequences _can_ be good things. But I don't think they are always good. There are just as many bad unforeseen things that come from big government programs like the space shuttle as there are good ones. The question is: do the good unforeseens outweigh the bad? And how would we go about measuring that without the continual hindsight bias (those who were for it are biased to filter out the bad, and those who were against it are biased to filter out the good)?

> The most real stuff there is comes from sustained development of
> theory and technology, and that often takes real money, beyond what
> local communities can fund.

No. The most real stuff comes from real action ... embedded action in a context. Theory (and all inference, thought, etc.) _can_ guide action to create more good than bad, in my opinion. But ultimately, unless and until we have some relatively objective way to measure good and bad (ultimately a religious or moral judgement), that's all a wash, and there's no evidence that theory guides action to good or bad outcomes. The best we can do is measure whether theory leads more effectively and efficiently to the achievement of some objective (discounting externalities). That's what we're doing now (though we could do it better). But it doesn't handle the externalities that can be handled by ensuring every decision-maker is embedded in the context of those decisions ... i.e. local government ... i.e. "eat your own dog food".

--
glen e. p. ropella, 971-219-3846, http://tempusdictum.com

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
glen e. p. ropella wrote:
> This abstraction away from the fully embedded _human_ to idealistic
> "skill sets" is the problem. It's what leads us to hire "experts" and
> then remove them from their proper context and place them in positions
> where they do unimaginable and unforeseen harm (or good).

If there is no meaningful way to talk about skill sets, then there isn't any meaningful way to talk about proper context. Proper context is just a refinement of a skill set, perhaps down to even 1 or 0 individuals. (The cookie and the cookie cutter.) If that number is 0, then one might as well start with the next closest apparent person (the one with the ill-defined `skill set').

A "fully embedded human" sounds like it might be important. But is it? I'd have the most optimism for a person with a track record of solving problems similar to the one that needs to be solved. That would suggest to me they've been *able* to become embedded. I'm not denying there are situations where having detailed practical, historical, psychological context is important for making a productive contribution (e.g. some kinds of diplomacy, or human-resources problems), but when I hear that, I'm immediately suspicious of organizational dysfunction.

> That's true. However, you seem to be implying that DirectTV is a good
> thing. I agree that unforeseen consequences _can_ be good things. But
> I don't think they are always good. There are just as many bad
> unforeseen things that come from big government programs like the
> space shuttle as there are good ones.

Yes, I would rather live in a world of unforeseen consequences driven by (universal) scientific curiosity than one driven only by local needs. Out on the farms, the lowest common denominator can get mighty low.

>> The most real stuff there is comes from sustained development of
>> theory and technology, and that often takes real money, beyond what
>> local communities can fund.
>
> No. The most real stuff comes from real action ... embedded action in
> a context. Theory (and all inference, thought, etc.) _can_ guide
> action to create more good than bad, in my opinion. But ultimately,
> unless and until we have some relatively objective way to measure good
> and bad (ultimately a religious or moral judgement), that's all a
> wash, and there's no evidence that theory guides action to good or bad
> outcomes.

I can certainly see that conservative governmental aggregation policies could lead to a more *stable* world, but I can't say that I am particularly interested in optimizing for that. Also, I said `real', as in a sufficiently good model of the world such that, say, an iPod plays music, or the DirectTV puts pictures on the screen, or the JDAM kills the terrorist. Other kinds of models of control systems are less interesting to me: those that concern advancing stable social configurations, esp. the ones that make claims about `good' and `bad' -- they seem to usually have the opposite outcome and destabilize.

Marcus
Thus spake Marcus G. Daniels circa 10/06/2008 04:33 PM:
> glen e. p. ropella wrote:
>> This abstraction away from the fully embedded _human_ to idealistic
>> "skill sets" is the problem. It's what leads us to hire "experts" and
>> then remove them from their proper context and place them in
>> positions where they do unimaginable and unforeseen harm (or good).
>
> If there is no meaningful way to talk about skill sets, then there
> isn't any meaningful way to talk about proper context. Proper context
> is just a refinement of a skill set, perhaps down to even 1 or 0
> individuals. (The cookie and the cookie cutter.) If that number is 0,
> then one might as well start with the next closest apparent person
> (the one with the ill-defined `skill set').

I disagree. Viewing proper context in terms of skill sets is merely one way of cutting it up. One can also view proper context in terms of "stands to gain or lose the most", i.e. in terms of consequences. And I prefer to cut it up that way. I want the decision to be made by the person who will pay for a failure or benefit from a success. This is an accountability-based embedding as opposed to a capability-based embedding.

Of course, when I use the word "embedding", I _intend_ to imply both accountability and capability. Ideally, the person who makes the decision is _both_ the most capable and the closest to the consequences. But it's not an ideal world. So, when compromise is necessary, I would compromise on capability. (I may be ignorant of how my motorcycle works, but when _I_ try to fix it, at least it's _my_ bike that I break!) Besides, it's better to focus on getting it right than it is to focus on being right. Nobody can _ever_ be perfectly capable. But it's common for someone to bear all (or seemingly all) the consequences of a decision.

No. I reject the whole skills-based decision making. It's that abstraction that is killing us. People spending other people's money. People investing other people's money. People designing military equipment that other people depend on for their lives. Ugh.

> Yes, I would rather live in a world of unforeseen consequences driven
> by (universal) scientific curiosity than one driven only by local
> needs. Out on the farms, the lowest common denominator can get mighty
> low.

[grin] Well, _personally_ I agree. But we're not talking about anarchists and borderline anarcho-capitalists. We're talking about the government for and by "normal" people who revere safety and convenience (which they misname "freedom"). And in that context, they prefer predictability and a minimum of unforeseen consequences ... even to the point that they like and want fascism.

> I can certainly see that conservative governmental aggregation
> policies could lead to a more *stable* world, but I can't say that I
> am particularly interested in optimizing for that. Also I said `real',
> as in a sufficiently good model of the world such that, say, an iPod
> plays music, or the DirectTV puts pictures on the screen, or the JDAM
> kills the terrorist. Other kinds of models of control systems are less
> interesting to me: those that concern advancing stable social
> configurations, esp. the ones that make claims about `good' and `bad'
> -- they seem to usually have the opposite outcome and destabilize.

Yeah, again I agree, personally. But that's not what this thread is about, not really. The thread is about how to build -- or constructive criticisms of -- a government that gracefully handles things like corruption, greed, and stupidity, which are permanent (and beautiful and necessary, by the way) properties of humans. And, in that sense, my claim is that the primary problem is the way government accumulates (aggregates) up from the purely local to the non-local. In short: because we _don't_ design government to accumulate nicely, and instead patch in silo'ed band-aids at each level with no regard to other levels, we have the critical weaknesses we have. Hence, if I'm right, then no amount of single-level patchwork (e.g. limiting campaign contributions or creating crisp party categorizations of the population, etc.) will cure the disease. It will only treat the symptoms.

--
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
glen e. p. ropella wrote:
> We're talking about the government for and by "normal" people who
> revere safety and convenience (which they misname "freedom"). And in
> that context, they prefer predictability and a minimum of unforeseen
> consequences ... even to the point that they like and want fascism.

Thanks, but I'll advocate interests that make sense to me, and not limit myself to those concerns. Stress and distress are different things.
In reply to this post by glen ep ropella
Well, the reliance on competence is relative to the difficulty of the task.
As our world explodes with new connections and complexity, that's sort of in doubt, isn't it? Isn't Taleb's observation that when you have increasingly complex problems, with increasingly 'fat-tailed' distributions of correlation, you'd better not rely on analysis? Anyone who takes that job is probably running into 'black swans', aren't they?

Phil

> -----Original Message-----
> From: [hidden email] [mailto:[hidden email]] On Behalf Of glen e. p. ropella
> Sent: Monday, October 06, 2008 5:23 PM
> To: The Friday Morning Applied Complexity Coffee Group
> Subject: Re: [FRIAM] government hierarchy (was Re: Willful Ignorance)
>
> Thus spake Marcus G. Daniels circa 10/06/2008 01:49 PM:
>> I expect capable, intelligent managers are a subset of the
>> population. If a local government represents too small of a region,
>> there won't be competent people available to run things.
>
> Good point. However, a complement is that if you have a small enough
> region, only those within that region can _possibly_ be competent
> enough to run things. A great example is an individual human. If
> _you_ can't manage your own mind/body, then nobody else has any hopes
> of doing it either.
>
>> I've seen plenty of incompetence and outright corruption in local
>> governments too. Allowing for some expensive mistakes (and expensive
>> successes) may encourage people to pay attention and engage -- they
>> have something on the line.
>
> Yes. The beauty of local government is that it's easy to put someone
> in charge and it's easy to remove them, too. Sure, there's plenty of
> corruption and incompetence at any level; but the degree of
> accountability, installation, and removal scale, too. Likewise, the
> stakes for success and failure scale.
>
> One reason for the "nasty" politics we see is this very scaling. If
> you've got someone in an aggregated seat of power, then a) it was
> difficult for them to get there and b) it will be difficult to get
> them out of there. The trick is to find the critical spot in the
> hierarchy. And that usually turns out to be illegal behavior (based
> on nefarious and ridiculous nooks and crannies of the law) or
> _disgrace_. So, we politick by calling people hypocrites, racists, or
> whatever epithet may fit the bill, because these control points
> trigger catastrophic collapses of the inertial systems built up in
> the government hierarchy. Of course politics for heavily inertial,
> aggregated government positions will hinge on nasty cheap shots and
> sound bites.
>
> As much as I hate the idea, we _need_ things like President Bush's
> immunity from prosecution for decisions he made while doing his job.
> We need it to preserve the stability of the office in correspondence
> with the amount of effort it took to put him in that office.
>
> But what this leads one to (I think) is the conclusion that high
> office should be pressed upon the unwilling rather than sought out by
> those who want to hold that office. Perhaps we should make it a
> requirement of citizenship that you can be drafted into office when a
> "jury" of your peers decides that you're the best person to fill that
> role? Of course, that would lead to an entirely different selection
> mechanism that would encourage the occult jockeying for nomination,
> false modesty, etc. But I wonder how different (or how much worse) it
> could be than what we have now? It may even result in a "brain drain"
> where all the people at risk of being drafted move to Canada or
> something to avoid being forced to play President. ;-)
>
> --
> glen e. p. ropella, 971-219-3846, http://tempusdictum.com
Thus spake Phil Henshaw circa 10/07/2008 12:15 PM:
> Well, the reliance on competence is relative to the difficulty of the
> task. As our world explodes with new connections and complexity,
> that's sort of in doubt, isn't it? Isn't Taleb's observation that when
> you have increasingly complex problems, with increasingly 'fat-tailed'
> distributions of correlation, you'd better not rely on analysis?
> Anyone who takes that job is probably running into 'black swans',
> aren't they?

Of course more complex processes mean more difficulty in handling them. But that's what "expertise" is all about. The more difficult the handling, the more one needs expertise. The simpler the processes, the more one can rely on yokels or algorithms. So, I think the opposite of your conclusion is justifiable: the more complex the processes, the more powerful the "skill set" sales pitch becomes, because the customers are aggressively hunting for expertise.

But even in a very complex domain, regular, somewhat predictable patterns of observation/manipulation can yield success, despite the occult possibility of unexpected wonky trajectories. And people who have those patterns of observation/manipulation down pat are also experts. They just run the risk of being wrong when/if the system does happen to take a wonky trajectory.

There's no reason to avoid relying on historically successful patterns of control. You just have to accumulate enough momentum while successful to survive the black swans. The trick is that when experts sell themselves to you, they tend toward optimism (and underestimate the risks) because they don't eat their own dog food ... they won't really suffer the consequences the customer will suffer when their expertise fails. They _tend_ to promise what they really can't deliver ... or they're extremely vague about what they promise, so they can hold up whatever they happen to deliver as a refined version of what they promised ... like politicians and outsource code shops.

In contrast, if your "skin is in it", then you tend to be a bit more pessimistic (and conservative) with what you promise.

--
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
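[Editorial note: Phil's 'fat tail' claim above can be made concrete with a small sketch. This is an illustration of the general statistical point only, not anything from Taleb's text or from the thread; the Pareto tail index alpha=1.5 is an assumed, purely illustrative value. The point: far out where a Gaussian model assigns essentially zero probability, a fat-tailed distribution still carries real mass, so an analysis calibrated on "normal" experience treats the fat tail's events as impossible.]

```python
import math

def normal_tail(k):
    # P(X > k) for a standard normal, via the complementary error function
    return 0.5 * math.erfc(k / math.sqrt(2))

def pareto_tail(k, alpha=1.5, xm=1.0):
    # P(X > k) for a Pareto(alpha, xm) distribution -- a simple fat-tailed
    # stand-in; alpha=1.5 is an assumed illustrative tail index
    return (xm / k) ** alpha if k >= xm else 1.0

for k in (3, 10, 30):
    print(f"k={k:2d}  normal tail={normal_tail(k):.2e}  pareto tail={pareto_tail(k):.2e}")
```

The absolute numbers don't matter; the qualitative gap does. At k=10 the Gaussian tail is astronomically small while the Pareto tail is still a few percent, which is one way to read "you'd better not rely on analysis" that assumes thin tails.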
Glen,
No, taking on impossible tasks is what true stupidity is about, not expertise, and the surest way to hire a stupid expert is to hire someone ready to take them on. Come on ... heading into impenetrable walls of complexity is the stupidest thing any 'expert' could possibly recommend, but we've gone and hired an entire world full of so-called 'experts' doing exactly that. It ain't gonna work.

You say:

> There's no reason to avoid relying on historically successful patterns
> of control. You just have to accumulate enough momentum while
> successful to survive the black swans.

Momentum is what is causing you to be blindsided by them, not what will let you barrel through them. It's not 'bumps in the road' to crash through, but true ends of the road that go unseen.

> The trick is that when experts sell themselves to you, they tend
> toward optimism (and underestimate the risks) because they don't eat
> their own dog food ... they won't really suffer the consequences the
> customer will suffer when their expertise fails. They _tend_ to
> promise what they really can't deliver ... or they're extremely vague
> about what they promise, so they can hold up whatever they happen to
> deliver as a refined version of what they promised ... like
> politicians and outsource code shops.

Right, but totally inconsistent with your first statement, "just hire an expert".

> In contrast, if your "skin is in it", then you tend to be a bit more
> pessimistic (and conservative) with what you promise.

Right, but inconsistent with how mistaken self-interest by the experts has spread so far and wide that our global life-support system becomes fragile enough to collapse.

What I've been trying to point out is that nature is full of signals for when patching up the old model will soon fail. The hunt for the new one can be the fun you need to replace our natural disappointment that nature has wriggled out of our feeble grasp yet again!! If we don't look out for change, change will surely not look out for us.
Phil
Thus spake Phil Henshaw circa 10/09/2008 04:48 AM:
> Right, but totally inconsistent with your first statement, "just hire
> an expert".

You must be confusing me with someone else. I've been arguing _against_ "just hire an expert" the whole time.

--
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
Glen,
Oh, I read it wrong then, sorry! What about the other stuff, though: that we have an unusually large number of experts taking on, and not letting on about, increasingly complex problems?

Phil