I have an ongoing cognitive conflict w.r.t. the principles I infer from complexity theory and my ethical indoctrination/rearing. Perhaps some of you wise ones can throw some words at the conflict to help me sort it out.

The primary principle I've inferred from complexity theory (such as it is) is: the extent versus the objectives of control structures should show something like an inverse power law to maintain a balance between diversity and efficacy. (It's not my intention to start an argument about whether complexity theory really implies this... So, if you criticize that part of this e-mail, I'll just remove the reference to complexity theory and such removal won't damage the point.)

The primary ethical rule I've been taught to hold is that all people are equivalent but never equal and that the extent of the equivalence depends on the chosen equivalence class.

I'm currently living through a transition in my political views. I used to be a hardcore libertarian and believed, fundamentally, that non-local government is incapable of governing many variables. I'm not saying that there are particular variables they can or can't regulate. I'm saying there's a limit to the total _number_ of variables, whatever they are, that a massive, global structure like the feds can handle. For example, the federal government here in the states can govern some number of variables (say 10 million) but cannot govern as many as can be governed by decentralized, local government.

But, the implications of the limitation are that humans in one part of our country may be horribly abused, oppressed, ignored because the federal government has chosen to concentrate its energies on a set of variables unrelated to that particular local abuse or oppression. And my ethical upbringing makes me think that our nation-wide government ought to govern all the variables according to some universally applicable human standards, regardless of how many variables that comes to. For example, I tend to believe that nobody in the US should starve. In the past, I would have argued against the centralized control over food distribution. I would have said that it's good for a small segment of the population to enjoy steak and champagne while the large segments have to stick to McDonald's and Schlitz Malt Liquor. But, as I get older, my resolve has started to crumble. This is made especially acute when I see blatantly unethical behavior on the part of the rich white guys who run our government.

Of course, my libertarian mind makes the statement that all of us are just exploiting the resources available to us. And that makes me want to cheer on the Karl Roves of the world! Congrats! You win! Guys like that are a healthy example of the rich diversity of control structures we facilitate in our society, evidence that the inverse power law remains.

But then my upbringing tells me that Karl Rove is just a slimy perverted opportunist who needs regulation by the populace.

The problem with that upbringing is that the more of these regulations we make more universal (increase the extent of a control structure), the less agile we'll be when the environment changes (e.g. climate change forcing evacuation of coastal cities or the collapse of the dollar in the wake of a financial attack by China... or whatever). Hence, the more we _allow_ diverse individuals (including slimy perverts) their diversity, the more agile we'll be as a collective when the sh*t hits the fan.
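Before I go on: to make the inverse-power-law claim above a bit more concrete, here's a toy sketch. Everything in it is invented (the constants, the exponent, the example extents); it's only meant to illustrate the shape of the relation I have in mind between a control structure's extent and the number of objectives it can competently handle.

import math  # not strictly needed; kept for playing with other exponents

# Toy sketch: the number of objectives a controller can competently
# handle falls off as an inverse power of its extent.  All numbers are
# made up; nothing here is an empirical result.

def competent_objectives(extent, k=1e7, alpha=1.0):
    """Hypothetical count of objectives manageable at a given extent."""
    return k / extent ** alpha

examples = [
    ("household",          4),
    ("neighborhood",       4_000),
    ("city",               400_000),
    ("state",              4_000_000),
    ("federal government", 300_000_000),
]
for label, extent in examples:
    print(f"{label:20s} ~ {competent_objectives(extent):>12,.1f} objectives")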
For example, look at all the people who are _completely_ dependent on the federal government for their well-being: FDA, Army Corps of Engineers, FEMA, high-risk mortgage bail-outs for low-income home owners, FDIC-insured banks, well-maintained highway infrastructure, etc.

Any thoughts on how to reconcile these two contradictory principles (high diversity versus universal human properties) are welcome. Luckily, as Lovecraft once said: "The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents." So, even if they remain contradictory, I can retain (and be hypocritical about) both of them. But, given the recent conversation about networks and cliques, I figured I'd throw this out and see what came back. [grin]

--
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
I have an existential map. It has 'You are here' written all over it. -- Steven Wright
Glen E. P. Ropella wrote:
> The problem with that upbringing is that the more of these regulations we make more universal (increase the extent of a control structure), the less agile we'll be when the environment changes (e.g. climate change forcing evacuation of coastal cities or the collapse of the dollar in the wake of a financial attack by China... or whatever). Hence, the more we _allow_ diverse individuals (including slimy perverts) their diversity, the more agile we'll be as a collective when the sh*t hits the fan.

Sorry, no matter how you want to look at it, the libertarian is going to lose.

--
People demand freedom of speech as a compensation for the freedom of thought which they seldom use. -Soren Kierkegaard
Marcus G. Daniels wrote:
> Sorry, no matter how you want to look at it, the libertarian is going to lose.

I have no idea what you're implying, here. If your intention was to be pithy, then you failed and the result is a simple non sequitur.

--
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
I have the heart of a child. I keep it in a jar on my shelf. -- Robert Bloch
In reply to this post by glen ep ropella
Glen,
Gee, I don't know if it helps with your philosophy, but I think you're making a common mistake with the inverse square relation. It's an indicator of complex system organization, not a design principle. 'A' implies 'B' but 'B' in no way implies 'A'. It's like a thermometer: if a thermometer reads 98.6 it's likely you've found a human, but heating something up to 98.6 and trying to talk to it is nutty. The inverse square metric is a time-saving empirical tool for helping to locate and investigate complex systems. You have to look into the system to find what makes it organized, though.

The network science people seem to have a better way of using it than the other mainstream science disciplines interested in the subject, I think. They're looking at complex systems from the inside out (though maybe not having quite realized that networks are artifacts of the complex systems they are embedded in). Their identification of the elaboration and refinement of network connections during network development as the origin of the inverse square metric and 'scale-free' distribution of internal connectedness of natural networks is very helpful. There should logically be some kind of connection with the thinking of people taking an outside-in approach to complexity, but I have not been able to figure out what it is.

As far as the limits of control, don't all complex systems have significantly independent design and behavior? It seems to me that the first thing anything with independent design and behavior requires is basic respect; otherwise you make large mistakes with it, right? We so often forget that finding the easy ways for independent things to get along is a great design strategy. Nature seems to like it quite a lot for evolutionary survival too!

phil
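P.S. In case it helps, here's a toy version of the kind of network growth I'm describing, using the simplest mechanism I know of (preferential attachment: new nodes link to existing nodes in proportion to their current degree). The real elaboration-and-refinement story is richer than this; the sketch is only meant to show how growth alone can produce a heavy-tailed, 'scale-free'-looking degree distribution. None of the numbers mean anything.

import random
from collections import Counter

def grow_network(n_nodes=5000, m=2, seed=1):
    """Grow a graph by preferential attachment; return each node's degree."""
    random.seed(seed)
    urn = []                        # node ids, repeated once per edge end
    degree = Counter()
    for i in range(m + 1):          # small seed ring to start from
        j = (i + 1) % (m + 1)
        degree[i] += 1; degree[j] += 1
        urn += [i, j]
    for new in range(m + 1, n_nodes):
        chosen = set()
        while len(chosen) < m:      # degree-proportional choice of m targets
            chosen.add(random.choice(urn))
        for old in chosen:
            degree[new] += 1; degree[old] += 1
            urn += [new, old]
    return degree

counts = Counter(grow_network().values())
for k in sorted(counts)[:10]:
    print(f"degree {k:3d}: {counts[k]:5d} nodes")  # counts fall off roughly as a power law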
In reply to this post by glen ep ropella
Dear Glen, Dear List,
> I have an ongoing cognitive conflict w.r.t. the principles I infer from complexity theory and my ethical indoctrination/rearing.

In this case, objectivity should prevail: as you say, the one is _inferred_ and the other is the result of indoctrination. ;-)

> The primary principle I've inferred from complexity theory (such as it is) is: the extent versus the objectives of control structures should show something like an inverse power law to maintain a balance between diversity and efficacy.

> The primary ethical rule I've been taught to hold is that all people are equivalent but never equal and that the extent of the equivalence depends on the chosen equivalence class.

What do you mean exactly by this? Does it correspond to:

1) People should have the same duties and rights before the law if they belong to the same class. (I'm not sure I understand you correctly here?)

2) People are different by genetics and socialisation and therefore have different abilities/skills/opportunities/weaknesses.

> I'm currently living through a transition in my political views. I used to be a hardcore libertarian and believed, fundamentally, that non-local government is incapable of governing many variables. I'm not saying that there are particular variables they can or can't regulate. I'm saying there's a limit to the total _number_ of variables, whatever they are, that a massive, global structure like the feds can handle. For example, the federal government here in the states can govern some number of variables (say 10 million) but cannot govern as many as can be governed by decentralized, local government.

Hmm - especially with computational assistance, how should this apply (a limit on the number)? I think it's rather a problem of knowledge: the non-local government does not _know_ about local problems, and this is a matter of principle because knowledge is not easily transferred (only information is, which is a different thing). So locality ensures that the people who know about the problems are doing things about them. On the other hand, the non-local/local distinction _is_ important, I think, for the _type_ of variables. There are problems which need concerted efforts -> central control. This is domain specific.

> But, the implications of the limitation are that humans in one part of our country may be horribly abused, oppressed, ignored because the federal government has chosen to concentrate its energies on a set of variables unrelated to that particular local abuse or oppression. And my ethical upbringing makes me think that our nation-wide government ought to govern all the variables according to some universally applicable human standards, regardless of how many variables that comes to.

In the EU we have the principle of subsidiarity for the level at which control should be exerted (this is an ideal, not always found in the real control structures). The principle says that it should be analyzed at which level of organization a problem is best addressed, and that level should then take care of it. There is no general rule: one has to look at the problems as they arrive (one can classify known problems beforehand, of course).

> For example, I tend to believe that nobody in the US should starve. In the past, I would have argued against the centralized control over food distribution.
> I would have said that it's good for a small segment of the population to enjoy steak and champagne while the large segments have to stick to McDonald's and Schlitz Malt Liquor. But, as I get older, my resolve has started to crumble. This is made especially acute when I see blatantly unethical behavior on the part of the rich white guys who run our government.

I think we should not mix up the control/diversity question with that of social justice.

> Of course, my libertarian mind makes the statement that all of us are just exploiting the resources available to us. And that makes me want to cheer on the Karl Roves of the world! Congrats! You win! Guys like that are a healthy example of the rich diversity of control structures we facilitate in our society, evidence that the inverse power law remains.

I think the libertarian needn't be the classic egoistic homo oeconomicus. The libertarian can resent central control but still acknowledge that it is important for certain problems so that his freedom is preserved in the long run. Being rational does not mean being short-sighted :-)

> The problem with that upbringing is that the more of these regulations we make more universal (increase the extent of a control structure), the less agile we'll be when the environment changes (e.g. climate change forcing evacuation of coastal cities or the collapse of the dollar in the wake of a financial attack by China... or whatever). Hence, the more we _allow_ diverse individuals (including slimy perverts) their diversity, the more agile we'll be as a collective when the sh*t hits the fan.

Striking the balance is all the difficulty, of course - but I think that is what it's about - not going into one extreme or the other, but teetering on that edge (of chaos, SCNR ;-)).

> Any thoughts on how to reconcile these two contradictory principles (high diversity versus universal human properties) are welcome. Luckily, as Lovecraft once said: "The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents."

Hehe, Lovecraft has his moments indeed :-) One of my favourites (not related to this) is "Do not call up what ye cannot put down" (The Case of Charles Dexter Ward, his best story IMHO).

> So, even if they remain contradictory, I can retain (and be hypocritical about) both of them. But, given the recent conversation about networks and cliques, I figured I'd throw this out and see what came back. [grin]

As I said, I think they need not be contradictory - rather complementary - but before I say more I would like to know if I have understood you correctly so far.

All the best,
Günther

--
Günther Greindl
Department of Philosophy of Science
University of Vienna
guenther.greindl at univie.ac.at
http://www.univie.ac.at/Wissenschaftstheorie/

Blog: http://dao.complexitystudies.org/
Site: http://www.complexitystudies.org
Günther Greindl wrote:
> In this case, objectivity should prevail: as you say, the one is _inferred_ and the other is the result of indoctrination. ;-)

That's an excellent point! However, since the heart of the contradiction lies in non-local vs. local control structures, I don't think objectivity _can_ prevail. Child rearing is one of the most powerful forms of very local control. Indoctrination by a local community (church, neighborhood, family, department colleagues, etc.) is a very important form of control. If we all (even newly born babies) were objective and able to think rationally about the world, can you imagine the onslaught of homogeneity we would see? It seems like we'd immediately snap into a gravity well of conservatism governed by "rationality". Perhaps the indoctrination and irrational, knee-jerk impulses add a necessary "heat bath" to society. And that heat bath might allow the collective to find better global optima by sacrificing individuals to wacky extrema.

Let's just say the earth is populated by indoctrinated, myopic individuals and a single individual begins to think rationally. (This is just a reformulation of the argument against Utopia where everyone is altruistic except for one or a few exploiters.) In such a case, it's very nice to be the rational guy. But, it is not necessarily in the rational guy's best interests to recruit more rational people!

>> The primary ethical rule I've been taught to hold is that all people are equivalent but never equal and that the extent of the equivalence depends on the chosen equivalence class.
>
> What do you mean exactly by this? Does it correspond to:
>
> 1) People should have the same duties and rights before the law if they belong to the same class. (I'm not sure I understand you correctly here?)
>
> 2) People are different by genetics and socialisation and therefore have different abilities/skills/opportunities/weaknesses.

I mean (2). That we are all idiosyncratic; but we are so flexible and can combine efforts in so many different combinations that extremely different sets of people can be functionally equivalent... it all depends on the function. Basically, this is just an informal statement that the map between generators and phenomena is non-isomorphic. It's not 1-1 or onto. There are many ways to generate the same phenomenon and there are many phenomena that can result from the same generators.

Hence, it is ultimately disrespectful to say to a particular person (with a particular phenotype) that they are incapable or less capable of achieving some goal. This applies to variations in skin color, variations in upbringing, degree of wealth, formal or informal training, or even psychological "disorders". Anyone who makes claims (even those driven by statistics but with little clarity of causality) that one sub-group/clique is less capable of achieving some particular outcome makes those claims in an unjustified way. That is the ethical indoctrination I received as a kid.

It conflicts with my inference from complexity that there _must_ be a few control systems that homogenize people and restrict them to particular (low, high, medium, whatever) achievement levels. E.g. even in a room full of hard-working geniuses, not everyone can be a Newton or an Einstein. Some few of the geniuses will get lucky and see great success. The rest will disappear in apparent mediocrity, despite their genius.
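Just to illustrate that last point, here's a toy sketch (nothing empirical about it, and the luck model is completely made up): every agent starts with identical "genius", and only random multiplicative breaks accumulate over time. A handful end up looking spectacular; the rest look mediocre, even though nothing distinguished them at the start.

import random

def simulate(n_agents=1000, n_rounds=40, seed=7):
    """Identical agents, multiplicative luck; returns sorted 'success' scores."""
    random.seed(seed)
    success = [1.0] * n_agents               # everyone starts equally brilliant
    for _ in range(n_rounds):
        for i in range(n_agents):
            success[i] *= random.uniform(0.7, 1.4)   # a lucky or unlucky break
    return sorted(success, reverse=True)

outcomes = simulate()
share_of_top_percent = sum(outcomes[:10]) / sum(outcomes)
print(f"the luckiest 1% hold {share_of_top_percent:.0%} of the accumulated 'success'")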
> In the EU we have the principle of subsidiarity for the level at which control should be exerted (this is an ideal, not always found in the real control structures). The principle says that it should be analyzed at which level of organization a problem is best addressed, and that level should then take care of it. There is no general rule: one has to look at the problems as they arrive (one can classify known problems beforehand, of course).

Interesting. When you say "one has to look ...", I presume the "one" you're talking about is a committee of some kind? Or is it really an individual who determines these things?

> I think we should not mix up the control/diversity question with that of social justice.

But that's where the contradiction occurs! I _like_ trying to apply the principles I infer from my technical work onto problems I find in my social interactions. It's a form of falsification for those principles. And, of course, since FRIAM is supposed to be about "applied complexity", I figured this particular contradiction would be a natural consideration for this list. Given that, I'd be interested in hearing why you think the two questions shouldn't be conflated.

> I think the libertarian needn't be the classic egoistic homo oeconomicus. The libertarian can resent central control but still acknowledge that it is important for certain problems so that his freedom is preserved in the long run. Being rational does not mean being short-sighted :-)

Yes. I agree. In fact, that's the entire reasoning behind libertarianism. Without a belief in some form of Hobbesian 3rd party, the libertarian turns into an anarchist. (By which I mean "anarchist" in the naive sense... not the crypto-communist sophisticated form of it.) Libertarianism advocates _for_ some non-local control structures, just not as many as other -isms advocate. In many ways, libertarianism is an admission of the inverse power law between the extent of control structures and the number of objectives for any single control structure. A very extensive controller like the federal government should (can) only have a very few objectives. The huge diversity of small, local control structures (like raising your children to brush their teeth twice a day versus once a day) is maximally efficient at controlling the huge diversity of other objectives.

> Striking the balance is all the difficulty, of course - but I think that is what it's about - not going into one extreme or the other, but teetering on that edge (of chaos, SCNR ;-)).

Yes. But the question comes down to which few objectives the large control structures should take on. E.g. should abortion laws be handled by the states in the US or the feds? What about euthanasia? "Universal" health care? Taxes? Defense? Production infrastructure (like rails and roads)? Etc. The number of objectives is _huge_. And I think the federal government is too non-local to handle that many objectives competently. How do we know that our policies are "striking a balance"? It seems to me that complexity theory could help us answer questions like this. I don't like that hungry children exist; but does complexity theory tell us that at least some children _must_ go hungry?

> Hehe, Lovecraft has his moments indeed :-) One of my favourites (not related to this) is "Do not call up what ye cannot put down" (The Case of Charles Dexter Ward, his best story IMHO).

Excellent quote! Thanks. I haven't read that story.
> As I said, I think they need not be contradictory - rather complementary - but before I say more I would like to know if I have understood you correctly so far.

I can see how they might be complementary. But, I can only see it in the sense of a _dualism_. It is difficult for me to consider both sides at the same time. The sides being a) the ethical consideration of things like abject poverty, epidemic diseases, starvation, etc. and b) the objective necessity that, with a population-based search method, some individuals are destined for extrema, often very unpleasant extrema. And it is especially difficult to simultaneously consider both sides when the members of the population who are destined for horrible extrema like AIDS or starvation are innocents who didn't have any chance to _choose_ their extreme destiny.

--
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
Power never takes a back step - only in the face of more power. -- Malcolm X
In reply to this post by Phil Henshaw-2
Phil Henshaw wrote:
> Gee, I don't know if it helps with your philosophy, but I think you're making a common mistake with the inverse square relation. It's an indicator of complex system organization, not a design principle. 'A' implies 'B' but 'B' in no way implies 'A'. It's like a thermometer: if a thermometer reads 98.6 it's likely you've found a human, but heating something up to 98.6 and trying to talk to it is nutty. The inverse square metric is a time-saving empirical tool for helping to locate and investigate complex systems. You have to look into the system to find what makes it organized, though.

Hmmm. I don't think I'm making the mistake you're citing. But, it's plenty likely. I make all sorts of mistakes all the time. [grin] To be clear, let me paraphrase what you're saying:

By saying the inverse power law is a result/indicator rather than a design principle, you're saying that the generators of these results could be multifarious and not determined. I.e. just because a system turns out to require things like extreme behavior and circumstances doesn't mean the organization of the system is the only organization that could possibly achieve its objectives. You're saying that another system may be organized differently, achieve the same objectives, and not exhibit the same extrema.

Is that right or did I misunderstand you?

> The network science people seem to have a better way of using it than the other mainstream science disciplines interested in the subject, I think. They're looking at complex systems from the inside out (though maybe not having quite realized that networks are artifacts of the complex systems they are embedded in). Their identification of the elaboration and refinement of network connections during network development as the origin of the inverse square metric and 'scale-free' distribution of internal connectedness of natural networks is very helpful. There should logically be some kind of connection with the thinking of people taking an outside-in approach to complexity, but I have not been able to figure out what it is.

I don't quite buy this. But, my criticism of it takes us on a tangent. I'll state my criticism anyway and if you choose to pursue the tangent, then so be it. [grin]

I don't believe there is a fundamental difference between constructivism and formalism. I.e. one cannot study a system from the inside out without also studying it from the outside in, and vice versa. When one uses a phrase like "studying complex systems from the inside out", the phrase merely _emphasizes_ one part of the studying. Objectively, all studies involve an iterative approach that cycles between inside and outside studies. This seems to be true of everything from riding a bicycle to cosmology.

> As far as the limits of control, don't all complex systems have significantly independent design and behavior? It seems to me that the first thing anything with independent design and behavior requires is basic respect; otherwise you make large mistakes with it, right? We so often forget that finding the easy ways for independent things to get along is a great design strategy. Nature seems to like it quite a lot for evolutionary survival too!

It's not clear to me what you're saying, here. But, I don't really believe in "design". Design is a cognitive fiction we use to rationalize/justify our behavior. The causes of the phenomena generated by a complex system are... occult, occluded, at least to some extent.
I think that's why we call these systems "complex". And it's for these reasons that simulation is such a necessary and powerful tool in the study of these systems. We can't readily find "laws" that compress the description of the system. For less complex systems, we can infer these laws. Ultimately, however, even when we can (seem to) achieve some descriptive compression, the causes of the behavior are still occult. But we gain some confidence through temporal and spatial extrapolation (we repeat experiments through time and check to be sure the results are the same, and we have different people in different locations repeat the experiments to see if the results are the same). Through such indirect "validation", we come to trust that our compressed description is _correct_ or true. But, ultimately, the causes are still occult. There is always the chance of a black swan.

So, "don't all complex systems have significantly independent design and behavior"? For the above reasons, my answer is _no_ because "design" is a figment of our imagination. A better answer would be that the question is ill-formed and unanswerable. Complex systems are not _designed_ at all. They grow and evolve through the propagation of happenstance.

--
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
A random group of homeless people under a bridge would be far more intellectually sound and principled than anything I've encountered at the university so far. -- Ward Churchill
In reply to this post by glen ep ropella
Glen,
It seems the world has had for a long time, and still has, oppression, poverty and poor education of segments of its population. Perhaps we can say that the developed world has managed to lower its own deprived segment size while the un(der)developed hasn't made so much progress. (Do you remember the TED talk visualization on poverty?) It is considered by many, including you and me, that having deprived segments of the world's population is unethical because of the ethical standards we hold, have learned (and have been indoctrinated in, if you will).

It remains ethical to work towards the reduction and elimination of these deprived segments - it's a big job. The argument is over how. I don't believe complexity science or studies and simulations of Complex Adaptive Systems (CAS) are yet sufficiently mature to help very far in this endeavor, but I'm not an expert in the field. It just seems that way from the perspective of an observer.

That complexity studies indicate emergent behavior that is otherwise hard to predict and matches small-system (i.e. < 10^6 agents) behavior is *very* interesting and justifies further work. I don't think it separates cause and effect, which is the primary reason for not using such studies for predictive purposes. And there is no evidence yet of successful studies or simulations that model social change, e.g. the French or Russian Revolutions. (Please correct me if this is wrong.) So it seems that the problems of society (including trying to figure out what is the 'best' form of government) are not yet subject to relief from CAS studies. Many would not want one small class of experts to be responsible for this task anyway.

Going back to your original ethical dilemma, if one agrees with what is ethical and one's political position doesn't, then one will change/adjust/modify one's political position to maintain one's internal integrity. Labels and technicalities in definitions may be part of the problem:

I am a democrat because I believe everyone should have a say in government,
I am an environmentalist because we should take care of our biosphere so it remains habitable for us,
I am a monarchist because I don't want to disband the Royal Family,
I am a libertarian because I don't want a Big Brother government,
I am a conservative because I think we shouldn't waste our resources,
I am a republican in the sense I don't want to dismantle the US federal system and its three branches of government,
I am a capitalist because I believe in free markets,
I am a socialist because I believe everyone deserves basic health care, education, justice,
I am a moderate because I believe we deserve a system of justice that can rein in man's excesses,
etc.

If complexity science turns out to be a powerful technology it may take its place alongside fire, nuclear power and genetic engineering. All are amoral. It's how we use them for our benefit that will exercise our morals (ethics).

Robert C

Glen E. P. Ropella wrote:
> The sides being a) the ethical consideration of things like abject poverty, epidemic diseases, starvation, etc. and b) the objective necessity that, with a population-based search method, some individuals are destined for extrema, often very unpleasant extrema. And it is especially difficult to simultaneously consider both sides when the members of the population who are destined for horrible extrema like AIDS or starvation are innocents who didn't have any chance to _choose_ their extreme destiny.
In reply to this post by glen ep ropella
Glen,
> Phil Henshaw wrote:
> > Gee, I don't know if it helps with your philosophy, but I think you're making a common mistake with the inverse square relation. It's an indicator of complex system organization, not a design principle. 'A' implies 'B' but 'B' in no way implies 'A'. It's like a thermometer: if a thermometer reads 98.6 it's likely you've found a human, but heating something up to 98.6 and trying to talk to it is nutty. The inverse square metric is a time-saving empirical tool for helping to locate and investigate complex systems. You have to look into the system to find what makes it organized, though.
>
> Hmmm. I don't think I'm making the mistake you're citing. But, it's plenty likely. I make all sorts of mistakes all the time. [grin] To be clear, let me paraphrase what you're saying:
>
> By saying the inverse power law is a result/indicator rather than a design principle, you're saying that the generators of these results could be multifarious and not determined. I.e. just because a system turns out to require things like extreme behavior and circumstances doesn't mean the organization of the system is the only organization that could possibly achieve its objectives.

I may not be speaking directly to your actual phrase, describing what you've gathered from complexity theory: "the extent versus the objectives of control structures should show something like an inverse power law to maintain a balance between diversity and efficacy." I read that as meaning that you'd design an inverse square relation into your control systems. I don't know what actual kind of controls you may be thinking of, or how you'd measure their diversity or efficacy, of course.

The 'generators' of the inverse square measure are the self-organizations of the particular complex system you then try to understand. If you design a procedure by which self-organization develops, it's quite likely it would behave like natural self-organized systems and be structurally different every time, and still have metrics, like the inverse square distributions of their parts, which are similar. That there might also be various different kinds of solution to a given objective is a separate issue to me.

> You're saying that another system may be organized differently, achieve the same objectives, and not exhibit the same extrema.
>
> Is that right or did I misunderstand you?

I'm not quite sure it addresses your question, but I was saying the process by which complex systems evolve does not follow an inverse square pattern or series of steps. The measure is generally only found in systems after they have been built by other means.

> > The network science people seem to have a better way of using it than the other mainstream science disciplines interested in the subject, I think. They're looking at complex systems from the inside out (though maybe not having quite realized that networks are artifacts of the complex systems they are embedded in). Their identification of the elaboration and refinement of network connections during network development as the origin of the inverse square metric and 'scale-free' distribution of internal connectedness of natural networks is very helpful. There should logically be some kind of connection with the thinking of people taking an outside-in approach to complexity, but I have not been able to figure out what it is.
>
> I don't quite buy this. But, my criticism of it takes us on a tangent.
> I'll state my criticism anyway and if you choose to pursue the tangent, then so be it. [grin]
>
> I don't believe there is a fundamental difference between constructivism and formalism. I.e. one cannot study a system from the inside out without also studying it from the outside in, and vice versa. When one uses a phrase like "studying complex systems from the inside out", the phrase merely _emphasizes_ one part of the studying. Objectively, all studies involve an iterative approach that cycles between inside and outside studies. This seems to be true of everything from riding a bicycle to cosmology.

Well, it's not half well enough studied, but inside and outside perspectives of organization in systems are so very different that it takes special care to keep them straight, it seems to me. I'm not even sure if one can discuss a system as having an inside (a network cell of relations), since I haven't heard the 'news' in the journals yet and it seems to require a radical exception to the traditional view of determinism. Isn't the traditional view, that all causation comes from the outside, still the most widespread?

One of the differences between the two perspectives is the huge difference in the information content of your observations. If your view of the world is based on an insider's perspective of some self-organized 'hive' of activity, say a religious or social movement, it may be extremely hard to make sense of an outsider's view of exactly the same thing. The insider's view is of all the internalized connections, and the outsider's view of essentially all the loose ends. Getting them to connect can be very difficult.

> > As far as the limits of control, don't all complex systems have significantly independent design and behavior? It seems to me that the first thing anything with independent design and behavior requires is basic respect; otherwise you make large mistakes with it, right? We so often forget that finding the easy ways for independent things to get along is a great design strategy. Nature seems to like it quite a lot for evolutionary survival too!
>
> It's not clear to me what you're saying, here. But, I don't really believe in "design". Design is a cognitive fiction we use to rationalize/justify our behavior. The causes of the phenomena generated by a complex system are... occult, occluded, at least to some extent. I think that's why we call these systems "complex". And it's for these reasons that simulation is such a necessary and powerful tool in the study of these systems. We can't readily find "laws" that compress the description of the system. For less complex systems, we can infer these laws.

In studying natural systems it's apparent that lots of intricate 'design' develops without any 'design'. I was using the first sense above: that complex systems may develop all kinds of organization and activity that were neither preconceived nor predetermined. Whether you can find useful 'laws' to describe complex systems is, I think, like other real scientific questions, more dependent on whether you ask the right questions. I was looking for years for some clear evidence that the economic systems all act as a single complex system, behaving as a whole. The fact that the embodied energy of economic value (btu/$GDP) is asymptotically approaching around 8000 btu/$ in all the economies of the world seems to say it's all one system in a highly useful way.
The self-organization of the economies gives us a conversion and equivalence between a physical measure and what humans value. I expect these things are lying all over the place, but we're just beginning to recognize them.

> Ultimately, however, even when we can (seem to) achieve some descriptive compression, the causes of the behavior are still occult. But we gain some confidence through temporal and spatial extrapolation (we repeat experiments through time and check to be sure the results are the same, and we have different people in different locations repeat the experiments to see if the results are the same). Through such indirect "validation", we come to trust that our compressed description is _correct_ or true. But, ultimately, the causes are still occult. There is always the chance of a black swan.

I think it's more productive, when you're well beaten, to accept that systems with complex internal network designs we tend not to even see are beyond our understanding. There's still good sense to making models of things, and developing ways of determining if the models behave like what they imitate. One of the interesting subjects that came up at the SASO conference is that no one in the information network control systems field seems to know how to do that for self-organizing and self-adapting software controls... except random experiment. That may lead to 'gaining some confidence', as you say, but it's not the same as the narrowly defined uncertainties of the deterministic controls of the past.

> So, "don't all complex systems have significantly independent design and behavior"? For the above reasons, my answer is _no_ because "design" is a figment of our imagination. A better answer would be that the question is ill-formed and unanswerable. Complex systems are not _designed_ at all. They grow and evolve through the propagation of happenstance.

Well, that's kind of abstract. It's a simpler issue when talking about real things. Any ecology or social group, etc., will have different networks emerge within them as they develop, and so they will respond differently too. That's all I mean by independent design and behavior.

When I speak of 'designing' complex systems, as I do for architectural and planning projects, it's more about setting up a system learning process. Discovering how to make links between previously disconnected parts of communities takes an effort at exploratory learning about the disconnected parts of your community. Once you then design ways to link them, the end product is their own creative interactions, which the planners would never think of. Instead of 'propagation of happenstance' I'd use 'development of opportunity'. The latter covers both truly random events and the exploratory path-finding processes also prominent in self-organization.

Phil
In reply to this post by Robert J. Cordingley
Hi,
Many years ago, when I was working on my undergraduate thesis in the jungle of Amazonas in Colombia, I knew a North American anthropologist who had been working there for a long time studying how an indigenous culture disappeared. I was horrified by it and thought it was immoral. Older members of the team of researchers where I was working told me that she was doing science and that a scientist must be neutral. I think that's totally false. A scientist has an emotional and political charge, deep inside feels like a demiurge, and for these reasons can't be completely impartial.

What is science for? Science has a social function; it must help us to understand and resolve problems, but of course it is an instrument of politics because, in the end, we are in a world of gangs.

I have a hypothesis: biotechnology, robotics, informatics, smart software and internationalization of the economy will increase poverty in the underdeveloped world. I'm not a scientist, but suppose I am: I take data and develop a sophisticated model. Maybe - be sure of it - I'll conclude that my hypothesis is true, and I'll say for the first time something brilliant like "Poverty is an emergent process"... wow, what a conclusion! If a guy whose dream is to be a high executive of the World Bank, IMF or WTO takes data and develops a sophisticated model, he will conclude that my hypothesis is false and will say "Richness is an emergent process". Maybe neither of us will be telling lies. Of course I'll be right, but I'll pray for his conclusion to be right, because in the end he will be a high executive and will have the last word.

Alfredo CV
> I have a hypothesis: biotechnology, robotics, informatics, smart software and internationalization of the economy will increase poverty in the underdeveloped world.
I think: 1) it can be proved as a theorem of general systems theory & cybernetics; and 2) it's a part of the fight for the future.

--Mikhail
So it seems that the problems of society (including trying to figure out what is the 'best' form of government) are not yet subject to relief from CAS studies. Many would not want one small class of experts to be responsible for this task anyway. Going back to your original ethical dilemma, if one agrees with what is ethical and one's political position doesn't, then one will change/adjust/modify one's political position to maintain one's internal integrity. Labels and technicalities in definitions may be part of the problem:

I am a democrat because I believe everyone should have a say in government,
I am an environmentalist because we should take care of our biosphere so it remains habitable for us,
I am a monarchist because I don't want to disband the Royal Family,
I am libertarian because I don't want a Big Brother government,
I am conservative because I think we shouldn't waste our resources,
I am a republican in the sense I don't want to dismantle the US federal system and its three branches of government,
I am a capitalist because I believe in free markets,
I am socialist because I believe everyone deserves basic health care, education, justice,
I am a moderate because I believe we deserve a system of justice that can rein in man's excesses.
etc

If complexity science turns out to be a powerful technology it may take its place alongside fire, nuclear power and genetic engineering. All are amoral. It's how we use them for our benefit that will exercise our morals (ethics). Robert C

Glen E. P. Ropella wrote: The sides being a) the ethical consideration of things like abject poverty, epidemic diseases, starvation, etc. and b) the objective necessity that, with a population-based search method, some individuals are destined for extrema, often very unpleasant extrema. And it is especially difficult to simultaneously consider both sides when the members of the population who are destined for horrible extrema like AIDS or starvation are innocents who didn't have any chance to _choose_ their extreme destiny. - -- glen e. p. ropella, 971-219-3846, http://tempusdictum.com Power never takes a back step - only in the face of more power. -- Malcolm X

============================================================ FRIAM Applied Complexity Group listserv Meets Fridays 9a-11:30 at cafe at St. John's College lectures, archives, unsubscribe, maps at http://www.friam.org |
In reply to this post by Robert J. Cordingley
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1 You bring up two very important points:

1) a strong hypo-thesis (somewhat but not completely justified) that it _is_ ethical to attempt to reduce "deprived" segments and

2) ethical justification for various labels (democrat, monarchist, etc.).

You also brought up the point that the techniques of complexity studies are, as yet, too immature to really bring them to bear on the problem. I don't regard this as an important point because tools must be _used_ to become mature. So, it doesn't matter how immature the techniques are, they must be used on the problems we have at hand. And the corollary points about them not separating out cause/effect and about prediction are, in my opinion, premature conclusions. So, I'll leave these points alone for now.

A solution to my dilemma might involve _rejecting_ the ethical premise that the deprived segments should not be so deprived. E.g. some children _must_ starve in order for life to effectively do whatever it's doing. That is a completely reasonable solution (and one taken by many of us lucky ones whose selves, friends, family, tribe, etc. have their basic needs taken care of). Because that is a completely reasonable solution, we not only have to question _how_ alternative solutions (those that maintain the ethical premise) obtain, but we also have to question the entire process of _justification_. Can the ethical premise be more completely justified?

This same question comes into your second important point. When I call myself a "monarchist" and that "theorem" is somehow justified via some form of rhetoric, we not only have to question the conclusions derived from the premise. We also have to question the rhetorical justification of the premise, itself. Am I really a "monarchist", regardless of what I call myself? Does the rhetoric: "because I don't want to disband the Royal Family" deductively lead to the label "monarchist"? Etc.

This relates fundamentally to the question of whether things like inverse power laws between particular measures can be effectively applied to social and/or ethical problems. It relates because of the following. The results of complexity studies are telling us (in my opinion) _nothing_ about actual (ontological) reality. These results merely tell us how we as ignorant individuals _learn_ about actuality. They are at their core a psychological bridge between reductionism and holism.

The dilemma, as I formulated it, relates two unjustified measures: the extent of a control structure and the number of objectives any control structure can competently achieve. I believe the epistemological results of complexity theory can help either: a) justify the two measures, or b) demonstrate how one or both of the measures are unjustified. It's also possible that either measure is justified but falsified (a.k.a. valid but unsound in logic-speak or verified but invalid in M&S-speak). We can't currently falsify the measures and their relationship because we haven't done the science (though I believe it's relatively easy to formulate a falsifiable hypothesis). And whether or not the science is _worth_ pursuing depends on the justification.

So, the questions become:

Q1) Do non-local control structures exist that regulate many variables?

Q2) Can particular variables (e.g. hunger) be factored completely out of the system so that no animal/plant experiences extreme changes in those variables?

These are _justification_ questions, not falsification questions.
Hence, they are perfectly suited for the toy-world models currently being built by social scientists and mathematicians. Once the justification is well stated, falsification questions can be competently posed. - -- glen e. p. ropella, 971-219-3846, http://tempusdictum.com There is a tragic flaw in our precious Constitution, and I don't know what can be done to fix it. This is it: Only nut cases want to be president.
-- Kurt Vonnegut -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.6 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD8DBQFG3amOZeB+vOTnLkoRAgEUAKDK7Mjc3EpNgOjqjmIiyyLJ6ppxygCg0n0J 1bFC1hz8fvBJr8cypjkfUGE= =5ozy -----END PGP SIGNATURE----- |
re: important point 1. It is easier for me to see/say that it is
_unethical_ to _not_ lend some assistance to deprived segments in order to improve their lot. Reduce the segment to one deprived human being that you pass in the street. There are many variables in the encounter: one's schedule, feeling of well-being, attire of the unfortunate being and the urge to extend a helping hand. Where does that come from if not from one's ethical background?

re: important point 2. It wasn't my point to say the labels were ethically justified but to point out that labels, e.g. one being "libertarian", were not clear-cut definitions. One can hold political view x on some issues and y on others when pedants might object to say that x and y were incompatible. There may be no ethical dilemma for one to believe in x and y, though others may debate it. Your 'reasonable' solution might suit a callous person. We have to guard against trends towards 'final solutions'. I thought the "extent of a control structure" and "the number of objectives" were two attributes of government that your studies, or at least your thinking, had connected as related through an inverse power law. Neither needs justifying. I'm probably missing the point or not familiar with your definition of 'justified'.

re: Q1) "Do non-local control structures exist that regulate many variables?" - I have no idea, but suggest that getting some agreement on the definition of the terms of the question may take some time, even if it's possible.

re: Q2) "Can particular variables (e.g. hunger) be factored completely out of the system so that no animal/plant experiences extreme changes in those variables?" - I'd vote for working towards improvement in the social variables knowing that absolute success may be beyond us - but wait, what about small-pox, or death by dinosaur? When you say 'variable' do you mean 'vector'? But then there are 8 meanings of "vector" in Wiktionary. So much epistemology, so little time... Robert C |
In reply to this post by glen ep ropella
Glen,
You make a remarkably cogent argument, up until you frame the dilemma entirely in terms of control. Natural systems orchestrate things leaving most things working independently and 'out of control'. We should have some better reason than frustration for ignoring that rather effective 'collaborative' approach, shouldn't we? Phil Sent from my Verizon Wireless BlackBerry |
In reply to this post by Phil Henshaw-2
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1 Phil Henshaw wrote: > I may not be speaking directly to your actual phrase, describing what > you've gathered from complexity theory: "the extent versus the > objectives of control structures should show something like an > inverse power law to maintain a balance between diversity and > efficacy." I read that as meaning that you'd design an inverse square > relation into your control systems. I don't know what actual kind of > controls you may be thinking of, or how you'd measure their diversity > or efficacy, of course. The actual controls I'm talking about are simple positive and negative reinforcement of behavior. For example, if someone breaks a law, we try to apply negative reinforcement through punishment. If someone does a good job, we try to apply positive reinforcement through compensation. But, I think the principle would also hold in engineering control systems. When I say that a graph of extent versus number of objectives _shows_ an inverse power law, I am not saying that I would design an inverse square law into a control system. I don't know why you insist on replacing "power" with "square". And I don't know how one would mistake "design" for "show". I simply mean that if you measured the panoply of existing control structures using two measures: extent of the control structure in space and time and number of objectives for that control structure, you would see an inverse power relationship between the two measures. I.e. the larger the extent of the control structure, the fewer its objectives. The smaller the extent, the higher its number of objectives. I have no idea if the power of the relation would turn out to be 2 or not. > Well, it's not half well enough studied, but inside and outside > perspectives of organization in systems are so very different it > takes special care to keep them straight it seems to me. I'm not > even sure if one can discuss a system as having an inside (network > cell of relations) since I haven't heard the 'news' in the journals > yet and it seems to require a radical exception to the traditional > view of determinism. Isn't the traditional view that all causation > comes from the outside still the most widespread? I don't know what the general view of causation is. But, the general categories for observation from the inside versus the outside are: constructivism versus formalism. When one observes a system objectively, from the outside, it seems the tendency is to formalize everything (a.k.a. remove the semantic grounding of the tokens that represent constituents of the system). When one observes a system subjectively, from the inside, it seems the tendency is to retain the semantics and construct explanations directly from the constituents of the system. My point was that, ultimately, there's no fundamental difference between the two because even a subjective account of a phenomenon will involve objectively defined sub-elements and an objective account of a phenomenon will involve subjectively interpreted sub-elements. The difference is one of _method_ not of substance. > One of the differences between the two perspectives is the huge > difference inside and outside views is in the information content of > your observations. If your view of the world is based on an > insider's perspective of some self-organized 'hive' of activity, say > a religious or social movement, it may be extremely hard to make > sense of an outsider's view of exactly the same thing. 
The insider's > view is of all the internalized connections, and the outsider's view > of essentially all the loose ends. Getting them to connect can be > very difficult. But, as alluded to above, the reason for this is that the inside view retains the semantics and the outside view tries to reduce the relations to pure syntax. Pure syntax is best for prediction but piss-poor for heuristic value. Pure semantics is best for understanding but near useless for prediction. - -- glen e. p. ropella, 971-219-3846, http://tempusdictum.com Whenever we depart from voluntary cooperation and try to do good by using force, the bad moral value of force triumphs over good intentions. - -- Milton Friedman -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.6 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD8DBQFG3tEPZeB+vOTnLkoRAgGsAKCCOqFC8IQ8Tl28hseuUv2jYSvalQCfaDNL sAAHFL7qRIRyq6QJx9t2iY4= =77Tp -----END PGP SIGNATURE----- |
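The hypothesis described above is, at bottom, a statistical claim: collect two justified measures per control structure (its extent and its number of objectives) and check whether they fall on an inverse power law. A minimal sketch of what that check might look like, with all numbers invented purely for illustration:

    import numpy as np

    # Hypothetical measurements for a handful of control structures, from a
    # neighborhood association up to a federal agency: (extent, objectives).
    # Real values would have to come from the data-taking effort described above.
    extent = np.array([1.0, 5.0, 20.0, 100.0, 500.0, 2000.0])
    objectives = np.array([400, 120, 35, 9, 3, 1])

    # Fit objectives ~ c * extent**(-k) by least squares in log-log space.
    slope, log_c = np.polyfit(np.log(extent), np.log(objectives), 1)
    print(f"power-law exponent k: {-slope:.2f}")   # positive if an IPL holds
    print(f"prefactor c: {np.exp(log_c):.1f}")

A straight line on a log-log plot is weak evidence on its own, though; a real analysis would also compare the fit against alternatives (e.g. an exponential fall-off) before claiming an inverse power law.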
In reply to this post by Alfredo Covaleda
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1 Your use of English confuses me at some points; but, I think I've gotten your gist. I agree that a scientist is not (indeed _cannot_ be) neutral or amoral. Scientists are humans first and foremost. And humans cannot be neutral. They are indoctrinated throughout their lives and learn things IN context. ("Neutrality", to some extent, implies a large percentage of invariants through context changes.) So, it is false to claim that a scientist should be neutral. However, it is not false to claim that science (in contrast to the humans who engage in it) can be mostly neutral. It can't be completely neutral because every product of science assumes some sub-set of all the other products of science. But, it can at least be somewhat spatially and temporally context independent. In fact, that's part of the definition of science, that it contain invariants through time and space. The point you bring up about individuals (or sub-groups) and their posited models is a good one. But, a model is NOT scientific if it is only posited, held, or tested once (by one individual or one execution). A model can only be scientific if it's been posited, held, and/or tested by multiple people, in multiple contexts, and executed multiple times. Science is a social phenomenon, external to any single individual and (hopefully) external to any single sub-group. The interesting part of science, to me, lies in applying its results. And in that sense, science definitely has a social role to play. In fact, there's little point in engaging in science if all you want is to understand the universe, by yourself in your closet. You can understand the universe in purely metaphysical or metaphorical terms if you like. The point of science is to collectively pursue not just understanding but meaning and engineering. - -- glen e. p.
ropella, 971-219-3846, http://tempusdictum.com The poet, the artist, the sleuth - whoever sharpens our perception tends to be antisocial... he cannot go along with currents and trends. -- Alfred North Whitehead -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.6 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD8DBQFG3w2QZeB+vOTnLkoRAvTvAKCOHjUrSsXO/lCyBWnwVYICC0yQ+ACghv0J KWkWPa9aYyKhDcXvvV29U+c= =xOP5 -----END PGP SIGNATURE----- |
In reply to this post by Robert J. Cordingley
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1 Robert Cordingley wrote: > re: important point 1. It is easier for me to see/say that it is > _unethical_ to _not_ lend some assistance to deprived segments in order > to improve their lot. Reduce the segment to one deprived human being > that you pass in the street. There are many variables in the encounter: > one's schedule, feeling of well-being, attire of the unfortunate being > and the urge to extend a helping hand. Where does that come from if not > from one's ethical background?

Exactly! For problems with many variables, _local_ controls are adequate (or even common). An example of a local control would be an individual's ability to regulate how much "spare change" they hand to a transient based on the measurements they take in context. An example of a non-local control would be, e.g., banning all transients from the city of Santa Cruz. In the first case, the individual gets to handle it all, including how much money (resources) is doled out, whether the subject is a "transient", the attire of the subject, how much spare change is available in the individual's pocket or in the subject's can, etc. In the second case, some generic definition of "transient" must be found, some definition of "ban" must be found, some definition of "Santa Cruz" must be found. And these definitions would provide umbrellas for many finer-grained variables.

> Where does that come from if not from one's ethical background?

This is where the trick lies. Ethical indoctrination is sub-group dependent. Let's say I was reared in New York where it is ethically acceptable to ignore transients. Then there are zero problems when I translate to Santa Cruz and ignore transients there (except for the aggressive ones, of course ;-). But, if I were reared in Santa Cruz, where it used to be considered "good" to help homeless people, and I translated to New York, I'd soon be broke from handing out all my cash! This is handled in a non-local way, however. In Santa Cruz and New York, the collective gets together and hammers out policy that somehow embodies the generalized individual ethics of many of the people. But when an individual translates from one context to the other, the non-local control structure changes (the individual's ethics don't... or not as fast, anyway). And the result is dissonance between the individual ethic (local control) and the non-local control structure. Hence, how much money I give to a beggar does NOT merely come from my ethical background; it also comes from whatever non-local control structure in which I sit. In places where the homeless are partially taken care of through government-sponsored programs, I may choose not to give anything to a transient even though my ethical background would suggest otherwise.

> re: important point 2. It wasn't my point to say the labels were > ethically justified but to point out that labels, e.g. one being > "libertarian", were not clear-cut definitions. One can hold political > view x on some issues and y on others when pedants might object to say > that x and y were incompatible. There may be no ethical dilemma for one > to believe in x and y, though others may debate it.

Right. I did not intend to suggest that you were providing ethical justifications for any given label. But, your list of causal relations between the label and some context points out that justification is important. Not necessarily "ethical justification"... plain old rhetoric.
If the justification for a label is not accepted by others, then the justification is _questionable_. This covers your point that the labels are not clear cut. But it also includes situations where the definition is fine but the grammar that leads from one statement to another can be called into question. Sorry for my poor choice of words before. > I thought the "extent of a control structure" and "the number of > objectives" were two attributes of government that your studies, or at > least your thinking, had connected as related through an inverse power > law. Neither needs justifying. I'm probably missing the point or not > familiar with your definition of 'justified'. It's mostly just my _thinking_, not my studies. I don't work in sociology, politics, or any of that. But both measures need justification. A measure of the extent of a control structure could be manipulated to give any sort of answer. So, a concrete measure of extent needs justifying. For example, is it enough to define "extent" in terms of space and time? Can a politician in DC actually write, enforce, or judge actions based on laws governing people in Washington state? Is a law written in 1878 (Posse Comitatus) applicable in 2006? Or is it also necessary to consider some sort of cultural extent as well as spatial and temporal extent? Such rhetoric is "justification". And both measures (extent and number of objectives) require such justification. > re: Q1) "Do non-local control structures exist that regulate many > variables?" - I have no idea, but suggest that getting some agreement > on the definition of the terms of the question may take some time even > if it's possible. Well, as usual, we won't get agreement first then experiment later. It's normally the case that some yahoo just settles on concrete meanings of the terms and does the experiment. If they're a scientist, they tend to also write down their definitions and methods. After several such experiments have been executed and argued about, agreement starts to settle in. > re: Q2) Can particular variables (e.g. hunger) be factored completely > out of the system so that no animal/plant experiences extreme changes in > those > variables? - I'd vote for working towards improvement in the social > variables knowing that absolute success may be beyond us - but wait, > what about small-pox, or death by dinosaur? When you say 'variable' do > you mean 'vector'? But then there are 8 meanings of "vector" in Wiktionary. Yes, we could easily rid the world of Poverty and Hunger (note the capital letters) by ridding the world of humans! (Analogous to the "death by dinosaur".) When I say "variable" I don't tend to mean "vector". But, a "vector" can be a variable and vice versa. For example, "poverty" might be a variable and it (as currently understood) has several components. Hence the modern concept of poverty (or "poverty level") is a vector in a pseudo-mathematical sense. But you can understand my language (if not a path to concreteness ;-) by thinking of variables as scalars. They imply not only a quantification but also a common medium (a space or hyper-space) in which they are embedded. Otherwise, it would be silly to relate them. In the case of something like small-pox, my ethical background tells me that we ought to prevent any further outbreak or transmission of small-pox over the entire globe. And such a non-local control would not violate an IPL between extent and number of objectives. But, my common sense tells me that there's a cost to such prevention. 
And that cost is not necessarily all in money. For example, what if we only have the resources to feasibly control, say, 10 diseases in this global way? This leads me to consider the pros and cons of small-pox. Of course, were I to make serious attempts to justify (or even hunt for) the good side of the sporadic small-pox epidemic, I would (rightfully) be vilified. So, one can never _seriously_ consider the pros and cons of it. And therein lies the dilemma. - -- glen e. p. ropella, 971-219-3846, http://tempusdictum.com We think in generalities, but we live in detail. -- Alfred North Whitehead -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.6 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD8DBQFG3xnKZeB+vOTnLkoRAm+2AJ0RizrHrPGkgcLP3X7yMieL6Zh2qgCgvoKm 6lOHjlLdiJ2bsAk4/xzxQwY= =0S48 -----END PGP SIGNATURE----- |
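The variable-versus-vector distinction above can be made concrete with a toy sketch (the component names and weights here are invented, not anything proposed in the thread): a "poverty" vector of components is collapsed into the single scalar that would actually be embedded in a model alongside other variables.

    from dataclasses import dataclass

    @dataclass
    class PovertyVector:
        """A toy 'poverty' vector; each component is a fraction in [0, 1]."""
        income_shortfall: float
        food_insecurity: float
        housing_insecurity: float

        def as_scalar(self, weights=(0.5, 0.3, 0.2)) -> float:
            """Collapse the components into one scalar 'poverty level'."""
            parts = (self.income_shortfall, self.food_insecurity, self.housing_insecurity)
            return sum(w * p for w, p in zip(weights, parts))

    p = PovertyVector(income_shortfall=0.4, food_insecurity=0.2, housing_insecurity=0.1)
    print(p.as_scalar())   # 0.28 -- the scalar "variable" a model would relate to others

The choice of components and weights is exactly the kind of thing that would itself need justification before the resulting scalar could be treated as a measured variable.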
In reply to this post by glen ep ropella
Glen,
> Phil Henshaw wrote: > > I may not be speaking directly to your actual phrase, > describing what > > you've gathered from complexity theory: "the extent versus the > > objectives of control structures should show something like > an inverse > > power law to maintain a balance between diversity and efficacy." I > > read that as meaning that you'd design an inverse square > relation into > > your control systems. I don't know what actual kind of > controls you > > may be thinking of, or how you'd measure their diversity or > efficacy, > > of course. > > The actual controls I'm talking about are simple positive and > negative reinforcement of behavior. For example, if someone > breaks a law, we try to apply negative reinforcement through > punishment. If someone does a good job, we try to apply > positive reinforcement through compensation. But, I think the > principle would also hold in engineering control systems. OK, pushes and pulls, in directions chosen by a 'controller'. Does that include looking at the subjects of control to see how they find it easiest and hardest to respond? In other words is the thing you call 'control' just as easily a complex system learning process? > When I say that a graph of extent versus number of objectives > _shows_ an inverse power law, I am not saying that I would > design an inverse square law into a control system. I don't > know why you insist on replacing "power" with "square". And > I don't know how one would mistake "design" for "show". I > simply mean that if you measured the panoply of existing > control structures using two measures: extent of the control > structure in space and time and number of objectives for that > control structure, you would see an inverse power > relationship between the two measures. I.e. the larger the > extent of the control structure, the fewer its objectives. > The smaller the extent, the higher its number of objectives. > I have no idea if the power of the relation would turn out to > be 2 or not. I guess bending my mind to directly think about the distributed 'process ecologies' of complex systems, leaves me to make occasional odd errors in math... No, I do mean to be talking about Pareto distributions and the inverse power law family or relationships. I also should not dismiss the usefulness of designing a control strategy to fit the statistically probable shape of the problem you're dealing with. Whether statistics ignore individual characteristics or not they still do save a lot of time! I suppose there are lots of good examples of exceptions to my notion that designing systems to follow inverse power laws is an error. > > > Well, it's not half well enough studied, but inside and outside > > perspectives of organization in systems are so very different it > > takes special care to keep them straight it seems to me. I'm not > > even sure if one can discuss a system as having an inside (network > > cell of relations) since I haven't heard the 'news' in the journals > > yet and it seems to require a radical exception to the traditional > > view of determinism. Isn't the traditional view that all causation > > comes from the outside still the most widespread? > > I don't know what the general view of causation is. But, the > general categories for observation from the inside versus the > outside are: constructivism versus formalism. When one > observes a system objectively, from the outside, it seems the > tendency is to formalize everything (a.k.a. 
remove the > semantic grounding of the tokens that represent constituents > of the system). When one observes a system subjectively, > from the inside, it seems the tendency is to retain the > semantics and construct explanations directly from the > constituents of the system. > > My point was that, ultimately, there's no fundamental > difference between the two because even a subjective account > of a phenomenon will involve objectively defined sub-elements > and an objective account of a phenomenon will involve > subjectively interpreted sub-elements. I think my point would be that outside perspectives are highly naturally subjective in a hidden way, causing there to be a big difference between inside and outside views. Your premise seems to be that your observer is all seeing. For a real outside observer of any independent cell of relationships, the relationships are not participated in and the existence of the system they are part of is thus completely invisible. It's only when the observer steps inside the system, getting into the loop, that they suddenly become aware of the whole other world of relationships it represents. We see this over and over, that systems develop in secret from us and then our awareness of them bursts into our attention. I think that's a direct effect of systems developing as truly independent cells of relationships. > > The difference is one of _method_ not of substance. > > > One of the differences between the two perspectives is the huge > > difference inside and outside views is in the information > content of > > your observations. If your view of the world is based on an > > insider's perspective of some self-organized 'hive' of activity, say > > a religious or social movement, it may be extremely hard to make > > sense of an outsider's view of exactly the same thing. The > insider's > > view is of all the internalized connections, and the outsider's view > > of essentially all the loose ends. Getting them to connect can be > > very difficult. > > But, as eluded to above, the reasons for this is that the > inside view retains the semantics and the outside view tries > to reduce the relations to pure syntax. Pure syntax is best > for prediction but piss-poor for heuristic value. Pure > semantics is best for understanding but near useless for prediction. Wouldn't it be nice to have a heuristics machine to convert pure syntax in to meaningful gobbely gook for any particular inside view...! I'm not sure how, but this might connect with the structural dilemma that nature's design is deceptive because we all think the world we see is the one that's there, and we all see different ones, partly because of the inverse power law distributions of network connections as I was describing to Bill. Phil > > - -- > glen e. p. ropella, 971-219-3846, http://tempusdictum.com > Whenever we depart from voluntary cooperation and try to do > good by using force, the bad moral value of force triumphs > over good intentions. > - -- Milton Friedman > > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1.4.6 (GNU/Linux) > Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org > > iD8DBQFG3tEPZeB+vOTnLkoRAgGsAKCCOqFC8IQ8Tl28hseuUv2jYSvalQCfaDNL > sAAHFL7qRIRyq6QJx9t2iY4= > =77Tp > -----END PGP SIGNATURE----- > > ============================================================ > FRIAM Applied Complexity Group listserv > Meets Fridays 9a-11:30 at cafe at St. John's College > lectures, archives, unsubscribe, maps at http://www.friam.org > > |
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1 Phil Henshaw wrote: > OK, pushes and pulls, in directions chosen by a 'controller'. Does that > include looking at the subjects of control to see how they find it > easiest and hardest to respond? In other words is the thing you call > 'control' just as easily a complex system learning process? Yes. In fact, I tend to believe in the "law" of requisite variety and would say that a controller for a complex system must, itself, be complex at least in the sense that it contains a complex model of the regulated system. But, it's important to state that the complexity of the control system can be high and need _not_ be a function of the complexity of the system it controls. Since I'm putting forth an unjustified thesis (a.k.a. hypothesis), I'm not really making detailed claims about the control systems being measured. I'm merely trying to justify taking the data in the first place. And part of the justification for taking the data can be toy models arguing for/against the hypothesis. Just to keep it straight, the hypothesis is that there's an IPL between the extent and number of variables controlled by any given control system. And just to reiterate, _if_ that turned out to be true, then I have an ethical dilemma w.r.t. particular variables that come under the heading of "healthcare", "abortion", etc. > I guess bending my mind to directly think about the distributed 'process > ecologies' of complex systems, leaves me to make occasional odd errors > in math... No, I do mean to be talking about Pareto distributions and > the inverse power law family or relationships. I gathered as much. But I just wanted to make it clear. > I think my point would be that outside perspectives are highly naturally > subjective in a hidden way, causing there to be a big difference between > inside and outside views. Your premise seems to be that your observer > is all seeing. Well, to some extent I want it to be. On the one hand, if we had the budget to take the data (even if only with the maximum scale set at something like city ordinances and a minimum scale set at some small number of human attributes), we'd have to settle on some concrete measures that will, by definition, be limited in what they measure. And all subsequent observations would be similarly limited. So, any feasible observation or experiment will be practically limited. But, I have in mind a limit process where _if_ we executed some large number of observations (from neighborhood association, village, town, city, county, state, all the way up to the feds or perhaps the globe), then I imagine the whole gamut would show the IPL. (This statement is partially circular because invariance to scale is part of the hypothesis.) And in that limit, then, yes, I'm suggesting the accumulated measures are "all seeing". > For a real outside observer of any independent cell of > relationships, the relationships are not participated in and the > existence of the system they are part of is thus completely invisible. > It's only when the observer steps inside the system, getting into the > loop, that they suddenly become aware of the whole other world of > relationships it represents. We see this over and over, that systems > develop in secret from us and then our awareness of them bursts into our > attention. I think that's a direct effect of systems developing as > truly independent cells of relationships. I can see the picture you're drawing and agree in the abstract. But, I still don't know how this applies to the dilemma. Sorry for being dense. 
> Wouldn't it be nice to have a heuristics machine to convert pure syntax > in to meaningful gobbely gook for any particular inside view...! LoL! Thanks for that joke. It's the first laugh I've had today. > I'm not sure how, but this might connect with the structural dilemma > that nature's design is deceptive because we all think the world we see > is the one that's there, and we all see different ones, partly because > of the inverse power law distributions of network connections as I was > describing to Bill. Yes, it certainly is related. Any control has a "surface" of levers and measures by which it manipulates the controlled system. That surface is limited to and a function of the controller. It's the controller's "world view". And to the extent that the controller consists of humans or human artifacts, it embodies the world views of those humans. And those humans _do_ tend to think that their world view is _true_. And when world views conflict, the opportunity is there to revise the conflicting world views; but, that opportunity is often lost on those who hold the world view. This is especially acute where the world views are fossilized into laws, rules, or policy. And it's worsened by the design by committee feature of most policy setting bodies. Indeed, the world view embodied by a policy is probably _not_ held by any of the members of the committee that created the policy, making the policy even more removed from reality than the original world views of the humans on the committee. But, I don't think this point is critical to finding and using a hypothetical IPL between the extent and objectives of a controller (policy + enforcer). It might become critical in the resolution of any conflict that IPL would present with an ethical standing, however. And if that's your point, then I'm starting to get it. Thanks for sticking with it. - -- glen e. p. ropella, 971-219-3846, http://tempusdictum.com Know ten things. Say nine. -- unknown -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.6 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD8DBQFG4GPaZeB+vOTnLkoRAl97AJ4qpqnNn/rQf1KMu4JGSU7paHApRQCdH5Df 2mkaETatgPu9a//vaH6VgPA= =so80 -----END PGP SIGNATURE----- |
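One way to read the appeal to toy models above: even a deliberately crude model can make the extent-versus-objectives hypothesis concrete enough to argue about. The sketch below is one such toy (the fixed "attention budget" and the linear cost model are assumptions of mine, not claims from the thread); under those assumptions the mean number of objectives falls off roughly as the inverse first power of extent.

    import random
    import statistics

    random.seed(0)

    def toy_controller():
        """One hypothetical control structure: (extent, number of objectives)."""
        budget = random.lognormvariate(3.0, 0.5)   # total attention available
        extent = random.lognormvariate(2.0, 1.5)   # spatial/temporal reach
        cost_per_objective = extent                # assumption: cost grows linearly with extent
        objectives = max(1, int(budget / cost_per_objective))
        return extent, objectives

    sample = [toy_controller() for _ in range(10_000)]

    # Mean objectives per extent bin; under the assumptions above it should
    # fall roughly as 1/extent, i.e. an inverse power law with exponent ~1.
    for lo, hi in [(1, 5), (5, 25), (25, 125), (125, 625)]:
        objs = [o for e, o in sample if lo <= e < hi]
        if objs:
            print(f"extent in [{lo}, {hi}): mean objectives = {statistics.mean(objs):.1f}")

A toy like this doesn't justify the measures, but it does make the hypothesis falsifiable in principle: swap in a different cost model and the predicted exponent changes, which is something real data could discriminate.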
Glen,
> Since I'm putting forth an unjustified thesis (a.k.a. hypothesis), I'm not
> really making detailed claims about the control systems being measured.
> I'm merely trying to justify taking the data in the first place. And part
> of the justification for taking the data can be toy models arguing
> for/against the hypothesis. Just to keep it straight, the hypothesis is
> that there's an IPL between the extent and number of variables controlled
> by any given control system.
>
> And just to reiterate, _if_ that turned out to be true, then I have an
> ethical dilemma w.r.t. particular variables that come under the heading of
> "healthcare", "abortion", etc.

Yes, and that's the point: you're going to notice, for certain, that one
fixed control modality may be inappropriate when dealing with things that
are naturally out of your control. That's where I think a more conscious
effort is needed to think of complex systems as independent entities with
independent behavior, moving away from treating them as statistics. There
are lots of situations where the design objective might be to get things to
fit, as if engineering a handshake (a *mutual* homing device) between
independent things (a toy illustration appears below). I'm not sure whether
that departs entirely from the notion of 'control', though it's rather
different from the narrow sense that quite ignored the presence of complex
systems in the environment which we all inherited.

> > I guess bending my mind to directly think about the distributed 'process
> > ecologies' of complex systems leaves me to make occasional odd errors in
> > math... No, I do mean to be talking about Pareto distributions and the
> > inverse power law family of relationships.
>
> I gathered as much. But I just wanted to make it clear.
>
> > I think my point would be that outside perspectives are highly naturally
> > subjective in a hidden way, causing there to be a big difference between
> > inside and outside views. Your premise seems to be that your observer is
> > all seeing.
>
> Well, to some extent I want it to be. On the one hand, if we had the
> budget to take the data (even if only with the maximum scale set at
> something like city ordinances and a minimum scale set at some small
> number of human attributes), we'd have to settle on some concrete measures
> that will, by definition, be limited in what they measure. And all
> subsequent observations would be similarly limited. So, any feasible
> observation or experiment will be practically limited.
>
> But, I have in mind a limit process where _if_ we executed some large
> number of observations (from neighborhood association, village, town,
> city, county, state, all the way up to the feds or perhaps the globe),
> then I imagine the whole gamut would show the IPL. (This statement is
> partially circular because invariance to scale is part of the hypothesis.)
> And in that limit, then, yes, I'm suggesting the accumulated measures are
> "all seeing".

But every node in the network of your model will represent a hive of complex
behavior at another scale, and the model as a whole will be a greater complex
environment. I think the fact that all system structures are embedded in
larger complexities, which can't be described by the same mode of
description, is part of what I was suggesting made finding an implied 'all
seeing' observer in an argument raise questions.
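The "handshake as mutual homing device" image above can be sketched
minimally as two independent agents that each adjust toward the other, with
neither one in control of the fit. The one-dimensional setting, the
adjustment rates, and the stopping rule are assumptions made purely for
illustration.

# Sketch of a "mutual homing" handshake: neither party controls the other;
# each independently moves part-way toward where it last saw the other.
# All numbers are illustrative assumptions.
a, b = 0.0, 10.0            # two independent "things" starting far apart
rate_a, rate_b = 0.2, 0.3   # each has its own adjustment rate

for step in range(100):
    a_next = a + rate_a * (b - a)   # a homes on b
    b_next = b + rate_b * (a - b)   # b homes on a, simultaneously
    a, b = a_next, b_next
    if abs(a - b) < 1e-3:
        break

print(f"handshake settles near {a:.3f} after {step + 1} steps")

Contrast this with the one-sided case, where only one of the two adjusts:
the fit is then imposed rather than negotiated.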
> > For a real outside observer of any independent cell of relationships,
> > the relationships are not participated in and the existence of the
> > system they are part of is thus completely invisible. It's only when the
> > observer steps inside the system, getting into the loop, that they
> > suddenly become aware of the whole other world of relationships it
> > represents. We see this over and over, that systems develop in secret
> > from us and then our awareness of them bursts into our attention. I
> > think that's a direct effect of systems developing as truly independent
> > cells of relationships.
>
> I can see the picture you're drawing and agree in the abstract. But, I
> still don't know how this applies to the dilemma. Sorry for being dense.

It does seem to take getting used to, but a large portion of the complex
systems of interest are of that type. They're systems as 'things', organized
around continually evolving networks of relations that are original to them,
and they follow a developmental history of growth and decay as if they were
organisms. Such a system's network of relations is hidden because it is
largely self-referential, i.e. internalized. As when stepping into an
unfamiliar conversation, you suddenly begin to see the complex relationships.
That nature is full of these kinds of systems, and doesn't bother to provide
bodies for them, is one of the curious surprises. :-)

> > Wouldn't it be nice to have a heuristics machine to convert pure syntax
> > into meaningful gobbledygook for any particular inside view...!
>
> LoL! Thanks for that joke. It's the first laugh I've had today.
>
> > I'm not sure how, but this might connect with the structural dilemma
> > that nature's design is deceptive because we all think the world we see
> > is the one that's there, and we all see different ones, partly because
> > of the inverse power law distributions of network connections as I was
> > describing to Bill.
>
> Yes, it certainly is related. Any control has a "surface" of levers and
> measures by which it manipulates the controlled system. That surface is
> limited to and a function of the controller. It's the controller's "world
> view". And to the extent that the controller consists of humans or human
> artifacts, it embodies the world views of those humans. And those humans
> _do_ tend to think that their world view is _true_. And when world views
> conflict, the opportunity is there to revise the conflicting world views;
> but, that opportunity is often lost on those who hold the world view.
> This is especially acute where the world views are fossilized into laws,
> rules, or policy. And it's worsened by the design-by-committee feature of
> most policy-setting bodies. Indeed, the world view embodied by a policy
> is probably _not_ held by any of the members of the committee that created
> the policy, making the policy even more removed from reality than the
> original world views of the humans on the committee.
>
> But, I don't think this point is critical to finding and using a
> hypothetical IPL between the extent and objectives of a controller
> (policy + enforcer). It might become critical in the resolution of any
> conflict that IPL would present with an ethical standing, however. And if
> that's your point, then I'm starting to get it. Thanks for sticking with
> it.
Thanks. Well, your 'controller' is based on a model of some sort, one that
will be 'wrong' from the start in many ways, and you want it to operate in a
real and changing world of much higher complexity than the original model.
From what I can tell, your approach sounds like a sophisticated way to
improve the controller's 'efficiency'. Maybe a key step toward addressing
the larger problem is to get the 'controller' to ask questions, to become a
learning controller, maybe to recognize unfamiliar situations, or even to
recognize the presence of other emerging systems and things, perhaps... (a
minimal sketch of that idea follows below).

Cheers,
Phil
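The "learning controller" suggestion can be sketched minimally by assuming
that "recognizing unfamiliar situations" is read as simple novelty detection
against the controller's own running model of what it has seen. The Welford
running statistics and the 3-sigma threshold are illustrative choices, not
anything proposed in the thread.

# Minimal sketch of a learning controller: keep a running model of observed
# readings, act only on familiar ones, and flag unfamiliar ones as questions
# rather than forcing them through the old model.
import math

class LearningController:
    def __init__(self, threshold: float = 3.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0              # running sum of squared deviations (Welford)
        self.threshold = threshold

    def observe(self, x: float) -> str:
        if self.n >= 5:
            std = math.sqrt(self.m2 / (self.n - 1))
            if std > 0 and abs(x - self.mean) / std > self.threshold:
                return f"unfamiliar reading {x:.1f}: ask, don't act"
        # Familiar (or not enough history yet): fold it into the model.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return f"familiar reading {x:.1f}: act as usual"

ctl = LearningController()
for reading in [10.1, 9.8, 10.3, 9.9, 10.0, 10.2, 25.0]:
    print(ctl.observe(reading))

The "ask, don't act" branch is where the questions would go; everything else
is the ordinary control loop.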