Glen -
Cherry-picking here...

> I actually distrust consensus and convergence, equally, I think. This
> is for the same reason I think the "singularity" concept is suspicious.
> It implies a closedness that I don't believe in. The universe seems
> open to me, which implies that any process (including explanation)
> _wanders_ significantly. I will admit constraints, though. Although
> any process may wander, it may do so within some hard boundaries ...
> like a sandwiched series that forever oscillates without actually
> converging.

I recently started a project which involves Quorum Sensing in the
cell-cell signaling sense. I presume your beef with consensus
(especially) and convergence (maybe less so) is the implied finality or
totality of it? I presume you will agree that there are 'degrees' of
recruitment that might lead to a quorum (and in the extreme a
consensus?), and that entrainment is a form of recruitment?

> For most of my career, I've tried to explain to my fellow simulants
> that any particular snapshot of a modeling effort is not very useful.
> I.e. any particular _model_ is not very useful (with an
> anti-authoritarian prejudice against the much-abused "all models are
> wrong, some are useful" aphorism -- I actually think that aphorism has
> done more damage to the proper way to use simulation than any other
> concept).

This is an aside I'd like to try to untangle a little. The aphorism in
question is, for me, an antidote to two extremes. One extreme, of
course, is the one where some people harp on the problems in a model to
the extent of not appreciating that it is by definition nothing *but* a
model: the finger pointing at the moon, the map which is not the
territory (aphorismia ad nauseam)... The other extreme is the one where
some (other) people imagine that (usually their own) models are
reality, or more than "useful" in a given context (some kind of blessed
state as a chosen or special model?).
I assume you are not quibbling with those two uses of the aphorism, but
more (maybe) with the nature of aphorisms (similar to the problems with
models... trying to claim a universal truth?). A simple summary might
just be an explanation of how you think this aphorism has "done more
harm..." I'm sure it *has* done harm, but I'm not sure what it is you
refer to.

> But the whole modeling and simulation (M&S) effort (trajectory or
> bundle of trajectories, given model forking) _is_ useful.

I worked on a project back around the turn of the century (I find that
phrase entertaining, especially now that it is as relevant to me as it
was to my Grandfather as he trundled off to WWI in 1918) which was a
"composable simulation" framework. The prime example we used it on was
the Future Combat System (misbegotten/ill-fated? DoD project), where we
applied something we dubbed "generative analysis" to explore a subset
of model-space via iterated simulation, using a learning-classifier
system (roughly a GA) to speciate and test the results. In this case,
there was a single meta-model in which Red Team and Blue Team forces
could be conjured with a wide range of features within the meta-model's
trade-space (range, speed, firepower, armor, comms, sensors, etc.) and
pitted against each other. Specifically, we would set up hundreds of
Blue-Team compositions and run them against a fixed Red-Team
composition and initial conditions, etc., obtain a multivariate
effectivity function (mission success, Blue loss, Red loss, residual
capability, etc.) to use to evaluate and spawn a new population, and so
on.

I assume this is a (constrained?) variant of what you are calling
"model-forking"? "Trajectories" would seem to relate to varying initial
conditions or boundary constraints to generate ensembles of results
from a "single?" model.

> In other words, iteration is "doing it again" and recursion is "doing
> it to the result of the last time you did it", making recursion more
> specific.
> Hence, recursion targets a more closed type chain.
>
> This is important to me because my work is multi-formalism, the model
> produced in one iteration can be wildly different from the model
> produced in prior or subsequent iterations, different in generating
> structure and dynamics as well as phenomenal attributes.
>
> Hence I like the concept of filter explanations better than that of
> recursive explanations, where the filter can co-evolve with the stuff
> being filtered.

I'd like to catch up on your definitions here (in this thread or our
offline parallel one)... maybe others are curious as well about what
you mean by multi-formalism and these evolving models. (My example with
GA-designed ensembles of meta-model parameters might be roughly the
same thing?)

> Thanks. I've added it to my Powell's wishlist.

You lucky boy, to live within a drive or rail of Powells on demand. My
wife and I seem to spend up to half our time there when we visit the
area. The independents are going slowly but surely. Powells is a
bastion. I wonder if Dymocks "down under" is still viable? I believe we
have a few Ozzies (and maybe a Kiwi or two?) here... I know anecdotally
they are still alive, but with Amazon Worldwide and the changes(?) in
the Commonwealth's practice around protecting its own publishing
industry, I suspect their niche has changed radically?

- Steve

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
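The "generative analysis" loop Steve describes (evolve Blue-Team compositions against a fixed Red Team, score each with an effectivity function, keep the fittest, spawn a new population) can be sketched roughly as follows. Everything here is a hypothetical toy: the four traits are a truncation of the real trade-space, and the scalar `effectivity` is an invented stand-in for the real multivariate function.

```python
import random

def effectivity(comp):
    """Toy stand-in for the multivariate effectivity function (mission
    success, Blue loss, Red loss, residual capability) collapsed to one
    scalar. Entirely invented trade-offs against a fixed Red Team."""
    range_, speed, firepower, armor = comp
    # firepower and reach help outright; armor helps most at moderate speed
    return firepower * range_ + 0.5 * armor * (1.0 - abs(speed - 0.6))

def evolve(pop_size=100, generations=50, sigma=0.05):
    """Evolve Blue-Team compositions (4 normalized traits: range, speed,
    firepower, armor) against the fixed opponent: evaluate, keep the
    fittest half, refill with mutated copies of survivors."""
    random.seed(0)
    pop = [[random.random() for _ in range(4)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=effectivity, reverse=True)
        survivors = pop[: pop_size // 2]
        children = [[min(1.0, max(0.0, t + random.gauss(0.0, sigma)))
                     for t in random.choice(survivors)]
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return max(pop, key=effectivity)

best = evolve()   # the fittest composition found
```

A real learning-classifier system speciates rule populations rather than hill-climbing a single one, and the real effectivity function was vector-valued, so treat this only as the shape of the loop, not the method itself.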
Steve Smith wrote at 04/11/2013 11:24 AM:
> I recently started a project which involves Quorum Sensing in the
> cell-cell signaling sense. I presume your beef with consensus
> (especially) and convergence (maybe less so) is the implied finality
> or totality of it? I presume you will agree that there are 'degrees'
> of recruitment that might lead to a quorum (and in the extreme a
> consensus?) and in entrainment as a form of recruitment?

Yes, my problem with both consensus and convergence is the downward
causation, or more specifically, the extent to which that forcing
structure can or cannot be escaped.

With relatively independent things like zero-intelligence agents, this
isn't as much a problem (I think) because the resistance to flip from
one behavior (consensus participation, exploitation) to another
behavior (exploration) should (ideally) be low, or at least bounded.
But with intelligent agents (like humans), any behavior that obtains
can be positively reinforced to a huge degree, perhaps infinitely. The
little, programmable homunculus inside your head becomes specialized
and stuck in its ways. That makes the "escape velocity" from a
consensus much more difficult. That's also part of my suspicion of
thought and preference for action.

> I assume you are not quibbling with those two uses of the aphorism?
> But more (maybe) with the nature of aphorisms (similar to the problems
> with models... trying to claim a universal truth?).

Right. The aphorism helps keep us out of those two rat holes ("my
model's great" vs. "your model sucks"). But the space has higher
dimensionality than that spectrum. The other, more important rat hole
is a general push for The One True Model, the idea(l) that a most
accurate model does/can exist and that we (whatever "we" might mean)
can find it, characterize it, implement it, etc.

> A simple summary might just be an explanation of how you think this
> aphorism has "done more harm..." ? I'm sure it *has* done harm, but
> I'm not sure what it is you refer to?
When I hear "all models are wrong, some are useful", I hear "therefore,
we need to keep modeling to make better models". And that's the
problem. I have the same problem with people who think there is only 1
best way to _think_.

Although I sound cynical when I use the aphorism "the problem with
communication is the illusion that it exists", I'm not being cynical at
all. It's actually a positive statement that argues _for_ variety and
diversity in thought ... against consensus, pro exploration. To me,
this is why the "Borg" is such a great enemy. To discover I think
(nearly) exactly like another person would be the best argument for
suicide I've ever heard. To discover the fantastic ways in which others
do not think like me borders on the very purpose of life.

Further, I don't think evolution would work without this balance
between the extent to which internal models mismatch reality vs. the
extent to which they match reality. I.e. to be wrong is beautiful and
interesting. To be right is useless and boring. Therefore, phrases like
"all models are wrong, some are useful" are a kind of crypto-idealism:
a sneaky way to get us to converge and, thereafter, be entrapped by the
convergence. Even if the limit point doesn't exist in itself, such
crypto-idealism can trap us in an ever-shrinking _cone_ of constraints.

> Specifically, we would set up hundreds of Blue-Team compositions and
> run them against a fixed Red-Team composition and initial conditions,
> etc., obtain a multivariate effectivity function (mission success,
> Blue loss, Red loss, residual capability, etc.) to use to evaluate and
> spawn a new population, etc.
>
> I assume this is a (constrained?) variant of what you are calling
> "model-forking"? "Trajectories" would seem to relate to varying
> initial conditions or boundary constraints to generate ensembles of
> results from a "single?" model?

That's close, but not quite what I intended.
I read your example as "automatic modeling", which is awesome, and I'm
sad that it faded away. But model forking, to me, means the
responsibility (along with all 4 causes: efficient, material, formal,
and final) may change with the changing of hands. The two extremes
range from _abuse_, where the model is used for its side effects or for
purposes it was never intended, to an _attempt_ to carry on an effort
set out by the original modeler. There's a whole spectrum in between.

The main difference I see between what I'm trying/failing to describe
and automatic modeling lies in the [im|ex]plicit nature of the
objective function(s) and the evolution of that(those) objective
function(s), if they evolve at all. I'm also implying a full spectrum
of asynchrony between forks: in time, space, purpose, use cases, etc.

> I'd like to catch up on your definitions here (in this thread or our
> offline parallel one)... maybe others are curious as well by what you
> mean by multi-formalism and these evolving models (My example with
> GA-designed ensembles of meta-model parameters might be the same thing
> roughly?).

I basically mean the use of different mechanisms for the internals and
interactions of the various elements involved. The most straightforward
practical example is hybrid systems, where a discrete module must
interact with a continuous module. But there are plenty of other
practical examples, as well as metaphysical ones: how do you get an
atheistic Hindu and a young-earth Creationist to cooperate toward an
objective?

> You lucky boy, to live within a drive or rail to Powells on demand. My
> wife and I spend up to half our time there it seems when we visit the
> area. The independents are going slowly but surely. Powells is a
> bastion.

Yeah, Amazon's prices are always lower. But I pay a little extra if I
meet employees or owners face to face.

-- =><= glen e. p.
ropella
Broadcast dead revolution don't pay
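Glen's straightforward hybrid-systems case (a discrete module that must interact with a continuous module) can be made concrete with a deliberately tiny example of my own choosing, a thermostat: the discrete two-state controller and the continuous temperature dynamics each drive the other, rather than one merely forcing the other. All constants are invented.

```python
def simulate_thermostat(t_end=10.0, dt=0.01):
    """Minimal hybrid simulation: a continuous module (room temperature,
    explicit-Euler integration) coupled to a discrete module (a two-state
    heater with hysteresis). Each module's state is the other's input."""
    temp, heater_on = 15.0, False      # continuous state, discrete state
    trace, t = [], 0.0
    while t < t_end:
        # discrete module: transitions triggered by the continuous state
        if temp < 19.0:
            heater_on = True
        elif temp > 21.0:
            heater_on = False
        # continuous module: dT/dt = heating - leakage to a 10-degree exterior
        dT = (8.0 if heater_on else 0.0) - 0.5 * (temp - 10.0)
        temp += dT * dt
        t += dt
        trace.append((t, temp, heater_on))
    return trace

trace = simulate_thermostat()   # temperature limit-cycles between ~19 and ~21
```

In the multi-formalism sense discussed above, the interesting cases are precisely those where neither side reduces to a forcing function for the other; here the coupling is bidirectional but still trivially tight.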
Glen -
> Yes, my problem with both consensus and convergence is the downward
> causation, or more specifically, the extent to which that forcing
> structure can or cannot be escaped.
>
> With relatively independent things like zero-intelligence agents, this
> isn't as much a problem (I think) because the resistance to flip from
> one behavior (consensus participation, exploitation) to another
> behavior (exploration) should (ideally) be low, or at least bounded.
>
> But with intelligent agents (like humans), any behavior that obtains
> can be positively reinforced to a huge degree, perhaps infinitely. The
> little, programmable homunculus inside your head becomes specialized
> and stuck in its ways. That makes the "escape velocity" from a
> consensus much more difficult.

a social/political/spiritual one or a technical one. I think I hear you
saying that consensus and convergence *are* in fact real phenomena in
human understanding, but that you *do* think they are not good for the
individual or the group (unless it is the Borg, as you reference
later?).

I've been heard using the term Homo Hiveus to describe one end-state
that humans may (socially) evolve to... I'm not convinced it is
necessary or even possible... but I *do* feel it would be a tragic loss
if humanity became one big lock-step colony (or a set of
competing/mutually-ignoring? colonies). I think we have a
counter-example to this in societies such as the Japanese, who (from my
Western/American perspective) seem to be a lot more predisposed
(culturally?) to give over to collective behaviour. The fascists of
early last century seemed prone to this (in a top-down way?), and to
some extent the collectivists (socialism, communism), and for the most
part all of those have (mostly) failed to capture the hearts and minds
of the members of the collective. Extremist fanatics might be the
closest to this? Individuals being "captured" by a small set of very
powerful and shared memes?
Moonies, Muslim Bro'hood(???), Taliban(???), Aryan Nation, Extreme
Right Christians.

> That's also part of my suspicion of thought and preference for action.

And how do you feel about thoughtful action and actionable thought?
<grin>

>> A simple summary might just be an explanation of how you think this
>> aphorism has "done more harm..." ? I'm sure it *has* done harm, but
>> I'm not sure what it is you refer to?
> When I hear "all models are wrong, some are useful", I hear
> "therefore, we need to keep modeling to make better models". And
> that's the problem. I have the same problem with people who think
> there is only 1 best way to _think_.

And I hear... "the map is not the territory": if you really want to see
what is in that area labeled "there be dragons here", you need to go
visit, and don't expect to come home having bagged a dragon, but you
might get eaten!

> Although I sound cynical when I use the aphorism "the problem with
> communication is the illusion that it exists", I'm not being cynical
> at all. It's actually a positive statement that argues _for_ variety
> and diversity in thought ... against consensus, pro exploration.

I think aphorisms are at their best when they are offering cynicism or
Pollyannaism... in the latter case I think we call them platitudes?

> To me, this is why the "Borg" is such a great enemy. To discover I
> think (nearly) exactly like another person would be the best argument
> for suicide I've ever heard. To discover the fantastic ways in which
> others do not think like me borders on the very purpose of life.

I agree perfectly (now go ride off a cliff!) <smirk>

> Further, I don't think evolution would work without this balance
> between the extent to which internal models mismatch reality vs. the
> extent to which they match reality. I.e. to be wrong is beautiful and
> interesting. To be right is useless and boring.

That explains a lot about my severe depression and feelings of
uselessness!
> Therefore, phrases like "all models are wrong, some are useful" are a
> kind of crypto-idealism: a sneaky way to get us to converge and,
> thereafter, be entrapped by the convergence. Even if the limit point
> doesn't exist in itself, such crypto-idealism can trap us in an
> ever-shrinking _cone_ of constraints.

I like your term, Crypto-Idealism... now, have you ever considered that
you might be paranoid? <grin>

> That's close, but not quite what I intended. I read your example as
> "automatic modeling", which is awesome and I'm sad that it faded away.

Or more to the point, "automated model exploration"? I'm still
interested in re-igniting it if ever I find the right
project/collaborators. My part was more on the analytics side of trying
to understand the *results* of these ensemble runs: high-dimensional
correlation over a (sometimes rough) multidimensional landscape. But I
also understand the ABM, GA, and "design of experiments" aspects well
enough to participate or to lead others with fresh skills.

> But model forking, to me, means the responsibility (along with all 4
> causes: efficient, material, formal, and final) may change with the
> changing of hands. The two extremes range from _abuse_, where the
> model is used for its side effects or for purposes it was never
> intended, to an _attempt_ to carry on an effort set out by the
> original modeler. There's a whole spectrum in between.

Aha! Thanks for the disambiguation. I don't think we are converging, as
that would be useless and boring. We are only converging enough that I
can agree that "communication is an illusion"!

> The main difference I see between what I'm trying/failing to describe
> and automatic modeling lies in the [im|ex]plicit nature of the
> objective function(s) and the evolution of that(those) objective
> function(s), if they evolve at all.
>
> I'm also implying a full spectrum of asynchrony between forks, in
> time, space, purpose, use cases, etc.

Ah yes...
In fact, the "automated modeling" project was a vague attempt to rein
in and exploit what already happens: build an effective (for some
purposes) model, and others will co-opt it and use it (modified or not)
for (more or less) different things to (more or less) good effect. The
results will probably never be collectively compiled, and if they are,
the biggest thing likely to be discovered is big holes in its
use/application.

>> I'd like to catch up on your definitions here (in this thread or our
>> offline parallel one)... maybe others are curious as well by what you
>> mean by multi-formalism and these evolving models (My example with
>> GA-designed ensembles of meta-model parameters might be the same
>> thing roughly?).
> I basically mean the use of different mechanisms for the internals and
> interactions of the various elements involved. The most
> straightforward practical example is hybrid systems, where a discrete
> module must interact with a continuous module.

In my experience, this is usually limited to using one as a forcing
function for the other, with the "other" being the dominant one?
Fine-grain discrete informing a lower-resolution continuous? I'm not
current in the field.

> But there are plenty of other practical examples, as well as
> metaphysical ones: how do you get an atheistic Hindu and a young-earth
> Creationist to cooperate toward an objective?

This sounds like a Sphinx-worthy riddle or the setup for a Steven
Wright joke!

>> You lucky boy, to live within a drive or rail of Powells on demand.
>> My wife and I spend up to half our time there it seems when we visit
>> the area. The independents are going slowly but surely. Powells is a
>> bastion.
> Yeah, Amazon's prices are always lower. But I pay a little extra if I
> meet employees or owners face to face.

I don't go to bookstores to buy books; I go there to browse them in the
context of other booklovers.
I *buy* books at bookstores to make sure they are there the next time I
want to go browse (or people-watch). Powells is by far my favorite
since Cody's closed. Denver's Tattered Cover (downtown) is worthy as
well, and I'm sad that ABQ's Bound to be Read failed. Borders and their
ilk managed to capture the superficial feel of the best Indies, but it
is like going to Chipotle expecting a Tomasita's experience (a
reference only for SFe residents and visitors).

- Steve
In reply to this post by glen ropella
In an economics class we watched a video of Malcolm Gladwell at a TED
talk relating the commercial history of spaghetti sauces, demonstrating
how an individual in the industry (I forget his name) changed the
reigning paradigm from "finding the perfect spaghetti sauce" to "seeing
what areas of preference of taste, texture, and so on people tend to
cluster around, and then creating multiple varieties that offer the
consumer a choice". We then watched a different TED talk about how
choice can be paralyzing; still, it goes to show that there is at least
one case where a non-convergence strategy was temporarily or partially
more successful than the convergence strategy.
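The sauce story is, at bottom, a clustering argument: instead of optimizing one recipe toward a single global average of taste, find the clusters people actually occupy and serve one variety per cluster. A minimal sketch with made-up 1-D preference data (the camp names, numbers, and deterministic initialization are all invented for illustration):

```python
import random

random.seed(0)

def kmeans(points, k, iters=20):
    """Plain 1-D k-means: find k cluster centers in 'preference' data
    rather than a single perfect average. Centers start evenly spread
    across the data range (deterministic, so results are repeatable)."""
    lo, hi = min(points), max(points)
    centers = [lo + (hi - lo) * (j + 0.5) / k for j in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: abs(p - centers[j]))
            clusters[nearest].append(p)
        # move each center to its cluster mean (keep it if cluster is empty)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# three synthetic 'taste camps' (e.g. smooth / regular / extra-chunky),
# preferences on a 0-1 scale -- entirely made-up data
prefs = ([random.gauss(0.1, 0.02) for _ in range(50)] +
         [random.gauss(0.5, 0.02) for _ in range(50)] +
         [random.gauss(0.9, 0.02) for _ in range(50)])
centers = kmeans(prefs, 3)   # one center per camp, not one global optimum
```

The punchline is that the three returned centers sit near the three camps rather than collapsing toward one compromise recipe.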
-Arlo James Barnes
In reply to this post by Steve Smith
Uh .... The Village Pragmatist here: remember, in Pragmatist philosophy,
consensus or convergence is not the goal. It's the outcome that arises
from people attempting to discover the Truth. It really is quite
paradoxical. But just bear in mind the 100 years of chemistry by which
we came by the periodic table. Peirce was saying: if people behave like
those guys did, the truth is where you will eventually get. What he
doesn't say -- but which I think is implicit -- is that The Truth is a
mythology that everybody has to believe in to get them to play the game
right, but, given Descartes, its discovery is unattainable.

-----Original Message-----
From: Friam [mailto:[hidden email]] On Behalf Of Steve Smith
Sent: Friday, April 12, 2013 12:25 PM
To: The Friday Morning Applied Complexity Coffee Group
Subject: Re: [FRIAM] scientific evidence
In reply to this post by Steve Smith
Steve Smith wrote at 04/12/2013 11:25 AM:
> I've been heard using the term Homo Hiveus to describe one end-state
> that humans may (socially) evolve to... I'm not convinced it is
> necessary or even possible... but I *do* feel it would be a tragic
> loss if humanity becomes one big lock-step colony (or set of
> competing/mutually-ignoring? colonies). I think we have a
> counter-example to this in societies such as the Japanese who (from my
> Western/American perspective) seem to be a lot more predisposed
> (culturally?) to give over to collective behaviour. The fascists of
> early last century seemed prone to this (in a top-down way?), and to
> some extent the collectivists (socialism, communism), and for the most
> part all of those have (mostly) failed to capture the hearts and minds
> of the members of the collective.

Yes, I agree. It may simply be a matter of population density. As long
as we have the rural communities and inter-deme transmission, we can
probably resist total over-convergence.

> Extremist fanatics might be the closest to this? Individuals being
> "captured" by a small set of very powerful and shared memes? Moonies,
> Muslim Bro'hood(???), Taliban(???), Aryan Nation, Extreme Right
> Christians.

But examples like these demonstrate that even if we won't end up in
overwhelming over-convergence, there are isolated pockets where people
are stuck in the ideological gravity wells of the consensus surrounding
them ... like ants in an antlion pit.

> And how do you feel about thoughtful action and actionable thought?
> <grin>

At worst they're myths. At best, they are post-hoc justifications or
rationalizations. Now concepts like "mindfulness" are simply misnomers,
because they're really about suspending your inner narrative and paying
attention here and now. If there's a mind I approve of at all, it's the
immediate mind, tightly coupled with the environment, as opposed to
some hysterically deep hidden Markov homunculus.

> And I hear...
"the map is not the territory", if you really want to see > what is in that area labeled "there be dragons here", you need to go > visit, and don't expect to come home having bagged a dragon, but you > might get eaten! The aphorism about models that maps directly to "the map is not the territory" is "all models are always wrong". No matter which way you cut the relationship between a model and its referent, it's false. The value in modeling lies in the constellation of models, not any one model. > I agree perfectly (now go ride off a cliff!) <smirk> Ha! No offense, but just because you tell me something doesn't mean it's true. I think one could drive a very large truck through the volumes with which we disagree. > I like your term - Crypto Idealism... now, have you ever considered that > you might be paranoid? <grin> Every minute of every day. -- =><= glen e. p. ropella I'm living in a room without any view ============================================================ FRIAM Applied Complexity Group listserv Meets Fridays 9a-11:30 at cafe at St. John's College to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com |
Glen -
>> And how do you feel about thoughtful action and actionable thought <grin>?
> At worst they're myths. At best, they are post-hoc justifications or
> rationalizations.

Boy, when I first twigged to this (even a little) it freaked my ego out a LOT.

>> And I hear... "the map is not the territory", if you really want to see
>> what is in that area labeled "there be dragons here", you need to go
>> visit, and don't expect to come home having bagged a dragon, but you
>> might get eaten!
> The aphorism about models that maps directly to "the map is not the
> territory" is "all models are always wrong".

I guess I thought that *was* the *main* point of "all models are wrong, some are useful"...

>> I agree perfectly (now go ride off a cliff!) <smirk>
> Ha! No offense, but just because you tell me something doesn't mean
> it's true. I think one could drive a very large truck through the
> volumes with which we disagree.

I'm sure... including the idea that recognizing someone else as thinking alike with me would be a justification for suicide? Or would it be a murder suicide? Or a domino suicide?

>> I like your term - Crypto Idealism... now, have you ever considered that
>> you might be paranoid? <grin>
> Every minute of every day.

Another place we differ. I only experience paranoia when people point out that they disagree with me!

- Steve
Steve Smith wrote at 04/12/2013 01:42 PM:
>> The aphorism about models that maps directly to "the map is not the
>> territory" is "all models are always wrong".
> I guess I thought that *was* the *main* point of "all models are wrong,
> some are useful"...

Maybe that was the _intention_. But the pragmatic result is modeling opportunism. "Yeah, sure my model's wrong. So what. It's useful in this context." Now if that were _actually_ the case, then I'd be OK with it. But what usually turns out is that it's not useful in that context. It's only useful by that person in that context (or whatever other context that person decides to exploit by vaguely mapping the previous context to the next context). In short, that aphorism "all models are wrong, some are useful" opened modeling up to snake oil salesmen.

You can see this because most models do not lay out what use cases they address at all. They are almost always inextricably tied to the modeler, not the context. However, if you look at the modeling trajectory, the revision history, or the suite of models surrounding any one model, you can begin to grok the context, the use cases, independent of the modeler(s). I.e. no individual model is useful. Only model constellations are useful.

--
=><= glen e. p. ropella
I have gazed beyond today
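The "model constellations" point lends itself to a toy numerical illustration. This sketch is mine, not anything proposed in the thread: three crude models of sin(x), each badly wrong somewhere on the domain, but a constellation that knows which regime each member fits (the kind of use-case information a revision history or model suite accumulates) stays close everywhere.

```python
import math

# Stand-in referent: the process the modelers are chasing.
def referent(x):
    return math.sin(x)

# Three individually "wrong" models -- each a crude local caricature.
models = {
    "near_zero": lambda x: x,            # small-angle approximation
    "near_peak": lambda x: 1.0,          # flat approximation near pi/2
    "near_pi":   lambda x: math.pi - x,  # small-angle approximation about pi
}

xs = [i * math.pi / 100 for i in range(101)]  # sample the domain [0, pi]

def max_err(f):
    return max(abs(f(x) - referent(x)) for x in xs)

# Any single model is badly wrong somewhere on the domain...
single_errs = {name: max_err(m) for name, m in models.items()}

# ...but the constellation -- cheating here by picking the best-fitting
# member per point, which stands in for the context knowledge a modeling
# trajectory records -- is never far off anywhere.
def constellation(x):
    return min((m(x) for m in models.values()),
               key=lambda y: abs(y - referent(x)))

constellation_err = max_err(constellation)
```

The toy matches the prose: asking "is this model useful?" of any single entry gives a poor answer; the useful object is the suite plus the record of where each member applies.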
In reply to this post by Arlo Barnes
The issue keeps coming up. Perhaps I'm just sensitive to it, since my S.O. is (finally!) getting her B.S. in nursing at a Catholic university ... because she works for a Catholic hospital. And I can't think of a better example of "applied complexity". Here's a recent interview on the Cancer Network:

ONS: Understanding Spirituality and How It Can Be Used to Help Patients
http://www.cancernetwork.com/conference-reports/ons2013/content/article/10165/2139629

And here's a recent interview by Sam Harris of Ronald A. Howard:

The Straight Path
http://www.samharris.org/blog/item/the-path-of-honesty

The irritating question is whether the Truth(TM) is _always_ in the best interests of the organism (not the species, necessarily, but the individual)? Even if I set aside my objections to the existence of a Grand Unified Truth and allow it for the sake of argument, the question retains its meaning and power. What are my responsibilities as I escort my mom into death? Or, were I a nurse, especially at something like a Catholic hospital, what would be my responsibilities as I escorted a Catholic into death? How about a Jew? Or an atheist?

The same could be said of children, I suppose. When/how do you explain to your child that there is no Santa Claus? When/how do you explain to your child that there is no God and those who say there is are simply wrong, but perhaps not always wrong in a terrible way? And, most importantly, how do you explain to people that you reject treatments like homeopathy, chiropractic, and acupuncture because there's no evidence to support their efficacy?

A related issue surrounds DNR orders (Do Not Resuscitate). I've _heard_ that most doctors sign them because they're aware of the relative ineffectiveness and physical trauma associated with techniques like CPR and defibrillation. Yet, most nurses, EMTs, firemen, life guards, local CERT traine[r|e]s insist on them.
I don't have trustworthy data sources for the efficacy or side effects of resuscitation methods. So, I can't say which position is more sound. And I suspect doctors, like cops, are biased because of their occupation. But the question is, do the data even matter? Is a particular life _always_ so sacred to some particular other that the efficacy and side effects simply do not matter? That's related to things like acupuncture by the argument I often hear that "it can't hurt, so if it's even a little bit possible it'll help, then why not do it?"

Arlo Barnes wrote at 04/05/2013 08:42 PM:
> The first is in response to 'would I like people to burst my
> placebo/nocebo bubble?': the latest issue of Science magazine has an
> article on recommendations by the American College of Medicine of
> whether people should be told without being asked that they have alleles
> that indicate an elevated risk of disease when looking at genes related
> to common diseases (mostly cancers and tissue defects) as a course of a
> full-genome analysis for another disease/syndrome/disorder (pointing out
> that people may already be in an emotionally fragile state from said
> disease). Link here
> <http://www.sciencemag.org/content/339/6127/1507.full?sid=7561e634-f578-431a-8299-e86ef03891f4>.

--
=><= glen e. p. ropella
I learned how to live true and somebody blew up
A better question might be: why are we still teaching them these dishonest little fairy tales in the first place, which we then have to un-teach later?
--Doug

On Thu, Apr 25, 2013 at 10:29 AM, glen <[hidden email]> wrote:
Douglas Roberts wrote at 04/25/2013 09:44 AM:
> A better question might be: why are we still teaching them these
> dishonest little fairy tales in the first place, which we then have to
> un-teach later?

I admit that's a more philosophical question, but not a better one. It's not clear how answering that question will help address the applied complexity problem of handling the mature organism, where these beliefs are deeply rooted and may well affect their physiology in some way. Harris' questions get to the root of the applied complexity problem. Do you tell the whole truth and nothing but the truth to a dying old person? If so, is that medically beneficial or detrimental?

--
=><= glen e. p. ropella
Man alive the jive and lyrics,
Unrelated to the main topic here, but all the talk of DNR et al reminded me of this article earlier this week - http://www.bbc.co.uk/news/magazine-22154552 . Hmmm.

On Thu, Apr 25, 2013 at 10:38 PM, glen <[hidden email]> wrote:

Douglas Roberts wrote at 04/25/2013 09:44 AM:
In reply to this post by glen ropella
The intent was to produce a pragmatic perspective, not a philosophical one. By avoiding the telling of escapist fantasy-world fairy tales in the first place, there will be less untruth to deal with at later stages in life.

--Doug

On Thu, Apr 25, 2013 at 11:08 AM, glen <[hidden email]> wrote:

Douglas Roberts wrote at 04/25/2013 09:44 AM:

Doug Roberts
[hidden email]
Douglas Roberts wrote at 04/25/2013 10:16 AM:
> The intent was to produce a pragmatic perspective, not a philosophical
> one. By avoiding the telling of escapist fantasy-world fairy tales in
> the first place, there will be less untruth to deal with at later stages
> in life.

You're talking about a manipulation that might take generations to realize an effect. That's not very pragmatic. A pragmatic perspective is to look at the population we have right now and try to design our manipulation based on that population and whatever evidence we have now. If and when we can tease out some local (temporally and spatially) cause-effect relationships, then we can begin extrapolating to 30-80 years out, like you want to do.

So, the question remains, is there a medical benefit to bursting the beliefs of a patient? And more refined, does the condition of the patient matter? E.g. I can see how bursting my friend, who is getting acupuncture for her neck pain, might help her. But how about a 50 year old prostate cancer patient with a good prognosis? Versus a 98 year old emphysema patient?

--
=><= glen e. p. ropella
I had my arm around a sundial
In reply to this post by Siddharth-3
siddharth wrote at 04/25/2013 10:16 AM:
> Unrelated to the main topic here, but all the talk of DNR et al reminded
> me of this article earlier this week -
> http://www.bbc.co.uk/news/magazine-22154552 .
> Hmmm.

Thanks. That's definitely relevant. But the trouble with that article (and most, actually) is the purely positive results reported. Here's one that _seems_ more objective. A practical first step might be to push for more realistic portrayals of CPR in the media.

CPR: Less Effective Than You Might Think
http://www.intelihealth.com/IH/ihtIH/WSIHW000/35320/35323/372221.html?d=dmtHMSContent

> As opposed to many medical myths, researchers have reliable data concerning the
> success rates of CPR (without the use of automatic defibrillators) in a variety
> of settings:
>
>   2% to 30% effectiveness when administered outside of the hospital
>   6% to 15% for hospitalized patients
>   Less than 5% for elderly victims with multiple medical problems
>
> In June 1996, the New England Journal of Medicine published a study about the
> success rates of CPR as shown on the television medical shows "ER," "Chicago
> Hope" and "Rescue 911." According to the shows, CPR successfully revived the
> victim 75% of the time, more than double the most conservative real-life
> estimates. A more recent study published in 2009 suggested that the immediate
> success rate of CPR on television may be more realistic; however, discharge
> from the hospital and longer-term survival were rarely mentioned in TV dramas.
> In addition, while most CPR is actually performed on sick, older individuals
> with cardiac disease, most victims in television dramas are young and required
> CPR following trauma or a near-drowning -- conditions with the highest success
> rates.
>
> Finally, patients on TV shows usually die or fully recover. In real life, many
> of those who are revived by CPR wind up severely debilitated. One reason may be
> that, as noted by a study published in the January 2005 issue of the Journal of
> the American Medical Association, CPR is frequently not administered
> adequately, even when provided by trained ambulance personnel. Improved
> technique (including more frequent and rapid compressions, as recommended in
> the new guidelines) and use of automatic defibrillators could dramatically
> improve success rates.
>
> The low success rate of CPR may be an example of how a medical myth is
> perpetuated by the media because it is more appealing than the truth.
> Unfortunately, sugar-coating the concept of CPR leads to unrealistic
> expectations when a loved one requires CPR or is ill, and heroic measures are
> under consideration. A better understanding of when CPR may be effective and
> when it is highly unlikely to help will better serve everyone in the
> unfortunate event of catastrophic illness or injury. If you learn to administer
> CPR, you may save someone's life, so learning the proper technique is worth the
> effort. However, you should not expect the results you see on television.

--
=><= glen e. p. ropella
And I'm never gonna tell you why
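The quoted figures are easy to line up side by side. A back-of-the-envelope sketch (the numbers are just the ranges quoted above; nothing here is new data):

```python
# Expected immediate CPR "successes" per 100 attempts, using the ranges
# quoted in the article above (low bound, high bound).
rates = {
    "TV dramas (NEJM, 1996)":     (0.75, 0.75),
    "out-of-hospital":            (0.02, 0.30),
    "hospitalized patients":      (0.06, 0.15),
    "elderly, multiple problems": (0.00, 0.05),
}

for setting, (lo, hi) in rates.items():
    print(f"{setting:28s} {100 * lo:3.0f}-{100 * hi:3.0f} per 100 attempts")

# Even the most optimistic real-world bound is well under half the
# television portrayal.
best_real = max(hi for name, (lo, hi) in rates.items() if "TV" not in name)
tv_rate = rates["TV dramas (NEJM, 1996)"][1]
```

So even granting the rosiest quoted real-world rate (30 per 100), television overstates immediate success by a factor of 2.5, before longer-term survival is even considered.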
In reply to this post by glen ropella
On 4/25/13 11:36 AM, glen wrote:
> So, the question remains, is there a medical benefit to bursting the
> beliefs of a patient?

If the patient is asking for an opinion, and the nurse has no reason to think the patient's mental faculties are especially compromised, then I think it is best to engage honestly. It could distract them from their physical condition.

If the patient is asserting a bunch of random fundamentalist nutcase things about the nature of the universe and forcing the engagement of an otherwise uninterested professional, then that patient could be in the `burst' side of a side-by-side study. (In the case of being an employee of a hospital with a religious affiliation, this could be professionally risky.)

If it is not a patient, but a relative or friend, then perhaps the best thing to do is to direct the conversation to the shared journey together and not to a debate on the extent to which it will end.

Marcus
In reply to this post by glen ropella
'Realistic portrayals of CPR' such as this one by the British Heart Foundation?!?! - https://www.youtube.com/watch?v=ILxjxfB4zNk
*sigh* <stomps off into the sunset>

On Thu, Apr 25, 2013 at 11:34 PM, glen <[hidden email]> wrote:

siddharth wrote at 04/25/2013 10:16 AM:
In reply to this post by Siddharth-3
We have a Wounded Warrior at Sandia who died three times - once on the battlefield, once in the medevac helo, and once in the field hospital.
We have several WWs at Sandia - I wonder how they received the news of their injuries? Combat injuries are surely a possible research pool to answer the question of tell or hide. A surgeon from either Beth Israel or Mass General said that marathon bombing victims were so happy to be alive that their limb loss didn't faze them.

Ray Parks
From: siddharth [mailto:[hidden email]]
Sent: Thursday, April 25, 2013 11:16 AM Mountain Standard Time
To: The Friday Morning Applied Complexity Coffee Group <[hidden email]>
Subject: [EXTERNAL] Re: [FRIAM] bursting the placebo bubble
In reply to this post by Douglas Roberts-2
A question for Doug. Would you be so kind as to describe to me, in sufficient detail that I could mount a Pragmatic test, this god of his whose non-existence he so positively asserts?

A question for the person who speaks of escorting somebody into death. I confess, being old, I quite like the concept. But I guess we have to remember that such an escort is always a Judas steer.

Nick
Nicholas Thompson wrote at 04/25/2013 12:02 PM:
> A question for the person who speaks of escorting somebody into death.
> I confess, being old, I quite like the concept. But I guess we have to
> remember that such an escort is always a Judas steer.

I could not disagree with you more. We're _all_ going to die. You may not believe that, but it's true. The trick is whether the _cattle_ who are heading toward their slaughter are self-aware enough to understand that they're going to die and that they have some control over how it happens. That's nothing like a Judas steer.

--
=><= glen e. p. ropella
I'm seeing nowhere through the eyes of a lie