Friends and Colleagues,
Dr. Suarez at Trinity University was kind enough to invite me to meet Senator John Edwards last night after he gave a speech in San Antonio. During our brief chat with Senator Edwards, we pitched him on using simulations to help move people from poverty to the middle class in America.

We explained that it could build on the work done in system dynamics at MIT by Professor Forrester and many others relating to urban planning. Dr. Warren at London Business School also has an excellent approach for policymakers interested in fact-based, quantitative decision-making. We can also tap into the work done by Stephen Guerin of RedfishGroup and Keith Hunter at Carnegie Mellon University on using agent-based models in affordable housing searches.

Senator Edwards was intrigued. He connected us with his Finance Director, with whom I spent 10 minutes over drinks discussing simulation science. He gave me his email and suggested we talk in more detail.

I wanted to ask the FRIAM and System Dynamics simulation communities whether they have any thoughts on how we might use simulation to develop policies for getting rid of poverty in America.

If you are interested in helping us, please send us your ideas within the next five days and we'll put them into a brief for Senator Edwards and his staff. We'll credit all ideas in the brief, and I'll share the final brief with the community.

You never know what might come of this, but I could not pass up the opportunity to bend Senator Edwards' ear about the power of simulation, agent-based modeling, and system dynamics!

Warmly,
Justin Lyon

Skype: justinlyonandsimulation
Justin's profile: https://www.linkedin.com/e/fps/2815771/
Justin's business blog: http://justinlyonandsimulation.blogspot.com
Justin's personal blog: http://blog.360.yahoo.com/justin1028

I'm from Texas. What country are you from? http://www.freerepublic.com/focus/news/663131/posts
Poverty, like wealth, is partly a consequence of the free-market economy in the USA. A free market always means inequality and disparity. In Germany we have a social market economy and no real poverty (state-run support for the jobless, and benefit payments), but also a huge national deficit. I doubt that there are simple policies for "getting rid of poverty in America". That sounds stupid to me.

If the majority of people really moved from poverty to the middle class in America, firms and companies would no longer be able to pay them, and would either raise prices or relocate their business to other, cheaper countries, which would in turn increase the unemployment rate in the USA. You cannot consider the problem of poverty in isolation. It is connected to other classic problems such as unemployment, the national deficit, economic growth, and environmental pollution.

-J.
Getting a sympathetic politician's ear is good. Getting his or her staff's ear is even better. But a simulation isn't going to do anything about poverty. A simulation might model a particular argument (though one needs to pay attention to the ideological substrate of the code), it might be a persuasive rhetorical device, and it might serve as a point of reference to clarify debates, all good things. But it won't solve anything absent a sense of the problem and an argument for its solution, assuming that that sense and that argument have any chance of solving it, which in our current political climate is hard to hope for.

Why not send the politicos the NetLogo version of the part of Sugarscape that shows how the model generates an inequitable distribution of wealth, together with some words to explain it?

Mike

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at Mission Cafe
lectures, archives, unsubscribe, maps at http://www.friam.org
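Mike's Sugarscape pointer can be illustrated with something even smaller. The sketch below is not the NetLogo Sugarscape model itself, just a hypothetical toy exchange economy in Python: identical agents, symmetric random transfers, and still a markedly unequal wealth distribution emerges (measured here by the Gini coefficient).

```python
import random

def gini(wealth):
    """Gini coefficient: 0 = perfect equality, near 1 = one agent holds everything."""
    w = sorted(wealth)
    n, total = len(w), sum(w)
    cum = sum((i + 1) * x for i, x in enumerate(w))
    return (2 * cum) / (n * total) - (n + 1) / n

def run(n_agents=200, steps=1_000_000, seed=42):
    random.seed(seed)
    wealth = [100] * n_agents              # everyone starts identical
    for _ in range(steps):
        a, b = random.sample(range(n_agents), 2)
        if wealth[a] > 0:                  # a hands one unit to b
            wealth[a] -= 1
            wealth[b] += 1
    return wealth

w = run()
print(f"Gini at start: 0.00, Gini after trading: {gini(w):.2f}")
```

Total wealth is conserved and no agent has any advantage, yet the Gini rises well above zero: inequality here is a property of the exchange process, not of the agents. That is essentially the rhetorical punch of the Sugarscape wealth histogram.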
In reply to this post by Justin Lyon
Actually, I can think of a couple of ways this could be useful. First, there's often a huge difference between the actual consequences of a policy decision and its intended consequences. Second, Ross Perot's charts and graphs would be nothing, in terms of rhetorical effectiveness, compared to a well-animated agent-based simulation. I can't think of anything more specific at the moment, though, because I've got a bit of a cold, and for some reason those things always put a lag factor on my brain.

Probably the most obvious application would be to build an agent-based model of the American economy over the past 50 years: start it in 1950, get it to finish in 2005 with results that match reality, and then say, "Look, this thing models the American economy reasonably well, so let's plug in a few policy decisions and presidential budgets and see what their results are." (Of course, actually modeling the entire American economy would be a pretty daunting task; it might be more practical with local economies, I don't know.)

--
Giles Bowkett = Giles Goat Boy
http://www.gilesgoatboy.org/
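Giles's calibrate-then-intervene recipe can be shown in miniature. The following is a deliberately toy Python sketch, with an invented one-parameter growth model standing in for "the American economy" and synthetic data standing in for the 1950-2005 history: fit the parameter to the historical series, then rerun with a policy change.

```python
def simulate(g, years, y0=100.0, policy_boost=0.0):
    """Compound growth at rate g, plus an additive policy effect on the rate."""
    series = [y0]
    for _ in range(years):
        series.append(series[-1] * (1 + g + policy_boost))
    return series

# Pretend this is the observed 1950-2005 series (56 annual values).
observed = simulate(0.031, 55)

def sse(a, b):
    """Sum of squared errors between two series."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Calibrate: grid-search the growth rate that best reproduces "history".
best_g = min((k / 1000 for k in range(10, 60)),
             key=lambda g: sse(simulate(g, 55), observed))

# Counterfactual: what if a policy had added half a point of growth?
baseline = simulate(best_g, 55)
with_policy = simulate(best_g, 55, policy_boost=0.005)
print(f"fitted g = {best_g:.3f}")
print(f"2005 level: baseline {baseline[-1]:.0f}, with policy {with_policy[-1]:.0f}")
```

The real task is daunting precisely because a real model has thousands of parameters and the history is not generated by the model; here the fit is perfect by construction, which is exactly the trap to watch for.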
Yes, agent-based modeling is of course interesting and useful. Justin just sounded like a used-car salesman who wants to sell a car he does not even own. Simulations are certainly useful for studying the effects of policies. Whether these simulations match real systems, and whether they are already sophisticated enough to develop real-world policies, is another question.

The November 11 issue of Science (*) has a review article on using observed data patterns in multiple ways to distinguish the right sort of multi-agent system (MAS) or agent-based model. Basically, the authors say that patterns from field investigations and data from a natural system can be used to create and adjust a model of the system. They define patterns as "defining characteristics of a system" which contain coded information about the internal organization of the system. According to the authors, these patterns are often indicators of essential underlying processes and structures. The paper proposes pattern-oriented modeling as a modeling process that concerns every step: from initial model construction to stepwise model refinement, parameter estimation, and finally the detection of deficits and errors in the model structure. The authors argue that

- a model should only be as complex as is required to reproduce the patterns, and patterns can be used to tune the level of complexity or detail in a model;
- comparison of observed and predicted patterns can be used to select the best among alternative behavior models, to determine and evaluate the values of unknown or uncertain model parameters, and to detect deficits and errors in the model structure.

This is probably a good way to construct a model. If a number of good models exist that match reality reasonably well, then we can study the effects of policy decisions and see what their results are. Probably a reasonable number of models already exists. The problem is that people in the economic sciences usually tend to produce a lot of "bullshit".

-J.

(*) Pattern-Oriented Modeling of Agent-Based Complex Systems: Lessons from Ecology, Volker Grimm et al., Science Vol. 310 (2005), 987-991
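The pattern-oriented filtering that Grimm et al. describe can be sketched in a few lines. The model and the "observed" patterns below are invented for illustration; the point is only the mechanic of accepting parameterizations whose output reproduces several patterns at once and rejecting the rest.

```python
import random
import statistics

def model(reproduction_rate, seed=0, steps=50):
    """Toy population model: noisy logistic growth toward a capacity of 100."""
    rng = random.Random(seed)
    pop = [10.0]
    for _ in range(steps):
        p = pop[-1]
        growth = reproduction_rate * p * (1 - p / 100)
        pop.append(max(0.0, p + growth + rng.gauss(0, 2)))
    return pop

def matches_patterns(pop):
    """Two pretend 'field' patterns: equilibrium near 100, and modest fluctuations."""
    tail = pop[25:]
    near_equilibrium = abs(statistics.mean(tail) - 100) < 10
    modest_noise = statistics.stdev(tail) < 8
    return near_equilibrium and modest_noise

candidates = [0.05, 0.1, 0.3, 0.6, 1.0]
accepted = [r for r in candidates if matches_patterns(model(r))]
print("accepted reproduction rates:", accepted)
```

Slow reproduction rates never reach the observed equilibrium within the data window and are filtered out; each added pattern narrows the set of structurally plausible models, which is the paper's core argument.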
In reply to this post by Michael Agar
Policy simulations can be helpful on a number of levels. One would be looking at the self-organizing implications and outcomes for agents programmed with various assumptions about how markets or people behave and make choices in a particular environment. By varying the parameters one can tweak one's position, assuming that a large number of simulations have been examined and that they align reasonably well with empirical studies and other simulation tests. Right now, legislation that includes proportions or percentages typically derives those numbers from one or two experts, or they are made up by staff with no foundation. In any case, most simulations see government as a black box that puts resources or information into a single social realm, making it impossible to see how this complex relationship is working itself out.

Probably most importantly, public policy is meant to influence some behavior in the broader society. It is part of a very complex system on the government side (multiple programs at multiple government levels affecting a particular sector, say housing or education) and on the society side. As a complex system, it is very difficult to "think out" the implications of a policy position, either in terms of its unexpected consequences in relation to other programs or in relation to the social issue being influenced. All of this is tied to timing as well. Government legislative, regulatory, funding, litigation, and other processes tie into the public sector in complex ways that can disrupt or redirect this complex system in unexpected ways.

Finally, modeling of simpler government processes, like emergency medical disaster response, can identify critical bifurcation points that affect the level of mortality and morbidity. Such experiments in silico can provide real-world guidance.

I've done research in these areas and published some things, if anyone is interested: http://www.rand.org/scitech/stpi/Complexity/

Other examples:

http://www.rand.org/scitech/stpi/Complexity/don.pdf
http://www.pnas.org/cgi/content/abstract/99/suppl_3/7195
http://www.complexityandpolicy.org/dial.htm

Gus Koehler, Ph.D.
Principal
Time Structures
1545 University Ave.
Sacramento, CA 95825
916-564-8683, Fax: 916-564-7895
Cell: 916-716-1740
www.timestructures.com
Gus, who's probably been at the policy/model game longer than anyone, shows how a model/simulation is a character in a larger policy story, where the character's role and the story's plot can be quite different from time to time. It would be interesting to have a collection of these stories: the good, the bad, and the ugly. In the Redfish Zozobra project, for instance, a lot of good was done; much of that story was IT as a new actor that changed the conversation towards cooperation in a group of diverse interests with an oppositional history. A good experience, but more family therapy than bounded rationality. And then there will be evil stories. Did models/simulations facilitate the decision to invade Iraq? What evil lurked in the hearts of that code?

Mike
I am told by reliable sources that battlefield modeling was used by both sides in the first Iraq war. The story goes that we had better simulations with better information, which contributed to our victory, at least in the desert.

Michael's point about facilitation via simulation is excellent. When you can get decision makers to sit down and look at a simulation that is as well done and accessible as Steve's (very hard to do), a thoughtful dialogue does take place, particularly if the interaction is facilitated well. Truly, like family therapy with home movies.

Gus

Gus Koehler, Ph.D.
Principal
Time Structures
1545 University Ave.
Sacramento, CA 95825
916-564-8683, Fax: 916-564-7895
Cell: 916-716-1740
www.timestructures.com
In reply to this post by Gus Koehler
On 12/1/05, Gus Koehler <rhythm3 at earthlink.net> wrote:
> most simulations see government as a black-box that puts out resources
> or information into a single social realm making it impossible to see
> how this complex relationship is working itself out.

Apologies if it's not the most well-informed question, but wouldn't it make more sense to model government as the source of rules governing the agents in the system, and have those rules change over time to see how the agents' behavior adapts to them?

--
Giles Bowkett = Giles Goat Boy
http://www.gilesgoatboy.org/
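One hypothetical way to cash out that suggestion, sketched in Python: the "government" is just a schedule of rules (here, a tax on one of two activities that switches on partway through the run), and agents adapt by imitating better-off peers plus a little experimentation. All payoffs and rates below are invented.

```python
import random

def payoff(choice, tax_on_a):
    """Two activities: 'A' pays 1.0 minus the current tax, 'B' pays a flat 0.6."""
    return (1.0 - tax_on_a) if choice == "A" else 0.6

def run(seed=1, steps=200, n_agents=100):
    rng = random.Random(seed)
    agents = ["A"] * n_agents              # everyone starts in activity A
    share_a = []
    for t in range(steps):
        tax = 0.0 if t < 100 else 0.7      # the government changes the rule at t = 100
        for i in range(n_agents):
            j = rng.randrange(n_agents)    # compare with a random other agent
            if payoff(agents[j], tax) > payoff(agents[i], tax):
                agents[i] = agents[j]      # imitate whoever is doing better
            elif rng.random() < 0.02:
                agents[i] = rng.choice("AB")  # occasional experimentation
        share_a.append(agents.count("A") / n_agents)
    return share_a

share = run()
print(f"share in A just before the rule change: {share[99]:.2f}, at the end: {share[-1]:.2f}")
```

Before the change nearly everyone sits in A; after the tax makes B the better deal, the population migrates over, with a lag set by how fast imitation spreads. That lag, and the transient it produces, is exactly the kind of thing a black-box treatment of government hides.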
All -
So, floating in the air is roughly a question like, "What can simulation/complexity 'experts' offer to politicians/policy-makers?" Apropos that question, there is an interesting essay / book review by Louis Menand in the current issue of New Yorker, with the title "Everybody's an Expert - Putting predictions to the test." (available here: http://www.newyorker.com/critics/books/ articles/051205crbo_books1 ) The review looks at the book "Expert Political Judgment: How good is it? How can we know?" by Philip Tetlock (a UC Berkeley Psychology prof.). The general message is (to an extent . . .) that being an "expert" makes you, on average, somewhat worse at predicting than otherwise. There are sort of two questions here: Why are (individual) humans so bad at predicting, and why would "knowing more" make one a worse predictor? There is a classic bit of research (done in various forms at various times . . .) described. A rat is put in a T-maze (two choices, left and right). Repetitively, food is put at the end of one branch or the other. 60% of the time it is on the left branch, 40% on the right branch (randomized). After a while, the rat "figures out" that going left is better, and so eventually, it "guesses correctly" about 60% of the time (eventually, it almost always goes left). On the other hand, if a human is asked to do the same task (i.e., choose left or right for a reward), the human will "apply problem solving / pattern recognition skills" to the problem, and will "see" spurious patterns in the data, and try to use those as a basis for judgment. In typical runs, humans will end up "guessing correctly" something like only 52% of the time! In some sense, the human "knows too much" (or at least, thinks they do . . . :-). Part of the problem here is that we humans "can't stand the truth" in the sense that we can't readily incorporate disconfirming evidence into our modeling process. On the other hand, we readily accept (apparently?) "confirming" evidence. 
Of course, in this example, we humans also "know" that the "real" default probability model is always 50/50, so we are likely to "correct" our observations to conform to that expectation. The rat, on the other hand, is agnostic on default probability models :-) So (perhaps counterintuitively?), an important first step for the simulation/complexity "expert advisor" is likely to be to induce "self-doubt" in the decision maker. As long as the decision-maker is sure they're right (e.g., ideologues . . .), they will simply ignore any advice that doesn't conform to their predispositions. (Of course, this is likely to make the prediction business a tough sell, unless you're willing and able to build models that simply confirm what the decision-maker "already knew" . . . :-) What, then, to do? Perhaps a first piece of advice is to engage in some "simulation jiu jitsu" . . . Have available a few disarming (and disconcerting?) dead-simple, very easy to understand simulations with counterintuitive (or at least unexpected) results. The first step in the "advice" process is likely to be inducing the decision- maker to deconstruct their own world-view. Induce self doubt. Then build from there . . . The Zen "beginner's mind" is a good thing. (Some of you who have seen me lecture have seen some of my attempts at putting aspects of this into practice . . .) (And thus, of course, the humility of Frisbeetarianism, with its motto: When throwing a frisbee (or running a new simulation, or living a life :-), never say anything more predictive than "Watch this!") Anyway, interesting essay by Louis Menand -- and my guess is, interesting book by Tetlock . . . tom p.s. 
This sort of thinking also leads to a potentially amusing metaphor for why, pragmatically, "democracy" is a good thing: the body politic as a giant rat, which, using a "delphi" method, has a reasonable likelihood of coming to "better decisions" than the so- called "experts" -- the Giant Rat isn't "smart enough" to make the "expert" mistakes . . . |
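The 52% figure in Tom's T-maze story is just probability arithmetic: a "matcher" who guesses left 60% of the time is right 0.6 x 0.6 + 0.4 x 0.4 = 0.52 of the time, while the rat's maximize-on-left strategy is right 0.6 of the time. A quick Python check of that arithmetic:

```python
import random

def accuracy(strategy, trials=100_000, seed=7):
    """Fraction of trials on which the strategy picks the rewarded branch."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        food = "L" if rng.random() < 0.6 else "R"    # 60/40 reward schedule
        correct += strategy(rng) == food
    return correct / trials

maximizer = lambda rng: "L"                               # the rat, eventually
matcher = lambda rng: "L" if rng.random() < 0.6 else "R"  # the typical human

print(f"maximizer: {accuracy(maximizer):.3f}")   # close to 0.600
print(f"matcher:   {accuracy(matcher):.3f}")     # close to 0.520
```

The matcher loses precisely because it "sees structure": every departure from the better branch is a bet against 60/40 odds.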
God, what a great read. Stephen: we may put this in a customer
folder somewhere to explain why we don't predict. I think this is somewhat along the Freakonomics direction. Analysis is different from Prediction. Analysis can be a model with knobs, data mining, statistics and so on. It is prosaic. It does not predict -- rather it simply tries to dig out the details and use them to build a story, possibly with alternative outcomes (knobs in a model). Its dispassionate nature makes it far easier for the various players to think together without bias. I remember my first commercial potential hack here in Santa Fe: take an economic model (it was in an excel spreadsheet) and re-target it to a different market. My first look at the critter made me realize the "deliverable" was prediction, not clarification. The model was so arcane that no one other than the initial programmer could understand it. And the "product" could not be used or understood by a mortal. I dropped that sucker like a hot potato! Not to say that there are places for prediction, in stock markets. But even there it has a different form, and the best application is for the creators of the predictions to also make the bet! .. to have real skin in the game. But even these have the coin flipper "expert" problem. - Get 32 people to flip a coin, predicting their results. - On average half will be right, thus appearing more expert. - Have those 16 (roughly) folks repeat their prediction - Now we get 8 who've proven themselves quite expert indeed. - Continue .. You get the idea .. a lot of expertise is really exogenous (probability in this case) and eventually fails. We are realizing that the real experts are the clients, not us. Our job is to help them see their domain from a different, more analytic and data driven, direction and let THEM make decisions and predictions. 
-- Owen

Owen Densmore
http://backspaces.net - http://redfish.com - http://friam.org

On Dec 2, 2005, at 5:42 PM, Tom Carter wrote:
> All -
>
> So, floating in the air is roughly a question like, "What can
> simulation/complexity 'experts' offer to politicians/policy-makers?"
>
> [...]
>
> ============================================================
> FRIAM Applied Complexity Group listserv
> Meets Fridays 9a-11:30 at Mission Cafe
> lectures, archives, unsubscribe, maps at http://www.friam.org |
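Owen's coin-flipper progression is easy to check by simulation. A minimal sketch - the function name and parameters below are my own, not anything from the thread:

```python
import random

def coin_flip_experts(n_people=32, seed=0):
    """Simulate Owen's coin-flipper 'expert' story: each round, everyone
    predicts a fair coin flip; only those who guess right survive to the
    next round. Returns the survivor count after each round."""
    rng = random.Random(seed)
    survivors = n_people
    counts = [survivors]
    while survivors > 1:
        # Each survivor independently has a 50% chance of guessing right.
        survivors = sum(rng.random() < 0.5 for _ in range(survivors))
        counts.append(survivors)
    return counts

# Survivor counts roughly halve each round until one "proven expert"
# (or nobody) remains -- the surviving track record is pure chance.
print(coin_flip_experts())
```

Running it a few times with different seeds makes the point: somebody almost always ends up with an impressive-looking streak, and which somebody is entirely exogenous.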
I'm not sure about your analysis vs. prediction distinction, Owen. In ABM analysis I'm trying - as you say - to "dig out the details": understand what happened, figure out how the narrative arc developed. Why? Probably because I want to use that understanding to inform some real-world situation. If I'm making models of poverty in the US for some policy maker, it's because the policy maker needs a better idea of what's going to happen if that policy is enacted. In other words, the model is being used for prediction, albeit qualitative rather than quantitative. And if the 'understanding' isn't being used to help qualitative predictions, then exactly what was the point of uncovering it in the first place?

Robert

On 12/3/05, Owen Densmore <owen at backspaces.net> wrote:
> God, what a great read. Stephen: we may put this in a customer
> folder somewhere to explain why we don't predict.
>
> [...]
|
Some quotes came to mind...
"There are no such things as good models, only useful ones." "A consultant can explain (analyze) everything but predict nothing."

The understanding is that humans love patterns and see them where they don't exist (as Tom's rat story illustrates). The difficulty is proving/demonstrating/validating that a model is closely related to the real world and therefore might be useful. The hard sciences do this all the time. The soft sciences do it some of the time, and I suspect political science never really gets a chance.

Are there any experiences with, or citations of, models of nation states at any meaningful level of detail? How would you develop any confidence in their predictions? I suppose you could build a model with the initial conditions of the Bolshevik revolution and try to match the emergence and collapse of the Soviet Union (though it seems to me it would also need all the outside influences, turning it into a world model). Then use the same model, put in the initial conditions of the early colonization of North America, and see if it predicts the emergence of the United States. Then repeat the exercise to predict the rise and fall of the Roman Empire. I'd be impressed with that. Is it doable? It sounds like such an effort would take more resources than anyone can tap (at this point). Also, don't unique and creative events and discoveries, with enormous impacts, appear spontaneously in these histories and defy prediction?

Robert Cordingley

Robert Holmes wrote:
> I'm not sure about your analysis vs. prediction distinction Owen. In
> ABM analysis I'm trying - as you say - to "dig out the details",
> understand what happened, figure out how the narrative arc developed.
>
> [...]
|
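Tom's rat story, referenced above, contrasts a maximizing rat (about 60% correct in a 60/40 T-maze) with a pattern-hunting, probability-matching human (about 52%). A quick sketch of where those numbers come from - the strategies below are my own simplification of the story, not the original research protocol:

```python
import random

def t_maze(strategy, p_left=0.6, trials=1000, seed=1):
    """Tom's T-maze: the reward is on the left branch with probability
    p_left. Returns the fraction of correct guesses for a strategy."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        reward_left = rng.random() < p_left
        guess_left = strategy(rng, p_left)
        correct += guess_left == reward_left
    return correct / trials

# The "rat": having figured out left is better, always go left (maximize).
rat = lambda rng, p: True

# The "human": chase patterns, effectively guessing left with probability p
# (probability matching). Expected accuracy: p^2 + (1-p)^2 = 0.52 for p=0.6.
human = lambda rng, p: rng.random() < p

print(t_maze(rat))    # converges to p_left, i.e. roughly 0.6
print(t_maze(human))  # converges to roughly 0.52
```

The counterintuitive bit falls right out of the arithmetic: any strategy that mixes its guesses to mirror the pattern does strictly worse than boringly betting the favorite every time.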
This is a very complex dynamic. Not only do we have all of the social,
psychological, and economic issues and how they come forward when a policy window opens for their consideration, we also have the internal dynamics of the Legislature, of its relationship to the Governor, and of each to the administration. These in turn are connected to various social institutions that do or do not accept tax money to address various social problems. These groups in turn feed back, via the election process, the bill-writing and passing process, and the regulation preparation and program design process, to shape decisions. Each is driven by varying rates of change that do or do not propagate information, resources, energy, etc. over various distances. Small-world and other network conditions with their own characteristics affect these exchanges.

In any case, variations on how institutions were organized and continue to function come forth out of the past. They, like all social institutions, live and die over a long period of time. So too for policy influences going forward into the future. Such an agent-based simulation would, I think, be best visualized as a continuous process simulation - like a river that one could slice into and vary the magnification of. Knobs would permit a number of variations of parameters. Special analytical tools would measure what is going on.

How is it used? First, it is too complex to explain, since no one understands all of the causal relationships (remember autonomous agents). Second, it is based on real short- and long-term time series and various profiles of institutions, committees, etc. Frankly, no one knows how it works, but it has face validity and is interesting. A skilled group facilitator and a researcher who understands the issue as both an academic and a practitioner facilitate the session. Decision makers are taught to "read" the machine and to manipulate it. Many, many simulations are run as though they are "serious games."

This qualitative, artistic, intuitive and social process would produce interesting and perhaps insightful policies. Justice is not necessarily served. Witness the use of energy simulations by energy companies vs. the state of California.

Gus

Gus Koehler, Ph.D.
Principal
Time Structures
1545 University Ave.
Sacramento, CA 95825
916-564-8683, Fax: 916-564-7895
Cell: 916-716-1740
www.timestructures.com

-----Original Message-----
From: [hidden email] [mailto:[hidden email]] On Behalf Of Robert Holmes
Sent: Saturday, December 03, 2005 12:30 PM
To: The Friday Morning Applied Complexity Coffee Group
Subject: Re: [FRIAM] Open Invitation: (Who needs experts?)

I'm not sure about your analysis vs. prediction distinction Owen. In ABM
analysis I'm trying - as you say - to "dig out the details", understand what
happened, figure out how the narrative arc developed.

[...]
|
Michael,
Good insights. What is the Netlogo version of the sugarscape model? Any other insights? Working on a first draft of the brief for the group's feedback.

Best,
Justin

--- Michael Agar <magar at anth.umd.edu> wrote:

> Getting a sympathetic politician's ear is good. Getting his/her staff's ear is even better. But a simulation isn't going to do anything about poverty. A simulation might model a particular argument, though one needs to pay attention to the ideological substrate of the code, and a simulation might be a persuasive rhetorical device, and it might serve as a point of reference to clarify debates, all good things. But it won't solve anything absent a sense of the problem and an argument for its solution, assuming that that sense and that argument have any chance of solving it, which in our current political climate is hard to hope for.
>
> Why not send the politicos the netlogo version of that part of sugarscape that shows how the model generates an inequitable distribution of wealth as an example together with some words to explain it?
>
> Mike
Justin--The main Netlogo page is at http://ccl.northwestern.edu/netlogo/. The software is a free download. There are models in the main program, and then additional models on the community page, reachable from the home page. Many of them have potential policy implications, including the sugarscape model and a version of Schelling's segregation model, among others. Take a look, and then you can show the ones that seem relevant, given your conversations, to the politicos; in turn they can play with them in the privacy of their own homes to see the potential. Hope that helps.

Mike
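Agar's point about sugarscape — that a model with symmetric rules and equal starting conditions can still generate an inequitable distribution of wealth — can be illustrated without NetLogo. Below is a minimal sketch in Python of a "random exchange" economy in the same spirit: every agent starts with identical wealth, each step one randomly chosen agent passes one unit to another, and inequality emerges anyway. All function names and parameter values here are invented for illustration; this is not the actual sugarscape code, just a toy in the same family.

```python
import random

def gini(wealth):
    """Gini coefficient: 0 = perfect equality, near 1 = one agent holds everything."""
    w = sorted(wealth)
    n = len(w)
    cum = sum((i + 1) * x for i, x in enumerate(w))
    return (2 * cum) / (n * sum(w)) - (n + 1) / n

def random_exchange(n_agents=100, start_wealth=50, steps=200_000, seed=42):
    """Each step, a randomly chosen agent (if solvent) gives 1 unit to another
    randomly chosen agent. Total wealth is conserved; no agent is privileged."""
    rng = random.Random(seed)
    wealth = [start_wealth] * n_agents
    for _ in range(steps):
        giver = rng.randrange(n_agents)
        if wealth[giver] == 0:
            continue  # broke agents cannot give
        receiver = rng.randrange(n_agents)
        wealth[giver] -= 1
        wealth[receiver] += 1
    return wealth

wealth = random_exchange()
print(f"Gini after mixing: {gini(wealth):.2f}")  # equal shares would give 0.00
```

The disarming part, in the spirit of Tom's "simulation jiu jitsu," is that nothing in the rules favors anyone, yet the long-run distribution is strongly skewed (for this exchange rule it drifts toward a roughly exponential distribution with a Gini around 0.5). That is exactly the kind of dead-simple, counterintuitive demo a politico could run at home.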