Besides the technical issues, what is the advantage of parallel agent-based simulations? Can you achieve more with a billion agents than with a few thousand, or is it just an attractive-sounding possibility? An ant colony with a billion ants will not be significantly different or more intelligent than a colony with 10,000 ants. A swarm with 10,000 birds will look similar to a swarm with 100 birds, only a bit more fine-grained.

Is a simulation with millions or billions of agents somehow qualitatively different from a simulation with only a few thousand agents? Certainly not if they are all alike, if they all do the same thing, or if they all "live" in the same environment. It looks very difficult to construct a billion different agents, or to assign different tasks to billions of agents.

In evolutionary systems, AI, and ALife, scale certainly matters: a typical human brain has billions of neurons, a chromosome contains roughly a GByte of program, a billion bytes, and evolution on Earth took a few billion years to get from the earliest life forms to today's computer nerd. If we expect something interesting from an evolutionary ALife system, do we have to let it run for some billion years with a billion agents in order to get a "genetic code" of a billion bytes?

I bet the first true AI will have more than a billion bytes of code, too (already a few films easily take a few GByte of data). Somehow the lower bound for interesting behavior seems to be a billion interacting units - why is this so?

-J.
Nobody knows until someone does the experiment. It is certainly
possible that something interesting will happen once enough agents are simulated together. Right now it is a challenging task just to scale the simulations up.

Cheers

--
A/Prof Russell Standish, Mathematics, UNSW
Yes, but it does not make sense to simulate billions of agents just for their own sake. I guess nothing interesting will happen until each of the billion agents is unique and contributes something different to the same goal - which is only possible if the goal is clear.

If the goal is the simulation of a city, is a simulation of a whole city with 250,000 agents really different from a simulation with 2,500 agents? Reproducing a whole system at a 1:1 ratio would make a very poor model for understanding it. And how would you specify 250,000 agents with different preferences?

-J.
On Mon, Oct 09, 2006 at 01:23:29PM +0200, Jochen Fromm wrote:
> And how would you specify 250,000 agents with different
> preferences?

By setting their behaviour parameters from a probability distribution.

Also don't forget that geographical effects can be modelled in finer and finer detail - a few thousand similar agents in a homogeneous environment probably is indistinguishable from bulk, but if you have the street plan of New York, you may need hundreds of thousands of agents before bulk effects come into play.

--
A/Prof Russell Standish, Mathematics, UNSW
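To make that suggestion concrete, below is a minimal Python sketch of agents whose behaviour parameters are drawn from probability distributions. The parameter names and the particular distributions are illustrative assumptions, not anything specified in this thread.

import random

class Agent:
    def __init__(self, speed, risk_tolerance, home):
        self.speed = speed                    # movement per time step
        self.risk_tolerance = risk_tolerance  # in [0, 1]
        self.home = home                      # (x, y) on a street grid

def make_agents(n, grid=100, seed=42):
    rng = random.Random(seed)
    agents = []
    for _ in range(n):
        agents.append(Agent(
            speed=rng.lognormvariate(0.0, 0.25),       # positive, skewed
            risk_tolerance=rng.betavariate(2.0, 5.0),  # bounded in [0, 1]
            home=(rng.randrange(grid), rng.randrange(grid)),
        ))
    return agents

agents = make_agents(250_000)  # every agent differs, none written by hand

No agent is specified individually: the heterogeneity comes from the draws, and the seed keeps runs reproducible.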
In reply to this post by Jochen Fromm-3
In the case of epidemiology simulations, much information is gained by
simulating all 300 million people in the US population to see patterns in disease outbreak and the relative effectiveness of various intervention strategies. Essential to this, of course, is that the city/transportation population networks and activity patterns be realistically represented.

--Doug
In reply to this post by Jochen Fromm-3
Jochen Fromm wrote:
> Is a simulation with millions or billions of agents somehow
> qualitatively different from a simulation with only a few
> thousand agents? Certainly not if they are all alike, if
> they all do the same thing, or if they all "live" in the
> same environment.

It's not unusual to have a semi-realistic landscape (e.g. from GIS data) and then have agents move around on that landscape. So even though the overall population is very large, the effective population acting on a specific piece of landscape over time is smaller. Getting good statistics on these smaller areas of interest, or on various kinds of rare events, can mean that the whole simulation has to be very big.
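A back-of-the-envelope sketch of the rare-event point, with made-up probabilities purely for illustration: the expected number of events in a run is roughly n * p, so observing even a handful of events with per-agent probability p means the population has to scale like 1/p.

# Expected rare-event counts: n must grow as 1/p to see events at all.
for p in (1e-3, 1e-5, 1e-7):
    for n in (10_000, 1_000_000, 100_000_000):
        print(f"p={p:.0e}, agents={n:>11,}: ~{n * p:g} events expected")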
In reply to this post by Russell Standish
Russell Standish suggested that one could specify large quantities of
similar but not exactly the same agents:

> By setting their behaviour parameters from a probability distribution.

But isn't this self-fulfilling? If you collect data about behaviours to populate your probability distribution, you will be programming your agents to act the way you collected your data. If, by chance or design, your data collection is biased, your agents will be biased.

--
Ray Parks, Sandia (rcparks at sandia.gov)
Raymond Parks wrote:
> But isn't this self-fulfilling? If you collect data about behaviours
> to populate your probability distribution, you will be programming your
> agents to act the way you collected your data. If, by chance or design,
> your data collection is biased, your agents will be biased.

Being distributions, the parameters (the mixing ratios of different kinds of agent behaviors) will have random perturbations around typical values, and in a large or long enough run you'll witness the consequences of how this bias might play out at a global level.

The bigger the computers, the wider the variances of agent mixes that can be measured.
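One way to read "random perturbations around typical values" in code - a sketch, assuming a Dirichlet draw for the mix of behaviour types (implemented with normalised gamma variates from the standard library); the behaviour types and all numbers are illustrative:

import random

def sample_mix(typical, concentration, rng):
    # Dirichlet draw via normalised gamma variates; a higher
    # concentration keeps each run closer to the typical mix.
    draws = [rng.gammavariate(concentration * t, 1.0) for t in typical]
    total = sum(draws)
    return [d / total for d in draws]

rng = random.Random(1)
typical_mix = [0.6, 0.3, 0.1]  # e.g. commuters / shoppers / tourists
for run in range(3):
    print(run, [round(m, 3) for m in sample_mix(typical_mix, 50.0, rng)])

Each run then gets a slightly different population mix, so a sweep of runs probes how sensitive the global outcome is to any bias in the assumed ratios.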
Another way to answer this is, "No, it will not be self-fulfilling if there is an appropriate experimental design for using stochastically generated input parameters for agents in an ABM system." EpiSims uses stochastically generated disease parameters to characterize both the disease agents and the individual person responses to disease. When the EpiSims runs are made, there are additional stochastic processes that influence population mixing patterns, with the results being statistically valid and non-self-fulfilling.

--Doug
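A generic sketch - emphatically not EpiSims code, whose internals are not shown in this thread - of what stochastically generated disease parameters plus a stochastic mixing process could look like; every distribution and number here is an assumption for illustration:

import random

def draw_person(rng):
    # Per-person disease response, drawn rather than hand-assigned.
    return {
        "susceptibility": rng.betavariate(2.0, 2.0),            # in [0, 1]
        "infectious_days": max(1, round(rng.gauss(5.0, 1.5))),  # truncated
    }

def run_once(n_people, seed):
    rng = random.Random(seed)
    people = [draw_person(rng) for _ in range(n_people)]
    # Stand-in for a stochastic mixing process: random contact pairs.
    contacts = [(rng.randrange(n_people), rng.randrange(n_people))
                for _ in range(3 * n_people)]
    return people, contacts

# Replicates over seeds supply the run-to-run variability needed for
# statistically meaningful aggregate results.
results = [run_once(10_000, seed=s) for s in range(5)]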
Douglas Roberts wrote:
> Another way to answer this is, "No, it will not be self-fulfilling if
> there is an appropriate experimental design for using stochastically
> generated input parameters for agents in an ABM system."

What I'm hearing is that you-all (or at least Doug) use statistically valid distributions to program the behaviour of your agents. This seems reasonable for things that can be measured with certainty - patient response to disease, for example. However, it seems to me that if you conduct a survey of a population sample concerning, say, political opinion or purchasing habits, you run the risk of bias in the survey being translated into bias in the ABM.

Let me suggest an example close to my own heart. I am interested in how people react to retail market price changes in electricity in future demand-response systems. There are some data from existing demand-response systems, but these function differently than newer systems. What's more, the populations are geographically different from the target populations, and they are much smaller. We could survey customers to see if they would allow their heating and air-conditioning systems to respond to market prices. The trouble with past surveys of this type is that utility customers tend to be more willing to sacrifice comfort in the abstract than in reality. If one uses the PDF from the survey, the results are far different from those one gets with the PDF from reality. And the real-world PDF only holds for certain locales and climes.

--
Ray Parks, Sandia (rcparks at sandia.gov)
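That survey-versus-reality worry can be made concrete with a toy comparison. Both willingness distributions below are made-up numbers, chosen only to show how far apart the aggregate answers can land:

import random

def fraction_shedding(mean_willingness, sd, n=100_000, seed=3):
    # Each customer's willingness to let the utility cycle their AC is
    # drawn from a hypothetical distribution, clipped to [0, 1].
    rng = random.Random(seed)
    shed = 0
    for _ in range(n):
        w = min(1.0, max(0.0, rng.gauss(mean_willingness, sd)))
        if rng.random() < w:
            shed += 1
    return shed / n

print("survey-derived PDF:", fraction_shedding(0.7, 0.2))  # comfort in the abstract
print("realised PDF:      ", fraction_shedding(0.3, 0.2))  # comfort in practice

With these assumed numbers the simulated load shed drops by more than half, which is exactly the kind of gap between survey-driven and reality-driven runs the example warns about.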
Interesting, Ray. A number of years ago a couple of LANL program managers
and I went back to Washington to try to interest DOE in an ABM that was intended to do exactly what you describe below. In addition, the simulation would have allowed market gaming scenarios to be run for analyzing how the power arbitragers (remember CalPine?) could game the market (remember all those windfall profits the power companies were making in California a few years back?). We spent a few days, but basically could not find anybody at DOE willing to give us the time of day, so the project never flew. We did mention a couple of times during our visit that DOE does have the word "Energy" in it.

Had we gotten any support, the system would have been named ElectriSims, and it would have been another in the family of "Sims" developed over the years by our group (TRANSIMS, EpiSims, MobiCOM).

--Doug (still completely unimpressed by the US Department of Energy -- rhymes with "Doh!")
On 10/9/06, Douglas Roberts <doug at parrot-farm.net> wrote:
> Interesting, Ray. A number of years ago a couple of LANL program managers
> and I went back to Washington to try to interest DOE in an ABM that was
> intended to do exactly what you describe below. In addition, the simulation
> would have allowed market gaming scenarios to be run for analyzing how the
> power arbitragers (remember CalPine?) could game the market (remember all
> those windfall profits the power companies were making in California a few
> years back?).
<snip>

It's also interesting to note that many power arbitragers successfully gamed the system without the benefit of massive simulations. It suggests to me that we often get tied up in our own simulation/modeling expertise and forget that there are a lot of practitioners out there with valuable domain expertise. Give an arbitrager a solid financial incentive for finding opportunities to game a system and s/he will find them a whole load faster than a modeler :-)

R
One is tempted to say that the arbitragers are effective(ly) modelers ;-)
----- Original Message -----
From: Robert Holmes
To: The Friday Morning Applied Complexity Coffee Group
Sent: Tuesday, October 10, 2006 10:09 AM
Subject: Re: [FRIAM] A billion agents
<snip>