Is there a formal definition of Agent Based Model? By "formal
definition" I mean something like there is for a Finite State Automaton:

A Finite State Automaton is defined as a 5-tuple (Q, Σ, δ, q0, F) where
1 - Q is a finite set called the states
2 - Σ is a finite set called the alphabet
3 - δ: Q × Σ -> Q is the transition function
4 - q0 ∈ Q is the start state, and
5 - F ⊆ Q is the set of accept states
(Note: the above uses Unicode, so if your display is odd, it's trying
to show symbols for Sigma, delta, and so on.)

If not, we ought to build one, right? Wikipedia discusses both CAs and ABMs:
http://en.wikipedia.org/wiki/Cellular_automata
http://en.wikipedia.org/wiki/Agent_based_model
.. as well as FSA:
http://en.wikipedia.org/wiki/Finite_state_automata

Note that only the latter has a formal definition in the article.

-- Owen
Owen Densmore wrote:
> If not, we ought to build one, right?

How about writing ABMs in a functional programming language? Then all
transitions can be viewed as equations.
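A hedged sketch of what that might look like in Haskell, with invented
agent fields: the whole model transition is one pure function, and a run of
the model is just iterated application of that equation.

-- Invented, minimal agent and world types, purely for illustration.
data Agent = Agent { position :: Double, energy :: Double } deriving Show

type World = [Agent]

-- The whole transition viewed as an equation: world' = step world.
step :: World -> World
step = map move
  where move a = a { position = position a + 1.0, energy = energy a - 0.1 }

-- The model's trajectory is iterated application: w0, step w0, step (step w0), ...
trajectory :: World -> [World]
trajectory = iterate step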
In reply to this post by Owen Densmore
Not sure about agent-based models, but Benenson and Torrens have a formal
definition for Geographic Automata Systems (GAS). More on that is available
in their book on Geosimulation <http://www.geosimulationbook.com/>

--snip--
Formally, a Geographic Automata System (GAS), G, may be defined as
consisting of seven components:

   G ~ (K; S, T_S; L, M_L; R, N_R)

Here K denotes a set of types of automata featured in the GAS, and three
pairs of symbols denote the rest of the components noted above, each
representing a specific property and the rules that determine its dynamics.
The first pair denotes a set of states S associated with the GAS, G
(consisting of subsets of states S_k of automata of each type k ∈ K), and a
set of state transition rules T_S, used to determine how automata states
should change over time. The second pair represents location information: L
denotes the geo-referencing conventions that dictate the location of
automata in the system, and M_L denotes the movement rules for automata,
governing changes in their location. According to the general definitions
(1)-(2), state transitions and changes in location for geographic automata
depend on the automata themselves and on input (I), given by the states of
neighbors. The third pair in (4) specifies this condition: R represents the
neighbors of the automata and N_R represents the neighborhood transition
rules that govern how automata relate to the other automata in their
vicinity.
--snip--

More reading:

Benenson, I. & Torrens, P.M. (2004) Geosimulation: Automata-Based Models of
Urban Phenomena <http://www.geosimulationbook.com/>. Chichester: John Wiley
& Sons.

Benenson, I. & Torrens, P.M. (2004) "Geosimulation: object-based modeling
of urban phenomena
<http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6V9K-482YTP2-2&_user=10&_coverDate=03%2F31%2F2004&_rdoc=1&_fmt=summary&_orig=browse&_sort=d&view=c&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=af4a11dff3aadf9aadac26a912266168>".
Computers, Environment and Urban Systems 28 (1/2): 1-8.

Sudhira
--
Research Scholar, Ph.D., Department of Management Studies and Centre for
Sustainable Technologies, Indian Institute of Science, Bangalore - 560 012,
Karnataka State, INDIA
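Read alongside Owen's FSA tuple, the seven GAS components also transcribe
into a typed sketch. The following Haskell rendering is speculative -- the
field names and factoring are mine, not Benenson and Torrens's -- but it
shows how each rule set T_S, M_L, N_R pairs with its set S, L, R (the sets
here carried by type parameters and lists):

-- Speculative transcription of G ~ (K; S, T_S; L, M_L; R, N_R).
-- The type parameters k, s, and loc stand in for the automata types K,
-- the state set S, and the geo-referencing convention L respectively.
data GAS k s loc = GAS
  { automataTypes :: [k]                    -- K, the set of automata types
  , stateRule     :: k -> s -> [s] -> s     -- T_S: next state from own state and neighbors' states
  , moveRule      :: k -> s -> loc -> loc   -- M_L: movement rules over locations
  , neighborRule  :: loc -> [loc] -> [loc]  -- N_R: how the neighborhood R changes
  }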
In reply to this post by Marcus G. Daniels
Could Owen's definition allow ABMs having an internal (self-referential)
design, powered by a gradient external to their design? That would somewhat
model the occurrence of 'emergent cells' of new systems of relationships
that are 'feeding on' rather than being 'determined by' their environments.

Phil Henshaw
680 Ft. Washington Ave, NY NY 10040
tel: 212-795-4844
e-mail: pfh at synapse9.com
explorations: www.synapse9.com
Phil Henshaw wrote:
> That would somewhat model the occurrence of 'emergent cells' of new
> systems of relationships that are 'feeding on' rather than being
> 'determined by' their environments.

To my way of thinking, a `type correct' computer program behaves somewhat
like a fit individual in a biological system. But it's not very realistic,
since type consistency is a global property of a program. If you fiddle
with a subcomponent of a Java program, for example, the compiler will tell
you if you did something wrong and not let you run the program. In a
biological system the corresponding subcomponent might not get exercised
enough to change the fitness of the individual much. Fitness is a function
of both the environment and the individual. For example, given a large
population of individuals where few are in a situation in the environment
where changes to a part of their genome would kill them, it doesn't matter
if that change is, in principle, inferior. By `in principle' I mean there
are easy-to-imagine environments where it would be inferior. In a computer
program I mean that a code path has operands and operators that are
incompatible.

For me, it's initially easier to write some kinds of computer programs in
ways that are not type safe (e.g. to use scripting languages) because I
don't have to think about how all of the pieces fit together. It's like not
pounding all of the nails all of the way down. Attention can be focused on
writing a new part of a program, and once the logic is worked out, the
details of the interface can be decided to safely connect it to the rest.
The larger program or set of programs can even be running while this logic
is worked out. I think this kind of flexibility is pretty important for
evolution, and missing in a lot of ABMs and ABM tools.
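The 'global property' point in miniature: in a statically typed language
nothing runs until everything type-checks, even code paths that would never
be exercised. A toy Haskell illustration (names invented):

halve :: Int -> Int
halve n = n `div` 2

main :: IO ()
main = print (halve 10)

-- Uncommenting the 'fiddled' line below stops the *whole* program from
-- compiling, even though this path would never run:
-- fiddled :: Int
-- fiddled = halve "ten"   -- type error: String is not Int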
That doesn't seem to get at my question. It's partly about the major
structure of nature: that it's full of things that are tightly
self-referential. A self-sufficient set of rules is a good example, but an
individual air current is a very good example too. With respect to an
internalized system's self-references, anything else in the universe is
'out of the loop' and does not meaningfully exist. I think that's a major
reason why it's quite hard to perceive the presence of natural systems at
all. Still, where we see such closed systems developing in the physical
world, it is always on 'gradients' (yes, even closed sets of rules would
appear to live on the gradients of the author's imagination).

There is certainly lots of thinking to be done about all that, but my
question was simply whether there's a way to design a set of rules that has
only to do with itself, but relies for its operation on rules outside its
definitions, beyond its set of self-references. I can see how it works with
physical things, but not how to do it with abstract models. It's a
topological question, I think, something like how to have a universal set
contain itself as an element, and other confusing conjectures... In nature
one can often see how it has to do with physical parts of systems
participating in many independent systems at once. That seems harder for me
to imagine for logical systems. Can it be done somehow?

Phil Henshaw
680 Ft. Washington Ave, NY NY 10040
tel: 212-795-4844
e-mail: pfh at synapse9.com
explorations: www.synapse9.com
Phil Henshaw wrote:
> In nature one can often see how it has to do with physical parts of
> systems participating in many independent systems at once. That seems
> harder for me to imagine for logical systems. Can it be done somehow?

If you take a typical program and look at it, it's just a string of
characters: opcodes and operands. However, some of these substrings
represent functions that get called from different sorts of callers on
behalf of a range of higher-level purposes (e.g. a call to allocate memory
or a thread of execution).

> my question was simply whether there's a way to design a set of rules
> that has only to do with itself, but relies for its operation on rules
> outside its definitions, beyond its set of self-references?

With a Unix shared library, for example, even many of the physical memory
parts are shared across independent systems. Self-referential logic is
contained in an assembly, exposing some symbols, and then different client
programs (rules outside its definition) drive it via these symbols --
a.k.a. user applications.
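A rough Haskell analogue of that shared-library picture, with invented file
layout and rules: the module's internals refer only to one another, a
single symbol is exported, and any number of unrelated clients can drive
the logic through that symbol.

-- Library.hs: a closed, mutually self-referential rule set, exporting
-- one symbol. The Collatz-style rules are purely for illustration.
module Library (respond) where

respond :: Int -> Int
respond n
  | n <= 1    = n
  | even n    = shrink n
  | otherwise = grow n

shrink, grow :: Int -> Int
shrink n = respond (n `div` 2)
grow   n = respond (3 * n + 1)

-- A client (one of many possible, each with its own purposes) would be a
-- separate file that knows nothing of the internals:
--   import Library (respond)
--   main = print (respond 27)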
Hmmm... yes, good examples of how the logical systems of programmers are
implemented using 'heterarchical' physical systems... but does that answer
the question of whether abstract systems of logic can be built to be
entirely self-referential, while being sustained by 'feeding on' or in
other ways 'exploring' the gradients of other systems with which they have
no 'logical' connection?

Phil Henshaw
680 Ft. Washington Ave, NY NY 10040
tel: 212-795-4844
e-mail: pfh at synapse9.com
explorations: www.synapse9.com
Phil Henshaw wrote:
> but does that answer the question of whether abstract systems of logic
> can be built to be entirely self-referential, while being sustained by
> 'feeding on' or in other ways 'exploring' the gradients of other systems
> with which they have no 'logical' connection?

If a person moves to a new country and doesn't know the language, and
learns it through observation and experiment, that might be an example.
That's assuming the person had a closed self-referential learning
mechanism. Presumably the engine that's driven by the abstract system of
logic can store and retrieve data, even if the logic itself were
unmodifiable? (I doubt the latter is a useful constraint, but for the sake
of the thought experiment...)
I think the question is bound up in the technical meaning of the word
'explore', which has to do with discovering a world 'outside'. Can a
self-referential logic learn from its environment? That's my question. I
think it probably can if it is designed to evolve by experimenting at its
fringe and responding to the feedbacks. Then it doesn't need to have any
logical connection with the mechanisms of the systems it is thus
interacting with, but may co-evolve or collaborate with them.
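One hedged sketch of that design in Haskell -- the environment function and
the hill-climbing scheme below are inventions for illustration. The agent's
rule refers only to its own state and to the feedback it receives; it has
no logical access to how the environment produces that feedback:

type Feedback = Double

-- The environment is a black box to the agent: some external gradient,
-- here standing in for whatever the agent happens to be 'feeding on'.
environment :: Double -> Feedback
environment x = negate ((x - 3.0) ** 2.0)

-- Experiment at the fringe: probe a small step each way, keep what pays.
explore :: (Double -> Feedback) -> Double -> Double
explore env x
  | env (x + d) > env x = x + d
  | env (x - d) > env x = x - d
  | otherwise           = x
  where d = 0.1

-- The agent climbs the external gradient without any model of it.
main :: IO ()
main = mapM_ print (take 40 (iterate (explore environment) 0.0))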
In reply to this post by Marcus G. Daniels
Hi,
> If a person moves to a new country and doesn't know the language, and
> learns it through observation and experiment, that might be an example.
> That's assuming the person had a closed self-referential learning
> mechanism. Presumably the engine that's driven by the abstract system of
> logic can store and retrieve data, even if the logic itself were
> unmodifiable? (I doubt the latter is a useful constraint, but for the
> sake of the thought experiment...)

I suggest you read a wonderful book by James Welch Jr., The Heartsong of
Charging Elk. Here an individual (agent) is left alone for decades in a
foreign culture and language, and no one there, of course, can speak Sioux.

"The Heartsong of Charging Elk," published in 2000, tells the story of a
young Oglala Sioux, traveling in France with Buffalo Bill's Wild West show
in 1889, who is hospitalized and stranded in Marseilles when the troupe
moves on. The novel follows the young man as he comes to grips with the
enormous cultural, linguistic and geographical dislocations of his life,
gradually building a life for himself as a Frenchman.

Lou