Hi,
I'm interested in developing a model that uses rule-driven agents. I would like the agent rules to be condition-action rules, i.e., similar to the sorts of rules one finds in forward chaining blackboard systems. In addition, I would like both the agents and the rules themselves to be first class objects. In other words, the rules should be able:

- to refer to agents,
- to create and destroy agents,
- to create new rules for newly created agents,
- to disable rules for existing agents, and
- to modify existing rules for existing agents.

Does anyone know of a system like that?
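In sketch form, those requirements might look like the following (hypothetical Python; this illustrates the wish list, not any existing system — all names here are invented):

```python
class Rule:
    """A condition-action rule as a first-class object."""
    def __init__(self, name, condition, action):
        self.name = name
        self.condition = condition   # callable: (agent, world) -> bool
        self.action = action         # callable: (agent, world) -> None
        self.enabled = True          # rules can be disabled for an agent

class World:
    def __init__(self):
        self.agents = set()

class Agent:
    def __init__(self, world, rules=()):
        self.rules = list(rules)     # each agent carries its own rules
        self.world = world
        world.agents.add(self)       # rules/agents can create agents

    def step(self):
        # forward chaining: fire every enabled rule whose condition holds
        for rule in list(self.rules):
            if rule.enabled and rule.condition(self, self.world):
                rule.action(self, self.world)

# A rule whose action creates a new agent carrying a newly built rule,
# then disables itself -- i.e., rules referring to and modifying rules.
def spawn_action(agent, world):
    child_rule = Rule("noop", lambda a, w: False, lambda a, w: None)
    Agent(world, [child_rule])       # create a new agent with a new rule
    spawner.enabled = False          # disable this rule after firing once

spawner = Rule("spawn-once", lambda a, w: True, spawn_action)

world = World()
root = Agent(world, [spawner])
root.step()                          # world now contains root + one child
```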
-- Russ

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
Russ,
I haven't seen a system like you're describing. It shouldn't be too hard to assemble, though, if the rule grammar were simple.

-S
--- -. . ..-. .. ... .... - .-- --- ..-. .. ... ....
[hidden email]
(m) 505.577.5828 (o) 505.995.0206
redfish.com _ sfcomplex.org _ simtable.com _ lava3d.com

On Aug 22, 2009, at 9:13 PM, Russ Abbott wrote:
> I'm interested in developing a model that uses rule-driven agents. ...
Thanks, Stephen,
From my initial Googling, the closest I could find was Drools. It's intended to provide a forward chaining rule programming language for distributed systems (J2EE). It's open source from JBoss. Although it has nothing to do with agent-based modeling, it seems quite nice and quite general. It runs on top of, and is completely integrated with, Java. An agent can simply be an object type. Its Template capability allows it to generate rules that are stored as Java objects, which seems to make it capable of manipulating rules dynamically. I'll have to look into that further. One can make it a simulation engine by keeping a tick counter in the workspace.

It would be nice, though, if there were a system already developed for this sort of thing. Now that I know what I want, it seems like such a natural and powerful kind of modeling capability that it's amazing that it hasn't been done!

-- Russ

On Sat, Aug 22, 2009 at 8:58 PM, Stephen Guerin <[hidden email]> wrote:
> I haven't seen a system like you're describing. ...
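The tick-counter idea can be sketched generically (illustrative Python standing in for a rule engine's working memory — Drools itself uses its own DRL rule language over Java objects, so this is only the shape of the idea):

```python
# A toy forward-chaining loop whose working memory ("workspace") includes
# the current tick, so rules can be time-conditioned. The facts and rules
# below are invented for illustration.
workspace = {"tick": 0, "fuel": 3, "log": []}

rules = [
    # (condition, action) pairs over the workspace
    (lambda ws: ws["fuel"] > 0,  lambda ws: ws.update(fuel=ws["fuel"] - 1)),
    (lambda ws: ws["fuel"] == 0, lambda ws: ws["log"].append(
        "out of fuel at tick %d" % ws["tick"])),
]

def run(workspace, rules, ticks):
    # The tick counter lives in the workspace itself; advancing it is
    # what turns a rule engine into a simulation engine.
    for _ in range(ticks):
        workspace["tick"] += 1
        for condition, action in rules:
            if condition(workspace):
                action(workspace)

run(workspace, rules, 5)
```

Here the second rule starts firing once the first has burned the fuel down, so the log records every tick from 3 onward.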
In reply to this post by Russ Abbott
About a million years ago, we developed an agent-based model (except that in 1986 we called them Actor-based models) that did just this. It was a C^3I (Command, Control, Communication, and Intelligence) military simulation in which battalion-sized organizations would deploy in a rad-war environment. The simulation had reconnaissance agents, commander agents, fuelers, and communications. It was implemented in KEE (Knowledge Engineering Environment), a LISP-based AI shell. The decision logic was implemented in KEE's rule system. It did not create new rules, but it could disable or modify existing ones.
The simulation updated its state, operating on perceived knowledge about the state of the terrain it was traversing and updating that with "ground truth" when such became available. The project was called "The Mobile Intercontinental Ballistic Missile (MICBM) simulation". Remnants of it can still be found by googling:

http://www.osti.gov/bridge/product.biblio.jsp?query_id=0&page=0&osti_id=6940830

If you go here it will cost you $17 to read all about it:

http://www.ntis.gov/search/product.aspx?ABBR=DE87003741

--Doug

--
Doug Roberts
[hidden email]
505-455-7333 - Office
505-670-8195 - Cell
Hope this helps.
There is a formal specification framework that has the flavor you are looking for. It's called abstract state machines (ASM), and it has several executable implementations (AsmL, CoreASM, and several others). There's been some work (Uwe Glaesser at Simon Fraser) on applying it to social simulations (mainly computational criminology). During the pre-migration period of CCS-5, some people seemed to think that it was a promising approach for formally specifying the simulations they were dealing with. I don't know, however, if anything came out of this.

There is a crucial requirement on your list missing from the ASM definition, though: the "first-class" part. To have that, you would have to modify the formal definition of ASM. How to do it properly is not entirely trivial. Well, let's say I hope to complete a paper doing that in the not too distant future :). As for executable implementations ...

Best,
Gabi

On 8/23/09, Douglas Roberts <[hidden email]> wrote:
> About a million years ago, we developed an agent-based model (except that in 1986 we called them Actor-based models) that did just this. ...
In reply to this post by Russ Abbott
Have you looked at Jade (Java Agent Development Environment)? Try a Google search on Jade agent systems and you will also turn up AgentOWL. The latter is an agent system based on Jade, but it adds an OWL-based ontology, using Jena for the ontology processing.

Steph T
Thanks, Steph,
I had looked at Jade and got the impression that it was intended to support agent-to-agent interaction over the web, where the agents are essentially Java programs. I didn't see anything about the agents being driven by rules in a language that the agents themselves could manipulate. OWL provides an ontology language/framework, but I don't think it provides a means for writing rules that will control the agents. I got a similar impression about Jena. Also, I hate looking at XML and wouldn't want my agents to have to read and write it.

Perhaps all of that is judging in haste. I'll keep an open mind about it. But I must say that I've been very impressed by what I saw on the Drools site. I had never heard of it before this weekend, but now I want to know more.

-- Russ

On Sun, Aug 23, 2009 at 5:49 AM, Stephen Thompson <[hidden email]> wrote:
In reply to this post by Russ Abbott
Russ Abbott wrote:
> I'm interested in developing a model that uses rule-driven agents. I would like the agent rules to be condition-action rules, i.e., similar to the sorts of rules one finds in forward chaining blackboard systems.

You might take a look at rewrite approaches, e.g.
http://www.swarm.org/index.php/NFsim:_A_Novel_Agent-based_Platform_for_Stochastic_Simulation_of_Complex_Biological_Systems

> In addition, I would like both the agents and the rules themselves to be first class objects.

In the end, this is a matter of programming (implementing a symbol table and having a means to string together virtual instructions). But it's easier if the programming language you are using has an evaluator or runtime compiler, and reflection capability. Languages with minimal syntax and good macro systems, like Lisp, also make it easier to construct new rules at runtime.
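A toy version of that point (Python's `eval` standing in for a Lisp evaluator; the rule texts here are invented for illustration — a rule that did not exist at compile time is assembled from strings and run):

```python
def make_rule(condition_src, action_src):
    """Build a new condition-action rule from source text at runtime."""
    # eval gives us an evaluator: the rule's parts are ordinary data
    # (strings) until this moment, so other rules could have written them.
    condition = eval("lambda agent: " + condition_src)
    action = eval("lambda agent: " + action_src)
    return condition, action

agent = {"x": 0, "heading": "north"}

# This rule's text could itself have been generated by another rule.
cond, act = make_rule("agent['x'] < 3",
                      "agent.update(x=agent['x'] + 1)")

# Fire the freshly constructed rule until its condition fails.
while cond(agent):
    act(agent)
```

The same trick is far more natural in a language where code is data (Lisp macros, or a runtime compiler), which is the point being made above.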
In reply to this post by Russ Abbott
Russ,
I'm probably just saying this out of ignorance, but... if you want to "really" do that, I'm not sure how to do so. However, given that you are simulating anyway: if you want to simulate doing that, it seems straightforward. Pick any agent-based simulation program, create two classes of agents, and call one class "rules" and the other "agents". Let individuals in the "rules" class do all sorts of things to individuals in the "agents" class (including controlling which other "rules" they accept commands from and how they respond to those commands).

Not the most elegant solution in the world, but it would likely be able to answer whatever question you want to answer (assuming it is a question-answering task you wish to engage in), with minimum time spent banging your head against the wall programming it. My biases (and lack of programming brilliance) typically lead me to find the simplest way to simulate what I want, even if that means the computers need to run a little longer. I assume there is some reason this would not be satisfactory?

Eric

On Sat, Aug 22, 2009 11:13 PM, Russ Abbott <[hidden email]> wrote:

Eric Charles
Professional Student and Assistant Professor of Psychology
Penn State University
Altoona, PA 16601
In reply to this post by Stephen Guerin
Isn't this close to Miles Parker's latest work? His WedTech talk seemed to suggest that. His proposal was accepted, and he's got a project page: http://www.eclipse.org/amp/ .. he may have become less ambitious in the rule-driven approach, not sure.

-- Owen

On Aug 22, 2009, at 9:58 PM, Stephen Guerin wrote:
> I haven't seen a system like you're describing. It shouldn't be too hard to assemble though if the rule grammar was simple. ...
Hi Miles,
We exchanged some mutually appreciative emails some time ago but haven't spoken in quite a while. I hope things are well with you.

I'm looking for an ABM framework in which (a) agents are driven by forward chaining rules (i.e., a "blackboard" system) and (b) those rules are themselves first class citizens in the model so that, for example, an agent can create a new agent with dynamically generated rules, or an agent can modify its own rules dynamically. Owen Densmore of the FRIAM group suggested (see below) that you may be working on something like this. Are you? I can't find the reference Owen referred to.

Thanks,

-- Russ Abbott
_____________________________________________
Professor, Computer Science
California State University, Los Angeles
Cell phone: 310-621-3805
o Check out my blog at http://russabbott.blogspot.com/

On Sun, Aug 23, 2009 at 11:17 AM, Owen Densmore <[hidden email]> wrote:
> Isn't this close to Miles Parker's latest work? His WedTech talk seemed to suggest that. His proposal was accepted, and he's got a project page: http://www.eclipse.org/amp/ ...
In reply to this post by Eric Charles
Thanks Eric. It doesn't sound like your suggestion will do what I want. I want to be able to create new rules dynamically as in rule evolution. As I understand your scheme, the set of rule-agents is fixed in advance.
-- Russ

On Sun, Aug 23, 2009 at 8:30 AM, ERIC P. CHARLES <[hidden email]> wrote:
In reply to this post by Russ Abbott
Hi Michael,
I'm interested in developing a model that uses rule-driven agents. I would like the agent rules to be condition-action rules, i.e., similar to the sorts of rules one finds in forward chaining blackboard systems. In addition, I would like both the agents and the rules themselves to be first class objects. In other words, the rules should be able:

- to refer to agents,
- to create and destroy agents,
- to create new rules for newly created agents,
- to disable rules for existing agents, and
- to modify existing rules for existing agents.

Does anyone know of a system like that?
Thanks,

-- Russ Abbott
_____________________________________________
Professor, Computer Science
California State University, Los Angeles
Cell phone: 310-621-3805
o Check out my blog at http://russabbott.blogspot.com/
In reply to this post by Russ Abbott
Well, there are some ways of playing fast and loose with the metaphor. There are almost always easy, but computationally non-elegant, ways to simulate things like this. Remember, we have quotes because "rules" and "agents" are just two classes of agents with different structures.

Some options:
1) The "rules" can alter themselves over time, as they can be agents in a Darwinian algorithm or any other source of system-level change you want to impose.
2) The "rules" could accept instructions from the "agents" telling them how to change.
3) The "agents" could adjust their responses to commands given by the "rules", which effectively changes what the rule (now not in quotes) does.

To get some examples, let's start with a "rule" that says "when in a red patch, turn left". That is, in the starting conditions the "agent" tells the rule it is in a red patch, and the "rule" replies back "turn left":
1) Over time that particular "rule" could be deemed not useful and therefore done away with in some master way. It could either be replaced by a different "rule", or there could just no longer be a "rule" about what to do in red patches.
2) An "agent" in a red patch could for some reason no longer be able to turn left. When this happens, it could send a command to the "rule" telling the "rule" it needs to change, and the "rule" could randomly (or non-randomly) generate a new contingency.
3) In the same situation, an "agent" could simply modify itself to turn right instead; that is, when the command "turn left" is received through that "rule" (or perhaps from any "rule"), the "agent" now turns right. This is analogous to what happens at some point for children when "don't touch that" becomes "touch that". The parents persist in issuing the same command, but the rule (now not in quotes) has clearly changed.

Either way, if you are trying to answer a question, I think something like one of the above options is bound to work. If there is some higher reason you are trying to do something in a particular way, or you have reason to be worried about processor time, then it might not be exactly what you are after.
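The red-patch example can be mocked up in a few lines (hypothetical Python; "rule" and "agent" are just two kinds of objects here, and the sensation/command strings are the ones from the example):

```python
# The "rule" maps a sensation to a command.
rule = {"red patch": "turn left"}

class Agent:
    def __init__(self):
        # Option 3: an agent-side remapping of commands, so the agent
        # can "disobey" without the rule object ever changing.
        self.overrides = {}

    def act(self, sensation):
        command = rule.get(sensation)
        return self.overrides.get(command, command)

a = Agent()
before = a.act("red patch")            # obeys the rule as given

# Option 2 flavor would mutate the rule itself:
#     rule["red patch"] = "turn right"
# Option 3 leaves the rule alone and lets the agent remap the command:
a.overrides["turn left"] = "turn right"
after = a.act("red patch")
```

After the override, the "rule" object is unchanged but its effect is not, which is exactly the distinction option 3 is drawing.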
Eric

On Sun, Aug 23, 2009 05:18 PM, Russ Abbott <[hidden email]> wrote:

Eric Charles
Professional Student and Assistant Professor of Psychology
Penn State University
Altoona, PA 16601
My original request was for an ABM system in which rules were first class objects and could be constructed and modified dynamically. Although your discussion casually suggests that rules can be treated the same way as agents, you haven't mentioned a system in which that was the case. Which system would you use to implement your example? How, for example, can a rule alter itself over time? I'm not talking about systems in which a rule modifies a field in a fixed template. I'm talking about modifications that are more flexible.
Certainly there are many examples in which rule modifications occur within very limited domains. The various Prisoner's Dilemma systems in which the rules combine with each other come to mind. But the domain of PD rules is very limited.

Suppose you really wanted to do something along the lines that your example suggests. What sort of ABM system would you use? How could a rule "randomly (or non-randomly) generate a new contingency" in some way other than simply plugging new values into a fixed template? As I've said, that's not what I want to do.

If you know of an ABM system that has a built-in Genetic Programming capability for generating rules, that would be a good start. Do you know of any such system?

-- Russ

On Mon, Aug 24, 2009 at 11:10 AM, ERIC P. CHARLES <[hidden email]> wrote:
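For what "more than a fixed template" could mean in miniature, here is a genetic-programming-flavored sketch that grows rule conditions as random expression trees rather than filling slots (illustrative only: the sensor names and operators are invented, and a real GP system would add fitness evaluation, crossover, and mutation):

```python
import random

OPS = {"and": lambda a, b: a and b, "or": lambda a, b: a or b}
SENSORS = ["red", "wall_ahead", "low_fuel"]   # made-up percepts

def random_condition(depth, rng):
    """Grow a random boolean expression tree over sensor readings."""
    if depth == 0 or rng.random() < 0.3:
        return ("sensor", rng.choice(SENSORS))          # leaf
    op = rng.choice(list(OPS))                          # internal node
    return (op, random_condition(depth - 1, rng),
                random_condition(depth - 1, rng))

def evaluate(tree, percepts):
    """Interpret a condition tree against the agent's current percepts."""
    if tree[0] == "sensor":
        return percepts[tree[1]]
    return OPS[tree[0]](evaluate(tree[1], percepts),
                        evaluate(tree[2], percepts))

rng = random.Random(42)
condition = random_condition(3, rng)   # a condition no template anticipated
result = evaluate(condition, {"red": True, "wall_ahead": False,
                              "low_fuel": True})
```

Because the tree's shape is generated, not just its slot values, the space of possible rules is open-ended in a way a fixed template is not.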
Russ (and everyone else),
Just because it's what I know, I would do it in NetLogo. I'm not suggesting that NetLogo will do what you want, just that it can simulate doing what you want. Not knowing what you want to do, let's keep it general:

You start by making an "agent" with a list of things it can do, let's label them 1-1000, and a list of things it can sense, let's label them A-ZZZ. But there is a catch: the agent has no commands connecting the sensory list to the behaviors list; a different object must do that. The agent must query all the rules until it finds one that accepts its current input, and then the rule sends it a behavior code. (Note that any combination of inputs can be represented as a single signal or as several separate ones; it doesn't matter for these purposes.)

You then make several "rules", each of which receives a signal from an agent and outputs a behavior command. One rule might be "If input WFB, then behavior 134." Note, it doesn't matter how complicated the rule is; this general formula will still work. Any countable infinity of options can be re-presented using the natural numbers, so it is a useful simplification. Alternatively, imagine that each digit provides independent information and make the strings as long as you wish.

Now, to implement one of my suggestions you could use:
1) The "system level" solution: On an iterative basis, assess the benefit gained by individuals who accessed a given rule (e.g., turtles who accessed rule 4 gained 140 points on average, while turtles who accessed rule 5 only gained 2 points on average). This master assessor then removes or modifies rules that aren't up to snuff.
2) The "rule modified by agents" solution: Agents could have a third set of attributes; in addition to behaviors and sensations, they might have "rule changers". Let's label them from ! to ^%*. For example, command $% could tell the rule to select another behavior at random, while command *# could tell the rule to simply add 1 to the current behavior.
3) The "agents disobey" solution: Agents could, in the presence of certain sensations, modify their reactions to the behavior a given rule calls up in a permanent manner. This would require an attribute that kept track of which rules had been previously followed and what the agent had decided from that experience. For example, a given sensation may indicate that doing certain behaviors is impossible or unwise (you can't walk through a wall; you don't want to walk over a cliff); under these circumstances, if a rule said "go forward" the agent could permanently decide that if rule 89 ever says "go forward" I'm gonna "turn right" instead... where "go forward" = "54" and "turn right" = "834". In this case the object labeled "rule" is still the same, but only because the effect of the rule has been altered within the agent, which for metaphorical purposes should be sufficient.

Because of the countable-infinity thing, I'm not sure what kinds of thing a system like this couldn't simulate. Any combination of inputs and outputs that a rule might give can be simulated in this way. If you want to have 200 "sensory channels" and 200 "limbs" that can do the various behaviors in the most subtle ways imaginable, it would still work in essentially the same way, or could be simulated in exactly the same way.

Other complications are easy to incorporate: For example, you could have a rule that responded to a large set of inputs, and have those inputs change... or you could have rules link themselves together to change simultaneously... or you could have the agent send several inputs to the same rule by making it less accurate in detection. You could have rules that delay sending the behavior command... or you could just have a delay built into certain behavior commands.

Eric

P.S. I'm sorry for the bandwidth, all, but I am continuing to communicate through the list because I am hoping someone far more experienced than I will chime in if I am giving poor advice.
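That scheme reduces to a few lines of pseudo-implementation (hypothetical Python standing in for NetLogo; the codes "WFB", 134, "$%", and "*#" are the ones from the example above):

```python
import random

# "Rules" are objects mapping a sensory code to a behavior code (1-1000).
rules = [{"input": "WFB", "behavior": 134},
         {"input": "AAQ", "behavior": 7}]

def consult(sensory_code):
    # The agent queries the rules until one accepts its current input,
    # and that rule sends back a behavior code.
    for rule in rules:
        if rule["input"] == sensory_code:
            return rule["behavior"]
    return None

def rule_changer(command, rule, rng):
    # Suggestion 2: agents send the rule a change command.
    if command == "$%":                   # pick a new behavior at random
        rule["behavior"] = rng.randrange(1, 1001)
    elif command == "*#":                 # add 1 to the current behavior
        rule["behavior"] += 1

b = consult("WFB")                        # -> 134
rule_changer("*#", rules[0], random.Random(0))
b2 = consult("WFB")                       # -> 135, the rule has changed
```

The point of the coding trick is visible here: because behaviors are just numbers, "modifying a rule" is an ordinary operation on ordinary data.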
On Sun, Aug 23, 2009 10:32 PM, Russ Abbott <[hidden email]> wrote:

Eric Charles
Professional Student and Assistant Professor of Psychology
Penn State University
Altoona, PA 16601
Eric,
You said, "Not knowing what you want to do ...".

It's clear from the rest of your message that you're absolutely right. You have no idea what I want to do.

What amazes me is that you nevertheless seem to think that you can tell me the best way for me to do it. How can you be so arrogant?

Perhaps that's also what went wrong in our discussion of consciousness a while ago.

-- Russ

On Mon, Aug 24, 2009 at 1:58 PM, ERIC P. CHARLES <[hidden email]> wrote:
Russ,
What the hell is your problem? I said explicitly that you may be looking for a specific way to do things, rather than a way to solve a particular problem. I also indicated that even if I understood what you were asking for perfectly, my solution was non-ideal in many ways. No indication was made that there was anything "best" about what I was offering, quite the opposite. I also indicated repeatedly that I was open to other, more experienced people on the list telling me I was on the wrong track. For Christ's sake, I'm a psychologist who dabbles in simulation, enjoys brainstorming, and likes to make small contributions to other people's projects.

MY emails were generated directly by YOUR emails. For example, the system I propose would simulate one in which rules are able:

- to refer to agents,
- to create and destroy agents,
- to create new rules for newly created agents,
- to disable rules for existing agents, and
- to modify existing rules for existing agents,

as that is a direct quote from your email. I assure you I am not acting out of arrogance, but out of an intention to help.

In the future, please don't ask me (or anyone else) to elaborate on something if you know in advance that the elaboration will not be what you are looking for. My last email was composed specifically in response to your indicated desire for further information. If I misread your request as sincere when it was intended as sarcastic, I apologize. That said, I am not a quick writer; I take these emails seriously; I recheck them several times before I send them (this one has taken over an hour in itself); and it is a waste of my time to fulfill such requests only to be insulted afterward. My classes start tomorrow, my syllabi are not prepared, and yet I have dedicated several hours of my weekend to this. If what I said isn't useful, you can say that, but you don't need to be rude about it.

Seriously, what the hell is your problem?

Sincerely,
Eric

On Mon, Aug 24, 2009 12:37 AM, Russ Abbott <[hidden email]> wrote:
In reply to this post by Russ Abbott
Whew, good thing I didn't make the NetLogo response, its exactly what
I was thinking of though. It would be quite easily done, I believe, by a competent NetLogo programmer. We have several here on the list .. they could let us know if Eric and I are wrong. Russ, there is no reason to be so rude. It makes you appear a pouting ass. -- Owen On Aug 23, 2009, at 10:37 PM, Russ Abbott wrote: > Eric, > > You said, "Not knowing what you want to do ...". > > It's clear from the rest of your message that you're absolutely > right. You have no idea what I want to do. > > What amazes me is that you nevertheless seem to think that you can > tell me the best way for me to do it. How can you be so arrogant? > > Perhaps that's also what went wrong in our discussion of > consciousness a while ago. > > -- Russ > > > > On Mon, Aug 24, 2009 at 1:58 PM, ERIC P. CHARLES <[hidden email]> wrote: > Russ (and everyone else), > Just because its what I know, I would do it in NetLogo. I'm not > suggesting that NetLogo will do what you want, just that it can > simulate doing what you want. Not knowing what you want to do, lets > keep it general: > > You start by making an "agent" with a list of things it can do, lets > label them 1-1000, and a list of things it can sense, lets label > them A-ZZZ. But there is a catch, the agent has no commands > connecting the sensory list to the behaviors list, a different > object must do that. The agent must query all the rules until it > finds one that accepts its current input, and then the rule sends it > a behavior code. (Note, that any combination of inputs can be > represented as a single signal or as several separate ones, it > doesn't matter for theses purposes) > > You then make several "rules", each of which receives a signal from > an agent and outputs a behavior command. One rule might be "If input > WFB, then behavior 134." Note, it doesn't matter how complicated the > rule is, this general formula will still work. 
Any countable > infinity of options can be re-presented using the natural numbers, > so it is a useful simplification. Alternatively, imagine that each > digit provides independent information and make the strings as long > as you wish. > > Now, to implement one of my suggestions you could use: > 1) The "system level" solution: On an iterative basis asses the > benefit gained by individuals who accessed a given rule (i.e. > turtles who accessed rule 4 gained 140 points on average, while > turtles who accessed rule 5 only gained 2 points on average). This > master assessor then removes or modifies rules that aren't up to > snuff. > > 2) The "rule modified by agents" solution: Agents could have a third > set of attributes, in addition to behaviors and sensations they > might have "rule changers". Let's label them from ! to ^%*. For > example, command $% could tell the rule to select another behavior > at random, while command *# could tell the rule to simply add 1 to > the current behavior. > > 3) The "agents disobey" solution: Agents could in the presence of > certain sensations modify their reactions to the behavior a given > rule calls up in a permanent manner. This would require an attribute > that kept track of which rules had been previously followed and what > the agent had decided from that experience. For example, a given > sensation may indicate that doing certain behaviors is impossible or > unwise (you can't walk through a wall, you don't want to walk over a > cliff); under these circumstance, if a rule said "go forward" the > agent could permanently decide that if rule 89 ever says "go > forward" I'm gonna "turn right" instead.... where "go forward" = > "54" and "turn right" = "834". In this case the object labeled > "rule" is still the same, but only because the effect of the rule > has been altered within the agent, which for metaphorical purposes > should be sufficient. 
> > Because of the countable-infinity thing, I'm not sure what kinds of things a system like this couldn't simulate. Any combination of inputs and outputs that a rule might give can be simulated in this way. If you want to have 200 "sensory channels" and 200 "limbs" that can do the various behaviors in the most subtle ways imaginable, it would still work in essentially the same way, or could be simulated in exactly the same way. > > Other complications are easy to incorporate: for example, you could have a rule that responded to a large set of inputs, and have those inputs change... or you could have rules link themselves together to change simultaneously... or you could have the agent send several inputs to the same rule by making it less accurate in detection. You could have rules that delay sending the behavior command... or you could just have a delay built into certain behavior commands. > > Eric > > P.S. I'm sorry for the bandwidth, all, but I am continuing to communicate through the list because I am hoping someone far more experienced than I will chime in if I am giving poor advice. > > > > On Sun, Aug 23, 2009 10:32 PM, Russ Abbott <[hidden email]> wrote: > My original request was for an ABM system in which rules were first-class objects and could be constructed and modified dynamically. Although your discussion casually suggests that rules can be treated the same way as agents, you haven't mentioned a system in which that was the case. Which system would you use to implement your example? How, for example, can a rule alter itself over time? I'm not talking about systems in which a rule modifies a field in a fixed template. I'm talking about modifications that are more flexible. > > Certainly there are many examples in which rule modifications occur within very limited domains. The various Prisoner's Dilemma systems in which the rules combine with each other come to mind. 
But the domain > of PD rules is very limited. > > Suppose you really wanted to do something along the lines that your > example suggests. What sort of ABM system would you use? How could > a rule "randomly (or non-randomly) generate a new contingency" in > some way other than simply plugging new values into a fixed > template? As I've said, that's not what I want to do. > > If you know of an ABM system that has a built-in Genetic Programming > capability for generating rules, that would be a good start. Do you > know of any such system? > > -- Russ > > > > > On Mon, Aug 24, 2009 at 11:10 AM, ERIC P. CHARLES <[hidden email]> > wrote: > Well, there are some ways of playing fast and loose with the > metaphor. There are almost always easy, but computationally non- > elegant, ways to simulate things like this. Remember, we have quotes > because "rules" and "agents" are just two classes of agents with > different structures. > > Some options: > 1) The "rules" can alter themselves over time, as they can be agents > in a Darwinian algorithm or any other source of system level change > you want to impose. > 2) The "rules" could accept instructions from the "agents" telling > them how to change. > 3) The "agents" could adjust their responses to commands given by > the "rules" which effectively changes what the rule (now not in > quotes) does. > > To get some examples, let's start with a "rule" that says "when in a > red patch, turn left". That is, in the starting conditions the > "agent" tells the rule it is in a red patch, the "rule" replies back > "turn left": > 1) Over time that particular "rule" could be deemed not-useful and > therefore done away with in some master way. It could either be > replaced by a different "rule", or there could just no longer be a > "rule" about what to do in red patches. > 2) An "agent" in a red patch could for some reason no longer be able > to turn left. 
When this happens, it could send a command to the "rule" telling the "rule" it needs to change, and the "rule" could randomly (or non-randomly) generate a new contingency. > 3) In the same situation, an "agent" could simply modify itself to turn right instead; that is, when the command "turn left" is received through that "rule" (or perhaps from any "rule"), the "agent" now turns right. This is analogous to what happens at some point for children when "don't touch that" becomes "touch that". The parents persist in issuing the same command, but the rule (now not in quotes) has clearly changed. > > Either way, if you are trying to answer a question, I think something like one of the above options is bound to work. If there is some higher reason you are trying to do something in a particular way, or you have reason to be worried about processor time, then it might not be exactly what you are after. > > Eric > > > > On Sun, Aug 23, 2009 05:18 PM, Russ Abbott <[hidden email]> wrote: > Thanks, Eric. It doesn't sound like your suggestion will do what I want. I want to be able to create new rules dynamically, as in rule evolution. As I understand your scheme, the set of rule-agents is fixed in advance. > > -- Russ > > > > On Sun, Aug 23, 2009 at 8:30 AM, ERIC P. CHARLES <[hidden email]> wrote: > Russ, > I'm probably just saying this out of ignorance, but... if you want to "really" do that, I'm not sure how to do so.... However, given that you are simulating anyway... if you want to simulate doing that, it seems straightforward. Pick any agent-based simulation program, create two classes of agents, call one class "rules" and the other "agents". Let individuals in the "rules" class do all sorts of things to individuals in the "agents" class (including controlling which other "rules" they accept commands from and how they respond to those commands). 
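[The red-patch example and option 3 above can be sketched in plain Python; the names (`Rule`, `overrides`, `"rule-89"`) are illustrative only. The rule object stays the same, but the agent permanently remaps what that rule's command does to it.]

```python
# Sketch of the "agents disobey" option: a "rule" replies "turn left"
# when told "red patch", but the "agent" keeps a permanent override
# table that remaps commands received through particular rules.

class Rule:
    def __init__(self, name, responses):
        self.name = name
        self.responses = responses   # sensation -> command mapping

    def command(self, sensation):
        return self.responses.get(sensation)

class Agent:
    def __init__(self, rules):
        self.rules = rules
        self.overrides = {}          # (rule name, command) -> replacement

    def act(self, sensation):
        for rule in self.rules:
            cmd = rule.command(sensation)
            if cmd is not None:
                # The rule object is unchanged; only its effect within
                # this agent may have been permanently altered.
                return self.overrides.get((rule.name, cmd), cmd)
        return None

red_rule = Rule("rule-89", {"red patch": "turn left"})
agent = Agent([red_rule])
print(agent.act("red patch"))        # -> turn left

# The agent decides: if rule-89 ever says "turn left", turn right instead.
agent.overrides[("rule-89", "turn left")] = "turn right"
print(agent.act("red patch"))        # -> turn right
```

This mirrors the "don't touch that" analogy in the message above: the parents (the rule object) keep issuing the same command, but the effective rule has changed inside the agent.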
> > Not the most elegant solution in the world, but it would likely be able to answer whatever question you want to answer (assuming it is a question answering task you wish to engage in), with minimum time spent banging your head against the wall programming it. My biases (and lack of programming brilliance) typically lead me to find the simplest way to simulate what I want, even if that means the computers need to run a little longer. I assume there is some reason this would not be satisfactory? > > Eric > > Eric Charles > Professional Student and > Assistant Professor of Psychology > Penn State University > Altoona, PA 16601 ============================================================ FRIAM Applied Complexity Group listserv Meets Fridays 9a-11:30 at cafe at St. John's College lectures, archives, unsubscribe, maps at http://www.friam.org |
Russ, what's the matter with you? Why are you being so rude and evil?
I personally don't like arrogant people, and even worse are people who are both arrogant and ignorant. Eric's response was neither: he was polite and tried to be helpful. I can see nothing wrong with that. How can we know what you want to do if you don't tell us? It would perhaps be useful to know what kind of problem you want to solve, what system you want to simulate, and what the purpose of the whole simulation is. -J. ----- Original Message ----- From: Russ Abbott To: The Friday Morning Applied Complexity Coffee Group Sent: Monday, August 24, 2009 6:37 AM Subject: Re: [FRIAM] Rule driven agent-based modeling systems ============================================================ FRIAM Applied Complexity Group listserv Meets Fridays 9a-11:30 at cafe at St. John's College lectures, archives, unsubscribe, maps at http://www.friam.org |