Rule driven agent-based modeling systems


Re: Rule driven agent-based modeling systems

Douglas Roberts-2
Welcome to alt.friam.flames! (You geezers know what I'm talking about.)

;-}

FWIW, I was thinking along similar lines, i.e. that a hybrid ABM / rule-based simulation framework would be amenable to implementation in C++.

I was also going to make an observation that people (here on this list, as well as elsewhere) seem to be constantly in search of some kind of 'one size fits all' simulation development environment, and that such efforts always fail because they are too heavy-weight, too cumbersome, and too constraining.

But I won't make that comment now.

--Doug

On Mon, Aug 24, 2009 at 9:43 AM, Owen Densmore <[hidden email]> wrote:
Whew, good thing I didn't make the NetLogo response; it's exactly what I was thinking of, though.  It could be done quite easily, I believe, by a competent NetLogo programmer.

We have several here on the list .. they could let us know if Eric and I are wrong.

Russ, there is no reason to be so rude.  It makes you appear a pouting ass.

   -- Owen



On Aug 23, 2009, at 10:37 PM, Russ Abbott wrote:

Eric,

You said, "Not knowing what you want to do ...".

It's clear from the rest of your message that you're absolutely right. You have no idea what I want to do.

What amazes me is that you nevertheless seem to think that you can tell me the best way for me to do it. How can you be so arrogant?

Perhaps that's also what went wrong in our discussion of consciousness a while ago.

-- Russ



On Mon, Aug 24, 2009 at 1:58 PM, ERIC P. CHARLES <[hidden email]> wrote:
Russ (and everyone else),
Just because it's what I know, I would do it in NetLogo. I'm not suggesting that NetLogo will do what you want, just that it can simulate doing what you want. Not knowing what you want to do, let's keep it general:

You start by making an "agent" with a list of things it can do, let's label them 1-1000, and a list of things it can sense, let's label them A-ZZZ. But there is a catch: the agent has no commands connecting the sensory list to the behaviors list; a different object must do that. The agent must query all the rules until it finds one that accepts its current input, and then the rule sends it a behavior code. (Note that any combination of inputs can be represented as a single signal or as several separate ones; it doesn't matter for these purposes.)

You then make several "rules", each of which receives a signal from an agent and outputs a behavior command. One rule might be "If input WFB, then behavior 134." Note that it doesn't matter how complicated the rule is; this general formula will still work. Any countable infinity of options can be represented using the natural numbers, so it is a useful simplification. Alternatively, imagine that each digit provides independent information and make the strings as long as you wish.

Now, to implement one of my suggestions you could use:
1) The "system level" solution: On an iterative basis asses the benefit gained by individuals who accessed a given rule (i.e. turtles who accessed rule 4 gained 140 points on average, while turtles who accessed rule 5 only gained 2 points on average). This master assessor then removes or modifies rules that aren't up to snuff.

2) The "rule modified by agents" solution: Agents could have a third set of attributes, in addition to behaviors and sensations they might have "rule changers". Let's label them from ! to ^%*. For example, command $% could tell the rule to select another behavior at random, while command *# could tell the rule to simply add 1 to the current behavior.

3) The "agents disobey" solution: Agents could in the presence of certain sensations modify their reactions to the behavior a given rule calls up in a permanent manner. This would require an attribute that kept track of which rules had been previously followed and what the agent had decided from that experience. For example, a given sensation may indicate that doing certain behaviors is impossible or unwise (you can't walk through a wall, you don't want to walk over a cliff); under these circumstance, if a rule said "go forward" the agent could permanently decide that if rule 89 ever says "go forward" I'm gonna "turn right" instead.... where "go forward" = "54" and "turn right" = "834". In this case the object labeled "rule" is still the same, but only because the effect of the rule has been altered within the agent, which for metaphorical purposes should be sufficient.

Because of the countable-infinity thing, I'm not sure what kinds of things a system like this couldn't simulate. Any combination of inputs and outputs that a rule might give can be simulated in this way. If you want to have 200 "sensory channels" and 200 "limbs" that can do the various behaviors in the most subtle ways imaginable, it would still work in essentially the same way, or could be simulated in exactly the same way.

Other complications are easy to incorporate: For example, you could have a rule that responded to a large set of inputs, and have those inputs change... or you could have rules link themselves together to change simultaneously... or you could have the agent send several inputs to the same rule by making it less accurate in detection. You could have rules that delay sending the behavior command... or you could just have a delay built into certain behavior commands.
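A minimal NetLogo sketch of the scheme described above -- the breed names, numeric code ranges, and stand-in sensing are illustrative only, not from any existing model:

breed [walkers walker]
breed [rules rule]

walkers-own [percept]           ;; the code for what the walker currently senses
rules-own  [accepts behavior]   ;; the input code this rule accepts, and the behavior code it returns

to setup
  clear-all
  create-rules 20 [
    hide-turtle
    set accepts  random 100     ;; sensory codes, here just 0-99
    set behavior 1 + random 4   ;; behavior codes, here just 1-4
  ]
  create-walkers 50 [ setxy random-xcor random-ycor ]
  reset-ticks
end

to go
  ask walkers [
    set percept random 100                  ;; stand-in for real sensing
    let p percept
    let r one-of rules with [accepts = p]   ;; query the rules for one that accepts this input
    if r != nobody [ do-behavior [behavior] of r ]
  ]
  tick
end

to do-behavior [code]           ;; decode a behavior code into an action
  if code = 1 [ fd 1 ]
  if code = 2 [ rt 90 ]
  if code = 3 [ lt 90 ]
  if code = 4 [ fd -1 ]
end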


Eric

P.S. I'm sorry for the bandwidth, all, but I am continuing to communicate through the list because I am hoping someone far more experienced than I will chime in if I am giving poor advice.




On Sun, Aug 23, 2009 10:32 PM, Russ Abbott <[hidden email]> wrote:
My original request was for an ABM system in which rules were first class objects and could be constructed and modified dynamically. Although your discussion casually suggests that rules can be treated the same way as agents, you haven't mentioned a system in which that was the case. Which system would you use to implement your example? How, for example, can a rule alter itself over time? I'm not talking about systems in which a rule modifies a field in a fixed template. I'm talking about modifications that are more flexible.

Certainly there are many examples in which rule modifications occur within very limited domains. The various Prisoner's Dilemma systems in which the rules combine with each other come to mind. But the domain of PD rules is very limited.

Suppose you really wanted to do something along the lines that your example suggests.  What sort of ABM system would you use? How could a rule "randomly (or non-randomly) generate a new contingency" in some way other than simply plugging new values into a fixed template? As I've said, that's not what I want to do.

If you know of an ABM system that has a built-in Genetic Programming capability for generating rules, that would be a good start. Do you know of any such system?

-- Russ




On Mon, Aug 24, 2009 at 11:10 AM, ERIC P. CHARLES <[hidden email]> wrote:
Well, there are some ways of playing fast and loose with the metaphor. There are almost always easy, but computationally non-elegant, ways to simulate things like this. Remember, we have quotes because "rules" and "agents" are just two classes of agents with different structures.

Some options:
1) The "rules" can alter themselves over time, as they can be agents in a Darwinian algorithm or any other source of system level change you want to impose.
2) The "rules" could accept instructions from the "agents" telling them how to change.
3) The "agents" could adjust their responses to commands given by the "rules" which effectively changes what the rule (now not in quotes) does.

To get some examples, let's start with a "rule" that says "when in a red patch, turn left". That is, in the starting conditions the "agent" tells the rule it is in a red patch, the "rule" replies back "turn left":
1) Over time that particular "rule" could be deemed not-useful and therefore done away with in some master way. It could either be replaced by a different "rule", or there could just no longer be a "rule" about what to do in red patches.
2) An "agent" in a red patch could for some reason no longer be able to turn left. When this happens, it could send a command to the "rule" telling the "rule" it needs to change, and the "rule" could randomly (or non-randomly) generate a new contingency.
3) In the same situation, an "agent" could simply modify itself to turn right instead; that is, when the command "turn left" is received through that "rule" (or perhaps from any "rule"), the "agent" now turns right. This is analogous to what happens at some point for children when "don't touch that" becomes "touch that". The parents persist in issuing the same command, but the rule (now not in quotes) has clearly changed.

Either way, if you are trying to answer a question, I think something like one of the above options is bound to work. If there is some higher reason you are trying to do something in a particular way, or you have reason to be worried about processor time, then it might not be exactly what you are after.
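A small NetLogo sketch of the third option, the agent-side override for the red-patch example -- the command strings and variable name are made up for illustration:

turtles-own [left-override]    ;; "" while the agent still obeys "turn left", otherwise its private replacement
                               ;; (initialize left-override to "" when the turtle is created)

to obey [command]              ;; turtle procedure: carry out a command received from a "rule"
  if command = "turn left" and left-override != "" [ set command left-override ]
  if command = "turn left"  [ lt 90 ]
  if command = "turn right" [ rt 90 ]
  if command = "go forward" [ fd 1 ]
end

;; the "rule" keeps answering "turn left" for red patches, but once an agent decides to disobey:
;;   set left-override "turn right"
;; every later "turn left" it receives is quietly carried out as a right turn.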

Eric



On Sun, Aug 23, 2009 05:18 PM, Russ Abbott <[hidden email]> wrote:
Thanks Eric. It doesn't sound like your suggestion will do what I want. I want to be able to create new rules dynamically as in rule evolution. As I understand your scheme, the set of rule-agents is fixed in advance.

-- Russ



On Sun, Aug 23, 2009 at 8:30 AM, ERIC P. CHARLES <[hidden email]> wrote:
Russ,
I'm probably just saying this out of ignorance, but... If you want to "really" do that, I'm not sure how to do so.... However, given that you are simulating anyway... If you want to simulate doing that, it seems straightforward. Pick any agent-based simulation program, create two classes of agents, call one class "rules" and the other "agents". Let individuals in the "rules" class do all sorts of things to individuals in the "agents" class (including controlling which other "rules" they accept commands from and how they respond to those commands).

Not the most elegant solution in the world, but it would likely be able to answer whatever question you want to answer (assuming it is a question answering task you wish to engage in), with minimum time spent banging your head against the wall programming it. My biases (and lack of programming brilliance) typically lead me to find the simplest way to simulate what I want, even if that means the computers need to run a little longer. I assume there is some reason this would not be satisfactory?

Eric




On Sat, Aug 22, 2009 11:13 PM, Russ Abbott <[hidden email]> wrote:
Hi,

I'm interested in developing a model that uses rule-driven agents. I would like the agent rules to be condition-action rules, i.e., similar to the sorts of rules one finds in forward-chaining blackboard systems. In addition, I would like both the agents and the rules themselves to be first-class objects. In other words, the rules should be able:

       • to refer to agents,
       • to create and destroy agents,
       • to create new rules for newly created agents,
       • to disable rules for existing agents, and
       • to modify existing rules for existing agents.
Does anyone know of a system like that?
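For concreteness, a rough NetLogo sketch of that wish list, treating rules as a breed of agent whose condition and action are strings -- all names here are hypothetical, and this is only one possible shape for it:

breed [rules rule]
rules-own [condition action active?]

to spawn-rule [new-condition new-action]   ;; observer procedure: create a new rule at run time
  create-rules 1 [
    set condition new-condition            ;; e.g. "pcolor = red"
    set action    new-action               ;; e.g. "lt 90 fd 1"
    set active?   true
    hide-turtle
  ]
end

to-report fires-for? [agent]               ;; rule procedure: does my condition hold for this agent?
  report [runresult [condition] of myself] of agent
end

to apply-rules                             ;; agent procedure: find an applicable active rule and obey it
  let me self
  let r one-of rules with [active? and fires-for? me]
  if r != nobody [ run [action] of r ]
end

;; because rules are ordinary agents, other code (including other rules) can do things like:
;;   ask rule 3 [ set active? false ]                        ;; disable a rule
;;   ask rule 3 [ set action "rt 90 fd 2" ]                  ;; modify a rule
;;   spawn-rule "any? other turtles in-radius 1" "hatch 1"   ;; add a rule that creates agents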

-- Russ
Eric Charles

Professional Student and
Assistant Professor of Psychology
Penn State University
Altoona, PA 16601




Re: Rule driven agent-based modeling systems

Owen Densmore
What surprised me about the anger was that I thought the thread was quite successful:
- A member asks for advice.
- A large number of responses point out several possibilities, and give an overview of the ABM world
- A sensible suggestion on a DIY approach was made
- OP goes non-linear.

Did I miss anything?

    -- Owen


On Aug 24, 2009, at 2:01 PM, Douglas Roberts wrote:

[...]

Re: Rule driven agent-based modeling systems

Douglas Roberts-2
Perhaps you overlooked the stochastic nature of the human-agent temperament.

draw_random_fury_factor (-5, 9000);

--Doug

On Mon, Aug 24, 2009 at 4:15 PM, Owen Densmore <[hidden email]> wrote:
[...]

Did I miss anything?

    -- Owen


--
Doug Roberts
[hidden email]
[hidden email]
505-455-7333 - Office
505-670-8195 - Cell


Re: Rule driven agent-based modeling systems

Marcus G. Daniels
In reply to this post by Owen Densmore

> In this case the object labeled "rule" is still the same, but only because the effect of the rule has been altered within the agent, which for metaphorical purposes should be sufficient.

I'd say it comes down to whether or not predicate/action pairs can be defined on the fly.   So long as there is a way to make new functions that test for things and that can also describe new states of the world (of which one part is more predicate/action pairs), then it should work fine for hybrid genetic programming / ABM.   I'd put this under the general category of `rewriting systems'.

There is an important practical difference between, say, forking the GCC compiler every time a variant agent is proposed, versus having lightweight just-in-time native code compilation from first-class programming language objects.  You can crudely approximate the latter with more and more ad-hoc hacks (like you mention) in almost any kind of programming or modeling environment, but why not use tools well suited to the job?   In the end an ad-hoc interpreter will be clumsy and slow compared to the work of programming language implementors who spend years on design, tuning and optimization.
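In NetLogo terms, a rough sketch of predicate/action pairs assembled on the fly -- exactly the kind of ad-hoc string interpretation Marcus is contrasting with real compilation, and all the strings below are made up for illustration:

globals [pairs]                       ;; a list of [predicate-string action-string] pairs

to setup-pairs
  set pairs []
  add-pair "pcolor = red" "rt 90"
  ;; a pair synthesized at run time from smaller pieces:
  add-pair (word "count turtles in-radius " (1 + random 5) " > 3") "fd 1"
end

to add-pair [pred act]
  set pairs lput (list pred act) pairs
end

to act-on-pairs                       ;; turtle procedure: run every action whose predicate currently holds
  let i 0
  while [i < length pairs] [
    let p item i pairs
    if runresult (first p) [ run (last p) ]
    set i i + 1
  ]
end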

Marcus


Re: Rule driven agent-based modeling systems

Russ Abbott
I think Marcus gets to the problem. I want agents that can create new agents that have new functionality -- or equivalently modify their own functionality in new ways. I don't see how NetLogo lets you do that. You need the ability to manipulate the rules themselves. Just creating a class of agents that you call rules doesn't do that. It doesn't provide a way to create new functionality. 

As Marcus said, think of it as having an agent with a built-in Genetic Programming system that it can use to generate and test possible rules.  When it finds good ones, it either creates a new agent with those rules or it replaces its own rules with that new rule set. I don't see how NetLogo lets you do that without building an awful lot of new stuff.

-- Russ



On Mon, Aug 24, 2009 at 4:15 PM, Marcus G. Daniels <[hidden email]> wrote:

[...]

Re: Rule driven agent-based modeling systems

Owen Densmore
On Aug 24, 2009, at 6:18 PM, Russ Abbott wrote:

> I think Marcus gets to the problem. I want agents that can create new agents that have new functionality -- or equivalently modify their own functionality in new ways.

Well, will table-based functionality work, i.e. a list of behavior parameters?

If not, will code snippets work?  NetLogo has an eval function (run & runresult) so dynamically created code snippets as behavior would work.

> I don't see how NetLogo lets you do that.

See above.

> You need the ability to manipulate the rules themselves.

Well, both the above can do that.

> Just creating a class of agents that you call rules doesn't do that. It doesn't provide a way to create new functionality.

Agreed, at least to a point.  Agents are just objects.  And it's easy to embed genetic algorithms in them.  I've got a TSP in NetLogo that uses the algorithm in the Modern Heuristics book.  The list probably has a hundred others!

> As Marcus said, think of it as having an agent with a built-in Genetic Programming system that it can use to generate and test possible rules.

OK, GP is a different deal.  Nonetheless, with the eval function in NetLogo, I believe this is possible.

> When it finds good ones, it either creates a new agent with those rules or it replaces its own rules with that new rule set. I don't see how NetLogo lets you do that without building an awful lot of new stuff.

This is a fairly big/detailed conversation.

What are the performance requirements, for example?  Redfish found that there are ranges of numbers of agents that NetLogo does fine with, but when we got into the 100,000+ range, we had to move to Processing and roll our own agent classes.  Hopefully you are not in Doug Roberts' world, needing a room full of clusters, but if so, he's the expert on the list.

How about other agent systems?  We used Repast for years, but found that there was very little it could do that NetLogo could not.  It might be worth another look after all these years.

How about libraries that already exist to make your life easier?  I think there are GP libraries in several languages.  That might bias you in a particular direction.

And possibly most important of all is community.  Is there a community that is working in this domain?  This was why I mentioned Miles, I thought his WedTech talk of a few years past sounded like it might do well in this domain.

   -- Owen
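A toy sketch of the generate-and-test idea Owen describes, using run and runresult -- the fitness expression and the string mutation are placeholders, and this mutates flat command strings rather than doing tree-based GP:

turtles-own [my-rule fitness]    ;; my-rule holds a command string, e.g. initialized to "fd 1"

to evolve-rules
  ask turtles [
    set fitness runresult "count turtles in-radius 2"   ;; stand-in fitness, evaluated from a string
  ]
  let best [my-rule] of max-one-of turtles [fitness]
  ask turtles [
    set my-rule mutate best        ;; everyone adopts a mutated copy of the current best rule
    run my-rule                    ;; and behaves according to it
  ]
end

to-report mutate [code]
  report (word code " rt " (random 21 - 10))   ;; crude mutation: append a small random turn
end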



Re: Rule driven agent-based modeling systems

Russ Abbott
Thanks Owen,

I didn't know that NetLogo had an eval function. There's nothing in the documentation under eval. In fact, the word eval doesn't appear at all, and evaluate appears only in a couple of comments in examples that evaluate expressions. But now that I know it's there, it's worth considering. There's still the problem of generating the strings to be evaluated though. Without syntactic support that's non-trivial. Writing and embedding a GP system is also non-trivial. (I'm not interested in a GA with a fixed rule template.)

In general I like NetLogo. It would be my first choice for relatively simple agent-based models. My main complaints about it are its lack of a real sub-classing and inheritance system and (probably most important) that one is limited to a single file of NetLogo code. That means that anything of any complexity quickly gets difficult to manage. Can you imagine dealing with the NetLogo file that includes NetLogo GP code as well as any other code?  I wouldn't want to start down that road.

I couldn't find the paper by Miles Parker that you mentioned. I think I copied the list on the email I sent him. So far, no reply.

I mentioned Drools in an earlier message.  Here's what I originally wrote about it.

From my initial Googling, the closest thing I could find was Drools. It's intended to provide a forward-chaining rule programming language for distributed systems (J2EE). It's open source from JBoss. Although it has nothing to do with agent-based modeling, it seems quite nice and quite general. It runs on top of, and is completely integrated with, Java.  An agent can simply be an object type. Its Template capability allows it to generate rules that are stored as Java objects, which seems to make it capable of manipulating rules dynamically. I'll have to look into that further. One can make it a simulation engine by keeping a tick counter in the workspace.


I still think it's the closest thing there is to what I want -- and that it can be used to create what I want more easily than anything else. In fact, with a little work I'd say that enhanced Drools can be to Java with respect to Agent-Based Modeling and Simulation what C++ is to C with respect to OO Programming. It will be a layer of abstraction that provides the right additional elements without getting in the way. (By the way, Jade doesn't do it for me. It supports agent communication across a distributed system, but it doesn't support rule-driven agents or agents interacting in a workspace. Its intended domain of application is really not ABM. It's intended to allow one to use agents in developing distributed systems.)

I had never heard of Drools a week ago. But having looked at the website, having scanned some of the documentation, and having talked to some of the people on their list, I'm quite impressed. It seems well designed and implemented. It's also both mature (version 5.0) and active.

It's amazing what's out there that one doesn't know about. I thought that perhaps the list could point me to some other hidden gems.

-- Russ



On Tue, Aug 25, 2009 at 11:44 AM, Owen Densmore <[hidden email]> wrote:
[...]

Re: Rule driven agent-based modeling systems

Stephen Guerin
On Aug 24, 2009, at 9:37 PM, Russ Abbott wrote:
> In general I like NetLogo. It would be my first choice for  
> relatively simple agent-based models. My main complaints about it  
> are its lack of a real sub-classing and inheritance system and  
> (probably most important) that one is limited to a single file of  
> NetLogo code. That means that anything of any complexity quickly  
> gets difficult to manage. Can you imagine dealing with the NetLogo  
> file that includes NetLogo GP code as well as any other code?  I  
> wouldn't want to start down that road.

A few versions back, NetLogo added the "__includes" keyword that lets you split up your code into multiple files. Very useful for reusable bits of code.
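For example (file names hypothetical), the top of the Code tab might look like this, with the listed .nls files holding the reusable procedures:

__includes ["gp-rules.nls" "helpers.nls"]

globals [generation]

to setup
  clear-all
  setup-rules       ;; defined in gp-rules.nls
  reset-ticks
end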

-S
--- -. .   ..-. .. ... ....   - .-- ---   ..-. .. ... ....
[hidden email]
(m) 505.577.5828  (o) 505.995.0206
redfish.com _ sfcomplex.org _ simtable.com _ lava3d.com









Re: Rule driven agent-based modeling systems

Russ Abbott
Something else I didn't know! Thanks

-- Russ



On Tue, Aug 25, 2009 at 1:54 PM, Stephen Guerin <[hidden email]> wrote:
[...]

Re: Rule driven agent-based modeling systems

Douglas Roberts-2
In reply to this post by Stephen Guerin
NN (NetLogo Newbies):

Embrace the power of C++

http://www.cplusplus.com/reference/std/valarray/valarray/apply/

Hints of LISP:

http://www.n-a-n-o.com/lisp/cmucl-tutorials/LISP-tutorial-20.html

but without that troublesome garbage collector thing always slowing the parade to a crawl.  Cue Marcus.

--Doug


On Mon, Aug 24, 2009 at 9:54 PM, Stephen Guerin <[hidden email]> wrote:
[...]

Re: Rule driven agent-based modeling systems

Marcus G. Daniels
In reply to this post by Owen Densmore
Owen Densmore wrote:
> Well, will table based functionality work?  .. i.e. a list of behavior
> parameters?
Indeed, consider the case of the interface between C++ objects and
JavaScript in Firefox or the similar system in Windows.   The common
data structure from both sides is a table of function pointers.  It's
literally the vtbl for the class on the C++ side.  Using operator
overloading on the C++ side, one can think of normal looking C++
expressions as just indexes into tables of functions.    `+' would call
the add function, and so forth, and the table entries could be swapped
for other implementations.  Meanwhile, on the JavaScript side [or insert
your favorite interpreted language here] the table is generated at
runtime, and function call table entries encapsulate the function name
in a closure, but actually always call the same interpret entry points.

Marcus
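
To make the table-of-function-pointers idea concrete, here is a small standalone C++ sketch in the same spirit. It is only an analogy to the Firefox/Windows machinery Marcus describes, not that code: operator overloading routes ordinary-looking expressions through a table whose entries can be swapped at runtime.

#include <iostream>

using BinOp = double (*)(double, double);

// The "table": one slot per operation; entries can be swapped at runtime.
struct OpTable {
    BinOp add;
    BinOp mul;
};

double plain_add(double a, double b)   { return a + b; }
double plain_mul(double a, double b)   { return a * b; }
double logging_add(double a, double b) { std::cout << "add called\n"; return a + b; }

// A value that carries a pointer to the table it dispatches through.
struct Num {
    double         value;
    const OpTable* ops;
};

// Operator overloading just indexes into the table.
Num operator+(const Num& a, const Num& b) { return {a.ops->add(a.value, b.value), a.ops}; }
Num operator*(const Num& a, const Num& b) { return {a.ops->mul(a.value, b.value), a.ops}; }

int main()
{
    OpTable table{plain_add, plain_mul};
    Num x{2.0, &table}, y{3.0, &table};

    std::cout << (x + y).value << '\n';   // dispatches through plain_add

    table.add = logging_add;              // swap the table entry in place
    std::cout << (x + y).value << '\n';   // same expression, new behavior
}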




Re: Rule driven agent-based modeling systems

Douglas Roberts-2
What could possibly go wrong?



On Mon, Aug 24, 2009 at 10:13 PM, Marcus G. Daniels <[hidden email]> wrote:
Owen Densmore wrote:
Well, will table based functionality work?  .. i.e. a list of behavior parameters?
Indeed, consider the case of the interface between C++ objects and JavaScript in Firefox or the similar system in Windows.   The common data structure from both sides is a table of function pointers.  It's literally the vtbl for the class on the C++ side.  Using operator overloading on the C++ side, one can think of normal looking C++ expressions as just indexes into tables of functions.    `+' would call the add function, and so forth, and the table entries could be swapped for other implementations.  Meanwhile, on the JavaScript side [or insert your favorite interpreted language here] the table is generated at runtime, and function call table entries encapsulate the function name in a closure, but actually always call the same interpret entry points.
Marcus






--
Doug Roberts
[hidden email]
[hidden email]
505-455-7333 - Office
505-670-8195 - Cell


Re: Rule driven agent-based modeling systems

Sunny Fugate
In reply to this post by Russ Abbott
Russ,

I think that the NASA-developed CLIPS rule-based system can be run in
the way that you describe to drive an ABM system.  Each agent could
encompass a fixed CLIPS executable with a malleable set of rules
and/or a rule interpreter in which new rules can be defined and old
rules modified.  The agents can communicate by sharing facts within a
parent interpreter or agent, as in a blackboard system; they can be
wrapped in another language that initiates or controls agent
execution; they can create and/or destroy themselves or each other;
and they can call external procedures written in a language of your
choice.  Everything can be done via the firing of rules within the
agents.  I've worked with implementations that performed most of these
feats (other than creating/destroying entirely new agents, or the
creation of non-template-based rules).  It seems that, given new input
into an agent from an outside source, the agent could use that input
to create new rules that incorporate the input patterns and
potentially escape the problem of fixed rule templates.  That is,
literal input patterns that cause rules to fire can be used as
first-class elements of new rule definitions.

There is also an object-oriented Java implementation of CLIPS (JESS),
at some cost in performance.  But the JESS implementation could
integrate easily with existing Java-based ABM systems such as NetLogo.

v/r,

Sunny
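
For readers who have not used CLIPS, here is a deliberately CLIPS-free C++ sketch of the shape Sunny describes; all names below are invented for illustration and nothing here is the CLIPS API. Each agent wraps its own malleable set of condition/action rules, agents share facts through a blackboard, and all behavior comes from rules firing.

#include <functional>
#include <iostream>
#include <set>
#include <string>
#include <vector>

using Blackboard = std::set<std::string>;   // shared facts visible to every agent

struct Rule {
    std::function<bool(const Blackboard&)> condition;   // when should it fire?
    std::function<void(Blackboard&)>       action;      // what does firing do?
};

struct Agent {
    std::string       name;
    std::vector<Rule> rules;    // ordinary data, so it can be changed at runtime

    void step(Blackboard& facts) {
        for (const Rule& r : rules)
            if (r.condition(facts)) r.action(facts);    // fire every matching rule
    }
};

int main()
{
    Blackboard facts{"smoke-detected"};

    Agent scout{"scout", {
        Rule{ [](const Blackboard& f) { return f.count("smoke-detected") > 0; },
              [](Blackboard& f) { f.insert("alarm-raised");
                                  std::cout << "scout raises alarm\n"; } } }};

    Agent pump{"pump", {}};
    // New rules are just new data, e.g. built in response to what another
    // agent asserted; nothing about the rule set is fixed at compile time.
    pump.rules.push_back(Rule{
        [](const Blackboard& f) { return f.count("alarm-raised") > 0; },
        [](Blackboard& f) { f.insert("water-on");
                            std::cout << "pump turns water on\n"; } });

    scout.step(facts);
    pump.step(facts);
}

Swapping this hand-rolled matcher for an embedded CLIPS or JESS engine mainly changes who does the pattern matching; the agent-owns-rules, facts-on-a-blackboard shape stays the same.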


On Aug 24, 2009, at 9:58 PM, [hidden email] wrote:

> Today's Topics:
>
>   1. Re: Rule driven agent-based modeling systems (Jochen Fromm)
>   2. Re: Rule driven agent-based modeling systems (Douglas Roberts)
>   3. Re: Rule driven agent-based modeling systems (Owen Densmore)
>   4. Re: Rule driven agent-based modeling systems (Douglas Roberts)
>   5. Re: Rule driven agent-based modeling systems (Marcus G. Daniels)
>   6. Re: Rule driven agent-based modeling systems (Russ Abbott)
>   7. Re: Rule driven agent-based modeling systems (Owen Densmore)
>   8. Re: Rule driven agent-based modeling systems (Russ Abbott)
>   9. Re: Rule driven agent-based modeling systems (Stephen Guerin)
>  10. Re: Rule driven agent-based modeling systems (Russ Abbott)
>
>
> -J.
>
>
>
>
>
> From: Owen Densmore <[hidden email]>
> Date: August 24, 2009 4:15:15 PM MDT
> To: The Friday Morning Applied Complexity Coffee Group <[hidden email]
> >
> Subject: Re: [FRIAM] Rule driven agent-based modeling systems
> Reply-To: The Friday Morning Applied Complexity Coffee Group <[hidden email]
> >
>
>
> What surprised me about the anger was that I thought the thread was  
> quite successful:
> - A member asks for advice.
> - A large number of responses point out several possibilities, and  
> give an overview of the ABM world
> - A sensible suggestion on a DIY approach was made
> - OP goes non-linear.
>
> Did I miss anything?
>
>     -- Owen
>
>
>
>
>
>
> From: Douglas Roberts <[hidden email]>
> Date: August 24, 2009 4:25:16 PM MDT
> To: The Friday Morning Applied Complexity Coffee Group <[hidden email]
> >
> Subject: Re: [FRIAM] Rule driven agent-based modeling systems
> Reply-To: The Friday Morning Applied Complexity Coffee Group <[hidden email]
> >
>
>
> Perhaps you overlooked the stochastic nature of the human-agent  
> temperament.
>
> draw_random_fury_factor (-5, 9000);
>
> --Doug
>
>
>
>
>
> From: Marcus G. Daniels <[hidden email]>
>
>> In this case the object labeled "rule" is still the same, but only
>> because the effect of the rule has been altered within the agent,
>> which for metaphorical purposes should be sufficient.
>
> I'd say it comes down to whether or not predicate/action pairs can be
> defined on the fly.  So long as there is a way to make new functions
> that test for things and also can describe new states of the world (of
> which one part is more predicate/action pairs), then it should work
> fine for hybrid genetic programming / ABM.  I'd put this under the
> general category of `rewriting systems'.
>
> There is an important practical difference between, say, forking the
> GCC compiler every time a variant agent is proposed, versus having
> lightweight just-in-time native code compilation from first-class
> programming language objects.  You can crudely approximate the latter
> with more and more ad-hoc hacks (like you mention) in almost any kind
> of programming or modeling environment, but why not use tools well
> suited to the job?  In the end an ad-hoc interpreter will be clumsy
> and slow compared to the work of programming language implementors
> who spend years on design, tuning and optimization.
>
> Marcus
>
>
>
>
>
> From: Russ Abbott <[hidden email]>
> Date: August 24, 2009 6:18:39 PM MDT
> To: The Friday Morning Applied Complexity Coffee Group <[hidden email]
> >
> Subject: Re: [FRIAM] Rule driven agent-based modeling systems
> Reply-To: [hidden email], The Friday Morning Applied  
> Complexity Coffee Group <[hidden email]>
>
>
> I think Marcus gets to the problem. I want agents that can create  
> new agents that have new functionality -- or equivalently modify  
> their own functionality in new ways. I don't see how NetLogo lets  
> you do that. You need the ability to manipulate the rules  
> themselves. Just creating a class of agents that you call rules  
> doesn't do that. It doesn't provide a way to create new functionality.
>
> As Marcus said, think of it as having an agent with a built-in  
> Genetic Programming system that it can use to generate and test  
> possible rules.  When it finds good ones, it either creates a new  
> agent with those rules or it replaces its own rules with that new  
> rule set. I don't see how NetLogo lets you do that without building  
> an awful lot of new stuff.
>
> -- Russ
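
As a rough illustration of the exchange above, here is a toy generate-and-test loop over first-class predicate/action rules, sketched in C++ with invented names. It stands in for the genetic-programming idea Russ describes and is not any framework's API: rules are built on the fly from closures, scored on a toy task, and the best one is kept.

#include <cstdlib>
#include <functional>
#include <iostream>
#include <random>

struct Rule {
    std::function<bool(int)> predicate;   // when does the rule apply?
    std::function<int(int)>  action;      // what does it do to the state?
};

// Build a brand-new rule from randomly chosen parameters at runtime,
// the "predicate/action pairs defined on the fly" part.
Rule random_rule(std::mt19937& rng)
{
    std::uniform_int_distribution<int> threshold(0, 100), delta(-5, 5);
    int t = threshold(rng), d = delta(rng);
    return { [t](int s) { return s < t; },
             [d](int s) { return s + d; } };
}

// Score a rule on a toy task: starting from 0, how close does repeatedly
// applying it bring the state to a target of 50?
int score(const Rule& r)
{
    int state = 0;
    for (int step = 0; step < 100; ++step)
        if (r.predicate(state)) state = r.action(state);
    return -std::abs(state - 50);
}

int main()
{
    std::mt19937 rng(12345);
    Rule best = random_rule(rng);

    // Generate-and-test: propose variants, keep whichever scores best.
    for (int i = 0; i < 200; ++i) {
        Rule candidate = random_rule(rng);
        if (score(candidate) > score(best)) best = candidate;
    }
    std::cout << "best score found: " << score(best) << '\n';
}

A real GP system would evolve rule trees with crossover rather than drawing parameters at random, but the prerequisite Marcus names, predicates and actions constructible at runtime, is the same.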
>
>
>
>
>
>
>
> From: Owen Densmore <[hidden email]>
> Date: August 24, 2009 7:44:45 PM MDT
> To: [hidden email], The Friday Morning Applied Complexity  
> Coffee Group <[hidden email]>
> Subject: Re: [FRIAM] Rule driven agent-based modeling systems
> Reply-To: The Friday Morning Applied Complexity Coffee Group <[hidden email]
> >
>
>
> On Aug 24, 2009, at 6:18 PM, Russ Abbott wrote:
>
>> I think Marcus gets to the problem. I want agents that can create  
>> new agents that have new functionality -- or equivalently modify  
>> their own functionality in new ways.
>
> Well, will table based functionality work?  .. i.e. a list of  
> behavior parameters?
>
> If not, will code snippets work?  NetLogo has an eval function (run  
> & runresult) so dynamically created code snippets as behavior would  
> work.
>
>> I don't see how NetLogo lets you do that.
>
> See above.
>
>> You need the ability to manipulate the rules themselves.
>
> Well, both the above can do that.
>
>> Just creating a class of agents that you call rules doesn't do  
>> that. It doesn't provide a way to create new functionality.
>
> Agreed, at least to a point.  Agents are just objects.  And it's  
> easy to embed genetic algorithms in them.  I've got a TSP in NetLogo  
> that uses the algorithm in the Modern Heuristics book.  The list  
> probably has a hundred others!
>
>> As Marcus said, think of it as having an agent with a built-in  
>> Genetic Programming system that it can use to generate and test  
>> possible rules.
>
> OK, GP is a different deal.  Nonetheless, with the eval function  
> in NetLogo, I believe this is possible.
>
>> When it finds good ones, it either creates a new agent with those  
>> rules or it replaces its own rules with that new rule set. I don't  
>> see how NetLogo lets you do that without building an awful lot of  
>> new stuff.
>
> This is a fairly big/detailed conversation.
>
> What are the performance requirements, for example?  Redfish found  
> that there are ranges of number of agents that NetLogo does fine  
> with, but when we got in the 100,000+ range, we had to move to  
> Processing and roll our own agent classes.  Hopefully you are not in  
> Doug Roberts's world needing a room full of clusters, but if so, he's  
> the expert on the list.
>
> How about other agent systems?  We used Repast for years, but found  
> that there was very little it could do that NetLogo could not.  It  
> might be worth another look after all these years.
>
> How about libraries that already exist to make your life easier?  I  
> think there are GP libraries in several languages.  That might bias  
> you in a particular direction.
>
> And possibly most important of all is community.  Is there a  
> community that is working in this domain?  This was why I mentioned  
> Miles, I thought his WedTech talk of a few years past sounded like  
> it might do well in this domain.
>
>    -- Owen
>
>
>
>
> From: Russ Abbott <[hidden email]>
> Date: August 24, 2009 9:37:17 PM MDT
> To: Owen Densmore <[hidden email]>
> Cc: The Friday Morning Applied Complexity Coffee Group <[hidden email]
> >
> Subject: Re: [FRIAM] Rule driven agent-based modeling systems
> Reply-To: [hidden email], The Friday Morning Applied  
> Complexity Coffee Group <[hidden email]>
>
>
> Thanks Owen,
>
> I didn't know that NetLogo had an eval function. There's nothing in  
> the documentation under eval. In fact, the word eval doesn't appear  
> at all, and evaluate appears only in a couple of comments in  
> examples that evaluate expressions. But now that I know it's there,  
> it's worth considering. There's still the problem of generating the  
> strings to be evaluated though. Without syntactic support that's non-
> trivial. Writing and embedding a GP system is also non-trivial. (I'm  
> not interested in a GA with a fixed rule template.)
>
> In general I like NetLogo. It would be my first choice for  
> relatively simple agent-based models. My main complaints about it  
> are its lack of a real sub-classing and inheritance system and  
> (probably most important) that one is limited to a single file of  
> NetLogo code. That means that anything of any complexity quickly  
> gets difficult to manage. Can you imagine dealing with the NetLogo  
> file that includes NetLogo GP code as well as any other code?  I  
> wouldn't want to start down that road.
>
> I couldn't find the paper by Miles Parker that you mentioned. I  
> think I copied the list on the email I sent him. So far, no reply.
>
> I mentioned Drools in an early message.  Here's what I originally  
> wrote about it.
>
> From my initial Googling the closest I could find was Drools. It's  
> intended to provide a forward chaining rule programming language for  
> distributed systems (J2EE). It's open source from JBoss. Although it  
> has nothing to do with Agent-based modeling, it seems quite nice and  
> quite general. It runs on top of and is completely integrated with  
> Java.  An agent can simply be an object type. Its Template capability  
> allows it to generate rules that are stored as Java objects, which  
> seems to make it capable of manipulating rules dynamically. I'll  
> have to look into that further. One can make it a simulation engine  
> by keeping a tick counter in the workspace.
>
> I still think it's the closest thing there is to what I want -- and  
> that it can be used to create what I want more easily than anything  
> else. In fact, with a little work I'd say that enhanced Drools can  
> be to Java with respect to Agent-Based Modeling and Simulation what  
> C++ is to C with respect to OO Programming. It will be a layer of  
> abstraction that provides the right additional elements without  
> getting in the way. (By the way, Jade doesn't do it for me. It  
> supports agent communication across a distributed system, but it  
> doesn't support rule-driven agents or agents interacting in a  
> workspace. Its intended domain of application is really not ABM.  
> It's intended to allow one to use agents in developing distributed  
> systems.)
>
> I had never heard of Drools a week ago. But having looked at the  
> website, having scanned some of the documentation, and having talked  
> to some of the people on their list, I'm quite impressed. It seems  
> well designed and implemented. It's also both mature (version 5.0)  
> and active.
>
> It's amazing what's out there that one doesn't know about. I thought  
> that perhaps the list could point me to some other hidden gems.
>
> -- Russ
>
>
>
>
>
>
>
>
>
> A few versions back, NetLogo added the "__includes" keyword that lets
> you split up your code into multiple files. Very useful for reusable
> bits of code.
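
For instance, the main model file can be reduced to something like this (the .nls file names are invented for the example):

  __includes [ "gp-engine.nls" "rule-library.nls" ]

  to setup
    clear-all
    setup-rules      ;; defined in rule-library.nls
    reset-ticks
  end

so that the GP machinery and the rule definitions live in their own .nls files instead of one giant Code tab.
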
>
> -S
> --- -. .   ..-. .. ... ....   - .-- ---   ..-. .. ... ....
> [hidden email]
> (m) 505.577.5828  (o) 505.995.0206
> redfish.com _ sfcomplex.org _ simtable.com _ lava3d.com
>
>
>
>
>
>
>
>
>
>
>
>
> From: Russ Abbott <[hidden email]>
> Date: August 24, 2009 9:57:33 PM MDT
> To: Stephen Guerin <[hidden email]>
> Cc: The Friday Morning Applied Complexity Coffee Group <[hidden email]
> >
> Subject: Re: [FRIAM] Rule driven agent-based modeling systems
> Reply-To: [hidden email], The Friday Morning Applied  
> Complexity Coffee Group <[hidden email]>
>
>
> Something else I didn't know! Thanks
>
> -- Russ
>
>
>
> On Tue, Aug 25, 2009 at 1:54 PM, Stephen Guerin <[hidden email]
> > wrote:
> On Aug 24, 2009, at 9:37 PM, Russ Abbott wrote:
> In general I like NetLogo. It would be my first choice for  
> relatively simple agent-based models. My main complaints about it  
> are its lack of a real sub-classing and inheritance system and  
> (probably most important) that one is limited to a single file of  
> NetLogo code. That means that anything of any complexity quickly  
> gets difficult to manage. Can you imagine dealing with the NetLogo  
> file that includes NetLogo GP code as well as any other code?  I  
> wouldn't want to start down that road.
>
> A few versions back, Netlogo added the "__includes" keyword that  
> lets you split up your code into multiple files. Very useful for  
> reusable bits of code.
>
> -S
> --- -. .   ..-. .. ... ....   - .-- ---   ..-. .. ... ....
> [hidden email]
> (m) 505.577.5828  (o) 505.995.0206
> redfish.com _ sfcomplex.org _ simtable.com _ lava3d.com
>
>
>
>
>
>
>
>
>
>
> _______________________________________________
> Friam mailing list
> [hidden email]
> http://redfish.com/mailman/listinfo/friam_redfish.com


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org

Re: Rule driven agent-based modeling systems

Marcus G. Daniels
In reply to this post by Douglas Roberts-2
Douglas Roberts wrote:
> What could possibly go wrong?
>
>
Well, it's not very fast, for one thing.  Making every primitive operator
involve a blind function call means that ahead-of-time inlining and
planned use of CPU registers are out.    It doesn't make fast code
impossible, but it makes the problem harder.   At some point the extra
effort goes beyond what interpreted ABM toolkit folks will have the time
and energy to do themselves -- they just have to layer on other stuff
like Java, JavaScript or Python runtimes, and hope for the best.



============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org

Re: Rule driven agent-based modeling systems

Carl Tollander
In reply to this post by Russ Abbott
Would it be at all accurate to say you are looking for something akin to
an RNA-world?   More a regulatory and mixing n-category gumbo than
simple recombination of DNA deck chairs?   If so, one might look to
Caporale, Margulis, Nusslein-Volhard, Carroll, Edelman for more
biologically-inspired approaches.    Nevertheless, a "systems biology"
reducible to a computing language environment is certainly non-trivial,
though I have some hope it will sweep the others aside eventually.

I suspect we should look more closely at what we might mean by
"on-the-fly".  Sensitivity to world states defined as the presence or
absence of local dimensions and values those dimensions might take on
may not describe "on-the-fly" adequately.   Rather, we are looking to
world states as other regulatory systems or n-categories (topoi?),
themselves operating "on-the-fly".   I'm not at all sure that simple
rules and rule-rewrites are a viable path for describing such states.  

Carl


Russ Abbott wrote:

> I think Marcus gets to the problem. I want agents that can create new
> agents that have new functionality -- or equivalently modify their own
> functionality in new ways. I don't see how NetLogo lets you do that.
> You need the ability to manipulate the rules themselves. Just creating
> a class of agents that you call rules doesn't do that. It doesn't
> provide a way to create new functionality.
>
> As Marcus said, think of it as having an agent with a built-in Genetic
> Programming system that it can use to generate and test possible
> rules.  When it finds good ones, it either creates a new agent with
> those rules or it replaces its own rules with that new rule set. I
> don't see how NetLogo lets you do that without building an awful lot
> of new stuff.
>
> -- Russ
>
>
>
> On Mon, Aug 24, 2009 at 4:15 PM, Marcus G. Daniels
> <[hidden email] <mailto:[hidden email]>> wrote:
>
>
>     > In this case the object labeled "rule" is still the same, but
>     only because the effect of the rule has been altered within the
>     agent, which for metaphorical purposes should be sufficient.
>
>     I'd say it comes down to whether or not predicate/action pairs can
>     be defined on the fly.   So long as there is a way to make new
>     functions that test for things and also can describe new
>     states of the world (of which one part is more predicate/action
>     pairs), then it should work fine for hybrid genetic programming /
>     ABM.   I'd put this under the general category of `rewriting systems'.
>
>     There is an important practical difference between, say, forking
>     the GCC compiler every time a variant agent is proposed, versus
>     having lightweight just-in-time native code compilation from first
>     class programming language objects.  You can crudely approximate
>     the latter with more and more ad-hoc hacks (like you mention) in
>     almost any kind of programming or modeling environment, but why
>     not use tools well suited to the job?   In the end an ad-hoc
>     interpreter will be clumsy and slow compared to the work of
>     programming language implementors who spend years on design,
>     tuning and optimization.
>     Marcus
>
>
>     ============================================================
>     FRIAM Applied Complexity Group listserv
>     Meets Fridays 9a-11:30 at cafe at St. John's College
>     lectures, archives, unsubscribe, maps at http://www.friam.org
>
>
> ------------------------------------------------------------------------
>
> ============================================================
> FRIAM Applied Complexity Group listserv
> Meets Fridays 9a-11:30 at cafe at St. John's College
> lectures, archives, unsubscribe, maps at http://www.friam.org

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org

Re: Rule driven agent-based modeling systems

Marcus G. Daniels
Carl Tollander wrote:
> Rather, we are looking to world states as other regulatory systems or
> n-categories (topoi?), themselves operating "on-the-fly".   I'm not at
> all sure that simple rules and rule-rewrites are a viable path for
> describing such states.
Hmm, rewrites could draw from world objects involving many kinds of
subcomponents to form an array of other abstracted patterns to constrain
further polymorphic rewrites.    For example, first map various symptoms
to suggest a disease process, and then do rewrites on the basis of the
presence of that disease.   The approach of operating on multiple views
of the world seems more natural than working on it directly -- most of
which is irrelevant detail.
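
A toy NetLogo rendering of that two-stage idea (all names invented; the color change at the end is just a stand-in for a real rule rewrite):

  turtles-own [ symptoms disease ]

  to setup
    clear-all
    create-turtles 10 [
      set symptoms (list one-of ["fever" "cough"] one-of ["rash" "ache"])
      set disease "unknown"
    ]
  end

  to classify          ;; first pass: abstract raw symptoms into a disease label
    ask turtles [
      if (member? "fever" symptoms) and (member? "rash" symptoms)
        [ set disease "measles-like" ]
    ]
  end

  to rewrite-on-views  ;; later rewrites key on the abstraction, not on the raw world detail
    ask turtles with [ disease = "measles-like" ] [ set color red ]
  end

The rewrite rules match against the derived view (the disease label) rather than against the full, mostly irrelevant, world state.
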

Is this a more general topic than the current question?

Marcus

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org

Re: Rule driven agent-based modeling systems

Carl Tollander
Back around to Selection among Reductions.   I'm on board with that, so
long as we stay principled about the notion of what a constrained reduction
is, not just a choice of cartoons.

Somewhere along the line, though, the notion of rewrites got less
salient for me.

Yes, maybe too general, fair enough, but since it was Russ's question
originally....

Carl

Marcus G. Daniels wrote:

> Carl Tollander wrote:
>> Rather, we are looking to world states as other regulatory systems or
>> n-categories (topoi?), themselves operating "on-the-fly".   I'm not
>> at all sure that simple rules and rule-rewrites are a viable path for
>> describing such states.
> Hmm, rewrites could draw from world objects involving many kinds of
> subcomponents to form an array of other abstracted patterns to
> constrain further polymorphic rewrites.    For example, first map
> various symptoms to suggest a disease process, and then do rewrites on
> the basis of the presence of that disease.   The approach of operating
> on multiple views of the world seems more natural than working on it
> directly -- most of which is irrelevant detail.
>
> Is this a more general topic than the current question?
>
> Marcus
>
> ============================================================
> FRIAM Applied Complexity Group listserv
> Meets Fridays 9a-11:30 at cafe at St. John's College
> lectures, archives, unsubscribe, maps at http://www.friam.org
>

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org

Re: Rule driven agent-based modeling systems

Russ Abbott
In reply to this post by Marcus G. Daniels
In fact, I have more in mind than just the ability to change rules dynamically. I have become convinced that what's missing from most agent-based modeling frameworks (besides the ability to modify agent rules dynamically) is a service-oriented perspective. I am interested in modeling how economies work -- in particular, how different elements of an economy provide services to each other.

A simple example is a supply chain. The upstream suppliers create objects that are processed eventually into a final result.  As I said, that's a very simple example. But start there and ask how that might be represented in an agent-based modeling framework.

Then add something like capital equipment. There is a supply chain for capital equipment. But the machinery produced by the capital equipment manufacturer may be used by members of its own supply chain. So the supply chain has loops in it. (Also, it should be possible to model the difference between (a) the components an entity consumes when it produces a result and (b) the capital equipment used in that production. Both are in some sense inputs to the creation of the output, but the equipment doesn't get "used up" in the process -- at least not as fast.)

The picture this brings to mind is a network of consumers and producers, where there may be arbitrary loops in the network. The network should be able to represent operations that modify objects as well as nodes that combine them into something new. For example, a transportation agent can be modeled by allowing objects to have a location attribute and then having the transportation agent modify that attribute. (The consumer-producer network is abstract and doesn't represent geographic space.)

The nodes on this network will be the agents. Their operations are the services that they perform. Agents have prerequisites for being able to perform their services -- their incoming edges. Their outgoing edges reflect the result of their having performed whatever service they offer. So if you take the consumer-producer graph suggested above and generalize it so that it represents services performed at each node, that's the sort of thing I'm after.

Of course there will be a need for money to move in the direction opposite to the services. There will also have to be generation of raw materials and energy, and the eventual production and removal from the system of final products (such as purchased food) or final services (such as entertainment).

It will have to be possible for the network to reconfigure itself so that an agent is able to find better/cheaper suppliers than the ones it already has. (That's one reason agents have to be able to rewrite their rules.) Presumably there will also be mechanisms for offering and buying (i.e., trading) services/products.

Once such a model is built one can modify the source and sink nodes and see how it reconfigures itself in response.

It is also likely that bubbles and busts will develop. So there will have to be regulatory nodes. What kinds of regulatory nodes remains to be determined. The need for regulatory nodes (like the capital equipment nodes) requires that the services not be rigidly stratified, that everything (any service or product) be available to anything else. So what I'm imagining is a network of agents that continually reconfigures itself, where each agent performs some service (including the possibility of producing an object) that any other agent may take advantage of.

The basic programming mechanism for such a network isn't hard. As I said, it's just a network. What will be challenging will be the semantics of each of the agent nodes. But then the world is complicated. That's to be expected.

So to return to what I want as an ABM framework, it's one in which agents can be understood in an abstract sense as producing results that flow to other agents to enable them to produce their results. This is what I have in mind as an agent-based service-oriented framework for building models. The framework will be relatively simple. Just agents, the ability to connect and reconnect them, and the ability to program them in a generic input -> output rule language.

I don't want the rule language to be a general purpose programming language -- although it should be possible to include calls to a general purpose language within the rules when new primitive operations are needed.

This has been an on-the-fly description of what I'm looking for. I know it's been somewhat choppy. I hope it conveys the main ideas.
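
For what it's worth, the skeleton of such a network is easy to sketch even in NetLogo, with directed links standing in for supply relations. Everything below -- breed names, the price model, the rewiring rule -- is invented for illustration, and the genuinely hard part, the semantics of each node, is not addressed:

  directed-link-breed [ supplies supply ]   ;; a supply link points from supplier to customer
  turtles-own [ service price stock ]

  to setup
    clear-all
    create-turtles 30 [
      set service one-of [ "parts" "assembly" "transport" "capital-equipment" ]
      set price 1 + random-float 9
      set stock random 5
      setxy random-xcor random-ycor
    ]
    ask turtles [ create-supply-from one-of other turtles ]   ;; initial supplier; loops are allowed
    reset-ticks
  end

  to go
    ask turtles [
      ;; a node performs its service only if every supplier has something to deliver
      if all? in-supply-neighbors [ stock > 0 ] [ set stock stock + 1 ]
      if random-float 1 < 0.02 [ rewire-to-cheaper ]   ;; the network reconfigures itself
    ]
    tick
  end

  to rewire-to-cheaper
    let current one-of in-supply-neighbors
    if current = nobody [ stop ]
    let candidate min-one-of (other turtles with [ service = [service] of current ]) [ price ]
    if candidate = nobody or candidate = current [ stop ]
    if [price] of candidate < [price] of current [
      ask in-supply-from current [ die ]              ;; drop the old supplier
      if not in-supply-neighbor? candidate
        [ create-supply-from candidate ]              ;; hook up the cheaper one
    ]
  end

Money flowing the other way, source and sink nodes, and regulatory nodes would just be further node types and link breeds layered on the same skeleton.
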

-- Russ

On Tue, Aug 25, 2009 at 3:32 PM, Marcus G. Daniels <[hidden email]> wrote:
Carl Tollander wrote:
Rather, we are looking to world states as other regulatory systems or n-categories (topoi?), themselves operating "on-the-fly".   I'm not at all sure that simple rules and rule-rewrites are a viable path for describing such states.
Hmm, rewrites could draw from world objects involving many kinds of subcomponents to form an array of other abstracted patterns to constrain further polymorphic rewrites.    For example, first map various symptoms to suggest a disease process, and then do rewrites on the basis of the presence of that disease.   The approach of operating on multiple views of the world seems more natural than working on it directly -- most of which is irrelevant detail.

Is this a more general topic than the current question?


Marcus

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org

Re: Rule driven agent-based modeling systems

Stephen Thompson
In reply to this post by Eric Charles
Eric: 

This is a nice outline of an agent-software experiment.  I wish I knew NetLogo so I could try it out.  I will save this
email, and when I get further into agent-based software I will give it a try. 

I expected you to also be a computer scientist or university professor in the field, so I was pleasantly surprised to hear
this is a "fun" field for you.  I am an investment officer with an MBA from 25 years ago.  I find this forum interesting to
listen to, and I look for ideas I couldn't generate on my own.  I did recently complete an MS in Software Eng. because
it is, well, *fun*. 

A few years ago I created a backward chaining inference engine in Prolog, starting with a core from a book by Keith
Weiskamp and Terry Hengl, "Artificial Intelligence Programming with Turbo Prolog" (Borland...boy, doesn't that take
you back!).  I enjoy Prolog.  Too bad it's not used much in the U.S.

Thanks for your email.  It gave me some ideas on how I might further my interest in agent-based software. 

Thanks,
Steph T


ERIC P. CHARLES wrote:
Russ (and everyone else),
Just because its what I know, I would do it in NetLogo. I'm not suggesting that NetLogo will do what you want, just that it can simulate doing what you want. Not knowing what you want to do, lets keep it general:

You start by making an "agent" with a list of things it can do, lets label them 1-1000, and a list of things it can sense, lets label them A-ZZZ. But there is a catch, the agent has no commands connecting the sensory list to the behaviors list, a different object must do that. The agent must query all the rules until it finds one that accepts its current input, and then the rule sends it a behavior code. (Note, that any combination of inputs can be represented as a single signal or as several separate ones, it doesn't matter for theses purposes)

You then make several "rules", each of which receives a signal from an agent and outputs a behavior command. One rule might be "If input WFB, then behavior 134." Note, it doesn't matter how complicated the rule is, this general formula will still work. Any countable infinity of options can be re-presented using the natural numbers, so it is a useful simplification. Alternatively, imagine that each digit provides independent information and make the strings as long as you wish.

Now, to implement one of my suggestions you could use:
1) The "system level" solution: On an iterative basis asses the benefit gained by individuals who accessed a given rule (i.e. turtles who accessed rule 4 gained 140 points on average, while turtles who accessed rule 5 only gained 2 points on average). This master assessor then removes or modifies rules that aren't up to snuff.

2) The "rule modified by agents" solution: Agents could have a third set of attributes, in addition to behaviors and sensations they might have "rule changers". Let's label them from ! to ^%*. For example, command $% could tell the rule to select another behavior at random, while command *# could tell the rule to simply add 1 to the current behavior.

3) The "agents disobey" solution: Agents could in the presence of certain sensations modify their reactions to the behavior a given rule calls up in a permanent manner. This would require an attribute that kept track of which rules had been previously followed and what the agent had decided from that experience. For example, a given sensation may indicate that doing certain behaviors is impossible or unwise (you can't walk through a wall, you don't want to walk over a cliff); under these circumstance, if a rule said "go forward" the agent could permanently decide that if rule 89 ever says "go forward" I'm gonna "turn right" instead.... where "go forward" = "54" and "turn right" = "834". In this case the object labeled "rule" is still the same, but only because the effect of the rule has been altered within the agent, which for metaphorical purposes should be sufficient.

Because of the countable-infinity thing, I'm not sure what kinds of thing a system like this couldn't simulate. Any combination of inputs and outputs that a rule might give can be simulated in this way. If you want to have 200 "sensory channels" and 200 "limbs" that can do the various behaviors in the most subtle ways imaginable, it would still work in essentially the same way, or could be simulated in exactly the same way.

Other complications are easy to incorporate: For example, you could have a rule that responded to a large set of inputs, and have those inputs change... or you could have rules link themselves together to change simultaneously... or you could have the agent send several inputs to the same rule by making it less accurate in detection. You could have rules that delay sending the behavior command... or you could just have a delay built into certain behavior commands.


Eric

P.S. I'm sorry for the bandwidth all, but I am continuing to communicate through the list because I am hoping someone far more experienced than I will chime in if I am giving poor advice.




On Sun, Aug 23, 2009 10:32 PM, Russ Abbott [hidden email] wrote:

My original request was for an ABM system in which rules were first class objects and could be constructed and modified dynamically. Although your discussion casually suggests that rules can be treated the same way as agents, you haven't mentioned a system in which that was the case. Which system would you use to implement your example? How, for example, can a rule alter itself over time? I'm not talking about systems in which a rule modifies a field in a fixed template. I'm talking about modifications that are more flexible.

Certainly there are many examples in which rule modifications occur within very limited domains. The various Prisoner's Dilemma systems in which the rules combine with each other come to mind. But the domain of PD rules is very limited.

Suppose you really wanted to do something along the lines that your example suggests.  What sort of ABM system would you use? How could a rule "randomly (or non-randomly) generate a new contingency" in some way other than simply plugging new values into a fixed template? As I've said, that's not what I want to do.

If you know of an ABM system that has a built-in Genetic Programming capability for generating rules, that would be a good start. Do you know of any such system?

-- Russ



On Mon, Aug 24, 2009 at 11:10 AM, ERIC P. CHARLES <epc2@...> wrote:
Well, there are some ways of playing fast and loose with the metaphor. There are almost always easy, but computationally non-elegant, ways to simulate things like this. Remember, we have quotes because "rules" and "agents" are just two classes of agents with different structures.

Some options:
1) The "rules" can alter themselves over time, as they can be agents in a Darwinian algorithm or any other source of system level change you want to impose.
2) The "rules" could accept instructions from the "agents" telling them how to change.
3) The "agents" could adjust their responses to commands given by the "rules" which effectively changes what the rule (now not in quotes) does.

To get some examples, let's start with a "rule" that says "when in a red patch, turn left". That is, in the starting conditions the "agent" tells the rule it is in a red patch, the "rule" replies back "turn left":
1) Over time that particular "rule" could be deemed not-useful and therefore done away with in some master way. It could either be replaced by a different "rule", or there could just no longer be a "rule" about what to do in red patches.
2) An "agent" in a red patch could for some reason no longer be able to turn left. When this happens, it could send a command to the "rule" telling the "rule" it needs to change, and the "rule" could randomly (or non-randomly) generate a new contingency.
3) In the same situation, an "agent" could simply modify itself to turn right instead; that is, when the command "turn left" is received through that "rule" (or perhaps from any "rule"), the "agent" now turns right. This is analogous to what happens at some point for children when "don't touch that" becomes "touch that". The parents persist in issuing the same command, but the rule (now not in quotes) has clearly changed.

Either way, if you are trying to answer a question, I think something like one of the above options is bound to work. If there is some higher reason you are trying to do something in a particular way, or you have reason to be worried about processor time, then it might not be exactly what you are after.
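
A stripped-down NetLogo sketch of the red-patch example, with the "rules" as their own breed (all names invented; only option 1 is gestured at in the final comment):

  breed [ walkers walker ]
  breed [ rule-agents rule-agent ]     ;; the "rules" are just another class of agent

  rule-agents-own [ trigger action ]   ;; e.g. trigger "red-patch" -> action "lt 90"

  to setup
    clear-all
    ask patches [ if random-float 1 < 0.2 [ set pcolor red ] ]
    create-rule-agents 1 [ set trigger "red-patch"  set action "lt 90"  hide-turtle ]
    create-walkers 10
    reset-ticks
  end

  to go
    ask walkers [
      let signal ifelse-value (pcolor = red) [ "red-patch" ] [ "other" ]
      let r one-of rule-agents with [ trigger = signal ]
      if r != nobody [ run [action] of r ]   ;; the rule, not the walker, chooses the behavior
      fd 1
    ]
    ;; option 1 (the "system level" solution) would slot in here: periodically ask
    ;; poorly performing rule-agents to change their action string, or simply to die.
    tick
  end
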

Eric



On Sun, Aug 23, 2009 05:18 PM, Russ Abbott <russ.abbott@...> wrote:

Thanks Eric. It doesn't sound like your suggestion will do what I want. I want to be able to create new rules dynamically as in rule evolution. As I understand your scheme, the set of rule-agents is fixed in advance.

-- Russ



On Sun, Aug 23, 2009 at 8:30 AM, ERIC P. CHARLES <epc2@...> wrote:
Russ,
I'm probably just saying this out of ignorance, but... If you want to "really" do that, I'm not sure how to do so.... However, given that you are simulating anyway... If you want to simulate doing that, it seems straightforward. Pick any agent-based simulation program, create two classes of agents, call one class "rules" and the others "agents". Let individuals in the "rules" class do all sorts of things to individuals in the "agents" class (including controlling which other "rules" they accept commands from and how they respond to those commands).

Not the most elegant solution in the world, but it would likely be able to answer whatever question you want to answer (assuming it is a question answering task you wish to engage in), with minimum time spent banging your head against the wall programming it. My biases (and lack of programming brilliance) typically lead me to find the simplest way to simulate what I want, even if that means the computers need to run a little longer. I assume there is some reason this would not be satisfactory?

Eric




On Sat, Aug 22, 2009 11:13 PM, Russ Abbott <russ.abbott@...> wrote:

Hi,

I'm interested in developing a model that uses rule-driven agents. I would like the agent rules to be condition-action rules, i.e., similar to the sorts of rules one finds in forward-chaining blackboard systems. In addition, I would like both the agents and the rules themselves to be first class objects. In other words, the rules should be able:

  • to refer to agents,
  • to create and destroy agents,
  • to create new rules for newly created agents,
  • to disable rules for existing agents, and
  • to modify existing rules for existing agents.
Does anyone know of a system like that?

-- Russ
============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
Eric Charles

Professional Student and
Assistant Professor of Psychology
Penn State University
Altoona, PA 16601



============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
Eric Charles

Professional Student and
Assistant Professor of Psychology
Penn State University
Altoona, PA 16601



============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
    
Eric Charles

Professional Student and
Assistant Professor of Psychology
Penn State University
Altoona, PA 16601



============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org

Re: Rule driven agent-based modeling systems

Owen Densmore
In reply to this post by Russ Abbott
On Aug 25, 2009, at 12:20 AM, Russ Abbott wrote:

> <snip>
> So to return to what I want as an ABM framework, it's one in which  
> agents can be understood in an abstract sense as producing results  
> that flow to other agents to enable them to produce their results.  
> This is what I have in mind as an agent-based service-oriented  
> framework for building models. The framework will be relatively  
> simple. Just agents, the ability to connect and reconnect them, and  
> the ability to program them in a generic input -> output rule  
> language.
>
> I don't want the rule language to be a general purpose programming  
> language -- although it should be possible to include calls to a  
> general purpose language within the rules when new primitive  
> operations are needed.


Hmm..just a thought: Have you considered "semantic networks"?  Marko  
Rodriguez:
   "Marko A. Rodriguez" <[hidden email]>
.. has drawn several of us into considering "triple stores" as an  
adjunct to our work, both in redfish and the santa fe complex.

The triple stores contain many triples of the nature of
   A verb B
.. where A & B are nouns: Marko Knows Steve.  Knows is a link between  
Marko and Steve.

The triple store is a graph database such as:
   http://neo4j.org/
RDF is the "semantic web" use of triple stores:
   http://en.wikipedia.org/wiki/Resource_Description_Framework
.. and neo4j has an RDF layer, I believe.

From your description, I could see models where agents were nodes  
with links between themselves.  The dynamic nature you describe could  
be simply removing and adding links.

I think the graph database idea is on the brink of exploding upon the  
computing scene.  It's been around for quite a while, but folks are  
only now starting to understand just how powerful a notion it is.  
Definitely NOT SQL structured .. but it might work very well in a  
BigTable-style store like Google's App Engine.
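
Even inside NetLogo you can fake a small triple store with labelled directed links (names below are invented; a real store like neo4j obviously goes far beyond this):

  directed-link-breed [ triples triple ]   ;; one link per "A verb B" triple
  triples-own [ verb ]

  to setup
    clear-all
    create-turtles 2 [ setxy random-xcor random-ycor ]
    ask turtle 0 [ set label "Marko" ]
    ask turtle 1 [ set label "Steve" ]
    ask turtle 0 [ create-triple-to turtle 1 [ set verb "knows" ] ]
    ;; a query is just an agentset filter, e.g. "whom does Marko know?"
    show [ [label] of end2 ] of (triples with [ verb = "knows" and [label] of end1 = "Marko" ])
  end

Adding and removing triples while the model runs is then the same operation as the link rewiring described above, which is what makes the graph-database framing attractive here.
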

     -- Owen


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org