Perhaps one minimalist instance of this, depending on how simple a rule or behavior can be and still be called an "algorithm", is Brian Arthur's paper on the El Farol problem. Each agent chooses from one of several "heuristics" in their set of possible rules, based on their belief in that rule, to forecast the bar's attendance and decide if they want to go or not. They update their beliefs in these rules from past experience, and out of this ecology of simple rules accurate forecasts happen and the attendance stays pretty steady near the threshold at which attending goes from pleasant to unpleasant.
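
A minimal sketch of that setup, just to make the mechanics concrete. The particular predictors, population size, and scoring rule below are illustrative guesses, not Arthur's exact ones:

import random

N_AGENTS  = 100    # illustrative size, not Arthur's exact parameters
THRESHOLD = 60     # the bar is pleasant only if attendance stays below this
N_WEEKS   = 200

# A handful of simple attendance predictors, each a function of past attendance.
def last_week(h):     return h[-1]
def mirror(h):        return N_AGENTS - h[-1]
def two_week_avg(h):  return (h[-1] + h[-2]) / 2.0
def four_week_avg(h): return sum(h[-4:]) / 4.0
def always_fifty(h):  return 50

PREDICTORS = [last_week, mirror, two_week_avg, four_week_avg, always_fifty]

class Patron:
    def __init__(self):
        # each agent carries its own small bag of rules plus a belief score for each
        self.rules  = random.sample(PREDICTORS, 3)
        self.belief = {r: 0.0 for r in self.rules}

    def decide(self, history):
        best = max(self.rules, key=lambda r: self.belief[r])
        return best(history) < THRESHOLD        # go if you expect elbow room

    def update(self, history, actual):
        # belief in a rule shrinks less the better it forecast this week
        for r in self.rules:
            self.belief[r] -= abs(r(history) - actual)

history = [random.randint(0, N_AGENTS) for _ in range(4)]   # seed weeks
patrons = [Patron() for _ in range(N_AGENTS)]

for week in range(N_WEEKS):
    attendance = sum(p.decide(history) for p in patrons)
    for p in patrons:
        p.update(history, attendance)
    history.append(attendance)

print(history[-10:])   # tends to fluctuate around THRESHOLD

With choices like these, the attendance series tends to settle into fluctuations around the threshold, with no single rule forecasting well for long.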
Again, the heuristics he used are very simple and may not be what you had in mind by "algorithm", but it's in the same spirit, I think. I don't think it has anything to do with the earlier parts of the thread though.

http://www.santafe.edu/arthur/Papers/El_Farol.html

Rich

-----Original Message-----
From: Robert Holmes [mailto:[hidden email]]
Sent: Tuesday, March 15, 2005 3:33 PM
To: 'The Friday Morning Applied Complexity Coffee Group'
Subject: RE: [FRIAM] Evolution of Emergent Behavior using Genetic Programming

Dan, Steve - thanks for your posts. Good to know that there are good reasons for me not understanding this stuff. You should write a paper someday ;-)

Steve - I like your intuitive "collection of algorithms". I wonder though - just because we have a collection of computing entities (ants or replicating genes), does that mean we have a collection of algorithms? In the examples I've seen, the entities are all running essentially the same algorithm; it's just that the parameters of each agent's instance of the algorithm are different. Now if they were *genuinely* different algorithms (one ant using gradient descent, one gradient descent with memory, one talking with its neighbours to create some form of local collaborative simplex algorithm, etc.) things could get kinda fun. Anyone done any work on this?

Robert
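
Not an answer to "anyone done any work on this?", but a toy version of the mixed population Robert describes is easy to write down. Everything here is made up for illustration: the landscape, the step sizes, and especially the "social" agent, which reduces "talking with its neighbours / local collaborative simplex" to "drift toward the best-scoring neighbour plus a little noise":

import math, random

def f(x, y):                      # toy landscape to minimise (illustrative)
    return (x - 1) ** 2 + (y + 2) ** 2 + 0.5 * math.sin(3 * x) * math.sin(3 * y)

def grad(x, y, eps=1e-5):         # numerical gradient, so any landscape works
    gx = (f(x + eps, y) - f(x - eps, y)) / (2 * eps)
    gy = (f(x, y + eps) - f(x, y - eps)) / (2 * eps)
    return gx, gy

class GradientAgent:              # plain greedy descent
    def __init__(self):
        self.x, self.y = random.uniform(-5, 5), random.uniform(-5, 5)
    def step(self, others):
        gx, gy = grad(self.x, self.y)
        self.x -= 0.05 * gx
        self.y -= 0.05 * gy

class MomentumAgent:              # "gradient descent with memory"
    def __init__(self):
        self.x, self.y = random.uniform(-5, 5), random.uniform(-5, 5)
        self.vx = self.vy = 0.0
    def step(self, others):
        gx, gy = grad(self.x, self.y)
        self.vx = 0.9 * self.vx - 0.05 * gx
        self.vy = 0.9 * self.vy - 0.05 * gy
        self.x += self.vx
        self.y += self.vy

class SocialAgent:                # crude stand-in for "talking with its neighbours"
    def __init__(self):
        self.x, self.y = random.uniform(-5, 5), random.uniform(-5, 5)
    def step(self, others):
        best = min(others, key=lambda a: f(a.x, a.y))
        self.x += 0.2 * (best.x - self.x) + random.gauss(0, 0.05)
        self.y += 0.2 * (best.y - self.y) + random.gauss(0, 0.05)

agents = ([GradientAgent() for _ in range(5)] +
          [MomentumAgent() for _ in range(5)] +
          [SocialAgent() for _ in range(5)])

for t in range(200):
    for a in agents:
        a.step(agents)

best = min(agents, key=lambda a: f(a.x, a.y))
print(type(best).__name__, round(f(best.x, best.y), 4))

The fun question is then whether the mixed population does better on a given landscape than fifteen copies of any one of the rules.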
> -----Original Message-----
> From: Stephen Guerin [mailto:[hidden email]]
> Sent: Monday, March 14, 2005 9:10 PM
> To: Friam
> Subject: RE: [FRIAM] Evolution of Emergent Behavior using Genetic Programming
>
> > OK, I'm being dumb as a rock. Can other search techniques be
> > considered self-organizing? If not, what is it that makes a GA
> > self-organizing but (say) gradient descent not?
>
> It's not you, Robert -- it's gradient descent that is as dumb as a rock ;-)
>
> We've been punting the criteria for self-organizing algorithms around for a
> few years now, and I can't say that I've seen necessary and sufficient
> conditions defined out there yet.
>
> Intuitively, I would say greedy gradient descent by a single computational
> entity/agent is not self-organizing. I think it's interesting, though, that
> I would classify learning by back propagation in a neural network as an
> instance of self-organization, when Greg Stone's chapter in the early UCSD
> PDP books (Stone, 1986) showed back propagation (the delta rule) to be a
> form of gradient descent. So what's the difference here?
>
> Firstly, I think you need a *collection* of algorithms to constitute a
> system that is capable of self-organization. I'll call the things in the
> collections agents, but you can call them PDEs :-)
>
> Secondly, I suspect it has to do with the transfer of information about the
> asymmetries of the environment (the function to be optimized) into the
> structure of the agent system. I don't see this happening in simple
> gradient descent.
>
> In self-organizing systems, agents are constrained (informed) from their
> natural degrees of freedom (loss of symmetry) as the system evolves.
> Constraints can manifest themselves in environment-agent interactions,
> agent-agent interactions, or rules internal to the agents themselves.
>
> In the neural network example, information about the structure of the
> pattern to be classified is imported into weights on the edges of the
> neural network. This is a change in the agent-agent interaction. In the
> example of a single agent falling down a hill, I don't see the importing of
> information into the agent system's structure - only work being done on the
> agent as it moves to a lower energy state. I wonder if we could move
> further along a continuum of self-organizing systems if we introduced
> history to the gradient-descent agent, like what happens in simulated
> annealing...
>
> So to your question "are GAs self-organizing?" In the past, I've said no.
> But I could flip-flop as the definitions get clearer and we think of the
> structure of the environment being transferred to lower the entropy of the
> genomes. Chris Adami has looked at mutual entropy measures like this for
> his Avida system and proposes it as a measure of complexity. So, in this
> context, maybe GAs are self-organizing.
>
> Eventually, perhaps concepts from nonequilibrium statistical mechanics will
> apply that center on generalized definitions of heat and work and make all
> this more rigorous. (Maybe we'll have to goad Eric S. into doing the
> appropriate maths and sums.) For now, it probably is just a bunch of jargon
> babble :-) It's certainly murky in my head. It reminds me of asking "are
> viruses alive?" when we don't have a good definition of life yet...
>
> -Steve
>
> Stone, G. O. (1986). An analysis of the delta rule and the learning of
> statistical associations. In Rumelhart, D. E. & McClelland, J. L. (Eds.),
> Parallel Distributed Processing, Vol. I (pp. 444-459). Cambridge,
> Massachusetts: MIT Press.
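
For what it's worth, the Stone result Steve cites is easy to see for a single linear unit: the delta rule update is, weight for weight, a gradient-descent step on the squared error. A few lines of Python with a made-up toy dataset (the generating weights 2 and -3 are arbitrary):

import random

def delta_rule_fit(samples, eta=0.05, epochs=500):
    n = len(samples[0][0])
    w = [0.0] * n
    for _ in range(epochs):
        for x, t in samples:
            y = sum(wi * xi for wi, xi in zip(w, x))    # linear unit output
            err = t - y
            # delta rule:  w_i <- w_i + eta * (t - y) * x_i
            # identical to w_i <- w_i - eta * dE/dw_i   with  E = 0.5 * (t - y)^2
            w = [wi + eta * err * xi for wi, xi in zip(w, x)]
    return w

# toy data generated by t = 2*x0 - 3*x1 (made up for illustration)
data = []
for _ in range(100):
    x = (random.uniform(-1, 1), random.uniform(-1, 1))
    data.append((x, 2 * x[0] - 3 * x[1]))

print(delta_rule_fit(data))   # converges toward [2.0, -3.0]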
I don't believe the El Farol problem, nor that of the Minority game (which is similar), counts as evolution. It does display self-organisation, though.
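
The Minority game mentioned here is usually set up (following Challet and Zhang) as an odd number of agents who repeatedly pick one of two sides, with the minority side winning, and each agent keeping virtual scores for a small fixed set of lookup-table strategies over the history of winning sides. A rough sketch, with parameter choices that are only illustrative:

import random
from itertools import product

N_AGENTS = 101          # odd, so there is always a strict minority
MEMORY   = 3            # strategies see the last 3 winning sides
N_STRATS = 2            # fixed strategies per agent
N_ROUNDS = 500

HISTORIES = list(product((0, 1), repeat=MEMORY))

class Player:
    def __init__(self):
        # a strategy is a fixed lookup table: recent history -> side to pick
        self.strategies = [{h: random.randint(0, 1) for h in HISTORIES}
                           for _ in range(N_STRATS)]
        self.points = [0] * N_STRATS

    def choose(self, history):
        best = max(range(N_STRATS), key=lambda i: self.points[i])
        return self.strategies[best][history]

    def score(self, history, winning_side):
        # virtual points: credit every strategy that would have picked the minority
        for i, s in enumerate(self.strategies):
            if s[history] == winning_side:
                self.points[i] += 1

players = [Player() for _ in range(N_AGENTS)]
history = tuple(random.randint(0, 1) for _ in range(MEMORY))
ones_per_round = []

for t in range(N_ROUNDS):
    choices = [p.choose(history) for p in players]
    ones = sum(choices)
    winning_side = 1 if ones < N_AGENTS - ones else 0   # minority wins
    for p in players:
        p.score(history, winning_side)
    history = history[1:] + (winning_side,)
    ones_per_round.append(ones)

print(sum(ones_per_round[-100:]) / 100.0)   # hovers near N_AGENTS / 2

Like El Farol, the split tends to hover around half the population even though no agent ever sees more than the last few outcomes, which is the self-organisation being referred to.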
--
*PS: A number of people ask me about the attachment to my email, which is of type "application/pgp-signature". Don't worry, it is not a virus. It is an electronic signature that may be used to verify this email came from me if you have PGP or GPG installed. Otherwise, you may safely ignore this attachment.
----------------------------------------------------------------------------
A/Prof Russell Standish                  Director
High Performance Computing Support Unit, Phone 9385 6967, 8308 3119 (mobile)
UNSW SYDNEY 2052                         Fax 9385 6965, 0425 253119 (")
Australia                                [hidden email]
Room 2075, Red Centre                    http://parallel.hpc.unsw.edu.au/rks
International prefix +612, Interstate prefix 02
----------------------------------------------------------------------------
No, it was only in response to Robert's last comment, and not the earlier parts of the thread.

-----Original Message-----
From: Russell Standish [mailto:[hidden email]]
Sent: Tuesday, March 15, 2005 9:53 PM
To: The Friday Morning Applied Complexity Coffee Group
Subject: Re: [FRIAM] Evolution of Emergent Behavior using Genetic Programming

I don't believe the El Farol problem, nor that of the Minority game (which is similar), counts as evolution. It does display self-organisation, though.
============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9AM @ Jane's Cafe
Lecture schedule, archives, unsubscribe, etc.: http://www.friam.org