Hey, thanks Dave.
A couple of comments:
To some, I suppose, lack of efficiency and the ability to implement pure, faithful representations of the physical system being modeled are positive attributes of a language. On the other hand, in practical use simulations are only required to represent the physical system of interest at some level of abstraction which has been identified as sufficient to answer the questions being asked of the system. Therefore, 100% faithfulness of representation of the physical system is not only not needed, it can get in the way of producing results.

As to efficiency, what can I say: efficiency is everything to the analyst. Without it even the most beautiful, elegant model won't be used, because results that are produced too late, or which require too much effort to produce, are simply of little use.

IMO, the most important aspect of developing useful simulations is not the elegance of the language being used, but rather the skill of the model designer in producing a design that is properly abstracted: not too much detail, not too little, and one that will address the pertinent analysis issues. The "pureness" of the OO language being used really doesn't come into play in any of the actual fielded applications I've been involved with.

Cheers,

--Doug

--
Doug Roberts
[hidden email]
[hidden email]
505-455-7333 - Office
505-670-8195 - Cell

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
In reply to this post by Steve Smith
A snark sash can *never* be too heavy!
;-}
Douglas Roberts wrote:
> A snark sash can *never* be too heavy!

Snarf!
In reply to this post by Douglas Roberts-2
Doug/Dave/innocent bystanders -
Actually, this exchange is more useful to me than most I've seen on this topic. As usual, I understand both sides of the argument and agree with neither! Or, to be contrary, I agree with both.

I appreciate the elegance of representation that well-designed notations offer, especially in domain-specific problems. I also appreciate the value of efficiency, and that the *real* customer should never see the code and should only see the interface and its performance (qualitative and quantitative). On the other (other) hand, it seems that an important customer of our code is always also ourselves and our peers. Even though we never re-use or share code as much as we should, we *do* re-use and share code, and people *do* inherit each other's code for continuation, propagation, remediation, etc.

I find it irritating and inconvenient when either extreme is reached: when I must learn (and sometimes master) a relatively obscure language (Simula, Smalltalk, LISP, ObjC, in that order) because someone else declared it to be the pinnacle of form and style; or, similarly, when someone builds extravagant and convoluted abstractions in a language never intended for it - for example, when someone tries to implement the functionality of Snobol or LISP in C. These are the times when a less "elegant" or elaborate abstraction is better. While it might seem mundane, a good deal of very simple string manipulation and/or logic implementation has been done in C or Fortran quite well, in simple ways. I am glad to be free of the restrictions of a simple procedural language with limited and limiting structures (esp. of Fortran IV), but I often find the modes of use of more expressive languages to be deliberately obtuse. Do we, as prideful practitioners, too often indulge our egos by raising elegance (or efficiency) above its station, or above utility and relevance?
I do not want to take away from either end of the spectrum: those who pursue (perhaps Dave fits this) purity and elegance for its own sake, nor those who pursue (I think this shoe fits Doug's foot) efficiency and economy as an art form of its own. People on top of their game (either game) can do this with little or no loss of final utility. But it is those of us (I appreciate both but aspire to neither) who muddle along in our many ways who often get caught in the crossfire of a holy war between the Big Enders and the Little Enders.

The lucidity of both Doug and Dave in this discussion has been very useful. Neither has been the usual raving fanatic I am used to; perhaps this is a testimony to these individuals, to a maturing field, or to this forum. Perhaps all three.

Carry on!

- Steve
In reply to this post by Douglas Roberts-2
Douglas Roberts wrote:
> To some, I suppose lack of efficiency and the ability to implement
> pure, faithful representations of the physical system being modeled
> are positive attributes of a language.
> Therefore, 100% faithfulness of representation of the physical system
> is not only not needed, it can get in the way of producing results.

There's faithful in the sense of simulating things that aren't relevant to a model, and then there's faithful in the sense of thinking things through. Doing the latter needn't get in the way of efficiency; it can actually facilitate it.

In the assisted suicide example, a garbage collector is in the best position to determine who has references to an object that is being removed. Without that support, ad-hoc mechanisms to overwrite dead objects with death signatures would be needed (while keeping enough of each object's memory around for the signature pattern); otherwise there'd be invalid pointers in the simulation after the object was deallocated.

IMO, research often does get in the way of production work, and vice versa.

Marcus
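Marcus's "kill via the collector" idea can be sketched without an actual garbage collector: a central registry tracks who references whom, so a kill (voluntary or involuntary) can null every inbound reference and then run finalization, leaving no dangling pointers behind. This is an illustrative sketch only; all the names here (AgentRegistry, kill, on_death) are invented, not from any real framework.

```python
# Sketch of collector-assisted agent removal: the registry knows all
# inter-agent references, so killing an agent nulls every inbound
# reference before running its finalization hook.

class Agent:
    def __init__(self, name):
        self.name = name
        self.neighbors = {}   # name -> Agent reference
        self.alive = True

    def on_death(self):       # finalization hook: "executor of the will"
        self.alive = False

class AgentRegistry:
    def __init__(self):
        self.agents = {}

    def add(self, agent):
        self.agents[agent.name] = agent

    def link(self, a, b):
        a.neighbors[b.name] = b

    def kill(self, name):
        victim = self.agents.pop(name)
        # Null every inbound reference so no dangling pointers survive.
        for other in self.agents.values():
            other.neighbors.pop(name, None)
        victim.on_death()
        return victim

reg = AgentRegistry()
a, b = Agent("a"), Agent("b")
reg.add(a); reg.add(b)
reg.link(a, b)
dead = reg.kill("b")
print(dead.alive, "b" in a.neighbors)  # -> False False
```

An "assisted suicide" (the voluntary form) would just be the agent calling `kill` on itself, perhaps gated by rules that examine its own state and that of its connected neighbors.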
In reply to this post by Steve Smith
That's what we keep you around for, big guy!
Cheers, yourself.

--Doug

On Mon, May 25, 2009 at 10:06 AM, Steve Smith <[hidden email]> wrote:
In reply to this post by Marcus G. Daniels
I don't disagree with any of that, Marcus. I do feel compelled to point out that garbage collectors are extremely heavyweight language components, and are among the features of Java that prevent it from competing with C++ for large-scale computational efficiency.
--Doug

On Mon, May 25, 2009 at 10:11 AM, Marcus G. Daniels <[hidden email]> wrote:
In reply to this post by Marcus G. Daniels
On May 24, 2009, at 10:05 PM, Marcus G. Daniels wrote:
>> Steve wrote:
>> To better describe agent-oriented, I would like to extend an
>> object to:
>> 1)
>> 2)
>> 3) have control over its own execution
>> 4)
>> 5)
>
> Typically garbage collectors watch for objects that are isolated
> from all others, and then call finalization routines on their behalf
> (like executors of a will). But for agent simulations, I think it
> would be useful to have voluntary and involuntary kill capability
> integrated in the collector, whereby all references to that object
> would be nulled and the finalization process run. Assisted suicide
> would be the voluntary form, presumably limited by rules that examine
> various properties of the object and connected objects.
>
> The unique applicability to ABM is that engineered programs have
> objects in different roles for reasons, and it would break the whole
> thing to have the program act on itself that way. On the other hand,
> ABMs are looser collections of more autonomous objects where agents
> come and go, and the proper analogy is more often killing or resource
> depletion, rather than voluntary self-removal (e.g. digging your own
> grave via a `destructor').

Ah, by control over its own execution, I meant "execution" as a thread of computation.

But yes, given the other meaning of execution, I agree with you with respect to how to probably handle death and garbage collection. I suspect we might adopt more of a cellular apoptosis model <http://evolutionofcomputing.org/Multicellular/Apoptosis.html> where agents remove themselves unless they constantly receive a keep-alive message from other agents. There's also the idea that there should be a mechanism whereby agents will migrate away from the edge of the network, where the users are, to lower-cost, high-latency parts of the network when they are less in demand - a kind of cold storage.

-S
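The apoptosis idea above is easy to make concrete: each agent carries a countdown that is refreshed by keep-alive messages, and an agent whose countdown expires removes itself from the world. A minimal illustrative sketch, with invented names (World, keep_alive, TTL) and no claim about any real implementation:

```python
# Apoptosis-style agent lifecycle: agents self-remove unless other agents
# keep sending them keep-alive messages.

TTL = 3  # ticks an agent survives without a keep-alive message

class Agent:
    def __init__(self, name):
        self.name = name
        self.ttl = TTL

    def keep_alive(self):
        self.ttl = TTL  # refreshed on every keep-alive message

class World:
    def __init__(self):
        self.agents = {}

    def add(self, agent):
        self.agents[agent.name] = agent

    def tick(self):
        # Decrement each agent's counter; expired agents remove themselves.
        for name in list(self.agents):
            agent = self.agents[name]
            agent.ttl -= 1
            if agent.ttl <= 0:
                del self.agents[name]  # apoptosis: self-removal

world = World()
world.add(Agent("kept"))
world.add(Agent("forgotten"))
for _ in range(5):
    if "kept" in world.agents:
        world.agents["kept"].keep_alive()
    world.tick()
print(sorted(world.agents))  # -> ['kept']
```

The cold-storage idea would be a variant of the same mechanism: instead of deleting an expired agent, the world migrates it to a cheaper, higher-latency tier, from which a fresh keep-alive can recall it.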
In reply to this post by Prof David West
I actually had a fairly long talk with Bjarne Stroustrup while
building a UI toolkit out of (would you believe) PostScript for NeWS (the Network Extensible Window System). We had realized that PS really was, like Javascript, a very Lisp-like language, and so prototyped (pun intended, for you JS'ers) a one-page-of-code classing system, with multiple inheritance and mix-ins if needed. This was in the mid-80s.

Bjarne was *really* clear that C++ was *not* supposed to be an OO version of C, but a *better* C.

The PS-JS link is quite strong, historically, BTW. Both are attached to a widely distributed visual programming system (printers, browsers). Both are dynamic, scripting, and ubiquitous. Both are standardized. Etc.

-- Owen

On May 25, 2009, at 8:01 AM, Prof David West wrote:

> Doug,
>
> Some short answers; we can discuss further some time if interested.
>
> First: the "technical" reasons C++ was not considered OO = strong
> typing, friend declarations, multiple inheritance, explicit
> constructors, and an over-dependence on function overrides.
>
> Second: subtler, but in my opinion more important, the philosophy of
> the language - C++ was never intended to be an OO language.
> Marketing saw some superficial similarities and jumped on the OO
> bandwagon and represented the language as something it was not
> intended to be. (They also worked very hard to redefine OO to be
> closer to what C++ offered - their own version of Newspeak.)
>
> C++ was intended to be a means to impose structured programming
> discipline on C programmers without, in any way, interfering with
> the hyper-efficient performance characteristics that arose from
> being as faithful a representation of the hardware as possible.
>
> In contrast, the OO tradition that began with Simula (not Simula I,
> which was already moving away from the philosophical ideal) and was
> embodied in Self and Smalltalk did not care about the machine, did
> not care about efficiency; it was all about the domain - faithful
> representation of same - and about "natural" human-machine
> communication about that shared domain (both humans and objects
> "lived" in the same "world").
>
> C++ versus Smalltalk was an expression of an even deeper
> philosophical divide between formalists and aformalists that traces
> back to the ascendancy of the former during the Age of Reason.
>
> dave
>
> On Sun, 24 May 2009 18:35 -0600, "Douglas Roberts" <[hidden email]> wrote:
>
>> Not to digress, but Dave kind of lost me one day at a FRIAM when he
>> said "C++ is not object oriented." I didn't really know what he
>> meant, because I've been using C++ for about 20 years now to
>> accomplish polymorphism via object inheritance, containment, and
>> method specialization (with and without templates) -- which use
>> pretty much meets most definitions of OO programming that I've
>> encountered.
>>
>> Dave, I'd be interested in knowing what you meant...
>>
>> --Doug
>>
>> On Sun, May 24, 2009 at 6:20 PM, Stephen Guerin <[hidden email]> wrote:
>>
>>> On Sun, May 24, 2009 at 5:47 PM, Douglas Roberts <[hidden email]> wrote:
>>>> Interesting. Other issues that will come to play with an ABM of the
>>>> intended scales you describe are synchronization of the various
>>>> asynchronous distributed components, message passing latency, and
>>>> message passing bandwidth. Hopefully a coarse-grained sync & message
>>>> passing design can be developed, because http is not good for either
>>>> latency or bandwidth (using Myrinet or Infiniband for comparison).
>>>
>>> Yeah, I'm not thinking this would be used for a single large-scale
>>> ABM, for exactly the sync issues you describe.
>>>
>>> This would be more for authoring and deploying many smaller-scale
>>> applications written with an agent-oriented perspective - what Dave
>>> West talks about when he refers to how object-orientation was
>>> originally conceived, not how current object-oriented programming is
>>> done. This is close to what Smalltalk/Seaside looks like, but
>>> probably implemented within Javascript.
>>>
>>> -S
In reply to this post by Douglas Roberts-2
Douglas Roberts wrote:
> I do feel compelled to point out that garbage collectors are extremely
> heavy weight language components, and are one of the features of Java
> that prevent it from competing with C++ for large-scale computational
> efficiency.

You can believe what you want, but see figures 4, 5, and 6 of the URL below for a conservative garbage collector outperforming malloc in C. Even in benchmarks where malloc is faster, it's within a factor of two.

http://www.hpl.hp.com/techreports/2000/HPL-2000-165.html

Similar results 15 years ago:

ftp://ftp.cs.colorado.edu/pub/techreports/zorn/CU-CS-665-93.ps.Z
In reply to this post by Stephen Guerin
Stephen Guerin wrote:
> Ah, by control over its own execution, I meant "execution" as thread
> of computation.

Yeah, I realize the word was overloaded. See my other e-mail on not being able to predictably get resources. (Scheduling a thread does not imply actually commencing execution.) Here I was just getting Doug to confront his prejudice about garbage collectors. ;-)

> I suspect we might adopt more of a cellular apoptosis model
> <http://evolutionofcomputing.org/Multicellular/Apoptosis.html> where
> agents remove themselves unless they constantly receive a
> keep-alive message from other agents. There's also the idea that there
> should be a mechanism where agents will migrate away from the edge of
> the network where users are to lower cost, high latency parts of the
> network when they are less in demand - a kind of cold storage.

Cool. I think biological approaches to resilience and system optimization are intriguing.

Marcus
Hey, as an innocent bystander, I would like to point out that biological models make sense because we are unavoidably biological in design, and thus so are our systems - all our systems: interpretive, expressive, diagnostic, experimental. No matter how far we think we may evolve past that, we cannot; we just think about it, with our biological brains. We are a self-referential species, for better or worse.
So our models, in whatever field, will ultimately ping on that neurological level. It makes sense to work with that presupposition, since in the end we return to it.

Tory
In reply to this post by Marcus G. Daniels
I believe that under optimal conditions (from the perspective of the garbage-collecting language) a benchmark can be contrived that equals malloc. I also believe that the converse is true, especially for large applications. I cannot count the times I have had to reboot a LISP machine or kill a Java app because they had run themselves into the ground attempting a GC.
I suspect, without offering any evidence to support my suspicions, that most "real world" applications, i.e. large to the bursting point of the host's memory and processing power, will favor malloc over GC.

--Doug

On Mon, May 25, 2009 at 11:04 AM, Marcus G. Daniels <[hidden email]> wrote:
Douglas Roberts wrote:
> I believe that under optimal conditions (from the perspective of the
> garbage collecting language) a benchmark can be contrived that equals
> malloc.

If a person can plan out a working set that is minimal and needed all of the time, then it won't need to be moved around, independent of whether malloc or GC is used. It can just be declared in a fixed array on the heap, and that is that. Typically, people who care about fast code plan out the use of memory in a careful way, and also use a language whose generated machine code they can inspect and verify is efficient. That's typically C or C++ or Fortran.

A person prototyping code for a new problem doesn't yet have the whole thing in their head, so they don't want to commit to decisions like global sharing of data. malloc/free is undesirable in that situation because it just adds one more thing to keep track of. Once they do have the whole problem understood (with the help of some exploratory programming), there's nothing stopping them from making a new production code that is fast. This has nothing to do with GC vs. malloc, though. It's just a question of whether it is cheaper to throw more computers at it than it is to do some refactoring and profiling, or do a new implementation.

There is surely a class of problems where an implementation plan is evident before any code is written and there's a clear lifetime for certain blocks of data. This would be a case where malloc/free would probably win. I'd also expect it has a substantial overlap with the class of Boring Problems.

Marcus
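Marcus's "declare the working set up front" point is language-independent and can be illustrated with a toy object pool: allocate every slot once, then recycle slots through a free-list, so neither malloc nor a collector sits on the hot path. This is a sketch for illustration; the names (Pool, Particle, acquire, release) are invented.

```python
# Toy fixed-capacity object pool: one allocation phase up front, then
# slots are recycled via a free-list of indices.

class Particle:
    __slots__ = ("x", "v", "in_use")
    def __init__(self):
        self.x = self.v = 0.0
        self.in_use = False

class Pool:
    def __init__(self, capacity):
        self.slots = [Particle() for _ in range(capacity)]  # allocate once
        self.free = list(range(capacity))                   # free-list

    def acquire(self):
        i = self.free.pop()
        p = self.slots[i]
        p.in_use = True
        return i, p

    def release(self, i):
        # No deallocation: the slot just goes back on the free-list.
        self.slots[i].in_use = False
        self.free.append(i)

pool = Pool(4)
i, p = pool.acquire()
p.x, p.v = 1.0, 0.5
pool.release(i)
print(len(pool.free))  # -> 4: slot recycled, nothing freed or collected
```

In C or Fortran the same plan would be a fixed array on the heap, as Marcus says; the pool pattern is just the dynamic-language rendering of that decision.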
On Mon, May 25, 2009 at 11:52 AM, Marcus G. Daniels <[hidden email]> wrote:
Well, yes, except... Running your C++ prototype code through Valgrind and checking for leaks is only one short step, and a good idea anyhow. I've not found keeping track of new()/delete() to be that onerous when prototyping. On the other hand, having the crutch of a garbage collector always around can cause a developer to become insensitive to memory issues...

Isn't this fun? We could probably go on like this all day! ;-}
--Doug
In reply to this post by Victoria Hughes
Victoria Hughes wrote:
> So our models, in /whatever/ field, will ultimately ping on that
> neurological level. Makes sense to work with that presupposition,
> since in the end we return to it.

A question of technology?

http://www.newscientist.com/article/dn3523-synapse-chip-taps-into-brain-chemistry.html
In reply to this post by Douglas Roberts-2
So a 'real world' application is one you never quite have enough memory
and power for?

Douglas Roberts wrote:
> I suspect, without offering any evidence to support my suspicions that
> most "real world" applications, i.e. large to the bursting point of
> the hosts' memory and processing power, will favor malloc over GC.
On Mon, May 25, 2009 at 3:06 PM, Carl Tollander <[hidden email]> wrote:

> So a 'real world' application is one you never quite have enough
> memory and power for?

Yeah, pretty much. First you build a model that approximates the system you want to model. Then you realize that there are features that would be *really nice* for it to have, so you add them in. Then someone comes along and says, "Gee, that's nice. But I would really like for it to do X, Y, and Z, instead." So you wedge that in. Then it doesn't run fast enough. Then it runs out of memory. Then you design version 2 to address the performance shortcomings. Repeat the cycle as required until your funding agency finds a new contractor.

--Doug
In reply to this post by Stephen Guerin
Ok, so now that we have our trip down OOP Memory Land out of the way, a few questions:
1) What are the agents in this >1e6-agent simulation?
2) What are the rules that define how they interact?
3) What are the communications requirements between agents?
4) What is the compute infrastructure?
5) What are the desired results from running the ABM?

--Doug

On Sun, May 24, 2009 at 2:01 PM, Stephen Guerin <[hidden email]> wrote:

> So a few of us are exploring new ways of constructing scalable
> distributed agent systems and are playing around with architecting a
> first instantiation in either Javascript or in Smalltalk. We are
> interested in architecting a system that can grow and evolve without
> collapsing under its own weight, much in the same way the Internet
> has been able to grow over the last 40 years without a reboot.
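Doug's five questions map directly onto the skeleton of any ABM. A toy sketch of that shape, purely for illustration (the names and the majority-adoption rule are invented, not a proposal for the system under discussion):

```python
# Toy ABM skeleton keyed to the checklist: (1) agents are instances,
# (2) the interaction rule is in step(), (3) communication is a simple
# message queue, (4) infrastructure is one process, (5) the result is a
# final state census.

import random

class Agent:
    def __init__(self, name):
        self.name = name
        self.state = 0
        self.inbox = []  # (3) incoming messages from other agents

    def step(self, neighbors):
        # (2) rule: adopt the majority state among received messages,
        # then tell one random neighbor about the current state.
        if self.inbox:
            self.state = max(set(self.inbox), key=self.inbox.count)
            self.inbox.clear()
        if neighbors:
            random.choice(neighbors).inbox.append(self.state)

def run(n_agents, n_steps, seed=0):
    random.seed(seed)
    agents = [Agent(i) for i in range(n_agents)]  # (1) the agents
    agents[0].state = 1                           # one dissenting state
    for _ in range(n_steps):                      # (4) serial scheduler
        for a in agents:
            a.step([b for b in agents if b is not a])
    return sum(a.state for a in agents)           # (5) the census

census = run(10, 5)
print(0 <= census <= 10)
```

At >1e6 agents the interesting versions of questions 3 and 4 are exactly the sync, latency, and bandwidth issues raised earlier in the thread; this sketch only fixes the vocabulary.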
In reply to this post by Stephen Guerin
OK... It is all I can do to avoid a segue into Soylent Green analogies...
- SS