FW: Distribution / Parallelization of ABM's


FW: Distribution / Parallelization of ABM's

Stephen Guerin
Laszlo sent the same request out to the NAACSOS list, too. Here's a response
that may be interesting to FRIAM-folk.

-Steve

> -----Original Message-----
> From: Les Gasser [mailto:gasser at uiuc.edu]
> Sent: Friday, October 06, 2006 1:14 PM
> To: Laszlo Gulyas
> Cc: naacsos-list at lists.andrew.cmu.edu; SIMSOC at JISCMAIL.AC.UK
> Subject: Re: Distribution / Parallelization of ABM's
>
> NAACSOS - http://www.casos.cs.cmu.edu/naacsos/
> Laszlo, below are links to five papers that address various
> aspects of these issues, part of a stream of work over about
> a 20 year period.
> These cover conceptualizations, requirements, approaches,
> scaling issues, etc. (Also available through
> http://www.isrl.uiuc.edu/~gasser/papers/).
>
> Others have also worked in these areas, going back to Lesser
> et al.'s work distributing HEARSAY (papers of Lesser &
> Fennell; Lesser & Erman); Ed Durfee's MS thesis at UMASS in
> the early 1980s on distributing a distributed problem solving
> simulator, Dan Corkill's work on parallelizing blackboard
> systems at UMASS, early 1990s (others worked on this too).
> References to all this are available via
> http://mas.cs.umass.edu/pub/ and it has been quite inspiring
> to me personally.  More recently there is also Brian Logan
> and Georgios Theodoropoulos' work on distributing MAS,
> concerning especially dealing with environment models as
> points of serialization.
>
> Hope this helps,
>
> -- Les
>
> Les Gasser, Kelvin Kakugawa, Brant Chee and Marc Esteva
> "Smooth Scaling Ahead: Progressive MAS Simulation from Single
> PCs to Grids"
> in Paul Davidsson, Brian Logan, and Keiki Takadama (Eds.)
> Multi-Agent and Multi-Agent-Based Simulation.
> Lecture Notes in Computer Science 3415, Springer, 2005
> http://www.isrl.uiuc.edu/~gasser/papers/gasser-etal-mamabs04-final.pdf
>
> Les Gasser and Kelvin Kakugawa.
> "MACE3J: Fast Flexible Distributed Simulation of Large,
> Large-Grain Multi-Agent Systems."
> In Proceedings of AAMAS-2002.
> [Finalist for Best Paper Award at this conference.]
> http://www.isrl.uiuc.edu/~gasser/papers/mace3j-aamas02-pap.pdf
>
> Les Gasser.
> "MAS Infrastructure Definitions, Needs, Prospects,"
> in Thomas Wagner and Omer Rana, editors, Infrastructure for
> Agents, Multi-Agent Systems, and Scalable Multi-Agent
> Systems, Springer-Verlag, 2001 Also appears in ICFAI Journal
> of Managerial Economics, 11:2, May, 2004, pp 35-45.
> http://www.isrl.uiuc.edu/~gasser/papers/masidnp-08-with-table.pdf
>
> Les Gasser.
> "Agents and Concurrent Objects."
> IEEE Concurrency, 6(4) pp. 74-77&81, October-December, 1998.
> http://www.isrl.uiuc.edu/~gasser/papers/AgentsAndObjects-07.html
>
> Les Gasser, Carl Braganza, and Nava Herman.
> "MACE: A Flexible Testbed for Distributed AI Research"
> in Michael N. Huhns, ed.
> Distributed Artificial Intelligence
> Pitman Publishers, 1987, 119-152.
> http://www.isrl.uiuc.edu/~gasser/papers/gasser-braganza-herman
> -mace-a-flexible-testbed-for-dai-research-1987.ps
> http://www.isrl.uiuc.edu/~gasser/papers/gasser-braganza-herman
> -mace-a-flexible-testbed-for-dai-research-1987.pdf
>
>
> Laszlo Gulyas wrote:
> > NAACSOS - http://www.casos.cs.cmu.edu/naacsos/
> > [**** Apologies for cross-postings. ****]
> >
> > Dear Colleagues,
> >
> > We are compiling a survey on techniques to parallelize agent-based
> > simulations. We are interested in both in-run and inter-run
> > parallelizations (i.e., when one distributes the agents and when
> > one distributes individual runs in a parameter sweep), though I
> > think the more challenging part is the former.
> >
> > We are aware that in-run parallelization is a non-trivial task and,
> > what's more, it is likely that it cannot be done in general. Our
> > approach is to collect 'communication templates' that may make
> > distribution / parallelization feasible. E.g., when the model is
> > spatial and communication is (mostly) local, there is already
> > work that does the job.
> > However, we foresee other cases where the problem can be solved.
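[For the spatial, mostly-local case just mentioned, the usual communication template is domain decomposition: each process owns a strip of the space and exchanges only boundary ("ghost") cells with its neighbours each step. A sequential Python sketch of the idea; the function name and the local averaging rule are illustrative, not taken from the survey:]

```python
def step_partition(cells, left_ghost, right_ghost):
    # Advance one strip of a 1-D model; only the two ghost values
    # from neighbouring partitions are needed, so communication is
    # one value per boundary per step.
    padded = [left_ghost] + cells + [right_ghost]
    return [(padded[i - 1] + padded[i] + padded[i + 1]) / 3
            for i in range(1, len(padded) - 1)]

grid = [float(i) for i in range(8)]
mid = len(grid) // 2
left, right = grid[:mid], grid[mid:]
# Each "process" sends one boundary cell to its neighbour, then steps.
new_left = step_partition(left, left_ghost=0.0, right_ghost=right[0])
new_right = step_partition(right, left_ghost=left[-1], right_ghost=0.0)
# Stitched together, the partitions agree with stepping the whole grid.
assert new_left + new_right == step_partition(grid, 0.0, 0.0)
```

[When interactions are not local, the ghost region grows until the template degenerates into all-to-all communication, which is why locality is the property that makes this case tractable.]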
> >
> > As I said, we are now compiling a survey. We are aware of a few
> > publications and threads on various lists, but I'd like to ask
> > you all to send me references to such works if you know of them.
> > (If you do not have references, but have ideas that you are ready
> > to share, please do not hesitate either.) Thank you all in advance!
> >
> > For your information, our ultimate goal is to be able to run
> > ABM's on the grid -- which adds another layer of complication,
> > namely the uncertainty of resources and slower communication.
> > But we will deal with that later! ;-)
> >
> > Best regards,
> >
> > Laszlo Gulyas (aka Gulya)
> The NAACSOS mailing list is a service of NAACSOS, the North
> American Association for Computational and Organizational
> Science (http://www.casos.cs.cmu.edu/naacsos/).
> To remove yourself from this mailing list, send an email to
> <Majordomo at lists.andrew.cmu.edu> with the following command
> in the body of your email message:
> unsubscribe naacsos-list
> -
>
>




FW: Distribution / Parallelization of ABM's

Joshua Thorp
I came across this interesting doc on garbage collection in Java:
http://java.sun.com/docs/hotspot/gc5.0/gc_tuning_5.html

which notes:
"""
...virtual machines for the JavaTM platform up to and including  
version 1.3.1 do not have parallel garbage collection, so the impact  
of garbage collection on a multiprocessor system grows relative to an  
otherwise parallel application.

The graph below models an ideal system that is perfectly scalable  
with the exception of garbage collection. The red line is an  
application spending only 1% of the time in garbage collection on a  
uniprocessor system. This translates to more than a 20% loss in  
throughput on 32 processor systems. At 10% of the time in garbage  
collection (not considered an outrageous amount of time in garbage  
collection in uniprocessor applications) more than 75% of throughput  
is lost when scaling up to 32 processors.

"""

I hadn't looked at Java's GC for a while.  It has gotten very
complicated!  I wonder if they have parallelized the GC.  Since the
quote above comes from a document for Java 5.0, apparently not...
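[The numbers Sun quotes drop out of a simple Amdahl-style model: if GC is a serialized fraction f of uniprocessor time, throughput on n processors relative to ideal n-fold scaling is 1 / ((1 - f) + f*n). A quick sanity check in Python; the formula is my reconstruction of the model behind the graph, not stated in the document:]

```python
def relative_throughput(f, n):
    # Fraction f of uniprocessor time is serial (GC); the rest
    # scales perfectly across n processors.
    return 1.0 / ((1.0 - f) + f * n)

for f in (0.01, 0.10):
    loss = 1.0 - relative_throughput(f, 32)
    print(f"GC fraction {f:.0%} -> {loss:.0%} throughput lost on 32 CPUs")
```

[This reproduces the document's "more than 20%" loss at 1% GC time and "more than 75%" loss at 10% GC time on 32 processors.]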

--joshua

On Oct 6, 2006, at 1:36 PM, Stephen Guerin wrote:

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org




FW: Distribution / Parallelization of ABM's

Douglas Roberts-2
If you go to any of the supercomputing centers such as NCSA, SDSC, or PSC,
you do not see parallel Java apps running on any of their machines (with the
occasional exception of a parallel newbie trying, with great difficulty, to
make something work).  The reasons:

   1. there are few supported message-passing toolkits that support
   parallel Java apps,
   2. Java runs 3-4 times slower than C, C++, or Fortran, and machine time
   is expensive, and finally
   3. there are well-designed and maintained languages, toolkits, and APIs
   for implementing HPC applications, and parallel developers use them
   instead of Java.

I do have first-hand experience with a researcher who has stubbornly
insisted on trying to build a parallel Java app using RMI for the
message-passing interface.  It's just a bad match for running on
distributed-memory architectures.  But he loves Java and doesn't know
any HPC-friendly object-oriented languages.  He's wasted a whole year
so far trying to reimplement a subset of MPI functionality...

--Doug

--
Doug Roberts, RTI International
droberts at rti.org
doug at parrot-farm.net
505-455-7333 - Office
505-670-8195 - Cell

On 10/6/06, Joshua Thorp <jthorp at redfish.com> wrote:



FW: Distribution / Parallelization of ABM's

Marcus G. Daniels-3
Quoting Douglas Roberts <doug at parrot-farm.net>:

> If you go to any of the supercomputing centers such as NCSA, SDSC, or PSC,
> you do not see parallel java apps running on any of their machines (with the
> occasional exception of a parallel newbie trying, with great difficulty to
> make something work).

Soon to be centers for hobbyists and enthusiasts!  :-)

http://www.lanl.gov/news/index.php/fuseaction/nb.story/story_id/8932/nb_date/index.php?fuseaction=nb.story&story_id=8932&nb_date=2006-09-07




FW: Distribution / Parallelization of ABM's

Carl Tollander
In reply to this post by Douglas Roberts-2
Josh writes:
> I hadn't looked at Java's GC for a while.  It has gotten very
> complicated!  I wonder if they have parallelized the GC.
JVM setting:  -XX:+UseParallelGC
Since 1.4.2.

However, it's not just a matter of turning on the flag!  See (for example)
http://www.petefreitag.com/articles/gctuning/
which is probably a good article on Java GC tuning in general, let alone
the parallel stuff.

Carl




FW: Distribution / Parallelization of ABM's

Frank Wimberly
In reply to this post by Marcus G. Daniels-3
Good news for the hobbyists.  The NSF centers are anticipating petaflop
capability by 2010:

http://www.psc.edu/science/2005/foreword/

---
Frank C. Wimberly
140 Calle Ojo Feliz              (505) 995-8715 or (505) 670-9918 (cell)
Santa Fe, NM 87505           wimberly3 at earthlink.net

-----Original Message-----
From: [hidden email] [mailto:[hidden email]] On
Behalf Of mgd at santafe.edu
Sent: Friday, October 06, 2006 3:53 PM
To: The Friday Morning Applied Complexity Coffee Group; Douglas Roberts
Subject: Re: [FRIAM] FW: Distribution / Parallelization of ABM's





FW: Distribution / Parallelization of ABM's

Douglas Roberts-2
In reply to this post by Marcus G. Daniels-3
Well, first of all, Roadrunner will be a classified machine that will be
used to do nuclear weapons research at LANL.  Secondly, the architecture of
the machine is an extension of existing distributed memory cluster hardware
that will hopefully leverage compact blade configurations combined with
Opteron-based cluster technology using hybrid chips for specialized number
crunching.  In other words, it will not be a general-purpose machine;
rather, it will require specially coded applications to take advantage of
it.  It will also be a huge power hog.  Finally, if previous experience with
experimental new HPC hardware at LANL is any guide, "Roadrunner" has a bumpy
road ahead of it before it becomes a productive resource.

All of the Teragrid supercomputer centers (NCAR, NCSA, SDSC, PSC, ORNL,
Purdue, Indiana, TACC, and UC/ANL) have large queues of jobs waiting to run
on their available resources.  I suspect the need for HPC cycles that are
being provided by these centers will not go away any time soon.

--Doug

--
Doug Roberts, RTI International
droberts at rti.org
doug at parrot-farm.net
505-455-7333 - Office
505-670-8195 - Cell

On 10/6/06, mgd at santafe.edu <mgd at santafe.edu> wrote:



FW: Distribution / Parallelization of ABM's

Marcus G. Daniels-3
Douglas Roberts wrote:
> In other words, it will not be a general-purpose machine; rather, it
> will require specially coded applications to take advantage of it.  
> [..]  Finally, if previous experience with experimental new HPC
> hardware at LANL is any judge, "Roadrunner" has a bumpy road ahead of
> it before it becomes a productive resource.
These are not disadvantages, but opportunities!

> All of the Teragrid supercomputer centers (NCAR, NCSA, SDSC, PSC,
> ORNL, Purdue, Indiana, TACC, and UC/ANL) have large queues of jobs
> waiting to run on their available resources.  I suspect the need for
> HPC cycles that are being provided by these centers will not go away
> any time soon.
I was just teasing..



FW: Distribution / Parallelization of ABM's

Douglas Roberts-2
I knew that ;-} .

On 10/7/06, Marcus G. Daniels <mgd at santafe.edu> wrote:
>
>
> I was just teasing..
>



--
Doug Roberts, RTI International
droberts at rti.org
doug at parrot-farm.net
505-455-7333 - Office
505-670-8195 - Cell
-------------- next part --------------
An HTML attachment was scrubbed...
URL: /pipermail/friam_redfish.com/attachments/20061007/a184fae8/attachment-0001.html


FW: Distribution / Parallelization of ABM's

Günther Greindl-2
In reply to this post by Douglas Roberts-2
Hello Doug,

I guess your friend is aware of this:

http://www.hpjava.org/

(a Java wrapper to interface with a native MPI package).
What speaks against this?

>   2. java runs 3-4 times slower than C, C++, Fortran, and machine time
>   is expensive, and finally

There have already been many studies (also published on this
list ;-) showing that this is a prejudice. Java apps can be made to
run just as fast as C/C++ -- if you take care while programming.
(I _don't_ want to start a language war!)

Best Regards,
Günther



Reply | Threaded
Open this post in threaded view
|

FW: Distribution / Parallelization of ABM's

Douglas Roberts-2
Hi, Günther.

I don't want to start any language wars either.  I have seen some of these
Java performance studies as well.  Regardless, I have yet to see a
well-written real-world Java application that can outperform an equivalent
well-written C++ app, Java garbage collection being one of the numerous
reasons why.

There is, after all, a reason that there are so few java apps running on any
of the TeraGrid HPC resources.

One limit of the HPJava environment is that it is primarily for use with
data parallel or SIMD (Single Instruction Multiple Data) synchronous
applications.  This type of parallel application is almost by definition
*not* an agent based simulation. In reality most if not all distributed
agent based simulations are MIMD (Multiple Instruction Multiple Data)
asynchronous applications when implemented in an HPC environment.  HPJava
will provide no help here.

--Doug

--
Doug Roberts, RTI International
droberts at rti.org
doug at parrot-farm.net
505-455-7333 - Office
505-670-8195 - Cell

On 10/7/06, Günther Greindl <g.greindl at aon.at> wrote:



FW: Distribution / Parallelization of ABM's

Douglas Roberts-2
By the way, regarding those studies which purport to show that Java is as
fast as C++:  it is easy to construct a test that does not require much
garbage collection for a Java implementation.  In reality, large
agent-based simulations written in Java do garbage collect.  It is
therefore easy to find studies that support either side of the argument
about Java performance.

Here is one report whose performance results are more consistent with my
own personal experience:

From: http://verify.stanford.edu/uli/java_cpp.html : a bit dated but still
relevant --

Here the results for my *400 MHz Pentium II PC* running RedHat 5.2 / 6.1:

  Version         Execution Environment                         Execution Time
  Java array      JVM 1.1.5 v7 / 1.2pre-v2 / 1.2.2 rc2          192s / 96s / 118s
                  HotSpot Server 1.3.0 / IBM Classic VM 1.3.0   16.7s / 10.7s
                  HotSpot Server 1.3.1beta-b15                  16.3s
  Java Vector     JVM 1.1.5 v7 / 1.2 pre-v2 / 1.2.2 rc2         698s / 481s / 543s
                  HotSpot Server 1.3.0 / IBM Classic VM 1.3.0   46.5s / 71.5s
                  HotSpot Server 1.3.1beta-b15                  45.6s
  Java ArrayList  JVM 1.2 pre-v2 / 1.2.2 rc2                    - / 260s
                  HotSpot Server 1.3.0 / IBM Classic VM 1.3.0   18.8s / 62.8s
                  HotSpot Server 1.3.1beta-b15                  18.8s
  C++ pointer     gcc egcs-2.90.29 / 2.95                       3.6s / 3.3s
  C++ object      gcc egcs-2.90.29 / 2.95                       5.7s / 3.9s
  C++ vector      gcc egcs-2.90.29 / 2.95                       6.0s / 5.9s
  C++ STL         gcc egcs-2.90.29 / 2.95                       3.8s / 3.9s

Here more results for a *Sun Ultra 10* running Solaris 8 (SunOS 5.8):

  Version         Execution Environment       Execution Time
  Java array      HotSpot Server VM 1.3.0     17.4s
  Java ArrayList  HotSpot Server VM 1.3.0     26.3s
  C++ pointer     Sun CC 5.1                  5.22s
                  gcc 2.95.2                  5.84s
  C++ STL         gcc 2.95.2                  6.55s

--Doug

On 10/7/06, Douglas Roberts <doug at parrot-farm.net> wrote:

>
> Hi, Gunther.
>
> I don't want to start any language wars either.  I have seen some of these
> java performance studies as well... Regardless, I have yet to see a
> well-written real-world java application that can outperform an equivalent
> well-written C++ app, java garbage collection being one of the numerous
> reasons why.
>
> There is, after all, a reason that there are so few java apps running on
> any of the TeraGrid HPC resources.
>
> One limit of the HPJava environment is that it is primarily for use with
> data parallel or SIMD (Single Instruction Multiple Data) synchronous
> applications.  This type of parallel application is almost by definition
> *not* an agent based simulation. In reality most if not all distributed
> agent based simulations are MIMD (Multiple Instruction Multiple Data)
> asynchronous applications when implemented in an HPC environment.  HPJava
> will provide no help here.
>
> --Doug
>
> --
> Doug Roberts, RTI International
> droberts at rti.org
> doug at parrot-farm.net
> 505-455-7333 - Office
> 505-670-8195 - Cell
>
> On 10/7/06, Günther Greindl <g.greindl at aon.at> wrote:
> >
> > Hello Doug,
> >
> > I guess your friend is aware of this:
> >
> > http://www.hpjava.org/
> >
> > (a Java wrapper to interface with a native MPI package).
> > What speaks against this?
> >
> > >   2. java runs 3-4 times slower than C, C++, Fortran, and machine time
> > >   is expensive, and finally
> >
> > There have already been many studies (some also published on this
> > list ;-) showing that this is a prejudice. Java apps can be made to
> > run just as fast as C/C++ - if you program carefully.
> > (I _don't_ want to start a language war!)
> >
> > Best Regards,
> > Günther
> >
> >
> > ============================================================
> > FRIAM Applied Complexity Group listserv
> > Meets Fridays 9a-11:30 at cafe at St. John's College
> > lectures, archives, unsubscribe, maps at http://www.friam.org
> >
>
>
>


--
Doug Roberts, RTI International
droberts at rti.org
doug at parrot-farm.net
505-455-7333 - Office
505-670-8195 - Cell
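The SIMD-versus-MIMD distinction drawn above can be made concrete with a small Java sketch (all names here are mine, not from any HPC framework): a data-parallel update applies one rule to every element in lockstep, while an agent step dispatches a different behavior per agent.

```java
public class SimdVsMimd {

    // SIMD flavor: one instruction stream applied to many data elements.
    // Every cell gets the same three-point averaging rule, in lockstep.
    static void relaxInPlace(double[] cells) {
        double[] next = new double[cells.length];
        for (int i = 0; i < cells.length; i++) {
            double left = cells[Math.floorMod(i - 1, cells.length)];
            double right = cells[(i + 1) % cells.length];
            next[i] = (left + cells[i] + right) / 3.0;  // same rule everywhere
        }
        System.arraycopy(next, 0, cells, 0, cells.length);
    }

    // MIMD flavor: each agent carries its own behavior, so there is no
    // single instruction stream to vectorize across agents.
    interface Agent { double step(double input); }

    static double[] stepAgents(Agent[] agents, double[] inputs) {
        double[] out = new double[agents.length];
        for (int i = 0; i < agents.length; i++) {
            out[i] = agents[i].step(inputs[i]);  // different rule per agent
        }
        return out;
    }

    public static void main(String[] args) {
        double[] cells = {0.0, 3.0, 0.0};
        relaxInPlace(cells);
        System.out.println(java.util.Arrays.toString(cells));

        Agent[] agents = { x -> x + 1, x -> x * 2, x -> -x };
        System.out.println(java.util.Arrays.toString(
                stepAgents(agents, new double[]{1, 1, 1})));
    }
}
```

The first pattern is what data-parallel environments like HPJava target; the second, heterogeneous pattern is closer to how distributed ABMs actually behave.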


FW: Distribution / Parallelization of ABM's

Marcus G. Daniels-3
In reply to this post by Douglas Roberts-2
Douglas Roberts wrote:
> This type of parallel application is almost by definition *not* an
> agent based simulation. In reality most if not all distributed agent
> based simulations are MIMD (Multiple Instruction Multiple Data)
> asynchronous applications when implemented in an HPC environment.
Software Transactional Memory might be one way to make multithreaded
ABMs easier to program and more scalable...

http://en.wikipedia.org/wiki/Software_transactional_memory
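Java has no built-in STM, but the core transactional idea -- read a consistent snapshot, compute, and commit only if nothing changed, retrying on conflict -- can be sketched with a compare-and-set loop. This is an illustrative toy, not a real STM; the class and method names are mine.

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.UnaryOperator;

public class TxCell<T> {
    private final AtomicReference<T> state;

    public TxCell(T initial) { state = new AtomicReference<>(initial); }

    public T read() { return state.get(); }

    // "Transactionally" apply f: retry until the commit succeeds.
    public T update(UnaryOperator<T> f) {
        while (true) {
            T snapshot = state.get();
            T next = f.apply(snapshot);
            if (state.compareAndSet(snapshot, next)) {
                return next;  // commit succeeded
            }
            // another thread committed first: retry with a fresh snapshot
        }
    }

    public static void main(String[] args) throws InterruptedException {
        TxCell<Integer> wealth = new TxCell<>(0);
        Runnable earn = () -> {
            for (int i = 0; i < 10_000; i++) wealth.update(w -> w + 1);
        };
        Thread a = new Thread(earn), b = new Thread(earn);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(wealth.read());  // 20000: no lost updates
    }
}
```

The appeal for multithreaded ABMs is that agents express *what* state change they want, and conflicting concurrent updates are resolved by retry rather than by hand-placed locks.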




FW: Distribution / Parallelization of ABM's

Marcus G. Daniels-3
In reply to this post by Douglas Roberts-2
Douglas Roberts wrote:
> By the way, regarding those studies which purport to show that java is
> as fast is C++:  it is easy to construct a test that does not require
> much garbage collection for a java implementation.  In reality, large
> agent-based  simulations written in java garbage collect.  It is
> therefore easy to find studies that support either side of the
> argument regarding java performance.
The issue is not whether Java code performance is or can be made to be
as fast as C++.   The issue is whether Java's memory model is amenable
to understanding how code really runs on a processor in a JVM.   If
there are pipeline stalls that are due to allocation and garbage
collection, is it obvious how to intervene?   Very high performance code
requires that the logical execution of a program meshes well with the
work that the CPU can actually do in parallel.   To the extent HPC
people laugh at Java users, it's because so many Java users happen to be
newbies who don't actually have any idea about making code groovy with
a given CPU architecture.  In principle all of the same things can be
done in Java, given some tolerance for indirection and lack of control,
but historically use of supercomputers was something that was planned
with code being implemented around a specific architecture to get
maximum bang for the buck.
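One of the few GC interventions available in plain Java is keeping hot loops allocation-free. This sketch (class and method names are mine) contrasts a step function that allocates a fresh result object per call with one that writes into a caller-owned scratch buffer.

```java
public class AllocDemo {
    static final class Vec2 {
        double x, y;
        Vec2(double x, double y) { this.x = x; this.y = y; }
    }

    // Allocating style: one short-lived Vec2 of garbage per call,
    // which is the kind of pressure a large ABM generates every tick.
    static Vec2 stepAlloc(Vec2 pos, double dx, double dy) {
        return new Vec2(pos.x + dx, pos.y + dy);
    }

    // Reuse style: the caller owns the buffer, so the inner loop
    // produces no garbage at all.
    static void stepInto(Vec2 pos, double dx, double dy, Vec2 out) {
        out.x = pos.x + dx;
        out.y = pos.y + dy;
    }

    public static void main(String[] args) {
        Vec2 pos = new Vec2(0, 0);
        Vec2 scratch = new Vec2(0, 0);
        for (int t = 0; t < 1000; t++) {
            stepInto(pos, 0.5, 0.25, scratch);
            // swap roles instead of allocating a new object
            Vec2 tmp = pos; pos = scratch; scratch = tmp;
        }
        System.out.println(pos.x + " " + pos.y);  // 500.0 250.0
    }
}
```

This buffer-reuse discipline is roughly the Java analogue of the architecture-aware hand tuning described above: it trades idiomatic convenience for predictable behavior under the JVM's memory model.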



FW: Distribution / Parallelization of ABM's

Douglas Roberts-2
In reply to this post by Marcus G. Daniels-3
For SMP implementations, perhaps.  I'm still not a big believer in SMP being
the future for large-scale HPC, in spite of Intel's promise of an 80-core
processor within 5 years:
http://news.com.com/Intel+pledges+80+cores+in+five+years/2100-1006_3-6119618.html,
or perhaps even because of it.  If it's going to take 5 years to get an
80-core chip, distributed memory HPC systems will have continued to increase
their lead in performance/cost over shared memory systems and will be even
more competitive than they are now.

--Doug

--
Doug Roberts, RTI International
droberts at rti.org
doug at parrot-farm.net
505-455-7333 - Office
505-670-8195 - Cell

On 10/7/06, Marcus G. Daniels <mgd at santafe.edu> wrote:

>
>
> Software Transactional Memory might be one way to make multithreaded
> ABMs easier to program and more scalable...
>
> http://en.wikipedia.org/wiki/Software_transactional_memory
>
>
>