FW: Distribution / Parallelization of ABM's


FW: Distribution / Parallelization of ABM's

Stephen Guerin
>From Laszlo Gulyas on the SimSoc list.

> -----Original Message-----
> From: Laszlo Gulyas [mailto:lgulyas at AITIA.AI]
> Sent: Friday, October 06, 2006 5:32 AM
> To: SIMSOC at JISCMAIL.AC.UK
> Subject: Distribution / Parallelization of ABM's
>
> [**** Apologies for cross-postings. ****]
>
> Dear Colleagues,
>
> We are compiling a survey on techniques to parallelize
> agent-based simulations. We are interested in both in-run and
> inter-run parallelizations (i.e., when one distributes the
> agents and when one distributes individual runs in a
> parameter sweep), although I think the more challenging part
> is the former.
>
> We are aware that in-run parallelization is a non-trivial
> task and, what's more, it is likely that it cannot be done in
> general. Our approach is to collect 'communication
> templates' that may make distribution / parallelization
> feasible. E.g., when the model is spatial and communication
> is (mostly) local, there is already work that does the job.
> However, we foresee other cases where the problem can be solved.
>
> As I said, we are now compiling a survey. We are aware of a
> few publications and threads at various lists, but I'd like
> to ask you all to send me references to such works if you
> know about them. (If you do not have references, but have
> ideas that you are ready to share, please, do not hesitate
> either.) Thank you all in advance!
>
> For your information, our ultimate goal is to be able to run
> ABM's on the grid -- which adds another layer of
> complication, namely the uncertainty of resources and slower
> communication. But we will deal with that later!
> ;-)
>
> Best regards,
>
> Laszlo Gulyas (aka Gulya)
> --
> Gulyas Laszlo   |     Laszlo Gulyas
> kut.ig.         |  dir. of research
> AITIA Rt.       |         AITIA Inc.
>
>




FW: Distribution / Parallelization of ABM's

Douglas Roberts
Laszlo,

My colleagues and I have been designing and implementing distributed agent
based simulations for many years.  Two examples of our work are TRANSIMS,
http://www.transims.net/home.html, and EpiSims,
http://ndssl.vbi.vt.edu/episims.html.  Both codes are written in C++ and use
MPI as the message passing toolkit.  They were both designed to run on
distributed memory Linux clusters, and each code has an established user
base.

EpiSims has been used to model disease outbreaks in large metropolitan areas
such as Los Angeles, CA, and Chicago, IL. For more information on the
EpiSims application and current users of the code, see

http://www.sciam.com/article.cfm?chanID=sa006&colID=1&articleID=000BBC08-CEA3-1213-8EA383414B7FFE9F

and

http://necsi.org/community/wiki/index.php/Infectious_disease

Feel free to contact me for more information.

Regards,

--Doug

--
Doug Roberts, RTI International
droberts at rti.org
doug at parrot-farm.net
505-455-7333 - Office
505-670-8195 - Cell



============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org


FW: Distribution / Parallelization of ABM's

Douglas Roberts
I forgot to mention: the project that I am currently working on is named
MIDAS, funded by NIH (http://archive.nigms.nih.gov/research/midas.html).
One of our project teams just received a grant from the TeraGrid community
for 200,000 compute hours to develop a grid-aware version of EpiSims.  The
successful proposal for that grant can be found here:
http://www.friam.org/uploads/MMarathe.pdf

Regards,

--Doug

--
Doug Roberts, RTI International
droberts at rti.org
doug at parrot-farm.net
505-455-7333 - Office
505-670-8195 - Cell




FW: Distribution / Parallelization of ABM's

Owen Densmore
Turns out there is a poll being taken on some mail lists on the topic  
of new parallel hardware and if/how it will be used:
   Parallelism: the next generation -- a small survey
   http://www.nabble.com/A-small-survey-tf2337745.html

     -- Owen




Practical Parallelism

Owen Densmore

OK, so we've had an interesting interchange on Distribution /
Parallelization of ABM's.  But what I'm interested in is a bit more
practical:

Given what *we* want to do, and given the recent advances in desktop,  
workstation, and server computing, and given our experiences over the  
last year with things like the Blender Render Farm .. what would be  
the most reasonable way for us to take a step or two toward higher  
performance?
   - Should we consider buying a fairly high performance linux box?
   - How about buying a multi-processor/multi-core system?
   - Do we want to consider a shared Santa Fe Super Cluster?
   - What public computing facilities could we use?

And possibly more to the point:
   - What computing architecture are we interested in?

I'll say from my experience, I'm mainly interested in two approaches:

   - Unix-based piped systems, where I don't have to consider the
architecture in my programs, only in the way I use sh/bash to execute
them to make sure they work well in parallel.  In plain words: good
parameter scanning, or piped tasks (model, visualize, render) using
built-in unix piping mechanisms with parallel execution of the
programs.  I've done this in the past with dramatic reductions in
elapsed time.  And it's dead simple.

   - Java or similar multi-threaded approaches, where I need a
bit of awareness in my code as to how I approach parallelism, but
*the language supports it*.  I'm not very interested in exotic,
difficult-to-maintain grid/cluster architectures; I'm not at all
convinced that they make sense for the scale we're approaching.  And,
yes, Java is good enough.
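The piped-tasks pattern in the first bullet can be sketched from Python rather than sh: each "model | summarize" pair below is a real OS pipeline (both stages are placeholder `python -c` commands standing in for actual model and render executables), and several parameter runs are launched concurrently:

```python
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

def run_pipeline(param):
    """One 'model | summarize' OS pipeline for a single parameter value."""
    model = subprocess.Popen(            # placeholder model stage
        [sys.executable, "-c", f"print({param} * 2)"],
        stdout=subprocess.PIPE,
    )
    summarize = subprocess.Popen(        # placeholder post-processing stage
        [sys.executable, "-c",
         "import sys; print('result:', sys.stdin.read().strip())"],
        stdin=model.stdout,
        stdout=subprocess.PIPE,
        text=True,
    )
    model.stdout.close()                 # summarize now owns the pipe
    out, _ = summarize.communicate()
    model.wait()
    return out.strip()

# The threads only wait on subprocesses; the actual work runs in
# parallel OS processes, just like backgrounded jobs in a shell script.
with ThreadPoolExecutor() as ex:
    outputs = list(ex.map(run_pipeline, (1, 2, 3)))
print(outputs)
```

The appeal is exactly the one named above: the programs themselves stay architecture-oblivious, and all the parallelism lives in how they are launched and wired together.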

In other words, given Redfish, Commodicast, and other local
scientific computing endeavors, what would be interesting systems for
our scale of computing?  I.e., a reasonable increase in power with a
modest change in architecture.

Owen



Practical Parallelism

Marcus G. Daniels
Owen Densmore wrote:
> In other words, given Redfish, Commodicast, and other local  
> scientific computing endeavors, what would be interesting systems for  
> our scale of computing?  I.e. reasonable increase in power with  
> modest change in architecture.
>  
Get a Mac Pro with low-end processors and then upgrade it to Clovertown
quad core.  Eight processors, and fast ones.  Should work well for
multithreaded Java code.



Practical Parallelism

Douglas Roberts
Owen:

I'm all for practical.  But first, show us your requirements.  A "step or
two towards higher performance" is a bit vague.  ;-}

What's your goal:  16 million agents, simulated at 80X real time?

Or something less.  Or something more.

Joking aside,

What are your requirements?  How much do you need to scale now, and how far
do you need to scale eventually?  How soon do you need to do it?  What are
your agent complexities, output requirements, data I/O needs, and
post-processing requirements?  What existing designs do you have now, and
what are their limitations?  What is the memory footprint of your existing
implementation, and what are your current run times?  Etc., etc.

System requirements should come first;  these will lead to suggestions for
SW & HW implementation environments.

--Doug
--
Doug Roberts, RTI International
droberts at rti.org
doug at parrot-farm.net
505-455-7333 - Office
505-670-8195 - Cell



Practical Parallelism

Bill Eldridge

In particular, focus on what your primary bottlenecks are.
If your data set isn't that large but you're wasting a lot of time
getting data off disk, then a large RAM disk can speed things up
tremendously - 8 gigabytes for two GC-RAMDISK units put together:

http://www.gigabyte.com.tw/Products/Storage/Products_Overview.aspx?ProductID=2180&ProductName=GC-RAMDISK

or 16 gigabytes at

http://www.hyperossystems.co.uk/07042003/products.htm#hyperosHDIIproduct

(And yes, you can do some of this through on-board memory, but
sometimes it's easier to work with an available super-fast drive
than to worry about memory management.)



Practical Parallelism

Owen Densmore
Good question; it certainly pays to know your goal.

At the simplest level, the goal is to do what we're currently doing,  
with better performance, or similar performance but with greater  
resources:

   - ABM: We'd love to be able to run really large simulations (the  
city of Santa Fe, for example) with up to 250,000 agents.  This could  
be with existing ABM systems like NetLogo and Repast, or possibly one  
we're not using yet like MASON, or using Processing/Java as we're  
beginning to do now.

   - Visualization: We'd love to be able to run Blender or a similar  
3D modeler at near real time, with the data derived from the ABM  
above.  The render farm approach seems good for building "movies",  
but not for running the 3D modeler in near real time.

   - Decision Theater/Immersive Modeling: We're just starting to use
some nifty hacks Josh came up with which allow us to project models
onto tables, letting laser pens shone onto the table become input
to the model, via a camera looking at the projected image.
We're not yet running into serious issues, but we may.  It certainly
pushes us toward real time models with sophisticated interaction.

I think we'd be willing to stick to memory resident systems for now  
-- if we can cram 8 Gig or so into one.  I say this because we're not  
yet trying to go for systems that are handling several million  
agents.  That said, I'm not sure how good the memory systems (bus,  
caches, etc) for multi core/processor systems actually are,  
especially in terms of concurrent access to shared data.  And I'd be  
willing to fudge memory residency by including systems with very good  
swapping algorithms, thus letting us exceed memory capacity onto disks.

In terms of languages: the main issue I think will be for language  
support for the multiple cores/processors -- which I think primarily  
boils down to threads and concurrency, and indirectly includes  
swapping via the OS if that indeed becomes a target.

My bias is to look first at husky workstations/servers before going  
into clusters and grids, mainly because I think they're becoming a  
sweet spot.  We have a modest budget.  We know breaking tasks down  
into independent subtasks works well: parameter scans and building  
individual movie frames.  But we'd certainly have to start getting  
into intelligent scheduling and thread/memory architectures.

So basically we'd like to do more and faster versions of the ABM, Vis  
and Immersion work we're currently doing, intelligently mapped onto  
reasonably affordable modern multi core/processor systems.

Owen
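The memory-residency constraint above is easy to sanity-check with back-of-the-envelope arithmetic, sketched here in Python. The 4 KB-per-agent figure is an assumption for illustration, in the ballpark of Marcus's half-million-agents-in-under-2-GB Swarm report elsewhere in this thread:

```python
def fits_in_memory(n_agents, bytes_per_agent, ram_gb, headroom=0.5):
    """Rough feasibility check: does the agent population fit in RAM,
    leaving a `headroom` fraction free for the OS, caches, and
    collected output?"""
    budget = ram_gb * 2**30 * (1 - headroom)
    return n_agents * bytes_per_agent <= budget

# Assumed figure: ~4 KB per agent, object overhead included.
# 250,000 agents then need about 1 GB, well inside an 8 GB box
# even with half the memory held back as headroom.
print(fits_in_memory(250_000, 4096, 8))   # True
```

By the same arithmetic, an 8 GB workstation starts to pinch somewhere in the low millions of agents at that per-agent size, which is roughly where clusters or very good swapping become interesting.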





Practical Parallelism

Marcus G. Daniels
Owen Densmore wrote:
>    - ABM: We'd love to be able to run really large simulations (the  
> city of Santa Fe, for example) with up to 250,000 agents.
250,000 should not be a problem on one CPU.  Profile the code, natively
compile and optimize the crucial bits.  Write core loops in C if you
have to.  Buy a copy of Intel VTune to really see what is going on to
please or displease the processor.  I've run half a million agents in
Swarm on an ordinary Athlon 64 in under 2 GB.
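The "profile first" advice can be illustrated with Python's built-in cProfile (the codes discussed here are C++/Java/Swarm; this is just a sketch of the workflow, with a toy agent loop standing in for a real model):

```python
import cProfile
import io
import pstats
import random

def step_agents(positions, velocities):
    """The inner loop a profile should surface as the 'crucial bit'."""
    for i in range(len(positions)):
        positions[i] += velocities[i]

def run(n_agents=50_000, n_steps=20):
    rng = random.Random(42)
    positions = [rng.random() for _ in range(n_agents)]
    velocities = [rng.random() * 0.01 for _ in range(n_agents)]
    for _ in range(n_steps):
        step_agents(positions, velocities)

# Profile the run and list where the time actually goes; those hot
# functions are the candidates for native compilation or a C rewrite.
profiler = cProfile.Profile()
profiler.enable()
run()
profiler.disable()

report = io.StringIO()
pstats.Stats(profiler, stream=report).sort_stats("cumulative").print_stats(5)
print(report.getvalue())
```

The point is the same regardless of language or tool (VTune, gprof, a JVM profiler): measure before optimizing, then spend effort only on the loops the measurement names.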



Practical Parallelism

Russell Standish
I concur. One million agents on a single CPU is usually feasible in
C/C++. You do need to make sure agent interactions are local, though,
as otherwise doing the full n^2 interactions will kill any simulation.

Cheers
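The standard fix for the n^2 blow-up is the locality trick above: bin agents into a grid of cells sized to the interaction radius, and only test pairs in neighboring cells. A minimal sketch in plain Python (a real code would, as discussed, do this in C/C++):

```python
import random
from collections import defaultdict
from itertools import product

RADIUS = 0.05  # interaction radius; also used as the cell size

def build_cells(positions):
    """Bin each agent index into the grid cell containing its (x, y)."""
    cells = defaultdict(list)
    for i, (x, y) in enumerate(positions):
        cells[(int(x / RADIUS), int(y / RADIUS))].append(i)
    return cells

def local_neighbors(i, positions, cells):
    """Agents within RADIUS of agent i, checking only the 3x3 block of
    cells around i instead of all n-1 other agents."""
    x, y = positions[i]
    cx, cy = int(x / RADIUS), int(y / RADIUS)
    found = []
    for dx, dy in product((-1, 0, 1), repeat=2):
        for j in cells.get((cx + dx, cy + dy), ()):
            if j != i:
                px, py = positions[j]
                if (px - x) ** 2 + (py - y) ** 2 <= RADIUS ** 2:
                    found.append(j)
    return found

rng = random.Random(1)
positions = [(rng.random(), rng.random()) for _ in range(2000)]
cells = build_cells(positions)
# Each query now touches only a handful of agents, so a full interaction
# pass is roughly O(n) for uniform densities instead of O(n^2).
neighbors_of_0 = local_neighbors(0, positions, cells)
print(len(neighbors_of_0))
```

With the cell size equal to the interaction radius, every in-range neighbor is guaranteed to sit in the 3x3 block, so the result matches a brute-force all-pairs scan exactly.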


--

----------------------------------------------------------------------------
A/Prof Russell Standish                  Phone 0425 253119 (mobile)
Mathematics                        
UNSW SYDNEY 2052                 R.Standish at unsw.edu.au            
Australia                                http://parallel.hpc.unsw.edu.au/rks
            International prefix  +612, Interstate prefix 02
----------------------------------------------------------------------------




Practical Parallelism

Douglas Roberts
I concur, but with conditions.  The number of agents that can comfortably
reside on a CPU depends on both the agent complexity and the amount of
memory on the node.  EpiSims agents are relatively complex, yet we regularly
run 1.4 million agents per CPU on one of our dual-Opteron clusters, where
each dual-CPU node has about 4 GB of memory.  If we are doing other things
in the simulation besides just moving the agents around and allowing disease
to propagate, we require more CPUs.  For example, we sometimes collect
person-person contact-pattern dendrograms during a run, which quickly eats
lots of memory, even though we cache and flush the growing dendrogram
frequently.

--Doug
--
Doug Roberts, RTI International
droberts at rti.org
doug at parrot-farm.net
505-455-7333 - Office
505-670-8195 - Cell
