Re: Information request/Amazon EC2
Posted by Douglas Roberts-2
URL: http://friam.383.s1.nabble.com/Re-Information-request-Amazon-EC2-tp3475158p3480830.html
Jack,
It would be a fun project to move a largish, already-running distributed ABM from a standard Linux cluster over to EC2.
If only my company would pay me to play just on the fun projects...
--Doug
On Thu, Aug 20, 2009 at 12:47 PM, Jack K. Horner
<[hidden email]> wrote:
At 09:00 AM 8/20/2009, Doug Roberts wrote:
Doug,
Whether a given parallel computing system performs well enough when running a message-passing-oriented Agent-Based Modeling (ABM) application depends on, among other things:
1. How the agents are distributed across the processing
elements (pes, nominally one microprocessor per pe) of the
system. Computational-mesh-oriented (CMO) applications that use
message-passing services are sufficiently analogous to
ABM-oriented applications that we can use mesh performance
data to help bound what ABM performance is likely to be,
given an allocation of agents per pe.
In particular, it is not uncommon for CMO
applications using ~50 state variables per cell to allocate
~100,000 cells per pe; state updates in such a system are
accomplished by message passing (using OpenMP or MPI) among cells.
100,000 cells per pe is an empirically derived "rule of thumb",
but it is roughly invariant across modern production-class
compute nodes and a wide spectrum of mesh-oriented applications.
For optimal performance, the cells allocated to a pe should
be the set of cells that communicate most frequently with
each other. Sometimes a user can characterize that set
through a propagation-rate function defined in the
problem space (e.g., the speed of sound in a
medium, the speed at which a virus travels from one agent
to another, the speed of chemical reactions in a
biological network). Sometimes we don't know anything about
the communication/propagation dynamics, in which case
"reading" a pile of steaming chicken entrails predicts
performance about as well as anything else.
By analogy, if there were no more than ~50 state variables
per agent in an ABM application, an allocation of up to
100,000 tightly-communicating agents per pe would provide
usable performance on many production-class clusters today
(a cluster of PlayStations is an exception to
this rule of thumb, BTW).
Allocating one agent per pe would be a vast waste of
compute power for all except trivial problem setups.
All of the above is useful only if the user can control
the allocation of agents to pes. Most production-class
clusters, including EC2, provide such controls.
Note that this allocation problem has to be addressed by the
*user* on *any* cluster. (A minimal allocation sketch appears
after point 2, below.)
2. If the computation/communication ratio has to be near 1
to obtain a tolerable time-to-solution, the
performance of the message-passing services matters
hugely (the second sketch below shows one way to measure
that ratio). MPI and OpenMP have been optimized on only a few
commercially available systems. (A home-brew
multi-thousand-node Linux cluster, in contrast, is nowhere
near optimal in this sense. Optimizing the latter, as
a few incorrigibly optimistic souls have discovered,
amounts to redesigning much of Linux process management.
If bleeding-edge performance matters, there is no free lunch.)
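
To make point 1 concrete, here is a minimal sketch (in C with MPI) of a
locality-preserving block assignment of agents to pes. The agent count, the
1-D locality assumption, and the ~100,000-agents-per-pe target are
illustrative assumptions, not measurements from any particular cluster.

/* Illustrative sketch only: a block (locality-preserving) assignment of
 * agents to MPI ranks, in the spirit of point 1 above.  The agent count
 * and the 1-D "neighbors interact most" locality key are assumptions
 * chosen for the example. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Global problem: N agents on a 1-D line; agents interact mostly with
     * near neighbors, so contiguous blocks keep frequent communicators on
     * the same pe (rule-of-thumb target: ~100,000 agents per pe). */
    const long n_agents = 100000L * nprocs;

    /* Block decomposition: each rank owns a contiguous slice, with the
     * remainder spread over the low-numbered ranks. */
    long block   = n_agents / nprocs;
    long rem     = n_agents % nprocs;
    long lo      = rank * block + (rank < rem ? rank : rem);
    long n_local = block + (rank < rem ? 1 : 0);
    long hi      = lo + n_local - 1;

    printf("rank %d owns agents [%ld, %ld] (%ld agents)\n",
           rank, lo, hi, n_local);

    /* In a real ABM, per-timestep updates for the agents at the block
     * edges would be exchanged with the two neighboring ranks here
     * (e.g. via MPI_Sendrecv), so only block boundaries generate
     * off-node messages. */

    MPI_Finalize();
    return 0;
}

Compiled with mpicc and launched under mpirun, each rank reports the
contiguous block of agents it owns; only the agents at the block edges
would generate off-node traffic in an actual simulation.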
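
And to make point 2 concrete, here is a minimal sketch of measuring the
per-step computation/communication ratio with MPI wall-clock timers. The
stand-in update loop, the halo size, and the step count are placeholders,
not figures from any real ABM.

/* Illustrative sketch only: timing "computation" (a dummy state update)
 * against "communication" (a ring halo exchange) to estimate the
 * computation/communication ratio mentioned in point 2.  All sizes are
 * placeholder assumptions. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N_LOCAL 100000   /* agents (or cells) owned by this rank  */
#define HALO    1000     /* boundary entries exchanged each step  */
#define STEPS   100      /* number of timesteps to sample          */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    double *state    = calloc(N_LOCAL, sizeof *state);
    double *halo_out = calloc(HALO, sizeof *halo_out);
    double *halo_in  = calloc(HALO, sizeof *halo_in);

    int right = (rank + 1) % nprocs;
    int left  = (rank + nprocs - 1) % nprocs;

    double t_comp = 0.0, t_comm = 0.0;

    for (int step = 0; step < STEPS; ++step) {
        /* "Computation": a stand-in for the agents' state update. */
        double t0 = MPI_Wtime();
        for (long i = 0; i < N_LOCAL; ++i)
            state[i] = 0.5 * state[i] + 1.0;
        t_comp += MPI_Wtime() - t0;

        /* "Communication": exchange boundary state with ring neighbors. */
        t0 = MPI_Wtime();
        MPI_Sendrecv(halo_out, HALO, MPI_DOUBLE, right, 0,
                     halo_in,  HALO, MPI_DOUBLE, left,  0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        t_comm += MPI_Wtime() - t0;
    }

    if (rank == 0)
        printf("comp %.3f s, comm %.3f s, comp/comm ratio %.2f\n",
               t_comp, t_comm,
               t_comm > 0.0 ? t_comp / t_comm : 0.0);

    free(state); free(halo_out); free(halo_in);
    MPI_Finalize();
    return 0;
}

The printed ratio is only as meaningful as the stand-in workload, of course;
the point is that once the ratio approaches 1, the quality of the MPI (or
OpenMP) implementation on the target cluster dominates time-to-solution.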
Jack
Jack K. Horner
P. O. Box 266
Los Alamos, NM 87544-0266
Voice: 505-455-0381
Fax: 505-455-0382
email: [hidden email]
--
Doug Roberts
[hidden email]
505-455-7333 - Office
505-670-8195 - Cell
============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at
http://www.friam.org