Anybody else remember all the fanfare when this was started?


Douglas Roberts-2
http://tech.slashdot.org/story/13/01/03/232259/supercomputer-repossessed-by-state-may-be-sold-in-pieces

--
Doug Roberts

505-455-7333 - Office
505-672-8213 - Mobile

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com

Re: Anybody else remember all the fanfare when this was started?

Marcus G. Daniels
On 1/4/13 9:59 AM, Douglas Roberts wrote:
> http://tech.slashdot.org/story/13/01/03/232259/supercomputer-repossessed-by-state-may-be-sold-in-pieces
>
At SC11, I heard it was used by DreamWorks to render Puss in Boots
(alongside their own supercomputers).
By now it's getting to be an antique anyway.

Marcus


Re: Anybody else remember all the fanfare when this was started?

Owen Densmore
In reply to this post by Douglas Roberts-2

What was the fuss at the time?  I forget the specifics.

It seemed to me that really fast internet, which NM does not have, would have been required to use it.  Wasn't that the grand plan, that institutions would use it remotely?

I'd far rather have HUGE internet bandwidth so that I could quickly configure Amazon instances for large computations, feeding massive data to and from them over a fast network.

BTW: did TranSims and its related projects ever try Amazon?  Likely the routing between instances wouldn't be sufficient, but I'm not that familiar with the architecture.

An oversimplification, I realize, but nowadays it seems like supercomputing without a network is like a BMW without wheels.

   -- Owen



Re: Anybody else remember all the fanfare when this was started?

Douglas Roberts-2
I built a 100-core cluster in AWS and did some testing on it using the AWS spot market, but never ran TranSims or EpiSims on it.  I've got a white paper on this that's awaiting publication as a letter to the editor in Nature.
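
For concreteness, here is a minimal sketch of the spot-request side of such a setup, written against today's boto3.  The AMI ID, instance type, bid ceiling, and key-pair name are placeholders, not the spec Doug actually used:

    # Hypothetical sketch: bid for ~100 cores on the EC2 spot market.
    # Every identifier and price below is a made-up placeholder.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.request_spot_instances(
        SpotPrice="0.25",                   # ceiling, $/hour per instance
        InstanceCount=13,                   # 13 x 8-core boxes ~ 104 cores
        LaunchSpecification={
            "ImageId": "ami-xxxxxxxx",      # placeholder machine image
            "InstanceType": "c1.xlarge",    # 8 cores, a 2013-era type
            "KeyName": "my-keypair",        # placeholder SSH key pair
        },
    )

    for req in response["SpotInstanceRequests"]:
        print(req["SpotInstanceRequestId"], req["State"])

Once the requests are fulfilled the instances appear as ordinary EC2 hosts, with the caveat that the spot market can reclaim them if the going price rises above the ceiling.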

--Doug



Re: Anybody else remember all the fanfare when this was started?

Douglas Roberts-2
In reply to this post by Marcus G. Daniels
We're ALL antiques these days.

--Doug



Re: Anybody else remember all the fanfare when this was started?

Marcus G. Daniels
In reply to this post by Douglas Roberts-2
A simulation platform built on NaCl or LLVM-to-JS would probably make more
sense for coarse-grained distributed simulation.

http://en.wikipedia.org/wiki/Google_Native_Client
https://github.com/kripken/emscripten/wiki


Re: Anybody else remember all the fanfare when this was started?

Steve Smith
In reply to this post by Douglas Roberts-2
Before I hit <send> on one of my massive missives... here's a shorter but hopefully entertaining question:

What is the value of the familiar figurative description of supercomputing as "Big Iron"?

Our original "big iron" (automotive and motorcycle, anyway) came from the upper Midwest (Wisconsin, Minnesota, Michigan), as did much of our supercomputing (CDC, Cray Research, etc.).

If the CDC 6600/7600 was a Chevy Bel Air, was the Cray-1 a Dodge Challenger or a Ford Mustang?
If the IBM 360 was a Harley Electra Glide, what was the Indian Scout or the...

Both Harley and Indian let the German BMW military bikes inform their own designs for the same purpose, much as the Nazi Enigma may have motivated, if not inspired, the Colossus, the ENIAC, the Mark?

I've always loved my rice-burners (mostly Hondas, but at least one Suzuki, Yamaha, and Kawasaki), so I suppose I should be a fan of the Fujitsu Kei?

And what is the Tesla Motors of supercomputing?

And what would a BMW GS kinda guy like Doug consider capable on the big open highways, rain-slicked twisties, and logging roads of scientific computing?

Puzzling on it...

- Steve

Re: Anybody else remember all the fanfare when this was started?

Steve Smith
In reply to this post by Douglas Roberts-2
I remember it quite well...
> http://tech.slashdot.org/story/13/01/03/232259/supercomputer-repossessed-by-state-may-be-sold-in-pieces
>
With conflicted (but not mixed) feelings.  I (all but) bid on providing
the visualization gateways for the system as my first major project
after I left LANL in '08... I'm glad I saw the writing on the
dysfunctional wall of state procurement, as well as the ill-conceived
nature of the whole project.  I never cared much for our former
governor and didn't think he understood much outside of politics, this
project being one of the more obvious follies (to me).  But don't
mistake this for me agreeing with Martinez's knee-jerk attempts to
dismantle everything Richardson did.  I think the supercomputer
initiative was well intentioned and, maybe at one level of
understanding, very promising.

But frankly, the main thing "Big Iron" is good for is creating an
expensive sandbox to spend money in.  You have to have a lot of money
to spend to even begin to use it effectively... mostly on programmers
and scientists-cum-programmers to make proper use of it (and machine
rooms with lots of power and environmental control and...), plus staff
to keep it running and up to date and...  The state (whether as a
government, a collection of academic institutions, a budding commercial
venue for high tech, thousands of entrepreneurs, etc.) simply didn't
(and doesn't) have the kind of oomph it needs.  Sandia and LANL and
NCGR all use UberComputing about as effectively as anyone, but they
have huge staffs and budgets to make that happen.

I had worked in "big iron" shops most of my career, never really
believing in them.  While I *do* think some important things were done
because there was big iron at places like LANL, I think the bulk of the
"important things" happened first on the smaller machines (VAXes with
VMS or BSD) in the '80s, then on the plethora of scientific
workstations (e.g. Sun, SGI, Apollo, HP, NeXT, etc.) in the '90s, and
ultimately on PCs running Linux and the mini-clusters that grew from
them.

Even though I worked with and on the big iron of different generations
(CDC/Cray/TMC/*NIX-cluster-of-the-month) and even built a few utility
Linux clusters, I never believed that the roughly one or two
order-of-magnitude increases led to many qualitative advances in
computing or science.  There certainly have been *some* important
advances; the most obvious (in my uneducated opinion) might have been
in bioinformatics.  Generally, the value seems to have been in
embarrassingly parallel problems where there was funding to pay for the
"big iron" and a clear value to shortening the time to an answer by a
couple of orders of magnitude (like getting an answer in a day that
otherwise might take a week or even a few months).

I think some science was accelerated quite well by that kind of
leverage.  But in other fields it simply became an excuse for bloated
budgets, distracting scientists from their science and making
(letting?) them become computer scientists.  There is plenty of
precedent for this as early as the '40s and '50s, which I respect...
modern computing might not exist were it not for those early "*ACs"
(MANIAC, ILLIAC, etc.).

It may seem contradictory, but I *do* believe all that flailing I
observed (and too often participated in) with big iron, hordes of small
iron (clusters), and DIY/NIH development (from OSes to hardware to text
editors, for criminy's sake!) was an important early seed for much of
our current consumer, entertainment, and hobbyist-driven computing.
While *games* may have really fueled the graphics cards, it was SGI
that got it all moving in the right direction in the first place...
getting over the early hurdles.

I'm not an expert on the Space Program, but while Tang, space blankets
and zero-G ballpoints might be the more obvious but mundane (trite?)
spinoffs, there are also more impactful spinoff technologies like
Velcro and photovoltaics and heat pipes (and bears, oh my!) to point
to.  Similarly, the huge (ginormous?) budgets that Defense and Energy
put into uber-computing over decades have had valuable side effects...
but I never believed that a *state* could achieve the same thing.
Maybe the Japanese or Chinese "state", but not NM...

Nevertheless, I *am* sympathetic with those who really, really
(really) wanted it to work.  But I am not sympathetic with the Martinez
gang, who have been using every opportunity to bash the previous
administration.  I think this particular failure is real, but I think
the fanfare around *demolishing* it is totally politics-driven hype of
the worst kind.

Yes, the gear is vintage if not antique, and there is unlikely to be
any *commercial* market for it.  I'm not sure of all the implications
of "selling" it in pieces to the (state-run) universities, but it seems
likely the funding to "buy" it comes out of the same pocket it goes
back into when sold.  This might be a useful bookkeeping fiction, but I
suspect it is another Richardson-bashing/Martinez-grandstanding
opportunity.

I don't really agree with Owen's presumption that such resources can't
be used effectively without a fat pipe all the way into our houses (or
offices)...  Remote X, VNC, etc. make it pretty easy to do 99% of what
you need to do without ever bringing the bulk of the data back over the
net.  I too romanticize having a direct Tbit/sec drop at my dinner
table, but I don't think the lack of one explains my lack of
utilization of big iron (whether in Rio Rancho, Los Alamos or Mountain
View).

The availability of the Amazon cloud, and the relatively affordable
price of a densely packed GPU/CPU mini-cluster, challenges us all to
put our projects where our mouths are and actually implement some
effective parallel algorithms that can do the heavy lifting.  The tools
are there to make this 100 times easier than it ever was when I was
learning/developing the tricks of the trade... and it is still hard.
My only words of wisdom on the topic might be that instead of limiting
ourselves to well-known "embarrassingly parallel" algorithms, or
swimming upstream trying to force-fit intrinsically serial algorithms
into parallel environments, we should look to discovering (recognizing,
inventing?) uniquely different approaches.  This is what the nonlinear
and complexity science movement of the '80s did, in its own way, to
reconfigure formerly intractable (intellectually as well as
computationally) problems into tractable, and sometimes even *elegant*,
problems with similarly elegant solutions.
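
To make the "embarrassingly parallel" baseline concrete, here is a toy sketch of the pattern: independent work items fanned out across local cores with Python's multiprocessing, a Monte Carlo estimate of pi standing in for a real workload.

    # Embarrassingly parallel in miniature: independent chunks, no
    # communication between workers until the final reduction.
    import random
    from multiprocessing import Pool

    def count_hits(n):
        """Count how many of n random points land inside the unit circle."""
        hits = 0
        for _ in range(n):
            x, y = random.random(), random.random()
            if x * x + y * y <= 1.0:
                hits += 1
        return hits

    if __name__ == "__main__":
        chunks = [250_000] * 8        # eight independent work items
        with Pool() as pool:          # one worker per core by default
            hits = sum(pool.map(count_hits, chunks))
        print("pi is roughly", 4.0 * hits / sum(chunks))

The intrinsically serial cases Steve mentions are precisely the ones this fan-out-and-reduce shape cannot capture.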

- Steve


Re: Anybody else remember all the fanfare when this was started?

Steve Smith
Addendum to my previous rant/rave/ramble...

I do think that the rise in popularity of the Graph500 over the LINPACK
benchmark is an important admission that not only have computer
architectures changed, but so have the characteristics of the
"interesting problems" being run on them.
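
The contrast in a nutshell: LINPACK rewards dense floating-point throughput, while Graph500 rewards irregular memory traversal.  A toy rendering of each kernel follows, with sizes chosen purely for illustration.

    # Toy versions of the two benchmark kernels.  LINPACK-style work is
    # dense floating-point math; Graph500-style work is pointer chasing.
    from collections import deque
    import numpy as np

    n = 500                                    # illustration-sized only

    # LINPACK in spirit: solve a dense linear system Ax = b (flops-bound).
    A = np.random.rand(n, n) + n * np.eye(n)   # well-conditioned matrix
    b = np.random.rand(n)
    x = np.linalg.solve(A, b)

    # Graph500 in spirit: breadth-first search over a random graph
    # (memory-latency-bound, scored in traversed edges per second).
    rng = np.random.default_rng(0)
    adj = {v: rng.integers(0, n, size=8).tolist() for v in range(n)}
    visited, frontier = {0}, deque([0])
    while frontier:
        v = frontier.popleft()
        for w in adj[v]:
            if w not in visited:
                visited.add(w)
                frontier.append(w)

    print("dense solve done; BFS reached", len(visited), "of", n, "vertices")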



Re: Anybody else remember all the fanfare when this was started?

Douglas Roberts-2
In reply to this post by Steve Smith
Sorta like an old man's underwear, Steve: Depends.

On-demand cluster computing in Amazon Web Services using the spot market is certainly cost-effective for now-and-then computing demands.  If you're a 24x7 big-computing kind of company, it probably still makes better sense to buy and operate your own hardware.

Life-cycle econ cost analysis is your friend, here.
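
A back-of-the-envelope version of that analysis, with every number invented purely for illustration:

    # Toy life-cycle comparison: rent spot capacity vs. own the hardware.
    # All figures are invented placeholders, not real 2013 pricing.
    spot_rate = 3.25                 # $/hour for a ~100-core spot cluster
    owned_capex = 40_000.0           # purchase price of a comparable cluster
    owned_opex = 1.50                # $/hour power/cooling/admin, paid 24x7
    lifetime_hours = 3 * 365 * 24    # assume a 3-year service life

    for utilization in (0.05, 0.25, 1.00):   # fraction of hours computing
        rent = spot_rate * lifetime_hours * utilization
        own = owned_capex + owned_opex * lifetime_hours
        print(f"{utilization:4.0%} duty cycle: rent ${rent:>9,.0f}  own ${own:>9,.0f}")

With these made-up numbers the crossover lands somewhere below full-time use: occasional jobs favor renting, while a 24x7 load favors owning, which is the point.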

Oh, and the Beemer GS is a fine road bike that also just happens to not be allergic to dirt, thanks for asking.  :)

--Doug

On Fri, Jan 4, 2013 at 1:06 PM, Steve Smith wrote:
> And what would a BMW GS kinda guy like Doug consider capable on the big open highways, rain-slicked twisties, and logging roads of scientific computing?




Re: Anybody else remember all the fanfare when this was started?

Marcus G. Daniels
In reply to this post by Steve Smith
On 1/4/13 1:23 PM, Steve Smith wrote:
> I do think that the rise in popularity of the Graph500 over the
> LINPACK benchmark is an important admission that not only have
> computer architectures changed, but so have the characteristics of
> the "interesting problems" being run on them.
>
> And what is the Tesla Motors of supercomputing?
In this spirit: the Convey MX (power/performance on Graph500).

Marcus


Re: Anybody else remember all the fanfare when this was started?

Steve Smith
In reply to this post by Douglas Roberts-2
Doug -
> Sorta like an old man's underwear, Steve: Depends.
Is mine showing again?  I'll go back to suspenders, I guess...
>
> On-demand cluster computing in Amazon Web Services using the spot
> market is certainly cost-effective for now-and-then computing demands.
But Koenicke didn't bring a Zipcar to race the Scorpions at the Viaduct,
even if he didn't need a daily driver.
> If you're a 24x7 big-computing kind of company, it probably still
> makes better sense to buy and operate your own hardware.
>
> Life-cycle econ cost analysis is your friend, here.
OK... so I think we are talking about owning a fleet of utility vehicles
with an in-house garage for maintenance and repair?
> Oh, and the Beemer GS is a fine road bike that also just happens to
> not be allergic to dirt, thanks for asking.  :)
I consider the GS line the ultimate in top-drawer (and priced
accordingly) multi-surface machinery, but I still like my scrappy
(vintage if not antique) Japanese bikes for the cost/performance...
like a few-year-old multicore, high-memory Intel machine running
self-installed Debian, opposite the latest tricked-out Alienware gaming
system delivered with RHL?

I think a lot of computer purchase/ownership is not unlike that for
motorcycles... it's about a lot more than simple economics.  Whether at
the laptop or in the machine room.

- Steve




Re: Anybody else remember all the fanfare when this was started?

Barry MacKichan
In reply to this post by Steve Smith
I have a soft spot for the early one at the Institute for Advanced Study in Princeton.  But for it, I wouldn't have had an air-conditioned office when I was there.
--Barry

On Jan 4, 2013, at 1:06 PM, Steve Smith wrote:

> modern computing might not exist were it not for those early "*ACs" (MANIAC, ILLIAC, etc.)

