I started a discussion on large memory Java applications on Java Lobby -- basically, how to approach graph problems with 250 million nodes. The response was surprisingly good!

http://www.javalobby.org/java/forums/t72726.html

It seems there are two approaches:

- Hardware: build a large server or a cluster and use the interesting distributed solutions that have become available for this.
- Software: use interesting streaming|piping solutions, or possibly DB-like disk solutions that can be tuned for your application.

Has anyone here tackled truly large problems like this successfully?

-- Owen

Owen Densmore
http://backspaces.net - http://redfish.com - http://friam.org
For my thesis work, I'm looking at doing very large graph search to find optimal solutions to Rubik's cube. As a component of our approach, we're planning to do a full breadth-first search of a graph with over 1 trillion nodes.

To do this, we're using the combined disk space of a cluster of machines. Of course, disks are slow if you use random access, so you have to convert all of your algorithms to using streaming access only (depending on the algorithm, this can be very tricky). In fact, with today's highly non-uniform memory hierarchies (L1 cache -> L2 cache -> ... -> main memory), even using RAM by way of random access can make large computations infeasible. So, these same streaming-access-only algorithms can also provide a massive speedup when disk isn't used. Further, if you use streaming access only with disk, the aggregate disk bandwidth can equal the bandwidth to main memory. So, if you have a cluster of 100 machines with 300 GB of disk each, your cluster works like a single machine with 30 TB of RAM.

Depending on whether you have an explicit graph (the whole thing stored in memory) or an implicit graph (defined by a function providing the neighbor nodes of any given node), the techniques may differ. I'm more familiar with implicit graph search, but it sounds like you may be working with explicit graphs. Here is a reference from each camp dealing with very large searches in external memory:

AI community -- implicit graph searches:
Korf and Schultze, Large-Scale Parallel Breadth-First Search
http://www.cs.ualberta.ca/~bulitko/F05/CMPUT651/papers/pbfs.pdf

Explicit graph manipulation:
STXXL, Standard Template Library for Extra Large Data Sets (C++ library)
http://stxxl.sourceforge.net/

I don't have any Java-specific advice, but the key idea of using only streaming access with large data structures is language independent.

-Dan

--
[ http://www.ccs.neu.edu/home/kunkle/ ]
============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
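The streaming-only discipline Dan describes -- generate all successors, sort them, then strip duplicates and already-visited nodes in single linear passes -- is the core of external-memory BFS (the "delayed duplicate detection" idea in the Korf and Schultze paper). Below is a toy, in-memory Java sketch of that discipline; the 10-node implicit graph and all names are hypothetical, and a real implementation would sort and merge files on disk rather than lists in RAM:

```java
import java.util.*;

// Toy sketch of streaming-only breadth-first search: each BFS level is
// produced by generating all successors, sorting them, and then removing
// duplicates and already-visited nodes in single linear passes. In a real
// external-memory search, the sort and merge run over files on disk.
public class StreamingBfs {

    // Hypothetical implicit graph on nodes 0..9: neighbors are computed,
    // never stored, as in an implicit Rubik's-cube-style search.
    static List<Long> neighbors(long v) {
        return List.of((2 * v) % 10, (3 * v + 1) % 10);
    }

    // Sort, then drop adjacent duplicates in one linear pass.
    static List<Long> sortedUnique(List<Long> xs) {
        List<Long> s = new ArrayList<>(xs);
        Collections.sort(s);
        List<Long> out = new ArrayList<>();
        for (long x : s)
            if (out.isEmpty() || out.get(out.size() - 1) != x) out.add(x);
        return out;
    }

    // Linear merge: elements of sorted list a not present in sorted list b.
    static List<Long> minus(List<Long> a, List<Long> b) {
        List<Long> out = new ArrayList<>();
        int j = 0;
        for (long x : a) {
            while (j < b.size() && b.get(j) < x) j++;
            if (j == b.size() || b.get(j) != x) out.add(x);
        }
        return out;
    }

    static List<Long> concat(List<Long> a, List<Long> b) {
        List<Long> out = new ArrayList<>(a);
        out.addAll(b);
        return out;
    }

    // Full breadth-first search from start; returns the node list per level.
    static List<List<Long>> bfsLevels(long start) {
        List<List<Long>> levels = new ArrayList<>();
        List<Long> visited = new ArrayList<>(List.of(start));
        List<Long> frontier = List.of(start);
        while (!frontier.isEmpty()) {
            levels.add(frontier);
            List<Long> succ = new ArrayList<>();
            for (long v : frontier) succ.addAll(neighbors(v)); // streaming write
            frontier = minus(sortedUnique(succ), visited);     // sort + linear merge
            visited = sortedUnique(concat(visited, frontier)); // merge visited set
        }
        return levels;
    }

    public static void main(String[] args) {
        System.out.println(bfsLevels(0L));
    }
}
```

Each step touches its data strictly in order, which is what lets the same code run over disk-resident files at full sequential bandwidth.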
Hi Dan, that seems like a clever approach. One remark on this bit:

> In fact, with today's highly non-uniform memory hierarchies (L1 cache ->
> L2 cache -> ... -> main memory), even using RAM by way of random access
> can make large computations infeasible.

By Little's law (http://en.wikipedia.org/wiki/Little's_law), concurrency = bandwidth * latency. If it's possible to parallelize the search, then the latency of the memory system can be tolerated, as there are more threads of execution that can do useful work; chances are they won't all be blocked on memory access. For example, with a distributed shared memory package on an InfiniBand cluster (e.g. Intel's Cluster OpenMP), or a highly multithreaded CPU architecture (UltraSPARC T1), I think there's still practical hope of not giving up on random-access algorithms.

Marcus
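Marcus's Little's-law arithmetic is easy to make concrete. The sketch below uses hypothetical round numbers (10 GB/s of bandwidth, 100 ns miss latency, 64-byte cache lines): roughly 16 cache-line requests would have to be in flight at all times, which is why a single thread chasing pointers cannot saturate the memory system but many hardware threads can.

```java
// A concrete reading of Little's law: the number of memory requests that
// must be "in flight" equals bandwidth * latency. All figures below are
// hypothetical round numbers, not measurements of any particular machine.
public class LittlesLaw {

    // Outstanding requests needed to keep a memory system at full bandwidth.
    static double requiredConcurrency(double bytesPerSec, double latencySec,
                                      double bytesPerRequest) {
        return bytesPerSec * latencySec / bytesPerRequest;
    }

    public static void main(String[] args) {
        // 10 GB/s bandwidth, 100 ns miss latency, 64-byte cache lines:
        // about 16 cache-line misses must be outstanding at all times.
        double c = requiredConcurrency(10e9, 100e-9, 64);
        System.out.printf("required concurrency: %.3f requests%n", c);
    }
}
```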
Indeed. A grid is not a grid is not a grid. A loosely coupled "roomful of Linux boxes" running on a couple of legs of gigabit Ethernet has far different attributes than a closely coupled system like the IBM SP2 or the Cray XD1.

The one and only "really big cluster" thing I did was back in the late 90s: we bought one of the first really big Sun E10000 configurations, put 100 GB of memory in it, and loaded the most heavily used part of our Westlaw legal database. $2 million of Sun stuff replaced $50 million of plug-compatible VM mainframes. That project, though, was done using C and a homebrew in-memory knockoff of Adabas.

=jim

===================================
Jim Rutt
voice: 505-989-1115
Jim Rutt wrote:
> $2 million of Sun stuff replaced $50 million of plug-compatible VM
> mainframes. That project, though, was done using C and a homebrew
> in-memory knockoff of Adabas.

Ah, right; then there's the question of what kind of database could store all of the data and access it fast enough (either for streaming or for parallel traversal).

Someone in our group went to the recent MySQL conference and heard mention of their `Falcon' backend (which I guess they started because Oracle bought up InnoDB), now in development. Below is a link to some notes on the talk; it sounds intriguing to me.

http://mike.kruckenberg.com/archives/2006/04/jim_starkey_int.html
Hi Dan, good to hear from you.

So, given your understanding of RedFish: would you recommend some sort of "supercomputing" stunt for us? Either a cluster, or a biggish multiprocessor 64-bit Linux/Solaris box? We've been considering such a step, or hoping that some sort of "compute farm" becomes available, much like the Blender render farm we're using for great visualization work.

-- Owen

Owen Densmore
http://backspaces.net - http://redfish.com - http://friam.org
Having worked at Sun, where an E10000 or three were available for our use in the labs (64 processors, huge memory), I remember rebuilding a huge shell script to run a power-law exploration in parallel:

http://www.backspaces.net/sun/PLaw/

The job went from overnight on a really fast laptop to 15 minutes. Watching the multi-processor perfmeter was like watching a bunch of boxes catch fire!

-- Owen

Owen Densmore
http://backspaces.net - http://redfish.com - http://friam.org
Speaking of Blender, this is very neat and produced (IIRC) entirely with Blender. The story is short and very surreal, but I was quite impressed.

http://orange.blender.org/

On Friday 26 May 2006 20:21, Owen Densmore wrote:
> Blender
We have the DVD of Elephants Dream in the office and will play it before Carlos Gershenson's talk on Wednesday.

Interestingly, all .blend files used to create the movie are included on the DVD and downloadable from the web.

-Steve
http://www.tllts.org/dl.php

That's a link to a recent interview with Bassam Kurdali, the director of ED. It happens to be on my favorite leisure-time podcast, The Linux Link Tech Show. The sound quality is often very poor, and the show tends to be very freeform, but I've been listening to their show for nearly 2 years -- they must be doing something right! Beware of the occasional right-wing and/or Christian fundamentalist insert by Linc or Pat.
Oh, there's also a fair amount of "inappropriate language."
> Ah, right, then there's the question of what kind of database could
> store all of the data and access it fast enough (either for streaming
> or for parallel traversal).

During my year at LBL last year, I worked with the principals of this system in the Scientific Data Management group:

http://sdm.lbl.gov/fastbit/

We've been talking about its application to ultra-scale graphs since I got there. One motivation I have is to precompute various "local" graph-theoretic measures to use later in heuristics for various larger-scale searches, measures, and layout. A blindingly obvious application of this technology is "graph conditioning", or extracting a subgraph based on a set of conditions.

One good "trick" we discovered was to derive ubiquitous properties of a raw data set, index the result in a way that compresses well (the resulting index uses a tiny fraction of the space required for the indexed quantity), and then discard the derived quantity. When a search returns a set of "candidates" for matching, the (often tiny) subset of results can then be used to recalculate the derived property. It leads to an interesting multi-step tradeoff between computation and storage.

FastBit uses a (patent-pending) Word-Aligned Hybrid compression of bit indices. We discovered all kinds of nice (read: potentially useful) properties of the compressed indices themselves, including providing a rough entropy measure of the data being indexed.

FastBit shines in high-dimensional range queries... so, like: give me the call graph of all the phone calls in England originated between 9:00 PM and 9:23 PM GMT, which lasted for more than 10 seconds and less than 2 minutes, with differing area codes, but excluding those originating from 555 or terminating in a number above 799... I'm hoping it can be made meaningful in the context of ultra-scale graphs as well. I'm thinking it will require another level of indirection.

- Steve
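Steve's phone-call query maps directly onto equality-encoded bitmap indices: one bitmap per value, OR for a range condition, AND across columns. Here is a minimal uncompressed Java sketch, with hypothetical columns and values; the Word-Aligned Hybrid compression that makes FastBit practical at scale is exactly the part this toy omits.

```java
import java.util.BitSet;

// Minimal sketch of an equality-encoded bitmap index: one bitmap per
// distinct value, a range query ORs bitmaps together, and multi-column
// conditions are combined with AND. The columns here are hypothetical;
// FastBit's contribution (not shown) is compressing these bitmaps.
public class BitmapIndex {
    private final BitSet[] buckets;  // buckets[v].get(r) <=> column[r] == v
    private final int rows;

    BitmapIndex(int[] column, int numValues) {
        rows = column.length;
        buckets = new BitSet[numValues];
        for (int v = 0; v < numValues; v++) buckets[v] = new BitSet(rows);
        for (int r = 0; r < rows; r++) buckets[column[r]].set(r);
    }

    // Rows with lo <= value <= hi: OR together the per-value bitmaps.
    BitSet range(int lo, int hi) {
        BitSet out = new BitSet(rows);
        for (int v = lo; v <= hi; v++) out.or(buckets[v]);
        return out;
    }

    public static void main(String[] args) {
        // Two hypothetical columns over the same five rows.
        BitmapIndex duration = new BitmapIndex(new int[]{1, 5, 7, 3, 9}, 10);
        BitmapIndex areaCode = new BitmapIndex(new int[]{0, 2, 2, 8, 2}, 10);
        BitSet hits = duration.range(3, 7);  // 3 <= duration <= 7 ...
        hits.and(areaCode.range(2, 2));      // ... AND areaCode == 2
        System.out.println(hits);            // rows satisfying both conditions
    }
}
```

The bitwise AND/OR operations are themselves streaming passes over the bitmaps, which is what makes high-dimensional conjunctive queries so fast.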
Steve Smith wrote:
> FastBit shines in high-dimensional range queries...

Is the FastBit source code freely redistributable, or is there a commercial implementation? I can't find mention of either on the web page.
On May 27, 2006, at 4:33 PM, Marcus G. Daniels wrote:
> Is the FastBit source code freely redistributable, or is there a
> commercial implementation? I can't find mention of either on the web page.

Right now (as I understand it) the answer (as you may anticipate) is neither. This does not necessarily mean that someone cannot use their binaries in a non-commercial system... or ultimately license their software if you want to use it commercially... or learn something completely new from their approach and work it from a different angle.

- Steve
Hi,

The archives to this mailing list are available in one-month chunks, but there doesn't seem to be any way to download the entire archive or search the entire thing.

This "split into months" seems to be standard for Mailman, and I suspect it was chosen many years ago. Now that both disk space and bandwidth are so much bigger, does it make sense to switch to one big archive file? Is this possible/easy to do with Mailman?

Thanks,
Martin
I second that; the archives are getting to be of a size where a search/indexing facility would be very useful.

Carl
On 01 Jun 2006, at 21:25, Carl Tollander wrote:
> I second that; the archives are getting to be of a size where a
> search/indexing facility would be very useful.

Maybe put a Google search box on just the archive directories? I'm not sure if Google allows directory-only searches. If not, I'm sure http://www.master.com allows for it (also free).

Best regards,

Carlos Gershenson...
Centrum Leo Apostel, Vrije Universiteit Brussel
Krijgskundestraat 33. B-1160 Brussels, Belgium
http://homepages.vub.ac.be/~cgershen/

"Tendencies tend to change..."
I briefly looked into this a few times in the past.

Here's a relevant FAQ from Mailman:
http://www.python.org/cgi-bin/faqw-mm.py?req=show&file=faq01.011.htp

Of the options listed there, I'll experiment with adding Friam traffic to mail-archive.com and let you know how it goes.

-Steve
How about GMane:

http://gmane.org/

In my experience, it's by far the best of the mail/web/search interfaces.

-- Owen

Owen Densmore
http://backspaces.net - http://redfish.com - http://friam.org
Archives and RSS feeds of Friam are now available at gmane.org:

http://tinyurl.com/zwjl4

-Steve