Re: Turing's Cathedral


Pieter Steenekamp
Nick,

Maybe an example of where we have used the Monte Carlo Method will help you?

We designed a model-based control application in the pulp and paper industry to control product quality in a batch digester, where pulp is produced from wood chips. The pulping process liberates the fibres in the wood chips by dissolving the binding material. In many cases the pulp is used to make paper, but in this case it is used to make synthetic products.

To convince the customer of the viability of the project, we used the Monte Carlo Method as part of the benefit study. 

From historical data we calculated the statistical distributions of the random variables. If my memory serves, the main random variable was a property of the wood chips, but I'll have to check the details to confirm this.

So we ran a large number of simulations of the process, under both their current control and our new proposed control, with the random inputs varied by a software pseudo-random generator that we designed to give the same statistical distribution as the historical data. We then used these results to calculate the expected benefits of our proposed model-based control application.
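A minimal sketch of that kind of benefit study might look like the following. To be clear, every number, distribution, and "quality" model here is an illustrative assumption of mine, not the actual plant model: I assume the chip property is normally distributed, and that the proposed control simply compensates for more of the chip variability than the existing one.

```python
import random
import statistics

random.seed(42)

def chip_property():
    # Random input: a hypothetical wood-chip property, drawn to match the
    # historical distribution (assumed normal here purely for illustration).
    return random.gauss(mu=50.0, sigma=5.0)

def quality_current(chips):
    # Assumed quality model under the existing control: quality degrades
    # with deviation of the chip property from its target of 50.
    return 100.0 - abs(chips - 50.0) * 2.0 + random.gauss(0, 1.0)

def quality_proposed(chips):
    # Assumed model under the proposed model-based control, which is taken
    # to compensate for part of the chip variability.
    return 100.0 - abs(chips - 50.0) * 0.8 + random.gauss(0, 1.0)

N = 10_000  # number of simulated batches
chips = [chip_property() for _ in range(N)]
current = [quality_current(c) for c in chips]
proposed = [quality_proposed(c) for c in chips]

# Expected benefit: mean quality gain over many simulated batches.
benefit = statistics.mean(proposed) - statistics.mean(current)
print(f"expected quality gain over {N} simulated batches: {benefit:.2f}")
```

The point of the Monte Carlo step is the averaging: a single batch says little because the inputs are random, but the mean over thousands of simulated batches gives a stable estimate of the expected benefit.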

Because of other commitments I have personally neglected this project a bit, and I'll have to check with my colleagues for updated details, but I think the current status is that we are developing an updated software model of the process, using the customer's physical lab model of the process, in parallel with implementing a trial controller on one of the customer's 20- or 30-odd digesters.


Pieter Steenekamp


On 28 January 2015 at 23:02, Nick Thompson <[hidden email]> wrote:

Dear Friammers,

This is going to be one of those longish Thompson emails, but I think even for those who are interested primarily in computation, it might have value.  I am reading a pop book on the history of the Institute for Advanced Study.  It is sort of the Institute for Advanced Study equivalent of Waldrop’s “biography” of the Santa Fe Institute, COMPLEXITY, but the author, George Dyson, is a much stronger historian, and the level of detail about the people involved in the bomb and computer projects of the ’40s and ’50s is extraordinary.  The book is called TURING’S CATHEDRAL.  I thought the following passage might be interesting to you-all, so I keyed it in.


Monte Carlo originated as a form of emergency first-aid, in answer to the question:  What to do until the mathematician arrives?  “The idea was to try out thousands of such possibilities and, at each stage, to select by chance, by means of a ‘random number’ with suitable probability, the fate or kind of event, to follow it in a line, so to speak, instead of considering all branches,” Ulam explained.  “After examining the possible histories of only a few thousand, one will have a good sample and an approximate answer to the problem.”  The new technique propagated widely along with the growing number of computers on which it could run.  Refinements were made, especially the so-called Metropolis algorithm (later the Metropolis-Hastings algorithm) that made Monte Carlo even more effective by favoring more probable histories from the start.  “The most important property of the algorithm is … that deviations from the canonical distribution die away,” explains Marshall Rosenbluth, who helped invent it.  “Hence the computation converges on the right answer!  I recall being quite excited when I was able to prove this.”  [Dyson, p 191]
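Rosenbluth's claim that "deviations from the canonical distribution die away" can be seen in a toy Metropolis sampler. The sketch below is my own minimal illustration, not anything from the book: the target is a standard normal density known only up to a constant, and the chain is deliberately started far from the typical region to show that it converges on the right distribution anyway.

```python
import math
import random

random.seed(0)

def unnormalized_density(x):
    # Standard normal, without the normalizing constant; Metropolis only
    # ever uses ratios of densities, so the constant is never needed.
    return math.exp(-0.5 * x * x)

def metropolis(n_samples, step=1.0, x0=10.0):
    # Deliberately bad starting point (x0=10): the early samples deviate
    # badly from the target, but those deviations die away.
    x, samples = x0, []
    for _ in range(n_samples):
        proposal = x + random.uniform(-step, step)
        # Accept with probability min(1, p(proposal)/p(x)): more probable
        # histories are favored, but less probable ones still occur.
        if random.random() < unnormalized_density(proposal) / unnormalized_density(x):
            x = proposal
        samples.append(x)
    return samples

samples = metropolis(50_000)
tail = samples[10_000:]  # discard the burn-in, where the bad start lingers
mean = sum(tail) / len(tail)
var = sum((s - mean) ** 2 for s in tail) / len(tail)
print(f"post-burn-in sample mean {mean:.2f}, variance {var:.2f}")
```

After the burn-in, the sample mean and variance settle near the target's 0 and 1, which is the convergence Rosenbluth was excited to prove.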


I would love to have this explained to me, either at our Friday meeting or here, on the list.  Ulam’s first sentence, above, seems to contradict all the others.  I thought the whole point was to consider all the branches.  Confused, as usual.
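One way to see Ulam's contrast concretely is a problem small enough that both approaches fit in a few lines. This toy example is mine, not from the book: for a 20-step random walk of ±1 steps, "considering all branches" means summing over all 2**20 paths, while Monte Carlo follows only a few thousand random histories and averages.

```python
import math
import random

random.seed(1)
STEPS = 20

# All branches: the walk ends above +4 exactly when there are more than
# 12 up-steps, so we can sum the binomial counts over all 2**20 paths.
exact = sum(math.comb(STEPS, k) for k in range(13, STEPS + 1)) / 2 ** STEPS

# Monte Carlo: follow a few thousand random histories "in a line" instead.
TRIALS = 5_000
hits = 0
for _ in range(TRIALS):
    position = sum(random.choice((-1, 1)) for _ in range(STEPS))
    if position > 4:
        hits += 1
estimate = hits / TRIALS

print(f"all branches: {exact:.4f}   Monte Carlo sample: {estimate:.4f}")
```

Here enumerating every branch is still feasible, so the two answers can be compared; in the bomb calculations it was not, and the sampled estimate was the only one available. A few thousand histories give "a good sample and an approximate answer," not the exact one.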


My personal interest in the passage arises from its relation to Charles Sanders Peirce’s account of induction.  Recall from my earlier diatribes that (1) Peirce is the inventor of much of what we learned in graduate school as “statistics” and that (2) Peirce is a brick-wall monist.  Human experience (not just individual experience) is all that there is, and reality is therefore that upon which human cognition will converge in the very long run.  Peirce’s account of induction goes [very] roughly like this:  Experience either will, or will not, converge.  (There either will, or will not, be a measurement upon which our measurements of the acceleration of gravity will converge.)  Matters about which experience converges are particularly valuable to organisms, and therefore they (we) have developed cognitive mechanisms (habits) to track them (see Hume?).  If our measurements do converge, our confidence in the “location” upon which they will converge gradually becomes stronger (or more precise) as experience is continued.  Since any random process produces periods of convergence, any such induction is always a hypothesis subject to disconfirmation.  How similar is this to the Monte Carlo idea?


Nicholas S. Thompson

Emeritus Professor of Psychology and Biology

Clark University

http://home.earthlink.net/~nickthompson/naturaldesigns/



============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com

