Glen,
thanks for your fascinating answer. It answers all my questions for the moment. Here is the present state of my thinking on the Praeludium, which I have now read four times and with which my concern is approaching obsession.

For what it is worth, I agree with Rosen (Praeludium, Life Itself) that formal systems are examples of models or, more precisely, that the relation of the formal system and the thing it represents is a modeling relation. But the modeling relation is much more ubiquitous than that. Natural selection is an example in good standing of a model but, as Owen Densmore keeps reminding me, is not a formalism.

I think it is a general property of models that they are intentional. An intentional relation is one in which the truth or falsity of an assertion depends on the point of view from which the world is seen. The classic philosophical example has to do with Lady Astor* and the Titanic. As an "extensional" utterance, the statement that Lady Astor booked passage on the Titanic is plainly true. Whatever else one might say about the Titanic does not change the truth value of the utterance. That was the boat she booked herself on and she was on it when it hit the iceberg. However, as an intentional utterance, its truth value is utterly dependent on the point of view from which the Titanic is seen. She probably did book passage on the Largest Ship in the White Star Line, and she did book passage on the ship whose maiden voyage was a society event on both sides of the Atlantic. We can have some confidence in these assertions, because behaviors directed toward status are part of what we know about Lady Astor's behavior repertoire.* She did not, in this sense, book passage on the ship that hit an iceberg and sank in the North Atlantic: not, I would assert, because that idea was not in a mythic place called her mind, or even lodged in her brain, but because nothing in the design of Lady Astor's behavior is congruent with that intent. She was not into risky behavior.

Thus, the formalism "Lady Astor Booked Passage On the Titanic" is too impoverished in entailments to capture the essential quality of her act. Or, to put it round the other way, "an infinite number of distinct formalizations ... [would be required] ... to capture all the qualities ... [of her act]."

What Rosen seems to be saying in the Prelude of LI is that formalisms are like other models in this respect. They are intentional in that their truth value depends to some degree on the uses to which the formalism is going to be put, where the formalizer is headed when the formalization is applied. Without that "reference" any formalism is incomplete. Thus, to be a good formalism, a formalism has to be in some sense informal, right?

Nick

* My deep apologies to Lady Astor and her ancestors. In point of fact, I know nothing of Lady Astor ... full stop. With fulsome prejudice based solely on her name, I grant her whatever qualities are necessary for my exposition. She is a model, and like every unfortunate thing that has ever been used as a model, she has been abused. For all I know, she may have been a London street urchin whose first name was Lady and who climbed into a trunk on the pier and never knew upon what ship she booked passage.

Nicholas S. Thompson
Research Associate, Redfish Group, Santa Fe, NM (nick at redfish.com)
Professor of Psychology and Ethology, Clark University (nthompson at clarku.edu)
Nicholas Thompson on 01/01/2008 10:59 PM:
> thus, to be a good formalism, a formalism has to be in
> some sense informal, right?

This is a difficult question phrased in a misleadingly simple way.

We now know that mathematics is _more_ than formal systems (thanks to Goedel and those that have continued his work). I.e. we cannot completely separate semantics from syntax. The semantic grounding of any given formalism (regardless of how "obvious" the grounding is) provides the hooks to the usage of the formalism. Hence, by the very nature of math, any formalism can be traced back to the intentions for the formalism (though the original intentions may be so densely compressed that uncompressing them may be hard or impossible).

And in that sense, including your statement above, all formalisms will then be good formalisms because they all have a semantic grounding.

But just because all formalisms assume a semantic grounding doesn't mean they're "informal". The hallmark of a formalism is that it encompasses all the assumptions in axioms that are well-understood and clearly stated up front. I.e. a good formalism won't let new axioms slip in anytime during inference. So, that's what it now means to be "formal". An informal inferential structure loosens that constraint and will allow one to introduce new semantics as the inference chugs along.

--
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
It's too bad that stupidity isn't painful. -- Anton LaVey
That's nice, describing informality as sneaking in new axioms (or 'understandings', perhaps) in a series of assertions. Of course it's all but impossible not to do that, given the complex way that ideas arise out of feelings and intents. What, then, about the invisible assumptions that tend to be numerous in any attempt at making formal statements? Would the likely presence of hidden assumptions make all formal statements presumably informal?
Phil
Sent from my Verizon Wireless BlackBerry
sy at synapse9.com on 01/02/2008 05:27 PM:
> That's nice, describing informality as sneaking in new axioms (or
> 'understandings', perhaps) in a series of assertions. Of course it's
> all but impossible not to do that, given the complex way that ideas
> arise out of feelings and intents. What, then, about the invisible
> assumptions that tend to be numerous in any attempt at making formal
> statements? Would the likely presence of hidden assumptions make all
> formal statements presumably informal?

Well, with the stronger form of the word "formal" and the expansion of the word "informal" to refer to formal systems that allow the introduction of new axioms at will, we'd have to be careful to distinguish ill-formed systems from well-formed but open formal systems.

Since adding new axioms as you go along might result in an inconsistent formal system (where the new axiom contradicts another axiom or a theorem derived from previous axioms), it's right to _mistrust_ the truth value of any formal statement unless one can demonstrate that:

1) no new axioms were added since consistency was demonstrated, or
2) if new axioms were added, the resulting system is shown to be consistent.

But, such mistrust is not the same as declaring the formal statement (or the system in which it's written) to be informal... just not worthy of blind trust. In the case of (1), we would NOT accuse the statement or system of being "informal" in this new sense. In case (2), the _statement_ might not be informal but the system in which it's stated would become "informal" (in this new softer sense of the word).

--
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
In all affairs it's a healthy thing now and then to hang a question mark on the things you have long taken for granted. -- Bertrand Russell
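[A toy sketch of the bookkeeping described above -- purely illustrative, not from the thread; the variable names and formulas are invented. Treat the "axioms" as propositional formulas and brute-force a model search: a new axiom slipped in mid-inference can silently make the set unsatisfiable, which is exactly why cases (1) and (2) call for re-checking consistency.]

from itertools import product

def consistent(axioms, variables):
    """Return True if some truth assignment satisfies every axiom."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(axiom(env) for axiom in axioms):
            return True
    return False

variables = ["p", "q"]
axioms = [lambda e: e["p"] or e["q"],          # axiom 1: p or q
          lambda e: (not e["p"]) or e["q"]]    # axiom 2: p implies q
print(consistent(axioms, variables))           # True -- the system has a model

axioms.append(lambda e: not e["q"])            # a new axiom slips in mid-inference
print(consistent(axioms, variables))           # False -- the system is now inconsistent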
In reply to this post by glen ep ropella
Hi,
I leafed through some of Rosen's stuff and the Kercel paper. I unfortunately do not have the time at the moment to work through it in detail, but some things "disturb" me:

1) The assertion that the incomputable enters with "life". Rosen seems aware that he moves into the range of vitalism here, and tries to defend himself by saying it is not mechanism versus vitalism but simplicity versus complexity (= uncomputability in the Rosen sense). For my problems with his "uncomputability" see below.

2) Rosen repeatedly refers to Gödel's result and talks about how it shows how impoverished formalizations are in regard to "real" mathematics. This of course leads to the question what "real" mathematics is. It seems that Rosen is a Platonist (how else would he know what "real" mathematics is?), but this is an opinion one need not share. He also ignores that Gödel's results do not place limits on what one can formally model (in general), but only with regard to a formal system (finitely given, of sufficient strength, etc.).

The question _if_ physics is completely formalizable/computable is indeed an interesting one, but why should this stage only start when life is concerned? (see below) Either it applies to the universe as a whole or it does not.

3) In the Kercel paper, we read (a common algebraic rendering of these maps is sketched just after this message):

:START QUOTE:
Given this, what does the (M,R)-system imply? In this model, the inferential entailments, the metabolism map f, the repair map F, and the replication map b represent the causal entailments in an organism, i.e., the efficient causes of metabolism, repair, and replication, respectively. If the (M,R)-system is actually in a modeling relation with the organism, then the same closed-loop hierarchical structure of containment of entailment must apply to the efficient causes. Just as map F contains map f contains map b contains map F, ad infinitum, the efficient cause of repair contains the efficient cause of metabolism contains the efficient cause of replication contains the efficient cause of repair, ad infinitum.

This is what it means to say that organisms contain the causal counterpart of impredicative loops. Rosen's expression "closed to efficient cause" now becomes clear.

A real-world process is "closed to efficient cause" when it contains a closed-loop hierarchy of containment of efficient causes. Each efficient cause is contained by all the members of the loop that come before it, and contains all the members of the loop that come after it.
:END QUOTE:

What I fail to see is how "life" embodies this "infinite" cycle as in his (M,R)-system: after all, life started around 4 billion years ago - so I can _finitely_ list all cycles up to some point where we are not interested anymore (depending on which theory of the origin of life you prefer, RNA first or metabolism first or whatever).

4) An ultrafinitistic view would generally rule out noncomputable models anyway (see for instance the nice essay by Doron Zeilberger: http://www.math.rutgers.edu/~zeilberg/mamarim/mamarimPDF/real.pdf or http://en.wikipedia.org/wiki/Ultrafinitism). So Rosen's models also make some mathematical assumptions (which, admittedly, are widely shared - but may change, of course).

5) What I also find strange is the opposition to computation: after all, with computers we are just beginning to find an "embarrassment of riches"; fine to explore other avenues (Rosen), but I think it is much too early to dismiss the computational approach. So why his radical assertion that computational approaches to describe life must fail?

6) A point addressed in the Kercel paper: the ambiguity of language and the definiteness of computation. This is of import for the AI/ALife community, and it is indeed a problem, but it is, I think, addressed if one can control the symbol grounding problem (Harnad, http://citeseer.ist.psu.edu/harnad90symbol.html). If one can let an AI/ALife really learn symbols (instead of programming them or assigning meaning to symbols by specification of the programming language; the "learned" symbols would not make sense to us then, of course) they would inherently have the same ambiguity as our concepts have for us (because they would be learned in an ambiguous world).

Conclusion: I think Rosen's ideas are valuable contributions in that they sensitize us to certain problems, especially in modelling life. But the case against computability is unconvincing.

I would be very interested in the thoughts of other FRIAMers, especially Glen, who seems to have read a lot of Rosen's work - maybe you can clear up some things.

Regards,
Günther

--
Günther Greindl
Department of Philosophy of Science
University of Vienna
guenther.greindl at univie.ac.at
http://www.univie.ac.at/Wissenschaftstheorie/
Blog: http://dao.complexitystudies.org/
Site: http://www.complexitystudies.org
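[A compact reference point for point 3 above. This is a summary of the rendering of the (M,R) maps that usually appears in the secondary literature on Rosen, assumed -- but not verified here -- to correspond to Kercel's f, F, and b; check it against the papers themselves.]

$f \colon A \to B$  (metabolism)
$\Phi \colon B \to H(A,B)$  (repair)
$\beta \colon H(A,B) \to H(B, H(A,B))$  (replication)

Here A stands for environmental inputs, B for metabolic products, and H(X,Y) for the set of maps from X to Y. Each map in the loop yields the previous one as its output, and Rosen's "closure to efficient cause" is the claim that the loop closes without importing anything from outside the system (under suitable conditions the replication map is said to be recoverable from an element of B itself).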
In reply to this post by glen ep ropella
Glen,
> sy at synapse9.com on 01/02/2008 05:27 PM:
> > That's nice, describing informality as sneaking in new axioms (or
> > 'understandings', perhaps) in a series of assertions. Of course it's
> > all but impossible not to do that, given the complex way that ideas
> > arise out of feelings and intents. What, then, about the invisible
> > assumptions that tend to be numerous in any attempt at making formal
> > statements? Would the likely presence of hidden assumptions make all
> > formal statements presumably informal?
>
> Well, with the stronger form of the word "formal" and the expansion of
> the word "informal" to refer to formal systems that allow the
> introduction of new axioms at will, we'd have to be careful to
> distinguish ill-formed systems from well-formed but open formal systems.
>
> Since adding new axioms as you go along might result in an inconsistent
> formal system (where the new axiom contradicts another axiom or a
> theorem derived from previous axioms), it's right to _mistrust_ the
> truth value of any formal statement unless one can demonstrate that:
>
> 1) no new axioms were added since consistency was demonstrated, or
> 2) if new axioms were added, the resulting system is shown to be
> consistent.

That's about where I get to as well: we need to accept that formal systems are all embedded in informal ones. Introducing new principles in a formal argument is then just an error in constructing the argument from accepted principles. If that occurs, it means you need to 'get to know' the new principle or go back to the old ones.

> But, such mistrust is not the same as declaring the formal statement
> (or the system in which it's written) to be informal... just not worthy
> of blind trust. In the case of (1), we would NOT accuse the statement
> or system of being "informal" in this new sense. In case (2), the
> _statement_ might not be informal but the system in which it's stated
> would become "informal" (in this new softer sense of the word).

But then, going back to the thread, Rosen's theorem seems to be offered as proof that life requires gaps in efficient causation. Could those gaps be regions? Would it be a corollary to say no formal system can explain emergent organization of self-referencing causal loops, and so maybe make ordinary complex systems which develop by growth a typical case example for Rosen's idea? That would imply a map of the deterministic plane sort of like Swiss cheese, with all individual emergent systems defining 'dark matter' islands of self-organization isolated from efficient causation by the 'white matter' of 'the cheese itself'... ..Whew!... ;-)
In reply to this post by Günther Greindl
Günther Greindl wrote:
> The question _if_ physics is completely formalizable/computable is
> indeed an interesting one, but why should this stage only start when
> life is concerned? (see below) Either it applies to the universe as a
> whole or it does not.

Even in digital systems there are unprovable things, like determining whether a program will stop, or behavioral non-determinism from parallelism -- things about the physical hardware not even in the logical programming model.

I can see why category theory could be useful to reason about different takes on abstract function, but it's not clear to me that's a better way to understand what and why cells do the things they do than, e.g., building on solved structures using molecular dynamics simulation, or by instrumenting parts of cells with fluorescent nanocrystals, e.g. http://link.aip.org/link/?APPLAB/91/224106/1

> Guillaume A. Lessard, Peter M. Goodwin, and James H. Werner
> Center for Integrated Nanotechnologies (MPA-CINT), Los Alamos National
> Laboratory, Los Alamos, New Mexico 87545, USA
>
> We describe an instrument that extends the state of the art in
> single-molecule tracking technology, allowing extended observations of
> single fluorophores and fluorescently labeled proteins as they undergo
> directed and diffusive transport in three dimensions. We demonstrate
> three-dimensional tracking of individual quantum dots undergoing
> diffusion for durations of over a second at velocities comparable to
> those of intracellular signaling processes.
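[A quick sketch of the "whether a program will stop" point, in Python rather than anything hardware-specific. This is the standard diagonal argument, not something from Marcus's message, and the function names are made up for illustration: any claimed halting oracle can be handed a program built to do the opposite of whatever the oracle predicts.]

def build_counterexample(halts):
    """Given a claimed oracle halts(f) -> bool, return a program it must get wrong."""
    def troublemaker():
        if halts(troublemaker):   # oracle predicts "halts"...
            while True:           # ...so loop forever instead
                pass
        return "done"             # oracle predicts "loops", so halt at once
    return troublemaker

# Whatever halts() answers about troublemaker, the answer is wrong,
# so no total, always-correct halting oracle can exist.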
In reply to this post by Günther Greindl
Günther Greindl on 01/03/2008 03:29 PM:
> 1) The assertion that the incomputable enters with "life". Rosen seems
> aware that he moves into the range of vitalism here, and tries to
> defend himself by saying it is not mechanism versus vitalism but
> simplicity versus complexity (= uncomputability in the Rosen sense).
> For my problems with his "uncomputability" see below.

Living systems are just the particular example set of the (possibly very large) category of complex_rr* systems. It doesn't _start_ with life. Life just happens to be what RR (Robert Rosen) was interested in. There is a tinge of vitalism in there. And vitalists of all kinds seem to be attracted to RR's writings. But, I believe he was not appealing to any sort of vitalism. There are other, more insidious, assumptions he makes, though.

> 2) Rosen repeatedly refers to Gödel's result and talks about how it
> shows how impoverished formalizations are in regard to "real"
> mathematics. This of course leads to the question what "real"
> mathematics is. It seems that Rosen is a Platonist (how else would he
> know what "real" mathematics is?), but this is an opinion one need not
> share. He also ignores that Gödel's results do not place limits on what
> one can formally model (in general), but only with regard to a formal
> system (finitely given, of sufficient strength, etc.).
>
> The question _if_ physics is completely formalizable/computable is
> indeed an interesting one, but why should this stage only start when
> life is concerned? (see below) Either it applies to the universe as a
> whole or it does not.

RR held that it applied to many systems, not necessarily just living ones. He does, however, seem to avoid being explicit about the influence of Goedel's theorems on his own ideas. As far as I can tell, he never even approaches a technical explanation that extrapolates from Goedel to his work. His exposition is purely philosophical and others claim to be able to map what he said directly to Goedel's results. I'm not that smart, though.

A better exposition comes in Penrose's work, in which he tries to argue that math (as done by humans) regularly involves hopping outside of any given formal system in order to catch a glimpse of a solution, then hopping back inside the formal system in order to develop a formal proof. And in this regard, RR's rhetoric is not inconsistent. RR's basic claim would be that math is _more_ than computation (automated inference... formal systems... whatever you want to call it). Namely, it involves jumping levels of discourse to provide entailment when none such can be provided inside the formal system. If you take that to its logical conclusion, you can imagine a _holarchy_ of formal systems that each patch up the entailments for other formal systems in the holarchy. In order to avoid an infinite regress or an infinite progression, however, the level hopping _must_ loop back in on itself. So, RR's position is that causal loops (a self-justifying rhetorical holarchy of formal systems), if formalized, might provide the mathematical infrastructure necessary to more completely capture (model) living systems.

> 3) In the Kercel paper, we read:
> [the :START QUOTE:/:END QUOTE: passage on the maps f, F, and b and on
> "closed to efficient cause", given in full in Günther's message above]
>
> What I fail to see is how "life" embodies this "infinite" cycle as in
> his (M,R)-system: after all, life started around 4 billion years ago -
> so I can _finitely_ list all cycles up to some point where we are not
> interested anymore (depending on which theory of the origin of life you
> prefer, RNA first or metabolism first or whatever).

The part that RR seems to think is not covered is the force or influence that "guides" a living system in its behaviors. In many contexts, people tend to make vague claims that "natural selection" or the "environment" provide such pressure in the form of limited resources or optimization or even co-evolution. But, those sorts of answers to _why_ a living system assembles and maintains itself are really just question begging... they put off the question without answering it.

It's this "why" that leads him to consider "final cause". He takes the most prevalent answer to the why question seriously: living systems do what they do in order to benefit _themselves_. But how can an organism at time t_0 know what actions will benefit that organism at time t_100? The question he asks specifically is: "How can we have organization without finality?" I.e. how can we say that an activity of an organism is purposeful without some external _agent_ declaring the purpose of the organism? In the end, he comes to the idea that effects cause their causes, which is obviously cyclic. So, placing it on the 4 billion year timeline, each tiny process obtains because the effect of that process will be more processes like the prior tiny process. In layman's terms, "use it or lose it" or "practice makes perfect". These positive feedback loops where the effect of a process is to reinforce the process are the heart of RR's idea.

The trouble is that they are not _simply_ self-reinforcing. Each iteration through the cycle _changes_ the system. So, you cannot _finitely_ list all cycles up until some point UNLESS you actually do it. I.e. the end result of the 4 billion years of iteration is not analytically predictable from the very first set of axioms we started with 4 billion years ago. It's incompressible because each iteration changes the building blocks. (And as our discussion about "informal" formal systems covers, consistency is not necessarily preserved when new axioms are added or when the axioms are changed, which means that formal systems can't accurately model these self-modifying systems.)

True, if hindsight were 20/20, we could finitely list everything that has already happened; but we (probably) wouldn't be able to finitely list everything that will happen from now until, say, 100 years from now _because_ the underlying ontology changes at each iteration. Of course, we _simulants_ are familiar with this argument because we have to use it when we argue against those that put full faith in the power of analytic solutions. But most of us only have to use the form of the argument that has a _fixed_ ontology. The outcome of a chess game is relatively easy to predict because the chess pieces, board, and repertoire don't change with each move.

> 4) An ultrafinitistic view would generally rule out noncomputable
> models anyway (see for instance the nice essay by Doron Zeilberger:
> http://www.math.rutgers.edu/~zeilberg/mamarim/mamarimPDF/real.pdf or
> http://en.wikipedia.org/wiki/Ultrafinitism). So Rosen's models also
> make some mathematical assumptions (which, admittedly, are widely
> shared - but may change, of course).

I don't understand how ultrafinitism rules out noncomputable models. It seems to me that even in ultrafinitism, the halting problem is still noncomputable (as Marcus mentioned).

> 5) What I also find strange is the opposition to computation: after
> all, with computers we are just beginning to find an "embarrassment of
> riches"; fine to explore other avenues (Rosen), but I think it is much
> too early to dismiss the computational approach. So why his radical
> assertion that computational approaches to describe life must fail?

Because he'd bought into the idea that effects cause their causes in living systems and he believed computation (as we know it today) cannot represent these causal cycles. And many people seem to agree. But, it's not clear how much math RR was aware of. For example, did he know about non-well-founded set theory? Did he know of quantum computation? Etc. It's entirely possible that if he were in his prime today, he would not have come to the same pessimistic conclusions about "computation". You have to remember that he did much of this work in the 70s and 80s, perhaps even earlier.

You also have to remember that he rarely got a fair hearing from his contemporaries for whatever reason. Science is full of pompous idiotic "experts" (probably including RR as well) who destructively criticize anything they don't immediately understand or agree with. It's entirely possible that if he'd had a more receptive group of people to work with, he might have made better progress and/or changed his mind. This continual rejection also gave him quite an attitude, understandably.

> 6) A point addressed in the Kercel paper: the ambiguity of language and
> the definiteness of computation. This is of import for the AI/ALife
> community, and it is indeed a problem, but it is, I think, addressed if
> one can control the symbol grounding problem (Harnad,
> http://citeseer.ist.psu.edu/harnad90symbol.html). If one can let an
> AI/ALife really learn symbols (instead of programming them or assigning
> meaning to symbols by specification of the programming language; the
> "learned" symbols would not make sense to us then, of course) they
> would inherently have the same ambiguity as our concepts have for us
> (because they would be learned in an ambiguous world).

I agree. In fact, I don't think ALife will achieve its ultimate goals until we develop ambiguous computing (which is more than soft computing, by the way).

> Conclusion: I think Rosen's ideas are valuable contributions in that
> they sensitize us to certain problems, especially in modelling life.
> But the case against computability is unconvincing.

I agree... though I would not say "the case against computability"... I'd say "the case against the expressive power of computation" is unconvincing. I do believe that there are certain processes in reality that are noncomputable in terms of what we now call "computation".

And please remember that I'm not an expert on RR. I've done just enough digging to satisfy my initial curiosity... And I'm LAZY! (I'm only up to page 166 in "The Road to Reality"... and I bought it when it first came out. ;-) So, you can rest assured that others are far more credible and correct.

* I'll try to use "complex_rr" when talking specifically about RR's definition of complexity.

--
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
The only good is knowledge and the only evil is ignorance. -- Socrates
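[A toy illustration of the "each iteration changes the building blocks" point -- this is an invented sketch, not Rosen's construction and not from Glen's message. When the update rule is itself rewritten by the states it produces, there is in general nothing to do but run the thing: the "law" is part of the trajectory.]

def run(steps=10, state=3):
    """Iterate a system whose own outputs rewrite the rule that produces them."""
    rule = lambda x: x + 1                      # the initial "law"
    history = []
    for _ in range(steps):
        state = rule(state)
        # the effect feeds back: the new state selects the next rule,
        # freezing the current state into it via a default argument
        if state % 3 == 0:
            rule = lambda x: x * 2
        elif state % 3 == 1:
            rule = lambda x, s=state: x + s
        else:
            rule = lambda x: x - 1
        history.append(state)
    return history

print(run())   # the trajectory is obtained only by actually running it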
Glen E. P. Ropella wrote:
> Because he'd bought into the idea that effects cause their causes in
> living systems and he believed computation (as we know it today) cannot
> represent these causal cycles.
>
> You have to remember that he did much of this work in the 70s and 80s

Here's a paper by Ken Thompson where he describes a regular expression implementation based on object code generated on-the-fly for the IBM 7094. The 7094 was a '60s mainframe that had instructions designed for self-modifying code. (Lisp predates that.)

http://portal.acm.org/citation.cfm?doid=363347.363387

Also note that ~8% of human DNA is highly similar to retroviruses -- we're slowly being rewritten from the outside.

http://genomebiology.com/2001/2/6/reviews/1017

...and even by each other...

http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6T3B-47HPGPW-4&_user=10&_rdoc=1&_fmt=&_orig=search&_sort=d&view=c&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=937dc8c7b72003e57bbd2971e7ae71be

Marcus
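[A rough, high-level analogue of the on-the-fly compilation Marcus points to -- in Python, and purely illustrative; this is not Thompson's actual construction, which emitted 7094 machine code. A pattern is turned into source text at runtime, compiled, and then run as ordinary code.]

def compile_prefix_matcher(prefix):
    """Generate, compile, and return a matcher function at runtime."""
    source = (
        "def matcher(s):\n"
        f"    return s.startswith({prefix!r})\n"
    )
    namespace = {}
    exec(compile(source, "<generated>", "exec"), namespace)  # code built while the program runs
    return namespace["matcher"]

starts_with_ab = compile_prefix_matcher("ab")
print(starts_with_ab("abc"), starts_with_ab("xyz"))   # True False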
What's your point? Oh let me guess. The rest of us are all idiots and this has all been solved already?

--
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
One of the symptoms of an approaching nervous breakdown is the belief that one's work is terribly important. -- Bertrand Russell
In reply to this post by Phil Henshaw-2
Phil Henshaw on 01/03/2008 03:52 PM:
> But then going back to the thread, Rosen's theorem seems to be offered
> as proof that life requires gaps in efficient causation.

I don't know where you got this. Perhaps I don't understand what you're saying here. But, RR was saying the opposite. Life requires NO gaps in efficient causation... i.e. closure.

> Would it be a corollary to say no formal system can
> explain emergent organization of self-referencing causal loops, and so
> maybe make ordinary complex systems which develop by growth a typical
> case example for Rosen's idea?

I think complex (including complex_rr) systems which develop by growth could be a typical example for RR. But, one would have to somehow show that "develop by growth" implies complexity_rr. And I'm not sure that can be done because "develop by growth" is pretty vague.

> That would imply a map of the
> deterministic plane sort of like Swiss cheese, with all individual
> emergent systems defining 'dark matter' islands of self-organization
> isolated from efficient causation by the 'white matter' of 'the cheese
> itself'... ..Whew!... ;-)

You've totally lost me, here. What is the "deterministic plane"?

--
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
American workers spend more of their day working to pay taxes than they do to feed, clothe, and house their families. -- The Tax Foundation
In reply to this post by glen ep ropella
Dear Glen,
thanks for taking the time to write such a long response, here are some comments:

>> complexity (= uncomputability in the Rosen sense)
>> For my problems with his "uncomputability" see below.
>
> Living systems are just the particular example set of the (possibly
> very large) category of complex_rr* systems. It doesn't _start_ with
> life. Life just happens to be what RR (Robert Rosen) was interested in.

Ok - but are all Rosenites sure about this? complex_rr is a thesis which I find scientifically ok because it does not introduce an arbitrary distinction between matter in different organizational forms (animate vs inanimate), although I disagree (with complex_rr) ;-)

> He does, however, seem to avoid being explicit about the influence of
> Goedel's theorems on his own ideas. As far as I can tell, he never even
> approaches a technical explanation that extrapolates from Goedel to his
> work. His exposition is purely philosophical and others claim to be
> able to map what he said directly to Goedel's results.

Yes, I also did not find any explicit mapping; but if one is not given, I am always very skeptical, because Gödel is abused for all kinds of things. (See the excellent book by Torkel Franzen, Gödel's Theorem: An Incomplete Guide to Its Use and Abuse, http://www.amazon.com/Godels-Theorem-Incomplete-Guide-Abuse/dp/1568812388/ref=pd_bbs_sr_1?ie=UTF8&s=books&qid=1199723625&sr=8-1)

> A better exposition comes in Penrose's work, in which he tries to argue
> that math (as done by humans) regularly involves hopping outside of any
> given formal system in order to catch a glimpse of a solution, then
> hopping back inside the formal system in order to develop a formal
> proof. And in this regard, RR's rhetoric is not inconsistent.

The Penrose/Lucas argument has been debunked many times: in the Torkel Franzen book above, in Rudy Rucker's "Infinity and the Mind" (another excellent book, recommended reading, good fun and educational), and in a number of philosophy papers. But it still sticks around :-))

> RR's basic claim would be that math is _more_ than computation
> (automated inference... formal systems... whatever you want to call
> it). Namely, it involves jumping levels of discourse to provide
> entailment when none such can be provided inside the formal system. If
> you take that to its logical conclusion, you can imagine a _holarchy_
> of formal systems that each patch up the entailments for other formal
> systems in the holarchy. In order to avoid an infinite regress or an
> infinite progression, however, the level hopping _must_ loop back in on
> itself.

I like the holarchy idea, I think this is important, but I don't see why this should not be capturable via computation. We can model formal systems in other formal systems (indeed this is being done in foundations of math, as ZFC together with classical logic is currently seen as the basis for math, but that is another discussion entirely).

> The part that RR seems to think is not covered is the force or
> influence that "guides" a living system in its behaviors. In many
> contexts, people tend to make vague claims that "natural selection" or
> the "environment" provide such pressure in the form of limited
> resources or optimization or even co-evolution. But, those sorts of
> answers to _why_ a living system assembles and maintains itself are
> really just question begging... they put off the question without
> answering it.

Either one is strictly materialist like Dawkins, in which case natural selection is indeed enough of an explanation (that what can stay will stay, because if it couldn't it wouldn't) :-))

Or you assume a purpose to the universe, maybe something like Teilhard de Chardin's Omega point (which "draws" evolution toward it).

Maybe Rosen is somewhere in between?

> It's this "why" that leads him to consider "final cause". He takes the
> most prevalent answer to the why question seriously: living systems do
> what they do in order to benefit _themselves_. But how can an organism
> at time t_0 know what actions will benefit that organism at time t_100?

It does not know. If it chooses wrongly, it will not be here to complain.

> The question he asks specifically is: "How can we have organization
> without finality?" I.e. how can we say that an activity of an organism
> is purposeful without some external _agent_ declaring the purpose of
> the organism? In the end, he comes to the idea that effects cause their
> causes, which is obviously cyclic.

So he not only challenges the "mechanist/computationalist" thesis but also standard neo-Darwinism?

> In layman's terms, "use it or lose it" or "practice makes perfect".
> These positive feedback loops where the effect of a process is to
> reinforce the process are the heart of RR's idea.

Sounds a bit like converging toward an attractor - that is a nice idea (and would also fit nicely with the Omega point) - but one does not need any final causation for that - rather it is normal causality which inevitably produces a result. Like a stone which is dropped on the Earth will fall toward the Earth and not, say, to the moon. (Aristotle would have attributed this stone falling to final causation: the stone's resting position would naturally be the earth in his view, so that's where it goes when dropped. Final causation is a little en vogue these days, but I think it is a flawed concept; or rather: it is a descriptive heuristic, but not an active force.)

> The trouble is that they are not _simply_ self-reinforcing. Each
> iteration through the cycle _changes_ the system. So, you cannot
> _finitely_ list all cycles up until some point UNLESS you actually do
> it.

Absolutely, I agree.

> I.e. the end result of the 4 billion years of iteration is not
> analytically predictable from the very first set of axioms we started
> with 4 billion years ago. It's incompressible because each iteration
> changes the building blocks. (And as our discussion about "informal"
> formal systems covers, consistency is not necessarily preserved when
> new axioms are added or when the axioms are changed, which means that
> formal systems can't accurately model these self-modifying systems.)
> True, if hindsight were 20/20, we could finitely list everything that
> has already happened; but we (probably) wouldn't be able to finitely
> list everything that will happen from now until, say, 100 years from
> now _because_ the underlying ontology changes at each iteration.

But the new iterations are not new axioms; also, I am of course not claiming that one could simulate in advance the outcome of evolution - we do not know how the dice fall (in random mutation, crossover, etc.). I just mean it would be computable in principle.

>> 4) An ultrafinitistic view would generally rule out noncomputable
>> models anyway (see for instance the nice essay by Doron Zeilberger:
>> http://www.math.rutgers.edu/~zeilberg/mamarim/mamarimPDF/real.pdf or
>> http://en.wikipedia.org/wiki/Ultrafinitism). So Rosen's models also
>> make some mathematical assumptions (which, admittedly, are widely
>> shared - but may change, of course).
>
> I don't understand how ultrafinitism rules out noncomputable models. It
> seems to me that even in ultrafinitism, the halting problem is still
> noncomputable (as Marcus mentioned).

This is not what I meant - at the beginning of my post I wrote:

>> complexity (= uncomputability in the Rosen sense)

I was talking about Rosen's theory that _reality_ is uncomputable, not about whether there are uncomputable functions in a mathematical system. If, for instance, space-time is a "continuum" in the math sense, we get a problem, because we can only approximate real numbers on a finite computer. Then complexity_rr would hold (except if you could exploit the power of the continuum in analog computers). The current (Church-Turing) model of computation assumes denumerably many states are available for computation.

When I spoke of adopting ultrafinitism, I meant that this should hold in regard to reality. In philosophy of mathematics we can speak about two things: the mathematical objects themselves, or their relation to reality. The working mathematician is usually a Platonist - he imagines, for instance, the real numbers existing in the mindscape/platonia or whatever. If you adopt an ultrafinitist stance, it only makes sense within a strong coupling to reality: the claim that reality is in the end discrete (QM, loop quantum gravity, the holographic principle, etc. are all theories which give hints in this direction). Ultrafinitism is an extrapolation of the physical world (at least in my interpretation; I am sure one can also hold it as a pure philosophy of math, although it loses much of its appeal then, I would say).

>> But the case against computability is unconvincing.
>
> I agree... though I would not say "the case against computability"...
> I'd say "the case against the expressive power of computation" is
> unconvincing.

Yes, your formulation "the case against the expressive power of computation" is what I meant - computability of course raises different associations, I was sloppy in the formulation.

> I do believe that there are certain processes in reality that are
> noncomputable in terms of what we now call "computation".

Ok for the computability issues - I can't build a computer which solves the halting problem; but what I was speaking about above was the assumption that the universe could _be_ a computation (Seth Lloyd, Max Tegmark, Jürgen Schmidhuber come to mind). Or do you think there are physical processes which rule this conclusion out? (If yes, I would be very interested to hear about this, because I am currently researching this issue.)

Best wishes,
Günther

--
Günther Greindl
Department of Philosophy of Science
University of Vienna
guenther.greindl at univie.ac.at
http://www.univie.ac.at/Wissenschaftstheorie/
Blog: http://dao.complexitystudies.org/
Site: http://www.complexitystudies.org
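[A trivial illustration of the "approximate real numbers on a finite computer" remark above -- this example is invented, not from the thread. Binary floating point can represent only a finite subset of the reals, so even decimal one-tenth is stored as a nearby dyadic rational.]

print(0.1 + 0.2)          # 0.30000000000000004 -- not exactly 0.3
print(0.1 + 0.2 == 0.3)   # False

from fractions import Fraction
print(Fraction(0.1))      # the dyadic rational actually stored in place of "0.1"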
Günther Greindl on 01/07/2008 12:57 PM:
> thanks for taking the time to write such a long response, here are
> some comments:

And thank you for pursuing it. Since I'm only slightly versed in RR, I enjoy the opportunity to talk about it. It helps me think clearly.

>> Living systems are just the particular example set of the (possibly
>> very large) category of complex_rr* systems. It doesn't _start_ with
>> life. Life just happens to be what RR (Robert Rosen) was interested in.
>
> Ok - but are all Rosenites sure about this? complex_rr is a thesis
> which I find scientifically ok because it does not introduce an
> arbitrary distinction between matter in different organizational forms
> (animate vs inanimate), although I disagree (with complex_rr) ;-)

Hmmm. I guess that depends on what you mean by "Rosenite". [grin] But off the top of my head, I'd say "no". Most Rosenites I've talked to seem to hang the distinction clearly between living and non-living systems. In many cases, I just didn't have the chance to dig deep enough to find out whether they, too, believe that living systems are just a sub-set of complex_rr systems. In the end, I don't know the distribution of Rosenites who think life and complexity_rr are tightly correlated.

> (See the excellent book by Torkel Franzen, Gödel's Theorem: An
> Incomplete Guide to Its Use and Abuse,
> http://www.amazon.com/Godels-Theorem-Incomplete-Guide-Abuse/dp/1568812388/ref=pd_bbs_sr_1?ie=UTF8&s=books&qid=1199723625&sr=8-1)

I second that recommendation. It's a very cool (and small!) book.

> The Penrose/Lucas argument has been debunked many times: in the Torkel
> Franzen book above, in Rudy Rucker's "Infinity and the Mind" (another
> excellent book, recommended reading, good fun and educational), and in
> a number of philosophy papers. But it still sticks around :-))

The argument sticks around because the "debunking" (at least what I've seen) _merely_ targets the validity of the argument, not the truth or falsity of the conclusion. So, it's true that the argument is INVALID; but that doesn't mean the conclusion is false. In any case, my point was that RR's argument is _like_ Penrose's argument. And, to the best of my knowledge, RR's argument hasn't been formalized to the point where we could show it to be invalid. (Kudos should go to anyone who makes their statements clear enough so that their rhetoric can be shown invalid!)

> I like the holarchy idea, I think this is important, but I don't see
> why this should not be capturable via computation. We can model formal
> systems in other formal systems (indeed this is being done in
> foundations of math, as ZFC together with classical logic is currently
> seen as the basis for math, but that is another discussion entirely).

I agree completely. In fact, to the best of my knowledge, no Rosenite put that idea into my head. I think it came to me after listening to a presentation by Terry Bristol (entitled "Carnot's Epiphany") wherein Bristol tried to make the case that the universe is an engine composed of sub-engines. His talk of symmetry and of energy being an asymmetry had already reminded me of RR because of the central role symmetry plays in RR's work. Then when I asked Terry what happens at the very top of the engine of sub-engines, he said something about it folding back down to the tiniest sub-engines (and vice versa). At that point, I began thinking of RR's efficient causation band-aids as a holarchy. The important point being that if you try to talk to a Rosenite about this "holarchy of formal systems", they may not know what you're talking about since it might merely be my extrapolation. [grin] Sorry for my lack of scholarship.

> Either one is strictly materialist like Dawkins, in which case natural
> selection is indeed enough of an explanation (that what can stay will
> stay, because if it couldn't it wouldn't) :-))
>
> Or you assume a purpose to the universe, maybe something like Teilhard
> de Chardin's Omega point (which "draws" evolution toward it).
>
> Maybe Rosen is somewhere in between?

Hmmm. I reject the false dichotomy of "materialist or not". There is no "either-or", here. I can't defend my opinion; but, when people start extrapolating from the very tiny amount we know for sure out into the huge universe of which we're mostly ignorant, my warning bells go off too loud for me to think. So, if anyone (including Dawkins) claims he knows how the universe works well enough to be strictly materialist (or strictly _anything_), then that person loses most credibility in my book. I'm more ignorant than many; but I'm pretty sure that nobody here on earth can rigorously defend any statement that starts with "The Universe is ...". Given that, I'm open to all sorts of wacky ideas about how things may or may not work. And from that perspective, RR is simply trying to toss out a few ideas for why living systems seem so different from non-living ones.

>> But how can an organism at time t_0 know what actions will benefit
>> that organism at time t_100?
>
> It does not know. If it chooses wrongly, it will not be here to
> complain.

Well, actually, it might "know" enough to estimate. And their estimate will be caused by the events in their history and their current interaction with their environment. RR calls this "anticipation". In order to talk about things like this estimate or a unit's ability to guide themselves toward a (imaginary?) goal, we need a name. "Final cause" is that name.

> So he not only challenges the "mechanist/computationalist" thesis but
> also standard neo-Darwinism?

I don't think RR provides an explicit challenge to neo-Darwinism. However, I do think he would say that neo-Darwinism is either incomplete or just a (small?) part of the theory (of life) we will eventually develop, because it doesn't fully explain the organization of living systems. I think this "challenge" (were he to make it) would not invalidate neo-Darwinism; but it would posit that neo-Darwinism is only part of the explanation for biology as we know it.

> Sounds a bit like converging toward an attractor - that is a nice idea
> (and would also fit nicely with the Omega point) - but one does not
> need any final causation for that - rather it is normal causality which
> inevitably produces a result. Like a stone which is dropped on the
> Earth will fall toward the Earth and not, say, to the moon.

Not quite. This positive feedback would modify not only the state of the system, but also the ontology in which the system sits. So, it's not like two gravitational bodies falling toward one another because both bodies (rock and earth) are slavishly imprisoned within the ontology (gravitational physics). Any two biological units, on the other hand, can fundamentally change the rules by which they interact. The "physics" is mutable and is (purposefully) changed by the components of the system... at least that's the position I think RR would take. I suppose they would do this by changing the assembly of formal systems and their respective semantic groundings. At heart, RR seems (to me) to assume a type of "fluidity" (or "logical abstraction") for the biological units that allows them to change their mechanisms but achieve the same (or similar) phenomena/outcome. This "fluidity" hinges on the "modeling relation" between causal and inferential entailment. (See * below.)

> But the new iterations are not new axioms;

Well, I'm claiming that (some subset of) the new iterations WOULD be new axioms. The whole idea of bringing the Goedel-related ideas (I think) into RR's conception is to talk explicitly about "level jumping": when a formal system becomes inadequate, you jump out of it and plug any holes with new axioms, then jump back in and continue on with your "inference". Of course, any new axioms might just be assumed temporarily in order to escape some temporary ambiguity. Or they might be permanent.

> If you adopt an ultrafinitist stance, it only makes sense within a
> strong coupling to reality: the claim that reality is in the end
> discrete (QM, loop quantum gravity, the holographic principle, etc. are
> all theories which give hints in this direction). Ultrafinitism is an
> extrapolation of the physical world (at least in my interpretation; I
> am sure one can also hold it as a pure philosophy of math, although it
> loses much of its appeal then, I would say).

Ahhh. I get it now. If the world _is_ finite, then every state of the world can be achieved by an effective method (a dumb machine that just chugs mechanically along). When I talk of "computable", I tend to think in a limited way and only refer to the truth value (and decidability) of any given statement (or hypothesis). Something is incomputable if its truth value cannot be determined within the formal system. And in that usage, reality is totally unrelated... it's about the validity and NOT the soundness of any given statement. "Computer", as a term, belongs to the realm of thought, not reality. A computer is an abstract machine. An HP ze4510 is a concrete machine. Hence, "computable", as a term, belongs in the lexicon of thought and inference, whereas "HP ze4510" belongs in the lexicon of reality and cause.

Ultimately, what you're talking about is the _accuracy_ of any given formal system in describing the real world. If the world _is_ finite, then we could (in principle) discover/invent a language that describes the world _perfectly_. Of course, this might be true even if the world isn't finite ...

* Basically (back to RR), if causal entailment is equivalent to inferential entailment in the right ways, then we can accurately describe cause (reality) with inference (thought). RR's main premise is that inferential entailment is missing some fundamental attributes that causal entailment has. Hence, our languages are incapable of representing the world in important ways (namely, the ability to handle self-referencing loops). Of course, I'm not personally opposed to the idea that thought and matter are identical. But, RR (and most people actually) are steeped in dualism. So, for the purposes of this conversation, we have to preserve the dual.

> Ok for the computability issues - I can't build a computer which solves
> the halting problem; but what I was speaking about above was the
> assumption that the universe could _be_ a computation (Seth Lloyd, Max
> Tegmark, Jürgen Schmidhuber come to mind). Or do you think there are
> physical processes which rule this conclusion out? (If yes, I would be
> very interested to hear about this, because I am currently researching
> this issue.)

Good question. I believe (emphasize _believe_ ;-) reality is NOT equivalent to computation as we currently know "computation". But, I'm no authority and please take anything I say as random chatter. But, before I go on to explain which parts of reality I think cannot be captured by computation, I want to be as clear as possible about what "computation" means. Computation, as the term is commonly used, is very close to the concept of "effective calculability", a set of well-defined steps (requiring NO intuition, special skills, or intelligence) that's guaranteed to stop and guaranteed to behave correctly, including adhering to the valence of any given operator. This is basically the way our computers work. You tell it what to do and it stupidly obeys you, eventually resulting in an infinite loop, a final answer, or a crash. And if you tell it that there are only, say, 10 possible answers, it will _merely_ produce one of those prescribed 10 possible answers. (I live for the day when I ask a computer: "Is this true or false?" And it answers: "Neither, it's _blue_!" ;-)

--
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
A government which robs Peter to pay Paul can always count on the support of Paul. -- George Bernard Shaw
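[For contrast with the ambiguity wished for at the end of that message, here is about the most minimal concrete instance of "effective calculability" one could write down -- the example is invented, not from the thread. Euclid's algorithm is a fixed recipe of dumb steps, needs no intuition, is guaranteed to terminate, and its answer is always drawn from the prescribed set of possibilities; it can never answer "blue".]

def gcd(a, b):
    """Greatest common divisor by Euclid's algorithm."""
    while b != 0:
        a, b = b, a % b   # each pass strictly shrinks b, so the loop must end
    return a

print(gcd(1071, 462))   # 21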