anonymity/deniability/ambiguity

Re: anonymity/deniability/ambiguity

thompnickson2

Thank you, David,

 

I need to think about all of this. 

 

A brief early response: there are two things that words do: they stroke, and they convey information. At the core, I think, my authoritarian impatience (to use a word that has recently blossomed in the correspondence on the list) arises when people confuse one use of words for the other. When we speak of that of which we cannot speak, we are like primates who groom but do not remove any lice. Grooming and being groomed is very nice; but I am really interested in louse removal.

 

Nick

 

Nicholas Thompson

Emeritus Professor of Ethology and Psychology

Clark University

[hidden email]

https://wordpress.clarku.edu/nthompson/

 

 

From: Friam <[hidden email]> On Behalf Of David Eric Smith
Sent: Thursday, May 21, 2020 5:15 PM
To: The Friday Morning Applied Complexity Coffee Group <[hidden email]>
Subject: Re: [FRIAM] anonymity/deniability/ambiguity

 

Signal to Nick:

 

You commented on wanting to understand the conversation about formalists and intuitionists which I have been using in various conversations with Glen and Jon.  Now is the chance to do it at low cost.

 

Frank has provided two proofs of the irrationality of the square root of 2: one formalist (a proof by contradiction, requiring acceptance of the law of the excluded middle) a few days ago, and this most recent one constructive, meaning that it constructs a degree of difference you can point to concretely, rather than concluding from the syntax that there must be one.  One gets at the core of anything I was trying to say by looking at these two proofs and deciding whether one can see what is different in their sense.
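The pivotal syntactic fact the formalist proof leans on is that a² = 2b² has no solution in positive integers (the exponent of 2 on the left is even, on the right odd). A brute-force spot check over a small range (my illustration in Python, not part of either proof, which of course covers all integers):

```python
# Formalist proof's core fact: a**2 == 2*b**2 is impossible for positive
# integers, because the power of 2 dividing a**2 is even while the power
# of 2 dividing 2*b**2 is odd. Spot-check a finite range:
no_solution = all(a * a != 2 * b * b
                  for a in range(1, 500)
                  for b in range(1, 500))
print(no_solution)
```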

 

For me, these concrete, super-simple minimal pairs are the mental tools to get at the difference between one style of thought and another.  I can then try to decide whether, in some much more difficult context, where it is very hard to be concrete, I think I see the same kind of contrast in style.  Since I am too slow to almost ever work out the watertight version of anything, and some of these would be too hard for me to do at all, I don’t even seriously intend to check whether my imagistic impression is reliable.  I am willing to use the simple cases I do understand as perceptive filters to try to make some kind of approximate sense of the hard cases, as the alternative to just letting it all go by.

 

You commented in one of these emails that you could accept “irreducible” as long as it didn’t mean “can’t be described”, and I have been thinking over the past days whether I can come down on one side of that or the other.  You might also have said, “as long as it didn’t mean ‘can’t be observed’”.

 

I decided I don’t know.  To know what can or can’t be observed, can or can’t be described, is or isn’t behavior, one has to operationalize any of those and decide how reliable the operationalization is.  The exchange mostly of Glen, EricC, and Jon about what is or isn’t behavior, often quite tedious, seemed like it took seriously the right caution.  One could build comparable tedious harangues around “observe” and “describe”, and perhaps must to resolve this.

 

You might think you can say, as a matter of syntax, that “of course it must be observable” or else one is denying science.  Physicists thought for almost 200 years that that “of course” was unproblematic: that they had an operationalization that was flexible enough to extend to more and more subjects, restrictive enough to have content, and expressible in equivalence to mathematical objects.  Then they learned that the way they had assumed “of course” it could be done wasn’t the correct formalization to be extended to quantum mechanics.  That didn’t mean that there wasn’t a correct formalization, only that a different one was required, to subsume all that had worked before and also extend where the former one couldn’t go.  The inadequacy of the former was only demonstrated by putting one that was more correct in its place and exhibiting the difference (constructive); it seems it would have been hopeless to anticipate, in the pre-quantum days, that the notion of observability was inadequate in the way it actually was, and even more hopeless to try to use a syntactic argument (formalist) either to assert its sufficiency or to identify the specific defect that quantum mechanics would ultimately reveal.  So when I ask “what is the value of a formalist-style declaration that inner-ness can’t be a real property, if one is not constructing something to show that to be the case”, this is the style difference I am using as a reference to put that question.

 

I don’t imagine that what we learned about definitions of observability in physics will have any direct relevance to whatever challenges the term may pose in psychology.  The physics example is just a nice reminder of ways in which it can be very hard to decide when one is really saying something, and likewise an example that constructing the alternative sometimes seems to give the only perspective from which to see that there had formerly been a problem.  

 

Because Peirce et seq. have done so much to try to be precise, practical, and useful in defining what science is, I can be lazy, say “yes, I accept and defend all that”, and then ask for an ultra-stripped-down abstraction of what science then is.

 

I may already have written this (senility), but my imagistic definition would be that science is the premise that mistakes aren’t all sui generis, but that they have family resemblances, and that there are methods of practice that give one a better-than-random chance of recognizing that something may be a mistake even short of knowing what ‘the’ (or ‘a better’) answer is.  I choose that framing in part because it is also the framing that formalizes the notion of error correction in computer science (so I have a mental image to refer to as an exemplar, accompanied by some formal tools).  One wants to identify the fact that a message contains an error without having to know, for every message in advance, what it was supposed to have contained (else you didn’t need to be sending messages in the first place).
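A minimal sketch of that "recognize a mistake without knowing the answer" idea, using a single parity bit (my illustration, not anything from the thread):

```python
# Single-parity-check code: the receiver can detect (though not locate)
# any single-bit error without knowing what the message was meant to say.
def encode(bits):
    # Append a parity bit so the total number of 1s is even.
    return bits + [sum(bits) % 2]

def looks_corrupted(codeword):
    # Every single-bit error shares a family resemblance: it makes the
    # count of 1s odd. We recognize "mistake" without knowing "truth".
    return sum(codeword) % 2 == 1

cw = encode([1, 0, 1, 1])
flipped = cw.copy()
flipped[2] ^= 1  # corrupt one bit in transit
```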

 

I use the stripped-down form in the hope of building a recursive tree of mutual refereeing for all elements of scientific practice, now appealing to my mental image of Peter Gács’s error-correcting 1D cellular automaton, which does this by nesting correcting structure within correcting structure.  Then I can look for every aspect of our practice that is trying to play this role in some way.  A subset includes:

1. Intersubjectivity to guard against individual delusion, ignorance, oversight, and similar hazards.

2. Experimentation to guard against individual and group delusion etc., and to provide an additional active corrective against erroneous abduction from instances to classes.

3. Adoption of formal language protocols:

3a. Definitions, with both operational (semantic) and syntactic (formalist) criteria for their scope and usage

3b. Rigid languages for argument, including logic but also less-formal standards of scientific argument, like insistence on null models and significance measures for statistical claims

 

There must be more, but the above are the ones I am mostly aware of in daily work.
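The nested-correction image can be caricatured in a few lines (a toy sketch of my own; Gács's actual construction is hierarchical and far subtler, this shows only the innermost layer of local correction):

```python
# A 1D majority-vote cellular automaton: each cell adopts the majority of
# itself and its two neighbors (periodic boundary), so isolated flipped
# cells are erased by purely local rules.
def majority_step(state):
    n = len(state)
    return [1 if state[(i - 1) % n] + state[i] + state[(i + 1) % n] >= 2
            else 0
            for i in range(n)]

state = [0] * 16
state[5] = 1                   # a single isolated error
state = majority_step(state)   # the lone error is erased in one step
```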

 

These are, to some extent, hierarchical, in that those further down the list are often taken to have a control-theoretic-like authority to tag those higher up in the list as “errors”.  However, like any control system, the controller can also be wrong, and then its authority allows it to impose cascades of errors before being caught.  Hence, I guess, Kant thought that a Newtonian space × time geometry was so self-evident that it was part of the “a priori” of physical reasoning; it was a kind of more-definite-than-a-definition criterion in arguments.  And it turned out not to describe the universe we live in, if one requires sufficient scope and precision.  Likewise, the amount of a semantics that we can capture in syntactic rules for formal speech is likely always to be less than all the semantics we have, and even the validity of a syntax can be undermined (Gödel).  But most common in practice is that the syntax can be used as a kind of parlor entertainment, while its interpretation becomes either invalid or essentially useless when tokens that appeared in it turn out not actually to stand for anything.  This is what happens when things we thought were operational definitions are shown, by construction of their replacements, to have been invalid, as with the classical-physics notion of “observable” or the Newtonian convention of “absolute time”.

 

I would like to give Peirce’s “truth == reliable in the long run” a modern gloss by regarding the above list the way an engineer would in designing an error-correction system.  The instances grouped in that list are not just subroutines in a computer code, but embodied artifacts and events of practice by living-cognizing-social behavers and reasoners.  One would then decide, from a post-Shannon vantage point, what such a system can and cannot do.  What notions of truth are constructible?  How long is the long run, for any particular problem?  What are the sample fluctuations in our state of understanding, as represented in the placeholders for terms, rules, or other forms we adopt in that list in any era, relative to asymptotes that we may or may not yet think we can identify?  How have errors cascaded through the list as we have it now, and can we use those to learn something about the performance of this way of organizing science?  (Dave Ackley of UNM did a lovely project on the statistics of library overhauls for Linux utilities some years ago, which is my mental model in framing that last question.)  I would want formal tools to answer more interesting versions of questions like those.
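"How long is the long run?" can be made concrete for the simplest possible scheme, an n-fold repetition code over a binary symmetric channel (my illustration, not Peirce's or Shannon's own setup):

```python
import random

def repetition_error_rate(p, n, trials=20000, seed=0):
    # Send bit 0 as n copies; each copy flips with probability p;
    # decode by majority vote; return the observed failure rate.
    rng = random.Random(seed)
    failures = sum(
        1 for _ in range(trials)
        if sum(rng.random() < p for _ in range(n)) > n // 2
    )
    return failures / trials

# Lengthening the "run" drives the residual error rate down (p = 0.2):
rates = [repetition_error_rate(0.2, n) for n in (1, 5, 15)]
```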

 

I mentioned some stuff about this in a post a month or two ago, and EricC included in a later post, by way of reply, that Peirce did a lot of statistics; so I understand I can’t take anything here outside the playpen of a listserv until I have first read everything Peirce wrote, and everything others wrote about what Peirce wrote, etc.  I suspect that, since Peirce lived before the publication of at least part of what is now understood about reliable error correction, large deviations, renormalization, automata theory, etc., there should be something new to say from a modern standpoint that Peirce didn’t already know; but that assertion is formalist, and thus valueless.  I have to do the exhaustive search through everything he actually did know, to point out something new that isn’t already in it (constructivist).

 

Which is why I won’t have time, resources, or ability to do it.  So I will never know whether the things said above actually mean something.

 

Eric

 

 

 



On May 22, 2020, at 2:44 AM, Frank Wimberly <[hidden email]> wrote:

 

The badly rendered part:

 

\[
\left|\sqrt{2}-\frac{a}{b}\right|
= \frac{\left|2b^{2}-a^{2}\right|}{b^{2}\left(\sqrt{2}+\frac{a}{b}\right)}
\;\geq\; \frac{1}{b^{2}\left(\sqrt{2}+\frac{a}{b}\right)}
\;\geq\; \frac{1}{3b^{2}},
\]

 

 

On Thu, May 21, 2020 at 11:30 AM Frank Wimberly <[hidden email]> wrote:

Clinicians often call that "being oppositional".  

 

You say that I've known authorities.  I was just talking to John Baez about my advisor Errett Bishop, often called the inventor of constructive mathematics.  Here is a constructive proof, with no use of the excluded middle, of the irrationality of sqrt(2) that I found in Wikipedia.  Apologies to those who don't care:

 

In a constructive approach, one distinguishes between on the one hand not being rational, and on the other hand being irrational (i.e., being quantifiably apart from every rational), the latter being a stronger property. Given positive integers a and b, because the valuation (i.e., highest power of 2 dividing a number) of 2b² is odd, while the valuation of a² is even, they must be distinct integers; thus |2b² − a²| ≥ 1. Then[17]

\[
\left|\sqrt{2}-\frac{a}{b}\right|
= \frac{\left|2b^{2}-a^{2}\right|}{b^{2}\left(\sqrt{2}+\frac{a}{b}\right)}
\;\geq\; \frac{1}{b^{2}\left(\sqrt{2}+\frac{a}{b}\right)}
\;\geq\; \frac{1}{3b^{2}},
\]

the latter inequality being true because it is assumed that a/b ≤ 3 − √2 (otherwise the quantitative apartness can be trivially established). This gives a lower bound of 1/(3b²) for the difference |√2 − a/b|, yielding a direct proof of irrationality not relying on the law of the excluded middle; see Errett Bishop (1985, p. 18). This proof constructively exhibits a discrepancy between √2 and any rational.
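A quick numerical sanity check of the constructive bound |√2 − a/b| ≥ 1/(3b²) (my addition; these particular a/b values, the continued-fraction convergents of √2, are not from the thread):

```python
import math

# Even the best rational approximations of sqrt(2) stay quantifiably
# apart from it by at least 1/(3*b**2), as the constructive proof promises
# (all of these a/b satisfy the proof's side condition a/b <= 3 - sqrt(2)).
convergents = [(1, 1), (3, 2), (7, 5), (17, 12), (41, 29), (577, 408)]
checks = [abs(math.sqrt(2) - a / b) >= 1 / (3 * b * b)
          for a, b in convergents]
print(checks)
```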

 

On Thu, May 21, 2020 at 10:50 AM Steve Smith <[hidden email]> wrote:


On 5/21/20 10:32 AM, uǝlƃ wrote:
> Don't be fooled. "The problem with communication is the illusion that it exists." Or ie I believe in a stronger form of privacy than you believe in.
I KNOW! I know just what you mean!

<note to Frank...  one of the species of animal in this group is "the
Contrarian", but you probably already guessed that>


-- --- .-. . .-.. --- -.-. -.- ... -..-. .- .-. . -..-. - .... . -..-. . ... ... . -. - .. .- .-.. -..-. .-- --- .-. -.- . .-. ...
FRIAM Applied Complexity Group listserv
Zoom Fridays 9:30a-12p Mtn GMT-6  bit.ly/virtualfriam
un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives: http://friam.471366.n2.nabble.com/
FRIAM-COMIC
http://friam-comic.blogspot.com/


 

--

Frank Wimberly
140 Calle Ojo Feliz
Santa Fe, NM 87505
505 670-9918


 


 



Re: anonymity/deniability/ambiguity

thompnickson2

Thank you, ERIC!

 

I KNEW I was going to make that mistake some day.

 

Nick

 

Nicholas Thompson

Emeritus Professor of Ethology and Psychology

Clark University

[hidden email]

https://wordpress.clarku.edu/nthompson/

 

 

From: [hidden email] <[hidden email]>
Sent: Thursday, May 21, 2020 5:50 PM
To: 'The Friday Morning Applied Complexity Coffee Group' <[hidden email]>
Subject: RE: [FRIAM] anonymity/deniability/ambiguity

 


Re: anonymity/deniability/ambiguity

David Eric Smith
Yes, doesn’t matter.  Email is a clunky channel.

Best,

E


On May 22, 2020, at 8:52 AM, <[hidden email]> <[hidden email]> wrote:

 
You might think you can say, as a matter of syntax, that “of course it must be observable” or else one is denying science.  Physicists thought for almost 200 years that that “of course” was unproblematic, that they had an operationalization that was flexible enough to extend to more and more subjects, restrictive enough to have content, and expressible in equivalence to mathematical objects.  Then they learned that the way they had assumed “of course” it could be done wasn’t the correct formalization to be extended to quantum mechanics.  That didn’t mean that there wasn’t a correct formalization, only that a different one was required, to subsume all that had worked before, and also extend where the former one couldn’t go.  The inadequacy of the former was only demonstrated by putting one that was more correct in its place and exhibiting the difference (constructive); it seems like it would have been hopeless to anticipate, in the pre-quantum days, that the notion of observability was inadequate in the way it actually was, and even more hopeless to try to use a syntactic argument (formalist) either to assert its sufficiency or identify the specific defect that quantum mechanics would ultimately reveal.  So when I ask “what is the value of a formalist-style declaration that inner-ness can’t be a real property, if one is not constructing something to show that to be the case”, this is the style difference I am using as a reference to put that question.
 
I don’t imagine that what we learned about definitions of observability in physics will have any direct relevance to whatever challenges the term may pose in psychology.  The physics example is just a nice reminder of ways in which it can be very hard to decide when one is really saying something, and likewise an example that constructing the alternative sometimes seems to give the only perspective from which to see that there had formerly been a problem.  
 
Because Peirce et seq. have done so much to try to be precise, practical, and useful in defining what science is, it allows me to be lazy, say “yes I accept and defend all that”, and then ask for an ultra-stripped-down abstraction of what science is then.
 
I may already have written this (senility), but my imagistic definition would be that science is the premise that mistakes aren’t all sui generis, but that they have family resemblances, and that there are methods of practice that give one a better-than-random chance of recognizing that something may be a mistake even short of knowing what ‘the' (or ‘a better’) answer is.  I choose that framing in part because it is also the framing that formalizes the notion of error correction in computer science (so I have a mental image to refer to as an exemplar accompanied by some formal tools).  One wants to identify the fact that a message contains an error, without having to know, for every message in advance, what it was supposed to have contained (else you didn’t need to be sending messages in the first place).  
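The coding-theory image can be made concrete with the smallest possible scheme, a single parity bit, which flags an error without knowing what the message was supposed to contain. This is only an illustrative sketch; the function names are mine, not from any system discussed here:

```python
def add_parity(bits):
    """Append an even-parity bit: the coded word always has an even bit-sum."""
    return bits + [sum(bits) % 2]

def looks_corrupted(coded):
    """Flag a single-bit error without knowing what the message was meant to say."""
    return sum(coded) % 2 == 1

message = [1, 0, 1, 1]
coded = add_parity(message)
assert not looks_corrupted(coded)   # intact word passes the check

coded[2] ^= 1                       # one bit flips in transit
assert looks_corrupted(coded)       # the error is detected, contents unknown
```

The detector never consults the intended message, only a family resemblance that all valid words share, which is exactly the "recognize a mistake without knowing the answer" property.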
 
I use the stripped-down form in the hope of building a recursive tree of mutual refereeing, for all elements of scientific practice, now appealing to my mental image of Peter Gács’s error-correcting 1D cellular automaton, which does this by nesting correcting structure within correcting structure.  Then I can look for every aspect of our practice that is trying to play this role in some way.  A subset includes:
1. Intersubjectivity to guard against individual delusion, ignorance, oversight, and similar hazards.
2. Experimentation to guard against individual and group delusion etc, and to provide an additional active corrective against erroneous abduction from instances to classes.
3. Adoption of formal language protocols:
3a. Definitions, with both operational (semantic) and syntactic (formalist) criteria for their scope and usage
3b. Rigid languages for argument, including logic but also less-formal standards of scientific argument, like insistence on null models and significance measures for statistical claims
 
There must be more, but the above are the ones I am mostly aware of in daily work.
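Gács’s actual construction is far subtler, but the nesting idea (correcting structure within correcting structure, with error leakage shrinking faster than correction capacity weakens) can be caricatured with a toy nested repetition code. Everything below is my own illustrative sketch, not his automaton:

```python
def encode(bit, depth, k=3):
    """k-fold repetition nested `depth` times: correction within correction."""
    if depth == 0:
        return [bit]
    return [b for _ in range(k) for b in encode(bit, depth - 1, k)]

def decode(block, depth, k=3):
    """Majority vote at each level; inner blocks are corrected before outer
    levels ever see them, so scattered errors never reach the top."""
    if depth == 0:
        return block[0]
    n = len(block) // k
    votes = [decode(block[i * n:(i + 1) * n], depth - 1, k) for i in range(k)]
    return int(sum(votes) > k // 2)

code = encode(1, depth=3)            # 27 cells
for i in (0, 5, 13):                 # scattered flips, at most one per inner block
    code[i] ^= 1
assert decode(code, depth=3) == 1    # the bit is recovered despite three errors
```

The homogeneity of the components is what makes the toy legible, and it is also exactly what makes it only a metaphor for the heterogeneous practices in the list above.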
 
These are, to some extent, hierarchical, in that those further down the list are often taken to have a control-theoretic-like authority to tag those higher-up in the list as “errors”.  However, like any control system, the controller can also be wrong, and then its authority allows it to impose cascades of errors before being caught.  Hence, I guess Kant thought that a Newtonian space × time geometry was so self-evident that it was part of the “a priori” to physical reasoning. It was a kind of more-definite-than-a-definition criterion in arguments.  And it turned out not to describe the universe we live in, if one requires sufficient scope and precision.  Likewise, the amount of a semantics that we can capture in syntactic rules for formal speech is likely to always be less than all the semantics we have, and even the validity of a syntax could be undermined (Gödel).  But most common in practice is that the syntax could be used as a kind of parlor entertainment, but the interpretation of it becomes either invalid or essentially useless when tokens that appeared in it turn out not to actually stand for anything.  This is what happens when things we thought were operational definitions are shown by construction of their replacements to have been invalid, as with the classical physics notion of “observable”, or the Newtonian convention of “absolute time”.
 
I would like to give Peirce’s “truth == reliable in the long run” a modern gloss by regarding the above the way an engineer would in designing an error-correction system.  The instances that are grouped in the above list are not just subroutines in a computer code, but embodied artifacts and events of practice by living-cognizing-social behavers and reasoners.  And then decide from a post-Shannon vantage point what such a system can and cannot do.  What notions of truth are constructible?  How long is the long run, for any particular problem?  What are the sample fluctuations in our state of understanding, as represented in placeholders for terms, rules, or other forms we adopt in the above list in any era, relative to asymptotes that we may or may not yet think we can identify?  How have errors cascaded through that list as we have it now, and can we use those to learn something about the performance of this way of organizing science?  (Dave Ackley of UNM did a lovely project on the statistics of library overhauls for Linux utilities some years ago, which is my mental model in framing that last question.)  Formal tools to answer more interesting versions of questions like those.
 
I mentioned some stuff about this in a post a month or two ago, and EricC included in a later post by way of reply that Peirce did a lot of statistics, so I understand I can’t take anything here outside the playpen of a listserv until I have first read everything Peirce wrote, and everything others wrote about what Peirce wrote, etc.  I suspect that, since Peirce lived before the publication of at least part of what is now understood about reliable error correction, large deviations, renormalization, automata theory, etc., there should be something new to say from a modern standpoint that Peirce didn’t already know, but that assertion is formalist, and thus valueless.  I have to do the exhaustive search through everything he actually did know, to point out something new that isn’t already in it (constructivist).
 
Which is why I won’t have time, resources, or ability to do it.  So I will never know whether the things said above actually mean something.
 
Eric
 
 
 

 

On May 22, 2020, at 2:44 AM, Frank Wimberly <[hidden email]> wrote:
 
The badly rendered part:

 

|√2 − a/b| = |2b² − a²| / (b²(√2 + a/b)) ≥ 1/(b²(√2 + a/b)) ≥ 1/(3b²),

 
 
On Thu, May 21, 2020 at 11:30 AM Frank Wimberly <[hidden email]> wrote:
Clinicians often call that "being oppositional".  
 
You say that I've known authorities.  I was just talking to John Baez about my advisor Errett Bishop, often called the inventor of constructive mathematics.  Here is a constructive proof, with no use of the excluded middle, of the irrationality of sqrt(2) that I found in Wikipedia.  Apologies to those who don't care:
 
In a constructive approach, one distinguishes between on the one hand not being rational, and on the other hand being irrational (i.e., being quantifiably apart from every rational), the latter being a stronger property. Given positive integers a and b, because the valuation (i.e., highest power of 2 dividing a number) of 2b² is odd, while the valuation of a² is even, they must be distinct integers; thus |2b² − a²| ≥ 1. Then

|√2 − a/b| = |2b² − a²| / (b²(√2 + a/b)) ≥ 1/(b²(√2 + a/b)) ≥ 1/(3b²),

the latter inequality being true because it is assumed that a/b ≤ 3 − √2 (otherwise the quantitative apartness can be trivially established). This gives a lower bound of 1/(3b²) for the difference |√2 − a/b|, yielding a direct proof of irrationality not relying on the law of excluded middle; see Errett Bishop (1985, p. 18). This proof constructively exhibits a discrepancy between √2 and any rational.
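For the numerically inclined, the bound can be spot-checked by brute force. This is an illustrative floating-point computation of my own, not part of the proof (which is exact); a small slack term absorbs rounding:

```python
import math

# Spot-check |sqrt(2) - a/b| >= 1/(3 b^2) for every a/b <= 3 - sqrt(2) with b < 50.
# The 1e-12 slack only compensates for floating-point rounding.
for b in range(1, 50):
    for a in range(1, 2 * b):
        if a / b > 3 - math.sqrt(2):
            continue
        assert abs(math.sqrt(2) - a / b) >= 1 / (3 * b * b) - 1e-12
```

The point of the constructive proof is that this gap is not merely nonzero but explicitly bounded below, which is the "quantifiable apartness" the check exhibits.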
 
On Thu, May 21, 2020 at 10:50 AM Steve Smith <[hidden email]> wrote:

On 5/21/20 10:32 AM, uǝlƃ  wrote:
> Don't be fooled. "The problem with communication is the illusion that it exists." Or, i.e., I believe in a stronger form of privacy than you believe in.
I KNOW! I know just what you mean!

<note to Frank...  one of the species of animal in this group is "the
Contrarian", but you probably already guessed that>



 
-- 
Frank Wimberly
140 Calle Ojo Feliz
Santa Fe, NM 87505
505 670-9918

 
-- 
Frank Wimberly
140 Calle Ojo Feliz
Santa Fe, NM 87505
505 670-9918

Re: anonymity/deniability/ambiguity

gepr
In reply to this post by David Eric Smith
I really *want* to say something about building a machine (to be provocative) that implements a "reliable in the long-run without predicting the contents of reliable sentences" mechanism. I'm purposefully trying to elide your cognizing-social behavers in order to "flatten" the mechanism somewhat ... to root out the unspeakable-innerness-bogeyman, flatten the leaves of the graph, at least. This would still allow for hierarchy (even a very deep one), just without allowing for things that cannot be talked about.

I don't think it's all that useful to painstakingly knead Peirce's writings looking for a proto-structure, even though I often complain about people like Wolfram who consistently fail to cite those on whose shoulders they stand. It would be more interesting to simply try to build a system that has some hint of the sought features. Here, I'm thinking of Luc Steels' robots playing language games. A simulator [†] of Ackley's work you mention, or even of something like the Debian package dependencies might approach it, too. (Marcus often raises branch prediction methods, which may also apply to some extent.) I can't help but also think of Edelman and Tononi's "neural darwinism" and Hoffman's "interface theory of perception". I mention these because they used mechanistic simulation as persuasive rhetoric, albeit purely justificationist -- i.e. little to no attempt to *falsify* the simulation mechanisms against data taken from an ultimate referent; please correct me if I'm wrong.

Along some similar lines, I've been exposed to (again, mechanistic/constructive) simulation of "innovation", wherein propositions about how/why seemingly unique phenomena like Silicon Valley (as a system) or particular disruptors like the iPhone emerge.

I don't find any of these machines compelling, though. So I can't really say anything useful in response to your post, except to say that it would be *great fun* to try to construct a self-correcting truth machine. It would be even more fun to construct several of them and have them compete and be evaluated against an implicit objective function.


[†] Re: Jon's cite of Baudrillard's dissimulation, I (obviously) have to disagree with the dichotomy between [dis]simulation. To act as if you don't have something you do have requires you to use other things you do have to hide the something you're hiding. I'm struggling to say this concretely, though. In the trustafarian case, the spanging (dissimulation) couples well with the dreadlock wax (simulation). Can there be dissimulation without a complementary simulation? And if not, if they always occur together, then distinguishing them may not buy us much.


On 5/21/20 4:14 PM, David Eric Smith wrote:

> I use the stripped-down form in the hope of building a recursive tree of mutual refereeing, for all elements of scientific practice, now appealing to my mental image of Peter Gács’s error-correcting 1D cellular automaton, which does this by nesting correcting structure within correcting structure.  Then I can look for every aspect of our practice that is trying to play this role in some way.  A subset includes:
> 1. Intersubjectivity to guard against individual delusion, ignorance, oversight, and similar hazards.
> 2. Experimentation to guard against individual and group delusion etc, and to provide an additional active corrective against erroneous abduction from instances to classes.
> 3. Adoption of formal language protocols:
> 3a. Definitions, with both operational (semantic) and syntactic (formalist) criteria for their scope and usage
> 3b. Rigid languages for argument, including logic but also less-formal standards of scientific argument, like insistence on null models and significance measures for statistical claims
>
> There must be more, but the above are the ones I am mostly aware of in daily work.
>
> These are, to some extent, hierarchical, in that those further down the list are often taken to have a control-theoretic-like authority to tag those higher-up in the list as “errors”.  However, like any control system, the controller can also be wrong, and then its authority allows it to impose cascades of errors before being caught.  Hence, I guess Kant thought that a Newtonian space × time geometry was so self-evident that it was part of the “a priori” to physical reasoning. It was a kind of more-definite-than-a-definition criterion in arguments.  And it turned out not to describe the universe we live in, if one requires sufficient scope and precision.  Likewise, the amount of a semantics that we can capture in syntactic rules for formal speech is likely to always be less than all the semantics we have, and even the validity of a syntax could be undermined (Gödel).  But most common in practice is that the syntax could be used as a kind of parlor entertainment, but the
> interpretation of it becomes either invalid or essentially useless when tokens that appeared in it turn out not to actually stand for anything.  This is what happens when things we thought were operational definitions are shown by construction of their replacements to have been invalid, as with the classical physics notion of “observable”, or the Newtonian convention of “absolute time”.
>
> I would like to give Peirce’s “truth == reliable in the long run” a modern gloss by regarding the above the way an engineer would in designing an error-correction system.  The instances that are grouped in the above list are not just subroutines in a computer code, but embodied artifacts and events of practice by living-cognizing-social behavers and reasoners.  And then decide from a post-Shannon vantage point what such a system can and cannot do.  What notions of truth are constructible?  How long is the long run, for any particular problem?  What are the sample fluctuations in our state of understanding, as represented in placeholders for terms, rules, or other forms we adopt in the above list in any era, relative to asymptotes that we may or may not yet think we can identify?  How have errors cascaded through that list as we have it now, and can we use those to learn something about the performance of this way of organizing science?  (Dave Ackley of UNM did a lovely
> project on the statistics of library overhauls for Linux utilities some years ago, which is my mental model in framing that last question.)  Formal tools to answer more interesting versions of questions like those.
>
> I mentioned some stuff about this in a post a month or two ago, and EricC included in a later post by way of reply that Peirce did a lot of statistics, so I understand I can’t take anything here outside the playpen of a listserv until I have first read everything Peirce wrote, and everything others wrote about what Peirce wrote, etc.  I suspect that, since Peirce lived before the publication of at least part of what is now understood about reliable error correction, large deviations, renormalization, automata theory, etc., there should be something new to say from a modern standpoint that Peirce didn’t already know, but that assertion is formalist, and thus valueless.  I have to do the exhaustive search through everything he actually did know, to point out something new that isn’t already in it (constructivist).


--
☣ uǝlƃ
uǝʃƃ ⊥ glen

Re: anonymity/deniability/ambiguity

jon zingale
In reply to this post by jon zingale
Glen writes:

I don't find any of these machines compelling, though. So I can't really say anything useful in response to your post, except to say that it would be *great fun* to try to construct a self-correcting truth machine. It would be even more fun to construct several of them and have them compete and be evaluated against an implicit objective function.

Yeah, that would be fun. Then perhaps we could tour
new media festivals with our circus of self-correcting
truth machines. Well, not this year but maybe the next.

Jon


Re: anonymity/deniability/ambiguity

David Eric Smith
In reply to this post by gepr
Thanks Glen,

Yes, me too.  In these very wide-ranging discussions, it is hard to say which aspect of the question I most want to get at.  I think it changes depending on whom I am listening to: each account leaves something out, which I wish to say is all part of the same system as the parts they mention and thus can’t be left out.  But when forced to ask what _I_ most want to do, maybe I don’t know.  A random list of things that are done, and what they suggest as next steps not yet done:

1. Shannon block coding, Gács cellular automaton, done.  Beautiful in that they clarify that the idea of asymptotic error correction is large-deviation in origin, and that they show the nature of solutions through these nested-block structures, with weakening error correction capacities as scales increase, but even weaker error leakage out of the lower blocks, so the scaling still works.  They achieve this, however, by having all homogeneous components, and a very clear and externally imposed notion of “message” and “error”.  That is what makes them comprehensible and allows us to see the point, but makes them hard to apply except as metaphor to problems in behavior that seem like they should be similar.

2. The people who say “What rescues Science and makes it different is empiricism.”  The part I like is the invocation of a kind of Darwinism for concepts, meaning a mapping to Bayesian updating.  It puts a boundary between the rules of the world that a social-cognitive system doesn’t get to change, and the patterns and habits that they are allowed to freely innovate, and then tries to track the information flow through that boundary as the world constrains the behavioral patterns and habits.  What such a high-level gloss seems to leave out is that different organizations within language, and in the coupling of language to actions, are more or less good at taking in Bayesian suggestions.  It would only be in that different internal organization that “science” is a distinguishable branch of human communication and social cognition from other things people do, all of which are ultimately kept or lost by survival or extinction.  So I guess one wants to capture architectural aspects that distinguish those behavior systems.

3. The people whose emphasis on “empiricism” and “experiment” seems to underemphasize the role of formal systems for communication and reason.  I think the thing I want here, which is maybe most “me” in this, is to get beyond the component-homogeneity in the Shannon–Fano or Gács paradigms of 1, and the sense of having an articulated “goal” to be referred to in 2.  I would like to have a toy model that has behavioral error-correcting layers, and environmental-Darwinian layers, in which we could do an information accounting for which errors are trapped within layers and which must be caught with signals that flow between them.  I think what most disappoints me in the little MTV-attention-span models one has to write for academic papers is that they don’t get at this heterogeneity of components that interact, and yet within which a single joint distribution is being narrowed and stabilized.  A version of that comes up in my life/metabolism interests, another version comes up in economics, wishing to understand how non-cooperative individual-level decision structure can have, as its outputs, not “payoffs”, but _actions in the world_ that amount to building the infrastructure that makes coalitional-form games possible.  So something like what “embodied cognition” did for robotics: to get away from having all the symbols represent numbers or other symbols, and having more of them somehow represent things.

4. There remains the perennial problem of the phenomenologists.  They want to situate all of “reality” within “experience”, yet they insist they are not talking about introspection, and that they are not the modern incarnation of Descartes or even Russell when he says “sense data are immediate and everything else is mediated” [more or less].  That seems to me like another difference of kind, much as the talking/testing interaction is between things of different kind.  It becomes a tangle in my mind as I try to decide how many axes of difference are really at work here.  There seems to be one between intersubjectivity and subjective experience, where the former acts as a check on the latter.  But there may be a different one between structured action and speech, nominally serving coordination at the group level, but applicable reflexively toward oneself, distinguishing subjective from (however approximate) objective aspects of some experience.  Maybe it’s a Silence of the Lambs thing I want; just whatever will make their chattering in my head stop.

5. Somewhere in here, I keep thinking it would be nice to combine the talking/being/doing implementation of robotics with things we know about expressive power and reflexivity in formal languages, to get at the idea that a system’s actions together with utterances can have information about real categories, but not contain (at least within the symbol set alone) a representation of those categories.  To get at a sense that things can be meaningful, but language can be an unsuited medium to carry some of the dimensions of meaning.  Or that languages of different expressive powers can have different scopes for reflexivity.  I feel like this was behind Frank’s comments the other day that “it’s all grist for the mill”.  I have a badly hallucinatory image of a fixed point theorem, where the fixed point corresponds to “meaning”, but it isn’t carried necessarily “on” or “in” the patterns within any one component of the system, but rather somehow constructed from what they do together.  So the way to express the meaning is to be able to represent and solve the construction.
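Returning to item 1: the large-deviation character of asymptotic error correction shows up even in a plain, un-nested majority vote, where the failure probability falls exponentially in the block size. A small illustrative calculation of my own (not Shannon's or Gács's actual constructions):

```python
from math import comb

def majority_fail_prob(n, p):
    """Probability that a majority vote over n independent noisy copies
    (each flipped with probability p) decodes to the wrong bit; n odd."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Failure probability decays exponentially in n: the large-deviation regime.
for n in (1, 9, 27, 81):
    print(n, majority_fail_prob(n, 0.1))
```

The exponential decay is the large-deviation statement; the homogeneity of copies and the externally given "message" are exactly the simplifications item 1 flags as obstacles to carrying this beyond metaphor.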

Actually, let me give up now.  I didn’t want to let this post from you go, because I am so much in agreement with it, but my list above is atrocious.  It starts okay, but gets more conceptually confused and jumbly as I go down.  Maybe in a different frame of mind I can think more clearly, and see an idea I could imagine putting time into.

Thanks,

Eric



> On May 27, 2020, at 7:59 AM, uǝlƃ ☣ <[hidden email]> wrote:
>
> I really *want* to say something about building a machine (to be provocative) that implements a "reliable in the long-run without predicting the contents of reliable sentences" mechanism. I'm purposefully trying to elide your cognizing-social behavers in order to "flatten" the mechanism somewhat ... to root out the unspeakable-innerness-bogeyman, flatten the leaves of the graph, at least. This would still allow for hierarchy (even a very deep one), just without allowing for things that cannot be talked about.
>
> I don't think it's all that useful to painstakingly knead Peirce's writings looking for a proto-structure, even though I often complain about people like Wolfram who consistently fail to cite those on whose shoulders they stand. It would be more interesting to simply try to build a system that has some hint of the sought features. Here, I'm thinking of Luc Steels' robots playing language games. A simulator [†] of Ackley's work you mention, or even of something like the Debian package dependencies might approach it, too. (Marcus often raises branch prediction methods, which may also apply to some extent.) I can't help but also think of Edelman and Tononi's "neural darwinism" and Hoffman's "interface theory of perception". I mention these because they used mechanistic simulation as persuasive rhetoric, albeit purely justificationist -- i.e. little to no attempt to *falsify* the simulation mechanisms against data taken from an ultimate referent; please correct me if I'm wrong.
>
> Along some similar lines, I've been exposed to (again, mechanistic/constructive) simulation of "innovation", wherein propositions about how/why seemingly unique phenomena like Silicon Valley (as a system) or particular disruptors like the iPhone emerge.
>
> I don't find any of these machines compelling, though. So I can't really say anything useful in response to your post, except to say that it would be *great fun* to try to construct a self-correcting truth machine. It would be even more fun to construct several of them and have them compete and be evaluated against an implicit objective function.
>
>
> [†] Re: Jon's cite of Baudrillard's dissimulation, I (obviously) have to disagree with the dichotomy between [dis]simulation. To act as if you don't have something you do have requires you to use other things you do have to hide the something you're hiding. I'm struggling to say this concretely, though. In the trustafarian case, the spanging (dissimulation) couples well with the dreadlock wax (simulation). Can there be dissimulation without a complementary simulation? And if not, if they always occur together, then distinguishing them may not buy us much.
>
>
> On 5/21/20 4:14 PM, David Eric Smith wrote:
> >> I use the stripped-down form in the hope of building a recursive tree of mutual refereeing for all elements of scientific practice, now appealing to my mental image of Peter Gacs’s error-correcting 1D cellular automaton, which achieves reliability by nesting correcting structure within correcting structure.  Then I can look for every aspect of our practice that is trying to play this role in some way.  A subset includes:
>> 1. Intersubjectivity to guard against individual delusion, ignorance, oversight, and similar hazards.
>> 2. Experimentation to guard against individual and group delusion etc, and to provide an additional active corrective against erroneous abduction from instances to classes.
>> 3. Adoption of formal language protocols:
>> 3a. Definitions, with both operational (semantic) and syntactic (formalist) criteria for their scope and usage
>> 3b. Rigid languages for argument, including logic but also less-formal standards of scientific argument, like insistence on null models and significance measures for statistical claims
>>
>> There must be more, but the above are the ones I am mostly aware of in daily work.
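[Editorial aside: Eric's appeal to Gacs can be made concrete with a toy. The sketch below is far weaker than Gacs's actual construction, which nests correctors within correctors to survive noise that never stops; plain one-level majority voting famously cannot do that. But it shows the base mechanism: local voting erases isolated errors.]

```python
# Toy 1D "error-correcting" cellular automaton: each cell adopts the
# majority of its 3-cell neighborhood (periodic boundary).  Isolated
# flipped bits on an all-zero tape are erased in a single step.
# This is an illustrative simplification, NOT Gacs's construction.

def majority_step(tape):
    n = len(tape)
    return [1 if tape[(i - 1) % n] + tape[i] + tape[(i + 1) % n] >= 2 else 0
            for i in range(n)]

tape = [0] * 20
for i in (3, 9, 15):           # sprinkle isolated bit-flip errors
    tape[i] = 1
tape = majority_step(tape)
print(tape)                    # back to all zeros
```

The interesting (and hard) part Gacs solved is what this toy cannot do: keep correcting when every cell, including the correctors themselves, keeps being hit by noise. That is the structural analogy Eric is drawing to items 1-3b, where the correcting machinery (experiments, formal language) is itself fallible.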
>>
> >> These are, to some extent, hierarchical, in that those further down the list are often taken to have a control-theoretic-like authority to tag those higher up in the list as “errors”.  However, like any control system, the controller can also be wrong, and then its authority allows it to impose cascades of errors before being caught.  Hence, I guess, Kant thought that a Newtonian space × time geometry was so self-evident that it was part of the “a priori” to physical reasoning; it was a kind of more-definite-than-a-definition criterion in arguments.  And it turned out not to describe the universe we live in, if one requires sufficient scope and precision.  Likewise, the amount of a semantics that we can capture in syntactic rules for formal speech is likely always to be less than all the semantics we have, and even the validity of a syntax could be undermined (Gödel).  But most common in practice is that the syntax can still be used as a kind of parlor entertainment, while the interpretation of it becomes either invalid or essentially useless when tokens that appeared in it turn out not actually to stand for anything.  This is what happens when things we thought were operational definitions are shown, by construction of their replacements, to have been invalid, as with the classical physics notion of “observable”, or the Newtonian convention of “absolute time”.
>>
> >> I would like to give Peirce’s “truth == reliable in the long run” a modern gloss by regarding the above the way an engineer would in designing an error-correction system.  The instances grouped in the list above are not just subroutines in a computer code, but embodied artifacts and events of practice by living-cognizing-social behavers and reasoners.  Then, from a post-Shannon vantage point, decide what such a system can and cannot do.  What notions of truth are constructible?  How long is the long run, for any particular problem?  What are the sample fluctuations in our state of understanding, as represented in the placeholder terms, rules, or other forms we adopt in the list above in any era, relative to asymptotes that we may or may not yet think we can identify?  How have errors cascaded through that list as we have it now, and can we use those cascades to learn something about the performance of this way of organizing science?  (Dave Ackley of UNM did a lovely project on the statistics of library overhauls for Linux utilities some years ago, which is my mental model in framing that last question.)  I would like formal tools to answer more interesting versions of questions like those.
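[Editorial aside: "how long is the long run?" has a crisp baseline answer for the simplest corrector, repetition with majority vote: the number of repetitions needed grows only logarithmically in the demanded reliability. A sketch of that calculation; the 30% error rate and the 10^-6 target are made-up parameters, purely for illustration.]

```python
# If each independent observation misleads us with probability p, the
# majority vote over n trials is wrong with the binomial tail
# probability below.  We then ask: how large must n be (the "long
# run") before the collective verdict is reliable to a chosen level?

from math import comb

def majority_error(n, p):
    """P(majority of n trials is wrong), each trial erring independently with prob p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

p = 0.3                        # assumed per-observation error rate
n = 1
while majority_error(n, p) > 1e-6:
    n += 2                     # keep n odd so the vote cannot tie
print(n, majority_error(n, p))
```

The Chernoff bound gives the scaling behind this: the error decays like exp(-2n(1/2 - p)^2), so demanding ten times more reliability costs only an additive, not multiplicative, increase in n. That is the post-Shannon sense in which a "long run" can be finite and budgeted.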
>>
> >> I mentioned some of this in a post a month or two ago, and EricC noted in a later reply that Peirce did a lot of statistics, so I understand I can’t take anything here outside the playpen of a listserv until I have first read everything Peirce wrote, and everything others wrote about what Peirce wrote, etc.  I suspect that, since Peirce lived before the publication of at least part of what is now understood about reliable error correction, large deviations, renormalization, automata theory, etc., there should be something new to say from a modern standpoint that Peirce didn’t already know; but that assertion is formalist, and thus valueless.  I have to do the exhaustive search through everything he actually did know, to point out something new that isn’t already in it (constructivist).
>
>
> --
> ☣ uǝlƃ
> -- --- .-. . .-.. --- -.-. -.- ... -..-. .- .-. . -..-. - .... . -..-. . ... ... . -. - .. .- .-.. -..-. .-- --- .-. -.- . .-. ...
> FRIAM Applied Complexity Group listserv
> Zoom Fridays 9:30a-12p Mtn GMT-6  bit.ly/virtualfriam
> un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
> archives: http://friam.471366.n2.nabble.com/
> FRIAM-COMIC http://friam-comic.blogspot.com/ 

