PM-2017-MethodologicalBehaviorismCausalChainsandCausalForks(1).pdf


Re: PM-2017-MethodologicalBehaviorismCausalChainsandCausalForks(1).pdf

gepr
OK. But let's say you only have 2 shops in town. And you have A. One shop will convert A into B or C. And the other shop will convert B into D, but not C. Intransitivity of conversion means that if you choose C you cannot ever get D.

I think I can see how linear logic allows for that. But I can't see how that relies on the symmetric monoidal category.
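A minimal Haskell sketch of that two-shop setup (assuming GHC 9.0+ with the LinearTypes extension; the type and shop names are invented for illustration, and it only shows the intransitivity in the types, not anything about the monoidal structure):

  {-# LANGUAGE LinearTypes #-}

  -- Hypothetical resource types for the two-shop example.
  data A = A
  data B = B
  data C = C
  data D = D

  -- Shop 1 consumes your one A exactly once and hands back a B or a C.
  shop1 :: A %1 -> Either B C
  shop1 A = Left B   -- could just as well be Right C; the choice is external

  -- Shop 2 consumes a B exactly once and hands back a D.  No shop accepts
  -- a C, so once you have chosen C there is no path to D.
  shop2 :: B %1 -> D
  shop2 B = D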



On February 10, 2021 12:50:06 PM PST, Marcus Daniels <[hidden email]> wrote:
>For chemical reactions, linear logic seems more realistic.  

--
glen ⛧


Re: PM-2017-MethodologicalBehaviorismCausalChainsandCausalForks(1).pdf

David Eric Smith
In reply to this post by Frank Wimberly-2
What did Obama say in that last Correspondents’ Dinner speech?

~”I said at the start of my presidency that I hoped that the discourse in our politics would change.
In hindsight I realize I should have been more specific.”

Eric

On Feb 10, 2021, at 2:38 PM, Frank Wimberly <[hidden email]> wrote:

I was warning you not to bring up spark plugs and other BS that made the example lack causal sufficiency, thereby muddying the water unhelpfully.  I guess the warning was too vague.

---
Frank C. Wimberly
140 Calle Ojo Feliz,
Santa Fe, NM 87505

505 670-9918
Santa Fe, NM

On Wed, Feb 10, 2021, 12:34 PM <[hidden email]> wrote:

So the presence or absence of spark plugs screens off car Starting from the Gas tank and the Battery charge.  To put it in terms of ANOVA, there is no additivity of variance in the effects of G, B, and P upon S.  One could, of course, achieve additivity by partitioning the variance into the various interactions in G, B, and P’s effects upon (Pr S).  Or is the analogy between ANOVA and Currying completely without merit?

 

This only demonstrates further that FRIWWMFTT. 

 

N

 

Nick Thompson

[hidden email]

https://wordpress.clarku.edu/nthompson/

 

From: Friam <[hidden email]> On Behalf Of Frank Wimberly
Sent: Wednesday, February 10, 2021 1:15 PM
To: The Friday Morning Applied Complexity Coffee Group <[hidden email]>
Subject: Re: [FRIAM] PM-2017-MethodologicalBehaviorismCausalChainsandCausalForks(1).pdf

 

Nick, 

 

Somehow I don't relate to the sandwich case.  Is having ham and having eggs different from having ham and eggs?

 

Your second question may be related to the following:  if A and B are both causes of C then A and B are not independent given C.  Let C be "car starts", A be "gas in tank" and B be "battery charged".  If you know there's gas in the tank and you observe that the car starts, then you can infer whether the battery is charged.  There are numerous ways to object to this which are irrelevant.  "What if the spark plugs are missing?"  Etc.
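A small numeric sketch of that point in Haskell (the probabilities are invented for illustration; the car is assumed to start exactly when there is gas and the battery is charged):

  type Prob = Double

  -- Joint distribution over (gas, battery, starts), with invented priors
  -- P(gas) = 0.9 and P(battery) = 0.8, independent a priori.
  joint :: [((Bool, Bool, Bool), Prob)]
  joint = [ ((g, b, g && b), pg g * pb b) | g <- [False, True], b <- [False, True] ]
    where
      pg True = 0.9; pg False = 0.1
      pb True = 0.8; pb False = 0.2

  -- P(battery charged | some condition on the joint outcome)
  pBatteryGiven :: ((Bool, Bool, Bool) -> Bool) -> Prob
  pBatteryGiven cond = num / den
    where
      den = sum [ p | (o, p) <- joint, cond o ]
      num = sum [ p | (o@(_, b, _), p) <- joint, cond o, b ]

  main :: IO ()
  main = do
    print (pBatteryGiven (\_ -> True))                -- 0.8:   a priori
    print (pBatteryGiven (\(_, _, s) -> not s))       -- ~0.29: the car won't start
    print (pBatteryGiven (\(g, _, s) -> not s && g))  -- 0.0:   won't start, yet gas is there

Conditioned on the car not starting, learning there is gas changes (here, eliminates) the probability that the battery is charged: the two causes are dependent given the effect.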

 

Frank

---
Frank C. Wimberly
140 Calle Ojo Feliz,
Santa Fe, NM 87505

505 670-9918
Santa Fe, NM

 

On Wed, Feb 10, 2021, 11:50 AM <[hidden email]> wrote:

Thanks, Glen,

You consistently give me thoughts to chew on.  Your introduction of "point of view" into the conversation is a "New Thought" for me, and I am grateful for it.  In particular, it makes apt the metaphor of screening off.  So, let it be the case that a third variable, C, also affects B.  In that case, one could not make predictions from A to B without knowing about C.  Thus, C screens off A from B.  I think I get it. 

Nick   

Nick Thompson
[hidden email]
https://wordpress.clarku.edu/nthompson/

-----Original Message-----
From: Friam <[hidden email]> On Behalf Of uǝlƃ ⛧
Sent: Wednesday, February 10, 2021 12:00 PM
To: [hidden email]
Subject: Re: [FRIAM] PM-2017-MethodologicalBehaviorismCausalChainsandCausalForks(1).pdf

I think I have useful things to say about it. But who knows for sure?

I regard this sort of screen as if *from* the present looking into the past. From the perspective of the 3rd node, can you *see* the 1st node? Or can you only see the 2nd node? (I think I alluded to this in my post about Barbour's "Janus Point".)

As to the meshed gears, as usual, it's useful to crack cause into multiple meanings like agency vs material, formal, and final. But you can also adopt a perspective. From the 2nd gear's perspective, the 1st gear is causing it to move. From the 1st gear's perspective, you are causing it to move. And from a multi-gear perspective, either you *or* the designer is causing the 2nd gear to move. Scoping, scoping, scoping, scoping.


On 2/10/21 9:06 AM, [hidden email] wrote:


> Hi, All,
>

>
> If any of you had any spare brain time, I am interested in  the attached VERY SHORT <https://documentcloud.adobe.com/link/track?uri=urn:aaid:scds:US:a6e9c10b-06dc-4ea1-8ffa-d450df62489a> article:
>

>
> I am struggling here with the idea of "screening off".  Does it mean more or less than the following:  Granted that, If I had ham, and I had eggs, I would have ham and eggs, having eggs screens off having ham from having ham and eggs?   Screening off seems a very odd metaphor.  Is it a term of art in logic?
>

>
> Also, a general problem I have with causality:  My understanding of causality is that event A can cause event B if and only if A is independently known from B (an event cannot cause itself) AND occurs prior to B.  Now imagine two perfectly meshed gears, such that motion in one is instantly conveyed to the other.  I turn gear A and gear B turns.  Has the motion in A /caused/ the turning of B, or has my turning of A caused the motion of B?  With the gears, this may just seem like a fussy “in the limit” sort of question, but there seem to be other phenomena where it’s worth asking.  Does the discharge of potential along the ionized (?) path CAUSE the lightning?
>

>
> I realize that the rest of you have spouses, dogs, cats, hobbies, and day jobs, but any offhand thoughts you have on these matters would be greatly appreciated.

--
↙↙↙ uǝlƃ


Re: PM-2017-MethodologicalBehaviorismCausalChainsandCausalForks(1).pdf

Marcus G. Daniels
In reply to this post by gepr
If you have a non-reaction, then I think you'd switch to a non-unique type or you'd explicitly duplicate.  Return (x,x) instead of x and then destructure the former.
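A tiny Haskell sketch of the "explicitly duplicate" move (illustrative types, LinearTypes assumed): a catalyst that survives the reaction is returned alongside the product, and the caller destructures the pair.

  {-# LANGUAGE LinearTypes #-}

  data Catalyst = Catalyst
  data Product  = Product

  -- The catalyst is consumed linearly but handed back with the product,
  -- so the caller can keep using it.
  react :: Catalyst %1 -> (Catalyst, Product)
  react Catalyst = (Catalyst, Product)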


Re: PM-2017-MethodologicalBehaviorismCausalChainsandCausalForks(1).pdf

gepr
OK. So this is where things like identity, terminal, zero, introduction, elimination, etc. flesh out the symmetry. Ugh. It just feels like a hack. But maybe I'm not thinking clearly. I'll stop now.

On 2/10/21 2:08 PM, Marcus Daniels wrote:
> If you have a non-reaction, then I think you'd switch to a non-unique type or you'd explicitly duplicate.  Return (x,x) instead of x and then destructure the former.

--
↙↙↙ uǝlƃ


Re: PM-2017-MethodologicalBehaviorismCausalChainsandCausalForks(1).pdf

David Eric Smith
In reply to this post by thompnickson2
Nick,

There is a character to this conversation like many in this list.  There is a thing you do by instinct, which drives me batty and makes me want to call you an Analytical Philosopher and other such insults.  Like anything done by instinct (think of Trump’s instinct for instigation), it is surprisingly difficult to try to characterize didactically.

I think the core issue is this (for me):  Each of these terms — screening off, Currying, partitioning of variance — developed within a language in the service of _doing something_.  The thing that makes me want to call you an analytical philosopher is your penchant for insisting on ignoring the context in _doing_ where the term was given to you, treating it as if the whole meaning of something should be carried in the form of its expressions, and then calling the surface expression a metaphor when it isn’t self-interpreting.  I shouldn’t criticize; that is in some sense the essence of the Hilbert program, of Montague Grammar, or of any systematic study of forms: to ask how much of the sense of a thing is mirrored in the form of its expressions.  The default is that you seem to insist on treating everything as if _all_ of the meaning should be carried in form, whereas my expectation is that very little will be carried in form and most in context.  Hence your insistence on taking every sentence out of any context where it makes sense, and insisting on imposing it on a context where it seems non-sequitur (to “madly squeeze a right-hand foot into a left-hand shoe").  

Jon’s emails are helpful (and Glen’s), because they take this exercise seriously, and show that one can get away with a surprising amount of it.  I guess this is where Fields medals come from.

But this is why I introduced Currying in response to your Ham and Eggs; it was only slightly teasing you, but more my opinion that, in _totally and willfully ignoring_ the context for causal inference that Frank gave you, you had moved into another context where Currying was the natural language and Screening Off not the natural one.  So, of course, in response to that, you did what you do, and ignored the context where Currying arises, to jump to Partitioning of Variance.

Let me acknowledge that Jon’s logical renderings show how far one can get away with such things.  All good.  

Then, let me try to back-fill what makes the contexts different, so that I try to build more of that _into_ the forms of the expressions, so that the poor “definitions” of Screening Off or Currying will not have to stand so much on their own.

1. Screening Off:  I think of this as having arisen for a class of problems like updating of a set of variables that live on a network.  Think of coins spread on a table that may be heads or tails, with threads telling which coins “affect” which other coins.  Then have some procedure to flip some of the coins, with probabilities that depend on the current state of the other coins to which they are connected in the network.  (The case I described is an instance of a Boolean Network.  One could generalize to many others while keeping its essential spirit.)

Here is what such cases have in common.  The network that tells which coins affect which others, and the algorithm for flipping coins based on the current states of other coins, are _given outside the state of the coins_.  The coins are _peers_ to each other.  They do not create the network; they do not impose the flipping rule; their job is very limited, to carry the _state_ of the system at any moment.

The notion of Screening Off comes from the act of “marking” a subset of the coins, to get at the sense in which their states may stand between the future states of some other focal coins you may wish to discuss, and the universe of other coins whose states you want to know if you can ignore.  But the “screening” part of Screening Off comes from the peer-status of any coin to any other coin, in context of a network that is provided to you as context.
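A small Haskell sketch of that picture (the three-coin network and the deterministic majority-vote rule are invented stand-ins for the probabilistic flipping rule); the point is that the network and the rule live outside the coins, which only carry state:

  import Data.Map (Map)
  import qualified Data.Map as Map

  type Coin  = Int
  type State = Map Coin Bool

  -- The network is given from outside the coins' states: 1 -- 2 -- 3.
  neighbours :: Map Coin [Coin]
  neighbours = Map.fromList [(1, [2]), (2, [1, 3]), (3, [2])]

  -- The update rule is also given from outside: a coin takes the value held
  -- by the majority of its neighbours (ties keep the current value).
  step :: State -> State
  step s = Map.mapWithKey update s
    where
      update c v =
        let ns    = [ s Map.! n | n <- Map.findWithDefault [] c neighbours ]
            ups   = length (filter id ns)
            downs = length ns - ups
        in case compare ups downs of
             GT -> True
             LT -> False
             EQ -> v

  -- Coin 3's next state depends only on coin 2's current state, so the
  -- marked coin 2 screens coin 1 off from coin 3.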


To this you brought Having Ham and Eggs, as a kind of propositional expression that can go from truth values for ham and for eggs to truth values for the proposition, and you asked how some other propositional expression such as Having Eggs could be combined with the former.

HUH?

Where did the peer status of coins go?  Where did the externalness of the network and the flipping rule go?  Where did these new maps and operations come from?  None of that had been part of the _doing_ context in which the Screening Off expression was found helpful to organize thought.

2. Currying:  But, if we took the expression Having Ham and Eggs and asked whether it was familiar, and in what context it _had_ been found useful, we would recognize that it was at home in discussions of function application.  A function is a map from some domain that we call the inputs to some range that we call the outputs.  As Jon wrote:
(+) 2 5 = 7
can take a pair of numbers (2 and 5) and return a single number (7).
Currying is an operation that changes one kind of map to another kind of map, allowing us to do such things as change its domain or its range.  Hence, from (+) and (2) we can create a new map
(+ 2) 5 = 7
which takes a single number (rather than a pair) as its input (5) and returns a single number (still) as its output (7).
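In Haskell notation that conversion is literally the curry/uncurry pair (a minimal sketch):

  addPair :: (Int, Int) -> Int
  addPair (x, y) = x + y

  addCurried :: Int -> Int -> Int
  addCurried = curry addPair     -- addCurried 2 5 == 7

  addTwo :: Int -> Int           -- a new map with a smaller domain
  addTwo = addCurried 2          -- addTwo 5 == 7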

Now the general habit in _calling_ something a function is to emphasize its role as a mapping — an activity that responds to inputs and delivers outputs.  The act is of interest; what the inputs and outputs are, or whether there is one or another algorithm relating them, can all be varied within the same notion _that_ a certain activity counts as “mapping”, and that the map is thus a function.  

To this, you branched to ANOVA which lives in the domain of regression in statistics.  Fair enough, and good as a case study.  (I am a heavily example-based thinker, so I have great sympathy for people who quickly look for examples.)  But additivity, or other properties of the _algorithm_ for relating outputs to inputs, is a separate matter of context from something’s _being_ a function.  To the extent that Currying was about converting one kind of function to another kind of function, that aspect of the abstraction gets lost in a kind of inattentional blindness if one goes to asking whether projection of a regression onto a subset of coordinates is central to the dimensional reduction of functions (it is not general; it is the special feature that sets additivity apart among algorithms).  So it is not that projection might not be an instance of function transformation — that is okay — but that the awareness that what you are doing is _converting a map_ gets lost in focusing on the accidental features of a case.

But if we had chosen once again to shift contexts, we would have arrived at

3. Partitioning of Variance: a property of statistical reductions through linear regression, which is quite at home in the doing-context of ANOVA, and useful there.

etc.

I don’t know if my above is at all helpful.  But I do think that contexts can be made explicit with a lot of work.  When these circles (circuses? Jamborees?) of confusion result, it often seems that they can be untangled by making explicit within descriptions, the contexts that before were not described.

Eric




> On Feb 10, 2021, at 4:08 PM, <[hidden email]> <[hidden email]> wrote:
>
> All,
>
> I guess, since nobody has responded to it, my attempt to analogize currying
> to partitioning of variance in an ANOVA is NOT apt.   Definitely a case of
> FRIWWMFTT.
>
>
>
> Nick
>
> Nick Thompson
> [hidden email]
> https://linkprotect.cudasvc.com/url?a=https%3a%2f%2fwordpress.clarku.edu%2fnthompson%2f&c=E,1,2PeawXnMwmzyrvQYWpZ8tNyYZPGTG4AvbA_TApMZOhdR3ezvpDIZqFkq1aqBMTp3-cT-gCA1YvsHLEgbS_ivXXxsXIiVfxruxIzMt8n2WQ,,&typo=1
>
> -----Original Message-----
> From: Friam <[hidden email]> On Behalf Of jon zingale
> Sent: Wednesday, February 10, 2021 2:54 PM
> To: [hidden email]
> Subject: Re: [FRIAM]
> PM-2017-MethodologicalBehaviorismCausalChainsandCausalForks(1).pdf
>
> Ha! just posted on that point!
>
>
>

Re: PM-2017-MethodologicalBehaviorismCausalChainsandCausalForks(1).pdf

Marcus G. Daniels
I thought this is where you were going, but if you can cause trouble, so can we!


Re: PM-2017-MethodologicalBehaviorismCausalChainsandCausalForks(1).pdf

thompnickson2
In reply to this post by David Eric Smith
Dear David,  

Just a quick note to thank you for the time you took to lay that all out.  I hope that, at some future time, that effort is useful for you.  Part of why I miss teaching so badly is that often the most useful insights I have arise from trying to explain something to somebody else.

To be honest, nothing would give me more pleasure than an exhaustive and friendly critique of my mode of thinking.  Remember, whatever it is, I am stuck inside it.  Only when others reflect it back to me can I see it.  Unfortunately, even my tolerance of my own narcissism has its limits, and so I am writing offline.  I am, I think, a metaphoric thinker.  (Or a chronic abducer, which may be the same.)  I hold things up from different domains and try to see how they are the same.  When I don't understand something, I hold it up beside all the things I think I do understand to try to find the similarities.  I think I get it from my parents, who were in the trade book publishing industry.  To be a book editor you have to have a certain kind of arrogance to think that you can read and evaluate anything, which means you have to find a way to bring a context to almost anything.  I cannot NOT think that way.  That mode of thinking has got me where I am today, which is not very far, but far enough.  I am, however, 83, and at some point this "skill" I have developed will degrade into the simple inability to tell things apart.  At that point, I hope I will Do A Reagan.  I count on my colleagues at FRIAM to tell me when that time has come.

Nick

Nick Thompson
[hidden email]
https://wordpress.clarku.edu/nthompson/


Re: PM-2017-MethodologicalBehaviorismCausalChainsandCausalForks(1).pdf

jon zingale
In reply to this post by thompnickson2
"""
The notion of Screening Off comes from the act of “marking” a subset of
the coins, to get at the sense in which their states may stand between
the future states of some other focal coins you may wish to discuss, and
the universe of other coins whose states you want to know if you can
ignore.  But the “screening” part of Screening Off comes from the
peer-status of any coin to any other coin, in context of a network that
is provided to you as context.
"""

I find this elaboration helpful. The metaphor of Screening Off seems
right to me in that it is not a walling off, but rather acting *as if*
something was in a different room though it is not, “marking”. Once we
introduce marked variables, the bookkeeping has a calculus all its own.
From a SEP article[S], there is a nice explication of Screening Off from
the perspective of a Markov condition:

  For every variable X in V, and every set of variables Y ⊆ V ∖ DE(X),
  P(X ∣ PA(X) & Y) = P(X ∣ PA(X)).

  where DE(X) is the collection of descendants of X, PA(X) the parents.

This definition highlights the arbitrary nature of Screening Off.
Y may be a parent of X, in which case, the triviality comes from claiming
that we can cancel the redundant Y as it already is accounted for. In
the other case, we can cancel Y because it has no causal effect on X.
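As a concrete check of the condition, here is a small Haskell sketch for the chain A -> B -> C, where PA(C) = {B} and A is not a descendant of C (the conditional probability tables are invented for illustration):

  type Prob = Double

  pA :: Bool -> Prob
  pA a = if a then 0.3 else 0.7

  pBgivenA :: Bool -> Bool -> Prob
  pBgivenA a b = case (a, b) of
    (True,  True)  -> 0.9
    (True,  False) -> 0.1
    (False, True)  -> 0.2
    (False, False) -> 0.8

  pCgivenB :: Bool -> Bool -> Prob
  pCgivenB b c = case (b, c) of
    (True,  True)  -> 0.7
    (True,  False) -> 0.3
    (False, True)  -> 0.4
    (False, False) -> 0.6

  joint :: [((Bool, Bool, Bool), Prob)]
  joint = [ ((a, b, c), pA a * pBgivenA a b * pCgivenB b c)
          | a <- [False, True], b <- [False, True], c <- [False, True] ]

  -- P(C = True | some condition on the joint outcome)
  pCGiven :: ((Bool, Bool, Bool) -> Bool) -> Prob
  pCGiven cond = num / den
    where
      den = sum [ p | (o, p) <- joint, cond o ]
      num = sum [ p | (o@(_, _, c), p) <- joint, cond o, c ]

  main :: IO ()
  main = do
    print (pCGiven (\(_, b, _) -> b))           -- P(C | B=True)          = 0.7
    print (pCGiven (\(a, b, _) -> b && a))      -- P(C | B=True, A=True)  = 0.7
    print (pCGiven (\(a, b, _) -> b && not a))  -- P(C | B=True, A=False) = 0.7

Conditioning further on A changes nothing once B is given: B screens A off from C, exactly as the Markov condition states.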

From the Sober paper, I gather that the introduction of an intermediate
stage (X) into his 'V' model gives rise to a 'Y' model which screens off
some initial stage (S) from later stages (R1, R2)[?]. He further asserts
(and this would better be addressed by a practicing Bayesian) that this
introduction is non-trivial. Riffing off of Glen's comments, allow me
a bit more rope to hang myself. X depends causally on S, the total
effect of S on the later network is present at X and therefore the result
of X and the probability associated with X is sufficient for causation
at R1 and R2. However, wrt the stage of definition S, X introduces some
uncertainty having the effect of correlating uncertainty in A and B, a
possibly uncertain representation is an uncertain representation.

In the 'V' model we have a lack of dependence and a Screening Off. This
then is also the case for R1 and R2 conditioned on X in the 'Y' model.
However, with respect to conditioning on S in the 'Y' model, uncertainty
creeps in. Now, like quantum states, R1 & R2 relative to S, cannot be
written in product form, and so they must be handled as an irreducible,
entangled whole.

I am not sure that this post contributes much to what others have
already said, but I wanted to struggle on a bit.

[S] https://plato.stanford.edu/entries/causation-probabilistic/

[?] A continued point of confusion for me, relative to the paper, is
determining whether the Screening Off is between R1 and R2 or between S
and (R1, R2) or both. The other confusion for me occurs because Screening
Off is a cancellation property on the condition and he appears to want
to apply screening to variables *left of the bar*. I likely just need to sit
with it a bit, but any clarifications are welcome.




Re: PM-2017-MethodologicalBehaviorismCausalChainsandCausalForks(1).pdf

Frank Wimberly-2
For every variable X in V, and every set of variables Y ⊆ V ∖ DE(X),
  P(X ∣ PA(X) & Y) = P(X ∣ PA(X)).

I believe my example using (A -> B -> C) is a very specific example of this.

Frank



--
Frank Wimberly
140 Calle Ojo Feliz
Santa Fe, NM 87505
505 670-9918



Re: PM-2017-MethodologicalBehaviorismCausalChainsandCausalForks(1).pdf

jon zingale
In reply to this post by jon zingale
Perhaps of further use(fulness/lessness) is a cartesian product
interpretation of screening-off in the V-Y model[&]. If we consider each
stage to be a set of *observations* and functions between them as
relating *evidence*, we can interpret *cause* as the epimorphisms, those
functions that are right cancellable (screening off earlier stages) and
whose domain observations fully account for the codomain observations[s].
Correlation ultimately sneaks in whenever we coequalize (also an
epimorphism and so causal) functions from the product.

The Bayesian interpretation, as far as I can tell, gives criteria for
when this modding out should occur (distal causes?) and how it is to be
handled. My hope for this approach is to elucidate when one can infer
linkages safely in a causal network and when one cannot, the distinction
being that while evidence ought to compose without side-effects,
causality can not. From a high level, the *screen-breaking* condition is
effectively summarized as 'no functions on products without modding out'.

Now, given any product data: (π1: X -> R1, π2: X -> R2)[𝝥] we can look
at how maps from earlier stages relate to the triple (X, π1, π2).

It follows that any pair of functions with a common domain:
(a: S -> R1, b: S -> R2) have a unique interpretation through X, as X is
a product as well as a cause. The functions (a or b, say) can come in
two varieties, causal or not, the latter perhaps contributing evidence.

In the case that a and b are causal, we are not guaranteed 'classical'
screening-off of S from R1 and R2, via X. As stated above, X being a
product guarantees a unique representation (r1, r2): S -> X, such that
we can recover a: S -> R1 as π1∘(r1, r2) and b by π2∘(r1, r2). Now in
the case that the map (r1, r2) is epic, we either have just as much
information as is carried into the projection, or something unnecessary
is lost. Otherwise, S is 'smaller' than X and cannot be a cause of X.
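A tiny Haskell sketch of the universal property being leaned on here (the stage types are invented for illustration): given a: S -> R1 and b: S -> R2 with a common domain, the pairing a &&& b is the unique map into the product that the projections recover.

  import Control.Arrow ((&&&))

  -- Hypothetical stage types.
  type S  = Int
  type R1 = Bool
  type R2 = String

  a :: S -> R1
  a = even

  b :: S -> R2
  b = show

  mediating :: S -> (R1, R2)
  mediating = a &&& b            -- the (r1, r2) : S -> X of the post

  -- fst . mediating == a  and  snd . mediating == b, pointwise:
  check :: S -> Bool
  check s = fst (mediating s) == a s && snd (mediating s) == b s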

This being said, we can now return to the Markov interpretation: For S
causal on X, S is in 'V', X is in DE(S) and so nothing can be said about
the truth of P(R1 & R2 ∣ PA(S) & X) = P(R1 & R2 ∣ PA(S)). For S non-
causal on X, S is neither a parent nor descendant cause and so classically,
P(R1 & R2 ∣ PA(X) & S) = P(R1 & R2 ∣ PA(X)).

I continued to sketch out a handful of other ideas, but they were much
sketchier than even that above. Let me stop here for now.

[&] Cartesian product is what I think of whenever we invoke 'AND'.

[s] I wish to connect the question of *choice* in variation partitioning
with the idea of *section* for epimorphisms, further suggesting that
the collection of these sections may give a presheaf category. It is not
yet at all clear to me that this intuition is correct, but hey.

[𝝥] The projection maps (π1, π2) are epimorphic by design and so are
directly interpretable as causal.





Re: PM-2017-MethodologicalBehaviorismCausalChainsandCausalForks(1).pdf

jon zingale
two brief addenda:
1) I am thinking of the product as being 'at least a product'.
2) Perhaps a further connection to symmetric monoidal categories exists by
thinking of all proximate links as tensored products and all distal links as
quotiented products. idk.




Re: PM-2017-MethodologicalBehaviorismCausalChainsandCausalForks(1).pdf

Frank Wimberly-2
Jon,

I sort of understand what you wrote.  It's not clear to me what categories add over the usual Bayes net approach.  I hope we'll have a chance to talk about this after our vaccinations have taken effect.

Frank


---
Frank C. Wimberly
140 Calle Ojo Feliz,
Santa Fe, NM 87505

505 670-9918
Santa Fe, NM


Re: PM-2017-MethodologicalBehaviorismCausalChainsandCausalForks(1).pdf

jon zingale
Yes, I would like that. Perhaps, at Saveur over coffee :)




Re: PM-2017-MethodologicalBehaviorismCausalChainsandCausalForks(1).pdf

jon zingale
In reply to this post by Frank Wimberly-2
In an attempt to ground my above thoughts, and respond to your apprehension,
I found these two John Baez articles offering some explanation for why one
might want to use category theory to guide Bayesian network calculations.
Additionally, I was pleased to see that he makes explicit the connection to
monoidal categories[2].

[1]
https://golem.ph.utexas.edu/category/2018/07/bayesian_networks.html#:~:text=in%20causal%20theory.-,Introduction,nodes%2C%20satisfying%20the%20Markov%20condition.

[2]
https://golem.ph.utexas.edu/category/2018/01/a_categorical_semantics_for_ca.html#more

For my own part, the intuition to approach the problem of separating actual
*causes* from collections of *evidence* via presheaves mostly follows from
choosing to study the epimorphisms associated with such networks. The
sections are going to need to compose, forming *narratives* (not sure what
else to call these) that give a *space* of causal lineages. It strikes me (a
tourist) that such a framework could be useful when reasoning about
evidence.


Re: PM-2017-MethodologicalBehaviorismCausalChainsandCausalForks(1).pdf

Frank Wimberly-2
Thanks, Jon.  I will work on this.

One of my first tasks in my causal reasoning job was to write Java classes and methods, and an applet that used them, to allow a user to interactively enter a DAG and then enter pairs of nodes and return the d-separation facts for each pair.  Chris Meek, who is mentioned by Baez, defined the task for me.
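Not that Java code, but a rough Haskell sketch of the standard test such a tool performs (names invented): X and Y are d-separated given Z iff they are disconnected, after removing Z, in the moralized graph of the smallest ancestral set containing X, Y, and Z.

  import Data.Map (Map)
  import qualified Data.Map as Map
  import Data.Set (Set)
  import qualified Data.Set as Set

  type Node = String
  type Dag  = Map Node [Node]          -- each node mapped to its parents

  parents :: Dag -> Node -> [Node]
  parents g n = Map.findWithDefault [] n g

  -- smallest ancestral set containing the given nodes
  ancestors :: Dag -> Set Node -> Set Node
  ancestors g s
    | s' == s   = s
    | otherwise = ancestors g s'
    where s' = Set.union s (Set.fromList (concatMap (parents g) (Set.toList s)))

  -- undirected edges of the moral graph restricted to the ancestral set
  moralEdges :: Dag -> Set Node -> Set (Node, Node)
  moralEdges g anc = Set.fromList
    [ edge | c <- Set.toList anc
           , let ps = filter (`Set.member` anc) (parents g c)
           , edge <- [ ordered c p | p <- ps ]                   -- child-parent edges
                  ++ [ ordered p q | p <- ps, q <- ps, p < q ] ] -- "marry" the parents
    where ordered a b = (min a b, max a b)

  -- nodes reachable from `start` without passing through `blocked`
  reachable :: Set (Node, Node) -> Set Node -> Node -> Set Node
  reachable edges blocked start = go (Set.singleton start) [start]
    where
      nbrs n = [ if a == n then b else a | (a, b) <- Set.toList edges, a == n || b == n ]
      go seen []       = seen
      go seen (n : ns) =
        let new = [ m | m <- nbrs n, not (m `Set.member` seen), not (m `Set.member` blocked) ]
        in go (foldr Set.insert seen new) (new ++ ns)

  dSeparated :: Dag -> Node -> Node -> [Node] -> Bool
  dSeparated g x y zs = not (y `Set.member` reachable edges (Set.fromList zs) x)
    where
      anc   = ancestors g (Set.fromList (x : y : zs))
      edges = moralEdges g anc

  -- e.g. for the chain A -> B -> C:  dSeparated chain "A" "C" ["B"] == True
  chain :: Dag
  chain = Map.fromList [("A", []), ("B", ["A"]), ("C", ["B"])]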

---
Frank C. Wimberly
140 Calle Ojo Feliz,
Santa Fe, NM 87505

505 670-9918
Santa Fe, NM
