I am referring to free will as the possibility that choices can be made contrary to the coupled quantum-mechanical wave function associated with a human brain, or with any other intelligent agent, as entangled with the universe. The dictionary definition is unhelpful because it ties free will to deterministic systems, and it is easy (with a credit card) to extend a computing system with a true random number generator. I think that is a distinction without a difference, which also casts doubt on the utility of the concept of free will. If faking it can't be detected, we might as well not talk about a distinction. Turing test, etc.
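The true-RNG extension is cheap in code as well as in hardware. A minimal sketch, assuming a hardware RNG exposed as a character device (the `/dev/hwrng` path is an assumption; it varies by product), with a fallback to the OS entropy pool:

```python
import os

def true_random_bytes(n, device="/dev/hwrng"):
    """Return n bytes, preferring a hardware RNG device when available."""
    try:
        with open(device, "rb") as f:
            data = f.read(n)
        if len(data) == n:
            return data
    except OSError:
        pass  # no device, or no permission: fall back below
    # Fallback: the kernel's entropy pool (itself partly seeded by hardware noise)
    return os.urandom(n)

coin = true_random_bytes(1)[0] & 1  # one "physically random" bit
```

Whether the bits came from avalanche noise or from a deterministic pool is exactly the distinction that, per the argument above, can't be detected from the outside.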
On 6/15/20, 5:45 PM, "Friam on behalf of glen∉ℂ" wrote:

    Well, you only said you could write an ABM. You didn't mention "conventionally call a serial computer". Given where you work, you might have been hypothesizing that you could write an ABM on some other kind of computer. But whatever, I already agreed that it wouldn't. I'll repeat that what's more interesting is whether it would *look* like it did ... whether it could *simulate* free will, which is the topic at hand. Again, I'm not talking about a different concept. I'm talking about this: https://www.merriam-webster.com/dictionary/freewill

    On 6/15/20 5:40 PM, Marcus Daniels wrote:
    > Anyone that says an ABM can display Free Will, running on what we conventionally call a serial computer, is certainly talking about a different concept.

- .... . -..-. . -. -.. -..-. .. ... -..-. .... . .-. .
FRIAM Applied Complexity Group listserv
Zoom Fridays 9:30a-12p Mtn GMT-6 bit.ly/virtualfriam
un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/
In reply to this post by thompnickson2
Hi, y'all,

I think, for most people, the idea of free will is irrevocably tied to Descartes's notion that, while animals are machines, God gave to Man (and perhaps even to Woman) the power to choose between good and evil. It is the idea of the ghost in the machine, and it lies at the core of our legal system. Stripped of all its religious and legal freight, it boils down to the notion that, when I act deliberately, there are two of me: the me that acts and the me that chooses to act. So, the me that decides whether or not to pick up the dropped dried cranberry is different from the me that kicks it under the toe board, whence the mice, I hope, will carry it off before Penny sees it.

Now, from a third-person point of view, you have no need of any of that. You see the cranberry fall; you see, if you know me well, perhaps that two parts of me are activated: a preparing to bend down and a preparing to move my foot into place. For a flash, I seem hesitant; my behavior is, momentarily, disintegrated. But then it re-integrates, my foot scuffs the cranberry out of sight, and you might, if you are observant, see me scan the room with my eyes to reassure myself that nobody has seen me. From your point of view, it is like one of those moments when the mercury bubble in the thermostat jiggles in its vial and the furnace stutters, coming on and off two or three times in a few seconds. No need for free will there.

Now, I am under no illusion that human individuals are wholly integrated beings. In fact, evolutionary theory suggests that we have been designed by two selection regimens: one that privileges the individual, and one that privileges any group that we associate with. At any one time, these two behavioral tendencies are struggling for the controls of our body-engine, like the villain and the hero struggling for the controls of the locomotive hurtling down the tracks toward the bound maiden.
In ethology, the field in which I trained, this sort of struggle for control of the apparatus of the body is commonplace in animals. Two ducks competing for a female, balanced between stimuli that tell them to attack and stimuli that tell them to flee, will suddenly break into elaborate preening, the "energy" aroused by the conflict allegedly spilling over and taking control of the preening apparatus. Such displacement preening serves both combatants because it spares them fruitless combat, and so it gets woven into aggressive displays, and has even resulted in special plumage to enhance the visual impact of the bogus preening.

The only need for free will arises from my first-person sense that I have made a decision not to pick up the cranberry and then acted on that decision. It's this strange notion that something other than decisive action constitutes decision. But we entertain many illusions in our perception, and I choose to give this one no more credence than the illusion I had the other night that the full moon rose in the east, or that it shrank in size as it vaulted toward the zenith.

Now, I got bogged down over the weekend, so I still don't know where you guys came down on that issue. I get the impression, perhaps, that what you have been arguing about is entirely orthogonal to my concern.

All the best,

Nick

Nicholas Thompson
Emeritus Professor of Ethology and Psychology
Clark University
[hidden email]
https://wordpress.clarku.edu/nthompson/

-----Original Message-----
From: [hidden email]
Sent: Monday, June 15, 2020 4:05 PM
To: 'The Friday Morning Applied Complexity Coffee Group'
Subject: RE: [FRIAM] alternative response

Jon, Glen,

As a matter of historical fact, I think Jon is right. But for me the most interesting cases of free will occur in the most trivial and banal situations. Let it be the case that I drop a dried cranberry on the floor: Am I going to bend down and pick it up? Or am I going to slip it into the toe space under the cupboard? I used to ask myself, as if I were in charge, "Which shall I do?" Now I just wait to see what I do.

Nick

-----Original Message-----
From: Friam On Behalf Of Jon Zingale
Sent: Monday, June 15, 2020 3:56 PM
Subject: Re: [FRIAM] alternative response

Glen says: "I don't think free will is bound with (naive) morality at all. It's all about selection functions. Do I turn this way or that. Do I eat some food, go for a run, or read a book. So, I don't see it as 'importing' anything. Free will is all about which things are bound and which things are free (and which things are partially bound ... constrained)."

I would have to disagree. While I think that *will* more generally has to do with the agency you mention, conversations of *free will* are a kind of pathology that happens in the limit. When we discuss whether or not I have this choice or that, the most trivial philosophical cases are those of selection functions and don't require the full import of FREE will. Again, the discussion of free will is for the benefit of whom? Outside of conversations where we go back and forth about determinism and the degree to which biology is or is not able to exploit indeterminism, the motivating impetus for discussing free will is one of assigning responsibility.
In reply to this post by Marcus G. Daniels
Marcus,
Free from WHAT, exactly? What is having free will freedom FROM? And what is will that is not free? Aren't we having an oxymoron problem?

n

Nicholas Thompson
Emeritus Professor of Ethology and Psychology
Clark University
[hidden email]
https://wordpress.clarku.edu/nthompson/

-----Original Message-----
From: Friam On Behalf Of Marcus Daniels
Sent: Monday, June 15, 2020 6:40 PM
To: The Friday Morning Applied Complexity Coffee Group
Subject: Re: [FRIAM] alternative response

Anyone that says an ABM can display Free Will, running on what we conventionally call a serial computer, is certainly talking about a different concept.

On 6/15/20, 5:35 PM, "Friam on behalf of glen∉ℂ" wrote:

    I agree. I doubt it would display free will, too. But it's an interesting question whether it would or not. It's an even more interesting question whether it would *look* like it exhibited free will, which is the question RussA asked.

    On 6/15/20 3:25 PM, Marcus Daniels wrote:
    > Ok, I can make an ABM of that. Surely such an ABM does not display free will.
In reply to this post by gepr
It was once said that nuns were the freest of all humans because they never did anything out of habit. Freedom from what? Webster says "spontaneous". Like most dictionary definitions, it just postpones the problem.

Nick

Nicholas Thompson
Emeritus Professor of Ethology and Psychology
Clark University
[hidden email]
https://wordpress.clarku.edu/nthompson/
In reply to this post by thompnickson2
`Freedom' from physical laws.
On 6/15/20, 8:34 PM, "Friam on behalf of [hidden email]" wrote:

    Free from WHAT, exactly? What is having free will freedom FROM? And what is will that is not free? Aren't we having an oxymoron problem?

    n
So, I exert my free will if and only if I do something that could not be anticipated by any generality?
Nick

Nicholas Thompson
Emeritus Professor of Ethology and Psychology
Clark University
[hidden email]
https://wordpress.clarku.edu/nthompson/

-----Original Message-----
From: Friam On Behalf Of Marcus Daniels
Sent: Monday, June 15, 2020 9:51 PM
Subject: Re: [FRIAM] alternative response

`Freedom' from physical laws.
Let's say the cranberry drops on the floor, and at that moment a cosmic ray passes through your skull, causing you to experience a visual flash in your peripheral vision. You might know the risks of leaving food for mice, but you are startled and forget the cranberry. A mouse in your house eats it and experiences a sugar rush that keeps it up for hours; it chews through some electrical insulation, and your house catches fire. Oops. Why did you not exercise free will and retrieve the cranberry? Aren't your executive functions immune to mere nerve impulses?
On 6/15/20, 9:02 PM, "Friam on behalf of [hidden email]" wrote:

    So, I exert my free will if and only if I do something that could not be anticipated by any generality?

    Nick
In reply to this post by thompnickson2
As usual, I am below arguing a position that I don't necessarily believe. This is a game, a temporary hypothesis. Precede every assertion with "Let's say that ..."
1) There's no need for two of you. You are a steady mesh of choices in parallel, from the tiniest cellular process to picking up the cranberry. And I agree, there's no need for free will there.

2) The "two behavioral tendencies" are not *two*. They are a loose collection of many behaviors that *might* group, ungroup, and regroup. The compositional machinery that does the grouping does NOT pit one group of behaviors against another group of behaviors. It mixes and matches behaviors to arrive at a grouping that (kinda-sorta) optimizes for least effort.

3) The "first person sense" is the perception of irreversibility. It is the mesh of you clipping the tree of possibilities. In a different post, you asked "freedom from what?" The answer I'm proposing here is: freedom from evaluating/realizing every POSSIBLE next event. At any given instant, there's a (composite) probability distribution for everything that *could* happen in the next instant. Some events are vanishingly unlikely. Other events are overwhelmingly likely. The interesting stuff is somewhere in between, like 50% likely to happen. Within some ε of 50% are the things you sense/feel/perceive. And as the options fall away, you feel/realize the lost opportunity. That is the first-person perspective you talk about. Again, no free will is required.

4) When you feel that lost opportunity, i.e. when you sense that you've now gone down an irreversible path, for a little while, you can ask "what if I'd taken that path and not this one?" Again, no free will is required, only the ability to *perceive* that there were other paths your mesh/machine could have taken if the universe had been different.

5) That cohesive sensing is identical to the compositional machinery in (2) above. There's a storage/memory to that compositional machinery that can remember the historical trace the mesh took ... the "choices" made by the mesh. So, the NEXT time your mesh is on a similar trajectory, your compositional machinery will be slightly biased by your history. That memory of lost opportunities is what we call free will.

On 6/15/20 8:29 PM, [hidden email] wrote:
> ... when I act deliberately, there are two of me, the me that acts and the me that chooses to act. [...] Now from a third person point of view, you have no need of any of that. [...] No need for free will there.
>
> Now I am under no illusion that human individuals are wholly integrated beings. In fact, evolutionary theory suggests that we have been designed by two selection regimens, one that privileges the individual, and one that privileges any group that we associate with. At any one time, these two behavioral tendencies are struggling for the controls of our body-engine, [...]
>
> The only need for free will arises from my first person sense that I have made a decision not to pick up the cranberry and then acted on that decision. [...]
>
> Now I got bogged down over the weekend, so I still don't know where you guys came down on that issue. I get the impression, perhaps, that what you have been arguing about is entirely orthogonal to my concern.
uǝʃƃ ⊥ glen
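The picture in points (3) and (5) can be caricatured in a few lines. Everything here — the event names, the weights, the ε — is an invented illustration, not a claim about any actual model:

```python
import random

rng = random.Random(1)

# Hypothetical next-instant events and their probabilities (invented numbers).
events = {"bend_down": 0.48, "kick_under": 0.47, "walk_away": 0.05}
EPS = 0.10  # only events within ε of 50% are consciously "felt"

# The felt options: the in-between stuff, per point (3).
felt = [e for e, p in events.items() if abs(p - 0.5) < EPS]

# The mesh clips the tree: one event happens, the rest fall away.
chosen = rng.choices(list(events), weights=list(events.values()))[0]
lost = [e for e in felt if e != chosen]  # the perceived lost opportunities
```

The vanishingly unlikely `walk_away` never enters `felt`; only the near-50% options are perceived, and whichever of them is not realized lands in `lost` — the memory that point (5) identifies with free will.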
Glen writes:
"5) That cohesive sensing is identical to the compositional machinery in (2) above. There's a storage/memory to that compositional machinery that can remember the historical trace the mesh took ... the "choices" made by the mesh. So, the NEXT time your mesh is on a similar trajectory, your compositional machinery will be slightly biased by your history. That memory of lost opportunities is what we call free will."

Let's say a person is faced with a decision to 1) pay, 2) report, 3) intimidate, or 4) kill a blackmailer, without much in the way of predictive ability on what will happen if she takes any option. Every option has a different kind of risk. For the sake of argument, let's say the probability of the very next thing is exactly degenerate at a probability of 0.25. One of those four things will happen, and by construction there is no way to know or control which one. Then that thing happens, and the formation of the memory regarding the three lost options will follow all of the same probabilistic rules. We could build a simulation of it, and the simulation won't be able to defy its programming.

Marcus
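Marcus's scenario reduces to a few lines of simulation. The option names are from his example; the uniform weights are his "degenerate at 0.25" construction:

```python
import random

OPTIONS = ["pay", "report", "intimidate", "kill"]

def decide(rng):
    """One pass of the scenario: sample the act, then record the lost options."""
    choice = rng.choice(OPTIONS)  # degenerate uniform: p = 0.25 each, by construction
    memory = [o for o in OPTIONS if o != choice]  # the three lost options
    return choice, memory

choice, memory = decide(random.Random(0))
```

Both the act and the memory of the unchosen acts come out of the same sampling rule; nothing in the program can override the draw.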
That's exactly what I said. Thanks for repeating it in your own words!
On 6/16/20 8:40 AM, Marcus Daniels wrote:
> We could build a simulation of it, and the simulation won't be able to defy its programming.
uǝʃƃ ⊥ glen
But you also gamed this proposition:
< That memory of lost opportunities is what we call free will. >

Many people apparently believe they can defy their programming, and think it is reasonable to expect people to do the same. But punishing the sin and the sinner are the same, and it only matters if the "trace" can ever be exercised again.

Marcus

On 6/16/20, 10:52 AM, "Friam on behalf of glen∉ℂ" wrote:

    That's exactly what I said. Thanks for repeating it in your own words!

    On 6/16/20 8:40 AM, Marcus Daniels wrote:
    > We could build a simulation of it, and the simulation won't be able to defy its programming.
Right. What I set up was in preparation for an argument about exactly that. Can a system (any system we know about) be programmed to factor in its experience so that the next time around, the probabilities will be different, even if only slightly so? Personally, I could go either way.

FW) I can see a situation where, immediately after some branch was taken, the memory structure would dampen/lower (or raise) the chances of that branch being taken again.

NFW) Or, alternatively, maybe each situation is so concrete, so forcibly contextual, that there is no such thing as "coming around again".

In the former, "free will" exists in the form of successively modified ("deliberate") behavior [†]. In the latter, it doesn't. I'm sure there are other ways to make the argument either way. This argument boils down to pattern recognition, similarity between "traces", approximation, and truncation.
I'm sure it's not obvious how/if the (FW) case fits the typical understanding of free will [‡]. But I think I can make the argument that the scopes/degrees of the branch-points (including the speed of the events, size of the clusters of events, etc.) suggest whether it falls under what we'd normally call "free will". Scope that is too small/fast (biochemistry up to the limbic system) is below the threshold. Scope that is too large (being reared in a society that forces some behavior, like eating meat) is above the threshold. But somewhere in between might be an adaptive trend that kinda-sorta fits our usual understanding.

[†] I think this is distinct from, though related to, the concept of _learning_ or entrainment. I think there's a sweet spot in between ignorant and enslaved that we target with our concept of free will.

[‡] One of the phenomena this setup could help test is the idea that "you never know what you'll do until you're in that situation." I.e. the first time you experience something (like a fist fight, or a hit of whiskey, or whatever), there can be no free will. The 2nd time, maybe. The 100th time, for sure. But the common understanding is that the "decision" is made 100 times. This setup violates the vernacular in that the "decision" is smeared out through the nearly-repeated experiences. But at some point, you fall out of the "free will zone". After 50,000 glasses of whiskey, we might say you no longer have free will. You're a slave to your addiction.

Another phenomenon this setup might help think about is whether *some* machines have free will but others don't. E.g. if the components that remember and adjust the probabilities for the next time around are damaged, the machine can't "deliberate" like it normally would ... or the free-will zone (event/process scopes in the sweet spot) might be shorter or longer.
On 6/16/20 11:28 AM, Marcus Daniels wrote:
> But you also gamed this proposition:
>
> < That memory of lost opportunities is what we call free will. >
>
> Many people apparently believe they can defy their programming and think it is reasonable to expect people to do the same. But punishing the sin and the sinner are the same, and it only matters if the "trace" ever can be exercised again.
uǝʃƃ ⊥ glen
I've only read your first paragraph, but isn't that exactly what Samuel's checkers program did by revising regression coefficients as it gained experience? We're talking late 1960s.

---
Frank C. Wimberly
140 Calle Ojo Feliz, Santa Fe, NM 87505
505 670-9918

On Tue, Jun 16, 2020, 2:05 PM glen∉ℂ <[hidden email]> wrote:

Right. What I set up was in preparation for an argument about exactly that. Can a system (any system we know about) be programmed to factor in its experience so that, the next time around, the probabilities will be different, even if only slightly so? Personally, I could go either way.

FW) I can see a situation where, immediately after some branch was taken, the memory structure would dampen/lower (or raise) the chances of that branch being taken again.

NFW) Or, alternatively, maybe each situation is so concrete, so forcibly contextual, that there is no such thing as "coming around again".

In the former, "free will" exists in the form of successively modified ("deliberate") behavior [†]. In the latter, it doesn't. I'm sure there are other ways to make the argument either way. This argument boils down to pattern recognition, similarity between "traces", approximation, and truncation.
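The FW option above can be sketched in a few lines of code. This is a minimal sketch only: the multiplicative weight update, the `feedback` factor, and the branch names are my illustrative choices — the thread only says the memory structure would "dampen/lower (or raise) the chances" of a taken branch.

```python
import random

class Chooser:
    """Toy agent for the FW branch: taking a branch dampens (or
    reinforces) the chance of taking that same branch next time."""

    def __init__(self, branches, feedback=0.9):
        # feedback < 1 dampens a just-taken branch; > 1 would reinforce it.
        self.weights = {b: 1.0 for b in branches}
        self.feedback = feedback

    def choose(self):
        # Sample a branch in proportion to its current weight ...
        branches = list(self.weights)
        taken = random.choices(
            branches, weights=[self.weights[b] for b in branches])[0]
        # ... then leave a "memory trace" by adjusting that weight,
        # so the probabilities differ (slightly) the next time around.
        self.weights[taken] *= self.feedback
        return taken

agent = Chooser(["fight", "flee", "freeze"])
history = [agent.choose() for _ in range(100)]
```

The NFW option would correspond to deleting the weight update entirely: with no trace carried forward, no situation ever "comes around again".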
Actually late 1950s.

---
Frank C. Wimberly
140 Calle Ojo Feliz, Santa Fe, NM 87505
505 670-9918

On Tue, Jun 16, 2020, 2:17 PM Frank Wimberly <[hidden email]> wrote:
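Samuel's actual checkers learner was far more elaborate (self-play, lookahead, feature selection); the following is only a caricature of the one idea Frank points at — revising the coefficients of a linear evaluation function from experience. The features, target value, and delta-rule update are my illustrative substitutions, not Samuel's exact scheme.

```python
# Caricature of experience-driven coefficient revision for a linear
# position evaluator. All specifics here are illustrative assumptions.

def evaluate(weights, features):
    """Score a position as a weighted sum of board features."""
    return sum(w * f for w, f in zip(weights, features))

def update(weights, features, target, rate=0.01):
    """Nudge the coefficients so evaluate() moves toward the target."""
    error = target - evaluate(weights, features)
    return [w + rate * error * f for w, f in zip(weights, features)]

weights = [0.0, 0.0, 0.0]        # e.g. piece count, kings, mobility
position = [2.0, 1.0, 5.0]       # a made-up feature vector
for _ in range(200):             # "gaining experience" on one outcome
    weights = update(weights, position, target=1.0)
```

After repeated exposure, the same position that once evaluated to 0.0 now evaluates near the observed outcome — the same shape of claim as the FW construction: experience shifts the next evaluation.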
Well, the reason I'm throwing this idea out there in the first place is because parts of it map (roughly) to LOTS of algorithms I've run across over the years. Nick was asking and I'm simply trying to spitball something that might be constructive. You're free to analogize to your heart's content. 8^D If Nick tries to write a paper with something like this in it, HE would have to do a LOT of literature searching to trace each element.
On 6/16/20 1:17 PM, Frank Wimberly wrote:
> I've only read your first paragraph but isn't that exactly what Samuel's checker program did by revising regression coefficients as it gained experience. We're talking late 1960s.
uǝʃƃ ⊥ glen
Yes. I don't think Nick is ever going to write such a paper (as opposed, say, to participating along with a bunch of you in writing such a book). However, as I work through the correspondence of the last week (Gawd, what a splatter), I have yet to see any support for the idea that there is any fundamental reason why a computer could not be constructed to exhibit any free will that humans have.
It begins to seem to me that "free will" and "emergence" are the same sort of concept and likely to die by the same sword. Once you define "free will" as that which is "spontaneous" (i.e., not explained by anything), you have to prepare yourself for the moment when it is explained.

Nick

Nicholas Thompson
Emeritus Professor of Ethology and Psychology
Clark University
[hidden email]
https://wordpress.clarku.edu/nthompson/

-----Original Message-----
From: Friam <[hidden email]> On Behalf Of glen∉ℂ
Sent: Tuesday, June 16, 2020 2:24 PM
To: [hidden email]
Subject: Re: [FRIAM] alternative response
Exactly! If humans have free will, we can program a machine to have it too (someday, anyway). And since we don't know how to *construct* free will and the evidence against it is accumulating, it's reasonable to claim it doesn't exist and the burden is increasingly on those who believe in it to make their case.
But note that the construction I spitballed does NOT define free will as spontaneous. It's cumulative. In fact, that construction rejects the idea that free will is spontaneous in any way.

On 6/16/20 1:39 PM, [hidden email] wrote:
> Once you define "free will" as that which is "spontaneous" (i.e., not explained by anything), you have to prepare yourself for the moment when it is explained.
uǝʃƃ ⊥ glen
In the paper by Glymour that I mentioned, he discusses the point of view (not necessarily his own) that we are zombies who tell our bodies to go thataway a few milliseconds after they've already taken off in that direction. This is one of the steelman theories of mental causation he considers, if I'm using the term correctly.

---
Frank C. Wimberly
140 Calle Ojo Feliz, Santa Fe, NM 87505
505 670-9918

On Tue, Jun 16, 2020, 3:20 PM glen∉ℂ <[hidden email]> wrote:
> Exactly! If humans have free will, we can program a machine to have it too (someday, anyway).
An attempt to steelman via wingman:
The idea that Glen is proposing is to highlight a sweet spot in one's experience where unfamiliarity competes with habit. Glen advocates bracketing questions of a prime mover, or of what happens in pathological limits. Instead, he wishes to constrain the scope of free will to a question of free versus bound with respect to some arbitrary component/scale/neighborhood (the free will zone).

I will try not to fight this, though I still think of this interpretation of *free will* as a discussion of will, determined or not. For instance, I may be willful and determined. The value I see in Glen's perspective is that we can develop a grammar for discussing deliberate action, perhaps involving a Bayesian update rule applied to an otherwise evaporative memory or local foresight. He is advocating that we not concern ourselves with whether Charles Bukowski was *predestined* to be a drunk, but rather with determining where the *choice* to do otherwise may have been.
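The "Bayesian update rule to an otherwise evaporative memory" could be sketched, under heavy assumptions, as a decaying Beta belief about a habit. The decay factor, the uniform prior, and the drinking example are my choices; nothing in the thread specifies them.

```python
# Sketch of a Bayesian update over an "evaporative" memory: a Beta
# belief about whether the agent drinks, where old evidence decays a
# little at every step before the new observation is folded in.

def step(alpha, beta, drank, decay=0.95):
    """Evaporate old counts, then add the new observation."""
    alpha, beta = alpha * decay, beta * decay
    return (alpha + 1, beta) if drank else (alpha, beta + 1)

alpha, beta = 1.0, 1.0           # uniform prior: no habit yet
for drank in [True, True, False, True]:
    alpha, beta = step(alpha, beta, drank)

p_drink = alpha / (alpha + beta)  # current "habit strength"
```

With decay near 1, evidence accumulates and the belief hardens (the Bukowski limit, out of the free will zone); with decay near 0, each situation is effectively fresh and nothing ever "comes around again" — the two poles glen's FW/NFW argument distinguishes.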
As for Bayesian models of the mind, see Glymour's book "The Mind's Arrows": https://www.amazon.com/dp/0262072203/ref=cm_sw_r_cp_apa_i_SXt6EbVR2ASJB

---
Frank C. Wimberly
140 Calle Ojo Feliz, Santa Fe, NM 87505
505 670-9918

On Tue, Jun 16, 2020, 3:37 PM Jon Zingale <[hidden email]> wrote:
> An attempt to steelman via wingman: