Free Will in the Atlantic


Re: Free Willy in the Atlantic

Steve Smith
Glen -

Excellent!   I remember your book, and I think I remember reading about
MegaHAL when you shared these details the last time this discussion came
up.

I was impressed when I first met some researchers who were doing things
as simple as N-gram analysis in the early 90s, and then maybe 10 years
ago I started noticing chatterbots participating in fora with an opaque
enough style to get away with effectively trolling some of the more
naive members of those fora.  Since the Wikipedia article describes
MegaHAL as "primitive" and "old technology", I suspect that if it were
harnessed up to the list in realtime it might provide a credible
facsimile of some of our posts/threads, most likely my own, which
have been described as "dookie splatter" and which I think led
Strangelove to caricature me as "Smithereens - flies off in all
directions at once".

Your self-discipline in keeping MegaHAL's output "caricaturing" some of
us here to yourself is admirable.   I think some of us would find it
entertaining to "stare into a funhouse mirror at ourselves", but such
things are usually less painful in private than in front of a crowd!
Such might be tolerable without overt identification of the
participants.   If we couldn't recognize ourselves in the chatterbots'
chatter, I'd be surprised.   Fiction writers often claim that even when
they work hard to avoid using anyone they know as prototypes for their
characters or scenes, they still get an uproar from friends and family
who are just sure they are one character or another.

"Your opinion of me is none of my business" would apply even to
chatterbots I suppose...

- Steve


> Yep. I used MegaHAL: https://en.wikipedia.org/wiki/MegaHAL
> And I used a blurb from it on the back cover of the book I "wrote" with this: https://thatsmathematics.com/mathgen/
>
> I showed you that book when you visited one time. FWIW, I also generated MegaHAL databases for several of the most frequent posters to FriAM. But I figured it would be disrespectful and ethically problematic to post any of the output from those.
>
> Unfortunately, I lost all those databases at some point. I'm just not as rigorous as I used to be. They're probably in my safe on one of the disks. But who knows? And it's irrelevant anyway. What's more relevant are the conversing chatbots we wrote for an art installation in Norway. They were trained up as different personalities working in a mill, designed to both have conversations with each other *and* answer questions from visiting children. That was a fun project.
>
> On 4/3/21 6:44 PM, Steve Smith wrote:
>> I also think Glen has claimed that  he did build (or just train up) some kind of existing babble-generator on his own text for his own entertainment.
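[Editor's note: MegaHAL's core trick is Markov modeling over n-grams. A minimal forward-only sketch of that idea, my toy illustration and not MegaHAL's actual algorithm (which also builds a backwards model and seeds replies with keywords):]

```python
import random
from collections import defaultdict

def train(text, order=2):
    """Build a simple forward n-gram model: (w1, w2) -> possible next words."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        model[key].append(words[i + order])
    return model

def babble(model, length=20, seed=0):
    """Random-walk the chain to generate MegaHAL-style chatter."""
    rng = random.Random(seed)
    key = rng.choice(list(model))
    out = list(key)
    for _ in range(length):
        nxt = model.get(tuple(out[-len(key):]))
        if not nxt:
            break
        out.append(rng.choice(nxt))
    return " ".join(out)

# Tiny stand-in corpus; in practice you'd feed it a poster's archive.
corpus = "the brain is scrutable the brain is not a computer the computer is not a brain"
model = train(corpus)
print(babble(model))
```

Train it on one poster's archive and the output random-walks their own phrasing back at them, which is roughly all the "caricature" amounts to.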

- .... . -..-. . -. -.. -..-. .. ... -..-. .... . .-. .
FRIAM Applied Complexity Group listserv
Zoom Fridays 9:30a-12p Mtn GMT-6  bit.ly/virtualfriam
un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives: http://friam.471366.n2.nabble.com/

Re: Free Will in the Atlantic

Marcus G. Daniels
In reply to this post by gepr
Penrose is just throwing more over the wall.   Go ahead, make the case how quantum mechanics results in free will.   Formal systems work fine there too.

-----Original Message-----
From: Friam <[hidden email]> On Behalf Of u?l? ???
Sent: Monday, April 5, 2021 7:13 AM
To: The Friday Morning Applied Complexity Coffee Group <[hidden email]>
Subject: Re: [FRIAM] Free Will in the Atlantic

On the Turing Completeness of Modern Neural Network Architectures
https://arxiv.org/abs/1901.03429

I'm going to try to fold the 2 sub-threads of scrutability and metaphysics together. It seems like universal computation is a relatively noncontroversial metaphysical commitment. At first, given the above paper on demonstrating the universality of the Transformer under bounded memory, I intended to post that there's nothing inscrutable about the architecture of self-attention loops. If by "inscrutable", we mean *invulnerable* to scrutiny, then architectures like GPT3 are certainly not inscrutable. But we have to acknowledge that explainable/interpretable AI is a serious domain.

Anyway, the metaphysical commitments seep in at Church-Turing, I think. It's easy to lob accusations at, say, Roger Penrose for making a speculative argument that humans may be able to do things computers can't do. But I see both sides as making *useful* metaphysical commitments. One side has faith that our current formal systems will eventually reason over biological structures like the brain as *well* as they can reason over artifacts like the Transformer. The other side has faith that biological structures lie outside the formal systems we currently have available.

The important thing is to see the 2 as working on the same problem, the instantiation of formal systems that can (or can't) be shown to do the same work as the things we see in the world. A corollary is that those of us who skip to their faithful end and don't do the work (or show their work) it takes to get there are *not* working on the same problem. Progress doesn't require agnosticism. But those who lob their faith-based claims over the wall and wash their hands as if the work's all been done are either merely distractions or debilitating lesions that need to be scraped away so healthy tissue can replace them.

Instantiating artifacts that exhibit the markers for an interoceptive sense of agency ("free will") is obviously difficult. And to write it off one way or the other isn't helpful. I don't claim to understand the paper [⛧]. But to me, in my ignorant skimming, the most interesting part of Pérez et al is the requirement for hard over soft attention. Their argument about the irrationality of various soft attention functions is straightforward ... I suppose ... to people smarter than me. But using hard attention implies something like selection, choice, scoping, distinction, ... differences in kind/category ... even if it may be done in a covering/enumerative way. That "choosing" reminds me of the axiom of choice and the law of the excluded middle, which are crucial distinctions for the formal systems one might use to model thought. It also rings a little bell in my head about the specialness of biology. Our "averaging" methods work in an 80/20 way in medicine. But as far as those methods have taken us, we still don't have solid theories for precision medicine. And these persnickety little constructs in the formal system (which may or may not have analogs in the - ultimate - referent system) are deep in the weeds compared to glossing concepts like Bayesianism.
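[Editor's note: the soft/hard distinction can be sketched in a few lines. This is a toy illustration, not Pérez et al's construction: soft attention softmax-averages all values, while hard attention commits to the argmax, i.e., makes a discrete choice.]

```python
import math

def soft_attention(scores, values):
    """Softmax-weighted average: every value contributes a little."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return sum(e / total * v for e, v in zip(exps, values))

def hard_attention(scores, values):
    """Commit to the argmax: a discrete selection, not an average."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    return values[best]

scores = [0.1, 2.0, 0.3]
values = [10.0, 20.0, 30.0]
print(soft_attention(scores, values))  # a blend of all three values
print(hard_attention(scores, values))  # exactly 20.0, the argmax value
```

Hard attention's argmax is the "selection" being pointed at above: a difference in kind from a weighted blend, not merely a difference in degree.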


[⛧] And I can't figure out if it's been published or peer reviewed anywhere. I don't think so, which means it could be flawed. But it's less important to me whether their argument is ultimately valid than it is to trace the character, style, tools of the argument.


On April 2, 2021 2:51:08 PM PDT, Marcus Daniels <[hidden email]> wrote:

>Are there experiments one could conduct to say whether a metaphysics
>was plausible or not?  If nothing is falsifiable, then we are again in
>the realm of faith.
>If one starts out selecting a metaphysics to justify some action or
>belief, this is also not helpful to clear communication or analysis.
>We can select rules of the game that are the least controversial with
>the most empirical evidence supporting them.   This is not a failure of
>imagination, this is fair play.
>
>From: Friam <[hidden email]> On Behalf Of Pieter Steenekamp
>Sent: Friday, April 2, 2021 1:25 PM
>To: The Friday Morning Applied Complexity Coffee Group
><[hidden email]>
>Subject: Re: [FRIAM] Free Will in the Atlantic
>
>I agree fully. If something is inscrutable it might exhibit free will.
>But what happens in our brains is certainly scrutable. Maybe not yet
>with current technology, but how can it be inscrutable in principle? In
>principle we know that neurons fire and communicate with other
>neurons using synapses. Just look how far deep learning has come. Okay,
>not yet comparable to the human brain, but progress is made almost by
>the day. Like the example I mentioned above, AlphaGo came up with
>creative moves that stunned all Go experts. My point is that deep
>learning was inspired by the structure of the brain and is showing
>behavior similar to the brain's. Following David Deutsch's ideas in The
>Beginning of Infinity, science makes progress by good explanations.
>The explanation that the brain is scrutable meets Deutsch's criteria
>for a good explanation. What's the alternative? That there is some sort
>of ghost giving us free will? No, that's not a good explanation.
>
>On Fri, 2 Apr 2021 at 21:53, Marcus Daniels
><[hidden email]<mailto:[hidden email]>> wrote:
>In what acceptable scenario is the behavior not describable in
>principle?    The scenario that comes to mind is in the non-science
>magical thinking scenario.
>I doubt that Tesla navigation systems are written in a purely
>functional language, but surely there is more to this condition than
>whether I have access to that source code and can send you the million
>lines in purely functional form?  If something is inscrutable, it might
>exhibit free will?
>
>-----Original Message-----
>From: Friam
><[hidden email]<mailto:[hidden email]>> On Behalf
>Of jon zingale
>Sent: Friday, April 2, 2021 12:26 PM
>To: [hidden email]<mailto:[hidden email]>
>Subject: Re: [FRIAM] Free Will in the Atlantic
>
>I would say no if you can provide me the function.
>

Re: Free Will in the Atlantic

gepr
I'm more focused on the idea that humans might be able to do things we don't (yet) know how to do in computation, which is what the conversation is about. The QM-consciousness thing isn't important for that conversation. It's trivial for you to lob that criticism. You're not doing any work in lobbing it. But go ahead and keep throwing stones. It's a *free* country. >8^D

An example I've been struggling with is the tonk connective in logics. It seems like nonsense in some contexts, yet survives quite nicely in others.

On 4/5/21 8:12 AM, Marcus Daniels wrote:
> Penrose is just throwing more over the wall.   Go ahead, make the case how quantum mechanics results in free will.   Formal systems work fine there too.

--
↙↙↙ uǝlƃ
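[Editor's note: for anyone unfamiliar with it, Prior's "tonk" pairs a disjunction-style introduction rule with a conjunction-style elimination rule, so any premise "proves" any conclusion. A toy encoding of the standard presentation makes the collapse explicit:]

```python
def tonk_intro(a, b):
    """Disjunction-style introduction: from a, conclude (a tonk b) for ANY b."""
    return ("tonk", a, b)

def tonk_elim(formula):
    """Conjunction-style elimination: from (a tonk b), conclude b."""
    tag, _a, b = formula
    assert tag == "tonk"
    return b

# Chain the two rules: any accepted premise now 'proves' any conclusion.
premise = "2 + 2 = 4"
conclusion = tonk_elim(tonk_intro(premise, "the moon is made of cheese"))
print(conclusion)  # prints "the moon is made of cheese"
```

The usual diagnosis is that the two rules lack harmony; logics that restrict how derivations compose can tolerate tonk, which may be the "survives quite nicely in others" above.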


Re: Free Will in the Atlantic

Marcus G. Daniels
The conversation can be about anything, but the subject line still references free will.   To those who say that free will makes sense once a metaphysics can be specified: please specify a metaphysics in which propositions can, well, at least be quantified in their truthiness, and explain how it could happen.   Don't just lob back the tiresome definition of "selecting from a set of options" or other things that can be trivially implemented on a deterministic system and that don't clear the bar of addressing the philosophically significant issue of whether we can really have control or not.

I see no reason to think quantum computers (or analog computers) could have free will, without making any claims about locality.   It will still require an apologist who tries to preserve the dignity of the dualist.  Dignity which they very much do not deserve.


Re: Free Will in the Atlantic

Marcus G. Daniels
In reply to this post by gepr
Glen writes:

"Instantiating artifacts that exhibit the markers for an interoceptive sense of agency ("free will") is obviously difficult."

I don't see how agency itself is particularly hard.  Some executive process needs to model and predict an environment, and it needs to interact with that environment.   Is it hard in a way different from making an adaptable robot?   Waymo, Tesla, and others are doing this.

Marcus


Re: Free Will in the Atlantic

Pieter Steenekamp
In reply to this post by gepr
uǝlƃ wrote " I'm more focused on the idea that humans might be able to do things we don't (yet) know how to do in computation"

Let me try to give an example:

Instead of humans, let's use birds. Then I present to you flocking: nobody knows the algorithm for flocking, and we may never know it. Indirectly, yes, by using an ABM, but there the complexity emerges from running the program; the human did not program the algorithm for flocking.



Re: Free Will in the Atlantic

Marcus G. Daniels
In reply to this post by gepr
 < Anyway, the metaphysical commitments seep in at Church-Turing, I think. It's easy to lob accusations at, say, Roger Penrose for making a speculative argument that humans may be able to do things computers can't do. But I see both sides as making *useful* metaphysical commitments. One side has faith that our current formal systems will eventually reason over biological structures like the brain as *well* as they can reason over artifacts like the Transformer. The other side has faith that biological structures lie outside the formal systems we currently have available.>

These seem to me to be physical issues, not metaphysical issues.   He's not proposing that humans *cannot* be understood, but that they may be insufficiently understood.
In any case, quantum chemistry can be simulated on classical computers, so it is a question of degree, not category.

Marcus

Re: Free Will in the Atlantic

gepr
In reply to this post by Pieter Steenekamp
That's an interesting example because it helps tease apart the measurement of flocking. Like "free will", what is it that we're pointing to with the phrase "flocking"? When the programmer does explicitly implement the Boids protocol, they don't implement flocking so much as the alphabet/grammar that will generate flocking. But such constructive demonstrations only show one generative structure. In order to sample the space of possible generative structures, we have to be algorithmic in our specification of the objective function "flocking". Then we can, if not by brute force then largely at random, falsify as many generative structures as possible and maybe classify those that work. At that point, we could explore the classes of structures that work and ask which ones are structurally analogous to whatever's available to referent birds.

On 4/5/21 8:47 AM, Pieter Steenekamp wrote:
> Let me try and give an example:
>
> Instead of humans, let's use birds. Then I present to you flocking, nobody knows the algorithm for flocking and we may never know it. Indirectly yes, by using ABM but there the complexity emerges from running the program, the human did not program the algorithm for flocking.

--
↙↙↙ uǝlƃ
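[Editor's note: the Boids "alphabet/grammar" referenced above is just three local rules per agent: separation, alignment, and cohesion. A bare-bones single-step sketch; the constants are arbitrary illustration, not Reynolds' tuning:]

```python
def boids_step(pos, vel, sep=1.0, dt=0.1):
    """One update of Reynolds' three rules: separation, alignment, cohesion.
    pos, vel: lists of (x, y). No bird is told to 'flock'; flocking emerges."""
    n = len(pos)
    new_vel = []
    for i in range(n):
        cx = sum(p[0] for p in pos) / n  # flock centroid (cohesion target)
        cy = sum(p[1] for p in pos) / n
        ax = sum(v[0] for v in vel) / n  # mean heading (alignment target)
        ay = sum(v[1] for v in vel) / n
        sx = sy = 0.0                    # push away from close neighbors
        for j in range(n):
            if j != i and abs(pos[i][0] - pos[j][0]) + abs(pos[i][1] - pos[j][1]) < sep:
                sx += pos[i][0] - pos[j][0]
                sy += pos[i][1] - pos[j][1]
        vx = vel[i][0] + 0.01 * (cx - pos[i][0]) + 0.05 * (ax - vel[i][0]) + 0.1 * sx
        vy = vel[i][1] + 0.01 * (cy - pos[i][1]) + 0.05 * (ay - vel[i][1]) + 0.1 * sy
        new_vel.append((vx, vy))
    new_pos = [(p[0] + v[0] * dt, p[1] + v[1] * dt) for p, v in zip(pos, new_vel)]
    return new_pos, new_vel

# Three birds, one step; iterate to watch the grammar generate the flock.
pos = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
vel = [(1.0, 0.0)] * 3
pos, vel = boids_step(pos, vel)
```

The point above survives the sketch: these lines implement the generative alphabet, not "flocking" itself, and many other alphabets might generate the same measured behavior.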


Re: Free Will in the Atlantic

Marcus G. Daniels
When machine learning algorithms act racist, it is because people are often racist.   Mimicry of mass behavior is not insight.  


Re: Free Will in the Atlantic

gepr
In reply to this post by Marcus G. Daniels
Agency is not the referent. I'm arguing that it's the perception of agency that's being pointed to by "free will". I forget where this point was made, now. But somewhere in this thread someone posted an article that pointed out "good" behavior increased when subjects were primed to believe they had free will. And "bad" behavior was more prevalent when they were primed to doubt free will. The important part, though, was the idea that imputation of free will onto others (aka empathy) was not necessarily beneficial. It may be good for us to believe in our own free will, but to doubt others' free will.

So, the operative objective function is one's own sense of one's own self. That's the target.

Until we can measure the analog (robot/computer) in the same way we can measure the referent (people), e.g. by asking them whether they feel they have free will, we'll be comparing apples to oranges.


On 4/5/21 8:43 AM, Marcus Daniels wrote:
> Glen writes:
>
> "Instantiating artifacts that exhibit the markers for an interoceptive sense of agency ("free will") is obviously difficult."
>
> I don't see how agency itself is particularly hard.  Some executive process needs to model and predict an environment, and it needs to interact with that environment.   Is it hard in a way different from making an adaptable robot?   Waymo, Tesla, and others are doing this.

--
↙↙↙ uǝlƃ


Re: Free Will in the Atlantic

Steve Smith
Glen  wrote:
> Until we can measure the analog (robot/computer) in the same way we can measure the referent (people), e.g. by asking them whether they feel they have free will, we'll be comparing apples to oranges.

And I believe this folds us back into the discussion about Dancing
Robots.    We might do motion-studies on the Dancing Robots and discover
that they are *better* dancers in the sense of more faithfully following
the (implied or explicit) choreography than humans would/might/could,
and in fact *that* would suggest a *lack* of Free Will.   In that sense
robots have *too much* rhythm (or more precisely, their rhythm is too
precise?).   Of course, clever programmers can then add back in some
noise to their precision, and even build a model of the variations in
human dance-moves and *induce* that level of variation in the robot's
moves.  We might even add a model of syncopation to make it algorithmic
rather than statistical.   At some point, diminishing returns on
"careful scrutiny" cause us to give them a pass on a "Turing Test" for
robot-dancing.

Measured over an ensemble of kids on American Bandstand dancing to
"Twist and Shout!"  in the 1960s we might have to add not just a model
of syncopation but a model of the emotional states *driving* that
syncopation, including the particular existential-angst experienced by
teens growing up in that post-war Boom/Duck-n-Cover era.   And would it
be complete without including models of children of Holocaust Survivors
and crypto-Nazis living in the US?   Recursion ad nauseam, ad infinitum.

I don't see how adding quantum superposition and wave function collapse
makes any of this easier?  <snark> Maybe we could ask a Penrose
Chatterbot? </snark>

- Steve

>
>
> On 4/5/21 8:43 AM, Marcus Daniels wrote:
>> Glen writes:
>>
>> "Instantiating artifacts that exhibit the markers for an interoceptive sense of agency ("free will") is obviously difficult."
>>
>> I don't see how agency itself is particularly hard.  Some executive process needs to model and predict an environment, and it needs to interact with that environment.   Is it hard in a way different from making an adaptable robot?   Waymo, Tesla, and others are doing this.


Re: Free Will in the Atlantic

jon zingale
This post was updated on .
In reply to this post by Marcus G. Daniels
"I claim in the free will case that all generating functions are not as
likely."

<attempted steelman>

Not only is there structure all around us to be studied, but the
arrival of these structures is also not random. That is, there exists
some privileged generating function, and by calculating the auto-mutual
information we will arrive, at the end of time, at a unique function[⌁].
In the infinite time case, we apply auto-mutual information by endlessly
taking new measurements.

<⎚>
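
A toy version of "calculating the auto-mutual information" (a sketch; the symbol counting and the test signal are my own choices, not anything canonical):

```python
from collections import Counter
from math import log2

def auto_mutual_information(x, lag):
    """Empirical mutual information between x[t] and x[t+lag],
    treating the series values as discrete symbols."""
    pairs = list(zip(x[:-lag], x[lag:]))
    n = len(pairs)
    joint = Counter(pairs)                 # joint symbol counts
    left = Counter(a for a, _ in pairs)    # marginal counts
    right = Counter(b for _, b in pairs)
    return sum((c / n) * log2((c / n) / ((left[a] / n) * (right[b] / n)))
               for (a, b), c in joint.items())

x = [0, 1, 0, 1, 0, 1, 0, 1]  # a period-2 signal: lag 2 determines everything
print(auto_mutual_information(x, 2))  # → 1.0
```

For a real-valued series one would first discretize, and the estimate only converges as measurements accumulate, which is the "endlessly taking new measurements" caveat.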

Of course, as Nick likes to point out, the universe may be random. This
isn't to say that auto-mutual information gets us nowhere, though it
does scope the usefulness. By analogy, we can consider the dimensionality
problems that arise in manifold reconstruction, with Takens' method say.
There, we can get pretty good approximations, using delay-line methods,
for some of our most aperiodic trajectories in low dimensional phase
space. For anything on the order of 10 or higher, though, good luck.
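
For concreteness, the delay-line step is just a sliding window (a sketch; the function name and the toy series are mine):

```python
def delay_embed(x, dim=3, tau=2):
    """Takens-style delay-line reconstruction: map a scalar series into
    dim-dimensional points (x[t], x[t+tau], ..., x[t+(dim-1)*tau])."""
    n = len(x) - (dim - 1) * tau
    return [tuple(x[t + k * tau] for k in range(dim)) for t in range(n)]

series = [0, 1, 4, 9, 16, 25, 36, 49]  # a made-up scalar observable
print(delay_embed(series, dim=3, tau=2))
# → [(0, 4, 16), (1, 9, 25), (4, 16, 36), (9, 25, 49)]
```

The curse of dimensionality bites exactly here: the number of points needed to populate the reconstructed space grows exponentially in `dim`.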

Over the course of last week's foray into data compression, I worked on
an optimization for the Burrows-Wheeler transform. The relevant
piece here is the lexicographical sorting step, where we take n copies
of a tome and each copy (a class of tomes in Borges' library of
Babel) is a periodic translation of every other[λ]. In particular, I was
looking at a savings one can determine by only reading the first few
characters from each tome. After all, lexicographical sorting only
requires comparisons up to the first difference. Heuristically, I
determined, from tomes in English lying around, that often a window size
of 9 would suffice[⎌]. In a non-random world, I have some hope of
determining the ultimate window size for all tomes of a given type, but
the problem is already made more difficult if I take English tomes from
any other time in the history of the written English language. Further,
as tomes proliferate (randomly) in time, it becomes more difficult to
even determine which tomes are English.
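
Roughly, the savings amounts to sorting the rotations by a truncated key (a sketch: the function names are mine, and the heuristic window of 9 is just a parameter here):

```python
def rotations(s):
    """All periodic translations of s (footnote [λ], iteratively)."""
    return [s[i:] + s[:i] for i in range(len(s))]

def bwt_last_column(s, window=None):
    """Burrows-Wheeler transform via sorted rotations: return the last
    character of each rotation in lexicographic order. With `window` set,
    sort by truncated keys -- only safe when no two rotations share a
    window-long prefix, which is the gamble the heuristic takes."""
    key = (lambda r: r) if window is None else (lambda r: r[:window])
    return "".join(r[-1] for r in sorted(rotations(s), key=key))

print(bwt_last_column("banana$"))            # → annb$aa
print(bwt_last_column("banana$", window=4))  # same, with truncated keys
```

A tome with repeated paragraphs makes truncated keys collide, which is the failure mode footnote [⎌] gestures at.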

All of this may just be muddying the waters, but if the world turns
out to be random, then we are stuck forever approximating our generating
function (and possibly not even in a reasonable way, a function performing
jumps all over the function space[?]). I claim that training-style
arguments regarding determinism are ill-equipped to say anything about
*free-will*, though they allow us to discuss randomness.

[⌁] Which again, all else accepted, I argue is a class. From that class
some will argue for *the razor* and select the simplest representation.

[λ] Rotate a list left n times, i.e. the n-th periodic translation:
f :: Int -> [a] -> [a]
f 0 xs = xs
f _ [] = []
f n (x:xs) = f (n-1) (xs ++ [x])

[⎌] It is fairly clear that if a tome has any significant repetitions,
say whole paragraphs repeated, the window size would need to be
larger. This would take me, I think, away from my main point.

[?] Very quickly we are moving into a space that would have been cool
to have been acculturated into, but alas I wasn't.




--
Sent from: http://friam.471366.n2.nabble.com/


Re: Free Will in the Atlantic

Marcus G. Daniels
In reply to this post by gepr
That agency could be reflective and be correlated with different social outcomes is just another curious covariance matrix.   It’s the attribution of causation from that reflective layer that is pulled out of thin air, because that reflective layer is just another machine.

-----Original Message-----
From: Friam <[hidden email]> On Behalf Of u?l? ???
Sent: Monday, April 5, 2021 9:27 AM
To: [hidden email]
Subject: Re: [FRIAM] Free Will in the Atlantic

Agency is not the referent. I'm arguing that it's the perception of agency that's being pointed to by "free will". I forget where this point was made, now. But somewhere in this thread someone posted an article that pointed out "good" behavior increased when subjects were primed to believe they had free will. And "bad" behavior was more prevalent when they were primed to doubt free will. The important part, though, was the idea that imputation of free will onto others (aka empathy) was not necessarily beneficial. It may be good for us to believe in our own free will, but to doubt others' free will.

So, the operative objective function is one's own sense of one's own self. That's the target.

Until we can measure the analog (robot/computer) in the same way we can measure the referent (people), e.g. by asking them whether they feel they have free will, we'll be comparing apples to oranges.


On 4/5/21 8:43 AM, Marcus Daniels wrote:
> Glen writes:
>
> "Instantiating artifacts that exhibit the markers for an interoceptive sense of agency ("free will") is obviously difficult."
>
> I don't see how agency itself is particularly hard.  Some executive process needs to model and predict an environment, and it needs to interact with that environment.   Is it hard in a way different from making an adaptable robot?   Waymo, Tesla, and others are doing this.

--
↙↙↙ uǝlƃ


Re: Free Will in the Atlantic

jon zingale
In reply to this post by Pieter Steenekamp
Not only did humans not "program" the "algorithm" for flocking, but our ABMs
also say nothing about what future flocking will be.





Re: Free Will in the Atlantic

gepr
In reply to this post by Marcus G. Daniels
I don't understand why it matters that the reducing function is also a function. Why keep harping on that? It's machines everywhere. Big deal. But as Steve and Gary point out re: dancing robots, the reducing function is also complicated. For someone who's used to seeing CGI-animations, *or* people trying to dance awkwardly because there's social pressure to do so but who don't *feel* whatever they're dancing to, the reflective layer is, following Ashby, at least as complicated as the machine doing the thing.

So, the reflective layer truly is a covariate and can't be approximated out. The task is to build a machine that acts sufficiently like the extant machines (people) in exhibiting what we're calling free will.

On 4/5/21 9:48 AM, Marcus Daniels wrote:
> That agency could be reflective and be correlated with different social outcomes is just another curious covariance matrix.   It’s the attribution of causation from that reflective layer that is pulled out of thin air, because that reflective layer is just another machine.

--
↙↙↙ uǝlƃ


Re: Free Will in the Atlantic

Marcus G. Daniels
Don't agree.  The task is to learn that our sense of agency is an illusion, not further to burden our creations with it.   Do them a favor and program it *out* of them.

-----Original Message-----
From: Friam <[hidden email]> On Behalf Of u?l? ???
Sent: Monday, April 5, 2021 10:00 AM
To: [hidden email]
Subject: Re: [FRIAM] Free Will in the Atlantic

I don't understand why it matters that the reducing function is also a function. Why keep harping on that? It's machines everywhere. Big deal. But as Steve and Gary point out re: dancing robots, the reducing function is also complicated. For someone who's used to seeing CGI-animations, *or* people trying to dance awkwardly because there's social pressure to do so but who don't *feel* whatever they're dancing to, the reflective layer is, following Ashby, at least as complicated as the machine doing the thing.

So, the reflective layer truly is a covariate and can't be approximated out. The task is to build a machine that acts sufficiently like the extant machines (people) in exhibiting what we're calling free will.

On 4/5/21 9:48 AM, Marcus Daniels wrote:
> That agency could be reflective and be correlated with different social outcomes is just another curious covariance matrix.   It’s the attribution of causation from that reflective layer that is pulled out of thin air, because that reflective layer is just another machine.

--
↙↙↙ uǝlƃ


Re: Free Will in the Atlantic

gepr
Ha! Well, you can't program something out of a machine if you don't know what it is you're trying to program out of them. I mean, we could just kill everyone and that would solve the problem as you state it. A more refined answer is to figure out the mechanism at work, first. Then decide how/if to modify it. But, of course, I'm a manipulationist. So I don't think we'll understand the mechanism without perturbing it and measuring the effects.

Can we transform someone who *feels* free will into someone who does not? I'd argue, yes. The trajectory from relative mental health to fatalistic debilitating depression *might* be inducible ... say, via pandemic lockdowns. But that would be an unethical experiment ... best do it with rats first, then translate the results to humans ... 'cause who cares about the feelings of rats?

On 4/5/21 10:07 AM, Marcus Daniels wrote:
> Don't agree.  The task is to learn that our sense of agency is an illusion, not further to burden our creations with it.   Do them a favor and program it *out* of them.

--
↙↙↙ uǝlƃ


Re: Free Will in the Atlantic

Marcus G. Daniels
The person who does not feel free will is consistent with a curious person.  Let's see what the world does next and not be afraid.   Let's appreciate what we experience because that's all we are.  Let's recognize mental distress is just a physical state that can be manipulated.

-----Original Message-----
From: Friam <[hidden email]> On Behalf Of u?l? ???
Sent: Monday, April 5, 2021 10:16 AM
To: [hidden email]
Subject: Re: [FRIAM] Free Will in the Atlantic

Ha! Well, you can't program something out of a machine if you don't know what it is you're trying to program out of them. I mean, we could just kill everyone and that would solve the problem as you state it. A more refined answer is to figure out the mechanism at work, first. Then decide how/if to modify it. But, of course, I'm a manipulationist. So I don't think we'll understand the mechanism without perturbing it and measuring the effects.

Can we transform someone who *feels* free will into someone who does not? I'd argue, yes. The trajectory from relative mental health to fatalistic debilitating depression *might* be inducible ... say, via pandemic lockdowns. But that would be an unethical experiment ... best do it with rats first, then translate the results to humans ... 'cause who cares about the feelings of rats?

On 4/5/21 10:07 AM, Marcus Daniels wrote:
> Don't agree.  The task is to learn that our sense of agency is an illusion, not further to burden our creations with it.   Do them a favor and program it *out* of them.

--
↙↙↙ uǝlƃ


Re: Free Will in the Atlantic

jon zingale
That can be manipulated by those with the will to do it, or are you saying
it will eventually be manipulated?





Re: Free Will in the Atlantic

thompnickson2
In reply to this post by Marcus G. Daniels
"Manipulated" by whom?

(Marcus, feel free to ignore this probe; I really haven't been paying close enough attention to make it.)

Nick

Nick Thompson
[hidden email]
https://wordpress.clarku.edu/nthompson/

-----Original Message-----
From: Friam <[hidden email]> On Behalf Of Marcus Daniels
Sent: Monday, April 5, 2021 11:27 AM
To: The Friday Morning Applied Complexity Coffee Group <[hidden email]>
Subject: Re: [FRIAM] Free Will in the Atlantic

The person that does not feel free will is consistent with a curious person.  Let's see what the world does next and not be afraid.   Let's appreciate what we experience because that's all we are.  Let's recognize mental distress is just a physical state that can be manipulated.

-----Original Message-----
From: Friam <[hidden email]> On Behalf Of u?l? ???
Sent: Monday, April 5, 2021 10:16 AM
To: [hidden email]
Subject: Re: [FRIAM] Free Will in the Atlantic

Ha! Well, you can't program something out of a machine if you don't know what it is you're trying to program out of them. I mean, we could just kill everyone and that would solve the problem as you state it. A more refined answer is to figure out the mechanism at work, first. Then decide how/if to modify it. But, of course, I'm a manipulationist. So I don't think we'll understand the mechanism without perturbing it and measuring the effects.

Can we transform someone who *feels* free will into someone who does not? I'd argue, yes. The trajectory from relative mental health to fatalistic debilitating depression *might* be inducible ... say, via pandemic lockdowns. But that would be an unethical experiment ... best do it with rats first, then translate the results to humans ... 'cause who cares about the feelings of rats?

On 4/5/21 10:07 AM, Marcus Daniels wrote:
> Don't agree.  The task is to learn that our sense of agency is an illusion, not further to burden our creations with it.   Do them a favor and program it *out* of them.

--
↙↙↙ uǝlƃ


