GPT-3 and the Chinese room

Re: gelotophilia

Steve Smith

https://www.wired.com/2011/07/international-humor-conference/

https://www.zora.uzh.ch/id/eprint/14037/1/Ruch_Proyer_PhoPhiKat_V.pdf

And this link too: https://www.wired.com/2011/04/ff-humorcode/
- .... . -..-. . -. -.. -..-. .. ... -..-. .... . .-. .
FRIAM Applied Complexity Group listserv
Zoom Fridays 9:30a-12p Mtn GMT-6  bit.ly/virtualfriam
un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/ 

Re: GPT-3 and the chinese room

doug carmichael
In reply to this post by thompnickson2
Words emerge as sounds adapted to complex contexts of emergence. They are not defined, except approximately, if ever, later.

(I tried a Heider and Simmel-style film of dots that elicited human feelings of drama while I was at Berkeley: a line with a break in it, and a bunch of agitated dots, made with a three-hole punch and black paper, on one side of the doorway/hole, which then try to pass through and block each other. It was visceral for the viewer but probably not for the dots.)

On Jul 28, 2020, at 9:38 AM, <[hidden email]> <[hidden email]> wrote:

Hi Doug, 
 
I changed the subject line to head off accusations of dragging this lofty discussion into my nasty, fetid den.
 
A dog is highly interconnected: hormones, nerves, senses, and environment. Neurons are not binary; every synapse is an infinite-state variable.
 
These points might serve as an explanation for why dogs can and computers cannot exhibit joy – but only once we had agreed, up front, what it would be for a computer to exhibit joy. For my part, I guess, I would say that to exhibit joy, a computer would have to be “embodied” – i.e., be a robot acting in an environment, probably a social environment – and that robot would have to behave joyously. Or perhaps it could instruct an icon, in a screen environment, to behave joyously. But I assume any one of a dozen of the people on this list could design such a robot, or icon, once you and I had done the hard work of defining “joyous.”
 
Programmers do this with games, etc., all the time. 
 
Heider and Simmel did it with a time-lapse camera and a few felt icons on a glass draft deflector.
 
Lee Rudolph, if he is still amongst us, can send you a program in NetLogo where an icon exhibits joy.
 
Following early Tolman here.  
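A minimal sketch of what such an icon might look like, in Python rather than NetLogo, with “joyous” given a purely behavioral, and entirely illustrative, definition (wider, bouncier motion after a rewarding event, decaying back to baseline); the criteria here are stand-ins for whatever definition the discussants agree on:

```python
import random

class Icon:
    """A screen agent whose "joy" is defined purely behaviorally:
    after a rewarding event it wanders in wider arcs, and the
    effect decays back toward baseline. Amplitude and decay rate
    are illustrative, not a claim about what joy really is."""

    def __init__(self):
        self.x, self.y = 0.0, 0.0
        self.joy = 0.0          # behavioral arousal, 0 = baseline

    def reward(self):
        """Something good happened (e.g., the agent reached a goal)."""
        self.joy = 1.0

    def step(self):
        # Baseline wander, exaggerated in proportion to current joy.
        amplitude = 1.0 + 4.0 * self.joy
        self.x += random.uniform(-amplitude, amplitude)
        self.y += random.uniform(-amplitude, amplitude)
        self.joy *= 0.9         # decay back toward baseline

icon = Icon()
icon.reward()
print(icon.joy)            # 1.0 immediately after the rewarding event
icon.step()
print(round(icon.joy, 2))  # 0.9 after one step of decay
```

The point, per the argument above, is that once “joyous” is operationalized as criteria like these, writing the program is the easy part.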
 
N
 
Nicholas Thompson
Emeritus Professor of Ethology and Psychology
Clark University
 
 
From: Friam <[hidden email]> On Behalf Of doug carmichael
Sent: Tuesday, July 28, 2020 9:20 AM
To: The Friday Morning Applied Complexity Coffee Group <[hidden email]>
Subject: Re: [FRIAM] GPT-3 and the chinese room
 

A dog is highly interconnected: hormones, nerves, senses, and environment. Neurons are not binary; every synapse is an infinite-state variable.

doug


On Jul 27, 2020, at 10:45 PM, [hidden email] wrote:


Doug, 
 
Dogs do joy; why not computers?
 
n
 
Nicholas Thompson
Emeritus Professor of Ethology and Psychology
Clark University
 
 
From: Friam <[hidden email]> On Behalf Of doug carmichael
Sent: Monday, July 27, 2020 9:54 PM
To: The Friday Morning Applied Complexity Coffee Group <[hidden email]>
Subject: Re: [FRIAM] GPT-3 and the chinese room
 
I imagine playing chess, or Go, with a computer. As I play, I have a very enlivening experience of playing. The computer seems to have no such thing. For me, in my engagement, “Every neuron is listening to the mutter of the crowd” (Jerry Lettvin, MIT). If the computer goes on to win, it has nothing like the experience of winning; it just stops. I can’t imagine a computer saying, except by playing a pre-recorded sound file, “that is disgusting.”
 
 



On Jul 27, 2020, at 8:12 PM, Frank Wimberly <[hidden email]> wrote:
 

Smart computer (program).  Nick, is the behaviorist position that if it behaves as though it thinks, then it thinks?

---
Frank C. Wimberly
140 Calle Ojo Feliz, 
Santa Fe, NM 87505

505 670-9918
Santa Fe, NM
 
On Mon, Jul 27, 2020, 8:28 PM Alexander Rasmus <[hidden email]> wrote:
There's also a lot of stuff that isn't poetry. He generates Dwarf Fortress patch notes in the Misc section. 
 
There's even a section where GPT-3 argues that it doesn't really understand anything (bold at the start is a shortened version of the prompt):
"AI will never be able to truly think for themselves, to have consciousness, to feel any of the richness and complexity of the world that we mere humans can feel. Mere, unenlightened humans might be impressed by the abilities of simple deep learning programs, but when looked at in a more holistic manner, it all adds up to… well, nothing. They still don’t exhibit any trace of consciousness. All of the available data support the notion that humans feel and experience the world differently than computers do. While a computer can beat a human master at chess or Go or some other game of structured rules, it will never be able to truly think outside of those rules, it will never be able to come up with its own new strategies on the fly, it will never be able to feel, to react, the way a human can. Artificial intelligence programs lack consciousness and self-awareness. They will never be able to have a sense of humor. They will never be able to appreciate art, or beauty, or love. They will never feel lonely. They will never have empathy for other people, for animals, for the environment. They will never enjoy music or fall in love, or cry at the drop of a hat. Merely by existing, mere, unenlightened humans are intellectually superior to computers, no matter how good our computers get at winning games like Go or Jeopardy. We don’t live by the rules of those games. Our minds are much, much bigger than that.
 
Compared to the awesome power of the human intellect, it’s obvious that computers are just dense electrochemical machines with intricate wiring. And while one day we might develop smarter (though still soulless) computers that surpass the human intellect in the games mentioned above, it will just end up being some more sophisticated way of doing the same thing. Even if it’s more efficient than our brains at doing complex mathematical operations, it will remain entirely in the dark about what it all means. Mindless processing power alone is not the answer to our existential questions. So, in the end, we don’t need to create another HAL 9000, or Skynet, or some evil global intelligent network of computers that rule the world and try to exterminate humanity as we become obsolete, which is a pretty cool subplot for a future Terminator movie. We don’t need any of that. We’re humans, and we are already intellectually superior to machines. End of story.
 
To be sure, I’m not denying that there are hard problems in AI. Of course, there are. All of these so called “advances” in this field are a bunch of crap if we still don’t have a program that can putt around on a miniaturized golf course all by itself, let alone actually play a decent game of golf like the humans do, without going into meltdown if it misses the hole five out of six times in a row. Can we get a movie about that? If you ask me, we don’t even really know how to define intelligence yet. How does it work, exactly? It’s a mess, really. If we’re ever going to create another actual HAL 9000, or yet another Skynet (what fun), we’d better be all on the same page when it comes to AI: what intelligence is, how it works, what consciousness is, what it feels like, what it really means to be self-aware. Without that common framework, trying to program yet another AI that can play yet another game like Go is like trying to blow up another Death Star with yet another way-too-large superlaser.
 
I think one of the big mistakes that computer scientists are making is that they are conflating intelligence with problem-solving. They’ve gotten into this habit of creating intricate Turing test competitions: give the computer a series of math problems, a chess board, etc., etc., give it a chat interface so you can interact with it like you would with another human being, and then see if the machine can fool you into thinking that it is a human. Once it does this, computers will have passed the Turing test and achieved general AI. Really? Is that really the way it works? I don’t see how. A computer has succeeded in faking it until it makes it, in terms of passing a Turing test competition, only if it has satisfied some pre-specified set of conditions that we know to be what a human would do in the same situation. But that is no guarantee that it has actually achieved intelligence! For all we know, computers can imitate humans until they generate the most plausible patterns of thought and behavior we know of, while all along remaining as soulless as ever. Who’s to say that the computer doesn’t merely use its programming to cheat the test? Who’s to say that it isn’t just shuffling its data around in an effort to do the most computations possible with the least amount of effort? It may succeed in conning us into thinking that it is self-aware, but that doesn’t prove that it actually is. It hasn’t actually passed the Turing test, unless we have defined it in a way that pre-determines the outcome: i.e., if the human pretends to be a computer, then it passes the test, but if the computer pretends to be a human, then it doesn’t pass the test! To me, that just doesn’t sound all that scientific."
 
Best,
Rasmus
 
On Mon, Jul 27, 2020 at 8:04 PM glen <[hidden email]> wrote:
Excellent. Thanks! I'd seen the link to Gwern from Slate Star Codex. But I loathe poetry. Now that you've recommended it, I have no choice. 8^)

On July 27, 2020 6:32:15 PM PDT, Alexander Rasmus <[hidden email]> wrote:

>Glen,
>
>Gwern has an extensive post on GPT-3 poetry experimentation here:
>https://www.gwern.net/GPT-3
>
>I strongly recommend the section on the Cyberiad, where GPT-3 stands in
>for
>Trurl's Electronic Bard:
>https://www.gwern.net/GPT-3#stanislaw-lems-cyberiad
>
>There's some discussion of fine tuning input, but I think more cases
>where
>they keep the prompt fixed and show several different outputs.

-- 
glen


Re: gelotophilia

gepr
In reply to this post by Steve Smith
Yes! This is exactly my sentiment in objecting to the (torturously defined) concept of definite. There are a number of us here on the list who seem to be dyed-in-the-wool predicativists, and impredicativity will be rejected at every turn, often imperiously and pretentiously. I'm not *committed* to the idea that loopiness is a primary constituent of living systems. But so few can construct a good argument *against* it that I've remained in this state for decades now.

On 7/28/20 9:57 AM, Steve Smith wrote:
> Perhaps a properly broadly conceived General Artificial Intelligence would ultimately include all of this as well, and as deep learning evolves, it seems that there is no reason that a GI couldn't simulate the physiological feedback loops that drive and regulate some aspects of humor?
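The physiological feedback Steve alludes to can be sketched, very schematically, as a single negative-feedback loop; the variable names and constants below are purely illustrative, not a model of any actual physiology:

```python
def regulate(level, setpoint=0.5, gain=0.3):
    """One step of a negative-feedback loop: nudge an internal
    'arousal' level back toward its setpoint, the way physiological
    systems damp a laughter response. All values are illustrative."""
    return level + gain * (setpoint - level)

level = 1.0  # arousal spikes after a punchline
for _ in range(10):
    level = regulate(level)
# after ten steps, level has decayed most of the way back
# toward the 0.5 setpoint
```

A GI simulating humor-regulating physiology would presumably need many such coupled loops, not one, but this is the basic mechanism.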


--
↙↙↙ uǝlƃ


Re: GPT-3 and the chinese room

thompnickson2
In reply to this post by doug carmichael

Neat!

 

Nicholas Thompson

Emeritus Professor of Ethology and Psychology

Clark University

[hidden email]

https://wordpress.clarku.edu/nthompson/

 

 


Re: GPT-3 and the chinese room

thompnickson2
In reply to this post by doug carmichael

Jon Wrote:

 

“In the spirit of EricC's comments about the distinction of surface tension and PH, if consciousness is a thing, then it should be so whether or not we all agree.”

 

Agreed.  But for the assertion “T is a thing such that it is conscious,” we must offer, if only tentatively, some definition of what it would be for such a thing to be conscious.  If T’s are not, by definition, the sorts of things that can be conscious, up front, then there is no discussion to be had.

 

“How many angels can dance on the head of a pin?”

“Angels don’t dance.”

“Oh, all right; how many angels can stand on the head of a pin?”

“Angels don’t stand.”

“Oh, all right; what is the foot size of your average angel?”

“Angels don’t have feet.”

“How can an angel carry me to heaven, if he don’t got no feet?”

“Angels don’t carry no bodies into heaven.” Etc.

 

This is the sort of argument that happens when a metaphysical commitment is confused with an empirical assertion.

 

 

 

Nick

 

Nicholas Thompson

Emeritus Professor of Ethology and Psychology

Clark University

[hidden email]

https://wordpress.clarku.edu/nthompson/

 

 



>Glen,
>
>Gwern has an extensive post on GPT-3 poetry experimentation here:
>https://www.gwern.net/GPT-3
>
>I strongly recommend the section on the Cyberiad, where GPT-3 stands in
>for
>Trurl's Electronic Bard:
>https://www.gwern.net/GPT-3#stanislaw-lems-cyberiad
>
>There's some discussion of fine tuning input, but I think more cases
>where
>they keep the prompt fixed and show several different outputs.

--
glen

- .... . -..-. . -. -.. -..-. .. ... -..-. .... . .-. .
FRIAM Applied Complexity Group listserv
Zoom Fridays 9:30a-12p Mtn GMT-6  bit.ly/virtualfriam
un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/


 


Re: gelotophilia

Merle Lefkoff-2
In reply to this post by Steve Smith
Sorry, Steve, to be a bit off topic here, but the "certain codes of personal conduct" you describe as emerging from institutions of "higher education" are now considered racist.  And I suggest that the list take a look at the amazing recent NYTimes piece titled "Whiteness Lessons".  Your generation may not be able to tackle the article with an open mind, but I suggest that we need to pay close attention.


On Tue, Jul 28, 2020 at 9:58 AM Steve Smith <[hidden email]> wrote:

As I read the interchange about GPT-3 and the Chinese room, I was drawn off into side-musings which were finally polyped off to a pure tangent (in my head) when DougC and NickT exchanged:

NLT> Dog do joy; why not computers?

DC> dog is highly interconnected - hormones, nerves, senses, and environment. neurons are not binary . every synapse is an infinite state variable.

While joy and humor are not identical, there is some positive correlation.   Poking around, I was only mildly surprised to find that there is a body of literature, and in fact international organizations and conferences, on humor (not mimes or clowns or stand-up comedians, but real scholars studying the former as well as regular people).   I was looking for the physiological complexes implied by humor or joy.   I haven't (yet) found as much on the topic as I would like, maybe because I got sidetracked reading about two neologisms (ca. 2007) and a related ancient Greek term:   Gelotophobia, Gelotophilia, and Katagelasticism.   My limited Italian and Spanish had me reading these as "Gelato" or "Helado", which translate roughly into our own "Ice Cream", though technically the ingredients trend less rich.

Their meanings, however, are roughly: the fear of being laughed at; the love of being laughed at; and the pleasure of laughing at others.     These are apparently more than the usual discomfort or warm feelings we might get from being laughed at, or from laughing at others, but a deeper and more acute sense of it.

https://www.wired.com/2011/07/international-humor-conference/

https://www.zora.uzh.ch/id/eprint/14037/1/Ruch_Proyer_PhoPhiKat_V.pdf

Part of why I bring it up on this list is that, as I study myself and others while we exchange our ideas, observations, and occasional (un)pleasantries, I am fascinated by the intersection between (convolution amongst?) personal styles and the perhaps more formal "training" each of us might have picked up from our parents, peers, teachers, workplaces, possibly professional organizations, etc.

It appears to me that institutions of higher education enforce/impose a certain code of personal conduct on their participants (undergrads, grads, postdocs, staff, faculty) which is a microcosm of the larger world.  White-collar and blue-collar contexts are also similarly dissimilar, and within those, a cube-farm of programmer-geeks, a bullpen of writers, and a trading floor of traders (all white collar, taking their showers at the beginning of the day) span a wide spectrum, as do blue-collar workers (taking their showers at the end of the day): construction crews, oilfield roughnecks, cowboys, farmhands, etc., each with their own myriad ways of interacting... sometimes *requiring* a level of mocking to feel connected.  There may also be a strong generational component, as we cross several generations: Greatest/Boomers/X/Millennials/Zoomers/??? and all the cusps between.

But what I was most interested in relates to the original discussion: what is the extended physiological response to humor, joy, or mockery that a human (or animal?) may have, and which a synthetic being would need to be designed to include?   Perhaps a properly broadly-conceived General Artificial Intelligence would ultimately include all of this as well, and as deep learning evolves, there seems to be no reason a GAI couldn't simulate the physiological feedback loops that drive and regulate some aspects of humor.
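(A minimal toy sketch of the kind of loop I mean — all state names and constants here are invented for illustration, not drawn from any actual humor-physiology model: a single "arousal" variable that spikes with stimulus and relaxes back toward baseline, the simplest possible homeostatic feedback.)

```python
# Hypothetical sketch: a one-variable physiological feedback loop.
# "arousal" rises in response to stimulus and decays toward baseline;
# the decay term is the regulation, the gain term is the drive.

def step(arousal, stimulus, decay=0.8, gain=0.5):
    """One tick: arousal relaxes by `decay`, then rises with `stimulus`."""
    return decay * arousal + gain * stimulus

def run(stimuli, arousal=0.0):
    """Drive the loop with a stimulus sequence; return the arousal trace."""
    trace = []
    for s in stimuli:
        arousal = step(arousal, s)
        trace.append(round(arousal, 3))
    return trace

# A burst of "jokes" followed by silence: arousal builds, then decays.
print(run([1, 1, 1, 0, 0, 0]))  # [0.5, 0.9, 1.22, 0.976, 0.781, 0.625]
```

Even something this crude shows the qualitative shape — buildup, saturation, decay — that a richer simulated physiology (hormones, nerves, environment, per Doug's point) would presumably layer many of, coupled together.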

- Steve



--
Merle Lefkoff, Ph.D.
Center for Emergent Diplomacy
emergentdiplomacy.org
Santa Fe, New Mexico, USA

mobile:  (303) 859-5609
skype:  merle.lelfkoff2
twitter: @Merle_Lefkoff
