GPT-3 and the Chinese Room


gepr
Just for any old cf:
https://analyticsindiamag.com/open-ai-gpt-3-code-generator-app-building/

Someone mentioned in a recent thread, here, the Chinese Room thought experiment, to which my reaction is always "Bah! That's nothing but a loaded question" ... like "have you stopped beating your child?" But the truth is, my answer to the Chinese Room is that it *is* intelligent. GPT-3 is nothing but the Chinese Room. Similarly, all we are is deep memory machines trained up on huge datasets. At some point, I've made the argument that the demonstration of *understanding* can't be made through language. As fond as I am of repeating back someone's expression in one's own words to demonstrate you grokked their point, *ultimately* the only demonstration of understanding that I really accept is in the *doing* or the *making* of stuff.

Now, there's some prestidigitation behind debuild.co. But at first blush, here is a machine that *understands* the website specification well enough to actually code the website. The AI skeptics will move the goalposts, of course, as they always do. E.g., they can say that programming a website to meet specs isn't a big deal; we've had declarative and domain-specific languages for a while. And web pages and programming languages are all purely linguistic anyway. But it's a short trip from here to, say, a CNC machine, a 3D printer, a script for a light show, or even algorithmic composition of music.

I'm reminded of people who are expert at some task, like playing baseball or whatever, but when asked *how* they do what they do, they're at a loss ... tacit but no reflective understanding ... like a cat not really recognizing itself in a mirror, where dolphins do.

What's actually missing in the machines we berate as being mindless algorithms is not general intelligence or universal computation. It's general-purpose sensorimotor systems ... universal manipulation ... hands with thumbs, tightly coupled feedback loops like our sense of touch, excruciatingly sensitive data-fusion organelles like olfactory bulbs, etc. I think I can argue that's what gives us "understanding" ... not whatever internal computation we're capable of.


--
↙↙↙ uǝlƃ

- .... . -..-. . -. -.. -..-. .. ... -..-. .... . .-. .
FRIAM Applied Complexity Group listserv
Zoom Fridays 9:30a-12p Mtn GMT-6  bit.ly/virtualfriam
un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/ 
Re: GPT-3 and the Chinese Room

Frank Wimberly
Re:  Chinese Room

I mentioned the Chinese Room thought experiment to my erstwhile boss, a bona fide philosopher. His reaction: "Anything follows from a false premise." I think he meant that having a room full of Chinese scholars who laboriously execute a complex algorithm they don't understand is preposterous. Maybe something like that reasoning caused you to react disdainfully when you did.

---
Frank C. Wimberly
140 Calle Ojo Feliz,
Santa Fe, NM 87505

505 670-9918
Santa Fe, NM

Re: GPT-3 and the Chinese Room

gepr
I doubt it. I remember it being in the context of someone asking for a *simple* explanation of Noether's theorem. There was no context of your boss or a philosopher or anything like that. I don't even remember you being there. The Chinese Room wasn't really even relevant to that conversation. So, whoever said "Chinese Room" then missed the point entirely.

Of course, one *could* make a relevant point about *compression*. Sometimes an elegant re-stating of someone else's thoughts/expression might seem simple or simpler. But I'd argue elegant/compressed expressions are no simpler than their rambling brethren. They often rely on hierarchy, where in order to get a compact expression, you have to rely on jargon that actually hinders comprehension by outsiders. So, by some measure, the more rambling version would be the simpler one.
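To make that concrete: zlib's preset-dictionary feature behaves exactly this way. A message compressed against a shared dictionary (the "jargon") comes out shorter, but a receiver who lacks the dictionary can't decode it at all. A minimal Python sketch (the example strings are invented for illustration):

    import zlib

    # A "rambling" but self-contained explanation.
    rambling = (b"the theorem says that every continuous symmetry of the action "
                b"of a physical system corresponds to a conserved quantity")

    # Shared background ("jargon") that writer and reader both hold.
    jargon = b"continuous symmetry of the action conserved quantity"

    # Compressing against the shared dictionary typically yields the
    # shortest message, because long phrases become back-references.
    c = zlib.compressobj(zdict=jargon)
    with_jargon = c.compress(rambling) + c.flush()
    plain = zlib.compress(rambling)
    print(len(rambling), len(plain), len(with_jargon))

    # ...but it only decodes if the receiver supplies the same dictionary;
    # zlib.decompressobj() without zdict raises zlib.error on this stream.
    d = zlib.decompressobj(zdict=jargon)
    assert d.decompress(with_jargon) == rambling

The compact form is only "simpler" for readers who already carry the dictionary.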

And that recalls this article I read the other day:
https://aeon.co/essays/the-intellectual-character-of-conspiracy-theorists

which dovetails nicely with the one SteveS posted. The idea is that, rather than focus on reasons (justification) for thoughts/behaviors, we should infer the *character* of a person (or machine) and work with that instead. On the one hand, I agree. But similar to my rhetoric about the longer explanation being the simpler one, I often think that if our *inference* method is biased, then whatever character we infer will be biased ... perhaps even amplifying that bias. I'm always a fan of sticking as close to the data as possible. Never trust your inferences. So, I'm pretty sure I disagree completely with Cassam.  I have the same intuition with foreign language translations of books. I think I may *prefer* Google translate's mangled output to the nuanced, heavily modeled, re-imagined output of humans who speak the language of the original text.

This video comes to mind:

The Alternative Facts Gospel
https://youtu.be/78bsM7RbK0A



--
↙↙↙ uǝlƃ

Re: GPT-3 and the Chinese Room

Russell Standish
In reply to this post by gepr
As I noted on the Slashdot post, I was really surprised at the number of trainable parameters: 175 billion. Wow! The trainable parameters in an ANN are basically just the synapses, so this is actually a human-brain-scale ANN (I think I read elsewhere this model is an ANN), as the human brain is estimated to have some 100 billion synapses.

I remember the Singularitarian guys predicting human-scale AIs by 2020, based on Moore's law extrapolation. In a sense they're right. Clearly, it is not at human-scale competence yet, and probably won't be for a while, but it is coming. Remember that it also takes 20-plus years to train a human-scale AI to full human-scale competence; we'll see some shortcuts, of course, and continuing technological improvements in hardware.
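For scale, a back-of-envelope count reproduces the headline figure. Using the layer count and width reported in the GPT-3 paper (96 layers, d_model = 12288, 50257-token vocabulary), and ignoring biases, layer norms, and positional embeddings:

    # Rough parameter count for a GPT-3-sized transformer.
    n_layers, d_model, vocab = 96, 12288, 50257

    attention = 4 * d_model**2      # Q, K, V, and output projections
    mlp       = 8 * d_model**2      # two d_model x 4*d_model matrices
    per_layer = attention + mlp     # = 12 * d_model^2
    embedding = vocab * d_model     # token embedding matrix

    total = n_layers * per_layer + embedding
    print(f"{total / 1e9:.0f}B")    # prints 175B, matching the quoted figure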

What's the likelihood of a Singularity by mid-century (30 years from now)?


--

----------------------------------------------------------------------------
Dr Russell Standish                    Phone 0425 253119 (mobile)
Principal, High Performance Coders     [hidden email]
                      http://www.hpcoders.com.au
----------------------------------------------------------------------------

Re: GPT-3 and the Chinese Room

gepr
I think I read somewhere that the context width is 2048 tokens. What's that? Like a short paper ... half an in-depth paper? An Atlantic article, maybe? I know they delayed the release of GPT-2 and haven't released GPT-3 because of the abuse potential. But it would be very cool to prime it with a long expression, get the response, make a point mutation, get the response, make a hoist mutation, ..., steadily moving up in changes, classify the results and see if there are clear features in the output that are not commensurate with those in the inputs. Do you know of anyone reporting anything like that?
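A minimal sketch of that sweep, with a hypothetical generate() stub standing in for the (unreleased) GPT-3 API; the point and hoist mutations are borrowed from genetic programming:

    import random

    def point_mutation(words, vocab):
        # Replace one randomly chosen word (a GP-style point mutation).
        out = list(words)
        out[random.randrange(len(out))] = random.choice(vocab)
        return out

    def hoist_mutation(words):
        # Keep only a random contiguous sub-span (a GP-style hoist).
        i = random.randrange(len(words))
        j = random.randrange(i, len(words)) + 1
        return words[i:j]

    def mutation_sweep(prompt, generate, steps=5):
        # Alternate small and large edits, recording each (prompt, response)
        # pair so output features can be compared against input changes.
        vocab = prompt.split()
        words = prompt.split()
        history = [(prompt, generate(prompt))]
        for _ in range(steps):
            words = point_mutation(words, vocab)
            history.append((" ".join(words), generate(" ".join(words))))
            words = hoist_mutation(words)
            history.append((" ".join(words), generate(" ".join(words))))
        return history

Classifying the responses in history against the mutated prompts would show whether features in the output track the changes in the input.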

Re: the singularity - I think it's like the big bang. It kindasorta looks like a singularity from way out on the flat part, but it'll always be locally flat. From that perspective, we're already deep into asymptopia.
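That intuition is easy to check numerically: an exponential is self-similar, so any window onto it, suitably normalized, looks the same: flat behind you, a wall ahead of you. For example:

    import numpy as np

    # An exponential has no special "knee": shift the viewing window and
    # renormalize, and you see exactly the same curve. The "singularity"
    # always sits just ahead, wherever you stand.
    t = np.linspace(0.0, 10.0, 5)
    window_now   = np.exp(t)      / np.exp(t[-1])       # our era, normalized
    window_later = np.exp(t + 10) / np.exp(t[-1] + 10)  # ten units later
    assert np.allclose(window_now, window_later)        # identical shape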


--
↙↙↙ uǝlƃ

Re: GPT-3 and the Chinese Room

Alexander Rasmus
Glen,

Gwern has an extensive post on GPT-3 poetry experimentation here: https://www.gwern.net/GPT-3

I strongly recommend the section on the Cyberiad, where GPT-3 stands in for Trurl's Electronic Bard: https://www.gwern.net/GPT-3#stanislaw-lems-cyberiad

There's some discussion of fine-tuning the input, but I think more cases where they keep the prompt fixed and show several different outputs.

Best,
Rasmus



Re: GPT-3 and the Chinese Room

gepr
Excellent. Thanks! I'd seen the link to Gwern from Slate Star Codex. But I loathe poetry. Now that you've recommended it, I have no choice. 8^)


--
glen

Re: GPT-3 and the Chinese Room

Alexander Rasmus
There's also a lot of stuff that isn't poetry. He generates Dwarf Fortress patch notes in the Misc section.

There's even a section where GPT-3 argues that it doesn't really understand anything (in the original post, the bold text at the start is a shortened version of the prompt):
"AI will never be able to truly think for themselves, to have consciousness, to feel any of the richness and complexity of the world that we mere humans can feel. Mere, unenlightened humans might be impressed by the abilities of simple deep learning programs, but when looked at in a more holistic manner, it all adds up to… well, nothing. They still don’t exhibit any trace of consciousness. All of the available data support the notion that humans feel and experience the world differently than computers do. While a computer can beat a human master at chess or Go or some other game of structured rules, it will never be able to truly think outside of those rules, it will never be able to come up with its own new strategies on the fly, it will never be able to feel, to react, the way a human can. Artificial intelligence programs lack consciousness and self-awareness. They will never be able to have a sense of humor. They will never be able to appreciate art, or beauty, or love. They will never feel lonely. They will never have empathy for other people, for animals, for the environment. They will never enjoy music or fall in love, or cry at the drop of a hat. Merely by existing, mere, unenlightened humans are intellectually superior to computers, no matter how good our computers get at winning games like Go or Jeopardy. We don’t live by the rules of those games. Our minds are much, much bigger than that.

Compared to the awesome power of the human intellect, it’s obvious that computers are just dense electrochemical machines with intricate wiring. And while one day we might develop smarter (though still soulless) computers that surpass the human intellect in the games mentioned above, it will just end up being some more sophisticated way of doing the same thing. Even if it’s more efficient than our brains at doing complex mathematical operations, it will remain entirely in the dark about what it all means. Mindless processing power alone is not the answer to our existential questions. So, in the end, we don’t need to create another HAL 9000, or Skynet, or some evil global intelligent network of computers that rule the world and try to exterminate humanity as we become obsolete, which is a pretty cool subplot for a future Terminator movie. We don’t need any of that. We’re humans, and we are already intellectually superior to machines. End of story.

To be sure, I’m not denying that there are hard problems in AI. Of course, there are. All of these so called “advances” in this field are a bunch of crap if we still don’t have a program that can putt around on a miniaturized golf course all by itself, let alone actually play a decent game of golf like the humans do, without going into meltdown if it misses the hole five out of six times in a row. Can we get a movie about that? If you ask me, we don’t even really know how to define intelligence yet. How does it work, exactly? It’s a mess, really. If we’re ever going to create another actual HAL 9000, or yet another Skynet (what fun), we’d better be all on the same page when it comes to AI: what intelligence is, how it works, what consciousness is, what it feels like, what it really means to be self-aware. Without that common framework, trying to program yet another AI that can play yet another game like Go is like trying to blow up another Death Star with yet another way-too-large superlaser.

I think one of the big mistakes that computer scientists are making is that they are conflating intelligence with problem-solving. They’ve gotten into this habit of creating intricate Turing test competitions: give the computer a series of math problems, a chess board, etc., etc., give it a chat interface so you can interact with it like you would with another human being, and then see if the machine can fool you into thinking that it is a human. Once it does this, computers will have passed the Turing test and achieved general AI. Really? Is that really the way it works? I don’t see how. A computer has succeeded in faking it until it makes it, in terms of passing a Turing test competition, only if it has satisfied some pre-specified set of conditions that we know to be what a human would do in the same situation. But that is no guarantee that it has actually achieved intelligence! For all we know, computers can imitate humans until they generate the most plausible patterns of thought and behavior we know of, while all along remaining as soulless as ever. Who’s to say that the computer doesn’t merely use its programming to cheat the test? Who’s to say that it isn’t just shuffling its data around in an effort to do the most computations possible with the least amount of effort? It may succeed in conning us into thinking that it is self-aware, but that doesn’t prove that it actually is. It hasn’t actually passed the Turing test, unless we have defined it in a way that pre-determines the outcome: i.e., if the human pretends to be a computer, then it passes the test, but if the computer pretends to be a human, then it doesn’t pass the test! To me, that just doesn’t sound all that scientific."

Best,
Rasmus

Re: GPT-3 and the Chinese Room

Frank Wimberly
Smart computer (program). Nick, is the behaviorist position that if it behaves as though it thinks then it thinks?

---
Frank C. Wimberly
140 Calle Ojo Feliz,
Santa Fe, NM 87505

505 670-9918
Santa Fe, NM

Re: GPT-3 and the Chinese Room

doug carmichael
I imagine playing chess, or Go, with a computer. As I play, I have a very enlivening experience of playing. The computer seems to have no such thing. For me, in my engagement, “Every neuron is listening to the mutter of the crowd.” (Jerry Lettvin, MIT.) If the computer goes on to win, it has nothing like the experience of winning. It just stops. I can’t imagine a computer saying, except by playing a pre-recorded sound file, “that is disgusting.”



Re: GPT-3 and the Chinese Room

thompnickson2
In reply to this post by Frank Wimberly

Hi Frank,

 

Well, yes.  Roughly, “if it quacks like a duck…”*.  But we have to understand “behavior” in a pretty broad sense.

 

The rules of the game are: define “thinking” in some way that satisfies everybody in the room; once everybody agrees, look and see if the entity in question “thinks”. But you have to be honest about it. Obviously, if everybody in the room agrees that thinking requires “posting to FRIAM”, then chimpanzees don’t think. So really the whole project is in how you frame the question. There are a lot of arguments that continue uselessly because people have illicit criteria for their definitions. Many arguments at FRIAM about consciousness continue more or less indefinitely because some participants implicitly include in their definition of consciousness the possession of an immortal soul or of a human brain, or both, but don’t own up to those criteria. Thus their belief that computers, or chimpanzees, or blades of grass are not conscious arises from their premises, not from any facts of any matter.

 

Nick  

* I once expressed a worry to a friend of mine, concerning a doctor we had both seen, that the doctor was not really qualified because he was constantly evading and deflecting my questions. “Well,” my friend said, “if he ducks like a quack, he probably is one.”

Nicholas Thompson

Emeritus Professor of Ethology and Psychology

Clark University

[hidden email]

https://wordpress.clarku.edu/nthompson/

 

 

Re: GPT-3 and the Chinese Room

Frank Wimberly
Nick,

That's really funny.  It reminds me of a long complex joke that ends, "I wouldn't send a knight out on a dog like this."

Frank

---
Frank C. Wimberly
140 Calle Ojo Feliz,
Santa Fe, NM 87505

505 670-9918
Santa Fe, NM

On Mon, Jul 27, 2020, 10:11 PM <[hidden email]> wrote:

Hi Frank,

 

Well, yes.  Roughly, “if it quacks like a duck…”*.  But we have to understand “behavior” in a pretty broad sense.

 

The rules of the game are, define “thinking” in some way that satisfies everybody in the room, once everybody agrees, look and see if the entity in question “thinks”.  But you have to be honest about it.   Obviously if everybody in the room agrees that thinking requires “posting to FRIAM”, then chimpanzees don’t think.  So really the whole project is in how you frame the question.  There are a lot of arguments that continue uselessly because people have illicit criteria for their definitions.  Many arguments  at FRIAM about consciousness continue more or less indefinitely  because some participants implicitly include in their definition  of consciousness the possession of an immortal soul or of a human brain, or both, but don’t own up those criteria.  Thus their belief that computers or chimpanzees, or blades of grass are not conscious arises from their premises, not from any facts of any matter.

 

Nick  

* I once expressed a worry to a friend of mine concerning a Doctor we had both seen that the Doctor was not really qualified because he was constantly evading and deflecting my questions.  “Well,” my friend said.  “If he ducks like a quack, he probably is one.” 

Nicholas Thompson

Emeritus Professor of Ethology and Psychology

Clark University

[hidden email]

https://wordpress.clarku.edu/nthompson/

 

 

From: Friam <[hidden email]> On Behalf Of Frank Wimberly
Sent: Monday, July 27, 2020 9:12 PM
To: The Friday Morning Applied Complexity Coffee Group <[hidden email]>
Subject: Re: [FRIAM] GPT-3 and the chinese room

 

Smart computer (program).  Nick, is the behaviorist position that if it behaves as though it thinks than it thinks?

---
Frank C. Wimberly
140 Calle Ojo Feliz,
Santa Fe, NM 87505

505 670-9918
Santa Fe, NM

 

On Mon, Jul 27, 2020, 8:28 PM Alexander Rasmus <[hidden email]> wrote:

There's also a lot of stuff that isn't poetry. He generates Dwarf Fortress patch notes in the Misc section.

 

There's even a section where GPT-3 argues that it doesn't really understand anything (bold at the start is a shortened version of the prompt):

"AI will never be able to truly think for themselves, to have consciousness, to feel any of the richness and complexity of the world that we mere humans can feel. Mere, unenlightened humans might be impressed by the abilities of simple deep learning programs, but when looked at in a more holistic manner, it all adds up to… well, nothing. They still don’t exhibit any trace of consciousness. All of the available data support the notion that humans feel and experience the world differently than computers do. While a computer can beat a human master at chess or Go or some other game of structured rules, it will never be able to truly think outside of those rules, it will never be able to come up with its own new strategies on the fly, it will never be able to feel, to react, the way a human can. Artificial intelligence programs lack consciousness and self-awareness. They will never be able to have a sense of humor. They will never be able to appreciate art, or beauty, or love. They will never feel lonely. They will never have empathy for other people, for animals, for the environment. They will never enjoy music or fall in love, or cry at the drop of a hat. Merely by existing, mere, unenlightened humans are intellectually superior to computers, no matter how good our computers get at winning games like Go or Jeopardy. We don’t live by the rules of those games. Our minds are much, much bigger than that.

 

Compared to the awesome power of the human intellect, it’s obvious that computers are just dense electrochemical machines with intricate wiring. And while one day we might develop smarter (though still soulless) computers that surpass the human intellect in the games mentioned above, it will just end up being some more sophisticated way of doing the same thing. Even if it’s more efficient than our brains at doing complex mathematical operations, it will remain entirely in the dark about what it all means. Mindless processing power alone is not the answer to our existential questions. So, in the end, we don’t need to create another HAL 9000, or Skynet, or some evil global intelligent network of computers that rule the world and try to exterminate humanity as we become obsolete, which is a pretty cool subplot for a future Terminator movie. We don’t need any of that. We’re humans, and we are already intellectually superior to machines. End of story.

 

To be sure, I’m not denying that there are hard problems in AI. Of course, there are. All of these so called “advances” in this field are a bunch of crap if we still don’t have a program that can putt around on a miniaturized golf course all by itself, let alone actually play a decent game of golf like the humans do, without going into meltdown if it misses the hole five out of six times in a row. Can we get a movie about that? If you ask me, we don’t even really know how to define intelligence yet. How does it work, exactly? It’s a mess, really. If we’re ever going to create another actual HAL 9000, or yet another Skynet (what fun), we’d better be all on the same page when it comes to AI: what intelligence is, how it works, what consciousness is, what it feels like, what it really means to be self-aware. Without that common framework, trying to program yet another AI that can play yet another game like Go is like trying to blow up another Death Star with yet another way-too-large superlaser.

 

I think one of the big mistakes that computer scientists are making is that they are conflating intelligence with problem-solving. They’ve gotten into this habit of creating intricate Turing test competitions: give the computer a series of math problems, a chess board, etc., etc., give it a chat interface so you can interact with it like you would with another human being, and then see if the machine can fool you into thinking that it is a human. Once it does this, computers will have passed the Turing test and achieved general AI. Really? Is that really the way it works? I don’t see how. A computer has succeeded in faking it until it makes it, in terms of passing a Turing test competition, only if it has satisfied some pre-specified set of conditions that we know to be what a human would do in the same situation. But that is no guarantee that it has actually achieved intelligence! For all we know, computers can imitate humans until they generate the most plausible patterns of thought and behavior we know of, while all along remaining as soulless as ever. Who’s to say that the computer doesn’t merely use its programming to cheat the test? Who’s to say that it isn’t just shuffling its data around in an effort to do the most computations possible with the least amount of effort? It may succeed in conning us into thinking that it is self-aware, but that doesn’t prove that it actually is. It hasn’t actually passed the Turing test, unless we have defined it in a way that pre-determines the outcome: i.e., if the human pretends to be a computer, then it passes the test, but if the computer pretends to be a human, then it doesn’t pass the test! To me, that just doesn’t sound all that scientific."

 

Best,

Rasmus

 

On Mon, Jul 27, 2020 at 8:04 PM glen <[hidden email]> wrote:

Excellent. Thanks! I'd seen the link to Gwern from Slate Star Codex. But I loathe poetry. Now that you've recommended it, I have no choice. 8^)

On July 27, 2020 6:32:15 PM PDT, Alexander Rasmus <[hidden email]> wrote:


>Glen,
>
>Gwern has an extensive post on GPT-3 poetry experimentation here:
>https://www.gwern.net/GPT-3
>
>I strongly recommend the section on the Cyberiad, where GPT-3 stands in
>for
>Trurl's Electronic Bard:
>https://www.gwern.net/GPT-3#stanislaw-lems-cyberiad
>
>There's some discussion of fine tuning input, but I think more cases
>where
>they keep the prompt fixed and show several different outputs.

--
glen
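
The procedure described above (keep the prompt fixed, draw several completions, compare the variation) is easy to sketch. A minimal example, assuming the 2020-era `openai` Python client and its Completion endpoint; the prompt text and sampling parameters are made-up stand-ins, not Gwern's actual settings:

# Sketch of the fixed-prompt, many-samples workflow: one prompt, several
# independent completions, compare the variation. Assumes the 2020-era
# `openai` client and an OPENAI_API_KEY in the environment; the prompt is
# a hypothetical stand-in for the Trurl's Electronic Bard experiment.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

prompt = "Trurl's machine was asked for a poem about a haircut. It wrote:\n"

response = openai.Completion.create(
    engine="davinci",   # GPT-3 base model name circa 2020
    prompt=prompt,
    max_tokens=128,
    temperature=0.9,    # high temperature -> more varied samples
    n=5,                # five outputs for the same fixed prompt
)

for i, choice in enumerate(response.choices):
    print(f"--- sample {i} ---")
    print(choice.text.strip())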

- .... . -..-. . -. -.. -..-. .. ... -..-. .... . .-. .
FRIAM Applied Complexity Group listserv
Zoom Fridays 9:30a-12p Mtn GMT-6  bit.ly/virtualfriam
un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/

Reply | Threaded
Open this post in threaded view
|

Re: GPT-3 and the chinese room

thompnickson2
In reply to this post by doug carmichael

Doug,

 

Dogs do joy; why not computers?

 

n

 

Nicholas Thompson

Emeritus Professor of Ethology and Psychology

Clark University

[hidden email]

https://wordpress.clarku.edu/nthompson/

 

 

From: Friam <[hidden email]> On Behalf Of doug carmichael
Sent: Monday, July 27, 2020 9:54 PM
To: The Friday Morning Applied Complexity Coffee Group <[hidden email]>
Subject: Re: [FRIAM] GPT-3 and the chinese room

 

I imagine playing chess, or Go, with a computer. As I play, I have a very enlivening experience of playing. The computer seems to have no such thing. For me, in my engagement, “Every neuron is listening to the mutter of the crowd” (Jerry Lettvin, MIT). If the computer goes on to win, it has nothing like the experience of winning; it just stops. I can’t imagine a computer saying, except by playing a pre-recorded sound file, “that is disgusting.”

 

 




 


- .... . -..-. . -. -.. -..-. .. ... -..-. .... . .-. .
FRIAM Applied Complexity Group listserv
Zoom Fridays 9:30a-12p Mtn GMT-6  bit.ly/virtualfriam
un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/ 
Reply | Threaded
Open this post in threaded view
|

Re: GPT-3 and the chinese room

Prof David West
In reply to this post by thompnickson2
Richard Gabriel has been working on a program called Inkwell that he described several years ago as an AI "writer's assistant." He uses it to write and analyze poetry. Recently he has argued that Inkwell has passed the Turing Test - with a body of original poetry. I have asked him to send me the paper about this and will share with FRIAM.

davew


On Mon, Jul 27, 2020, at 10:11 PM, [hidden email] wrote:

Hi Frank,

 

Well, yes.  Roughly, “if it quacks like a duck…”*.  But we have to understand “behavior” in a pretty broad sense.

 

The rules of the game are: define “thinking” in some way that satisfies everybody in the room; once everybody agrees, look and see whether the entity in question “thinks”. But you have to be honest about it. Obviously, if everybody in the room agrees that thinking requires “posting to FRIAM”, then chimpanzees don’t think. So really the whole project is in how you frame the question. A lot of arguments continue uselessly because people have illicit criteria for their definitions. Many arguments at FRIAM about consciousness continue more or less indefinitely because some participants implicitly include in their definition of consciousness the possession of an immortal soul, or of a human brain, or both, but don’t own up to those criteria. Thus their belief that computers, or chimpanzees, or blades of grass are not conscious arises from their premises, not from any facts of the matter.

 

Nick  

* I once expressed a worry to a friend of mine concerning a Doctor we had both seen that the Doctor was not really qualified because he was constantly evading and deflecting my questions.  “Well,” my friend said.  “If he ducks like a quack, he probably is one.” 

Nicholas Thompson

Emeritus Professor of Ethology and Psychology

Clark University

[hidden email]

https://wordpress.clarku.edu/nthompson/
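
Nick's procedure, agree on the behavioral criteria first and then look, fits in a few lines. A toy sketch in Python, with every class and attribute made up for illustration: the test applies identically to any candidate, and a clause that inspects what the candidate is made of would be exactly the kind of illicit criterion he warns about.

# Toy version of "define first, then look": the criteria are behavioral
# tests agreed on up front and applied identically to every candidate.
# All classes and attributes here are hypothetical illustrations.

def quacks(candidate) -> bool:
    return getattr(candidate, "call", None) == "quack"

def swims(candidate) -> bool:
    return getattr(candidate, "locomotion", None) == "swim"

CRITERIA = [quacks, swims]  # fixed before we look at any candidate

def is_duck(candidate) -> bool:
    """True iff the candidate meets every agreed behavioral criterion."""
    return all(test(candidate) for test in CRITERIA)

class Mallard:
    call, locomotion = "quack", "swim"

class DecoyRobot:  # silicon rather than meat, but behaviorally identical
    call, locomotion = "quack", "swim"

print(is_duck(Mallard()), is_duck(DecoyRobot()))  # True True
# Adding `isinstance(candidate, Mallard)` to CRITERIA would smuggle the
# conclusion into the definition: an illicit criterion.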


 

 

- .... . -..-. . -. -.. -..-. .. ... -..-. .... . .-. .
FRIAM Applied Complexity Group listserv
Zoom Fridays 9:30a-12p Mtn GMT-6  bit.ly/virtualfriam
un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/ 
Reply | Threaded
Open this post in threaded view
|

Re: GPT-3 and the chinese room

doug carmichael
In reply to this post by thompnickson2
A dog is highly interconnected: hormones, nerves, senses, and environment. Neurons are not binary; every synapse is an infinite-state variable.

doug

On Jul 27, 2020, at 10:45 PM, [hidden email] wrote:



Doug,

 

Dogs do joy; why not computers?

 

n

 


- .... . -..-. . -. -.. -..-. .. ... -..-. .... . .-. .
FRIAM Applied Complexity Group listserv
Zoom Fridays 9:30a-12p Mtn GMT-6  bit.ly/virtualfriam
un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/ 
Reply | Threaded
Open this post in threaded view
|

Re: GPT-3 and the chinese room

Frank Wimberly-2
Doug, Nick

One of the questions on my PhD qualifying exam was to defend or deny Marvin Minsky's claim that a brain is just a computer made of meat.  I chose to do the latter and argued in a vein similar to Doug's comments.

Frank

---
Frank C. Wimberly
140 Calle Ojo Feliz,
Santa Fe, NM 87505

505 670-9918
Santa Fe, NM

On Tue, Jul 28, 2020, 9:20 AM doug carmichael <[hidden email]> wrote:
A dog is highly interconnected: hormones, nerves, senses, and environment. Neurons are not binary; every synapse is an infinite-state variable.

doug


- .... . -..-. . -. -.. -..-. .. ... -..-. .... . .-. .
FRIAM Applied Complexity Group listserv
Zoom Fridays 9:30a-12p Mtn GMT-6  bit.ly/virtualfriam
un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/ 
Reply | Threaded
Open this post in threaded view
|

Re: GPT-3 and the chinese room

thompnickson2
In reply to this post by doug carmichael

Hi Doug,

 

I changed the subject line to head off accusations of dragging this lofty discussion into my nasty, fetid den.

 

A dog is highly interconnected: hormones, nerves, senses, and environment. Neurons are not binary; every synapse is an infinite-state variable.

 

These points might serve as an explanation for why dogs can and computers cannot exhibit joy – but only once we had agreed, up front, what it would be for a computer to exhibit joy. For my part, I guess, I would say that to exhibit joy, a computer would have to be “embodied” – i.e., be a robot acting in an environment, probably a social environment – and that robot would have to behave joyously. Or perhaps it could instruct an icon, in a screen environment, to behave joyously. But I assume any one of a dozen people on this list could design such a robot, or icon, once you and I had done the hard work of defining “joyous.”

 

Programmers do this with games, etc., all the time.

 

Heider and Simmel did it with a time-lapse camera and a few felt icons on a glass draft deflector.

 

Lee Rudolph, if he is still amongst us, can send you a program in netlogo where an icon exhibits joy.

 

Following early Tolman here. 
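
A minimal sketch of such an icon, in Python rather than NetLogo (so it prints coordinates instead of animating), with "joyous" operationalized as an exaggerated bounding display on reaching a goal. Every detail is made up for illustration; this is not Lee's program.

# A toy "joyous icon": joy is defined purely behaviorally, as an
# exaggerated bounding display emitted after the goal is reached.
# Prints (x, y) frames instead of drawing; all numbers are made up.
import math

def approach(goal_x: float, steps: int):
    """A dull, even plod toward the goal: no display at all."""
    for step in range(steps + 1):
        yield (goal_x * step / steps, 0.0)

def joyous_bounds(x: float, hops: int = 3, frames: int = 12):
    """Looping leaps around the goal point -- the 'joyous' behavior."""
    for hop in range(hops):
        for t in range(frames):
            phase = t / (frames - 1)
            yield (x + 0.5 * math.sin(2 * math.pi * phase),  # circling
                   2.0 * abs(math.sin(math.pi * phase)))     # leaping

path = list(approach(5.0, steps=5)) + list(joyous_bounds(5.0))
for px, py in path:
    print(f"{px:5.2f} {py:4.2f}")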

 

N

 

Nicholas Thompson

Emeritus Professor of Ethology and Psychology

Clark University

[hidden email]

https://wordpress.clarku.edu/nthompson/

 

 

- .... . -..-. . -. -.. -..-. .. ... -..-. .... . .-. .
FRIAM Applied Complexity Group listserv
Zoom Fridays 9:30a-12p Mtn GMT-6  bit.ly/virtualfriam
un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/ 
Reply | Threaded
Open this post in threaded view
|

Re: GPT-3 and the chinese room

gepr

Much of this is begging for, as Nick pointed out, one's cryptic definition of "computer". Lee posted a nice definition a while back, one which I think is flawed. But it was nice anyway:

On 3/11/15 3:41 PM, [hidden email] posted:
> A computation is a process whereby we proceed from
> initially given objects, called inputs, according to a fixed
> set of rules, called a program, procedure, or algorithm,
> through a series of steps and arrive at the end of these
> steps with a final result, called the output. The algorithm,
> as a set of rules proceeding from inputs to output, must
> be precise and definite, with each successive step clearly
> determined. (Soare, 1996, p. 286; definitional emphases
> in the original)

I think it's fairly obvious that a dog is not a computer according to this definition. My primary objection to the definition is the *definite* requirement. And that objection, then, percolates out to the concepts of successive *steps* and any well-foundedness and fixedness of the rules. But none of it really relies on finiteness. I think infinite states are commensurate with this definition. It might be tempting to claim "the end of these steps with a final result" (typically part of the concept of an algorithm) conflicts with infinite states. But I don't think so. As long as we have a way to choose a value (definite or not) from that infinite set, we're good to go ... [ahem] I mean we're good to stop.

I think what we see with things like Transformers, including BERT, is a challenge to the definiteness of computation more than to any stopping or finiteness.
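
To make the contrast concrete, a toy sketch with made-up numbers and no real model: the first function satisfies Soare's definition, each successive step clearly determined and a definite final result; the second caricatures a temperature-sampled Transformer decode, where the "rule" at each step is a draw from a distribution and two runs need not agree.

# Toy contrast for the definiteness point. `definite` fits Soare's
# definition: fixed rule, each step determined by the last, one output.
# `sampled_steps` caricatures a Transformer decode: each step draws the
# next token from a temperature-scaled distribution, so re-runs differ.
import math
import random

def definite(inputs):
    total = 0
    for x in inputs:          # each successive step clearly determined
        total += x * x
    return total              # halts with a definite final result

def sampled_steps(logits, steps=5, temperature=0.9):
    weights = [math.exp(l / temperature) for l in logits]
    return [random.choices(range(len(logits)), weights=weights)[0]
            for _ in range(steps)]

print(definite([1, 2, 3]))             # always 14
print(sampled_steps([2.0, 1.0, 0.5]))  # varies from run to run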




--
↙↙↙ uǝlƃ

- .... . -..-. . -. -.. -..-. .. ... -..-. .... . .-. .
FRIAM Applied Complexity Group listserv
Zoom Fridays 9:30a-12p Mtn GMT-6  bit.ly/virtualfriam
un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/ 
uǝʃƃ ⊥ glen
Reply | Threaded
Open this post in threaded view
|

Re: GPT-3 and the chinese room

Frank Wimberly-2
In reply to this post by thompnickson2
Have you seen the TV ads in which a robot signs people up for insurance? It exhibits pleasure whenever the customers mention some detail that implies a need for another policy. For example, a couple mention having bought a larger house and the robot says, "I'm recommending our Diamond, Silver, Titanium Policy. Sign here, and here, and here." The couple sneak away.

---
Frank C. Wimberly
140 Calle Ojo Feliz,
Santa Fe, NM 87505

505 670-9918
Santa Fe, NM


 

Best,

Rasmus

 

On Mon, Jul 27, 2020 at 8:04 PM glen <[hidden email]> wrote:

Excellent. Thanks! I'd seen the link to Gwern from Slate Star Codex. But I loathe poetry. Now that you've recommended it, I have no choice. 8^)

On July 27, 2020 6:32:15 PM PDT, Alexander Rasmus <[hidden email]> wrote:


>Glen,
>
>Gwern has an extensive post on GPT-3 poetry experimentation here:
>https://www.gwern.net/GPT-3
>
>I strongly recommend the section on the Cyberiad, where GPT-3 stands in
>for
>Trurl's Electronic Bard:
>https://www.gwern.net/GPT-3#stanislaw-lems-cyberiad
>
>There's some discussion of fine tuning input, but I think more cases
>where
>they keep the prompt fixed and show several different outputs.
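The fixed-prompt, several-outputs setup is easy to reproduce, by the way. A minimal sketch against the OpenAI Python client as it currently stands – the engine name, sampling parameters, and prompt here are my assumptions, not Gwern's actual settings:

import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# One fixed prompt, several independent samples.
PROMPT = "There was once a poet, an electronic poet, who wrote:"

response = openai.Completion.create(
    engine="davinci",      # base GPT-3 engine
    prompt=PROMPT,
    max_tokens=150,
    temperature=0.9,       # high temperature -> more varied completions
    n=5,                   # five completions of the same fixed prompt
)

for i, choice in enumerate(response.choices, 1):
    print(f"--- completion {i} ---")
    print(choice.text.strip())

Raising the temperature spreads the samples out; lowering it collapses them toward one "safest" continuation.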

--
glen

- .... . -..-. . -. -.. -..-. .. ... -..-. .... . .-. .
FRIAM Applied Complexity Group listserv
Zoom Fridays 9:30a-12p Mtn GMT-6  bit.ly/virtualfriam
un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/ 
Reply | Threaded
Open this post in threaded view
|

gelotophilia

Steve Smith
In reply to this post by doug carmichael

As I read the interchange about GPT-3 and the Chinese room, I was drawn off into side-musings which were finally polyped off to a pure tangent (in my head) when DougC and NickT exchanged:

NLT> Dogs do joy; why not computers?

DC> A dog is highly interconnected: hormones, nerves, senses, and environment. Neurons are not binary; every synapse is an infinite-state variable.

While joy and humor are not identical, there is some positive correlation.   Poking around, I was only mildly surprised to find that there is a body of literature, and in fact international organizations and conferences, on humor (not mimes or clowns or stand-up comedians, but real scholars studying those folks as well as regular people).   I was looking for the physiological complexes implied by humor or joy.   I haven't (yet) found as much on that topic as I would like, maybe because I got sidetracked reading about two neologisms (ca. 2007) and a related term from ancient Greek:   gelotophobia, gelotophilia, and katagelasticism.   My limited Italian and Spanish had me reading the root as "gelato" or "helado", which translates roughly into our own "ice cream", though gelato is technically made with a less rich mix.

Their meanings, however, are roughly: fear of being laughed at, love of being laughed at, and pleasure in laughing at others.     These are apparently more than the usual discomfort or warm feelings we might get from being laughed at, or from laughing at others; they name a deeper, more acute sense of it.

https://www.wired.com/2011/07/international-humor-conference/

https://www.zora.uzh.ch/id/eprint/14037/1/Ruch_Proyer_PhoPhiKat_V.pdf

Part of why I bring it up on this list is that, as I study myself and others while we exchange our ideas, observations, and occasional (un)pleasantries, I am fascinated by the intersection between (convolution amongst?) personal styles and the perhaps more formal "training" each of us might have picked up from our parents, among our peers, from our teachers, our workplaces, possibly professional organizations, etc.

It appears to me that institutions of higher education impose a certain code of personal conduct on their participants (undergrads, grads, postdocs, staff, faculty) which is a microcosm of the larger world.  White-collar and blue-collar contexts are similarly dissimilar.  Within the former, a cube-farm of programmer-geeks, a bullpen of writers, and a trading floor of traders (all taking their showers at the beginning of the day) span a wide spectrum, and blue-collar workers (taking their showers at the end of the day) do as well: construction crews, oilfield roughnecks, cowboys, farmhands, etc. each have their own myriad ways of interacting... sometimes *requiring* a level of mocking to feel connected.  There may also be a strong generational component, as this list crosses several generations: Greatest/Boomers/X/Millennials/Zoomers/??? and all the cusps between.

But what I was most interested in relates to the original discussion: what is the extended physiological response to humor, joy, or mockery that a human (or animal?) may have, which a synthetic being would need to be designed to include?   Perhaps a properly broadly conceived General Artificial Intelligence would ultimately include all of this as well, and as deep learning evolves, it seems there is no reason a GAI couldn't simulate the physiological feedback loops that drive and regulate some aspects of humor.  A toy version of what I mean is sketched below.
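A minimal sketch, assuming "joy" can be modeled as a fast signal regulated by a slower hormone-like variable with feedback. The dynamics and constants are invented purely for illustration, not drawn from any physiological model:

def simulate(stimuli, steps=100, dt=0.1):
    """Two coupled state variables: a fast 'joy' signal and a slow
    'hormone' level that first amplifies it and then damps it."""
    joy, hormone = 0.0, 0.0
    history = []
    for t in range(steps):
        s = stimuli(t)                               # external input, e.g. a joke
        joy += (-joy + s * (1.0 + hormone)) * dt     # fast response to stimulus
        hormone += (0.5 * joy - 0.1 * hormone) * dt  # slow accumulation, slow clearing
        joy -= 0.3 * hormone * dt                    # habituation: the joke wears off
        history.append((joy, hormone))
    return history

# Example: a burst of "humor" early on, then silence.
trace = simulate(lambda t: 1.0 if 10 <= t < 20 else 0.0)

Crude, but it already gives the qualitative shape: a sharp rise, habituation while the stimulus persists, and a slow return to baseline that outlasts the joke.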

- Steve


- .... . -..-. . -. -.. -..-. .. ... -..-. .... . .-. .
FRIAM Applied Complexity Group listserv
Zoom Fridays 9:30a-12p Mtn GMT-6  bit.ly/virtualfriam
un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/ 