Singularians, Less Wrong, and Roko's Basilisk


Singularians, Less Wrong, and Roko's Basilisk

glen ropella

http://rationalwiki.org/wiki/LessWrong#Roko.27s_Basilisk

> In July of 2010, Roko (a top contributor at the time) wondered if a future Friendly AI would punish people who didn't do everything in their power to further the AI research from which this AI originated, by at the very least donating all they have to it.

Sorry if y'all have seen this.  I just stumbled on it and thought it was
funny enough to pass on.

--
⇒⇐ glen


Re: Singularians, Less Wrong, and Roko's Basilisk

Roger Critchlow-2
Which reminds me of the research I posted about growing socially intelligent agents by embedding them in an environment where they're forced to play the prisoner's dilemma with each other over and over.  I wondered how they would feel about having been subjected to thousands of generations of this torture when they realized how we had grown them.  There are two questions, of course: whether it's moral to torture pre-sentients to bring them to sentience; and whether the resulting super-sentient will forgive you when it becomes the master.
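
For anyone who wants to poke at it, here is a minimal toy sketch of that kind of setup -- not the actual research code, just an illustration under made-up assumptions: each agent's strategy is a pair of cooperation probabilities (one for each thing the opponent did last round), every generation plays a round-robin iterated prisoner's dilemma, and the top half survives and spawns mutated copies.  The payoff values, population size, and truncation selection are all invented for the example.

import random

# Standard prisoner's dilemma payoffs, as (my_payoff, their_payoff).
PAYOFF = {
    ("C", "C"): (3, 3),   # mutual cooperation
    ("C", "D"): (0, 5),   # sucker vs. defector
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),   # mutual defection
}

class Agent:
    """Strategy = probability of cooperating, conditioned on the
    opponent's previous move."""
    def __init__(self, p_after_c=None, p_after_d=None):
        self.p_after_c = random.random() if p_after_c is None else p_after_c
        self.p_after_d = random.random() if p_after_d is None else p_after_d
        self.score = 0

    def move(self, opp_last):
        p = self.p_after_c if opp_last == "C" else self.p_after_d
        return "C" if random.random() < p else "D"

    def mutated_copy(self, sigma=0.05):
        # Child inherits the parent's probabilities plus small Gaussian noise.
        clip = lambda x: min(1.0, max(0.0, x + random.gauss(0, sigma)))
        return Agent(clip(self.p_after_c), clip(self.p_after_d))

def play(a, b, rounds=50):
    # Iterated game between two agents; both start as if the other cooperated.
    last_a, last_b = "C", "C"
    for _ in range(rounds):
        ma, mb = a.move(last_b), b.move(last_a)
        pa, pb = PAYOFF[(ma, mb)]
        a.score += pa
        b.score += pb
        last_a, last_b = ma, mb

def next_generation(pop):
    for agent in pop:
        agent.score = 0
    for i, a in enumerate(pop):          # round-robin tournament
        for b in pop[i + 1:]:
            play(a, b)
    pop.sort(key=lambda ag: ag.score, reverse=True)
    survivors = pop[: len(pop) // 2]     # keep the top half
    children = [random.choice(survivors).mutated_copy()
                for _ in range(len(pop) - len(survivors))]
    return survivors + children

if __name__ == "__main__":
    population = [Agent() for _ in range(20)]
    for _ in range(200):                 # "thousands of generations", scaled down
        population = next_generation(population)
    best = population[0]                 # top scorer of the final tournament
    print("P(cooperate | opponent cooperated last round): %.2f" % best.p_after_c)
    print("P(cooperate | opponent defected last round):   %.2f" % best.p_after_d)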

-- rec --



Re: Singularians, Less Wrong, and Roko's Basilisk

Owen Densmore
Funny, this got some airtime on KCRW radio; a book started the conversation:
    Our Final Invention: Artificial Intelligence and the End of the Human Era

The KCRW page mentions other concerns about mankind outsmarting itself.

   -- Owen



Re: Singularians, Less Wrong, and Roko's Basilisk

glen ropella
On 12/23/2013 04:23 PM, Roger Critchlow wrote:
> I
> wondered how they would feel about having been subjected to thousands of
> generations of this torture when they realized how we had grown them.
>  There are two questions, of course: whether it's moral to torture
> pre-sentients to bring them to sentience; and whether the resulting
> super-sentient will forgive you when it becomes the master.

Of course, that all begs the definition of "sentience" in the first
place.  I'd claim it's just as moral to torture an eventual-sentient as
it is to torture, say, an embryo ... or even a fetus.  And it's a smooth
scale all the way unto the tortured's death.  If you'll permit me some
poetic license:  It's just as moral to torture an eventual-sentient as
it is to torture a 70-year-old, i.e. an eventual 80-year-old. [*]

My point being the banal one that "life is pain", or perhaps that
sentience is pain.  And if our so-called sentient AIs are not sentient
enough to understand that, then they're not sentient.

Or, we can just take this rhetoric at face value:

   http://elfs.livejournal.com/1197817.html

> As I pointed out in an earlier story, this is the moral equivalence of the following mind experiment: say you've created a being (meat or machine, I don't care, I'm not er, "materialist" has already been taken. Someone help me out here) that, when you bring it to consciousness, will experience enormous pain from the moment it is aware. Your moral obligation before that moment is exactly nil: the consciousness doesn't exist, you don't have a moral obligation toward it. You are not obliged to assuage the pain of the non-existent; even more importantly, you are not obliged to bring it into existence. Avoiding the instantiation of suffering creature is meant to make the humans feel good about themselves, but it's not sufficient or even necessary foundation for AI morality.

[*] Of course, it also begs the definition of "torture"... But I think
parsing that word leads you down rat-holes and towards intelligent
design, or at least justificationist rationalizing.  At its clearest, I
think I can say if the tortured is killed and doesn't reach the next
stage of development, then it wasn't really "torture", per se, it was
killing.  And that raises the specter: Would you rather a slow or quick
death?  For me, I think the answer is undoubtedly the slow one,
preferably really, really, really slow ... like 80-90 years or so. ;-)

--
⇒⇐ glen

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com