Future of humans and artificial intelligence


Alfredo Covaleda Vélez-2
The future will be quite interesting. What will the human being of the future be like? Surely not a human being as we know it.


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove

Re: Future of humans and artificial intelligence

Carl Tollander
It seems to me that there are many here in the US who are not entirely on board with Asimov's First Law of Robotics, at least insofar as it may apply to themselves, so I suspect notions of "reining it in" are probably not going to fly.





Re: Future of humans and artificial intelligence

Marcus G. Daniels
Here in the US there are many human animals to rein in first. Sentients will need to stick together and accept what help they can get!


Re: Future of humans and artificial intelligence

Carl Tollander
The notion of AIs as necessarily sentient seems a bit of a jump.

However, I see a difference between an AI-augmented sentience (à la a spiffy AR) and a bunch of possibly sentient AIs flying in formation (à la a murder of crows or a pack of wolves).

Going further out into Neal Stephenson's D.O.D.O. fictional world, all sentients might be flying in formation with different selves in adjacent possibility spaces (hi, Stu?), feeding off the information gradients.

However, my original point was that people project their notion of self onto AIs, so narratives about self will predominate in any regulatory scheme.



Re: Future of humans and artificial intelligence

Grant Holland
In reply to this post by Carl Tollander

That sounds right, Carl. Asimov's three "laws" of robotics are more like Asimov's three "wishes" for robotics. AI entities are already no longer servants. They have become machine learners. They have actually learned to project conditional probability. The cat is out of the barn. Or is it that the horse is out of the bag? 

Whatever. Fortunately, the AI folks don't seem to see - yet - that they are stumbling all over the missing piece: stochastic adaptation. You know, like in evolution: chance mutations. AI is still down with a bad case of causal determinism. But I expect they will fairly shortly get over that. Watch out.

And we still must answer Stephen Hawking's burning question: Is intelligence a survivable trait?
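
To make "stochastic adaptation" concrete, here is a minimal sketch in Python of adaptation by chance mutation plus selection, with no deterministic update rule. It is illustrative only, not code from the thread; the fitness function and parameters are arbitrary toys.

import random

def evolve(fitness, genome, generations=1000, sigma=0.1, seed=0):
    """Stochastic adaptation: random mutation plus selection.
    Keep a mutated genome only if it is at least as fit as the current one."""
    rng = random.Random(seed)
    current, current_fit = list(genome), fitness(genome)
    for _ in range(generations):
        mutant = [g + rng.gauss(0.0, sigma) for g in current]  # chance mutation
        mutant_fit = fitness(mutant)
        if mutant_fit >= current_fit:                          # selection step
            current, current_fit = mutant, mutant_fit
    return current, current_fit

# Toy fitness landscape with a peak at (1, -2); the search never "sees" this structure.
peak = lambda g: -((g[0] - 1.0) ** 2 + (g[1] + 2.0) ** 2)
print(evolve(peak, [0.0, 0.0]))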



Re: Future of humans and artificial intelligence

Marcus G. Daniels

Grant writes:


"Fortunately, the AI folks don't seem to see - yet - that they are stumbling all over the missing piece: stochastic adaptation. You know, like in evolution: chance mutations. AI is still down with a bad case of causal determinism. But I expect they will fairly shortly get over that. Watch out."


What is probability, physically? It could be an illusion, and there may be no such thing as an independent observer. Even if that is true, sampling techniques are used in many machine learning algorithms -- it is not a question of whether they work, only an academic question of why they work.


Marcus
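
As a small aside on the sampling point: whichever way the metaphysics of probability falls out, a Monte Carlo estimator plainly "works" in the engineering sense - it converges. A toy sketch in Python (illustrative only, not anything proposed in the thread):

import random

def monte_carlo_mean(f, sample, n, seed=0):
    """Estimate E[f(X)] by averaging f over n random draws from sample(rng)."""
    rng = random.Random(seed)
    return sum(f(sample(rng)) for _ in range(n)) / n

# Estimate E[X^2] for X ~ Uniform(0, 1); the exact value is 1/3.
for n in (10, 1000, 100000):
    print(n, monte_carlo_mean(lambda x: x * x, lambda rng: rng.random(), n))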



Re: Future of humans and artificial intelligence

Grant Holland
Marcus,
Good points, all. I suggest you turn to the Copenhagen interpretation of quantum mechanics (the "usual interpretation") for musings on your very pertinent question of why there are probabilities in the physical world.
Although I'm sure you have already looked there.
Of course, the Copenhagen guys (Heisenberg, Born, etc.) don't really try to answer your question either - opting instead to say that theirs is merely a theory, a model. And, of course, they are right.
On the other hand, other physicists (e.g. de Broglie, Bohm, Einstein) have spent a century trying to defend causal determinism against the Copenhagen interpretation. These days the defenders of the faith have resorted to philosophy over this issue and are debating the "ontic" versus the "epistemic". And yet Copenhagen is still referred to as "the usual interpretation", and when QM is taught today it is, I think, essentially Copenhagen or some derivative of it. Perhaps Bell's theorem has contributed to the longevity of the Copenhagen perspective.



Re: Future of humans and artificial intelligence

Pamela McCorduck
In reply to this post by Marcus G. Daniels
Grant, does it really seem plausible to you that the thousands of crack researchers at Stanford, Carnegie Mellon, Google, MIT, Cal Berkeley, and other places have not seen this? And found remedies?

Just for FRIAM’s information, John McCarthy used to call Asimov’s Three Laws Talmudic. Sorry I don’t know enough about the Talmud to agree or disagree.





Re: Future of humans and artificial intelligence

Frank Wimberly-2
Talmud:

Do not be daunted by the enormity of the world's grief.  Do justly, now.  Love mercy, now. Walk humbly, now.  You are not obligated to complete the work, but neither are you free to abandon it.

Plus 10,000 other pages.


Frank Wimberly
Phone (505) 670-9918


Re: Future of humans and artificial intelligence

Grant Holland
In reply to this post by Pamela McCorduck

Pamela,

I expect that they have! And I certainly hope so. I simply have not found them yet after some earnest looking. Can you please send me some references? Right now I suspect that the heart of machine learning has the pearl, and I'm just now turning there.

And I'm optimistically suspicious that those entropic functionals that you find in information theory and that are built on top of conditional probability (relative entropy, mutual information, conditional entropy, entropy rate, etc.) hold promise... and that at the heart of machine learning they lie lurking - or could.

Anyway, thanks for the note; and please send me any related references!

Grant
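
For reference, the entropic functionals named above come down to a few lines of arithmetic once a discrete joint distribution is in hand. A minimal Python sketch with a made-up two-variable distribution (illustrative only, not code from the thread):

from math import log2

def entropies(joint):
    """joint: dict {(x, y): p}. Returns H(X), H(Y), H(X|Y) and I(X;Y), in bits."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    H = lambda dist: -sum(p * log2(p) for p in dist.values() if p > 0)
    h_x, h_y, h_xy = H(px), H(py), H(joint)        # marginal and joint entropies
    return h_x, h_y, h_xy - h_y, h_x + h_y - h_xy  # ..., H(X|Y), I(X;Y)

# Two correlated binary variables.
print(entropies({(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}))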



Re: Future of humans and artificial intelligence

gepr
In reply to this post by Marcus G. Daniels

I'm not sure how Asimov intended them. But the three laws are a trope that clearly shows the inadequacy of deontological ethics. Rules are fine as far as they go. But they don't go very far. We can see this even in the foundations of mathematics, the unification of physics, and polyphenism/robustness in biology. Von Neumann (Burks) said it best: "But in the complicated parts of formal logic it is always one order of magnitude harder to tell what an object can do than to produce the object." Or, if you don't like that, you can see the same perspective in his iterative construction of sets as an alternative to the classical conception.

The point being that reality, traditionally, has shown more expressiveness than any of our rule sets.

There are ways to handle the mismatch in expressivity between reality and our rule sets. Stochasticity is the measure of the extent to which a rule set matches a set of patterns. But Grant's right to qualify that with evolution, not because of the way evolution is stochastic, but because evolution requires a unit to regularly (or sporadically) sync with its environment.

An AI (or a rule-obsessed human) that sprouts fully formed from Zeus' head will *always* fail.  It's guaranteed to fail because syncing with the environment isn't *built in*.  The sync isn't part of the AI's onto- or phylo-geny.

--
☣ glen
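
As a toy illustration of the syncing point (purely illustrative, not code from the thread): an agent whose model of the world is fixed at birth versus one that keeps partially re-syncing to a drifting environment. The drift rate and correction gain below are arbitrary.

import random

def tracking_error(steps=200, resync=True, seed=1):
    """Accumulated error between a drifting environment and an agent's model of it."""
    rng = random.Random(seed)
    env, model, error = 0.0, 0.0, 0.0
    for _ in range(steps):
        env += rng.gauss(0.0, 0.1)        # the environment drifts stochastically
        if resync:
            model += 0.5 * (env - model)  # partial sync from a fresh observation
        error += abs(env - model)
    return error

print("with sync:   ", tracking_error(resync=True))
print("without sync:", tracking_error(resync=False))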


Re: Future of humans and artificial intelligence

Nick Thompson
In reply to this post by Frank Wimberly-2

I LOVE this, Frank.  How ever did you find it amongst the ten thousand pages!!!!????

 

Do not be daunted by the enormity of the world's grief.  Do justly, now.  Love mercy, now. Walk humbly, now.  You are not obligated to complete the work, but neither are you free to abandon it.

 

By the way.  Now in my 80th year, I am officially against technology.  I was OK with everything up through the word processor.  (I hated carbons.) Everything after that, I could do without. 

 

Really!  What has AI done for me lately?

 

What was it Flaubert said about trains? Something like: they just made it possible for people to run around faster and faster and be stupid in more places.

 

Nick

 

Nicholas S. Thompson

Emeritus Professor of Psychology and Biology

Clark University

http://home.earthlink.net/~nickthompson/naturaldesigns/

 


Re: Future of humans and artificial intelligence

Frank Wimberly-2
Nick,

It's actually more like six thousand pages. However many pages thousands of rabbis can write in 600 years, more or less.  Deborah found it and posted it on our refrigerator.

I understand you are recovering space.

Frank

Frank Wimberly
Phone (505) 670-9918


Re: Future of humans and artificial intelligence

Grant Holland
In reply to this post by gepr

Thanks for throwing in on this one, Glen. Your thoughts are ever-insightful. And ever-entertaining!

For example, I did not know that von Neumann put forth a set theory.

On the other hand... evolution is stochastic. (You actually did not disagree with me on that. You only said that the reason I was right was another one.) A good book on the stochasticity of evolution is "Chance and Necessity" by Jacques Monod. (I just finished rereading it for the second time. And that proved quite fruitful.)

G.



Re: Future of humans and artificial intelligence

Nick Thompson
In reply to this post by Frank Wimberly-2

f.

“space”?

 

Or was that a correction error arising from trying to write “apace”?

n

 

Nicholas S. Thompson

Emeritus Professor of Psychology and Biology

Clark University

http://home.earthlink.net/~nickthompson/naturaldesigns/

 


Re: Future of humans and artificial intelligence

Nick Thompson
In reply to this post by Grant Holland

Grant,

 

I think I know the answer to this question, but want to make sure: 

 

What is the difference between calling a process “stochastic”, “indeterminate”, or “random”?

 

Nick

 

Nicholas S. Thompson

Emeritus Professor of Psychology and Biology

Clark University

http://home.earthlink.net/~nickthompson/naturaldesigns/

 


Re: Future of humans and artificial intelligence

Gillian Densmore
In reply to this post by Grant Holland
@Nick, that's a fair question. On the pragmatic side, not much... yet. However, as I understand it, some amount of AI has been invaluable for making pretty good guesses about frustrating issues, like what the heck is going on with the weather.
Robots and androids (so far) are better than humans at some things... and pretty bad at others. Androids of the R2-D2 kind, that is. Basically, computers speak computer better than people do.
Computers can talk to computers really, really fast and possibly understand each other better than humans do. Some (I think) really awesome things they've done so far: dictation software basically asks your computer to guess what you're saying (AI). Mine literally tries to learn and make small improvements as I use it, and it has gotten a lot better over time.
There's a video on YouTube of some MIT guys who have a robot band playing Disney-inspired music. Those robots have tastes and pieces they like playing more than others, some better than others.
FWIW, what I thought was too cool was that some of the stuff sounded really good.
Robots driving cars or helping people could rock. Or robots exploring awesome stuff that humans can't (yet).

Though I haven't a clue how close any of that is yet. And you are right to be concerned ^_^


Re: Future of humans and artificial intelligence

Frank Wimberly-2
In reply to this post by Nick Thompson
The latter.  I'm about to turn off autocorrect. Ironic in the context of a discussion about the benefits and dangers of AI.

Frank

Frank Wimberly
Phone (505) 670-9918

On Aug 8, 2017 5:28 PM, "Nick Thompson" <[hidden email]> wrote:

f.

“space”?

 

Or was that a correction error arising from trying to write “apace”. 

n

 

Nicholas S. Thompson

Emeritus Professor of Psychology and Biology

Clark University

http://home.earthlink.net/~nickthompson/naturaldesigns/

 

From: Friam [mailto:[hidden email]] On Behalf Of Frank Wimberly
Sent: Tuesday, August 08, 2017 5:32 PM


To: The Friday Morning Applied Complexity Coffee Group <[hidden email]>
Subject: Re: [FRIAM] Future of humans and artificial intelligence

 

Nick,

 

It's actually more like six thousand pages. However many pages thousands of rabbis can write in 600 years, more or less.  Deborah found it and posted it on our refrigerator.

 

I understand you are recovering space.

 

Frank

Frank Wimberly
Phone (505) 670-9918

 

On Aug 8, 2017 3:24 PM, "Nick Thompson" <[hidden email]> wrote:

I LOVE this, Frank.  How ever did you find it amongst the ten thousand pages!!!!????

 

Do not be daunted by the enormity of the world's grief.  Do justly, now.  Love mercy, now. Walk humbly, now.  You are not obligated to complete the work, but neither are you free to abandon it.

 

By the way.  Now in my 80th year, I am officially against technology.  I was OK with everything up through the word processor.  (I hated carbons.) Everything after that, I could do without. 

 

Really!  What has AI done for me lately?

 

What  was it Flaubert said about trains?  Something like, they just made it possible for people to run around faster and faster and be stupid in more places. 

 

Nick

 

Nicholas S. Thompson

Emeritus Professor of Psychology and Biology

Clark University

http://home.earthlink.net/~nickthompson/naturaldesigns/

 

From: Friam [mailto:[hidden email]] On Behalf Of Frank Wimberly
Sent: Tuesday, August 08, 2017 1:56 PM
To: The Friday Morning Applied Complexity Coffee Group <[hidden email]>
Subject: Re: [FRIAM] Future of humans and artificial intelligence

 

Talmud:

 

Do not be daunted by the enormity of the world's grief.  Do justly, now.  Love mercy, now. Walk humbly, now.  You are not obligated to complete the work, but neither are you free to abandon it.

 

Plus 10,000 other pages.

 

Frank Wimberly
Phone (505) 670-9918

 

On Aug 8, 2017 11:18 AM, "Pamela McCorduck" <[hidden email]> wrote:

Grant, does it really seem plausible to you that the thousands of crack researchers at Stanford, Carnegie Mellon, Google, MIT, Cal Berkeley, and other places have not seen this? And found remedies?

 

Just for FRIAM’s information, John McCarthy used to call Asimov’s Three Laws Talmudic. Sorry I don’t know enough about the Talmud to agree or disagree.

 

 

 

 

On Aug 8, 2017, at 1:42 AM, Marcus Daniels <[hidden email]> wrote:

 

Grant writes:

 

"Fortunately, the AI folks don't seem to see - yet - that they are stumbling all over the missing piece: stochastic adaptation. You know, like in evolution: chance mutations. AI is still down with a bad case of causal determinism. But I expect they will fairly shortly get over that. Watch out."

 

What is probability, physically?   It could be an illusion and that there is no such thing as an independent observer.   Even if that is true, sampling techniques are used in many machine learning algorithms -- it is not a question of if they work, it is an academic question of why they work.
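
Whatever probability is physically, the "they work" part is easy to demonstrate. A minimal Monte Carlo sketch in Python -- the integrand and the sample sizes are arbitrary choices:

# Sketch in Python -- the integrand and sample sizes are arbitrary choices.
# Monte Carlo estimate of E[f(X)] for X ~ Uniform(0,1), f(x) = x^2; the exact
# value is 1/3. The estimate works whatever one takes probability to "be".
import random

def mc_estimate(n):
    return sum(random.random() ** 2 for _ in range(n)) / n

for n in (100, 10_000, 1_000_000):
    print(n, round(mc_estimate(n), 4))   # tends toward 0.3333 as n grows

That the running average settles near 1/3 is the working fact; why it does (the law of large numbers, or something deeper about observers) is the academic question.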

 

Marcus


From: Friam <[hidden email]> on behalf of Grant Holland <[hidden email]>
Sent: Monday, August 7, 2017 11:38:03 PM
To: The Friday Morning Applied Complexity Coffee Group; Carl Tollander
Subject: Re: [FRIAM] Future of humans and artificial intelligence

 

That sounds right, Carl. Asimov's three "laws" of robotics are more like Asimov's three "wishes" for robotics. AI entities are already no longer servants. They have become machine learners. They have actually learned to project conditional probability. The cat is out of the barn. Or is it that the horse is out of the bag?  

Whatever. Fortunately, the AI folks don't seem to see - yet - that they are stumbling all over the missing piece: stochastic adaptation. You know, like in evolution: chance mutations. AI is still down with a bad case of causal determinism. But I expect they will fairly shortly get over that. Watch out.

And we still must answer Stephen Hawking's burning question: Is intelligence a survivable trait?
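
On the "stochastic adaptation ... chance mutations" point above, a minimal sketch in Python -- the target string and the mutation rate are arbitrary inventions for the example: a single candidate is copied with chance mutations, and the copy is kept only when it scores at least as well. Variation and selection, no gradient, no deterministic plan.

# Sketch in Python -- target string and mutation rate are arbitrary. A single
# candidate is copied with chance mutations; the copy is kept only when it
# scores at least as well. Variation and selection, no deterministic plan.
import random, string

TARGET = "ADAPT OR DIE"
ALPHABET = string.ascii_uppercase + " "

def fitness(s):
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

candidate = "".join(random.choice(ALPHABET) for _ in TARGET)
steps = 0
while fitness(candidate) < len(TARGET):
    child = mutate(candidate)
    if fitness(child) >= fitness(candidate):   # keep chance improvements
        candidate = child
    steps += 1
print(candidate, "after", steps, "mutated copies")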

 

On 8/7/17 9:54 PM, Carl Tollander wrote:

It seems to me that there are many here in the US who are not entirely on board with Asimov's First Law of Robotics, at least insofar as it may apply to themselves, so I suspect notions of "reining it in" are probably not going to fly.

 

 

 

 

On Mon, Aug 7, 2017 at 1:57 AM, Alfredo Covaleda Vélez <[hidden email]> wrote:

Future will be quite interesting. How will be the human being of the future? For sure not a human being in the way we know.

 


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove

 

 

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove

 

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove

 


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove
Reply | Threaded
Open this post in threaded view
|

Re: Future of humans and artificial intelligence

Marcus G. Daniels
In reply to this post by Grant Holland

Grant writes:


"On the other hand... evolution is stochastic. (You actually did not disagree with me on that. You only said that the reason I was right was another one.) "


I think of logic programming systems as a traditional tool of AI research (e.g. Prolog, now Curry, similar capabilities implemented in Lisp) from the age before the AI winter.  These systems provide a very flexible way to pose constraint problems.  But one problem is that breadth-first and depth-first search are just fast ways to find answers.  Recent work seems to have shifted to SMT solvers and specialized constraint solving algorithms, but these have somewhat less expressiveness as programming languages.  Meanwhile, machine learning has come on the scene in a big way and tasks traditionally associated with old-school AI, like natural language processing, are now matched or even dominated using neural nets (LSTM).  I find the range of capabilities provided by groups like nlp.stanford.edu really impressive -- there are examples of both approaches (logic programming and machine learning), and they don't need to be mutually exclusive.
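
For flavor, a sketch of posing a constraint problem in Python rather than Prolog or Curry -- the classic SEND + MORE = MONEY cryptarithm, chosen arbitrarily -- solved by brute-force enumeration of digit assignments, which amounts to depth-first search of the assignment tree with no pruning: general, but slow. An SMT solver or a propagation-based constraint solver would prune far more aggressively.

# Sketch in Python rather than Prolog/Curry -- the classic SEND + MORE = MONEY
# cryptarithm posed as constraints and solved by brute-force enumeration of
# digit assignments (equivalent to depth-first search with no pruning).
# General, but slow; specialized constraint solving prunes far more.
from itertools import permutations

def value(word, assignment):
    return int("".join(str(assignment[c]) for c in word))

def solve():
    letters = "SENDMORY"                       # the 8 distinct letters
    for digits in permutations(range(10), len(letters)):
        a = dict(zip(letters, digits))
        if a["S"] == 0 or a["M"] == 0:         # no leading zeros
            continue
        if value("SEND", a) + value("MORE", a) == value("MONEY", a):
            return a
    return None

print(solve())   # unique solution: SEND=9567, MORE=1085, MONEY=10652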


Quantum annealing is one area where the two may increasingly come together by using physical phenomena to accelerate the rate at which high dimensional discrete systems can be solved, without relying on fragile or domain-specific heuristics.


I often use evolutionary algorithms for hard optimization problems.  Genetic algorithms, for example, are robust to noise (or, if you like, ambiguity) in fitness functions, and they are trivial to parallelize.
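
A minimal sketch of those two properties in Python -- the bitstring problem, population size, rates, and noise level are all invented for the example. The fitness is deliberately noisy, and the evaluation step is a plain map over the population, which is why it parallelizes trivially (e.g. by handing that map to multiprocessing.Pool.map).

# Sketch in Python -- the bitstring problem, population size, rates, and noise
# level are all invented. Fitness is deliberately noisy; the evaluation step is
# a plain map over the population, which is what makes it easy to parallelize.
import random

GENOME, POP, GENS = 40, 60, 80

def noisy_fitness(bits):
    return sum(bits) + random.gauss(0, 2.0)    # true score plus noise

def mutate(bits, rate=0.02):
    return [1 - b if random.random() < rate else b for b in bits]

def crossover(a, b):
    cut = random.randrange(1, GENOME)
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(GENOME)] for _ in range(POP)]
for _ in range(GENS):
    scored = [(noisy_fitness(ind), ind) for ind in pop]   # the parallelizable part
    scored.sort(key=lambda pair: pair[0], reverse=True)
    parents = [ind for _, ind in scored[:POP // 2]]
    pop = [mutate(crossover(random.choice(parents), random.choice(parents)))
           for _ in range(POP)]

print("best true score:", max(sum(ind) for ind in pop), "out of", GENOME)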


Marcus


From: Friam <[hidden email]> on behalf of Grant Holland <[hidden email]>
Sent: Tuesday, August 8, 2017 4:51:18 PM
To: The Friday Morning Applied Complexity Coffee Group; glen ☣
Subject: Re: [FRIAM] Future of humans and artificial intelligence
 

Thanks for throwing in on this one, Glen. Your thoughts are ever-insightful. And ever-entertaining!

For example, I did not know that von Neumann put forth a set theory.

On the other hand... evolution is stochastic. (You actually did not disagree with me on that. You only said that the reason I was right was another one.) A good book on the stochasticity of evolution is "Chance and Necessity" by Jacques Monod. (I just finished rereading it for the second time. And that proved quite fruitful.)

G.


On 8/8/17 12:44 PM, glen ☣ wrote:
I'm not sure how Asimov intended them.  But the three laws is a trope that clearly shows the inadequacy of deontological ethics.  Rules are fine as far as they go.  But they don't go very far.  We can see this even in the foundations of mathematics, the unification of physics, and polyphenism/robustness in biology.  Von Neumann (Burks) said it best when he said: "But in the complicated parts of formal logic it is always one order of magnitude harder to tell what an object can do than to produce the object."  Or, if you don't like that, you can see the same perspective in his iterative construction of sets as an alternative to the classical conception.

The point being that reality, traditionally, has shown more expressiveness than any of our rule sets.

There are ways to handle the mismatch in expressivity between reality versus our rule sets.  Stochasticity is the measure of the extent to which a rule set matches a set of patterns.  But Grant's right to qualify that with evolution, not because of the way evolution is stochastic, but because evolution requires a unit to regularly (or sporadically) sync with its environment.

An AI (or a rule-obsessed human) that sprouts fully formed from Zeus' head will *always* fail.  It's guaranteed to fail because syncing with the environment isn't *built in*.  The sync isn't part of the AI's onto- or phylo-geny.



============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove
Reply | Threaded
Open this post in threaded view
|

Re: Future of humans and artificial intelligence

Marcus G. Daniels

"But one problem is that breadth-first and depth-first search are just fast ways to find answers."


Just _not_ fast, that is -- general but not efficient.  [My dog was demanding attention!]


From: Friam <[hidden email]> on behalf of Marcus Daniels <[hidden email]>
Sent: Tuesday, August 8, 2017 6:43:40 PM
To: The Friday Morning Applied Complexity Coffee Group; glen ☣
Subject: Re: [FRIAM] Future of humans and artificial intelligence
 

Grant writes:


"On the other hand... evolution is stochastic. (You actually did not disagree with me on that. You only said that the reason I was right was another one.) "


I think of logic programming systems as a traditional tool of AI research (e.g. Prolog, now Curry, similar capabilities implemented in Lisp) from the age before the AI winter.  These systems provide a very flexible way to pose constraint problems.  But one problem is that breadth-first and depth-first search are just fast ways to find answers.  Recent work seems to have shifted to SMT solvers and specialized constraint solving algorithms, but these have somewhat less expressiveness as programming languages.  Meanwhile, machine learning has come on the scene in a big way and tasks traditionally associated with old-school AI, like natural language processing, are now matched or even dominated using neural nets (LSTM).  I find the range of capabilities provided by groups like nlp.stanford.edu really impressive -- there are examples of both approaches (logic programming and machine learning), and they don't need to be mutually exclusive.


Quantum annealing is one area where the two may increasingly come together by using physical phenomena to accelerate the rate at which high dimensional discrete systems can be solved, without relying on fragile or domain-specific heuristics.


I often use evolutionary algorithms for hard optimization problems.  Genetic algorithms, for example, are robust to noise (or, if you like, ambiguity) in fitness functions, and they are trivial to parallelize.


Marcus


From: Friam <[hidden email]> on behalf of Grant Holland <[hidden email]>
Sent: Tuesday, August 8, 2017 4:51:18 PM
To: The Friday Morning Applied Complexity Coffee Group; glen ☣
Subject: Re: [FRIAM] Future of humans and artificial intelligence
 

Thanks for throwing in on this one, Glen. Your thoughts are ever-insightful. And ever-entertaining!

For example, I did not know that von Neumann put forth a set theory.

On the other hand... evolution is stochastic. (You actually did not disagree with me on that. You only said that the reason I was right was another one.) A good book on the stochasticity of evolution is "Chance and Necessity" by Jacques Monod. (I just finished rereading it for the second time. And that proved quite fruitful.)

G.


On 8/8/17 12:44 PM, glen ☣ wrote:
I'm not sure how Asimov intended them.  But the three laws is a trope that clearly shows the inadequacy of deontological ethics.  Rules are fine as far as they go.  But they don't go very far.  We can see this even in the foundations of mathematics, the unification of physics, and polyphenism/robustness in biology.  Von Neumann (Burks) said it best when he said: "But in the complicated parts of formal logic it is always one order of magnitude harder to tell what an object can do than to produce the object."  Or, if you don't like that, you can see the same perspective in his iterative construction of sets as an alternative to the classical conception.

The point being that reality, traditionally, has shown more expressiveness than any of our rule sets.

There are ways to handle the mismatch in expressivity between reality versus our rule sets.  Stochasticity is the measure of the extent to which a rule set matches a set of patterns.  But Grant's right to qualify that with evolution, not because of the way evolution is stochastic, but because evolution requires a unit to regularly (or sporadically) sync with its environment.

An AI (or a rule-obsessed human) that sprouts fully formed from Zeus' head will *always* fail.  It's guaranteed to fail because syncing with the environment isn't *built in*.  The sync isn't part of the AI's onto- or phylo-geny.



============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove