Future of humans and artificial intelligence


Re: Future of humans and artificial intelligence

Frank Wimberly-2
Then there's best-first search, B*, C*, constraint-directed search, etc.  And these are just classical search methods.

Frank

Frank Wimberly
Phone (505) 670-9918

On Aug 8, 2017 7:20 PM, "Marcus Daniels" <[hidden email]> wrote:

"But one problem is that breadth-first and depth-first search are just fast ways to find answers."


Just _not_ fast, that is -- general but not efficient.   [My dog was demanding attention!]


From: Friam <[hidden email]> on behalf of Marcus Daniels <[hidden email]>
Sent: Tuesday, August 8, 2017 6:43:40 PM
To: The Friday Morning Applied Complexity Coffee Group; glen ☣
Subject: Re: [FRIAM] Future of humans and artificial intelligence
 

Grant writes:


"On the other hand... evolution is stochastic. (You actually did not disagree with me on that. You only said that the reason I was right was another one.) "


I think of logic programming systems as a traditional tool of AI research (e.g. Prolog, now Curry, similar capabilities implemented in Lisp) from the age before the AI winter.  These systems provide a very flexible way to pose constraint problems.  But one problem is that breadth-first and depth-first search are just fast ways to find answers.  Recent work seems to have shifted to SMT solvers and specialized constraint-solving algorithms, but these have somewhat less expressiveness as programming languages.  Meanwhile, machine learning has come on the scene in a big way, and tasks traditionally associated with old-school AI, like natural language processing, are now matched or even dominated by neural nets (LSTMs).  I find the range of capabilities provided by groups like nlp.stanford.edu really impressive -- there are examples of both approaches (logic programming and machine learning), and they don't need to be mutually exclusive.
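
To make the "general but not efficient" point concrete, here is a minimal illustrative sketch in Python of exhaustive depth-first search over a toy constraint problem; the variables and constraints are invented for illustration.

    def dfs_solve(variables, domains, ok, assignment=None):
        # Exhaustive depth-first search over assignments.  `ok` takes a
        # (possibly partial) assignment and returns False only when it
        # already violates a constraint.  General -- it finds a solution
        # if one exists -- but it may enumerate an exponential number of
        # partial assignments.
        if assignment is None:
            assignment = {}
        if len(assignment) == len(variables):
            return dict(assignment)
        var = variables[len(assignment)]
        for value in domains[var]:
            assignment[var] = value
            if ok(assignment):
                found = dfs_solve(variables, domains, ok, assignment)
                if found is not None:
                    return found
            del assignment[var]
        return None

    # Toy problem: three tasks in distinct time slots, with "a" before "c".
    variables = ["a", "b", "c"]
    domains = {v: [1, 2, 3] for v in variables}

    def ok(asg):
        vals = list(asg.values())
        if len(set(vals)) != len(vals):                          # all-different
            return False
        if "a" in asg and "c" in asg and asg["a"] >= asg["c"]:   # ordering
            return False
        return True

    print(dfs_solve(variables, domains, ok))   # {'a': 1, 'b': 2, 'c': 3}

An SMT or constraint solver attacks this same kind of problem with specialized decision procedures and propagation rather than blind enumeration, which is the shift described above.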


Quantum annealing is one area where the two may increasingly come together by using physical phenomena to accelerate the rate at which high dimensional discrete systems can be solved, without relying on fragile or domain-specific heuristics.


I often use evolutionary algorithms for hard optimization problems.  Genetic algorithms, for example, are robust to  noise (or if you like ambiguity) in fitness functions, and they are trivial to parallelize.
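
A minimal illustrative sketch of such a genetic algorithm in Python, with a deliberately noisy fitness function; the problem and all parameters are invented:

    import random

    # Toy problem: maximize the number of 1s in a bit string when the
    # fitness signal itself is noisy.
    GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 32, 40, 60, 0.02

    def noisy_fitness(genome):
        # True objective plus noise; GAs tolerate this reasonably well.
        return sum(genome) + random.gauss(0, 1.0)

    def mutate(genome):
        return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

    def crossover(a, b):
        cut = random.randrange(1, GENOME_LEN)
        return a[:cut] + b[cut:]

    def tournament(population, k=3):
        return max(random.sample(population, k), key=noisy_fitness)

    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Fitness evaluations are independent of one another, so this
        # step is the part that parallelizes trivially.
        population = [mutate(crossover(tournament(population),
                                       tournament(population)))
                      for _ in range(POP_SIZE)]

    best = max(population, key=sum)
    print(sum(best), "ones out of", GENOME_LEN)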


Marcus


From: Friam <[hidden email]> on behalf of Grant Holland <[hidden email]>
Sent: Tuesday, August 8, 2017 4:51:18 PM
To: The Friday Morning Applied Complexity Coffee Group; glen ☣
Subject: Re: [FRIAM] Future of humans and artificial intelligence
 

Thanks for throwing in on this one, Glen. Your thoughts are ever-insightful. And ever-entertaining!

For example, I did not know that von Neumann put forth a set theory.

On the other hand... evolution is stochastic. (You actually did not disagree with me on that. You only said that the reason I was right was another one.) A good book on the stochasticity of evolution is "Chance and Necessity" by Jacques Monod. (I just finished rereading it for the second time. And that proved quite fruitful.)

G.


On 8/8/17 12:44 PM, glen ☣ wrote:
I'm not sure how Asimov intended them.  But the three laws is a trope that clearly shows the inadequacy of deontological ethics.  Rules are fine as far as they go.  But they don't go very far.  We can see this even in the foundations of mathematics, the unification of physics, and polyphenism/robustness in biology.  Von Neumann (Burks) said it best when he said: "But in the complicated parts of formal logic it is always one order of magnitude harder to tell what an object can do than to produce the object."  Or, if you don't like that, you can see the same perspective in his iterative construction of sets as an alternative to the classical conception.

The point being that reality, traditionally, has shown more expressiveness than any of our rule sets.

There are ways to handle the mismatch in expressivity between reality versus our rule sets.  Stochasticity is the measure of the extent to which a rule set matches a set of patterns.  But Grant's right to qualify that with evolution, not because of the way evolution is stochastic, but because evolution requires a unit to regularly (or sporadically) sync with its environment.

An AI (or a rule-obsessed human) that sprouts fully formed from Zeus' head will *always* fail.  It's guaranteed to fail because syncing with the environment isn't *built in*.  The sync isn't part of the AI's onto- or phylo-geny.




Re: Future of humans and artificial intelligence

Marcus G. Daniels

Frank writes:


"Then there's best-first search, B*, C*, constraint-directed search, etc.  And these are just classical search methods."


Connecting this back to evolutionary / stochastic techniques, genetic programming is one way to get the best of both approaches, at least in principle.   One can expose these human-designed algorithms as predefined library functions.  Typically in genetic programming the vocabulary consists of simple routines (e.g. arithmetic), conditionals, and recursion.
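
A minimal illustrative sketch of such a genetic-programming vocabulary in Python; the primitives, terminals, and the commented-out library hook are all invented for illustration:

    import operator, random

    # Vocabulary: arithmetic, a conditional, and a slot where a
    # human-designed routine (e.g. a classical search) could be exposed
    # as just another primitive.
    def if_positive(a, b, c):
        return b if a > 0 else c

    PRIMITIVES = {
        "add": (operator.add, 2),
        "sub": (operator.sub, 2),
        "mul": (operator.mul, 2),
        "if+": (if_positive, 3),
        # "best_first": (best_first_search, 1),   # hypothetical library hook
    }
    TERMINALS = ["x", 1.0, 2.0]

    def random_tree(depth=3):
        # Grow a random program tree from the vocabulary above.
        if depth == 0 or random.random() < 0.3:
            return random.choice(TERMINALS)
        name = random.choice(list(PRIMITIVES))
        _, arity = PRIMITIVES[name]
        return (name, [random_tree(depth - 1) for _ in range(arity)])

    def evaluate(tree, x):
        if tree == "x":
            return x
        if isinstance(tree, (int, float)):
            return tree
        name, children = tree
        fn, _ = PRIMITIVES[name]
        return fn(*(evaluate(c, x) for c in children))

    prog = random_tree()
    print(prog, "->", evaluate(prog, 3.0))

Crossover and mutation would then operate on these trees, with the predefined library functions treated exactly like the arithmetic primitives.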


In practice, this kind of seeding of the solution space can collapse diversity.   It is a drag to see tons of compute time spent on a million little refinements around an already good solution.  (Yes, I know that solution!)  More fun to see a set of clumsy solutions turn into decent-performing but weird solutions.  I find my attention is drawn to properties of sub-populations and how I can keep the historically good performers _out_.  Not a pure GA, but a GA where communities also have fitness functions matching my heavy hand of justice.  (If I prove that conservatism just doesn't work, I'll be sure to pass it along.)


Marcus




Re: Future of humans and artificial intelligence

Frank Wimberly-2
My point was that depth-first and breadth-first can probably serve only as a straw-man (straw-men?).

Frank Wimberly
Phone (505) 670-9918


Re: Future of humans and artificial intelligence

Marcus G. Daniels

Frank writes:


"My point was that depth-first and breadth-first can probably serve only as a straw-man (straw-men?)."


Unless there is a robust meta-rule (not a heuristic) or a single deterministic search algorithm to rule them all, wouldn't those other suggestions be straw men too?   If I knew that there were no noise and the domain was continuous and convex, then I wouldn't use a stochastic approach.
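
In that noise-free, continuous, convex setting the deterministic alternative is essentially plain gradient ascent; a minimal illustrative sketch (the objective and step size are invented):

    # Deterministic gradient ascent on a smooth concave objective.
    # No randomness is needed when the landscape is this nice.
    def gradient_ascent(grad, x0, step=0.1, iters=200):
        x = x0
        for _ in range(iters):
            x = [xi + step * gi for xi, gi in zip(x, grad(x))]
        return x

    # Maximize f(x, y) = -(x - 1)^2 - (y + 2)^2, whose gradient is:
    grad = lambda x: [-2 * (x[0] - 1), -2 * (x[1] + 2)]
    print(gradient_ascent(grad, [0.0, 0.0]))   # converges near [1, -2]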


Marcus



Re: Future of humans and artificial intelligence

Grant Holland
In reply to this post by Nick Thompson

Nick,

In science, these three terms are generally interchangeable. In common usage they all describe activities, or "events", that are "subject to chance" and are governed by the laws of probability. They describe happenings whose repetitions do not always produce the same outcomes, even when given the same inputs (initial conditions) every time. In other words, uncertainty is involved.

However, like most words, these enjoy other usages and meanings as well. For example, "random" is sometimes used to mean "disorganized" or "lacking in specific pattern". This is a very different meaning than "activities that don't always produce the same outcome given the same inputs". Consider what a math formula for each of these two meanings would consist of. One of them would be based on probabilities; the other would involve stationary relationships.
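
The first meaning is easy to see in a couple of lines of Python (illustrative only): calling the same function with the same input does not always return the same value.

    import random

    def deterministic(x):
        return 2 * x                       # same input, same output, always

    def stochastic(x):
        return 2 * x + random.gauss(0, 1)  # same input, varying outputs

    print(deterministic(3), deterministic(3))   # 6 6
    print(stochastic(3), stochastic(3))         # e.g. 6.41 5.87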

On 8/8/17 5:31 PM, Nick Thompson wrote:

Grant,

 

I think I know the answer to this question, but want to make sure: 

 

What is the difference between calling a process “stochastic”, “indeterminate”, or “random”? 

 

Nick

 

Nicholas S. Thompson

Emeritus Professor of Psychology and Biology

Clark University

http://home.earthlink.net/~nickthompson/naturaldesigns/

 


Re: Future of humans and artificial intelligence

Grant Holland
In reply to this post by Marcus G. Daniels

Marcus,

Let me clarify what I meant by saying that evolution is stochastic....

By "evolution", I do not mean genetic algorithms. Genetic algorithms need not be, but can be, stochastic. Genetic algorithms are adaptive; but they need not be stochastically adaptive. On the other hand, biological evolution of life on earth is necessarily stochastically adaptive - due to chance mutations.

As Jacques Monod points out in his book "Chance and Necessity", chance mutations are the only natural mechanism by which new species are created. And it is completely subject to chance. Without this particular stochasticity, there would only ever have been one species on earth, if that, and that species would now be long extinct because of its inability to adapt.



Re: Future of humans and artificial intelligence

Marcus G. Daniels

"Genetic algorithms need not be, but can be, stochastic. Genetic algorithms are adaptive; but they need not be stochastically adaptive"

[..]

"Without this particular stochasticicty, there would only ever have been one species on earth, if that, and that species would now be long extinct because of its inability to adapt."


If an algorithm can result in there being one species it is not adaptive.   I meant to imply a GA has a non-zero mutation rate (not just selection) and that mutation is random, without specifying particular distributional properties or distinguishing between pseudo-random and `truly' random.


Marcus



Re: Future of humans and artificial intelligence

Frank Wimberly-2
In reply to this post by Marcus G. Daniels
Right.  Then you use gradient ascent.  But what if you are scheduling a job shop for throughput when there are thousands of variables most of which have discrete values?

Frank

Frank Wimberly
Phone (505) 670-9918


Re: Future of humans and artificial intelligence

Marcus G. Daniels

"Right.  Then you use gradient ascent.  But what if you are scheduling a job shop for throughput when there are thousands of variables most of which have discrete values?"


I'd try to code it up for an SMT solver like Z3, or look for an SMT solver that had theories that closely matched the domain of the job shop.  Or try something like this on a D-Wave.
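
A minimal illustrative sketch of what that encoding could look like with Z3's Python bindings; the two-job instance here is invented, and a real job shop would need far more structure:

    from z3 import Int, Optimize, Or, sat

    # (job, op) -> (machine, duration); a made-up two-job, two-machine instance.
    ops = {("j1", 0): ("m1", 3), ("j1", 1): ("m2", 2),
           ("j2", 0): ("m2", 4), ("j2", 1): ("m1", 1)}

    start = {k: Int(f"start_{k[0]}_{k[1]}") for k in ops}
    makespan = Int("makespan")
    opt = Optimize()

    for (job, i), (machine, dur) in ops.items():
        opt.add(start[(job, i)] >= 0)
        opt.add(start[(job, i)] + dur <= makespan)
        if (job, i + 1) in ops:                     # precedence within a job
            opt.add(start[(job, i)] + dur <= start[(job, i + 1)])

    keys = list(ops)
    for a in range(len(keys)):                      # no overlap on a machine
        for b in range(a + 1, len(keys)):
            ka, kb = keys[a], keys[b]
            if ops[ka][0] == ops[kb][0]:
                opt.add(Or(start[ka] + ops[ka][1] <= start[kb],
                           start[kb] + ops[kb][1] <= start[ka]))

    opt.minimize(makespan)
    if opt.check() == sat:
        m = opt.model()
        print("makespan:", m[makespan])             # 6 for this toy instance
        for k in keys:
            print(k, "starts at", m[start[k]])

The disjunctive Or constraints are what make this hard for naive search; an SMT or CP solver handles them with specialized propagation rather than enumerating the discrete variables directly.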


Marcus




From: Friam <[hidden email]> on behalf of Frank Wimberly <[hidden email]>
Sent: Wednesday, August 9, 2017 7:35 AM
To: The Friday Morning Applied Complexity Coffee Group
Subject: Re: [FRIAM] Future of humans and artificial intelligence
 
Right.  Then you use gradient ascent.  But what if you are scheduling a job shop for throughput when there are thousands of variables most of which have discrete values?

Frank

Frank Wimberly
Phone (505) 670-9918

On Aug 8, 2017 10:41 PM, "Marcus Daniels" <[hidden email]> wrote:

Frank writes:


"My point was that depth-first and breadth-first can probably serve only as a straw-man (straw-men?)."


Unless there is a robust meta-rule (not heuristic) or single deterministic search algorithm to rule them all, then wouldn't those other suggestions also be straw-men too?   If I knew that there were no noise and the domain was continuous and convex, then I wouldn't use a stochastic approach.


Marcus


From: Friam <[hidden email]> on behalf of Frank Wimberly <[hidden email]>
Sent: Tuesday, August 8, 2017 10:15:05 PM
To: The Friday Morning Applied Complexity Coffee Group
Subject: Re: [FRIAM] Future of humans and artificial intelligence
 
My point was that depth-first and breadth-first can probably serve only as a straw-man (straw-men?).

Frank Wimberly
Phone <a href="tel:(505)%20670-9918" value="&#43;15056709918" target="_blank">(505) 670-9918

On Aug 8, 2017 10:11 PM, "Marcus Daniels" <[hidden email]> wrote:

Frank writes:


"Then there's best-first search, B*, C*, constraint-directed search, etc.  And these are just classical search methods."


Connecting this back to evolutionary / stochastic techniques, genetic programming is one way to get the best of both approaches, at least in principle.   One can expose these human-designed algorithms as predefined library functions.  Typically in genetic programming the vocabulary consists of simple routines (e.g. arithmetic), conditionals, and recursion.


In practice, this kind of seeding of the solution space can collapse diversity.   It is a drag to see tons of compute time spent on a million little refinements around an already good solution.  (Yes, I know that solution!)  More fun to see a set of clumsy solutions turn into to decent-performing but weird solutions.  I find my attention is drawn to properties of sub-populations and how I can keep the historically good performers _out_.  Not a pure GA, but a GA where communities also have fitness functions matching my heavy hand of justice..  (If I prove that conservatism just doesn't work, I'll be sure to pass it along.)


Marcus



From: Friam <[hidden email]> on behalf of Frank Wimberly <[hidden email]>
Sent: Tuesday, August 8, 2017 7:57:06 PM
To: The Friday Morning Applied Complexity Coffee Group
Subject: Re: [FRIAM] Future of humans and artificial intelligence
 
Then there's best-first search, B*, C*, constraint-directed search, etc.  And these are just classical search methods.

Feank

Frank Wimberly
Phone <a href="tel:(505)%20670-9918" value="&#43;15056709918" target="_blank">(505) 670-9918

On Aug 8, 2017 7:20 PM, "Marcus Daniels" <[hidden email]> wrote:

"But one problem is that breadth-first and depth-first search are just fast ways to find answers."


Just _not_ -- general but not efficient.   [My dog was demanding attention! ]


From: Friam <[hidden email]> on behalf of Marcus Daniels <[hidden email]>
Sent: Tuesday, August 8, 2017 6:43:40 PM
To: The Friday Morning Applied Complexity Coffee Group; glen ☣
Subject: Re: [FRIAM] Future of humans and artificial intelligence
 

Grant writes:


"On the other hand... evolution is stochastic. (You actually did not disagree with me on that. You only said that the reason I was right was another one.) "


I think of logic programming systems as a traditional tool of AI research (e.g. Prolog, now Curry, similar capabilities implemented in Lisp) from the age before the AI winter.  These systems provide a very flexible way to pose constraint problems.  But one problem is that breadth-first and depth-first search are just fast ways to find answers.  Recent work seems to have shifted to SMT solvers and specialized constraint solving algorithms, but these have somewhat less expressiveness as programming languages.  Meanwhile, machine learning has come on the scene in a big way and tasks traditionally associated with old-school AI, like natural language processing, are now matched or even dominated using neural nets (LSTM).  I find the range of capabilities provided by groups like nlp.stanford.edu really impressive -- there examples of both approaches (logic programming and machine learning) and then don't need to be mutually exclusive.


Quantum annealing is one area where the two may increasingly come together by using physical phenomena to accelerate the rate at which high dimensional discrete systems can be solved, without relying on fragile or domain-specific heuristics.


I often use evolutionary algorithms for hard optimization problems.  Genetic algorithms, for example, are robust to  noise (or if you like ambiguity) in fitness functions, and they are trivial to parallelize.


Marcus


From: Friam <[hidden email]> on behalf of Grant Holland <[hidden email]>
Sent: Tuesday, August 8, 2017 4:51:18 PM
To: The Friday Morning Applied Complexity Coffee Group; glen ☣
Subject: Re: [FRIAM] Future of humans and artificial intelligence
 

Thanks for throwing in on this one, Glen. Your thoughts are ever-insightful. And ever-entertaining!

For example, I did not know that von Neumann put forth a set theory.

On the other hand... evolution is stochastic. (You actually did not disagree with me on that. You only said that the reason I was right was another one.) A good book on the stochasticity of evolution is "Chance and Necessity" by Jacques Monod. (I just finished rereading it for the second time. And that proved quite fruitful.)

G.


On 8/8/17 12:44 PM, glen ☣ wrote:
I'm not sure how Asimov intended them.  But the three laws is a trope that clearly shows the inadequacy of deontological ethics.  Rules are fine as far as they go.  But they don't go very far.  We can see this even in the foundations of mathematics, the unification of physics, and polyphenism/robustness in biology.  Von Neumann (Burks) said it best when he said: "But in the complicated parts of formal logic it is always one order of magnitude harder to tell what an object can do than to produce the object."  Or, if you don't like that, you can see the same perspective in his iterative construction of sets as an alternative to the classical conception.

The point being that reality, traditionally, has shown more expressiveness than any of our rule sets.

There are ways to handle the mismatch in expressivity between reality and our rule sets.  Stochasticity is the measure of the extent to which a rule set matches a set of patterns.  But Grant's right to qualify that with evolution, not because of the way evolution is stochastic, but because evolution requires a unit to regularly (or sporadically) sync with its environment.

An AI (or a rule-obsessed human) that sprouts fully formed from Zeus' head will *always* fail.  It's guaranteed to fail because syncing with the environment isn't *built in*.  The sync isn't part of the AI's onto- or phylo-geny.



============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove
Reply | Threaded
Open this post in threaded view
|

Re: Future of humans and artificial intelligence

gepr
In reply to this post by Grant Holland
FWIW, I tend to use stochastic to mean a process with a collection of variables, some of which are (pseudo) randomly set and some of which are not. A "random process" would imply a process where either all the variables are random OR where the randomly set variables are dominant. A process can be stochastic even if the randomness has little effect.

My use of indeterminate is ambiguous. In processes where we're ignorant of how a variable is set, those variables are indeterminate. But I also use it to mean unset variables. E.g. a semaphore that's being polled for a value or state change. But as with stochasticity, a "don't care" variable can be indeterminate without making the whole process indeterminate.
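
A throwaway sketch of the first distinction (my own toy example, not anything from the thread): the process below is stochastic -- one of its variables is pseudo-randomly set -- but calling it a "random process" would be misleading, since the randomly set variable has almost no effect on the outcome.

import random

def stochastic_but_not_random(x: float) -> float:
    gain = 10.0                       # deterministic variable
    offset = 3.0                      # deterministic variable
    jitter = random.gauss(0.0, 1e-6)  # randomly set, but negligible in effect
    return gain * x + offset + jitter

if __name__ == "__main__":
    # Repeated runs with the same input give (almost) the same output.
    print(stochastic_but_not_random(2.0))
    print(stochastic_but_not_random(2.0))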

On August 8, 2017 11:23:29 PM PDT, Grant Holland <[hidden email]> wrote:

>Nick,
>
>In science, these three terms are generally interchangeable. Their
>common usage is that they all describe activities, or "events", that
>are
>"subject to chance". Such activities, events or processes that are
>described by these terms are governed by the laws of probability. They
>all describe activities, events, or "happenings" whose repetitions do
>not always produce the same outcomes even when given the same inputs
>every time (initial conditions). In other words, uncertainty is
>involved.
>
>However, like most words, these enjoy other usages and meanings as well.
>For example "random" is sometimes used to mean "disorganized" or
>"lacking in specific pattern". This is a very different meaning than
>"activities that don't always produce the same outcome given the same
>inputs". Consider what a math formula for each of these two meanings
>would consist of. One of them would be based on probabilities; but the
>other would involve stationary relationships.
>
>On 8/8/17 5:31 PM, Nick Thompson wrote:
>>
>> Grant,
>>
>> I think I know the answer to this question, but want to make sure:
>>
>> What is the difference between calling a process “stochastic”,
>> “indeterminate”, or “random”?
--
⛧glen⛧

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove
uǝʃƃ ⊥ glen
Reply | Threaded
Open this post in threaded view
|

Re: Future of humans and artificial intelligence

Prof David West
In reply to this post by Carl Tollander
For what it's worth - I will be teaching a short class next month in Santa Fe, "Isaac Asimov and the Robots." Two points of coverage: 1) the robots themselves invent and follow a "Zeroth Law" that allows them to eliminate individual human beings, with a result the exact opposite of Hawking et al.'s fears that our creations will not love us; 2) how the actual evolution of robotics and AI (see Daniel Suarez's Kill Decision - autonomous swarming drones as tools of war and death to humans) diverged from the rosy, naive 1950s view of the future that Asimov advanced.

davew


On Mon, Aug 7, 2017, at 09:54 PM, Carl Tollander wrote:
It seems to me that there are many here in the US who are not entirely on board with Asimov's First Law of Robotics, at least insofar as it may apply to themselves, so I suspect notions of "reining it in" are probably not going to fly.




On Mon, Aug 7, 2017 at 1:57 AM, Alfredo Covaleda Vélez <[hidden email]> wrote:
The future will be quite interesting. What will the human being of the future be like? For sure not a human being in the way we know it.




============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove
Reply | Threaded
Open this post in threaded view
|

Re: Future of humans and artificial intelligence

Steve Smith

Dave -

Most excellent of you to do this, and what will be your venue for this class?

Are you familiar with our own Jack Williamson's vaguely parallel work in his "Humanoids" stories, which began in 1947 with the novelette "With Folded Hands"?  I do not know whether he ever acknowledged an influence in this work from Asimov's introduction of the "three laws" in 1941.  He investigates the unintended/unexpected catastrophic consequences of something like the three laws on humanity, with the human spirit "quelled" by being "niced" or "safed" near to death.

He claims to have written this as a cathartic project to shake off the existential angst/depression he felt from the (ab)use of atomic weapons at the end of WWII.  Jack was too old to serve in the military when the war broke out (he was 36?), but instead volunteered to work in the South Pacific as a civilian meteorologist.  He had started his career in Science Fiction before the term was fully adopted (Scientific Romance and Scientifiction being precursors, according to Jack) with the publication of a short story, "The Metal Man", in Hugo Gernsback's Amazing Stories in 1928.  Up until the end of WWII he claims to have been somewhat of a techno-utopianist, believing that advancing technology would (continue to) simply advance the quality of life of human beings (somewhat?) monotonically.

I hosted Jack at an evening talk at LANL/Bradbury Science Museum in 1998 during the Nebula Awards, on the theme of how Science and Science Fiction inform one another.  Jack was 90 that year and had over 90 published works at that time.  His work was always somewhat in the vein of Space Opera, his characters were generally quite two-dimensional, and his gender politics were typical of his generation of science fictioneers, yet he was still loved by his community.  His use of this pulpy/pop medium as a way to investigate and discuss fundamental aspects of human nature, and many of the social or even spiritual implications of the advance of technology, was nevertheless quite inspired (IMO).

He died in 2007 at the ripe young age of 98 and was still producing work nearly up to the day of his death.  In 1998, when I first met him, the OED was creating an appendix/section of "neologisms from science fiction" and he was credited (informally?) with having the most entries in the not-yet-published project.  His most famous throwdown in this category at the time was his "invention" of anti-matter, which he called "contra-terrene" or more colloquially "seetee" (a phoneticization of the contraction "CT")!  He was also quite proud of being interrogated by the FBI during the Manhattan Project for having written a story about atomic weapons... they wanted to assume he had access to a security leak, until he showed them a 1932(?) short story on the same theme, making it clear that the ideas of nuclear fission (fusion even?) as a weapon were not new (to him anyway).  That apparently satisfied them, and of course he didn't appreciate the full import of their interrogation until after the war.

Carry On!

 - Steve


On 8/9/17 9:05 AM, Prof David West wrote:
For what its worth - I will be teaching a short class next month in Santa Fe, "Isaac Asimov and the Robots." Two points of coverage: 1) the robots themselves invent and follow a "Zeroth Law" that allows them to eliminate individual human beings with a result the exact opposite of Hawking et. al.'s fears that our  creations will not love us; 2) how the actual evolution of robotics and AI (see Daniel Suarez' Kill Decision - autonomous swarming drones as tools of war and death to humans) diverged from the rosy naive 1950s view of the future that Asimov advanced.

davew


On Mon, Aug 7, 2017, at 09:54 PM, Carl Tollander wrote:
It seems to me that there are many here in the US who are not entirely on board with Asimov's First Law of Robotics, at least insofar as it may apply to themselves, so I suspect notions of "reining it in" are probably not going to fly.




On Mon, Aug 7, 2017 at 1:57 AM, Alfredo Covaleda Vélez <[hidden email]> wrote:
Future will be quite interesting. How will be the human being of the future? For sure not a human being in the way we know.




============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove
Reply | Threaded
Open this post in threaded view
|

Re: Future of humans and artificial intelligence

Prof David West
Steve, it is a Renesan course on Tue, September 7 and 14. I have read Jack Williamson -- though not all 90 works -- and he would have been included in another course I proposed to Renesan on science fiction themes. Maybe in the future.

davew



On Wed, Aug 9, 2017, at 09:57 AM, Steven A Smith wrote:

Dave -

Most excellent of you to do this, and what will be your venue for this class?

Are you familiar with our own Jack Williamson's vague parallel work in his "Humanoids" which began in 1947 with the Novelette: "With Folded Hands".  I do not know if he ever acknowledged an influence in this work from Asimov's introduction to the "three laws" in 1941?  He investigates the (unintended/unexpected catastrophic consequences of something like the three laws on humanity, having the human spirit "quelled" by being "niced" or "safed" near-to-death)

He claims  to have written this as a cathartic project to shake off the existential angst/depression he felt from the (ab)use of atomic weapons at the end of WWII.  Jack was too old to serve in the military when the war broke out (he was 36?), but instead volunteered to work in the South Pacific as a civilian meteorologist.  He had started his career in Science Fiction before the term was fully adopted (Scientific Romance and Scientifiction being precursors according to Jack) with the publication of a short story "Metal Man" In Hugo Gernsbach's Amazing Stories in 1928.  Up until the end of WWII he claims to have been somewhat of a techno-utopianist, believing that advancing technology would (continue to ) simply advance the quality of life of human beings (somewhat?) monotonically. 

I hosted Jack at an evening talk at LANL/Bradbury Science Museum in 1998 during the Nebula Awards on the theme of how Science and Science Fiction inform one another.   Jack was 90  that year and had over 90 published works at that time.  His work was always somewhat in the vein of Space Opera and his characters were generally quite two dimensional and his gender politics typical of his generation of science fictioneers, yet he was still loved by his community.  His use of this pulpy/pop medium as a way to investigate and discuss fundamental aspects of human nature and many of the social or even spiritual implications of the advance of technology was nevertheless quite inspired (IMO).

He died in 2007 at the ripe young age of 98 and was still producing work nearly up to the day of his death.  In 1998 when I first met him, the OED was creating an appendix/section of "neologisms from science fiction" and he was credited (informally?) with having the most entries in the not-yet-published project.   His most famous throwdown in this category at the time was his "invention" of anti-matter, which he called "contra-terrene" or more colloquially "seetee" (a phoneticization of the contraction "CT")!   He was also quite proud of being interrogated by the FBI during the Manhattan project for having written a story about Atomic Weapons... they wanted to assume he had access to a security leak until he showed them a 1932(?) short story on the same theme, making it clear that the ideas of nuclear fission (fusion even?) as a weapon were not new (to him anyway)...  that apparently satisfied them and of course, he didn't appreciate the full import of their interrogation until after the war.

Carry On!

 - Steve


On 8/9/17 9:05 AM, Prof David West wrote:
For what its worth - I will be teaching a short class next month in Santa Fe, "Isaac Asimov and the Robots." Two points of coverage: 1) the robots themselves invent and follow a "Zeroth Law" that allows them to eliminate individual human beings with a result the exact opposite of Hawking et. al.'s fears that our  creations will not love us; 2) how the actual evolution of robotics and AI (see Daniel Suarez' Kill Decision - autonomous swarming drones as tools of war and death to humans) diverged from the rosy naive 1950s view of the future that Asimov advanced.

davew


On Mon, Aug 7, 2017, at 09:54 PM, Carl Tollander wrote:
It seems to me that there are many here in the US who are not entirely on board with Asimov's First Law of Robotics, at least insofar as it may apply to themselves, so I suspect notions of "reining it in" are probably not going to fly.




On Mon, Aug 7, 2017 at 1:57 AM, Alfredo Covaleda Vélez <[hidden email]> wrote:
Future will be quite interesting. How will be the human being of the future? For sure not a human being in the way we know.




============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove
Reply | Threaded
Open this post in threaded view
|

Re: Future of humans and artificial intelligence

Steve Smith

Thanks for the reference -- I was not aware of the Renesan Institute before this, though I had heard somewhere about the first listed lecture/course/seminar on "the Trickster".  I don't see your course in the lineup?  I will be out of town on the 7th so I wouldn't try to attend anyway, but as always "good on ya" for your efforts to continue to spread the enlightenment.

I've a friend who introduced me to Jack... he was in middle school in Portales when someone introduced him to "that old professor who writes Science Fiction" (then in his 50s?).  They became fast friends despite the many decades between them, and my friend Joe even influenced several of Jack's titles, if not characters and narratives.  He claims he helped Jack come up with the title "Terraforming Earth", although Joe's throwdown was "Terraforming Terra" which apparently Jack loved but his editor said "not enough people know what 'Terra' is".  Oh well.

In Jack's life story, his parents moved him from their hardscrabble farm near Bisbee, AZ, where he was born, to a relative's more productive ranches in Mexico/TX, but eventually they migrated to NM in 1915 in a covered wagon.  He has(d) stories!

I have a copy of Jack's 2005 autobiography, "Wonder's Child", if perchance you would like to borrow it.  The duality of Science/Fiction (or more generally the interplay between the literal/actualized and the imagined) is a fascinating study to me.  This second wave of Scientific Romancing (after Verne, Swift, Burroughs, even London/Twain) was so smack-dab in the middle of the golden age of transportation and communication, on into information processing, that it deeply informs/reflects our contemporary psyche, even for those who think they don't like or care about Science Fiction.  The more modern adoption of Science Fiction into mainstream cinema/TV has put titles/tropes like "The Matrix", "Blade Runner", "Avatar" and "Dr. Who" squarely in the face (most literally) of the masses.

I believe this is for the better and the worse.  Like everything I suppose!  Nothing Aristotelian about MY logic!?

- Steve

"The best thing about being on the fence is that the view is better from up there" - R. Edward Lowe

Steve, it is a Renesan course on Tue, September 7 and 14. I have read Jack Williamson, not all 90, and he would have been included in another course I proposed to Renesan on science fiction themes. Maybe in the future.

davew



On Wed, Aug 9, 2017, at 09:57 AM, Steven A Smith wrote:

Dave -

Most excellent of you to do this, and what will be your venue for this class?

Are you familiar with our own Jack Williamson's vague parallel work in his "Humanoids" which began in 1947 with the Novelette: "With Folded Hands".  I do not know if he ever acknowledged an influence in this work from Asimov's introduction to the "three laws" in 1941?  He investigates the (unintended/unexpected catastrophic consequences of something like the three laws on humanity, having the human spirit "quelled" by being "niced" or "safed" near-to-death)

He claims  to have written this as a cathartic project to shake off the existential angst/depression he felt from the (ab)use of atomic weapons at the end of WWII.  Jack was too old to serve in the military when the war broke out (he was 36?), but instead volunteered to work in the South Pacific as a civilian meteorologist.  He had started his career in Science Fiction before the term was fully adopted (Scientific Romance and Scientifiction being precursors according to Jack) with the publication of a short story "Metal Man" In Hugo Gernsbach's Amazing Stories in 1928.  Up until the end of WWII he claims to have been somewhat of a techno-utopianist, believing that advancing technology would (continue to ) simply advance the quality of life of human beings (somewhat?) monotonically. 

I hosted Jack at an evening talk at LANL/Bradbury Science Museum in 1998 during the Nebula Awards on the theme of how Science and Science Fiction inform one another.   Jack was 90  that year and had over 90 published works at that time.  His work was always somewhat in the vein of Space Opera and his characters were generally quite two dimensional and his gender politics typical of his generation of science fictioneers, yet he was still loved by his community.  His use of this pulpy/pop medium as a way to investigate and discuss fundamental aspects of human nature and many of the social or even spiritual implications of the advance of technology was nevertheless quite inspired (IMO).

He died in 2007 at the ripe young age of 98 and was still producing work nearly up to the day of his death.  In 1998 when I first met him, the OED was creating an appendix/section of "neologisms from science fiction" and he was credited (informally?) with having the most entries in the not-yet-published project.   His most famous throwdown in this category at the time was his "invention" of anti-matter, which he called "contra-terrene" or more colloquially "seetee" (a phoneticization of the contraction "CT")!   He was also quite proud of being interrogated by the FBI during the Manhattan project for having written a story about Atomic Weapons... they wanted to assume he had access to a security leak until he showed them a 1932(?) short story on the same theme, making it clear that the ideas of nuclear fission (fusion even?) as a weapon were not new (to him anyway)...  that apparently satisfied them and of course, he didn't appreciate the full import of their interrogation until after the war.

Carry On!

 - Steve


On 8/9/17 9:05 AM, Prof David West wrote:
For what its worth - I will be teaching a short class next month in Santa Fe, "Isaac Asimov and the Robots." Two points of coverage: 1) the robots themselves invent and follow a "Zeroth Law" that allows them to eliminate individual human beings with a result the exact opposite of Hawking et. al.'s fears that our  creations will not love us; 2) how the actual evolution of robotics and AI (see Daniel Suarez' Kill Decision - autonomous swarming drones as tools of war and death to humans) diverged from the rosy naive 1950s view of the future that Asimov advanced.

davew


On Mon, Aug 7, 2017, at 09:54 PM, Carl Tollander wrote:
It seems to me that there are many here in the US who are not entirely on board with Asimov's First Law of Robotics, at least insofar as it may apply to themselves, so I suspect notions of "reining it in" are probably not going to fly.




On Mon, Aug 7, 2017 at 1:57 AM, Alfredo Covaleda Vélez <[hidden email]> wrote:
Future will be quite interesting. How will be the human being of the future? For sure not a human being in the way we know.




============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove
12