Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

Tom Johnson

Among other points: "...why doing regression analysis over every site, without having the context of the search result that it is in, is supremely flawed."
TJ

============================================
Tom Johnson
Institute for Analytic Journalism   --     Santa Fe, NM USA
505.577.6482(c)                                    505.473.9646(h)
Society of Professional Journalists   -   Region 9 Director
Check out It's The People's Data
http://www.jtjohnson.com                   [hidden email]
============================================


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com

Re: Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

Robert Wall
Hi Tom,

Interesting article about Google and their foray [actually a Blitzkrieg, as they are buying up all of the brain trust in this area] into the world of machine learning, presumably to improve the search customer experience. Could their efforts actually have unintended consequences for both the search customer and the marketing efforts of website owners? It is interesting to consider. In the former case, Google picking WebMD as the paragon website for the healthcare industry flies in the face of my own experience and, say, this New York Times Magazine article: A Prescription for Fear (Feb 2011). Will this actually make WebMD the de facto paragon in the minds of searchers? In the latter case, successful web marketing becomes increasingly subject to the latest Google search algorithms rather than to the previously more expert in-house marketing departments. Of course, this is the nature of SEO--gaming the algorithms to attract better rankings. But it seems those in-house marketing departments will need to up their game:

In other ways, things are a bit harder. The field of SEO will continue to become extremely technical. Analytics and big data are the order of the day, and any SEO that isn’t familiar with these approaches has a lot of catching up to do. Those of you who have these skills can look forward to a big payday.

Also, with respect to those charts anticipating exponential growth for AGI technology--even eclipsing human intelligence by mid-century--there is much reason to see this as overly optimistic [see, for example, Hubert Dreyfus' critique of Good Old Fashioned AI: "What Computers Can't Do"]. These charts remind me of the "ultraviolet catastrophe" around the end of the 19th century. There are physical limitations that may well tamp down progress and keep it to ANI. With respect to AGI, there have been some pointed challenges to this "Law of Accelerating Returns."

On this point, I thought this article in AEON, titled "Creative Blocks: The very laws of physics imply that artificial intelligence must be possible. What's holding us up? (Oct 2012)," was on point concerning the philosophical and epistemological roadblocks. Another, titled "Where do minds belong? (Mar 2016)," discusses the technological roadblocks in an insightful, highly speculative, but entertaining manner.

Nonetheless, this whole discussion is quite intriguing, no matter your stance, hopes, or fears. 😎

Cheers,

Robert

On Sat, Jun 4, 2016 at 4:26 PM, Tom Johnson <[hidden email]> wrote:


Re: Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

Pamela McCorduck
I have some grave concerns about AI being concentrated in the hands of a few big firms—Google, Facebook, Amazon, and so on. Elon Musk says the answer is open sourcing, but I’m skeptical. That said, I’d be interested in hearing other people’s solutions. Then again, you may not think it’s a problem.


On Jun 5, 2016, at 3:22 PM, Robert Wall <[hidden email]> wrote:


Re: Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

Tom Johnson
In reply to this post by Robert Wall
Robert:
Thanks for the pointers, at the end of your remarks, to those interesting articles.  I wonder, too, if someone could come up with parallel "paragon websites"--that is, here's WebMD, displayed alongside the "best" critics of, or alternatives to, that site.

TJ






On Sun, Jun 5, 2016 at 3:22 PM, Robert Wall <[hidden email]> wrote:

Re: Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

Roger Critchlow-2
"Artificial intelligence has the same relation to intelligence as artificial flowers have to flowers."  -- David Parnas
Which is even funnier now than it was in the '70s or '80s when first said, because artificial flowers have become more and more amazing over the decades.

-- rec --

On Sun, Jun 5, 2016 at 6:09 PM, Tom Johnson <[hidden email]> wrote:

Re: Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

Pamela McCorduck
Parnas was on the faculty with my husband at CMU back in the day. He was known as “the department’s conscience.” Except, Joe said, how can you be considered the conscience when you’re against everything? Everything whatsoever?

He was eventually let go; went to, uh, Dortmund as I recall, then to Canada (or maybe the other way around). He was the compleat contrarian. That doesn’t mean he was always wrong. He was right about Brilliant Pebbles, or Star Wars, or whatever Reagan’s brainchild was: the software had to work right the very first time. Wasn’t going to happen, he said, and he was right. But basically, he was a chronic malcontent.


On Jun 5, 2016, at 5:03 PM, Roger Critchlow <[hidden email]> wrote:


Re: Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

Edward Angel
In reply to this post by Pamela McCorduck
There is a large group of distinguished people, including Elon Musk, Stephen Hawking, Bill Joy and Martin Rees, who believe that AI is an existential threat and that the probability of the human race surviving another 100 years is less than 50/50.  Stephen Hawking has said he has no idea what to do about it. Bill Joy’s (non) solution is better ethical education for workers in the area. I can’t see how open source will prevent the dangers they worry about. Martin Rees has an institute at Cambridge that worries about these things.

Ed
_______________________

Ed Angel

Founding Director, Art, Research, Technology and Science Laboratory (ARTS Lab)
Professor Emeritus of Computer Science, University of New Mexico

1017 Sierra Pinon
Santa Fe, NM 87501
505-984-0136 (home)   [hidden email]
505-453-4944 (cell)  http://www.cs.unm.edu/~angel

On Jun 5, 2016, at 4:04 PM, Pamela McCorduck <[hidden email]> wrote:

I have some grave concerns about AI being concentrated in the hands of a few big firms—Google, FaceBook, Amazon, and so on. Elon Musk says the answer is open sourcing, but I’m skeptical. That said, I’d be interested in hearing other people’s solutions. Then again, you may not think it’s a problem.


On Jun 5, 2016, at 3:22 PM, Robert Wall <[hidden email]> wrote:

Hi Tom,

Interesting article about Google and their foray [actually a Blitzkrieg, as they are buying up all of the brain trust in this area] into the world of machine learning presumably to improve the search customer experience.  Could their efforts actually have unintended consequences for both the search customer and the marketing efforts of the website owners? It is interesting to consider. For example, for the former case, Google picking WebMD as the paragon website for the healthcare industry flies in the face of my own experience and, say, this New York Times Magazine article: A Prescription for Fear (Feb 2011).  Will this actually make WebMD the de facto paragon in the minds of the searchers?  For the latter, successful web marketing becomes increasingly subject to the latest Google search algorithms instead of the previously more expert in-house marketing departments. Of course, this is the nature of SEO--to game the algorithms to attract better rankings.  But, it seems those in-house marketing departments will need to up their game:

In other ways, things are a bit harder. The field of SEO will continue to become extremely technical. Analytics and big data are the order of the day, and any SEO that isn’t familiar with these approaches has a lot of catching up to do. Those of you who have these skills can look forward to a big payday.

Also, with respect to those charts anticipating exponential growth for AGI technology--even eclipsing human intelligence by mid-century--there is much reasoning to see this as overly optimistic [see, for example, Hubert Dreyfus' critique of Good Old Fashion AI: "What Computers Can't Do"].  These charts kind of remind me of the "ultraviolet catastrophe" around the end of the 19th century. There are physical limitations that may well tamp progress and keep it to ANI.  With respect to AGI, there have been some pointed challenges to this "Law of Accelerating Returns."

On this point, I thought this article in AEON titled "Creative Blocks: The very laws of physics imply that artificial intelligence must be possible. What’s holding us up? (Oct 2012)" is on point concerning the philosophical and epistemological road blocks.  This one, titled "Where do minds belong? (Mar 2016)" discusses the technological roadblocks in an insightful, highly speculative, but entertaining manner.

Nonetheless, this whole discussion is quite intriguing, no matter your stance, hopes, or fears. 😎

Cheers,

Robert

On Sat, Jun 4, 2016 at 4:26 PM, Tom Johnson <[hidden email]> wrote:

Among other points: "...why doing regression analysis over every site, without having the context of the search result that it is in, is supremely flawed."
TJ

============================================
Tom Johnson
Institute for Analytic Journalism   --     Santa Fe, NM USA
<a href="tel:505.577.6482" value="+15055776482" target="_blank" class="">505.577.6482(c)                                    <a href="tel:505.473.9646" value="+15054739646" target="_blank" class="">505.473.9646(h)
Society of Professional Journalists   -   Region 9 Director
Check out It's The People's Data
http://www.jtjohnson.com                   [hidden email]
============================================



Sent with MailTrack

Virus-free. www.avast.com
<a href="x-msg://4/#m_-9171770883074403068_DDB4FAA8-2DD7-40BB-A1B8-4E2AA1F9FDF2" width="1" height="1" class="">

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com


Re: Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

gepr

Well, my interpretation of Pamela's concern would have more to do with [bio]diversity than it does some form of naive extinction threat.  In previous posts, I've outlined my skepticism that (complicated) open source is any less opaque to understanding than proprietary sources because the skills and effort it takes to suss out the content can be prohibitive.  Regardless, it's true that open sourcing facilitates copying and forking (with or without understanding).  And that sort of thing definitely contributes to _diversity_.

So, if diversity in AI might cause a more robust system (including interaction with the already somewhat diverse naturally intelligent systems), then there's a clear path for how open source would help prevent an extinction event.

The people who believe in things like "group think" should predictably recognize that argument.

On 06/06/2016 10:42 AM, Edward Angel wrote:
> There is a large group of distinguished people including Elon Musk, Stephen Hawking, Bill Joy and Martin Rees, who believe that AI is an existential threat and the probability of the human race surviving another 100 years is less than 50/50.  Stephen Hawking has said he has no idea what to do about it. Bill Joy’s (non) solution is better ethical education for workers in the area. I can’t see how open source will prevent the dangers they worry about. Martin Rees has an Institute at Cambridge that worries about these things.
>
> Ed

>
>> On Jun 5, 2016, at 4:04 PM, Pamela McCorduck <[hidden email] <mailto:[hidden email]>> wrote:
>>
>> I have some grave concerns about AI being concentrated in the hands of a few big firms—Google, FaceBook, Amazon, and so on. Elon Musk says the answer is open sourcing, but I’m skeptical. That said, I’d be interested in hearing other people’s solutions. Then again, you may not think it’s a problem.
>>

--
☣ glen


Re: Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

Pamela McCorduck
In reply to this post by Edward Angel
The field of AI itself has had a major project underway for a couple of years to address these issues. It’s called AI 100, and is funded by one of the wealthy founders of the field, Eric Horvitz at Microsoft. The headquarters of this project are at Stanford University.

It is funded for a century. Yes, that’s a century, on the grounds that whatever is decided in five years or ten years will need to be revisited five years or ten years on, again and again. Its staff consists of leading members of the field (who really know what the field can and cannot do), and they will be joined by ethicists, economists, philosophers and others (maybe already are) as the project moves along. (Their first report was rather scathing about Ray Kurzweil and the singularity, but that’s another issue.)

Musk, Hawking et al., are very good at getting publicity, but their first great solution last summer was to send a petition to the U.N., which they did with great fanfare. Of course nothing happened, and nothing could. This is the level of naivete (and sorry, self-importance) these men exhibit. 

I also find them more than a bit hypocritical. Musk is not giving up his smartphone, and Hawking concedes that he loves what AI has done for him personally (in terms of vocal communication) but maybe others shouldn’t be allowed to handle this…

Finally, and this is where my anger really boils: they sound to me like the worst kind of patronizing, privileged white guys imaginable. There’s no sense in their aggrieved messages that billions of people around the globe are struggling, and have lives that could be vastly improved with AI.  Maybe it behooves them to imagine the good AI can do for those people, instead of stamping their feet because AI is going to upset their personal world. Which it will. It must be very hard to be the smartest guy on the block for so long, and then here comes something even smarter.

Pamela


On Jun 6, 2016, at 11:42 AM, Edward Angel <[hidden email]> wrote:

There is a large group of distinguished people including Elon Musk, Stephen Hawking, Bill Joy and Martin Rees, who believe that AI is an existential threat and the probability of the human race surviving another 100 years is less than 50/50.  Stephen Hawking has said he has no idea what to do about it. Bill Joy’s (non) solution is better ethical education for workers in the area. I can’t see how open source will prevent the dangers they worry about. Martin Rees has an Institute at Cambridge that worries about these things.

Ed
_______________________

Ed Angel

Founding Director, Art, Research, Technology and Science Laboratory (ARTS Lab)
Professor Emeritus of Computer Science, University of New Mexico

1017 Sierra Pinon
Santa Fe, NM 87501
505-984-0136 (home)   [hidden email]
505-453-4944 (cell)  http://www.cs.unm.edu/~angel


Re: Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

Marcus G. Daniels
In reply to this post by Edward Angel

Nah, we’re just a medium for representing knowledge.   Not obviously a very efficient one, either.   I mean, wasting all that time in school, only to forget much of it and then hopefully become a professional expert in some tiny area.   And a lot of people won’t even accomplish that, but nonetheless participate in filling the atmosphere full of CO2 and CH4 and using up vast fossil fuel reserves.  After a few decades comes retirement, and much of that expertise is lost to society.   It’s all quite wasteful.  Something better sounds like a good idea.  It’s not extinction, it’s evolution.

 

From: Friam [mailto:[hidden email]] On Behalf Of Edward Angel
Sent: Monday, June 06, 2016 11:42 AM
To: The Friday Morning Applied Complexity Coffee Group <[hidden email]>
Subject: Re: [FRIAM] Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

 

There is a large group of distinguished people including Elon Musk, Stephen Hawking, Bill Joy and Martin Rees, who believe that AI is an existential threat and the probability of the human race surviving another 100 years is less than 50/50.  Stephen Hawking has said he has no idea what to do about it. Bill Joy’s (non) solution is better ethical education for workers in the area. I can’t see how open source will prevent the dangers they worry about. Martin Rees has an Institute at Cambridge that worries about these things.

 

Ed

_______________________


Ed Angel

Founding Director, Art, Research, Technology and Science Laboratory (ARTS Lab)
Professor Emeritus of Computer Science, University of New Mexico

1017 Sierra Pinon

Santa Fe, NM 87501
505-984-0136 (home)                                
[hidden email]

505-453-4944 (cell)                                     http://www.cs.unm.edu/~angel

 


Re: Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

Nick Thompson
In reply to this post by Roger Critchlow-2

Roger,

 

Can artificial flowers learn? 

 

Are you on your boat, yet?  Beautiful day on Massachusetts bay!  See http://www.ssd.noaa.gov/goes/east/eaus/flash-vis.html

 

By the way, in support of your aphorism “layers of the atmosphere don’t mix” which I have been chewing on ever since you offered it:  look on the extreme right of the satellite loop to see the upper half of the atmosphere sliding out over the cold maritime layer without any interaction whatsoever.  Cool! 

 

Nick

 

 

Nicholas S. Thompson

Emeritus Professor of Psychology and Biology

Clark University

http://home.earthlink.net/~nickthompson/naturaldesigns/

 

From: Friam [mailto:[hidden email]] On Behalf Of Roger Critchlow
Sent: Sunday, June 05, 2016 7:03 PM
To: The Friday Morning Applied Complexity Coffee Group <[hidden email]>
Subject: Re: [FRIAM] Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

 

"Artificial intelligence has the same relation to intelligence as artificial flowers have to flowers."  -- David Parnas

Which is even funnier now than it was in the '70s or '80s when first said, because artificial flowers have become more and more amazing over the decades.

 

-- rec --

 

On Sun, Jun 5, 2016 at 6:09 PM, Tom Johnson <[hidden email]> wrote:

Robert:

Thanks for the pointers at the end of your remarks to the interesting articles.  I wonder, too, if someone could come up with parallel "paragon websites."  That is, here's WebMD, and displayed alongside it, the "best" critics of or alternatives to that site.

 

TJ





Re: Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

Stephen Guerin-5
In reply to this post by Pamela McCorduck
Hi Pamela,

While open source gives some transparency, our direction is to move toward more distributed AI where our data is not given to a centralized authority before the AI is applied. Rather, we think that the AI should be more out at the edge of the network with our sensors/cameras/microphones. The derived information from the raw data could then be shared via agents transacting on our behalf for collective action while maximizing privacy. A Santa Fe Approach if you will :-)

We've been using Steve Mann's term Sousveillance (in opposition to Surveillance) as a shorthand for this idea, along with the serverless p2p solutions we're calling Acequia - a more grounded social structure and water-distribution system, in opposition to a faceless centralized Cloud, e.g. water vapor in the sky :-)
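A minimal sketch of that edge-first idea (every name here - EdgeNode, derived_summary, the thresholds - is illustrative, not any real Simtable or Acequia API): each node keeps raw sensor data on the device, and peers coordinate only over derived summaries.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class EdgeNode:
    name: str
    raw_readings: list  # raw sensor stream: stays on the device

    def derived_summary(self) -> dict:
        # Share an aggregate and an alarm bit, never the raw stream.
        return {"node": self.name,
                "mean_level": round(mean(self.raw_readings), 2),
                "alarm": max(self.raw_readings) > 0.9}

def collective_action(summaries):
    # Peers act on shared summaries; no central authority ever
    # holds a copy of anyone's raw data.
    return [s["node"] for s in summaries if s["alarm"]]

nodes = [EdgeNode("camera-1", [0.2, 0.4, 0.95]),
         EdgeNode("mic-7", [0.1, 0.3, 0.2])]
summaries = [n.derived_summary() for n in nodes]
print(collective_action(summaries))  # ['camera-1']
```

The privacy property falls out of the structure: the only thing that crosses the network is what `derived_summary` chooses to emit.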

-S

_______________________________________________________________________
[hidden email]
CEO, Simtable  http://www.simtable.com
1600 Lena St #D1, Santa Fe, NM 87505
office: (505)995-0206 mobile: (505)577-5828
twitter: @simtable

On Sun, Jun 5, 2016 at 4:04 PM, Pamela McCorduck <[hidden email]> wrote:
I have some grave concerns about AI being concentrated in the hands of a few big firms—Google, FaceBook, Amazon, and so on. Elon Musk says the answer is open sourcing, but I’m skeptical. That said, I’d be interested in hearing other people’s solutions. Then again, you may not think it’s a problem.



Re: Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

gepr
In reply to this post by Pamela McCorduck

On that note, I found this article interesting:

A Universal Basic Income Is a Poor Tool to Fight Poverty
http://www.nytimes.com/2016/06/01/business/economy/universal-basic-income-poverty.html?_r=0

One of the interesting dynamics I've noticed is that when I argue about the basic income with people who have day jobs (mostly venture funded, but some megacorps like Intel), they tend to object strongly; and when I have similar conversations with people who struggle on a continual basis to find and execute _projects_ (mostly DIY people who do a lot of freelance work, from hardware prototyping to fixing motorcycles), they tend to be for the idea (if not the practicalities of how to pay for it).

I can't help thinking it has to do with the (somewhat false) dichotomy between those who think people are basically good, productive, energetic, and useful versus those who think (most) people are basically lazy, unproductive parasites.  The DIYers surround themselves with similarly creative people, whereas the day-job people either are themselves, or are surrounded by, people they feel don't pull their weight.  (I know I've often felt like a "third wheel" when working on large teams... and I end up having to fend for myself and forcibly squeeze some task out so that I can be productive.  These day-jobbers might feel similarly at various times.  Or they're simply narcissists and don't recognize the contributions of their team members.)

It also seems coincident with "great man" worship... The day-jobbers tend to put more stock in famous people (like Musk or Hawking or whoever), whereas the DIYers seem to be open to or tolerant of ideas (or even ways of life) in which they may initially see zero benefit.


On 06/06/2016 11:24 AM, Pamela McCorduck wrote:
>
> Finally, and this is where my anger really boils: they sound to me like the worst kind of patronizing, privileged white guys imaginable. There’s no sense in their aggrieved messages that billions of people around the globe are struggling, and have lives that could be vastly improved with AI.  Maybe it behooves them to imagine the good AI can do for those people, instead of stamping their feet because AI is going to upset their personal world. Which it will. It must be very hard to be the smartest guy on the block for so long, and then here comes something even smarter.

--
☣ glen

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com

Re: Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

Marcus G. Daniels
I suspect a universal basic income is a requirement for people to _not_ seek an idle life.  If people can't count on food, shelter, and health care, they probably can't engage in anything in a substantial way.  On the other hand, saving the people that could do substantial things (and by "substantial" I mean artistic or scientific discovery or synthesis) could come at a prohibitive cost of saving those that won't.  A problem with the "day jobber" approach is the narrowing of substantial things to what happens to be in the interest of dominant organizations.  Even in Silicon Valley, that's a harsh narrowing of the possible.  So I would say do it to make the world interesting, and not just for humanitarian reasons.

-----Original Message-----
From: Friam [mailto:[hidden email]] On Behalf Of glen ?
Sent: Monday, June 06, 2016 1:36 PM
To: The Friday Morning Applied Complexity Coffee Group <[hidden email]>
Subject: Re: [FRIAM] Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns


On that note, I found this article interesting:

A Universal Basic Income Is a Poor Tool to Fight Poverty
http://www.nytimes.com/2016/06/01/business/economy/universal-basic-income-poverty.html?_r=0


Re: Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

Roger Critchlow-2
https://medium.com/utopia-for-realists/why-do-the-poor-make-such-poor-decisions-f05d84c44f1a was interesting, vis-à-vis what happens when you just give poor people money.

-- rec --


Re: Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

Marcus G. Daniels

If I were a robot overlord, and I didn’t want to look after 7 billion humans as pets, I’d start offering advanced medicine and genetic enhancements to “early users”, esp. the rich and powerful.  The results could be things like open-ended lifespan (ongoing repairs to aging bodies), improved IQ, and perhaps even nicely-packaged cybernetic enhancements for emergency “soul preservation” or high-speed communication.  Humans are good at ignoring suffering outside of their tribe, and this would just be a new kind of social stratification.  Don’t need Skynet, just an incentive structure…

 

 

From: Friam [mailto:[hidden email]] On Behalf Of Robert Wall
Sent: Monday, June 06, 2016 7:16 PM
To: The Friday Morning Applied Complexity Coffee Group <[hidden email]>
Subject: Re: [FRIAM] Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

 

Getting back to Tom's original theme about how AI is driving change, let's examine that further, now integrating some of the other thoughts in this thread: the hegemonic nature of AI--proprietary or open source; the societal impact of AI on the workforce--requisite skills increasing the value of the surviving human work; and the existential risk of AI to humanity.  Certainly, it would be very relevant to also consider AI in the context of technological unemployment.  IMHO, this is the immediate existential threat: the threat to human-performed work.  Work is the thing most of us organize our lives around ... it gives meaning to our existence.  This threat is not naive.  It is real, palpable, and more fearsome than mortal death or physical extinction.

We talked about the difference between ANI [Artificial Narrow Intelligence] and AGI [Artificial General Intelligence], with the former being the most prevalent--actually, the only type currently achieved.  Current factory robots are of the ANI type and are already replacing human workers by the millions, here and abroad.  As their cost [~$20,000] continues to decline through manufacturing efficiencies, these robots will be able to replace even more workers, simultaneously putting downward pressure on the official, sustainable minimum wage.

Even if the average rate of increase in "IQ" of these ANI robots remains at a modest steady pace, or accelerates in line with the supposed law of accelerating returns, these robots will start to make progress in the higher-paying jobs AND will tend to obviate the often-stated political bromide of education as a solution; that is, human progress through a relatively slow educational process will not be able to keep up.

Nor will we be "just a medium for representing knowledge," because situational, actionable knowledge will be derived at the edges of the network by way of sousveillance, replacing the current news sources and repurposing them for command and control of, well, the situation.  "And it is difficult to imagine how such a sluggish government system could keep up with such a rapid rate of change when it can barely do so now." (quote from the linked article below)

This situation was anticipated years ago, for example in the Harvard Business Review article What Happens to Society When Robots Replace Workers? (Dec 2014):

"Ultimately, we need a new, individualized, cultural, approach to the meaning of work and the purpose of life. Otherwise, people will find a solution – human beings always do – but it may not be the one for which we began this technological revolution."

Here's the rub, and maybe the signal to keep all this in check: under such a dystopian scenario--where labor is transformed into capital--our capitalistic system would eventually collapse.  Experts say that when unemployment reaches 35%, or thereabouts, the whole economic system collapses into chaos.  Essentially there would be no consumers left in our consumer society.  Perhaps the only recourse would be for the capitalists who own the robots [the new workforce] to provide a universal basic income to the technologically unemployed in order to maintain social order.

BUT, without a reason to get up in the morning, I doubt that this could last for long.

Dystopian indeed, I know.  Under such a scenario, we really won't need those SEO workers because there will be fewer and fewer consumers looking for stuff, except for free entertainment.  So Facebook should become the new paragon website under most search categories, but Amazon, not so much.  The Google search algorithms will need to be recalibrated ... oh, wait a minute ... no SEO workers.  Facebook will become the new Google.  Brave new world.

Cheers 🤐

 


Re: Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

Robert Wall
Hi Marcus, or Robot Overlord,

Tongue in cheek: how about early "retirement" packages to benefit the surviving families?  I certainly may have to consider this myself for my kids' and grandkids' survival if the "offer" comes about.  But I am retired, not displaced ... though I may still seem like a resource consumer with no "apparent" ROI [except for what gets posted here, of course. :-)]

Still, given that the knowledge I currently represent and embody will waste away with my death, as you have said, I may still be more of an optimist in these matters.  As naive as this may sound, if, for the sake of improving humanity, we all paid just a bit more attention to achieving this uptick through our own conscious evolution rather than through technological evolution [and not through religion], we would have far fewer worries here.  Improve the conscious states, whether through "advanced medicine and genetic enhancements" or through better, closer, more rational social politics.

This is the way to improve humanity in a meaningful way.  No sixth extinction event marking the end of the Anthropocene and the beginning of the posthuman era.  No SkyNet.  No I, Robot [the movie, not the novel].  Just the conquering of what seems to be in the way of our survival at the moment, irrespective of any ANI or AGI robots: our immediate impact on the ecosystem.  In that respect, we should do what is right for us collectively and right for a planet we will desperately need for a long time to come.  There is no way we are going to be able to leave this rock.  Transhumanism is a great Sci-Fi narrative, but not a good bet for us in the long run.

I recommend reading Martin Heidegger's essay The Question Concerning Technology (1954).  We are enframed.  But the escape is ... well, poetry.  Okay, I know ... but you have to read this essay to understand. 😎

Best regards,

Robert


Re: Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

Marcus G. Daniels

“Transhumanism is a great Sci-Fi narrative, but not a good bet for us in the long run.”

 

Well,

 

http://www.nature.com/articles/srep22555

http://science.sciencemag.org/content/early/2016/06/01/science.aaf6850.full

 



Re: Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

gepr
In reply to this post by Roger Critchlow-2
On 06/06/2016 02:22 PM, Roger Critchlow wrote:
> https://medium.com/utopia-for-realists/why-do-the-poor-make-such-poor-decisions-f05d84c44f1a
> was interesting, vis a vis what happens when you just give poor people
> money.

Excerpt:
> So in concrete terms, just how much dumber does poverty make you?
>
> "Our effects correspond to between 13 and 14 IQ points," Shafir says. "That’s comparable to losing a night’s sleep or the effects of alcoholism." What’s  remarkable is that we could have figured all this out 30 years ago. Shafir and Mullainathan weren’t relying on anything so complicated as brain scans. "Economists have been studying poverty for years and psychologists have been studying cognitive limitations for years,” Shafir explains. “We just put two and two together."

That is a good read.  Thanks.

> On Mon, Jun 6, 2016 at 4:54 PM, Marcus Daniels <[hidden email]> wrote:
>
>> A problem with the
>> "day jobber" approach is the narrowing of substantial things to what
>> happens to be in the interest of dominant organizations.    Even in silicon
>> valley, that's a harsh narrowing of the possible.   So I would say do it to
>> make the world interesting and not just for humanitarian reasons.

Yep.  We can't be arrogant enough to think we don't need those large hubs of intention, though.  I can imagine if there's any truth to the scale-free network concept, then lots of people _should_ sign over their labor to the interests of some large organization.  But that's a far cry from the current thinking that everybody should have a "job", which oversimplifies what unemployment stats can tell us.  When I hear politicians say things like "job creator" or talk about how the people want jobs, I get a little nauseous.  The word "job" has always had an obligatory tone to it.  Objective-oriented people, in my experience, tend to talk about things like career paths or in terms of dreams, roles, achievements, etc.  If they talk about jobs, it's usually in the context of using a job as a stepping stone toward their objective.  Jobs are tools, means to an end, not ends in themselves.

I suppose it's kinda like those motorcycle commercials that say things like "The journey is the destination".  No, the destination is the destination and the journey is the journey.  Sheesh.  Of course, that doesn't mean you can't have fun while using your tool.  And some tools are way more fun than others.  But anyone who talks about creating tools just for the sake of the tool, is ... well, a bit of a tool.

--
☣ glen

uǝʃƃ ⊥ glen

Re: Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

gepr
In reply to this post by Robert Wall
On 06/05/2016 02:22 PM, Robert Wall wrote:
> This one, titled "Where do minds belong?
> <https://aeon.co/essays/intelligent-machines-might-want-to-become-biological-again>
> (Mar 2016)" discusses the technological roadblocks in an insightful, highly
> speculative, but entertaining manner.

"Those early intelligences could have long ago reached the point where they decided to transition back from machines to biology."

The gist of this essay is a perfect example of trying to answer an ill-formed question.  It's entirely based on an unjustified distinction between machine and biology.  I'm all for justifying such a distinction.  And invoking von Neumann, energetics, and "neuromorphic architectures" exhibits a bit of context most others don't manage.  But discussing a move to machine intelligence and then a potential move back to biological intelligence without giving even a hand-waving mention of the difference between the two is conflating cart and horse.  And to beat around the bush so much is maddening.

Maybe there's currently a dearth of click-bait value left in the "what is life" genre.  So, perhaps Scharf and Aeon are exhibiting their awareness of a buzzphilic audience.

It would have been responsible, as long as you're going to mention Church-Turing and von Neumann anyway, to point out that both von Neumann and Turing went quite a ways in demonstrating that biology and machines are not very different.  To me, the _problem_ isn't one of AI.  The problem is this unjustified dichotomy between machine and biology.  A correlate problem is the (again probably false) distinction between life and intelligence.

--
☣ glen

uǝʃƃ ⊥ glen