The field of AI itself has had a major project underway for a couple of years to address these issues. It's called AI100 (the One Hundred Year Study on Artificial Intelligence), and it is funded by one of the wealthy leaders of the field, Eric Horvitz at Microsoft. The project is headquartered at Stanford University.
It is funded for a century. Yes, that’s a century, on the grounds that whatever is decided in five years or ten years will need to be revisited five years or ten years on, again and again. Its staff consists of leading members of the field (who really know what the field can and cannot do), and they will be joined by ethicists, economists, philosophers and others (maybe already are) as the project moves along. (Their first report was rather scathing about Ray Kurzweil and the singularity, but that’s another issue.)
Musk, Hawking, et al. are very good at getting publicity, but their first great solution last summer was to send a petition to the U.N., which they did with great fanfare. Of course nothing happened, and nothing could. This is the level of naivete (and, sorry, self-importance) these men exhibit.
I also find them more than a bit hypocritical. Musk is not giving up his smartphone, and Hawking concedes that he loves what AI has done for him personally (in terms of vocal communication), but maybe others shouldn't be allowed to handle this…
Finally, and this is where my anger really boils: they sound to me like the worst kind of patronizing, privileged white guys imaginable. There’s no sense in their aggrieved messages that billions of people around the globe are struggling, and have lives that could be vastly improved with AI. Maybe it behooves them to imagine the good AI can do for those people, instead of stamping their feet because AI is going to upset their personal world. Which it will. It must be very hard to be the smartest guy on the block for so long, and then here comes something even smarter.
Pamela
There is a large group of distinguished people, including Elon Musk, Stephen Hawking, Bill Joy, and Martin Rees, who believe that AI is an existential threat and that the probability of the human race surviving another 100 years is less than 50/50. Stephen Hawking has said he has no idea what to do about it. Bill Joy's (non) solution is better ethical education for workers in the area. I can't see how open source will prevent the dangers they worry about. Martin Rees has an institute at Cambridge that worries about these things.
Ed
_______________________
Ed Angel
Founding Director, Art, Research, Technology and Science Laboratory (ARTS Lab)
Professor Emeritus of Computer Science, University of New Mexico
1017 Sierra Pinon
I have some grave concerns about AI being concentrated in the hands of a few big firms: Google, Facebook, Amazon, and so on. Elon Musk says the answer is open sourcing, but I'm skeptical. That said, I'd be interested in hearing other people's solutions. Then again, you may not think it's a problem.
Hi Tom,
Interesting article about Google and their foray [actually a Blitzkrieg, as they are buying up all of the brain trust in this area] into the world of machine learning, presumably to improve the search customer experience. Could their efforts actually have unintended consequences for both the search customer and the marketing efforts of website owners? It is interesting to consider. For example, in the former case, Google picking WebMD as the paragon website for the healthcare industry flies in the face of my own experience and, say, this New York Times Magazine article: "A Prescription for Fear" (Feb 2011). Will this actually make WebMD the de facto paragon in the minds of searchers? In the latter case, successful web marketing becomes increasingly subject to the latest Google search algorithms rather than to the expertise of in-house marketing departments. Of course, this is the nature of SEO--to game the algorithms to attract better rankings. But it seems those in-house marketing departments will need to up their game:
In other ways, things are a bit harder. The field of SEO will continue to become extremely technical. Analytics and big data are the order of the day, and any SEO that isn’t familiar with these approaches has a lot of catching up to do. Those of you who have these skills can look forward to a big payday.
Also, with respect to those charts anticipating exponential growth for AGI technology--even eclipsing human intelligence by mid-century--there is much reason to see this as overly optimistic [see, for example, Hubert Dreyfus' critique of Good Old-Fashioned AI, "What Computers Can't Do"]. These charts remind me a bit of the "ultraviolet catastrophe" around the end of the 19th century. There are physical limitations that may well tamp down progress and keep it to ANI. With respect to AGI, there have been some pointed challenges to this "Law of Accelerating Returns."
Nonetheless, this whole discussion is quite intriguing, no matter your stance, hopes, or fears. 😎
Cheers,
Robert
============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe
http://redfish.com/mailman/listinfo/friam_redfish.com