FYI, Santa Fe folks.
-tj
============================================
Tom Johnson
Institute for Analytic Journalism -- Santa Fe, NM USA
505.577.6482(c)  505.473.9646(h)
Society of Professional Journalists - Region 9 Director
Join more than 1,500 journalists Sept. 18-20 at Excellence in Journalism 2015 in Orlando. #EIJ15 Orlando
============================================

Can We Reshape Humanity's Deep Future?
Possibilities & Risks of Artificial Intelligence (AI), Human Enhancement, and Other Emerging Technologies

WHERE: The James A. Little Theater at the New Mexico School for the Deaf

Dr. Nick Bostrom spends much of his time weighing the possible rewards and dangers of rapid technological advances -- how such advances will likely alter the course of human evolution and life as we know it. One useful concept in untangling this puzzle is existential risk: the question of whether an adverse outcome would end intelligent human life or drastically curtail what we, in the infancy of the twenty-first century, would consider a viable future. Figuring out how to reduce existential risk even slightly brings into play an array of thought-provoking issues. In this engaging lecture, Professor Bostrom will present the factors to be taken into consideration.
About Nick Bostrom

Nick Bostrom is Professor in the Faculty of Philosophy at Oxford University. He is the founding director of the Future of Humanity Institute, a multidisciplinary research center that enables a few exceptional mathematicians, philosophers, and scientists to think carefully about global priorities and big questions for humanity. He is the recipient of a Eugene R. Gannon Award and has been listed on Foreign Policy's Top 100 Global Thinkers list. He was included on Prospect magazine's World Thinkers list as the youngest person in the top fifteen from all fields and the highest-ranked analytic philosopher. His writings have been translated into twenty-four languages. Bostrom's background includes physics, computational neuroscience, and mathematical logic as well as philosophy. He is the author of some 200 publications, including Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (ed., OUP, 2008), Human Enhancement (ed., OUP, 2009), and Superintelligence: Paths, Dangers, Strategies (OUP, 2014), a New York Times bestseller. He is best known for his work in five areas: existential risk; the simulation argument; anthropics; impacts of future technology; and implications of consequentialism for global strategy. He has been referred to as one of the most important thinkers of our age.

SAR thanks these sponsors for underwriting this lecture:
Slate, Sept. 2014: "You Should Be Terrified of Superintelligent Machines"
In the recent discussion over the risks of developing superintelligent machines -- that is, machines with general intelligence greater than that of humans -- two narratives have emerged. One side argues that if a machine ever achieved advanced intelligence, it would automatically know and care about human values and wouldn't pose a threat to us. The opposing side argues that artificial intelligence would "want" to wipe humans out, either out of revenge or an intrinsic desire for survival.

Aeon Magazine, Feb. 2013: "Omens"
To understand why an AI might be dangerous, you have to avoid anthropomorphising it. When you ask yourself what it might do in a particular situation, you can't answer by proxy. You can't picture a super-smart version of yourself floating above the situation. Human cognition is only one species of intelligence, one with built-in impulses like empathy that colour the way we see the world, and limit what we are willing to do to accomplish our goals. But these biochemical impulses aren't essential components of intelligence. They're incidental software applications, installed by aeons of evolution and culture. Bostrom told me that it's best to think of an AI as a primordial force of nature, like a star system or a hurricane -- something strong, but indifferent.

TEDx Talks (YouTube), Apr. 2015: "What happens when our computers get smarter than we are?"
Artificial intelligence is getting smarter by leaps and bounds -- within this century, research suggests, a computer AI could be as "smart" as a human being. Nick Bostrom asks us to think hard about the world we're building right now, driven by thinking machines. Will our smart machines help to preserve humanity and our values -- or will they have values of their own?
Become a Member of SAR!
A School for Advanced Research membership opens doors to exploring a world of ideas about past and present peoples around the world and in the Southwest, as well as Native American life and arts. Become an SAR member today. Individual memberships start at $50.

==============================
Dorothy H. Bracey -- Santa Fe, NM US
[hidden email]
==============================

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
============================================================
I plan to be there. This should be fascinating! George Duncan
georgeduncanart.com (505) 983-6895
Represented by ViVO Contemporary 725 Canyon Road
Santa Fe, NM 87501
My art theme: Dynamic application of matrix order and luminous chaos.

"Attempt what is not certain. Certainty may or may not come later. It may then be a valuable delusion."
-- From "Notes to myself on beginning a painting" by Richard Diebenkorn

On Mon, May 18, 2015 at 4:54 PM, Tom Johnson <[hidden email]> wrote:
Nick Bostrom’s book is really interesting, and I recommend it. I’m sure his talk will be stimulating. I may be in Santa Fe myself by then.
Pamela
cdobson@okstate.edu

On Mon, May 18, 2015 at 4:54 PM, Tom Johnson <[hidden email]> wrote:
Thanks so much, Tom. I've got my ticket. Sounds wonderful. See you there.

On Tue, May 19, 2015 at 10:22 PM, John Dobson <[hidden email]> wrote:
-- Merle Lefkoff, Ph.D.
President, Center for Emergent Diplomacy
Santa Fe, New Mexico, USA
[hidden email]
mobile: (303) 859-5609
skype: merlelefkoff
I have ordered 2 tickets. Should be interesting. Thanks, Tom. George Duncan
georgeduncanart.com (505) 983-6895
Represented by ViVO Contemporary 725 Canyon Road
Santa Fe, NM 87501
My art theme: Dynamic application of matrix order and luminous chaos.

"Attempt what is not certain. Certainty may or may not come later. It may then be a valuable delusion."
-- From "Notes to myself on beginning a painting" by Richard Diebenkorn

On Wed, May 20, 2015 at 9:48 AM, Merle Lefkoff <[hidden email]> wrote: