Fwd: SAR lecture - of interest to FRIAM folks?


Fwd: SAR lecture - of interest to FRIAM folks?

Tom Johnson
FYI, Santa Fe folks.
-tj

============================================
Tom Johnson
Institute for Analytic Journalism   --     Santa Fe, NM USA
505.577.6482(c)                                    505.473.9646(h)
Society of Professional Journalists   -   Region 9 Director
Join more than 1,500 journalists Sept. 18-20 at
Excellence in Journalism 2015 in Orlando.  #EIJ15 Orlando
http://www.jtjohnson.com                   [hidden email]
============================================


Can We Reshape Humanity’s Deep Future?

Possibilities & Risks of Artificial Intelligence (AI), Human Enhancement, and Other Emerging Technologies


WHERE: The James A. Little Theater at the New Mexico School for the Deaf.
WHEN: Sunday, June 7, 2015, 2:00 pm
TICKETS: Book your seats now | More info.


Dr. Nick Bostrom spends much of his time calculating the possible rewards and dangers of rapid technological advances — how such advances will likely alter the course of human evolution and life as we know it. One useful concept in untangling this puzzle is existential risk — the question of whether an adverse outcome would end human intelligent life or drastically curtail what we, in the infancy of the twenty-first century, would consider a viable future. Figuring out how to reduce existential risk even slightly brings into play an array of thought-provoking issues. In this engaging lecture, Professor Bostrom will present the factors to be taken into consideration:

  • Future technology and its capabilities
  • Anthropics
  • Population ethics
  • Human enhancement ethics
  • Game theory
  • Fermi paradox

About Nick Bostrom

Nick Bostrom is Professor in the Faculty of Philosophy at Oxford University. He is the founding director of the Future of Humanity Institute, a multidisciplinary research center that enables a few exceptional mathematicians, philosophers, and scientists to think carefully about global priorities and big questions for humanity.

He is the recipient of the Eugene R. Gannon Award and has been listed on Foreign Policy’s Top 100 Global Thinkers list. He was included on Prospect magazine’s World Thinkers list as the youngest person in the top fifteen from all fields and the highest-ranked analytic philosopher. His writings have been translated into twenty-four languages.

Bostrom’s background includes physics, computational neuroscience, and mathematical logic as well as philosophy. He is the author of some 200 publications, including Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (ed., OUP, 2008), Human Enhancement (ed., OUP, 2009), and Superintelligence: Paths, Dangers, Strategies (OUP, 2014), a New York Times bestseller. He is best known for his work in five areas: existential risk; the simulation argument; anthropics; impacts of future technology; and implications of consequentialism for global strategy. He has been referred to as one of the most important thinkers of our age.


SAR thanks these sponsors for underwriting this lecture:


 

Slate, Sept. 2014:

You Should Be Terrified of Superintelligent Machines

In the recent discussion over the risks of developing superintelligent machines—that is, machines with general intelligence greater than that of humans—two narratives have emerged. One side argues that if a machine ever achieved advanced intelligence, it would automatically know and care about human values and wouldn’t pose a threat to us. The opposing side argues that artificial intelligence would “want” to wipe humans out, either out of revenge or an intrinsic desire for survival. 

As it turns out, both of these views are wrong. 

Read more >

Aeon Magazine, Feb. 2013:

Omens

To understand why an AI might be dangerous, you have to avoid anthropomorphising it. When you ask yourself what it might do in a particular situation, you can’t answer by proxy. You can't picture a super-smart version of yourself floating above the situation. Human cognition is only one species of intelligence, one with built-in impulses like empathy that colour the way we see the world, and limit what we are willing to do to accomplish our goals. But these biochemical impulses aren’t essential components of intelligence. They’re incidental software applications, installed by aeons of evolution and culture. Bostrom told me that it’s best to think of an AI as a primordial force of nature, like a star system or a hurricane — something strong, but indifferent.

Read more >

TEDx/Youtube, Apr. 2015:

TEDx Talks: What happens when our computers get smarter than we are?


Artificial intelligence is getting smarter by leaps and bounds — within this century, research suggests, a computer AI could be as “smart” as a human being. Nick Bostrom asks us to think hard about the world we're building right now, driven by thinking machines. Will our smart machines help to preserve humanity and our values — or will they have values of their own?

Become a Member of SAR!

A School for Advanced Research membership opens doors to exploring a world of ideas about past and present peoples around the world and in the Southwest, as well as Native American life and arts. Become an SAR member today. Individual memberships start at $50.   Click here to join!




Header image, copyright: / 123RF Stock Photo


==============================
Dorothy H. Bracey -- Santa Fe, NM US
[hidden email]                      
==============================



============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com

Re: Fwd: SAR lecture - of interest to FRIAM folks?

George Duncan-2
I plan to be there. This should be fascinating!

George Duncan
georgeduncanart.com
(505) 983-6895 
Represented by ViVO Contemporary
725 Canyon Road
Santa Fe, NM 87501
 
My art theme: Dynamic application of matrix order and luminous chaos.

"Attempt what is not certain. Certainty may or may not come later. It may then be a valuable delusion."

From "Notes to myself on beginning a painting" by Richard Diebenkorn

On Mon, May 18, 2015 at 4:54 PM, Tom Johnson <[hidden email]> wrote:
FYI, Santa Fe folks.
-tj


Re: Fwd: SAR lecture - of interest to FRIAM folks?

Pamela McCorduck
Nick Bostrom’s book is really interesting, and I recommend it. I’m sure his talk will be stimulating. I may be in Santa Fe myself by then.

Pamela


[ SPAM ] Re: Fwd: SAR lecture - of interest to FRIAM folks?

John Dobson
In reply to this post by Tom Johnson
cdobson@okstate.edu

On Mon, May 18, 2015 at 4:54 PM, Tom Johnson <[hidden email]> wrote:
FYI, Santa Fe folks.
-tj


Re: [ SPAM ] Re: Fwd: SAR lecture - of interest to FRIAM folks?

Merle Lefkoff-2
Thanks so much, Tom.  I've got my ticket. Sounds wonderful.  See you there.



On Tue, May 19, 2015 at 10:22 PM, John Dobson <[hidden email]> wrote:
cdobson@okstate.edu

On Mon, May 18, 2015 at 4:54 PM, Tom Johnson <[hidden email]> wrote:
FYI, Santa Fe folks.
-tj




--
Merle Lefkoff, Ph.D.
President, Center for Emergent Diplomacy
Santa Fe, New Mexico, USA
[hidden email]
mobile:  (303) 859-5609
skype:  merlelefkoff


Re: [ SPAM ] Re: Fwd: SAR lecture - of interest to FRIAM folks?

George Duncan-2
I have ordered 2 tickets. Should be interesting. Thanks, Tom.

George Duncan
georgeduncanart.com
(505) 983-6895 
Represented by ViVO Contemporary
725 Canyon Road
Santa Fe, NM 87501
 
My art theme: Dynamic application of matrix order and luminous chaos.

"Attempt what is not certain. Certainty may or may not come later. It may then be a valuable delusion."

From "Notes to myself on beginning a painting" by Richard Diebenkorn

On Wed, May 20, 2015 at 9:48 AM, Merle Lefkoff <[hidden email]> wrote:
Thanks so much, Tom.  I've got my ticket. Sounds wonderful.  See you there.



On Tue, May 19, 2015 at 10:22 PM, John Dobson <[hidden email]> wrote:
cdobson@okstate.edu

On Mon, May 18, 2015 at 4:54 PM, Tom Johnson <[hidden email]> wrote:
FYI, Santa Fe folks.
-tj
