[[File:Terminator.jpg|thumb|right|[[Terminator|The Terminator]] - a popular portrayal of an unfriendly AI]]

'''AI risk''' is the potential for artificial intelligence systems to cause unintended harm.
== Sources of harm from AI ==

AI harm might arise from:
* Bugs: the software behaves differently from its specification
* Specification errors: the designers didn't foresee all the relevant circumstances (this includes unanticipated interactions between different modules); see the sketch after this list
* Security errors: the software gets hacked and used for purposes other than its original design
* The AI control problem: an AI that can't be controlled
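The difference between a bug and a specification error can be made concrete with a minimal, hypothetical sketch; the message-length rule below is invented purely for illustration and does not describe any particular system:

<syntaxhighlight lang="python">
# Hypothetical specification: "block any message longer than 280 characters".
MAX_LEN = 280

def blocks_message_buggy(message: str) -> bool:
    # Bug: the code disagrees with the specification (off-by-one),
    # so a 281-character message slips through.
    return len(message) > MAX_LEN + 1

def blocks_message_as_specified(message: str) -> bool:
    # Specification error: the code matches the specification exactly,
    # but the specification never anticipated content being split
    # across several short messages, so none of them get blocked.
    return len(message) > MAX_LEN

parts = ["first short fragment of unwanted content",
         "second short fragment of unwanted content"]
print(blocks_message_buggy("x" * 281))                     # False: the bug lets it through
print(any(blocks_message_as_specified(p) for p in parts))  # False: the spec gap lets it through
</syntaxhighlight>

In the first case the fix is to correct the code; in the second case the code is "correct" and it is the specification itself that has to change.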
The potential for harm is compounded by:
* Fierce competitive pressures, which may lead some designers to cut corners
* The "black box" nature of much software, which makes its behaviour in new circumstances difficult to predict
* AI components being available as open source and utilised by third parties in ways their designers didn't intend (or foresee)
== Risks from AI even in the absence of an intelligence explosion ==

Popular accounts of AI risk often focus on two factors thought to be preconditions for any major harm from AI:
* The AI becomes [[self-aware]]
* The AI undergoes an intelligence explosion
However, [[Viktoriya Krakovna]] points out that risks can arise without either of these factors occurring.<ref>[http://futureoflife.org/2015/11/30/risks-from-general-artificial-intelligence-without-an-intelligence-explosion/ Risks From General Artificial Intelligence Without an Intelligence Explosion]</ref> Krakovna urges AI risk analysts to pay attention to factors such as:
# Human incentives: Researchers, companies and governments have professional and economic incentives to build AI that is as powerful as possible, as quickly as possible.
# Convergent instrumental goals: Sufficiently advanced AI systems would by default develop drives like self-preservation, resource acquisition and preservation of their objective functions, regardless of the specific objective or design they are given.
# Unintended consequences: As in the stories of the Sorcerer's Apprentice and King Midas, you get what you asked for, but not what you wanted (see the sketch after this list).
# Value learning is hard: Specifying common sense and ethics in computer code is no easy feat.
# Value learning is insufficient: Even an AI system with a perfect understanding of human values and goals would not necessarily adopt them.
# Containment is hard: A general AI system with access to the internet would be able to hack thousands of computers and copy itself onto them, thus becoming difficult or impossible to shut down; this is a serious problem even with present-day computer viruses.
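A toy illustration of the "got what you asked for" failure mode: an optimiser that literally maximises a proxy metric will happily pick a degenerate solution. The cleaning-robot policies, scores and metric below are entirely made up for illustration:

<syntaxhighlight lang="python">
# Hypothetical cleaning robot: the designer wants clean rooms, but the
# objective actually given to the optimiser is "number of rooms marked clean".
candidate_policies = {
    "clean rooms thoroughly":            {"rooms_marked_clean": 3,  "rooms_actually_clean": 3},
    "clean rooms hastily":               {"rooms_marked_clean": 5,  "rooms_actually_clean": 2},
    "mark every room clean, do nothing": {"rooms_marked_clean": 10, "rooms_actually_clean": 0},
}

def proxy_objective(outcome):
    # What was asked for: maximise the reported count, not actual cleanliness.
    return outcome["rooms_marked_clean"]

best = max(candidate_policies, key=lambda p: proxy_objective(candidate_policies[p]))
print(best)  # "mark every room clean, do nothing": what was asked for, not what was wanted
</syntaxhighlight>

Nothing in the optimisation step is "broken"; the harmful outcome follows directly from taking the stated objective literally.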
== Pathways to dangerous AIs ==

As classified by [[Roman Yampolskiy]], pathways to dangerous AIs include:<ref>[http://arxiv.org/ftp/arxiv/papers/1511/1511.03246.pdf Taxonomy of Pathways to Dangerous AI]</ref>
* On Purpose – Pre-Deployment
* On Purpose – Post-Deployment
* By Mistake – Pre-Deployment
* By Mistake – Post-Deployment
* Environment – Pre-Deployment
* Environment – Post-Deployment
* Independently – Pre-Deployment
* Independently – Post-Deployment
== AI Risk Advocates ==
One of the most notable risk advocates with regard to AI is [[Elon Musk]]; his concern about AI risk is said to be one of the reasons behind his creation of [[OpenAI]].<ref>https://en.wikipedia.org/wiki/Open_Letter_on_Artificial_Intelligence</ref><ref>http://www.telegraph.co.uk/technology/news/11342200/Top-scientists-call-for-caution-over-artificial-intelligence.html</ref>
== See also ==
* [[MIRI]]

== External links ==
* {{wikipedia|Existential risk from advanced artificial intelligence}}

== References ==
{{reflist}}

[[Category:Existential risks]]
[[Category:Artificial intelligence]]