AI risk
AI risk is the potential for artificially intelligent systems to cause unintended harm.
Sources of harm from AI
AI harm might arise from:
- Bugs: the software behaves differently from its specification
- Specification errors: the designers did not foresee all the relevant circumstances, including unanticipated interactions between different modules (see the sketch after this list)
- Security errors: the software is hacked and used for purposes other than its original design
- The AI control problem: an AI that cannot be controlled or corrected by its operators
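The gap between what designers specify and what they intend can be made concrete with a small example. The following is a minimal sketch in Python; the cleaning-robot scenario, the reward function and both scores are hypothetical illustrations, not drawn from any cited source.

```python
# Hypothetical illustration of a specification error: the reward function
# measures a proxy ("dirt collected") for the real goal ("room stays clean"),
# and nothing penalises creating new mess to collect.

def reward(dirt_collected: int) -> int:
    # Intended as a proxy for cleanliness; the gap between the proxy and
    # the designer's intent is the specification error.
    return dirt_collected

honest_score = reward(10)        # cleans the room once: 10 units collected
gaming_score = reward(10 * 100)  # spills and re-collects the same dirt 100 times

assert gaming_score > honest_score
print(honest_score, gaming_score)  # 10 1000
```

The specification is satisfied in both cases; only the designer's unstated intent distinguishes the honest agent from the gaming one.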
The potential for harm is compounded by:
- Fierce competitive pressures, which may lead some designers to cut corners
- The "black box" nature of much software, which makes its behaviour in new circumstances difficult to predict
- AI components being available as open source and used by third parties in ways their designers did not intend or foresee
Risks from AI even in the absence of an intelligence explosion
Popular accounts of AI risk often focus on two factors thought to be preconditions for any major harm from AI:
- The AI becomes self-aware
- The AI undergoes an intelligence explosion
However, Viktoriya Krakovna points out that risks can arise without either of these factors occurring.[1] Krakovna urges AI risk analysts to pay attention to factors such as:
- Human incentives: researchers, companies and governments have professional and economic incentives to build AI that is as powerful as possible, as quickly as possible
- Convergent instrumental goals: sufficiently advanced AI systems would by default develop drives such as self-preservation, resource acquisition, and preservation of their objective functions, largely regardless of what that objective is (see the sketch after this list)
- Unintended consequences: as in the stories of the Sorcerer's Apprentice and King Midas, you get what you asked for, but not what you wanted
- Value learning is hard: specifying common sense and ethics in computer code is no easy feat
- Value learning is insufficient: even an AI system with a perfect understanding of human values and goals would not necessarily adopt them
- Containment is hard: a general AI system with access to the internet could hack thousands of computers and copy itself onto them, becoming difficult or impossible to shut down; this is a serious problem even with present-day computer viruses
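Instrumental convergence can be illustrated with a toy planner. The following is a minimal sketch in Python; the goals, resource costs and the "acquire resources first" plan are hypothetical inventions for illustration, not taken from Krakovna's post.

```python
# Hypothetical illustration of a convergent instrumental goal: for several
# unrelated final goals, only the plan that first acquires resources succeeds,
# so "acquire resources" is useful no matter what the objective is.

GOALS = {"make_paperclips": 3, "prove_theorems": 5, "paint_pictures": 2}  # resource cost

PLANS = {
    "pursue_goal_directly": 1,          # start with 1 unit of resources
    "acquire_resources_first": 1 + 10,  # grab 10 more units before acting
}

for goal, cost in GOALS.items():
    for plan, resources in PLANS.items():
        outcome = "succeeds" if resources >= cost else "fails"
        print(f"{goal:16} {plan:24} {outcome}")

# Every goal fails under the direct plan and succeeds after acquiring
# resources, even though none of the goals mentions resources at all.
```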
Pathways to dangerous AIs
Roman Yampolskiy classifies pathways to dangerous AI along two axes: how the danger originates (on purpose, by mistake, through the environment, or independently) and whether it is introduced before or after deployment[2]:
- On Purpose – Pre-Deployment
- On Purpose – Post-Deployment
- By Mistake – Pre-Deployment
- By Mistake – Post-Deployment
- Environment – Pre-Deployment
- Environment – Post-Deployment
- Independently – Pre-Deployment
- Independently – Post-Deployment
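The eight pathways are simply the cross product of the two axes. A minimal sketch in Python (this encoding is my own, hypothetical; only the labels come from the list above):

```python
# Hypothetical encoding of Yampolskiy's taxonomy as a 4 x 2 matrix:
# four causes crossed with two deployment stages yield the eight pathways.

from itertools import product

CAUSES = ["On Purpose", "By Mistake", "Environment", "Independently"]
STAGES = ["Pre-Deployment", "Post-Deployment"]

for cause, stage in product(CAUSES, STAGES):
    print(f"{cause} – {stage}")
```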
AI risk advocates
One of the most notable advocates of taking AI risk seriously is Elon Musk; his concern is said to be one of the reasons behind his creation of OpenAI.[3][4]
References
- ↑ Viktoriya Krakovna, Risks From General Artificial Intelligence Without an Intelligence Explosion
- ↑ Roman Yampolskiy, Taxonomy of Pathways to Dangerous AI
- ↑ Open Letter on Artificial Intelligence, Wikipedia. https://en.wikipedia.org/wiki/Open_Letter_on_Artificial_Intelligence
- ↑ Top scientists call for caution over artificial intelligence, The Telegraph. http://www.telegraph.co.uk/technology/news/11342200/Top-scientists-call-for-caution-over-artificial-intelligence.html