AI safety
[[File:AISafety teaching comic.jpg|thumb|right|comic relief]]
'''AI safety''' is a field of research addressing a sequence of increasingly specific problems that arise when developing [[Artificial intelligence]] and [[AGI]]. Its goals center on reducing the risks posed by AI, especially powerful AI, and include problems of misuse, robustness, reliability, security, and privacy. It subsumes AI control: ensuring that AI systems try to do the right thing, and in particular that they don’t competently pursue the wrong thing. [[Value alignment]] is the related problem of understanding how to build AI systems that share human preferences/values, typically by learning them from humans.<ref>https://ai-alignment.com/ai-safety-vs-control-vs-alignment-2a4b42a863cc</ref>
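
Learning values from humans is often operationalized as preference learning. The sketch below is an illustration of that idea only, assuming a Bradley-Terry preference model and synthetic data; it is not a method described in the cited article. It fits a linear reward model from pairwise human judgments that one behavior is preferable to another:

<syntaxhighlight lang="python">
# Minimal sketch: learn a reward model from pairwise human preferences
# (Bradley-Terry model). Data and model choice are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: feature vectors for pairs of AI behaviors, where a
# human judged the first behavior in each pair preferable to the second.
preferred = rng.normal(0.5, 1.0, size=(200, 4))
rejected = rng.normal(-0.5, 1.0, size=(200, 4))

w = np.zeros(4)   # reward-model weights
lr = 0.1
for _ in range(500):
    # Bradley-Terry: P(preferred beats rejected) = sigmoid(r_a - r_b)
    diff = preferred @ w - rejected @ w
    p = 1.0 / (1.0 + np.exp(-diff))
    # Gradient ascent on the log-likelihood of the human judgments
    grad = ((1.0 - p)[:, None] * (preferred - rejected)).mean(axis=0)
    w += lr * grad

print("learned reward weights:", w)
</syntaxhighlight>

Behaviors that score highly under the learned reward function are those the (simulated) human tends to prefer; RLHF-style pipelines then optimize an AI system against such a learned reward model.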


== Dr. Roman V. Yampolskiy/Alex Klokus - What We Need To Know About A.I. - WGS 2018 ==