[[File:AISafety teaching comic.jpg|thumb|right|comic relief]]
'''AI safety''' is a field of research concerned with reducing risks posed by [[Artificial intelligence]] and [[AGI]], especially powerful AI. It covers problems of misuse, robustness, reliability, security, privacy, and related concerns, and subsumes AI control. AI control: ensuring that AI systems try to do the right thing, and in particular that they don't competently pursue the wrong thing. [[Value alignment]]: understanding how to build AI systems that share human preferences/values, typically by learning them from humans.<ref>https://ai-alignment.com/ai-safety-vs-control-vs-alignment-2a4b42a863cc</ref>
== Dr. Roman V. Yampolskiy/Alex Klokus - What We Need To Know About A.I. - WGS 2018 ==