'''Luke Muehlhauser''' is a former Executive Director of the [[Machine Intelligence Research Institute]]. He grew up as an evangelical Christian. After leaving Christianity, he ran the popular skeptic blog Common Sense Atheism. As he became more involved with the [[LessWrong]] community, he decided to shift his focus to [[AI safety]]. He joined MIRI (at the time known as the Singularity Institute) in 2011 as a researcher, and then was promoted to Executive Director. After leaving MIRI in 2015 (with [[Nate Soares]] succeeding him as Executive Director), he joined the [[Open Philanthropy Project]] as a Research Analyst, where his work has focused on topics such as [[AI timelines]], improving [[superforecasting|forecast accuracy]], and the question of the distribution of [[consciousness]].<ref>[http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-timelines What Do We Know about AI Timelines?]</ref><ref>[http://www.openphilanthropy.org/blog/efforts-improve-accuracy-our-judgments-and-forecasts Efforts to Improve the Accuracy of Our Judgments and Forecasts]</ref><ref>[http://www.openphilanthropy.org/2017-report-consciousness-and-moral-patienthood 2017 Report on Consciousness and Moral Patienthood]</ref>

== External links ==