LessWrong
LessWrong (also written Less Wrong) is a discussion forum focused on rationality and futurist thinking, founded by Eliezer Yudkowsky. It is operated by the Machine Intelligence Research Institute.
History
According to the LessWrong FAQ, the site developed out of Overcoming Bias, an earlier group blog focused on human rationality. Overcoming Bias originated in November 2006, with artificial intelligence (AI) theorist Eliezer Yudkowsky and economist Robin Hanson as the principal contributors. In February 2009, Yudkowsky's posts were used as the seed material to create the community blog LessWrong, and Overcoming Bias became Hanson's personal blog.
LessWrong has been closely associated with the effective altruism movement. Effective-altruism-focused charity evaluator GiveWell has benefited from outreach to LessWrong.
Roko's basilisk
In July 2010, LessWrong contributor Roko posted a thought experiment to the site in which an otherwise benevolent future AI system tortures anyone who does not work to bring the system into existence. This idea came to be known as "Roko's basilisk," based on Roko's idea that merely hearing about the idea would give the hypothetical AI system stronger incentives to employ blackmail. Yudkowsky deleted Roko's posts on the topic, later writing that he did so because although Roko's reasoning was mistaken, the topic shouldn't be publicly discussed in case some version of the argument could be made to work. Discussion of Roko's basilisk was banned on LessWrong for several years thereafter.
Media coverage
LessWrong has been covered in Business Insider and Slate. Core concepts from LessWrong have been referenced in columns in The Guardian.
LessWrong has been mentioned briefly in articles related to the technological singularity and the work of the Machine Intelligence Research Institute (formerly called the Singularity Institute). It has also been mentioned, in a positive light, in articles about online monarchists and neo-reactionaries.[1]
Jargon and community
LessWrong uses an extensive set of in-group jargon and memes. Useful ones to know are:
- Signaling theory - the study of behaviour undertaken to convey information about the actor rather than for its direct usefulness
- Paperclip maximizer - a cautionary thought experiment about AI
- The Sequences - a series of essays by Eliezer Yudkowsky, later collected in Rationality: From AI to Zombies
- Applied Rationality - decision-making ideas, broadly utilitarian in flavour, advocated by organisations such as the Center for Applied Rationality
- Effective altruism - charity meets applied rationality
There are also meetup groups around the world for people who subscribe to the associated ideas. In recent years this broader community, often referred to as the 'rationalist movement', has become only loosely associated with the main site.
Current status
LessWrong is currently far less active than at its 2012 peak, with many core contributors having gone on to start their own blogs or otherwise join what is commonly known as the LessWrong diaspora.[2]
External links
- lesswrong.com
- wiki.lesswrong.com
- Facebook Group - Brain Debugging Discussion
- Facebook Group - More Wrong
- Less Wrong Slack network
- Less Wrong on Wikipedia
- lesswrong on Twitter