===Aren’t these future technologies very risky? Could they even cause our extinction?===

Yes, and this implies an urgent need to analyze the risks before they materialize and to take steps to reduce them. Biotechnology, nanotechnology, and artificial intelligence pose especially serious risks of accidents and abuse. [See also “If these technologies are so dangerous, should they be banned? What can be done to reduce the risks?”]

One can distinguish between, on the one hand, endurable or limited hazards, such as car crashes, nuclear reactor meltdowns, carcinogenic pollutants in the atmosphere, floods, volcano eruptions, and so forth, and, on the other hand, existential risks – events that would cause the extinction of intelligent life or permanently and drastically cripple its potential. While endurable or limited risks can be serious – and may indeed be fatal to the people immediately exposed – they are recoverable; they do not destroy the long-term prospects of humanity as a whole. Humanity has long experience with endurable risks, and a variety of institutional and technological mechanisms have been employed to reduce their incidence.

Existential risks are a different kind of beast. For most of human history, there were no significant existential risks, or at least none that our ancestors could do anything about. By definition, of course, no existential disaster has yet happened. As a species we may therefore be less well prepared to understand and manage this new kind of risk. Furthermore, the reduction of existential risk is a global public good (everybody by necessity benefits from such safety measures, whether or not they contribute to their development), creating a potential free-rider problem, i.e. a lack of sufficient selfish incentives for people to make sacrifices to reduce an existential risk. Transhumanists therefore recognize a moral duty to promote efforts to reduce existential risks.
The gravest existential risks facing us in the coming decades will be of our own making. These include:

'''Destructive uses of nanotechnology.''' The accidental release of a self-replicating nanobot into the environment, where it would proceed to destroy the entire biosphere, is known as the “gray goo scenario”. Since molecular nanotechnology will make use of positional assembly to create non-biological structures and to open new chemical reaction pathways, there is no reason to suppose that the ecological checks and balances that limit the proliferation of organic self-replicators would also contain nano-replicators. Yet, while gray goo is certainly a legitimate concern, relatively simple engineering safeguards have been described that would make the probability of such a mishap almost arbitrarily small (Foresight 2002). Much more serious is the threat posed by nanobots deliberately designed to be destructive. A terrorist group or even a lone psychopath, having obtained access to this technology, could do extensive damage or even annihilate life on earth unless effective defensive technologies had been developed beforehand (Center for Responsible Nanotechnology 2003). An unstable arms race between nanotechnic states could also result in our eventual demise (Gubrud 2000). Anti-proliferation efforts will be complicated by the fact that nanotechnology does not require difficult-to-obtain raw materials or large manufacturing plants, and by the dual-use functionality of many of the basic components of destructive nanomachinery. While a nanotechnic defense system (which would act as a global immune system capable of identifying and neutralizing rogue replicators) appears to be possible in principle, it could turn out to be more difficult to construct than a simple destructive replicator. This could create a window of global vulnerability between the potential creation of dangerous replicators and the development of an effective immune system. It is critical that nano-assemblers do not fall into the wrong hands during this period.

'''Biological warfare.''' Progress in genetic engineering will lead not only to improvements in medicine but also to the capability to create more effective bioweapons. It is chilling to consider what would have happened if HIV had been as contagious as the virus that causes the common cold. Engineering such microbes might soon become possible for increasing numbers of people. If the RNA sequence of a virus is posted on the Internet, then anybody with some basic expertise and access to a lab will be able to synthesize the actual virus from this description. A demonstration of this possibility was offered in 2002 by a small team of researchers at the State University of New York at Stony Brook, who synthesized the polio virus (whose genetic sequence is on the Internet) from scratch and injected it into mice, which subsequently became paralyzed and died (Wimmer et al. 2002).

'''Artificial intelligence.''' No threat to human existence is posed by today’s AI systems or their near-term successors. But if and when superintelligence is created, it will be of paramount importance that it be endowed with human-friendly values. An imprudently or maliciously designed superintelligence, with goals amounting to indifference or hostility to human welfare, could cause our extinction. Another concern is that the first superintelligence, which may become very powerful because of its superior planning ability and because of the technologies it could swiftly develop, would be built to serve only a single person or a small group (such as its programmers or the corporation that commissioned it).
While this scenario may not entail the extinction of literally all intelligent life, it nevertheless constitutes an existential risk because the future that would result would be one in which a great part of humanity’s potential had been permanently destroyed and in which at most a tiny fraction of all humans would get to enjoy the benefits of posthumanity. [See also “Will posthumans or superintelligent machines pose a threat to humans who aren’t augmented?”]

'''Nuclear war.''' Today’s nuclear arsenals are probably not sufficient to cause the extinction of all humans, but future arms races could result in even larger build-ups. It is also conceivable that an all-out nuclear war would lead to the collapse of modern civilization, and it is not completely certain that the survivors would succeed in rebuilding a civilization capable of sustaining growth and technological development.

'''Something unknown.''' All the above risks were unknown a century ago, and several of them have only become clearly understood in the past two decades. It is possible that there are future threats of which we haven’t yet become aware.

For a more extensive discussion of these and many other existential risks, see Bostrom (2002).

Evaluating the total probability that some existential disaster will do us in before we get the opportunity to become posthuman can be done by various direct or indirect methods. Although any estimate inevitably includes a large subjective factor, it seems that to set the probability to less than 20% would be unduly optimistic, and the best estimate may be considerably higher. But depending on the actions we take, this figure can be raised or lowered.

References:

* Bostrom, N. “Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards,” ''Journal of Evolution and Technology'', Vol. 9 (2002). http://www.nickbostrom.com/existential/risks.html
* Center for Responsible Nanotechnology. “Dangers of Nanotechnology” (2003). http://www.crnano.org/dangers.htm
* Foresight Institute. “Foresight Guidelines on Molecular Nanotechnology, version 3.7” (2000). http://www.foresight.org/guidelines/current.html
* Gubrud, M. “Nanotechnology and International Security,” Fifth Foresight Conference on Molecular Nanotechnology (1997). http://www.foresight.org/Conferences/MNT05/Papers/Gubrud/index.html
* Wimmer, E. et al. “Chemical Synthesis of Poliovirus cDNA: Generation of Infectious Virus in the Absence of Natural Template,” ''Science'', Vol. 297, No. 5583 (2002), pp. 1016-1018.
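The aggregate probability estimate discussed in this section treats total existential risk as the combination of several individual hazards. As a purely illustrative sketch – the per-risk numbers below are hypothetical placeholders, not estimates from this FAQ, and real risks are neither independent nor precisely quantifiable – independent risks combine as one minus the product of the individual survival probabilities:

```python
# Illustrative only: combine hypothetical, independent per-risk probabilities
# into one aggregate figure: P(some disaster) = 1 - prod(1 - p_i).

risks = {
    "nanotechnology": 0.05,            # placeholder values, NOT estimates
    "bioweapons": 0.05,                # from the FAQ or from Bostrom (2002)
    "artificial intelligence": 0.05,
    "nuclear war": 0.02,
    "something unknown": 0.03,
}

survival = 1.0
for p in risks.values():
    survival *= (1.0 - p)      # probability of surviving every listed risk

total_risk = 1.0 - survival
print(f"aggregate risk under independence: {total_risk:.3f}")
```

The point of the arithmetic is qualitative: even when each individual hazard looks modest, the aggregate exceeds any single component, which is why the text warns that setting the total below 20% may be unduly optimistic.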