Transhumanist FAQ Live
== Society and Politics ==

===Will new technologies only benefit the rich and powerful?===

One could make the case that the average citizen of a developed country today has a higher standard of living than any king five hundred years ago. The king might have had a court orchestra, but you can afford a CD player that lets you listen to the best musicians any time you want. When the king got pneumonia he might well die, but you can take antibiotics. The king might have a carriage with six white horses, but you can have a car that is faster and more comfortable. And you likely have television, Internet access, and a shower with warm water; you can talk with relatives who live in a different country over the phone; and you know more about the Earth, nature, and the cosmos than any medieval monarch.

The typical pattern with new technologies is that they become cheaper as time goes by. In the medical field, for example, experimental procedures are usually available only to research subjects and the very rich. As these procedures become routine, costs fall and more people can afford them. Even in the poorest countries, millions of people have benefited from vaccines and penicillin. In the field of consumer electronics, the price of computers and other devices that were cutting-edge only a couple of years ago drops precipitously as new models are introduced.

It is clear that everybody can benefit greatly from improved technology. Initially, however, the greatest advantages will go to those who have the resources, the skills, and the willingness to learn to use new tools. One can speculate that some technologies may cause social inequalities to widen. For example, if some form of intelligence amplification becomes available, it may at first be so expensive that only the wealthiest can afford it. The same could happen when we learn how to genetically enhance our children. Those who are already well off would become smarter and make even more money.
This phenomenon is not new. Rich parents send their kids to better schools and provide them with resources such as personal connections and information technology that may not be available to the less privileged. Such advantages lead to greater earnings later in life and serve to increase social inequalities.

Trying to ban technological innovation on these grounds, however, would be misguided. If a society judges existing inequalities to be unacceptable, a wiser remedy would be progressive taxation and the provision of community-funded services such as education, IT access in public libraries, genetic enhancements covered by social security, and so forth. Economic and technological progress is not a zero-sum game; it’s a positive-sum game. Technological progress does not solve the hard old political problem of what degree of income redistribution is desirable, but it can greatly increase the size of the pie that is to be divided.

===Do transhumanists advocate eugenics?===

Eugenics in the narrow sense refers to the pre-WWII movement in Europe and the United States to involuntarily sterilize the “genetically unfit” and encourage breeding of the genetically advantaged. These ideas are entirely contrary to the tolerant humanistic and scientific tenets of transhumanism. In addition to condemning the coercion involved in such policies, transhumanists strongly reject the racialist and classist assumptions on which they were based, along with the notion that eugenic improvements could be accomplished in a practically meaningful timeframe through selective human breeding. Transhumanists uphold the principles of bodily autonomy and procreative liberty. Parents must be allowed to choose for themselves whether to reproduce, how to reproduce, and what technological methods they use in their reproduction.
The use of genetic medicine or embryonic screening to increase the probability of a healthy, happy, and multiply talented child is a responsible and justifiable application of parental reproductive freedom. Beyond this, one can argue that parents have a moral responsibility to make use of these methods, assuming they are safe and effective. Just as it would be wrong for parents to fail in their duty to procure the best available medical care for their sick child, it would be wrong not to take reasonable precautions to ensure that a child-to-be will be as healthy as possible. This, however, is a moral judgment that is best left to individual conscience rather than imposed by law. Only in extreme and unusual cases might state infringement of procreative liberty be justified. If, for example, a would-be parent wished to undertake a genetic modification that would be clearly harmful to the child or would drastically curtail its options in life, then this prospective parent should be prevented by law from doing so. This case is analogous to the state taking custody of a child in situations of gross parental neglect or child abuse.

This defense of procreative liberty is compatible with the view that states and charities can subsidize public health, prenatal care, genetic counseling, contraception, abortion, and genetic therapies so that parents can make free and informed reproductive decisions that result in fewer disabilities in the next generation. Some disability activists would call these policies eugenic, but society may have a legitimate interest in whether children are born healthy or disabled, leading it to subsidize the birth of healthy children, without actually outlawing or imposing particular genetic modifications.
When discussing the morality of genetic enhancements, it is useful to be aware of the distinction between enhancements that are intrinsically beneficial to the child or society on the one hand, and, on the other, enhancements that provide a merely positional advantage to the child. For example, health, cognitive abilities, and emotional well-being are valued by most people for their own sake. It is simply nice to be healthy, happy, and able to think well, quite independently of any other advantages that come from possessing these attributes. By contrast, traits such as attractiveness, athletic prowess, height, and assertiveness seem to confer benefits that are mostly positional, i.e. they benefit a person by making her more competitive (e.g. in sports or as a potential mate), at the expense of those with whom she will compete, who suffer a corresponding disadvantage from her enhancement. Enhancements that have only positional advantages ought to be de-emphasized, while enhancements that create net benefits ought to be encouraged.

It is sometimes claimed that the use of germinal choice technologies would lead to an undesirable uniformity of the population. Some degree of uniformity is desirable and expected if we are able to make everyone congenitally healthy, strong, intelligent, and attractive. Few would argue that we should preserve cystic fibrosis because of its contribution to diversity. But other kinds of diversity are sure to flourish in a society with germinal choice, especially once adults are able to adapt their own bodies according to their own aesthetic tastes. Presumably most Asian parents will still choose to have children with Asian features, and if some parents choose genes that encourage athleticism, others may choose genes that correlate with musical ability.

It is unlikely that germ-line genetic enhancements will ever have a large impact on the world.
It will take a minimum of forty or fifty years for the requisite technologies to be developed, tested, and widely applied and for a significant number of enhanced individuals to be born and reach adulthood. Before this happens, more powerful and direct methods for individuals to enhance themselves will probably be available, based on nanomedicine, artificial intelligence, uploading, or somatic gene therapy. (Traditional eugenics, based on selecting who is allowed to reproduce, would have even less prospect of avoiding preemptive obsolescence, as it would take many generations to deliver its purported improvements.)

===Aren’t these future technologies very risky? Could they even cause our extinction?===

Yes, and this implies an urgent need to analyze the risks before they materialize and to take steps to reduce them. Biotechnology, nanotechnology, and artificial intelligence pose especially serious risks of accidents and abuse. [See also “If these technologies are so dangerous, should they be banned? What can be done to reduce the risks?”]

One can distinguish between, on the one hand, endurable or limited hazards, such as car crashes, nuclear reactor meltdowns, carcinogenic pollutants in the atmosphere, floods, volcano eruptions, and so forth, and, on the other hand, existential risks – events that would cause the extinction of intelligent life or permanently and drastically cripple its potential. While endurable or limited risks can be serious – and may indeed be fatal to the people immediately exposed – they are recoverable; they do not destroy the long-term prospects of humanity as a whole. Humanity has long experience with endurable risks, and a variety of institutional and technological mechanisms have been employed to reduce their incidence. Existential risks are a different kind of beast. For most of human history, there were no significant existential risks, or at least none that our ancestors could do anything about.
By definition, of course, no existential disaster has yet happened. As a species we may therefore be less well prepared to understand and manage this new kind of risk. Furthermore, the reduction of existential risk is a global public good (everybody by necessity benefits from such safety measures, whether or not they contribute to their development), creating a potential free-rider problem, i.e. a lack of sufficient selfish incentives for people to make sacrifices to reduce an existential risk. Transhumanists therefore recognize a moral duty to promote efforts to reduce existential risks.

The gravest existential risks facing us in the coming decades will be of our own making. These include:

'''Destructive uses of nanotechnology.''' The accidental release of a self-replicating nanobot into the environment, where it would proceed to destroy the entire biosphere, is known as the “gray goo scenario”. Since molecular nanotechnology will make use of positional assembly to create non-biological structures and to open new chemical reaction pathways, there is no reason to suppose that the ecological checks and balances that limit the proliferation of organic self-replicators would also contain nano-replicators. Yet, while gray goo is certainly a legitimate concern, relatively simple engineering safeguards have been described that would make the probability of such a mishap almost arbitrarily small (Foresight 2002). Much more serious is the threat posed by nanobots deliberately designed to be destructive. A terrorist group or even a lone psychopath, having obtained access to this technology, could do extensive damage or even annihilate life on earth unless effective defensive technologies had been developed beforehand (Center for Responsible Nanotechnology 2003). An unstable arms race between nanotechnic states could also result in our eventual demise (Gubrud 2000).
Anti-proliferation efforts will be complicated by the fact that nanotechnology does not require difficult-to-obtain raw materials or large manufacturing plants, and by the dual-use functionality of many of the basic components of destructive nanomachinery. While a nanotechnic defense system (which would act as a global immune system capable of identifying and neutralizing rogue replicators) appears to be possible in principle, it could turn out to be more difficult to construct than a simple destructive replicator. This could create a window of global vulnerability between the potential creation of dangerous replicators and the development of an effective immune system. It is critical that nano-assemblers do not fall into the wrong hands during this period.

'''Biological warfare.''' Progress in genetic engineering will lead not only to improvements in medicine but also to the capability to create more effective bioweapons. It is chilling to consider what would have happened if HIV had been as contagious as the virus that causes the common cold. Engineering such microbes might soon become possible for increasing numbers of people. If the RNA sequence of a virus is posted on the Internet, then anybody with some basic expertise and access to a lab will be able to synthesize the actual virus from this description. A demonstration of this possibility was offered by a small team of researchers from the State University of New York at Stony Brook in 2002, who synthesized the polio virus (whose genetic sequence is on the Internet) from scratch and injected it into mice, which subsequently became paralyzed and died.

'''Artificial intelligence.''' No threat to human existence is posed by today’s AI systems or their near-term successors. But if and when superintelligence is created, it will be of paramount importance that it be endowed with human-friendly values.
An imprudently or maliciously designed superintelligence, with goals amounting to indifference or hostility to human welfare, could cause our extinction. Another concern is that the first superintelligence, which may become very powerful because of its superior planning ability and because of the technologies it could swiftly develop, would be built to serve only a single person or a small group (such as its programmers or the corporation that commissioned it). While this scenario may not entail the extinction of literally all intelligent life, it nevertheless constitutes an existential risk because the future that would result would be one in which a great part of humanity’s potential had been permanently destroyed and in which at most a tiny fraction of all humans would get to enjoy the benefits of posthumanity. [See also “Will posthumans or superintelligent machines pose a threat to humans who aren’t augmented?”]

'''Nuclear war.''' Today’s nuclear arsenals are probably not sufficient to cause the extinction of all humans, but future arms races could result in even larger build-ups. It is also conceivable that an all-out nuclear war would lead to the collapse of modern civilization, and it is not completely certain that the survivors would succeed in rebuilding a civilization capable of sustaining growth and technological development.

'''Something unknown.''' All the above risks were unknown a century ago and several of them have only become clearly understood in the past two decades. It is possible that there are future threats of which we haven’t yet become aware.

For a more extensive discussion of these and many other existential risks, see Bostrom (2002). Evaluating the total probability that some existential disaster will do us in before we get the opportunity to become posthuman can be done by various direct or indirect methods.
Although any estimate inevitably includes a large subjective factor, it seems that to set the probability to less than 20% would be unduly optimistic, and the best estimate may be considerably higher. But depending on the actions we take, this figure can be raised or lowered.

References:

Bostrom, N. “Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards,” Journal of Evolution and Technology, Vol. 9 (2002). http://www.nickbostrom.com/existential/risks.html

Center for Responsible Nanotechnology. “Dangers of Nanotechnology” (2003). http://www.crnano.org/dangers.htm

Foresight Institute. “Foresight Guidelines on Molecular Nanotechnology, version 3.7” (2000). http://www.foresight.org/guidelines/current.html

Gubrud, M. “Nanotechnology and International Security,” Fifth Foresight Conference on Molecular Nanotechnology (1997). http://www.foresight.org/Conferences/MNT05/Papers/Gubrud/index.html

Wimmer, E. et al. “Chemical Synthesis of Poliovirus cDNA: Generation of Infectious Virus in the Absence of Natural Template,” Science, Vol. 297, No. 5583 (2002), pp. 1016-1018.

===If these technologies are so dangerous, should they be banned? What can be done to reduce the risks?===

The position that we ought to relinquish research into robotics, genetic engineering, and nanotechnology has been advocated in an article by Bill Joy (2000). Joy argued that some of the future applications of these technologies are so dangerous that research in those fields should be stopped now. Partly because of Joy’s previously technophiliac credentials (he was a software designer and a cofounder of Sun Microsystems), his article, which appeared in Wired magazine, attracted a great deal of attention.
Many of the responses to Joy’s article pointed out that there is no realistic prospect of a worldwide ban on these technologies; that they have enormous potential benefits that we would not want to forgo; that the poorest people may have a higher tolerance for risk in developments that could improve their condition; and that a ban may actually increase the dangers rather than reduce them, both by delaying the development of protective applications of these technologies, and by weakening the position of those who choose to comply with the ban relative to less scrupulous groups who defy it.

A more promising alternative than a blanket ban is differential technological development, in which we would seek to influence the sequence in which technologies are developed. On this approach, we would strive to retard the development of harmful technologies and their applications, while accelerating the development of beneficial technologies, especially those that offer protection against the harmful ones. For technologies that have decisive military applications, unless they can be verifiably banned, we may seek to ensure that they are developed at a faster pace in countries we regard as responsible than in those that we see as potential enemies. (Whether a ban is verifiable and enforceable can change over time as a result of developments in the international system or in surveillance technology.)

In the case of nanotechnology, the desirable sequence of development is that nanotech immune systems and other defensive measures be deployed before offensive capabilities become available to many independent powers. Once a technology is shared by many, it becomes extremely hard to prevent further proliferation. In the case of biotechnology, we should seek to promote research into vaccines, anti-viral drugs, protective gear, sensors, and diagnostics, and to delay as long as possible the development and proliferation of biological warfare agents and the means of their weaponization.
For artificial intelligence, a serious risk will emerge only when capabilities approach or surpass those of humans. At that point one should seek to promote the development of friendly AI and to prevent unfriendly or unreliable AI systems.

Superintelligence is an example of a technology that seems especially worth promoting because it can help reduce a broad range of threats. Superintelligent systems could advise us on policy and make the progress curve for nanotechnology steeper, thus shortening the period of vulnerability between the development of dangerous nanoreplicators and the deployment of effective defenses. If we have a choice, it seems preferable that superintelligence be developed before advanced nanotechnology, as superintelligence could help reduce the risks of nanotechnology but not vice versa. Other technologies that have wide risk-reducing uses include intelligence augmentation, information technology, and surveillance. These can make us smarter individually and collectively or make enforcement of necessary regulation more feasible. A strong prima facie case therefore exists for pursuing these technologies as vigorously as possible. Needless to say, we should also promote non-technological developments that are beneficial in almost all scenarios, such as peace and international cooperation.

In confronting the hydra of existential, limited, and endurable risks glaring at us from the future, no single silver bullet is likely to provide adequate protection. Instead, an arsenal of countermeasures will be needed so that we can address the various risks on multiple levels.

The first step in tackling a risk is to recognize its existence. More research is needed, and existential risks in particular should be singled out for attention because of their seriousness and because of the special nature of the challenges they pose. Surprisingly little work has been done in this area (but see e.g.
Leslie (1996), Bostrom (2002), and Rees (2003) for some preliminary explorations). The strategic dimensions of our choices must be taken into account, given that some of the technologies in question have important military ramifications. In addition to scholarly studies of the threats and their possible countermeasures, public awareness must be raised to enable a more informed debate of our long-term options.

Some of the lesser existential risks, such as an apocalyptic asteroid impact or the highly speculative scenario involving something like the upsetting of a metastable vacuum state in some future particle accelerator experiment, could be substantially reduced at relatively small expense. Programs to accomplish this – e.g. an early detection system for dangerous near-earth objects on a potential collision course with Earth, or the commissioning of advance peer review of planned high-energy physics experiments – are probably cost-effective. However, these lesser risks must not deflect attention from the more serious concern raised by more probable existential disasters [see “Aren’t these future technologies very risky? Could they even cause our extinction?”].

In light of how superabundant the human benefits of technology can ultimately be, it matters less that we obtain all of these benefits in their precisely most optimal form, and more that we obtain them at all. For many practical purposes, it makes sense to adopt the rule of thumb that we should act so as to maximize the probability of an acceptable outcome, one in which we attain some (reasonably broad) realization of our potential; or, to put it in negative terms, that we should act so as to minimize net existential risk.

References:

Bostrom, N. “Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards,” Journal of Evolution and Technology, Vol. 9 (2002). http://www.nickbostrom.com/existential/risks.html

Joy, B. “Why the Future Doesn’t Need Us,” Wired, 8:04 (2000).
http://www.wired.com/wired/archive/8.04/joy_pr.html

Leslie, J. The End of the World: The Ethics and Science of Human Extinction (London: Routledge, 1996).

Rees, M. Our Final Hour (New York: Basic Books, 2003).

===Shouldn’t we concentrate on current problems such as improving the situation of the poor, rather than putting our efforts into planning for the “far” future?===

We should do both. Focusing solely on current problems would leave us unprepared for the new challenges that we will encounter. Many of the technologies and trends that transhumanists discuss are already a reality. Biotechnology and information technology have transformed large sectors of our economies. The relevance of transhumanist ethics is manifest in such contemporary issues as stem cell research, genetically modified crops, human genetic therapy, embryo screening, end-of-life decisions, enhancement medicine, information markets, and research funding priorities. The importance of transhumanist ideas is likely to increase as the opportunities for human enhancement proliferate.

Transhuman technologies will tend to work well together and create synergies with other parts of human society. For example, one important factor in healthy life expectancy is access to good medical care. Improvements in medical care will extend healthy, active lifespan – “healthspan” – and research into healthspan extension is likely to benefit ordinary care. Work on amplifying intelligence has obvious applications in education, decision-making, and communication. Better communications would facilitate trade and understanding between people. As more and more people get access to the Internet and are able to receive satellite radio and television broadcasts, dictators and totalitarian regimes may find it harder to silence voices of dissent and to control the information flow in their populations.
And with the Internet and email, people discover they can easily form friendships and business partnerships in foreign countries. A world order characterized by peace, international cooperation, and respect for human rights would much improve the odds that the potentially dangerous applications of some future technologies can be controlled and would also free up resources currently spent on military armaments, some of which could then hopefully be diverted to improving the condition of the poor. Nanotechnological manufacturing promises to be both economically profitable and environmentally sound.

Transhumanists do not have a patent solution to achieve these outcomes, any more than anybody else has, but technology has a huge role to play. An argument can be made that the most efficient way of contributing to making the world better is by participating in the transhumanist project. This is so because the stakes are enormous – humanity’s entire future may depend on how we manage the coming technological transitions – and because relatively few resources are at the present time being devoted to transhumanist efforts. Even one extra person can still make a significant difference here.

===Will extended life worsen overpopulation problems?===

Population increase is an issue we would ultimately have to come to grips with even if healthy life-extension were not to happen. Leaving people to die is an unacceptable solution.

A large population should not be viewed simply as a problem. Another way of looking at the same fact is that it means that many persons now enjoy lives that would not have been lived if the population had been smaller. One could ask those who complain about overpopulation exactly which people’s lives they would have preferred had never been lived. Would it really have been better if billions of the world’s people had never existed and if there had been no other people in their place?
Of course, this is not to deny that too-rapid population growth can cause crowding, poverty, and the depletion of natural resources. In this sense there can be real problems that need to be tackled. How many people the Earth can sustain at a comfortable standard of living is a function of technological development (as well as of how resources are distributed). New technologies, from simple improvements in irrigation and management, to better mining techniques and more efficient power generation machinery, to genetically engineered crops, can continue to improve world resource and food output, while at the same time reducing environmental impact and animal suffering.

Environmentalists are right to insist that the status quo is unsustainable. As a matter of physical necessity, things cannot stay as they are today indefinitely, or even for very long. If we continue to use up resources at the current pace, without finding more resources or learning how to use novel kinds of resources, then we will run into serious shortages sometime around the middle of this century. The deep greens have an answer to this: they suggest we turn back the clock and return to an idyllic pre-industrial age to live in sustainable harmony with nature.

The problem with this view is that the pre-industrial age was anything but idyllic. It was a life of poverty, misery, disease, heavy manual toil from dawn to dusk, superstitious fears, and cultural parochialism. Nor was it environmentally sound – as witness the deforestation of England and the Mediterranean region, the desertification of large parts of the Middle East, soil depletion by the Anasazi in the Glen Canyon area, destruction of farm land in ancient Mesopotamia through the accumulation of mineral salts from irrigation, deforestation and consequent soil erosion by the ancient Mexican Mayas, overhunting of big game almost everywhere, and the extinction of the dodo and other big flightless birds in the South Pacific.
Furthermore, it is hard to see how more than a few hundred million people could be maintained at a reasonable standard of living with pre-industrial production methods, so some ninety percent of the world population would somehow have to vanish in order to facilitate this nostalgic return.

Transhumanists propose a much more realistic alternative: not to retreat to an imagined past, but to press ahead as intelligently as we can. The environmental problems that technology creates are problems of intermediary, inefficient technology, of placing insufficient political priority on environmental protection, as well as of a lack of ecological knowledge. Technologically less advanced industries in the former Soviet bloc pollute much more than do their advanced Western counterparts. High-tech industry is typically relatively benign. Once we develop molecular nanotechnology, we will not only have clean and efficient manufacturing of almost any commodity, but we will also be able to clean up much of the mess created by today’s crude fabrication methods. This would set a standard for a clean environment that today’s traditional environmentalists could scarcely dream of.

Nanotechnology will also make it cheaper to colonize space. From a cosmic point of view, Earth is an insignificant speck. It has sometimes been suggested that we ought to leave space untouched in its pristine glory. This view is hard to take seriously. Every hour, through entirely natural processes, vast amounts of resources – millions of times more than the sum total of what the human species has consumed throughout its career – are transformed into radioactive substances or wasted as radiation escaping into intergalactic space. Can we not think of some more creative way of using all this matter and energy?
Even with full-blown space colonization, however, population growth can continue to be a problem, and this is so even if we assume that an unlimited number of people could be transported from Earth into space. If the speed of light provides an upper bound on the expansion speed, then the amount of resources under human control will grow only polynomially (~ t<sup>3</sup>). Population, on the other hand, can easily grow exponentially (~ e<sup>t</sup>). If that happens, then, since a factor that grows exponentially will eventually overtake any factor that grows polynomially, average income will ultimately drop to subsistence levels, forcing population growth to slow. How soon this would happen depends primarily on reproduction rates. A change in average life span would not have a big effect. Even vastly improved technology can only postpone this inevitability for a relatively brief time. The only long-term method of assuring continued growth of average income is some form of population control, whether spontaneous or imposed, limiting the number of new persons created per year. This does not mean that population could not grow, only that the growth would have to be polynomial rather than exponential.

Some additional points to consider:

In technologically advanced countries, couples tend to have fewer children, often below the replacement rate. As an empirical generalization, giving people increased rational control over their lives, especially through women’s education and participation in the labor market, causes couples to have fewer children.

If one took seriously the idea of controlling population by limiting life span, why not be more active about it? Why not encourage suicide? Why not execute anyone reaching the age of 75? If slowing aging were unacceptable because it might lead to there being more people, what about efforts to cure cancer, reduce traffic deaths, or improve worker safety? Why use double standards?
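The claim above that exponential population growth must eventually overtake polynomial resource growth can be checked numerically. The sketch below is purely illustrative: the cubic resource law, the growth rate r, and the arbitrary units are assumptions chosen for the example, not predictions.

```python
# Illustrative sketch: resources under human control grow
# polynomially (~ t**3, as with light-speed-limited expansion),
# while an unchecked population grows exponentially (~ e**(r*t)).
# However small the rate r, per-capita resources eventually decline.

import math

def per_capita(t, r=0.01):
    """Resources per person at time t (arbitrary units)."""
    resources = t ** 3             # polynomial resource growth
    population = math.exp(r * t)   # exponential population growth
    return resources / population

# Per-capita resources rise at first...
assert per_capita(100) > per_capita(10)
# ...but the exponential factor eventually dominates.
assert per_capita(5000) < per_capita(100)
```

Lowering r postpones the crossover but never eliminates it, which is the point made above: only polynomial, not exponential, population growth is compatible with indefinitely rising average income.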
When transhumanists say they want to extend lifespans, what they mean is that they want to extend healthspans. This means that the extra person-years would be productive and would add economic value to society. We can all agree that there would be little point in living an extra ten years in a state of dementia.

The world population growth rate has been declining for several decades. It peaked in 1970 at 2.1%. In 2003, it was 1.2%, and it is expected to fall below 1.0% around 2015 (United Nations 2002). The doomsday predictions of the so-called “Club of Rome” from the early 1970s have consistently turned out to be wrong. The more people there are, the more brains there will be working to invent new ideas and solutions. If people can look forward to a longer healthy, active life, they will have a personal stake in the future and will hopefully be more concerned about the long-term consequences of their actions.

References:
United Nations. The World Population Prospects: The 2002 Revision (United Nations: New York, 2002). http://www.gov.za/reports/2003/unpdhighlights.pdf

===Is there any ethical standard by which transhumanists judge “improvement of the human condition”?===

Transhumanism is compatible with a variety of ethical systems, and transhumanists themselves hold many different views. Nonetheless, the following seems to constitute a common core of agreement: According to transhumanists, the human condition has been improved if the conditions of individual humans have been improved. In practice, competent adults are usually the best judges of what is good for themselves. Therefore, transhumanists advocate individual freedom, especially the right of those who so wish to use technology to extend their mental and physical capacities and to improve their control over their own lives.
From this perspective, an improvement to the human condition is a change that gives increased opportunity for individuals to shape themselves and their lives according to their informed wishes. Notice the word “informed”. It is important that people be aware of what they choose between. Education, discussion, public debate, critical thinking, artistic exploration, and, potentially, cognitive enhancers are means that can help people make more informed choices.

Transhumanists hold that people are not disposable. Saving lives (of those who want to live) is ethically important. It would be wrong to unnecessarily let existing people die in order to replace them with some new “better” people. Healthspan-extension and cryonics are therefore high on the transhumanist list of priorities. The transhumanist goal is not to replace existing humans with a new breed of super-beings, but rather to give human beings (those existing today and those who will be born in the future) the option of developing into posthuman persons.

The non-disposability of persons partially accounts for a certain sense of urgency that is common among transhumanists. On average, 150,000 men, women, and children die every day, often in miserable conditions. In order to give as many people as possible the chance of a posthuman existence – or even just a decent human existence – it is paramount that technological development, in at least some fields, is pursued with maximal speed. When it comes to life-extension and its various enabling technologies, a delay of a single week equals one million avoidable premature deaths – a weighty fact which those who argue for bans or moratoria would do well to consider carefully. (The further fact that universal access will likely lag initial availability only adds to the reason for trying to hurry things along.)
Transhumanists reject speciesism, the (human racist) view that moral status is strongly tied to membership in a particular biological species, in our case Homo sapiens. What exactly determines moral status is a matter of debate. Factors such as being a person, being sentient, having the capacity for autonomous moral choice, or perhaps even being a member of the same community as the evaluator, are among the criteria that may combine to determine the degree of somebody’s moral status (Warren 1997). But transhumanists argue that species-identity should be de-emphasized in this context. Transhumanists insist that all beings that can experience pain have some moral status, and that posthuman persons could have at least the same level of moral status as humans have in their current form.

References:
Warren, M.-A. Moral Status: Obligations to Persons and Other Living Things (Oxford: Oxford University Press, 1997).

===What kind of society would posthumans live in?===

Not enough information is available at the current time to provide a full answer to this question. In part, though, the answer is, “You decide.” The outcome may be influenced by the choices we make now and over the coming decades. In this respect, the situation is the same as in earlier epochs that had no transhuman possibilities: by becoming involved in political struggles against today’s social ills and injustices, we can help make tomorrow’s society better.

Transhumanism does, however, inform us about new constraints, possibilities, and issues, and it highlights numerous important leverage points for intervention, where a small application of resources can make a big long-term difference. For example, one issue that moves into prominence is the challenge of creating a society in which beings with vastly different orders of capabilities (such as posthuman persons and as-yet non-augmented humans) can live happily and peacefully together.
Another concern that becomes paramount is the need to build a world order in which dangerous arms races can be prevented and in which the proliferation of weapons of mass destruction can be suppressed or at least delayed until effective defenses have been developed [see “Aren’t these future technologies very risky? Could they even cause our extinction?”].

The ideal social organization may be one that includes the possibility for those who so wish to form independent societies voluntarily secluded from the rest of the world, in order to pursue traditional ways of life or to experiment with new forms of communal living. Achieving an acceptable balance between the right of such communities to autonomy, on the one hand, and the security concerns of outside entities and the just demands for protection of vulnerable and oppressed individuals inside these communities, on the other, is a delicate task and a familiar challenge in political philosophy.

What types of society posthumans will live in depends on what types of posthumans eventually develop. One can project various possible developmental paths [see “What is a posthuman?”] which may result in very different kinds of posthuman, transhuman, and unaugmented human beings, living in very different sorts of societies. In attempting to imagine such a world, we must bear in mind that we are likely to base our expectations on the experiences, desires, and psychological characteristics of humans. Many of these expectations may not hold true of posthuman persons. When human nature changes, new ways of organizing a society may become feasible. We may hope to form a clearer understanding of what those new possibilities are as we observe the seeds of transhumanity develop.

===Will posthumans or superintelligent machines pose a threat to humans who aren’t augmented?===

Human society is always at risk from some group deciding to view another group of humans as fit for slavery or slaughter.
To counteract such tendencies, modern societies have created laws and institutions, and endowed them with powers of enforcement, that act to prevent groups of citizens from assaulting one another. The efficacy of these institutions does not depend on all citizens having equal capacities. Modern, peaceful societies have large numbers of people with diminished physical or mental capacities along with many other people who may be exceptionally physically strong or healthy or intellectually talented in various ways. Adding people with technologically enhanced capacities to this already broad distribution of ability would not necessarily rip society apart or trigger genocide or enslavement.

A common worry is that inheritable genetic modifications or other human enhancement technologies would lead to two distinct and separate species, and that hostilities would inevitably develop between them. The assumptions behind this prediction should be questioned. It is a common theme in fiction because of the opportunities for dramatic conflict, but that is not the same as social, political, and economic plausibility in the real world. It seems more likely that there would be a continuum of differently modified or enhanced individuals, which would overlap with the continuum of as-yet unenhanced humans. The scenario in which “the enhanced” form a pact and then attack “the naturals” makes for exciting science fiction but is not necessarily the most plausible outcome.

Even today, the segment containing the tallest 90 percent of the population could, in principle, get together and kill or enslave the shorter decile. That this does not happen suggests that a well-organized society can hold together even if it contains many possible coalitions of people sharing some attribute which, if they unified under one banner, would make them capable of exterminating the rest.
To note that the extreme case of a war between human and posthuman persons is not the most likely scenario is not to say that there are no legitimate social concerns about the steps that may take us closer to posthumanity. Inequity, discrimination, and stigmatization – against or on behalf of modified people – could become serious issues. Transhumanists would argue that these (potential) social problems call for social remedies. (One case study of how contemporary technology can change important aspects of someone’s identity is sex reassignment. The experiences of transsexuals show that some cultures still have work to do in becoming more accepting of diversity.) This is a task that we can begin to tackle now by fostering a climate of tolerance and acceptance towards those who are different from ourselves. We can also act to strengthen those institutions that prevent violence and protect human rights, for instance by building stable democratic traditions and constitutions and by expanding the rule of law to the international plane.

What about the hypothetical case in which someone intends to create, or turn themselves into, a being of so radically enhanced capacities that a single one or a small group of such individuals would be capable of taking over the planet? This is clearly not a situation that is likely to arise in the imminent future, but one can imagine that, perhaps in a few decades, the prospective creation of superintelligent machines could raise this kind of concern. The would-be creator of a new life form with such surpassing capabilities would have an obligation to ensure that the proposed being is free from psychopathic tendencies and, more generally, that it has humane inclinations. For example, a superintelligence should be built with a clear goal structure that has friendliness to humans as its top goal.
Before running such a program, the builders of a superintelligence should be required to make a strong case that launching it would be safer than alternative courses of action.

References:
Yudkowsky, E. Creating Friendly AI: The Analysis and Design of Benevolent Goal Architectures (2003, Version 1.0). http://www.singinst.org/CFAI/index.html