== Technologies and Projections ==

===Biotechnology, genetic engineering, stem cells, and cloning: What are they and what are they good for?===

Biotechnology is the application of techniques and methods based on the biological sciences. It encompasses such diverse enterprises as brewing; the manufacture of human insulin, interferon, and human growth hormone; medical diagnostics; cell cloning and reproductive cloning; the genetic modification of crops; bioconversion of organic waste and the use of genetically altered bacteria in the cleanup of oil spills; stem cell research; and much more. Genetic engineering is the area of biotechnology concerned with the directed alteration of genetic material.

Biotechnology already has countless applications in industry, agriculture, and medicine. It is a hotbed of research. The completion of the human genome project – a “rough draft” of the entire human genome was published in the year 2000 – was a scientific milestone by anyone’s standards. Research is now shifting to decoding the functions and interactions of all these different genes and to developing applications based on this information.

The potential medical benefits are too many to list; researchers are working on every common disease, with varying degrees of success. Progress takes place not only in the development of drugs and diagnostics but also in the creation of better tools and research methodologies, which in turn accelerates progress. When considering what developments are likely over the long term, such improvements in the research process itself must be factored in. The human genome project was completed ahead of schedule, largely because the initial predictions underestimated the degree to which instrumentation technology would improve during the course of the project. At the same time, one needs to guard against the tendency to hype every latest advance. (Remember all those breakthrough cancer cures that we never heard of again?)
Moreover, even in cases where the early promise is borne out, it usually takes ten years to get from proof-of-concept to successful commercialization.

Genetic therapies are of two sorts: somatic and germ-line. In somatic gene therapy, a virus is typically used as a vector to insert genetic material into the cells of the recipient’s body. The effects of such interventions do not carry over into the next generation. Germ-line genetic therapy is performed on sperm or egg cells, or on the early zygote, and can be inheritable. (Embryo screening, in which embryos are tested for genetic defects or other traits and then selectively implanted, can also count as a kind of germ-line intervention.) Human gene therapy, except for some forms of embryo screening, is still experimental. Nonetheless, it holds promise for the prevention and treatment of many diseases, as well as for uses in enhancement medicine. The potential scope of genetic medicine is vast: virtually all disease and all human traits – intelligence, extroversion, conscientiousness, physical appearance, etc. – involve genetic predispositions. Single-gene disorders, such as cystic fibrosis, sickle cell anemia, and Huntington’s disease, are likely to be among the first targets for genetic intervention. Polygenic traits and disorders, ones in which more than one gene is implicated, may follow later (although even polygenic conditions can sometimes be influenced in a beneficial direction by targeting a single gene).

Stem cell research, another scientific frontier, offers great hopes for regenerative medicine. Stem cells are undifferentiated (unspecialized) cells that can renew themselves and give rise to one or more specialized cell types with specific functions in the body. By growing such cells in culture, or steering their activity in the body, it will be possible to grow replacement tissues for the treatment of degenerative disorders, including heart disease, Parkinson’s, Alzheimer’s, diabetes, and many others.
It may also be possible to grow entire organs from stem cells for use in transplantation. Embryonic stem cells seem to be especially versatile and useful, but research is also ongoing into adult stem cells and the “reprogramming” of ordinary cells so that they can be turned back into stem cells with pluripotent capabilities.

The term “human cloning” covers both therapeutic and reproductive uses. In therapeutic cloning, a preimplantation embryo (also known as a “blastocyst” – a hollow ball consisting of 30-150 undifferentiated cells) is created via cloning, from which embryonic stem cells could be extracted and used for therapy. Because these cloned stem cells are genetically identical to the patient, the tissues or organs they would produce could be implanted without eliciting an immune response from the patient’s body, thereby overcoming a major hurdle in transplant medicine. Reproductive cloning, by contrast, would mean the birth of a child who is genetically identical to the cloned parent: in effect, a younger identical twin.

Everybody recognizes the benefits to ailing patients and their families that come from curing specific diseases. Transhumanists emphasize that, in order to seriously prolong the healthy life span, we also need to develop ways to slow aging or to replace senescent cells and tissues. Gene therapy, stem cell research, therapeutic cloning, and other areas of medicine that have the potential to deliver these benefits deserve a high priority in the allocation of research monies.

Biotechnology can be seen as a special case of the more general capabilities that nanotechnology will eventually provide [see “What is molecular nanotechnology?”].

===What is molecular nanotechnology?===

Molecular nanotechnology is an anticipated manufacturing technology that will make it possible to build complex three-dimensional structures to atomic specification using chemical reactions directed by nonbiological machinery.
In molecular manufacturing, each atom would go to a selected place, bonding with other atoms in a precisely designated manner. Nanotechnology promises to give us thorough control of the structure of matter. Since most of the stuff around us and inside us is composed of atoms and gets its characteristic properties from the placement of these atoms, the ability to control the structure of matter on the atomic scale has many applications. As K. Eric Drexler wrote in Engines of Creation, the first book on nanotechnology (published in 1986):

:Coal and diamonds, sand and computer chips, cancer and healthy tissue: throughout history, variations in the arrangement of atoms have distinguished the cheap from the cherished, the diseased from the healthy. Arranged one way, atoms make up soil, air, and water; arranged another, they make up ripe strawberries. Arranged one way, they make up homes and fresh air; arranged another, they make up ash and smoke.

Nanotechnology, by making it possible to rearrange atoms effectively, will enable us to transform coal into diamonds, sand into supercomputers, and to remove pollution from the air and tumors from healthy tissue.

Central to Drexler’s vision of nanotechnology is the concept of the assembler. An assembler would be a molecular construction device. It would have one or more submicroscopic robotic arms under computer control. The arms would be capable of holding and placing reactive compounds so as to positionally control the precise location at which a chemical reaction takes place. The assembler arms would grab a molecule (but not necessarily individual atoms) and add it to a work-piece, constructing an atomically precise object step by step. An advanced assembler would be able to make almost any chemically stable structure. In particular, it would be able to make a copy of itself. Since assemblers could replicate themselves, they would be easy to produce in large quantities.
There is a biological parallel to the assembler: the ribosome. Ribosomes are the tiny construction machines (a few thousand cubic nanometers big) in our cells that manufacture all the proteins used in all living things on Earth. They do this by assembling amino acids, one by one, into precisely determined sequences. These structures then fold up to form a protein. The blueprint that specifies the order of amino acids, and thus indirectly the final shape of the protein, is called messenger RNA. The messenger RNA is in turn determined by our DNA, which can be viewed (somewhat simplistically) as an instruction tape for protein synthesis.

Nanotechnology will generalize the ability of ribosomes so that virtually any chemically stable structure can be built, including devices and materials that resemble nothing in nature. Mature nanotechnology will transform manufacturing into a software problem. To build something, all you will need is a detailed design of the object you want to make and a sequence of instructions for its construction. Rare or expensive raw materials are generally unnecessary; the atoms required for the construction of most kinds of nanotech devices exist in abundance in nature. Dirt, for example, is full of useful atoms.

By working in large teams, assemblers and more specialized nanomachines will be able to build large objects quickly. Consequently, while nanomachines may have features on the scale of a billionth of a meter – a nanometer – the products could be as big as space vehicles or even, in a more distant future, the size of planets. Because assemblers will be able to copy themselves, nanotech products will have low marginal production costs – perhaps on the same order as familiar commodities from nature’s own self-reproducing molecular machinery such as firewood, hay, or potatoes. By ensuring that each atom is properly placed, assemblers would manufacture products of high quality and reliability.
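The claim that mature nanotechnology would turn manufacturing into a software problem – a design plus a sequence of placement instructions – can be caricatured in a few lines of code. Everything here (the Placement type, the build loop, the "C2" fragments) is a made-up toy for illustration, not any real molecular-CAD API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Placement:
    """One hypothetical assembler instruction: deposit a molecular
    fragment at an (x, y, z) position on the work-piece (positions in nm)."""
    fragment: str
    position: tuple

def build(instructions):
    """Execute a design: the 'factory' is just an interpreter for the
    instruction list, which is why the hard part becomes writing designs."""
    workpiece = {}
    for step in instructions:
        workpiece[step.position] = step.fragment  # each fragment goes to a selected place
    return workpiece

# A fanciful two-step design: place two carbon dimers next to each other.
design = [Placement("C2", (0.0, 0.0, 0.0)), Placement("C2", (0.25, 0.0, 0.0))]
product = build(design)
print(len(product))  # 2 placements executed
```

The point of the sketch is only that, once an interpreter of placement instructions exists, all product variety lives in the `design` data, mirroring how the ribosome's fixed machinery is reused for every protein.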
Leftover molecules would be subject to this strict control, making the manufacturing process extremely clean.

The speed with which designs and instruction lists for making useful objects can be developed will determine the speed of progress after the creation of the first full-blown assembler. Powerful software for molecular modeling and design will accelerate development, possibly assisted by specialized engineering AI. Another accessory that might be especially useful in the early stages after the assembler breakthrough is the disassembler, a device that can disassemble an object while creating a three-dimensional map of its molecular configuration. Working in concert with an assembler, it could function as a kind of 3D Xerox machine: a device for making atomically exact replicas of almost any existing solid object within reach.

Molecular nanotechnology will ultimately make it possible to construct compact computing systems performing at least 10<sup>21</sup> operations per second; machine parts of any size made of nearly flawless diamond; cell-repair machines that can enter cells and repair most kinds of damage, in all likelihood including frostbite [see “What is cryonics? Isn’t the probability of success too small?”]; personal manufacturing and recycling appliances; and automated production systems that can double capital stock in a few hours or less. It is also likely to make uploading possible [see “What is uploading?”].

A key challenge in realizing these prospects is the bootstrap problem: how to build the first assembler. There are several promising routes. One is to improve current proximal probe technology. A scanning tunneling microscope can drag individual atoms along a surface. Two physicists at IBM Almaden Labs in California illustrated this in 1989 when they used such a microscope to arrange 35 xenon atoms to spell out the trademark “I-B-M”, creating the world’s smallest logo.
Future proximal probes might have more degrees of freedom and the ability to pick up and deposit reactive compounds in a controlled fashion. Another route to the first assembler is synthetic chemistry. Cleverly designed chemical building blocks might be made to self-assemble in solution phase into machine parts. Final assembly of these parts might then be made with a proximal probe. Yet another route is biochemistry. It might be possible to use ribosomes to make assemblers of more generic capabilities. Many biomolecules have properties that might be explored in the early phases of nanotechnology. For example, interesting structures, such as branches, loops, and cubes, have been made out of DNA. DNA could also serve as a “tag” on other molecules, causing them to bind only to designated compounds displaying a complementary tag, thus providing a degree of control over what molecular complexes will form in a solution. Combinations of these approaches are of course also possible. The fact that there are multiple promising routes adds to the likelihood that success will eventually be attained.

That assemblers of general capabilities are consistent with the laws of chemistry was shown by Drexler in his technical book Nanosystems in 1992. This book also established some lower bounds on the capabilities of mature nanotechnology. Medical applications of nanotechnology were first explored in detail by Robert A. Freitas Jr. in his monumental work Nanomedicine, the first volume of which came out in 1999.

Today, nanotech is a hot research field. The U.S. government spent more than 600 million dollars on its National Nanotechnology Initiative in 2002. Other countries have similar programs, and private investment is ample. However, only a small part of the funding goes to projects of direct relevance to the development of assembler-based nanotechnology; most of it is for more humdrum, near-term objectives.
While it seems fairly well established that molecular nanotechnology is in principle possible, it is harder to determine how long it will take to develop. A common guess among the cognoscenti is that the first assembler may be built around the year 2018, give or take a decade, but there is large scope for diverging opinion on the upper side of that estimate.

Because the ramifications of nanotechnology are immense, it is imperative that serious thought be given to this topic now. If nanotechnology were to be abused the consequences could be devastating. Society needs to prepare for the assembler breakthrough and do advance planning to minimize the risks associated with it [see e.g. “Aren’t these future technologies very risky? Could they even cause our extinction?”]. Several organizations are working to prepare the world for nanotechnology, the oldest and largest being the Foresight Institute.

References:

Drexler, E. Engines of Creation: The Coming Era of Nanotechnology. (New York: Anchor Books, 1986). http://www.foresight.org/EOC/index.html

Drexler, E. Nanosystems: Molecular Machinery, Manufacturing, and Computation. (New York: John Wiley & Sons, Inc., 1992).

Freitas, Jr., R. A. Nanomedicine, Volume I: Basic Capabilities. (Georgetown, Texas: Landes Bioscience, 1999).

Foresight Institute. http://www.foresight.org

===What is superintelligence?===

A superintelligent intellect (a superintelligence, sometimes called “ultraintelligence”) is one that has the capacity to radically outperform the best human brains in practically every field, including scientific creativity, general wisdom, and social skills. Sometimes a distinction is made between weak and strong superintelligence. Weak superintelligence is what you would get if you could run a human intellect at an accelerated clock speed, such as by uploading it to a fast computer [see “What is uploading?”].
If the upload’s clock-rate were a thousand times that of a biological brain, it would perceive reality as being slowed down by a factor of a thousand. It would think a thousand times more thoughts in a given time interval than its biological counterpart.

Strong superintelligence refers to an intellect that is not only faster than a human brain but also smarter in a qualitative sense. No matter how much you speed up your dog’s brain, you’re not going to get the equivalent of a human intellect. Analogously, there might be kinds of smartness that wouldn’t be accessible to even very fast human brains given their current capacities. Something as simple as increasing the size or connectivity of our neuronal networks might give us some of these capacities. Other improvements may require wholesale reorganization of our cognitive architecture or the addition of new layers of cognition on top of the old ones. However, the distinction between weak and strong superintelligence may not be clear-cut. A sufficiently long-lived human who didn’t make any errors and had a sufficient stack of scrap paper at hand could in principle compute any Turing computable function. (According to Church’s thesis, the class of Turing computable functions is identical to the class of physically computable functions.)

Many but not all transhumanists expect that superintelligence will be created within the first half of this century. Superintelligence requires two things: hardware and software. Chip-manufacturers planning the next generation of microprocessors commonly rely on a well-known empirical regularity known as Moore’s Law. In its original 1965 formulation by Intel co-founder Gordon Moore, it stated that the number of components on a chip doubled every year. In contemporary use, the “law” is commonly understood as referring more generally to a doubling of computing power, or of computing power per dollar.
For the past couple of years, the doubling time has hovered between 18 months and two years. The human brain’s processing power is difficult to determine precisely, but common estimates range from 10<sup>14</sup> instructions per second (IPS) up to 10<sup>17</sup> IPS or more. The lower estimate, derived by Carnegie Mellon robotics professor Hans Moravec, is based on the computing power needed to replicate the signal processing performed by the human retina and assumes a significant degree of software optimization. The 10<sup>17</sup> IPS estimate is obtained by multiplying the number of neurons in a human brain (~100 billion) by the average number of synapses per neuron (~1,000) and by the average spike rate (~100 Hz), and assuming ~10 instructions to represent the effect of one action potential traversing one synapse. An even higher estimate would be obtained e.g. if one were to suppose that functionally relevant and computationally intensive processing occurs within compartments of a dendritic tree.

Most experts, Moore included, think that computing power will continue to double about every 18 months for at least another two decades. This expectation is based in part on extrapolation from the past and in part on consideration of developments currently underway in laboratories. The fastest computer under construction is IBM’s Blue Gene/L, which when it is ready in 2005 is expected to perform ~2×10<sup>14</sup> IPS. Thus it appears quite likely that human-equivalent hardware will have been achieved within not much more than a couple of decades.

How long it will take to solve the software problem is harder to estimate. One possibility is that progress in computational neuroscience will teach us about the computational architecture of the human brain and what learning rules it employs. We can then implement the same algorithms on a computer.
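The 10<sup>17</sup> IPS figure above is just a product of four rough factors, and the hardware-arrival expectation is a doubling-time extrapolation; both fit in a few lines. The 2005 baseline of 2×10<sup>14</sup> IPS is taken from the Blue Gene/L figure in the text, and the 18-month doubling time from the expert consensus it cites:

```python
import math

# Upper estimate of the brain's processing power, per the text:
neurons = 100e9              # ~100 billion neurons
synapses_per_neuron = 1_000  # ~1,000 synapses per neuron
spike_rate_hz = 100          # ~100 Hz average spike rate
instructions_per_spike = 10  # ~10 instructions per action potential per synapse

brain_ips = neurons * synapses_per_neuron * spike_rate_hz * instructions_per_spike
print(f"{brain_ips:.0e}")    # 1e+17

# Moore's-law extrapolation: starting from ~2e14 IPS in 2005 and doubling
# every 18 months, how long until the 1e17 IPS upper estimate is reached?
doublings = math.log2(1e17 / 2e14)  # ~9 doublings
years = doublings * 1.5
print(round(years, 1))              # 13.4 years, i.e. the late 2010s
```

Even against the higher of the two brain estimates, the extrapolation lands well within the "couple of decades" window the text describes; against the 10<sup>14</sup> IPS lower estimate the baseline machine already suffices.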
In this approach, the superintelligence would not be completely specified by the programmers but would instead have to grow by learning from experience the same way a human infant does. An alternative approach would be to use genetic algorithms and methods from classical AI. This might result in a superintelligence that bears no close resemblance to a human brain. At the opposite extreme, we could seek to create a superintelligence by uploading a human intellect and then accelerating and enhancing it [see “What is uploading?”]. The outcome of this might be a superintelligence that is a radically upgraded version of one particular human mind.

The arrival of superintelligence will clearly deal a heavy blow to anthropocentric worldviews. Much more important than its philosophical implications, however, would be its practical effects. Creating superintelligence may be the last invention that humans will ever need to make, since superintelligences could themselves take care of further scientific and technological development. They would do so more effectively than humans. Biological humanity would no longer be the smartest life form on the block.

The prospect of superintelligence raises many big issues and concerns that we should think deeply about in advance of its actual development. The paramount question is: What can be done to maximize the chances that the arrival of superintelligence will benefit rather than harm us? The range of expertise needed to address this question extends far beyond the community of AI researchers. Neuroscientists, economists, cognitive scientists, computer scientists, philosophers, ethicists, sociologists, science-fiction writers, military strategists, politicians, legislators, and many others will have to pool their insights if we are to deal wisely with what may be the most important task our species will ever have to tackle.

Many transhumanists would like to become superintelligent themselves.
This is obviously a long-term and uncertain goal, but it might be achievable either through uploading and subsequent enhancement or through the gradual augmentation of our biological brains, by means of future nootropics (cognitive enhancement drugs), cognitive techniques, IT tools (e.g. wearable computers, smart agents, information filtering systems, visualization software, etc.), neural-computer interfaces, or brain implants.

References:

Moravec, H. Mind Children. (Cambridge, MA: Harvard University Press, 1988).

Bostrom, N. “How Long Before Superintelligence?” International Journal of Futures Studies. Vol. 2. (1998).

===What is virtual reality?===

A virtual reality is a simulated environment that your senses perceive as real. Theatre, opera, cinema, and television can be regarded as precursors to virtual reality. The degree of immersion (the feeling of “being there”) that you experience when watching television is quite limited. Watching football on TV doesn’t really compare to being in the stadium. There are several reasons for this. For starters, even a big screen doesn’t fill up your entire visual field. The number of pixels even on high-resolution screens is also too small (typically 1280*1024 rather than about 5000*5000 as would be needed in a flawless wide-angle display). Further, 3D vision is lacking, as is position tracking and focus effects (in reality, the picture on your retina changes continually as your head and eyeballs are moving). To achieve greater realism, a system should ideally include more sensory modalities, such as 3D sound (through headphones) to hear the crowd roaring, and tactile stimulation through a whole-body haptic interface so that you don’t have to miss out on the sensation of sitting on a cold, hard bench for hours.

An essential element of immersion is interactivity. Watching TV is typically a passive experience. Full-blown virtual reality, by contrast, will be interactive.
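The resolution gap described above is easy to quantify. Taking ~1280×1024 pixels as a typical high-resolution screen of the era (the standard SXGA figure, assumed here) against the ~5000×5000 the text suggests for a flawless wide-angle display:

```python
# Pixel budget of a conventional screen vs. a hypothetical flawless
# wide-angle display (~5000x5000, as suggested in the text).
screen_px = 1280 * 1024       # standard SXGA resolution (assumed baseline)
ideal_px = 5000 * 5000

print(screen_px)              # 1310720
print(ideal_px // screen_px)  # 19 -- roughly 19x more pixels needed
```

And that factor of ~19 covers only pixel count; filling the whole visual field, stereoscopy, position tracking, and focus effects each add further demands, which is why the text calls VR computationally very intensive.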
You will be able to move about in a virtual world, pick up objects you see, and communicate with people you meet. (A real football experience crucially includes the possibility of shouting abuse at the referee.) To enable interactivity, the system must have sensors that pick up on your movements and utterances and adjust the presentation to incorporate the consequences of your actions.

Virtual worlds can be modeled on physical realities. If you are participating in a remote event through VR, as in the example of the imagined football spectator, you are said to be telepresent at that event. Virtual environments can also be wholly artificial, like cartoons, and have no particular counterpart in physical reality. Another possibility, known as augmented reality, is to have your perception of your immediate surroundings partially overlaid with simulated elements. For example, special glasses could make nametags appear over the heads of guests at a dinner party, or could blot out annoying billboard advertisements from your view.

Many users of today’s VR systems experience “simulator sickness,” with symptoms ranging from unpleasantness and disorientation to headaches, nausea, and vomiting. Simulator sickness arises because different sensory systems provide conflicting cues. For example, the visual system may provide strong cues of self-motion while the vestibular system in your inner ear tells your brain that your head is stationary. Heavy head-mounted display helmets and lag times between tracking device and graphics update can also cause discomfort. Creating good VR that overcomes these problems is technically challenging.

Primitive virtual realities have been around for some time. Early applications included training modules for pilots and military personnel. Increasingly, VR is used in computer gaming. Partly because VR is computationally very intensive, simulations are still quite crude.
As computational power increases, and as sensors, effectors and displays improve, VR could begin to approximate physical reality in terms of fidelity and interactivity. In the long run, VR could unlock limitless possibilities for human creativity. We could construct artificial experiential worlds, in which the laws of physics can be suspended, that would appear as real as physical reality to participants. People could visit these worlds for work, entertainment, or to socialize with friends who may be living on the opposite side of the globe. Uploads [see “What is uploading?”], who could interact with simulated environments directly without the need of a mechanical interface, might spend most of their time in virtual realities.

===What is cryonics? Isn’t the probability of success too small?===

Cryonics is an experimental medical procedure that seeks to save lives by placing in low-temperature storage persons who cannot be treated with current medical procedures and who have been declared legally dead, in the hope that technological progress will eventually make it possible to revive them.

For cryonics to work today, it is not necessary that we can currently reanimate cryo-preserved patients (which we cannot). All that is needed is that we can preserve patients in a state sufficiently intact that some possible technology, developed in the future, will one day be able to repair the freezing damage and reverse the original cause of deanimation. Only half of the complete cryonics procedure can be scrutinized today; the other half cannot be performed until the (perhaps distant) future. What we know now is that it is possible to stabilize a patient’s condition by cooling him or her in liquid nitrogen (−196 °C). A considerable amount of cell damage is caused by the freezing process. This injury can be minimized by following suspension protocols that involve suffusing the deanimated body with cryoprotectants.
The formation of damaging ice crystals can even be suppressed altogether in a process known as vitrification, in which the patient’s body is turned into a kind of glass. This might sound like an improbable treatment, but the purpose of cryonics is to preserve the structure of life rather than the processes of life, because the life processes can in principle be re-started as long as the information encoded in the structural properties of the body, in particular the brain, is sufficiently preserved. Once frozen, the patient can be stored for millennia with virtually no further tissue degradation.

Many experts in molecular nanotechnology believe that in its mature stage nanotechnology will enable the revival of cryonics patients. Hence, it is possible that the suspended patients could be revived in as little as a few decades from now. The uncertainty about the ultimate technical feasibility of reanimation may very well be dwarfed by the uncertainty in other factors, such as the possibility that you deanimate in the wrong kind of way (by being lost at sea, for example, or by having the brain’s information content erased by Alzheimer’s disease), that your cryonics company goes bust, that civilization collapses, or that people in the future won’t be interested in reviving you. So, a cryonics contract is far short of a survival guarantee. As a cryonicist saying goes, being cryonically suspended is the second worst thing that can happen to you.

When we consider the procedures that are routine today and how they might have been viewed in (say) the 1700s, we can begin to see how difficult it is to make a well-founded argument that future medical technology will never be able to reverse the injuries that occur during cryonic suspension. By contrast, your chances of a this-worldly comeback if you opt for one of the popular alternative treatments – such as cremation or burial – are zero.
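One way to read the list of uncertainties above is as a chain of failure points that multiply: even if each link looks individually likely to hold, the product can be modest. All probabilities below are placeholders chosen for illustration, not estimates made anywhere in this FAQ:

```python
# Hypothetical, made-up probabilities for each link in the cryonics chain.
p_good_deanimation = 0.8   # brain information not destroyed before suspension
p_company_survives = 0.7   # cryonics provider (and civilization) persists
p_revival_feasible = 0.5   # future technology can reverse the damage
p_future_interest  = 0.9   # someone bothers to revive you

# Treating the links as independent, overall success is their product.
p_success = (p_good_deanimation * p_company_survives *
             p_revival_feasible * p_future_interest)
print(round(p_success, 3))  # 0.252 -- far short of a guarantee, but not zero
```

Whatever numbers one plugs in, the structure of the calculation captures the text's point: the contract is no survival guarantee, yet the alternative treatments put the corresponding probability at exactly zero.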
Seen in this light, signing up for cryonics, which is usually done by making a cryonics firm one of the beneficiaries of your life insurance, can look like a reasonable insurance policy. If it doesn’t work, you would be dead anyway. If it works, it may save your life. Your saved life would then likely be extremely long and healthy, given how advanced the state of medicine must be to revive you. By no means are all transhumanists signed up for cryonics, but a significant fraction finds that, for them, a cost-benefit analysis justifies the expense. Becoming a cryonicist, however, requires courage: the courage to confront the possibility of your own death, and the courage to resist the peer-pressure from the large portion of the population which currently espouses deathist values and advocates complacency in the face of a continual, massive loss of human life.

References:

Merkle, R. “The Molecular Repair of the Brain.” Cryonics magazine, Vol. 15, Nos. 1 & 2. (1994). http://www.merkle.com/cryo/techFeas.html

===What is uploading?===

Uploading (sometimes called “downloading”, “mind uploading” or “brain reconstruction”) is the process of transferring an intellect from a biological brain to a computer. One way of doing this might be by first scanning the synaptic structure of a particular brain and then implementing the same computations in an electronic medium. A brain scan of sufficient resolution could be produced by disassembling the brain atom by atom by means of nanotechnology. Other approaches, such as analyzing pieces of the brain slice by slice in an electron microscope with automatic image processing, have also been proposed. In addition to mapping the connection pattern among the 100 billion-or-so neurons, the scan would probably also have to register some of the functional properties of each of the synaptic interconnections, such as the efficacy of the connection and how stable it is over time (e.g. whether it is short-term or long-term potentiated).
Non-local modulators such as neurotransmitter concentrations and hormone balances may also need to be represented, although such parameters likely contain much less data than the neuronal network itself.

In addition to a good three-dimensional map of a brain, uploading will require progress in neuroscience to develop functional models of each species of neuron (how they map input stimuli to outgoing action potentials, and how their properties change in response to activity in learning). It will also require a powerful computer to run the upload, and some way for the upload to interact with the external world or with a virtual reality. (Providing input/output or a virtual reality for the upload appears easy in comparison to the other challenges.)

An alternative hypothetical uploading method would proceed more gradually: one neuron could be replaced by an implant or by a simulation in a computer outside of the body. Then another neuron, and so on, until eventually the whole cortex has been replaced and the person’s thinking is implemented on entirely artificial hardware. (To do this for the whole brain would almost certainly require nanotechnology.)

A distinction is sometimes made between destructive uploading, in which the original brain is destroyed in the process, and non-destructive uploading, in which the original brain is preserved intact alongside the uploaded copy. It is a matter of debate under what conditions personal identity would be preserved in destructive uploading. Many philosophers who have studied the problem think that at least under some conditions, an upload of your brain would be you. A widely accepted position is that you survive so long as certain information patterns are conserved, such as your memories, values, attitudes, and emotional dispositions, and so long as there is causal continuity so that earlier stages of yourself help determine later stages of yourself.
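The "functional model of each species of neuron" that uploading calls for – a mapping from input stimuli to outgoing action potentials – is often illustrated with the textbook leaky integrate-and-fire model. The sketch below is that standard simplification, with arbitrary parameters; it is not a model anyone has validated as adequate for uploading:

```python
def lif_neuron(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire: the membrane potential accumulates input,
    leaks toward zero each time step, and emits a spike (1) on crossing
    the threshold, after which it resets. A crude stand-in for the kind
    of input->spike mapping an upload would need for every neuron type."""
    v = 0.0
    spikes = []
    for i in inputs:
        v = leak * v + i     # decay previous potential, add new input
        if v >= threshold:
            spikes.append(1)
            v = 0.0          # reset after the action potential
        else:
            spikes.append(0)
    return spikes

print(lif_neuron([0.5, 0.5, 0.5, 0.0, 1.2]))  # [0, 0, 1, 0, 1]
```

A real functional model would also have to capture how `threshold`-like and `leak`-like properties change with activity (learning), which is exactly the neuroscience progress the text says is still required.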
Views differ on the relative importance of these two criteria, but they can both be satisfied in the case of uploading. For the continuation of personhood, on this view, it matters little whether you are implemented on a silicon chip inside a computer or in that gray, cheesy lump inside your skull, assuming both implementations are conscious.

Tricky cases arise, however, if we imagine that several similar copies are made of your uploaded mind. Which one of them is you? Are they all you, or are none of them you? Who owns your property? Who is married to your spouse? Philosophical, legal, and ethical challenges abound. Maybe these will become hotly debated political issues later in this century.

A common misunderstanding about uploads is that they would necessarily be “disembodied” and that this would mean that their experiences would be impoverished. Uploading according to this view would be the ultimate escapism, one that only neurotic body-loathers could possibly feel tempted by. But an upload’s experience could in principle be identical to that of a biological human. An upload could have a virtual (simulated) body giving the same sensations and the same possibilities for interaction as a non-simulated body. With advanced virtual reality, uploads could enjoy food and drink, and upload sex could be as gloriously messy as one could wish. And uploads wouldn’t have to be confined to virtual reality: they could interact with people on the outside and even rent robot bodies in order to work in or explore physical reality.

Personal inclinations regarding uploading differ. Many transhumanists have a pragmatic attitude: whether they would like to upload or not depends on the precise conditions in which they would live as uploads and what the alternatives are. (Some transhumanists may also doubt whether uploading will be possible.)

Advantages of being an upload would include:

* Uploads would not be subject to biological senescence.
* Back-up copies of uploads could be created regularly so that you could be re-booted if something bad happened. (Thus your lifespan would potentially be as long as the universe’s.)
* You could potentially live much more economically as an upload since you wouldn’t need physical food, housing, transportation, etc.
* If you were running on a fast computer, you would think faster than in a biological implementation. For instance, if you were running on a computer a thousand times more powerful than a human brain, then you would think a thousand times faster (and the external world would appear to you as if it were slowed down by a factor of a thousand). You would thus get to experience more subjective time, and live more, during any given day.
* You could travel at the speed of light as an information pattern, which could be convenient in a future age of large-scale space settlements.
* Radical cognitive enhancements would likely be easier to implement in an upload than in an organic brain.

A couple of other points about uploading:

* Uploading should work for cryonics patients provided their brains are preserved in a sufficiently intact state.
* Uploads could reproduce extremely quickly (simply by making copies of themselves). This implies that resources could very quickly become scarce unless reproduction is regulated.

===What is the singularity?===

Some thinkers conjecture that there will be a point in the future when the rate of technological development becomes so rapid that the progress-curve becomes nearly vertical. Within a very brief time (months, days, or even just hours), the world might be transformed almost beyond recognition. This hypothetical point is referred to as the singularity. The most likely cause of a singularity would be the creation of some form of rapidly self-enhancing greater-than-human intelligence. The concept of the singularity is often associated with Vernor Vinge, who regards it as one of the more probable scenarios for the future.
(Earlier intimations of the same idea can be found e.g. in John von Neumann, as paraphrased by Ulam 1958, and in I. J. Good 1965.) Provided that we manage to avoid destroying civilization, Vinge thinks that a singularity is likely to happen as a consequence of advances in artificial intelligence, large systems of networked computers, computer-human integration, or some other form of intelligence amplification. Enhancing intelligence will, in this scenario, at some point lead to a positive feedback loop: smarter systems can design systems that are even more intelligent, and can do so more swiftly than the original human designers. This positive feedback effect would be powerful enough to drive an intelligence explosion that could quickly lead to the emergence of a superintelligent system of surpassing abilities.

The singularity hypothesis is sometimes paired with the claim that it is impossible for us to predict what comes after the singularity. A post-singularity society might be so alien that we can know nothing about it. One exception might be the basic laws of physics, but even there it is sometimes suggested that there may be undiscovered laws (for instance, we don’t yet have an accepted theory of quantum gravity) or poorly understood consequences of known laws that could be exploited to enable things we would normally think of as physically impossible, such as creating traversable wormholes, spawning new “basement” universes, or traveling backward in time. However, unpredictability is logically distinct from abruptness of development and would need to be argued for separately.

Transhumanists differ widely in the probability they assign to Vinge’s scenario. Almost all of those who do think that there will be a singularity believe it will happen in this century, and many think it is likely to happen within several decades.

References:

Good, I. J. “Speculations Concerning the First Ultraintelligent Machine,” in Advances in Computers, Vol. 6, Franz L. Alt and Morris Rubinoff, eds. (Academic Press, 1965), pp. 31–88.

Vinge, V. “The Coming Technological Singularity,” Whole Earth Review, Winter Issue (1993). http://www.ugcs.caltech.edu/~phoenix/vinge/vinge-sing.html

Ulam, S. “Tribute to John von Neumann,” Bulletin of the American Mathematical Society, Vol. 64, No. 3, Part II (1958), pp. 1–49.
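The positive feedback loop in Vinge’s scenario — each generation of designers builds a smarter successor, and smarter designers finish their work sooner — can be caricatured with a toy recurrence. The sketch below is a rhetorical illustration of why the elapsed time can stay bounded while capability grows without limit; the function name and every constant are arbitrary assumptions, not a forecast.

```python
# Toy model of recursive self-improvement: each design generation
# multiplies intelligence by a fixed gain, and the wall-clock time a
# generation takes shrinks in proportion to its designer's intelligence.
# All numbers are arbitrary; this illustrates the feedback-loop argument
# in the text, nothing more.

def explosion_time(gain=2.0, first_cycle_years=10.0, generations=50):
    """Return (elapsed years, final capability) after `generations` cycles."""
    intelligence = 1.0          # human-designer baseline
    elapsed = 0.0
    for _ in range(generations):
        elapsed += first_cycle_years / intelligence  # smarter => faster cycle
        intelligence *= gain                         # each cycle yields a smarter designer
    return elapsed, intelligence

elapsed, final = explosion_time()
# With these constants the elapsed time is a geometric series
# (10 + 5 + 2.5 + ...) that converges toward 20 years, even though
# capability doubles every cycle without bound.
```

The point of the toy model is only that "nearly vertical" progress curves follow naturally once cycle time scales inversely with capability; whether real intelligence amplification behaves anything like this is precisely what transhumanists disagree about.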