Wireheading: различия между версиями
H+Pedia>Supernerd (Simplifying page. Fixing grammatical errors) |
(нет различий)
|
Версия от 13:59, 18 октября 2019
Wireheading or Transcranial direct-current stimulation is an emerging technology whereby direct current is applied to the brain to affect neurological activity. It is typically intended to induce experiences of pleasure, usually through the direct stimulation of an individual's brain's reward or pleasure center with electrical current. It can also be used in a more expanded sense, to refer to any kind of method that produces a form of counterfeit utility by directly maximizing a good feeling, but that fails to realize what we value.
In both thought experiments and laboratory experiments direct stimulation of the brain’s reward center makes the individual feel happy. In theory, wireheading with a powerful enough current would be the most pleasurable experience imaginable. There is some evidence that reward is distinct from pleasure, and that most currently hypothesized forms of wireheading just motivate a person to continue the wirehead experience, not to feel happy. However, there seems to be no reason to believe that a different form of wireheading which does create subjective pleasure could not be found. The possibility of wireheading raises difficult ethical questions for those who believe that morality is based on human happiness. A civilization of wireheads "blissing out" all day while being fed and maintained by robots would be a state of maximum happiness, but such a civilization would have no art, love, scientific discovery, or any of the other things humans find valuable.
If we take wireheading as a more general form of producing counterfeit utility, there are many examples of ways of directly stimulating the reward and pleasure centers of the brain, without actually engaging in valuable experiences. Cocaine, heroin, cigarettes, and gambling are all examples of current methods of directly achieving pleasure or reward but can be seen by many as lacking much of what we value and are potentially extremely detrimental. Steve Omohundro argues[1] that: “An important class of vulnerabilities arises when the subsystems for measuring utility become corrupted. Human pleasure may be thought of as the experiential correlate of an assessment of high utility. But pleasure is mediated by neurochemicals and these are subject to manipulation.”
Wireheading is also an illustration of the complexities of creating a Friendly AI. Any AGI naively programmed to increase human happiness could devote its energies to wireheading people, possibly without their consent, in preference to any other goals. Equivalent problems arise for any simple attempt to create AGIs who care directly about human feelings ("love", "compassion", "excitement", etc). An AGI could wirehead people to feel in love all the time, but this wouldn’t correctly realize what we value when we say love is a virtue. For Omohundro, because exploiting those vulnerabilities in our subsystems for measuring utility is much easier than truly realizing our values, a wrongly designed AGI would most certainly prefer to wirehead humanity instead of pursuing human values. In addition, an AGI itself could be vulnerable to wirehead and would need to implement “police forces” or “immune systems” to ensure its measuring system doesn’t become corrupted by trying to produce counterfeit utility.
Blog posts
- Are wireheads happy?
- Would Your Real Preferences Please Stand Up?
- You cannot be mistaken about (not) wanting to wirehead
See also
- Steve Omohundro's paper, section 4 deals with vulnerabilities to counterfeit utility and wireheading
- Hedonism
- Complexity of value
- Wanting and liking
- Near/far thinking
- Hedonium
- Abolitionism
- Transcranial direct-current stimulation on Wikipedia
- Wireheading on Wikipedia
- Brain hackers on Hackspace