== Sensory/Motor Neuron Map ==

The link between the body and the brain runs through the spine, home to the most distinguishable part of the peripheral nervous system: the spinal nerves. A spinal nerve's fibers divide into two branches, or roots. The dorsal root carries sensory axons, while the ventral root carries motor axons: one is a receiver, the other a sender. Charles Bell proved this in 1810 by performing tests on animals, cutting axon after axon and observing the resulting paralysis or loss of sensation.

This arrangement is species-generic '''?REF''', so it may only be necessary to trace a single person's axons to map the sensory/motor axons all the way up the spine and into the brain, where the arrangement becomes more specialized for the individual.

In any case, the sensory and motor neurons are just more entries in the global table of neurons; but once they have been matched to their corresponding axons in the PNS (that is, once it is known exactly where each one sends signals to or receives signals from), one can build a table (a map) holding pointers to those neurons, categorized, for example, by region. One particular neuron might be classified as "Motor: second phalanx muscle, index finger, left hand, left arm".

<gallery>
File:Virtual emulation system.png|The system for a completely virtual emulation.
File:Embodied emulation system.jpg|The system for an embodied emulation.
</gallery>

This map links the sensory/motor neurons to their corresponding (virtual) muscles and sensors. It is a simple way of abstracting away motor control, providing simple, orderly access to the neurons responsible for it (a toy sketch of such a table follows below).
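As a concrete, purely illustrative picture of such a map, the sketch below keys each PNS-facing neuron by its ID in the global neuron table and records what it is wired to. All class names, field names, and neuron IDs here are invented for this example; they are not taken from any actual emulation software.

<syntaxhighlight lang="python">
from dataclasses import dataclass

@dataclass(frozen=True)
class PNSEndpoint:
    kind: str      # "motor" or "sensory"
    region: str    # coarse categorization, e.g. "left arm"
    label: str     # full description of the muscle or sensor

# The map itself: neuron ID in the global neuron table -> PNS endpoint.
neuron_map: dict[int, PNSEndpoint] = {
    481_202: PNSEndpoint("motor", "left arm",
                         "second phalanx muscle, index finger, left hand"),
    481_203: PNSEndpoint("sensory", "left arm",
                         "pressure receptor, fingertip, index finger, left hand"),
}

def neurons_in_region(kind: str, region: str) -> list[int]:
    """Look up all neuron IDs of a given kind wired to a given region."""
    return [nid for nid, ep in neuron_map.items()
            if ep.kind == kind and ep.region == region]

print(neurons_in_region("motor", "left arm"))   # -> [481202]
</syntaxhighlight>

In an embodied or virtual setup, the body-side code would resolve incoming nerve impulses and outgoing sensor pulses through exactly this kind of lookup, which is what the body model described next builds on.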
=== The Body Model ===

The virtual body can itself be considered a means of abstraction: instead of wiring the sensory/motor neuron map directly to a telepresence robot or to a virtual body, one pipes information to and from individual virtual muscles and virtual sensors. For example, consider a virtual avatar with points where collision checking is done. When a collision is detected, a pulse (possibly carrying the force vector) is piped to the corresponding sensor, which sends it on to its neuron. The same is done with muscles, but in reverse: the map translates the nerve impulse into, say, stress and force vectors in the virtual muscle, and the muscle pipes these values to a software object that turns them into motion of a limb in the avatar.

The body simulation translates between neural signals and the environment, and also maintains a model of body state as it affects the brain emulation. How detailed the body simulation needs to be depends on the goal: an "adequate" simulation produces enough of the right kind of information for the emulation to function and act, while a convincing simulation is nearly or wholly indistinguishable from the "feel" of the original body.

A number of relatively simple biomechanical simulations of bodies connected to simulated nervous systems have been created to study locomotion. Suzuki, Goto et al. (2005) simulated the C. elegans body as a multi-joint rigid link whose joints were controlled by motor neurons in a simulated motor control network. Örjan Ekeberg has simulated locomotion in the lamprey (Ekeberg and Grillner, 1999), stick insects (Ekeberg, Blümel et al., 2004), and the hind legs of the cat (Ekeberg and Pearson, 2005), where a rigid skeleton is moved by muscles modeled either as springs contracting linearly with neural signals or, in the case of the cat, by a model fitted to observed data relating neural stimulation, length, and velocity to contraction force (Brown, Scott et al., 1996). These models also include sensory feedback from stretch receptors, enabling movements to adapt to environmental forces: locomotion involves an information loop between neural activity, motor response, body dynamics, and sensory feedback (Pearson, Ekeberg et al., 2006).

Today's biomechanical modelling software enables fairly detailed models of muscles, the skeleton, and the joints, allowing calculation of forces, torques, and interaction with a simulated environment (Biomechanics Research Group Inc, 2005). Such models tend to simplify muscles as lines and make use of pre-recorded movements or tensions to generate the kinematics. A detailed mechanical model of human walking has been constructed with 23 degrees of freedom driven by 54 muscles; however, it was not controlled by a neural network but rather used to find an energy-optimizing gait (Anderson and Pandy, 2001). A state-of-the-art model involving 200 rigid bones with over 300 degrees of freedom, driven by muscular actuators with excitation-contraction dynamics and some neural control, has been developed for modelling human body motion in a dynamic environment, e.g. for ergonomics testing (Ivancevic and Beagley, 2004). This model runs on a normal workstation, suggesting that rigid-body simulation is not a computationally hard problem in comparison to WBE.

Other biomechanical models are being explored for assessing musculoskeletal function in humans (Fernandez and Pandy, 2006), and can be validated or individualized using MRI data (Arnold, Salinas et al., 2000) or EMG (Lloyd and Besier, 2003). Near-future models are expected to be based on volumetric muscle and bone models derived from MRI scanning (Blemker, Asakawa et al., 2007; Blemker and Delp, 2005), as well as on topological models (Magnenat-Thalmann and Cordier, 2000). There are also various simulations of soft tissue (Benham, Wright et al., 2001), breathing (Zordan, Celly et al., 2004), and soft-tissue deformation for surgery simulation (Cotin, Delingette et al., 1999).

Another source of body models is computer graphics, where much effort has gone into rendering realistic characters, including modelling muscles, hair, and skin. The emphasis has been on realistic appearance rather than realistic physics (Scheepers, Parent et al., 1997), but the models are increasingly becoming biophysically realistic and overlapping with biophysics (Chen and Zeltzer, 1992; Yucesoy, Koopman et al., 2002). For example, 30 contact/collision-coupled muscles in the upper limb, with fascia and tendons, were generated from the Visible Human dataset and simulated using a finite volume method; this simulation (using one million mesh tetrahedra) ran at 240 seconds per frame on a single 3.06 GHz Xeon CPU (on the order of a few GFLOPS) (Teran, Sifakis et al., 2005). Scaling this up 20 times to encompass ≈600 muscles implies a computational cost on the order of a hundred TFLOPS for a complete real-time body simulation.
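The hundred-TFLOPS figure follows from straightforward arithmetic, sketched below. The 30 fps real-time target and the ~3 GFLOPS sustained throughput of the Xeon are assumptions made for this back-of-the-envelope estimate, not numbers reported by Teran et al.

<syntaxhighlight lang="python">
# Back-of-the-envelope: scaling Teran et al.'s upper-limb simulation
# to a full body running in real time. Assumed inputs are marked.

sustained_flops = 3e9      # ~3 GFLOPS on the 3.06 GHz Xeon (assumption)
seconds_per_frame = 240    # reported frame time for 30 muscles
muscle_scaling = 20        # 30 muscles -> ~600 muscles
target_fps = 30            # real-time target (assumption)

flop_per_frame = sustained_flops * seconds_per_frame   # ~7.2e11 FLOP
full_body_per_frame = flop_per_frame * muscle_scaling  # ~1.4e13 FLOP
required = full_body_per_frame * target_fps            # ~4.3e14 FLOPS

print(f"{required / 1e12:.0f} TFLOPS")  # -> 432 TFLOPS, order of a hundred
</syntaxhighlight>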
Physiological models are increasingly used in medicine for education, research, and patient evaluation. Relatively simple models can accurately simulate blood oxygenation (Hardman, Bedforth et al., 1998); for a body simulation this might be enough to provide the right feedback between exertion and brain state. Similarly simple nutrient and hormone models could be used insofar as a realistic response to hunger and eating is desired.

=== Senses ===
<blockquote>See: [[Physical Enhancement#Sensory Augmentation|Sensory Augmentation]]</blockquote>

==== Vision ====
Visual photorealism has been sought in computer graphics for about 30 years, and this appears to be a fairly mature area, at least for static images and scenes. Much effort currently goes into such technology for use in computer games and movies. McGuigan (2006) proposes a "graphics Turing test" and estimates that 518.4-1036.8 TFLOPS would be enough for Monte Carlo global illumination at 30 Hz interactive visual updates. This may actually be an overestimate, since it assumes generation of complete pictures: generating only the signal needed by the retinal receptors (with higher resolution for the fovea than for the periphery) could presumably reduce the demands, and more efficient implementations of the illumination model (or a cheaper one) would reduce them further.

==== Hearing ====
The full acoustic field can be simulated over the frequency range of human hearing by solving the differential equations for air vibration (Garriga, Spa et al., 2005). While accurate, this method has a computational cost that scales with the volume simulated, up to 16 TFLOPS for a 2×2×2 m room. This can likely be reduced by adaptive mesh methods, or by ray- or beam-tracing of sound (Funkhouser, Tsingos et al., 2004).

Sound generation occurs not only from sound sources such as instruments, loudspeakers, and people, but also from normal interactions between objects in the environment. By simulating surface vibrations, realistic sounds can be generated as objects collide and vibrate. A basic model with N surface nodes requires 0.5292·N GFLOPS, but this can be reduced significantly by taking perceptual shortcuts (Raghuvanshi and Lin, 2006; Raghuvanshi and Lin, 2007). The same form of vibration generation can likely be used to synthesize realistic vibrations for touch.
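To make these acoustic costs concrete, the sketch below applies the two figures quoted above to an example room and object. The room dimensions and node count are invented for illustration, and the per-cubic-metre rate is simply the 16 TFLOPS figure divided by the 8 m³ reference volume.

<syntaxhighlight lang="python">
# Rough acoustic budget from the figures quoted above.
ROOM_RATE_TFLOPS_PER_M3 = 16 / (2 * 2 * 2)  # 2 TFLOPS per m^3 (derived)
VIBRATION_GFLOPS_PER_NODE = 0.5292          # basic surface-vibration model

def field_cost_tflops(w: float, d: float, h: float) -> float:
    """Full acoustic-field simulation cost for a w x d x h metre room."""
    return ROOM_RATE_TFLOPS_PER_M3 * w * d * h

def vibration_cost_gflops(n_nodes: int) -> float:
    """Surface-vibration synthesis cost for an object with n surface nodes."""
    return VIBRATION_GFLOPS_PER_NODE * n_nodes

print(field_cost_tflops(5, 4, 3))      # 120 TFLOPS for a 5x4x3 m room
print(vibration_cost_gflops(2000))     # ~1058 GFLOPS for a 2000-node object
</syntaxhighlight>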
==== Smell and Taste ====
So far no work has been done on simulating smell and taste in virtual reality, mainly due to the lack of output devices. Some simulations of odorant diffusion have been done in underwater environments (Baird, Johari et al., 1996) and in the human and rat nasal cavity (Keyhani, Scherer et al., 1997; Zhao, Dalton et al., 2006). In general, an odor simulation would involve modelling the diffusion and transport of chemicals through air flow, and the relatively low temporal and spatial resolution of human olfaction would likely allow a fairly simple model. A far more involved issue is which odorant molecules to simulate: humans have 350 active olfactory receptor genes, but we can likely detect more variation due to differences in diffusion in the nasal cavity (Shepherd, 2004).

Taste relies on only a few types of receptor, but the tongue also detects texture, and the placement of the nose forces one to smell objects entering the mouth. The former may require complex simulations of the physics of virtual objects in the case of virtual environments, and pressure/temperature sensors for simulacra.

==== Haptics ====
The haptic senses of touch, proprioception, and balance are crucial for performing skilled actions in real and virtual environments (Robles-De-La-Torre, 2006).

Tactile sensation relates both to the forces affecting the skin (and hair) and to how they change as objects or the body move. To simulate touch stimuli, collision detection is needed to calculate the forces on the skin (and possibly its deformations), as well as the vibrations produced when the skin is moved over a surface or explores it with a hard object (Klatzky, Lederman et al., 2003). To achieve realistic haptic rendering, updates in the kilohertz range may be necessary (Lin and Otaduy, 2005); a toy version of such an update loop is sketched at the end of this page. In environments with deformable objects, various nonlinearities in response and restitution have to be taken into account (Mahvash and Hayward, 2004).

Proprioception, the sense of how far muscles and tendons are stretched (and, by inference, where the limbs are), is important for maintaining posture and orientation. Unlike the other senses, proprioceptive signals would be generated internally by the body model. Simulated Golgi organs, muscle spindles, and pulmonary stretch receptors would then convert body states into nerve impulses.

The balance signals from the inner ear appear relatively simple to simulate, since they depend only on the fluid velocity and pressure in the semicircular canals (which can likely be assumed to be laminar and homogeneous) and on gravity's effect on the utricle and saccule. Compared to the other senses, the computational demands are minuscule.

Thermoreception could presumably be simulated by giving each object in the virtual environment a temperature and activating the thermoreceptors in contact with the object. Nociception (pain) would be simulated by activating the receptors in the presence of excessive forces or temperatures; the ability to experience pain from simulated inflammatory responses may be unnecessary verisimilitude.

=== The Environment Model ===
==== Embodiment ====
==== Virtual Worlds ====
[[File:broken thing.jpg|thumb|left|[[Second Life]]'s forever-in-beta engine shows how awful it would be to live in a simulation. Also it's full of weirdos.]]
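As promised in the Haptics section above, here is a minimal sketch of a fixed-rate haptic update loop. The 1 kHz rate reflects the kilohertz-range figure cited there; the contact model (a simple penalty spring) and all names are illustrative assumptions, not a description of any particular haptics library.

<syntaxhighlight lang="python">
import time

HAPTIC_RATE_HZ = 1000   # kilohertz-range update rate (Lin and Otaduy, 2005)
DT = 1.0 / HAPTIC_RATE_HZ
STIFFNESS = 800.0       # penalty-spring stiffness, N/m (assumption)

def contact_force(penetration_m: float) -> float:
    """Penalty model: force proportional to how far the skin point
    has penetrated a virtual surface; zero when not in contact."""
    return STIFFNESS * max(penetration_m, 0.0)

def haptic_loop(get_penetration, send_to_sensor_neuron, steps: int) -> None:
    """Run the fixed-rate loop: sample contact, compute force, pipe it on."""
    for _ in range(steps):
        force = contact_force(get_penetration())
        send_to_sensor_neuron(force)   # would route through the neuron map
        time.sleep(DT)                 # stand-in for real-time scheduling

# Toy usage: a fingertip resting 1 mm into a surface for 10 ms of sim time.
haptic_loop(lambda: 0.001, lambda f: print(f"{f:.2f} N"), steps=10)
</syntaxhighlight>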