Whole brain emulation
= Science & Technology =

== Process ==

# The brain is extracted. Necessary extra fixatives are applied.
# The brain is segmented into as many pieces as needed.
# Each piece is laminated and sectioned and scanned by an ultramicrotome, [[ATLUM]] or some other machine.
## Current technology is rather slow, but new massively-parallel electron microscopes are being developed, primarily by the semiconductor industry. The speed of scanning depends on how many machines are available.
# A stack of electron micrographs is built, one for every slice.
# Noise is eliminated, and different electron micrographs at the same height are pasted together by inferring edge connectivity.
# An edge detector traces the contours of neurites and cellular structures.
# Other algorithms detect intracellular structures of interest (e.g. polyribosome complexes).
# This is done for every layer.
# Another algorithm joins the edges in different layers, creating a 3D model of the brain.
# Another algorithm uses that model to create a graph of the connectivity of the brain. Each node in this graph is a neuron, and the neuron data structure is supplied with all the necessary extra information acquired in step 7.
# After the scan is complete, the graph is stored in a neuromorphic computer: a machine where every processor is a hardware implementation of some model of neurons.
# The graph at the lowest level of the stem is joined with a species-generic graph of the connectivity of the spine, i.e. axons of the brain are matched to virtual nerve endings.
# The simulation of the body and brain, either connected to a robot or virtual avatar, is started.
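The layer-joining step of the pipeline above can be sketched in miniature. This is a toy illustration only: `Contour` and `join_layers` are hypothetical names, and real tracers use far more than centroid proximity to link neurites across layers.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Contour:
    """A traced neurite cross-section in one micrograph layer."""
    layer: int
    neurite_id: int            # label assigned by the edge detector
    centroid: tuple            # (x, y) position in nanometres

def join_layers(contours, max_drift=50.0):
    """Link contours in adjacent layers whose centroids are close,
    producing 3D neurite traces (step 9 above, in miniature)."""
    traces, open_traces = [], {}
    for c in sorted(contours, key=lambda c: c.layer):
        prev = open_traces.get(c.neurite_id)
        if (prev and c.layer == prev[-1].layer + 1
                and abs(c.centroid[0] - prev[-1].centroid[0]) <= max_drift
                and abs(c.centroid[1] - prev[-1].centroid[1]) <= max_drift):
            prev.append(c)             # contour continues an existing trace
        else:
            t = [c]                    # start a new trace
            traces.append(t)
            open_traces[c.neurite_id] = t
    return traces

# Two contours that drift slightly form one trace; a large jump starts a new one.
stack = [Contour(0, 1, (0.0, 0.0)), Contour(1, 1, (10.0, 0.0)),
         Contour(0, 2, (0.0, 0.0)), Contour(1, 2, (500.0, 0.0))]
print([len(t) for t in join_layers(stack)])  # [2, 1, 1]
```

The resulting traces would then be handed to the graph-building stage, which collapses each trace into a node and each detected synaptic contact into an edge.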
== Slicing ==

=== ATLUM ===

The '''Automatic Tape-Collecting Lathe Ultramicrotome''' is currently the fastest way to fully automatically section and scan neural tissue.<ref>Hayworth, Kashturi et al., 2006; Hayworth, 2007</ref> [[File:ATLUM Prototype.jpg|thumb|right|The Automatic Tape Collecting Lathe Ultramicrotome - [[ATLUM]]]] Invented by transhumanist [[Ken Hayworth]], the ATLUM is essentially a rotating block of tissue (typically 1 to 2 mm in width, 10 mm in length, and 0.5 mm in depth) mounted on a steel axle. As the tissue is turned by an ultraprecise [http://en.wikipedia.org/wiki/Fluid_bearing#Air_bearings air bearing], the microtome (an ultrathin knife) advances by means of a piezoelectric component, creating a spiral cut of 40 nm-thick tissue, which slides onto the water on the back surface of the microtome. The slice is then sandwiched between two carbon-coated mylar tapes, replacing the need for dexterous undergraduates to manually position the sections on a TEM grid. The tape has the extra benefits of being a sturdy substrate and preventing charging and beam damage during the scan. The position of the knife relative to the axle can be held to within approximately 10 nm, so that many square millimeters of tissue can be sectioned. [[File:ATLUM_Closeup.jpg|thumb|left]] [[File:ATLUM poster.jpg|thumb]]

After sectioning, the tissue can be preserved for later imaging, or exposed to heavy metals for SEM. This produces greater signal-to-noise ratios and much faster imaging times than what is provided by SBFSEM and FIBSEM. Imaging can also be done through the SEM backscattering signal, as in SBFSEM and FIBSEM, allowing the ATLUM to work without the need for a TEM grid and removing width-of-section limitations. Images obtained with an ATLUM are of a quality equivalent to that of traditional TEM, with a lateral resolution above 5 nm. ATLUM-collected sections can be subjected to tomographic tilt (see SSET), giving additional depth data.
This is done by tilting the section at various angles relative to the SEM beam, giving depth resolutions greater than that of the thickness itself. One of the main differences between the ATLUM and other microtomes is that others engage in discontinuous motion: sweep forward, collect a slice into the water boat, sweep back, repeat. The ATLUM generates a continuous cut by moving in a spiral manner around the sample. There may be some problems reconstructing large volumes from EMs of slices that are flat but must be mapped to tubular surfaces of decreasing diameter (the brain), a problem that is avoided by having straight, one-above-the-other cuts. Another point, related to the above, is that the ATLUM produces a single tape of tissue, while other machines produce independent slices that must be gathered and ordered. In an ordinary ultramicrotome, an accident that randomizes the slices will ruin the entire process; in the ATLUM, one only needs to distinguish one end of the tape from the other. With an ATLUM, one can scan volumes of brain tissue in the range of cubic millimeters in a completely automated manner, unlike SSET and SSTEM, which are only semi-automated: the operator must manually recover tissue from the knife's water boat and, again, manually place it into TEM slot grids using an eyelash, a terribly inefficient and unreliable process.

=== KESM ===

[[File:KESM.jpg|thumb|right|The Knife Edge Scanning Microscope - KESM. Texas A&M University.]] The Knife-Edge Scanning Microscope is a machine that integrates sectioning and imaging into a single process, with the diamond knife and microscope moving together in the same assembly, and can scan large volumes of tissue at high resolution (below that of SBFSEM).
It would allow for an entire mouse brain (≈1 cm<sup>3</sup>) to be scanned in one hundred hours, producing 15 terabytes of raw data.<ref>McCormick, 2002a; McCormick, 2002b</ref> The main limitation is the need to stain inside a volume. The array of stains is at present limited, but genetic modifications may allow for cells to express stains or become fluorescent. The tissue is fixed and embedded in plastic. As the knife and the microscope move over the tissue, a thin strip of tissue is cut and scanned. The KESM can scan up to 200 megapixels per second, with the width of the EM field being 2.5mm for 64μm resolution and 0.625mm for 32μm resolution<ref>McCormick, Koh et al., 2004</ref>. The sample is cut in a stair-step manner to reduce jittering and allow the use of knives larger than the microscope's field of view<ref>Koh and McCormick, 2003</ref>.

== Scanning ==

''(Ordered by increasing resolution)''

=== Optical Procedures ===

Optical microscopy methods are limited by the need to stain tissues to make relevant details stand out, and by the diffraction limit set by the wavelength of light (≈0.2 μm). The main benefit is that they go well together with various spectrometric methods (see below) for determining the composition of tissues. Sub-diffraction optical microscopy is possible, if limited. Various fluorescence-based methods have been developed that could be applicable if fluorophores could be attached to the brain tissue in a way that provided the relevant information. Structured illumination techniques use patterned illumination and post-collection analysis of the interference fringes between the illumination and sample image, together with optical nonlinearities, to break the diffraction limit. This way, 50 nm resolving power can be achieved in a wide field, at the price of photodamage due to the high power levels (Gustafsson, 2005).
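The ≈0.2 μm figure quoted above follows from the Abbe criterion d = λ/(2·NA). A quick sanity check; the wavelength and numerical aperture below are illustrative values, not taken from the text:

```python
# Abbe diffraction limit d = wavelength / (2 * NA); illustrative values.
wavelength_nm = 550          # green light, mid-visible spectrum
numerical_aperture = 1.4     # a high-end oil-immersion objective
abbe_limit_nm = wavelength_nm / (2 * numerical_aperture)
print(round(abbe_limit_nm))  # 196, i.e. roughly 0.2 um
```

Sub-diffraction techniques such as the structured illumination described above work around this bound rather than violating it, by encoding high-frequency detail in interference patterns.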
Near-field scanning optical microscopy (NSOM) uses a multi-nanometre optic fiber to scan the substrate using near-field optics, gaining resolution (down to the multi-nanometer scale) and freedom from fluorescent markers at the expense of speed and depth of field. It can also be extended into near-field spectroscopy. Confocal microscopy suffers from having to scan through the entire region of interest, and quality degrades away from the focal plane. Using inverse scattering methods, depth-independent focus can be achieved (Ralston, Marks et al., 2007).

==== Optical Histology ====

All-optical histology uses femtosecond laser pulses to ablate tissue samples, avoiding the need for mechanical removal of the surface layer (Tsai, Friedman et al., 2003). This treatment appears to change the tissue 2-10 μm from the surface. However, Tsai et al. were optimistic about being able to scan a fluorescence-labelled entire mouse brain into 2 terapixels at the diffraction limit of spatial resolution. Another interesting application of femtosecond laser pulses is microdissection (Sakakura, Kajiyama et al., 2007; Colombelli, Grill et al., 2004). The laser was able to remove 100 μm samples from plant and animal material, modifying a ~10 μm border. This form of optical dissection might be an important complement to EM methods: after scanning the geometry of the tissue at high resolution, relevant pieces can be removed and analyzed microchemically. This could enable gaining both the EM connectivity data and detailed biochemistry information. Platforms already exist that can inject biomolecules into individual cells, perform microdissection, isolate and collect individual cells using laser catapulting, and set up complex optical force patterns (Stuhrmann, Jahnke et al., 2006).

=== MRI Microscopy ===

=== X-Ray Microscopy ===

X-ray microscopy also allows spectromicroscopy, which adds additional information to the scan about the chemical environment of the tissue.
Different amino acids can be detected with this method, and individual proteins could be classified. Currently, X-ray microscopy is too slow to be relevant for WBE of mammals: scanning X-ray microscopes have exposure times measured in minutes, although they deposit five to ten times less radiation in the sample<ref>Jacobsen, 1999</ref>.

=== Atomic Beam Microscopy ===

Atomic-beam microscopy consists of using a beam of neutral atoms, instead of electrons or photons, to image tissue. The de Broglie wavelength of thermal atoms is in the subnanometer range, making the resolution match that of the best [[#Electron Microscopy|electron microscopes]]. If uncharged, inert atoms like helium are used, the beam would not destroy tissue even at such a resolution<ref>Holst and Allison, 1997</ref>. Moreover, helium atom scattering has a large cross-section with hydrogen, which might make it possible to detect membranes even in unstained tissue. High-resolution atomic beam microscopy has not been achieved, although low resolution has been<ref>Doak, Grisenti et al., 1999</ref>. Recent developments<ref>Oberst, Kouznetsov et al., 2005</ref><ref>Shimizu and Fujita, 2002</ref> have enabled focusing neutral atom beams to a spot size of tens of nanometers<ref>Kouznetsov, Oberst et al., 2006</ref>, which could be scanned across the tissue to construct the full image.

=== Electron Microscopy ===

==== SSET ====

'''Serial Section Electron Tomography.''' By tilting the sample relative to the electron beam, the TEM can detect depth and create high-resolution 3D images<ref>Frank, 1992</ref><ref>Penczek, Marko et al., 1995</ref>. Due to the limitation on depth (1 µm), it is useful mostly for scanning 'local' tissue samples, i.e. the organelles and cellular structures of small volumes of tissue.<ref>Lučić, Förster et al., 2005</ref>

==== SSTEM ====

'''Serial Section Transmission Electron Microscopy.''' By making ultrathin slices, a three-dimensional model can be made.
This method has been used to build a model of a neuromuscular junction (50 nm-thick sections)<ref>Tsang, 2005</ref> and to construct the connectome of ''C. elegans''<ref>White, Southgate et al., 1986</ref>. However, this process is labor-intensive unless it can be automated.

==== SBFSEM ====

One way of reducing the problems of sectioning is to place the microtome inside the microscope chamber (Leighton, 1981); for further contrast, plasma etching was used (Kuzirian and Leighton, 1983). Denk and Horstmann (2004) demonstrated that backscattering contrast could be used instead in a SEM, simplifying the technique. They produced stacks of 50-70 nm thick sections using an automated microtome in the microscope chamber, with lateral jitter less than 10 nm. The resolution and field size were limited by the commercially available system. They estimated that tracing of axons with 20 nm resolution and an S/N ratio of about 10 within a 200 μm cube could take about a day (while 10 nm x 10 nm x 50 nm voxels at S/N 100 would require a scan time on the order of a year). Reconstructing volumes from ultrathin sections faces many practical challenges. Current electron microscopes cannot handle sections wider than 1-2 mm. Long series of sections are needed, but the risk of errors or damage increases with the length, and the number of specimen-holding grids becomes excessive (unless sectioning occurs inside the microscope (Kuzirian and Leighton, 1983)). The current state of the art for practical reconstruction from tissue blocks is about 0.1 mm<sup>3</sup>, containing about 10<sup>7</sup>-10<sup>8</sup> synapses (Fiala, 2002).

==== FIBSEM ====

The semiconductor industry has long used focused ion beams to perform failure analysis tests on integrated circuits. [http://www.fei.com/ FEI] researchers have shown that this can be used to image [[#Plastination|plastinated]] neural tissue. An ion beam ablates the top 30 to 50 nanometers of a 100x100 μm tissue sample.
The backscatter is imaged by the SEM, and the process is then repeated. It is similar to SBFSEM, but without the problems caused by high beam current.

==== Increasing Speed of SEM ====

From the above discussion it is clear that long imaging times constitute a major barrier to whole brain emulation using SEM techniques. However, there is currently a major research push toward massively parallel multi-beam SEMs, which have the potential to speed up SEM imaging by many orders of magnitude. This research push is being driven by the semiconductor industry as part of its effort to reduce feature sizes on computer chips below the level that traditional photolithography can produce. The circuitry patterns within computer chips are produced through a series of etching and doping steps. Each of these steps must affect only selected parts of the chip, so areas to be left unaffected are temporarily covered by a thin layer of polymer which is patterned in exquisite detail to match the sub-micron features of the desired circuitry. For current mass production of chips, this polymer layer is patterned by shining ultraviolet light through a mask onto the surface of the silicon wafer, which has been covered with the photopolymer in liquid form. This selectively cures only the desired parts of the photopolymer. To obtain smaller features than UV light can allow, electron beams (just as in a SEM) must instead be used to selectively cure the photopolymer. This process is called e-beam lithography. Because the electron beam must be rastered across the wafer surface (instead of flood-illuminating it as in light lithography), the process is currently much too slow for production-level runs. Several research groups and companies are currently addressing this speed problem by developing multi-beam e-beam lithography systems (Kruit, 1998; van Bruggen, van Someren et al., 2005; van Someren, van Bruggen et al., 2006; Arradiance Inc).
In these systems, hundreds to thousands of electron beams raster across a wafer's surface, simultaneously writing the circuitry patterns. These multi-beam systems are essentially SEMs, and it should be a straightforward task to modify them to allow massively parallel scanning as well (Pickard, Groves et al., 2003). For backscatter imaging (as in the SBFSEM, FIBSEM, and ATLUM technologies) this might involve mounting a scintillator with a grid of holes (one for each e-beam) very close to the surface of the tissue being imaged. In this way the interactions of each e-beam with the tissue can be read off independently and simultaneously. It is difficult to predict how fast these SEMs may eventually get. A 1,000-beam SEM where each individual beam maintains the current 1 MHz acquisition rate for stained sections appears reachable within the next ten years. We can very tentatively apply this projected SEM speedup to ask how long imaging a human brain would take. First, assume a brain were sliced into 50 nm sections on ATLUM-like devices (an enormous feat which would itself take approximately 1,000 machines, each operating at 10x the current sectioning rate, a total of 3.5 years to accomplish). This massive ultrathin section library would contain the equivalent of 1.1×10<sup>21</sup> voxels (at 5×5×50 nm per voxel). Assuming judicious use of directed imaging within this ultrathin section library, only 1/10 may have to be imaged at this extremely high resolution (using much lower, and thus faster, imaging on white matter tracts, cell body interiors, etc.). This leaves roughly 1.1×10<sup>20</sup> voxels to be imaged at high resolution. If 1,000 SEMs, each utilizing 1,000 beamlets, were to tackle this imaging job in parallel, their combined data acquisition rate would be 1×10<sup>12</sup> voxels per second. At this rate the entire imaging task could be completed in less than 4 years.
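The arithmetic behind this estimate can be reproduced in a few lines. The ~1,400 cm<sup>3</sup> brain volume is a rounded assumption consistent with the voxel count quoted above:

```python
# Reproducing the imaging-time estimate above.
brain_volume_nm3 = 1400 * 1e21          # ~1,400 cm^3 brain; 1 cm^3 = 1e21 nm^3
voxel_nm3 = 5 * 5 * 50                  # 5x5x50 nm voxels
voxels = brain_volume_nm3 / voxel_nm3   # ~1.1e21 voxels in the section library
high_res = voxels / 10                  # only 1/10 imaged at full resolution
rate = 1000 * 1000 * 1e6                # 1,000 SEMs x 1,000 beams x 1 MHz
years = high_res / rate / (365 * 24 * 3600)
print(round(years, 1))                  # 3.6, i.e. "less than 4 years"
```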
=== Nondestructive Procedures ===

''Main article: [[Non-destructive uploading]]''

=== Moravec Procedure ===

(This totally ruins the otherwise 100% science contents of the article, so please ignore it.) Scanning of the neural structures may take the form of gradual replacement, in which a robot surgeon is equipped with a manipulator that subdivides into increasingly smaller branches. While the patient is awake and conscious, this manipulator begins removing cells, clamping blood vessels, exposing synapses for analysis, etc. Once the onboard computing has a good picture of what's going on, it creates a simulation of the specific volume of the brain and replaces that part of the brain with the hardware running this simulation, using magic nanofingers to plug everything together. After a while, the entire brain is composed of this hardware, maintaining the same functionality as before.<ref>Moravec, 1988</ref> While this may help mitigate fears of loss of consciousness in an all-in-one 'kill, cut, scan' approach, it is technically infeasible, as the system has to be seamlessly integrated with living, changing, moving biological tissue. It is sometimes suggested that the successive aggregation of brain-computer interfaces to a brain will lead to a state where transfer is possible, either by reaching a state where most functions are carried out in the external hardware and the brain is no longer necessary, or by reaching a point where the systems are so pervasive that it is possible to scan the whole of the brain with them, destructively or otherwise.

=== Summary ===

* '''Resolution:''' All forms of electron microscopy, except SBFSEM, have sufficient resolution to construct a graph of the connectivity of the brain and also inspect the properties of individual synapses (count the synaptic vesicles). It may be possible, with future modifications, for SBFSEM to reach the necessary resolution.
* '''Reliability:'''
* '''Time:'''

{| border="1" class=wikitable
! style="background-color:#C0C0C0;" | Method || style="background-color:#C0C0C0;" | Resolution || style="background-color:#C0C0C0;" | Notes
|-
| MRI || > 5.5 µm (non-frozen) || Does not require sectioning, may achieve better resolution on vitrified brains.
|-
| MRI microscopy || 3 µm || None
|-
| NIR microspectroscopy || 1 µm || None
|-
| All-optical histology || 0.7 µm || None
|-
| KESM || 0.3 µm x 0.5 µm || None
|-
| X-ray microtomography || 0.47 µm || None
|-
| MRFM || 80 nm || None
|-
| SI || 50 nm || None
|-
| X-ray microscopy || 30 nm || Spectromicroscopy possible?
|-
| SBFSEM || 50-70 nm x 1-20 nm || None
|-
| FIBSEM || 30-50 nm x 1-20 nm || None
|-
| ATLUM || 40 nm x 5 nm || None
|-
| SSET || 50 nm x 1 nm || None
|-
| Atomic beam microscopy || 10 nm || Not implemented yet
|-
| NSOM || 5 nm? || Requires fluorescent markers, spectroscopy possible.
|-
| SEM || 1-20 nm || None
|-
| Array tomography || 1-20 nm SEM, 50x200x200 nm fluorescence stains || Enables multiple staining
|-
| TEM || <1 nm || Basic 2D method, must be combined with sectioning or tomography for 3D imaging. Damage from high-energy electrons at high resolutions.
|}

== Simulation Hardware ==

{| border=1 class=wikitable
! colspan=6 align=center | '''Storage Requirements'''
|-
| align=center | '''Level''' || align=center | '''Description''' || align=center | '''# entities''' || align=center | '''Bytes per entity''' || align=center | '''Memory demands (Terabytes)''' || align=center | '''Earliest year at a cost of $1 million'''
|-
| 1 || Computational module || 100-1,000? || ? || ? || ?
|-
| 2 || Brain region connectivity || 10<sup>5</sup> regions, 10<sup>7</sup> connections || 3? (2-byte connectivity, 1-byte weight) || 3x10<sup>-5</sup> || Present
|-
| 3 || Analog network population model || 10<sup>8</sup> populations, 10<sup>13</sup> connections || 5 (3-byte connectivity, 1-byte weight, 1-byte extra state variable) || 50 || Present
|-
| 4 || Spiking neural network || 10<sup>11</sup> neurons, 10<sup>15</sup> connections || 8 (4-byte connectivity, 4 state variables) || 8,000 || 2019
|-
| 5 || Electrophysiology || 10<sup>15</sup> compartments x 10 state variables = 10<sup>16</sup> || 1 byte per state variable || 10,000 || 2019
|-
| 6 || Metabolome || 10<sup>16</sup> compartments x 10<sup>2</sup> metabolites = 10<sup>18</sup> || 1 byte per state variable || 10<sup>6</sup> || 2029
|-
| 7 || Proteome || 10<sup>16</sup> compartments x 10<sup>3</sup> proteins and metabolites || 1 byte per state variable || 10<sup>7</sup> || 2034
|-
| 8 || States of protein complexes || 10<sup>16</sup> compartments x 10<sup>3</sup> proteins x 10 states = 10<sup>20</sup> || 1 byte per state variable || 10<sup>8</sup> || 2038
|-
| 9 || Distribution of complexes || 10<sup>16</sup> compartments x 10<sup>3</sup> proteins and metabolites x 100 states/locations || 1 byte per state variable || 10<sup>9</sup> || 2043
|-
| None || Full 3D EM map || 50x2.5x2.5 nm voxels || 1 byte per voxel, compressed || 10<sup>9</sup> || 2043
|-
| 10 || Molecular dynamics || 10<sup>25</sup> molecules || 31 (2 bytes molecule type, 14 bytes position, 14 bytes velocity, 1 byte state). (Does that last byte refer to energy state? How do reactions happen anyway? You would need special considerations because most MD codes don't actually allow reactions --[[User:Eudoxia|Eudoxia]] 14:55, 28 January 2012 (UTC)) || 3.1x10<sup>14</sup> || 2069
|-
| 11 || Quantum chemistry || Either ~10<sup>26</sup> atoms, or a smaller number of quantum-state-carrying molecules || Qbits || ? || ?
|}

=== Neuromorphic Chips ===

A neuromorphic chip is a computer built to mimic specific architectures present in biological brains.
[[File:CMOS neuromorphic.jpg|thumb]]

==== SpiNNaker ====

* SpiNNaker is a massively parallel, low-power, neuromorphic supercomputer.
* Manchester University, UK
** Led by Professor Steve Furber.
** Collaborators from the universities of Southampton, Cambridge, and Sheffield.
* Goal: model very large, biologically realistic, spiking neural networks in real time.
* Machine
** The machine will consist of 65,536 identical custom-built 18-core processors, giving it 1,179,648 cores in total.
** Each processor has an on-board router to form links with six neighbours, forming a toroidal network, as well as its own 128 MB of memory to hold synaptic weights. Each processor contains 18 identical cores clocked at 200 MHz. A core is an ARM968 processor core manufactured using a 130 nm process, with 32 kB of instruction memory, 64 kB of data memory, three controllers, a clock, and a timer. Although the ARM968 is old, it is used because the licensing agreement was committed to back in 2005. Each multiprocessor chip has about 100 million transistors, most of which are in the 55 blocks of 32 kB SRAM local instruction and data memory.
** On a separate die, but within the same chip package, is a 128 MB DDR SDRAM memory chip that operates at up to 166 MHz. This has about a billion transistors. The multiprocessor and memory chips are packaged together, one above the other, in a 19x19 mm 300-pin ball grid array.
** Each chip dissipates about 1 watt. The SpiNNaker machine is expected to consume 50-100 kW peak, although the average is predicted to be well below 50 kW. For comparison, the average human brain consumes around 20 W.
** The finished million-processor machine will occupy several cabinets: at least six to eight, possibly more if the power density turns out to be an issue.
** A possible configuration would be: 48 chips per board, 12 boards per rack, 20 racks per cabinet, 6 cabinets. This is a purely speculative configuration dreamt up by this article's author.
** Massive parallelism and resilience to failure of individual components. With over one million cores, and one thousand simulated neurons per core, the machine will be capable of simulating one billion neurons. This equates to just over 1% of the human brain's 86 billion neurons.
* SpiNNaker will be a platform on which different algorithms can be tested.
** Many connectivities can be made.
* SpiNNaker is a contrived acronym derived from Spiking Neural Network Architecture.
* The project started in 2005 and is currently funded by a UK government grant until early 2014. The microchips were manufactured and delivered to the lab in June 2011. A prototype with 864 cores was built in mid-2012. The full machine with over 1 million cores is expected to be complete by the end of 2013.
* If the system proves successful, similar machines can be built to take advantage of more advanced processors. For example, the 130 nm process used for the SpiNNaker chips is over a decade old; it was used for the consumer processors that went on sale starting in 2001. If a more modern process were used, for example the 22 nm of 2012's consumer-level devices, then power consumption could be reduced by a factor of 10.
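The capacity figures quoted above are internally consistent, as a quick check shows:

```python
# Checking the SpiNNaker totals quoted above.
chips = 65_536
cores_per_chip = 18
neurons_per_core = 1_000                        # simulated neurons per core
cores = chips * cores_per_chip
neurons = cores * neurons_per_core
human_neurons = 86_000_000_000
print(cores)                                    # 1179648
print(round(100 * neurons / human_neurons, 2))  # 1.37 (% of the human brain)
```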
* [http://spectrum.ieee.org/computing/hardware/lowpower-chips-to-model-a-billion-neurons/0 ''Low-Power Chips to Model a Billion Neurons'']
* [http://www.youtube.com/watch?v=xw8OH3VlYtg Professor Steve Furber giving a talk at Edinburgh University called 'Building brains']
* [http://www.sciencedirect.com/science/article/pii/S0165027012000866 ''Power-efficient simulation of detailed cortical microcircuits on SpiNNaker'']

==== DARPA SyNAPSE ====

==== BrainScaleS ====

The BrainScaleS project aims to understand information processing in the brain at different scales, ranging from individual neurons to whole functional brain areas. The research involves three approaches: (1) in vivo biological experimentation; (2) simulation on petascale supercomputers; (3) the construction of neuromorphic processors. The goal is to extract generic theoretical principles of brain function and to use this knowledge to build artificial cognitive systems. The neuromorphic hardware is based around wafer-scale analog VLSI. Each 20-cm-diameter silicon wafer contains 384 chips, each of which implements 128,000 synapses and up to 512 spiking neurons. This gives a total of around 200,000 neurons and 49 million synapses per wafer. VLSI models operate considerably faster than the biological originals. This allows the emulated neural networks to evolve tens of thousands of times quicker than real time. The project is a European consortium of 13 research groups led by a team at Heidelberg University, Germany. The project started in January 2011 and has funding from the European Union through until the end of 2014.

* May 25, 2012 - New video tour of the neuromorphic hardware shows one artificial spiking neuron triggering the firing of a second neuron.
* Jan 23, 2012 - The fully-assembled wafer-scale system shows its first spikes by the artificial neurons.
* Aug 25, 2011 - Neural network wafers arrive at the lab in Germany, sent from the UMC fabrication plant in Taiwan.
The BrainScaleS hardware is based around wafer-scale integration of neuromorphic processors. The silicon wafers are 20 cm in diameter and contain an array of identical, tightly-connected chips. The circuitry is mixed-signal; that is, it contains a mix of both analog and digital circuits. The simulated neurons themselves are analog, while the synaptic weights and interchip communication are digital. One wafer is built to contain 48 reticles. Each reticle contains 8 HICANN chips (High Input Count Analog Neural Network). This makes a total of 384 identical chips per wafer. A HICANN chip is 5x10 mm<sup>2</sup> in size. Each one contains an ANC (Analog Neural Core), which is the central functional block, plus supporting circuitry. Each HICANN implements 128,000 synapses and 512 membrane circuits. These can be grouped together to form simulated neurons. The number of neurons per chip depends on how many synapses are configured per neuron. For the maximum of 16,000 pre-synaptic inputs per neuron, 8 neurons are possible per chip. For the maximum of 512 neurons per chip, there can only be 256 synapses per neuron. Thus, per wafer there is a total of 49,152,000 synapses, or up to 196,608 neurons. This assumes that every chip on the wafer is flawless and functional, which will not necessarily always be the case.

The wafer is supported on an aluminum plate which also serves as a heat sink. A multi-layer printed circuit board (PCB) is placed on top of the wafer and serves as the input/output interface to the neural circuitry. Larger systems can be built by interconnecting several wafer modules. The circuitry implements time-continuous leaky integrate-and-fire neurons with conductance-based synapses. Neural networks can be created with both short-term and long-term plasticity mechanisms. Because of the timescales involved in the chip operation, the neural networks can be evolved thousands of times faster than their real-time biological counterparts.
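The per-wafer totals stated above follow directly from the per-chip figures:

```python
# Checking the BrainScaleS per-wafer totals quoted above.
reticles_per_wafer = 48
chips_per_reticle = 8
synapses_per_chip = 128_000
membrane_circuits_per_chip = 512

chips = reticles_per_wafer * chips_per_reticle     # 384 HICANN chips
synapses = chips * synapses_per_chip               # 49,152,000 synapses
max_neurons = chips * membrane_circuits_per_chip   # 196,608 neurons
inputs_at_8_neurons = synapses_per_chip // 8       # 16,000 inputs per neuron
print(chips, synapses, max_neurons, inputs_at_8_neurons)
# 384 49152000 196608 16000
```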
Altogether, the BrainScaleS architecture shows promise for studying Hebbian learning, STDP, and cortical dynamics. The neuromorphic hardware was designed at the universities in Heidelberg and Dresden. The fabrication was done by UMC in Taiwan.

Supercomputers are used to perform simulations of large-scale neural networks. The aim is to develop mathematical models of such networks. These models will then be used to design the neuromorphic hardware. The simulation work is led by Professor Markus Diesmann of the Computational Neurophysics group at the Jülich Research Center in the town of Jülich, Germany. The simulations are run on the JUGENE supercomputer, a Blue Gene/P system installed at Jülich. As of May 2011 this is ranked the 13th-fastest supercomputer in the world. It has 294,912 processor cores and a performance of around 1 petaflops. The simulations are used to test mathematical models of neural circuits. The software used is NEST (NEural Simulation Tool). This simulates networks of point neurons or neurons with a small number of compartments. Although very large-scale networks have been previously investigated, e.g. by Izhikevich, the underlying simulation technologies have not been described in sufficient detail to be reproducible by other research groups. Recent work optimising the memory consumption of NEST showed that a network of 59 million neurons, with 10,000 synapses per neuron, can be distributed over all 294,912 cores of JUGENE. Networks of 100 million neurons and a trillion synapses are also theoretically realizable, either by increasing the number of cores or by reducing the per-neuron overhead. This is still about three orders of magnitude away from the human brain, however, which has around 86 billion neurons and 1,000 trillion synapses.
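The scale of that NEST benchmark, and the remaining gap to a human brain, is easy to quantify:

```python
# Scale of the 59-million-neuron NEST benchmark described above.
neurons = 59_000_000
synapses_per_neuron = 10_000
cores = 294_912                                   # all of JUGENE

total_synapses = neurons * synapses_per_neuron    # 5.9e11 synapses
synapses_per_core = total_synapses / cores        # ~2 million per core
human_synapses = 10**15                           # 1,000 trillion
print(round(human_synapses / total_synapses))     # 1695, i.e. ~3 orders of magnitude
```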
Paper published January 2012: ''Meeting the memory challenges of brain-scale network simulation''.

The JUGENE supercomputer is scheduled for decommissioning on 31 July 2012 and will be replaced by a Blue Gene/Q system called JUQUEEN. It will have 131,072 compute cores and a peak performance of 1.6 petaflops. Each core is an IBM PowerPC A2 running at 1.6 GHz.

BrainScaleS is funded by the European Union. It received €8.5 million initially, plus €700,000 in an extension. The funding comes from the Brain ICT program, which in turn is part of the Seventh Framework Programme (FP7). The project is set to run from January 1, 2011 until December 31, 2014.

== Simulation Software ==

=== Levels of Abstraction ===

The following table depicts the various levels of organization and scale at which the brain can be emulated, with increasing accuracy, increasing scanning difficulty, and increasing computational complexity. However, the amount of understanding required is reduced the further down we go: the first level of abstraction requires complete, high-level understanding of the ''mind''. Not the brain, but the abstract processes of cognition. The lowest level can be done with off-the-shelf software (if you have a couple thousand dollars to buy a copy of [[Software#Gaussian|Gaussian]]), but requires hardware beyond Greg Egan's imagination and scanning at the level of single atoms.

{| border="1" align="center" style="text-align:center;" class=wikitable
| '''Level''' || '''Kind''' || '''Description'''
|-
| 1 || Abstract model of the mind || align="left" | "Classic AI", high-level representations of information and information processing. Built from the top down, knowing the person's personality, ideas, etc., or using species-generic characteristics.
|-
| 2 || Brain region connectivity || align="left" | Each area represents a functional module, connected to others according to an abstract, species-generic connectome.
|- | 3 || Analog network population model || align="left" | The population of neurons and their connectivity. Activity and states of neurons are represented as time-averages. This is similar to connectionist models using artificial neural networks, rate-model neural simulations and cascade models. |- style="color:black; background-color:#ffffcc;" | 4 || Spiking neural network || align="left" | As above, plus firing properties, firing state and dynamical synaptic states. Integrate-and-fire models, reduced compartment models (but also some minicolumn models<ref>Johansson C., Lansner A. "Towards cortex sized artificial neural systems". ''Neural Networks'', 20: 48‐61, 2007.</ref>), and the Izhikevich model. |- style="color:black; background-color:#ffffcc;" | 5 || Electrophysiology || align="left" | As above, plus membrane states (ion channel types, properties, distribution, states), ion concentrations, currents, voltages, modulation states. Multi-compartment model simulations only. |- style="color:black; background-color:#ffffcc;" | 6 || Metabolome || align="left" | As above, plus concentrations of metabolites and neurotransmitters in compartments. |- | 7 || Proteome || align="left" | As above, plus concentrations of proteins and gene expression levels. |- | 8 || States of protein complexes || align="left" | As above, plus protein quaternary structures. |- | 9 || Distribution of complexes || align="left" | As above, plus "locome" information and internal cellular geometry. |- | 10 || Molecular dynamics || align="left" | As above, plus molecular coordinates and molecular-level scanning. |- | 11 || Quantum chemistry || align="left" | Quantum interactions, orbitals. Requires a '''complete .pdb of the entire brain''', besides being completely computationally intractable. |} === Analysis === ==== Geometric Adjustment ==== Various methods are being developed to automatically correct the image stacks so that they match as well as possible. 
The simplest method is finding the combination of translation, scaling and rotation that works best; however, this runs the risk of over-matching. Human and rat brain stacks have been corrected with good results using an elastic model of the distortion<ref>Schmitt O, Modersitzki J, Heldmann S, Wirtz S, and Fischer B. "Image registration of sectioned brains". ''International Journal of Computer Vision'', 73: 5‐39, 2007.</ref>. ==== Noise Removal ==== Noise removal is one of the oldest problems in the image-processing side of computer science, and thus has an extensive literature and a strong research interest. In the case of brain scanning, the kinds of noise imparted by the scanning method or by the tissue itself are known, which makes removal easier. See<ref>Mayerich D, McCormick BH, and Keyser J. "Noise and artifact removal in knife‐edge scanning microscopy". In Piscataway N. (Ed.), ''Proceedings of 2007 IEEE international symposium on biomedical imaging: From nano to macro'', IEEE Press, 2007.</ref> for an example of light variations and knife chatter being removed from KESM data. ==== Data Interpolation ==== [[File:artifact.jpg|thumb|left|Artifacts in electron micrographs cause loss of data. The missing neurites have to be interpolated from information in the surrounding tissue. (From the [[Nervous System State Vectors & Scan Data#Denk-Horstmann|Denk-Horstmann]] dataset, layer 49).]] Scanning a volume as large as the brain is likely to produce large volumes where data is lost (for example, the KESM suffers data loss of up to 5 µm in width between different columns<ref>Kwon J, Mayerich D, Choe Y, McCormick BH. "Automated lateral sectioning for knife‐edge scanning microscopy". In ''IEEE international symposium on biomedical imaging: From nano to macro'', 2008.</ref>). In a sufficiently small case, surrounding data may be used to probabilistically interpolate the brain structure in the lost areas. 
In a large enough volume, however, interpolation is not sufficient, and one must generate a plausible brain structure to fill the lost volume, using knowledge of the surrounding structure (for example, most neurons in the cerebral cortex are pyramidal, so any structure generated to fill a lost cortical volume should match the cell-type statistics of the surrounding tissue). This is a high-priority issue, since lost or poorly interpolated data may cause mistracing and an inexact emulation. ==== Cell tracing ==== Automated tracing of neurons imaged using confocal microscopy has been attempted using a variety of methods. Even if the scanning method eventually used takes a different approach, it seems likely that knowledge gained from these reconstruction methods will be useful. One approach is to enhance edges and find the optimal joining of edge pixels/voxels to detect the contours of objects. Another is skeletonization. For example, (Urban, O'Malley et al., 2006) thresholded neuron images (after image processing to remove noise and artefacts), extracting the medial axis tree. (Dima, Scholz et al., 2002) employed a 3D wavelet transform to perform a multiscale validation of dendrite boundaries, in turn producing an estimate of a skeleton. A third approach is exploratory algorithms, where the algorithm starts at a point and uses image coherency to trace the cell from there. This avoids having to process all voxels, but risks losing parts of the neuron if the images are degraded or unclear. (Al‐Kofahi, Lasek et al., 2002) use directional kernels acting on the intensity data to follow cylindrical objects. (Mayerich and Keyser, 2008) use a similar method for KESM data, accelerating the kernel calculation with graphics hardware. (Uehara, Colbert et al., 2004) calculate the probability of each voxel belonging to a cylindrical structure, and then propagate dendrite paths through it. One weakness of these methods is that they assume cylindrical shapes of dendrites and the absence of adjoining structures (such as dendritic spines). 
By using support-vector machines trained on real data, a more robust reconstruction can be achieved (Santamaría‐Pang, Bildea et al., 2006). Overall, the tracing of branching tubular structures is a major interest in medical computing. A survey of vessel extraction techniques listed 14 major approaches, with several examples of each (Kirbas and Quek, 2004). The success of different methods is modality-dependent. [[File:Thomas RSI paper neurites.jpg|thumb|3D neuron structure traced from a stack of electron micrographs.]] [[File:Eyewire screenshot.jpg|thumb|[https://eyewire.org/ Eyewire] is an online game where players trace neurites alongside an AI.]] ==== Synapse Identification ==== In electron micrographs, synapses are currently recognized using the criteria that within a structure there are synaptic vesicles adjacent to a presynaptic density, a synaptic density with electron-dense material in the cleft, and densities on the cytoplasmic faces of the pre- and postsynaptic membranes (Colonnier, 1981; Peters and Palay, 1996). One of the major unresolved issues for WBE is whether it is possible to identify the functional characteristics of synapses, in particular synaptic strength and neurotransmitter content, from their morphology. In general, cortical synapses tend to be either asymmetrical "type I" synapses (75‐95%) or symmetrical "type II" synapses (5‐25%), based on having a prominent or thin postsynaptic density. Type II synapses appear to be inhibitory, while type I synapses are mainly excitatory (though there are exceptions) (Peters and Palay, 1996). This allows at least some inference of function from morphology. The shape and type of vesicles may also provide clues about function. 
Small, clear vesicles appear to mainly contain small-molecule neurotransmitters; large vesicles (60 nm diameter) with dense cores appear to contain noradrenaline, dopamine or 5‐HT; and large vesicles (up to 100 nm) with 50‐70 nm dense cores contain neuropeptides (Hokfelt, Broberger et al., 2000; Salio, Lossi et al., 2006). Unfortunately, there does not appear to be any further distinctiveness of vesicle morphology to signal neurotransmitter type. ==== Cell Type Identification ==== Distinguishing neurons from glia and identifying their functional type requires further advances in image recognition. The definition of neuron types is debated, as is the number of types. There might be as many as 10,000 types, generated through an interplay of genetic, posttranscriptional, epigenetic, and environmental interactions (Muotri and Gage, 2006). There are some 30+ named neuron types, mostly categorized based on chemistry and morphology (e.g. shape, the presence of synaptic spines, whether they target somata or distal dendrites). Distinguishing morphologically different groups appears feasible using geometrical analysis (Jelinek and Fernandez, 1998). In terms of electrophysiology, excitatory neurons are typically classified into regular-spiking, intrinsic bursting, and chattering, while interneurons are classified into fast-spiking, burst-spiking, late-spiking and regular-spiking. However, alternative classifications exist. (Gupta, Wang et al., 2000) examined neocortical inhibitory neurons and found three different kinds of GABAergic synapses, three main electrophysiological classes divided into eight subclasses, and five anatomical classes, producing 15+ observed combinations. Examining the subgroup of somatostatin-expressing inhibitory neurons produced three distinct groups in terms of layer location and electrophysiology (Ma, Hu et al., 2006), with apparently different functions. 
* The morphology and electrophysiology of inhibitory neurons in the 2nd and 3rd layers of the prefrontal cortex also indicate the existence of different clustered types. Overall, it appears that there exist distinct classes of neurons in terms of neurotransmitter, neuropeptide expression, protein expression (e.g. calcium-binding proteins), and overall electrophysiological behaviour. Morphology often shows clustering, but intermediate forms may exist. Similarly, details of electrophysiology may show overlap between classes, but with different population means. Some functional neuron types are readily distinguished from morphology (such as the five types of the cerebellar cortex). A key problem is that while differing morphologies likely imply differing functional properties, the reverse may not be true. Some classes of neurons appear to show a strong link between electrophysiology and morphology (Krimer, Zaitsev et al., 2005) that would enable inference of at least functional type just from geometry. In the case of layer 5 pyramidal cells, some studies have found a link between morphology and firing pattern (Kasper, Larkman et al., 1994; Mason and Larkman, 1990), while others have not (Chang and Luebke, 2007). It is quite possible that different classes are differently identifiable, and that the morphology-function link varies between species. * Unique and identifiable neurons are relatively common in small animals but become less and less common as brain size increases. * Identifiable neurons are present in small animals: they can be distinguished from other neurons within an individual or across individuals (Bullock, 2000). === Models of Neurons === A model of a neuron is an abstract mathematical model that imitates some aspect of the behavior of neurons. Such models are used to predict the outcome of biological processes and to study the nervous system in a more flexible environment (a computer). {| border=1 class=wikitable ! 
colspan=3 style="background-color:#C0C0C0;" | Costs of Neuron Models |- | '''Model''' || '''# of biological features''' || '''FLOPS/ms''' |- | Integrate-and-fire || 3 || 5 |- | Integrate-and-fire with adapt. || 5 || 10 |- | Integrate-and-fire-or-burst || 10 || 13 |- | Resonate-and-fire || 12 || 10 |- | Quadratic integrate-and-fire || 6 || 7 |- | Izhikevich (2003) || 21 || 13 |- | FitzHugh-Nagumo || 11 || 72 |- | Hindmarsh-Rose || 18 || 120 |- | Morris-Lecar || 14 || 600 |- | Wilson || 15 || 180 |- | Hodgkin-Huxley || 19 || 1200 |} [[File:Cost of neuron models.jpg]] '''Note:''' Only the Morris-Lecar and Hodgkin-Huxley models are "biophysically meaningful" in the sense that they actually attempt to model real biophysics; the others only aim for a correct phenomenology of spiking. === Existing Simulators === ==== NEURON ==== The primary software used by the BBP for neural simulations is a package called NEURON. It was developed starting in the 1990s by Michael Hines at Yale University and John Moore at Duke University, and is written in C, C++, and FORTRAN. The software continues to be under active development and, as of July 2012, is at version 7.2. It is free and open-source software: both the code and the binaries are freely available on the website. Michael Hines and the BBP team collaborated in 2005 to port the package to the massively parallel Blue Gene supercomputer. The NEURON Simulation Environment (see http://www.neuron.yale.edu/) is designed for modeling individual neurons and networks of neurons, and is widely used by experimental and theoretical neuroscientists. It provides tools for conveniently building, managing, and using models that are numerically sound and computationally efficient. NEURON is particularly well-suited to problems that are closely linked to experimental data, especially those that involve cells with complex anatomical and biophysical properties. NEURON began in the laboratory of John W. 
Moore at Duke University, where he and Michael Hines started their collaboration to develop simulation software for neuroscience research. It has benefited from judicious revision and selective enhancement, guided by feedback from the growing number of neuroscientists who have used it to incorporate empirically-based modeling into their research strategies. Most papers that report work done with NEURON have addressed the operation and functional consequences of mechanistic models of biological neurons and networks. Readers who wish to see specific examples are encouraged to peruse the online bibliography. Working code for many published NEURON models can be downloaded from ModelDB. * [http://www.neuron.yale.edu/neuron/ Site] * [http://www.scholarpedia.org/article/Neuron_simulation_environment Scholarpedia article] ==== GENESIS ==== * [http://www.genesis-sim.org/GENESIS/ Site] ==== PSICS ==== * [http://www.psics.org/ Site] [[File:PSICS screenshot.jpg|thumb|A screenshot of the Parallel Stochastic Ion Channel Simulator, one of the most detailed levels of neuron modelling achievable today. Each dot in the image is an individual ion channel.]] == Complications == === Spinal Cord === * While the vertebrate spinal cord is traditionally regarded as little more than a bundle of motor and sensory axons together with a central column of stereotypical reflex circuits and pattern generators, there is evidence that its processing may be more complex (Berg, Alaburda et al., 2007) and that learning processes occur among spinal neurons (Crown, Ferguson et al., 2002). The networks responsible for standing and stepping are extremely flexible and unlikely to be hardwired (Cai, Courtine et al., 2006). * This means that emulating just the brain part of the central nervous system will lose much body control that has been learned and resides in the non-scanned cord. 
On the other hand, it is possible that a generic spinal cord network would, when attached to the emulated brain, adapt (requiring only the scanning and emulation of one spinal cord, as well as a way of attaching the spinal emulation to the brain emulation). But even if this is true, the time taken may correspond to rehabilitation timescales of (subjective) months, during which the simulated body would be essentially paralysed. This might not be a major problem for personal identity in mind emulations (since people suffering spinal injuries do not lose personal identity), but it would be a major limitation to their usefulness and might limit the development of animal models for brain emulation. * A similar concern could exist for other peripheral systems such as the retina and the autonomic nervous system ganglia. * The human spinal cord weighs 2.5% of what the brain does and contains around 10<sup>−4</sup> of the number of neurons in the brain (13.5 million neurons). Hence adding the spinal cord to an emulation would add a negligible extra scan and simulation load. === Synaptic Adaptation === * Synapses are usually characterized by their "strength", the size of the postsynaptic potential they produce in response to a given magnitude of incoming excitation. Many (most?) synapses in the CNS also exhibit depression and/or facilitation: a temporary change in release probability caused by repeated activity (Thomson, 2000). These rapid dynamics likely play a role in a variety of brain functions, such as temporal filtering (Fortune and Rose, 2001), auditory processing (Macleod, Horiuchi et al., 2007) and motor control (Nadim and Manor, 2000). These changes occur on timescales longer than neural activity (tens of milliseconds) but shorter than long-term synaptic plasticity (minutes to hours). Adaptation has already been included in numerous computational models. The computational load is usually 1‐3 extra state variables in each synapse. 
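As an illustration of what those extra state variables look like, here is a minimal sketch of short-term depression and facilitation in the style of the Tsodyks-Markram model (the parameter values are arbitrary illustrations, not fitted to any dataset):

```python
import math

# Minimal sketch of short-term synaptic depression/facilitation in the
# style of Tsodyks & Markram. Two extra state variables per synapse:
#   x - fraction of available resources (depression)
#   u - utilization of resources (facilitation)
# All parameter values are illustrative assumptions.

def psp_amplitudes(spike_times, A=1.0, U=0.2, tau_rec=0.8, tau_facil=0.5):
    """Return the synaptic efficacy of each spike in a train (times in seconds)."""
    x, u, last = 1.0, U, None
    out = []
    for t in spike_times:
        if last is not None:
            dt = t - last
            x = 1.0 - (1.0 - x) * math.exp(-dt / tau_rec)   # resources recover toward 1
            u = U + (u - U) * math.exp(-dt / tau_facil)     # utilization decays toward U
        u = u + U * (1.0 - u)       # facilitation: each spike raises utilization
        out.append(A * u * x)       # postsynaptic efficacy for this spike
        x = x * (1.0 - u)           # depression: this spike consumes resources
        last = t
    return out

# A regular 20 Hz train shows net depression under these parameters:
amps = psp_amplitudes([i * 0.05 for i in range(5)])
```

Depending on the parameters, the same two state variables produce depressing or facilitating trains, which is why the text estimates only 1-3 extra variables per synapse.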
=== Unknown Neurotransmitters === * Not all neuromodulators are known. At present about 10 major neurotransmitters and 200+ neuromodulators are known, and the number is increasing. (Thomas, 2006) lists 272 endogenous extracellular neuroactive signal transducers with known receptors, 2 gases, 19 substances with putative or unknown binding sites, and 48 endogenous substances that may or may not be neuroactive transducers (many of these may be more involved in general biochemical signalling than in brain-specific signals). Plotting the year of discovery for different substances (or families of substances) suggests a linear or possibly sigmoidal growth over time (Figure 11). * An upper bound on the number of neuromodulators can be found using genomics. About 800 G-protein coupled receptors can be found in the human genome, of which about half are sensory receptors. Many are "orphans" that lack known ligands, and methods of "deorphanizing" receptors by expressing them and determining what they bind to have been developed. In the mid-1990s about 150 receptors had been paired to 75 transmitters, leaving around 150‐200 orphans in 2003 (Wise, Jupe et al., 2004). At present, 7‐8 receptors are deorphanized each year (von Bohlen und Halbach and Dermietzel, 2006); at this rate all orphans should be adopted within ≈20 years, leading to the discovery of around 50 more transmitters (Civelli, 2005). * Similarly, guanylyl cyclase-coupled receptors (four orphans, (Wedel and Garbers, 1998)), tyrosine kinase-coupled receptors (<<100, (Muller‐Tidow, Schwable et al., 2004)) and cytokine receptors would add a few extra transmitters. * However, there is room for some surprises. Recently it was found that protons are used as a signal in C. elegans rhythmic defecation (Pfeiffer, Johnson et al., 2008), mediated by a Na+/H+ exchanger, and it is not inconceivable that similar mechanisms could exist in the brain. 
Hence the upper bound on the number of transmitters may be set not just by receptors but also by membrane transporter proteins. * For WBE, modelling all modulatory interactions is probably crucial, since we know that neuromodulation has important effects on mood, consciousness, learning and perception. This means not just detecting their existence but creating quantitative models of these interactions, a sizeable challenge for experimental and computational neuroscience. === Unknown Ion Channels === * As with receptors, there are likely unknown ion channels that affect neuron dynamics. * The Ligand Gated Ion Channel Database currently contains 554 entries, with 71 designated as channel subunits from Homo sapiens (EMBL‐EBI, 2008; Donizelli, Djite et al., 2006). Voltage-gated ion channels form a superfamily with at least 143 genes (Yu, Yarov‐Yarovoy et al., 2005). This diversity is increased by multimerization (combinations of different subunits), modifier subunits that do not form channels on their own but affect the function of channels they are incorporated into, accessory proteins, as well as alternate mRNA splicing and post-translational modification (Gutman, Chandy et al., 2005). This would enable at least an order of magnitude more variants. * Ion channel diversity increases the diversity of possible neuron electrophysiology, but not necessarily in a linear manner. See the discussion of inferring electrophysiology from gene transcripts in the interpretation chapter. === Volume Transmission === Surrounding the cells of the brain is the extracellular space, on average 200 Å across and corresponding to 20% of brain volume (Nicholson, 2001). It transports nutrients and buffers ions, but may also enable volume transmission of signalling molecules. * Volume transmission of small molecules appears fairly well established. 
Nitric oxide is hydrophobic and has a low molecular weight, and can hence diffuse relatively freely through membranes: it can reach up to 0.1‐0.2 mm away from a release point under physiological conditions (Malinski, Taha et al., 1993; Schuman and Madison, 1994; Wood and Garthwaite, 1994). While mainly believed to be important for autoregulation of the blood supply, it may also have a role in memory (Ledo, Frade et al., 2004). This might explain how LTP (long-term potentiation) can induce "crosstalk" that reduces LTP induction thresholds over a span of 10 μm and ten minutes (Harvey and Svoboda, 2007). * Signal substances such as dopamine exhibit volume transmission (Rice, 2000), and this may matter for the potentiation of nearby synapses during learning: simulations show that a single synaptic release can be detected up to 20 μm away and with a 100 ms half-life (Cragg, Nicholson et al., 2001). Larger molecules have their relative diffusion speed reduced by the limited geometry of the extracellular space, both in terms of its tortuosity and its anisotropy (Nicholson, 2001). As suggested by Robert Freitas, there may also exist active extracellular transport modes. Diffusion rates are also affected by local flow of the CSF and can differ from region to region (Fenstermacher and Kaye, 1988); if this is relevant, then local diffusion and flow measurements may be needed to develop at least a general brain diffusion model. The geometric part of such data could be gained relatively easily from the high-resolution 3D scans needed for other WBE subproblems. * Rapid and broad volume transmission such as that from nitric oxide can be simulated using a relatively coarse spatiotemporal grid size, while local transmission requires a grid with a spatial scale close to the neural scale if diffusion is severely hindered. 
* For constraining brain emulation it might be useful to analyse the expected diffusion and detection distances of the ≈200 known chemical signalling molecules based on their molecular weight, diffusion constant and uptake (for different local neural geometries and source/sink distributions). This would provide information on the diffusion times that constrain the diffusion part of the emulation, and possibly show which chemical species need to be spatially modelled. === Neurogenesis === * Recent results show that neurogenesis persists in some brain regions in adulthood, and might have nontrivial functional consequences (Saxe, Malleret et al., 2007). During neurite outgrowth, and possibly afterwards, cell adhesion proteins can affect gene expression and possibly neuron function by affecting second-messenger systems and calcium levels (Crossin and Krushel, 2000). However, neurogenesis is mainly confined to discrete regions of the brain and does not occur to a great extent in the adult neocortex (Bhardwaj, Curtis et al., 2006). * Since neurogenesis occurs on fairly slow timescales (> 1 week) compared to brain activity and normal plasticity, it could probably be ignored in brain emulation if the goal is an emulation that is intended to function faithfully for only a few days and not to exhibit truly long-term memory consolidation or adaptation. * A related issue is the remodelling of dendrites and synapses. Over the span of months, dendrites can grow, retract and add new branch tips in a cell type-specific manner (Lee, Huang et al., 2006). Similarly, synaptic spines in the adult brain can change within hours to days, although the majority remain stable over multi-month timespans (Grutzendler, Kasthuri et al., 2002; Holtmaat, Trachtenberg et al., 2005; Zuo, Lin et al., 2005). Even if neurogenesis is ignored and the emulation is of an adult brain, it is likely that such remodelling is important to learning and adaptation. 
* Simulating stem cell proliferation would require data structures representing different cells and their differentiation status, data on what triggers neurogenesis, and models allowing for the gradual integration of the cells into the network. Such a simulation would involve modelling the geometry and mechanics of cells, possibly even tissue differentiation. Dendritic and synaptic remodelling would also require a geometry and mechanics model. While technically involved, and requiring at least a geometry model for each dendritic compartment, the computational demands appear small compared to those of neural activity. === Chemical Environment === === Neuroglia === * Glia cells have traditionally been regarded as merely supporting actors to the neurons, but recent results suggest that they may play a fairly active role in neural activity. Besides the important role of myelination in increasing neural transmission speed, at the very least they have strong effects on the local chemical environment of the extracellular space surrounding neurons and synapses. * Glial cells exhibit calcium waves that spread along glial networks and affect nearby neurons (Newman and Zahs, 1998). They can both excite and inhibit nearby neurons through neurotransmitters (Kozlov, Angulo et al., 2006). Conversely, the calcium concentration of glial cells is affected by the presence of specific neuromodulators (Perea and Araque, 2005). This suggests that the glial cells act as an information-processing network integrated with the neurons (Fellin and Carmignoto, 2004). One role could be in regulating local energy and oxygen supply. * If glial processing turns out to be significant and fine-grained, brain emulation would have to emulate the glia cells in the same way as neurons, increasing the storage demands by at least one order of magnitude. 
However, the time constants for glial calcium dynamics are generally far slower than the dynamics of action potentials (on the order of seconds or more), suggesting that the time resolution would not have to be as fine, and so the computational demands would increase far less steeply. === Loss of Instantaneous State === Every realistic method of scanning brain tissue at a decent resolution deals only with structure, not ongoing activity. Whatever information is stored as a pattern of brain activity rather than structure (such as working memory) will be destroyed by the process. Information stored in the instantaneous state of neurons (ion concentrations across membranes, synaptic vesicle depletion, and neurotransmitters in motion) would be lost. The most likely consequence would be memory loss of some amount of time prior to the scanning. Cases where people have woken up from long periods of electrocerebral silence suggest that instantaneous brain activity is not required for the long-term maintenance of personal identity<ref>Elixson, 1991</ref>. === Summary === {| border=1 class=wikitable ! colspan=3 | '''Summary''' |- | style="background-color:#C0C0C0;" | Feature || style="background-color:#C0C0C0;" | Likelihood of necessity for WBE || style="background-color:#C0C0C0;" | Implementation Problems |- | Spinal cord || Likely || Minor. Would require scanning some extra tissue. |- | Synaptic adaptation || Very likely || Minor. Introduces extra state variables and parameters that need to be set. |- | Currently unknown neurotransmitters and neuromodulators || Very likely || Minor. Similar to known transmitters and modulators. |- | Currently unknown ion channels || Very likely || Minor. Similar to known ion channels. |- | Volume transmission || Somewhat likely || Medium. Requires diffusion models and microscale geometry. |- | Body chemical environment || Somewhat likely || Medium. Requires metabolomic models and data. 
|- | Neurogenesis and remodelling || Somewhat likely || Medium. Requires cell mechanics and growth models. |- | Glia cells || Possible || Minor. Would require more simulation compartments, but likely running on a slower timescale. |- | Ephaptic effects || Possible || Minor. Would require more simulation compartments, but likely running on a slower timescale. |- | Dynamical state || Very unlikely || Profound. Would preclude most proposed scanning methods. |- | Quantum computation || Very unlikely || Profound. Would preclude currently conceived scanning methods and would require quantum computing. |- | Analog computation || Very unlikely || Profound. Would require analog computer hardware. |- | True randomness || Very unlikely || Medium to profound, depending on whether merely 'true' random noise or 'hidden variables' are needed. |} == Sensory/Motor Neuron Map == The link between the body and the brain is the spine, the most distinguishable part of the peripheral nervous system. The picture on the left shows a spinal nerve: the nerve fibers divide into two branches, or roots. The dorsal root carries sensory axons, while the ventral root carries motor axons: one is a receiver, and the other is a sender. Charles Bell, in 1810, proved this by performing tests on animals, cutting axon after axon and observing the resulting paralysis or loss of sensation. This arrangement is species-generic '''?REF''', so it may only be necessary to trace a single person's axons to map the sensory/motor axons all the way up the spine into the brain, where the arrangement becomes more specialized for the individual. In any case, the sensory and motor neurons are just more entries in the global table of neurons, but once they have been matched to their corresponding axons in the PNS (that is, once it is known exactly where they send signals to or receive signals from), one forms a table, a map, with a set of pointers to those neurons, categorized, for example, by region. 
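Such a map could be sketched as a simple lookup structure. All of the names, IDs, and the region-path scheme below are hypothetical illustrations, not an established format:

```python
# Hypothetical sketch of a sensory/motor neuron map: a table of pointers
# from classified body regions to neuron IDs in the emulation's global
# neuron table. Names, IDs, and the region-path scheme are illustrative
# assumptions only.

from dataclasses import dataclass

@dataclass(frozen=True)
class MappedNeuron:
    neuron_id: int        # index into the global neuron table
    kind: str             # "motor" or "sensory"
    region: str           # e.g. "left_arm/left_hand/index_finger"

class SensorimotorMap:
    def __init__(self):
        self._by_region = {}

    def register(self, n: MappedNeuron):
        self._by_region.setdefault(n.region, []).append(n)

    def motor_neurons(self, region: str):
        """All motor neurons mapped to a given body region."""
        return [n for n in self._by_region.get(region, []) if n.kind == "motor"]

m = SensorimotorMap()
m.register(MappedNeuron(812_004, "motor", "left_arm/left_hand/index_finger"))
m.register(MappedNeuron(812_005, "sensory", "left_arm/left_hand/index_finger"))
```

The point of such a structure is exactly the abstraction described in the text: body-model code can address "the motor neurons of this finger" without knowing anything about the underlying neural simulation.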
For example, one particular neuron may be classified as "Motor: second phalanx muscle, index finger, left hand, left arm". <gallery> File:Virtual emulation system.png|The system for a completely virtual emulation. File:Embodied emulation system.jpg|The system for an embodied emulation. </gallery> This map links the sensory/motor neurons to their corresponding (virtual) muscles and sensors. It is a simple way of abstracting away motor control, by providing simple, orderly access to the neurons responsible for it. === The Body Model === The virtual body can also be considered a means of abstraction: instead of mapping the sensory/motor neuron map directly to a telepresence robot or to a virtual body, one could just pipe information to and from the individual virtual muscles and virtual sensors. For example, consider a virtual avatar with points where collision checking is done. When a collision is detected, a pulse (possibly encoding the force vector) is piped to the corresponding sensor, which sends it to its corresponding neuron. The same is done with muscles, but in reverse: the nerve impulse is translated by the map from a pure impulse to, say, stress and force vectors in the virtual muscle. The muscle can then pipe these values to a software object that turns them into motion of a limb in the avatar. The body simulation translates between neural signals and the environment, as well as maintaining a model of body state as it affects the brain emulation. How detailed the body simulation needs to be depends on the goal: an "adequate" simulation produces enough of the right kind of information for the emulation to function and act, while a convincing simulation is nearly or wholly indistinguishable from the "feel" of the original body. A number of relatively simple biomechanical simulations of bodies connected to simulated nervous systems have been created to study locomotion. (Suzuki, Goto et al., 2005) simulated the C. 
elegans body as a multi‐joint rigid link where the joints were controlled by motor neurons in a simulated motor control network. Örjan Ekeberg has simulated locomotion in the lamprey (Ekeberg and Grillner, 1999), stick insects (Ekeberg, Blümel et al., 2004), and the hind legs of the cat (Ekeberg and Pearson, 2005), where a rigid skeleton is moved by muscles modeled either as springs contracting linearly with neural signals or, in the case of the cat, as a model fitting observed data relating neural stimulation, length, and velocity with contraction force (Brown, Scott et al., 1996). These models also include sensory feedback from stretch receptors, enabling movements to adapt to environmental forces: locomotion involves an information loop between neural activity, motor response, body dynamics, and sensory feedback (Pearson, Ekeberg et al., 2006). Today, biomechanical modelling software enables fairly detailed models of muscles, the skeleton, and the joints, enabling calculation of forces, torques, and interaction with a simulated environment (Biomechanics Research Group Inc, 2005). Such models tend to simplify muscles as lines and make use of pre‐recorded movements or tensions to generate the kinematics. A detailed mechanical model of human walking has been constructed with 23 degrees of freedom driven by 54 muscles; however, it was not controlled by a neural network but rather used to find an energy‐optimizing gait (Anderson and Pandy, 2001). A state‐of‐the‐art model involving 200 rigid bones with over 300 degrees of freedom, driven by muscular actuators with excitation‐contraction dynamics and some neural control, has been developed for modelling human body motion in a dynamic environment, e.g. for ergonomics testing (Ivancevic and Beagley, 2004). This model runs on a normal workstation, suggesting that rigid-body simulation is not a computationally hard problem in comparison to WBE.
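The spring-type muscle model mentioned above (a spring contracting linearly with neural signals) can be sketched in a few lines. This is a toy illustration, not Ekeberg's actual formulation, and all parameter values are invented:

```python
from dataclasses import dataclass

@dataclass
class SpringMuscle:
    """Toy muscle: a damped spring whose rest length shortens
    linearly with neural activation (hypothetical parameters)."""
    k: float = 50.0        # stiffness (N/m)
    c: float = 5.0         # damping (N*s/m)
    l0: float = 0.10       # rest length at zero activation (m)
    shorten: float = 0.03  # maximum activation-driven shortening (m)

    def force(self, length: float, velocity: float, activation: float) -> float:
        # Activation in [0, 1] pulls the rest length in; the spring
        # then generates contractile force against the stretch.
        a = min(max(activation, 0.0), 1.0)
        rest = self.l0 - self.shorten * a
        return self.k * (length - rest) + self.c * velocity

m = SpringMuscle()
passive = m.force(length=0.10, velocity=0.0, activation=0.0)  # 0 N at rest
active = m.force(length=0.10, velocity=0.0, activation=1.0)   # ~1.5 N
```

A simulated motor neuron's firing rate would map onto the `activation` input, closing the loop between the neural model and the rigid-body skeleton.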
Other biomechanical models are being explored for assessing musculoskeletal function in humans (Fernandez and Pandy, 2006), and can be validated or individualized using MRI data (Arnold, Salinas et al., 2000) or EMG (Lloyd and Besier, 2003). It is expected that near‐future models will be based on volumetric muscle and bone models derived from MRI scanning (Blemker, Asakawa et al., 2007; Blemker and Delp, 2005), as well as on the construction of topological models (Magnenat‐Thalmann and Cordier, 2000). There are also various simulations of soft tissue (Benham, Wright et al., 2001), breathing (Zordan, Celly et al., 2004), and soft-tissue deformation for surgery simulation (Cotin, Delingette et al., 1999). Another source of body models comes from computer graphics, where much effort has gone into rendering realistic characters, including modelling muscles, hair, and skin. The emphasis has been on realistic appearance rather than realistic physics (Scheepers, Parent et al., 1997), but increasingly the models are becoming biophysically realistic and overlapping with biophysics (Chen and Zeltzer, 1992; Yucesoy, Koopman et al., 2002). For example, 30 contact/collision-coupled muscles in the upper limb, with fascia and tendons, were generated from the Visible Human dataset and then simulated using a finite volume method; this simulation (using one million mesh tetrahedra) ran at a rate of 240 seconds per frame on a single 3.06 GHz Xeon CPU (on the order of a few GFLOPS) (Teran, Sifakis et al., 2005). Scaling this up 20 times to encompass ≈600 muscles implies a computational cost on the order of a hundred TFLOPS for a complete body simulation. Physiological models are increasingly used in medicine for education, research, and patient evaluation. Relatively simple models can accurately simulate blood oxygenation (Hardman, Bedforth et al., 1998). For a body simulation this might be enough to provide the right feedback between exertion and brain state.
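The muscle-scaling estimate above can be checked with back-of-envelope arithmetic. The sustained throughput of the Xeon (here assumed to be ~3 GFLOPS) and the 30 Hz real-time target are assumptions layered on the cited figures:

```python
# Back-of-envelope check of the whole-body muscle simulation cost.
cpu_gflops = 3.0        # assumed sustained throughput of the Xeon (GFLOPS)
secs_per_frame = 240.0  # reported simulation time per frame (Teran et al.)
muscle_scale = 20       # 30 muscles -> ~600 muscles for a full body
frames_per_sec = 30     # target real-time update rate

flop_per_frame = cpu_gflops * 1e9 * secs_per_frame       # ~7.2e11 FLOP/frame
realtime_flops = flop_per_frame * muscle_scale * frames_per_sec
print(f"{realtime_flops / 1e12:.0f} TFLOPS")             # prints "432 TFLOPS"
```

Under these assumptions the result lands at a few hundred TFLOPS, consistent with the "order of a hundred TFLOPS" figure quoted above.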
Similarly, simple nutrient and hormone models could be used insofar as a realistic response to hunger and eating is desired. === Senses === <blockquote>See: [[Physical Enhancement#Sensory Augmentation|Sensory Augmentation]]</blockquote> ==== Vision ==== Visual photorealism has been sought in computer graphics for about 30 years, and this appears to be a fairly mature area, at least for static images and scenes. Much effort is currently going into such technology for use in computer games and movies. (McGuigan, 2006) proposes a “graphics Turing test” and estimates that for 30 Hz interactive visual updates, 518.4‐1036.8 TFLOPS would be enough for Monte Carlo global illumination. This might actually be an overestimate, since he assumes generation of complete pictures. Generating only the signal needed for the retinal receptors (with higher resolution for the fovea than the periphery) could presumably reduce the demands. Similarly, more efficient implementations of the illumination model (or a cheaper one) would also reduce demands significantly. ==== Hearing ==== The full acoustic field can be simulated over the frequency range of human hearing by solving the differential equations for air vibration (Garriga, Spa et al., 2005). While accurate, this method has a computational cost that scales with the volume simulated, up to 16 TFLOPS for a 2×2×2 m room. This can likely be reduced by the use of adaptive mesh methods, or by ray‐ or beam‐tracing of sound (Funkhouser, Tsingos et al., 2004). Sound generation occurs not only from sound sources such as instruments, loudspeakers, and people, but also from normal interactions between objects in the environment. By simulating surface vibrations, realistic sounds can be generated as objects collide and vibrate. A basic model with N surface nodes requires 0.5292 N GFLOPS, but this can be significantly reduced by taking perceptual shortcuts (Raghuvanshi and Lin, 2006; Raghuvanshi and Lin, 2007).
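The two acoustic cost figures above can be turned into simple estimators. The linear scaling of room cost with volume is an assumption (the source only gives the single 16 TFLOPS data point for an 8 m³ room):

```python
def room_acoustics_tflops(volume_m3: float) -> float:
    """Full-field acoustic simulation cost, assuming the cited
    16 TFLOPS for a 2x2x2 m (8 m^3) room scales linearly with
    simulated volume (an assumption, not from the source)."""
    return 16.0 * volume_m3 / 8.0

def surface_sound_gflops(n_nodes: int) -> float:
    """Basic surface-vibration sound model: 0.5292 * N GFLOPS for
    N surface nodes, before perceptual shortcuts are applied."""
    return 0.5292 * n_nodes

room = room_acoustics_tflops(8.0)        # 16.0 TFLOPS for the cited room
obj = surface_sound_gflops(10_000)       # 5292.0 GFLOPS for a 10k-node object
```

Either figure is small next to the neural simulation itself, which is why hearing is usually treated as one of the cheaper senses to provide.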
This form of vibration generation can likely be used to synthesize realistic vibrations for touch. ==== Smell and Taste ==== So far no work has been done on simulated smell and taste in virtual reality, mainly due to the lack of output devices. Some simulations of odorant diffusion have been done in underwater environments (Baird, Johari et al., 1996) and in the human and rat nasal cavity (Keyhani, Scherer et al., 1997; Zhao, Dalton et al., 2006). In general, an odor simulation would involve modelling the diffusion and transport of chemicals through air flow; the relatively low temporal and spatial resolution of human olfaction would likely allow a fairly simple model. A far more involved issue is which odorant molecules to simulate. Humans have 350 active olfactory receptor genes, but we can likely detect more variation due to differential diffusion in the nasal cavity (Shepherd, 2004). Taste relies on only a few receptor types, but the tongue also detects texture, and the placement of the nose forces one to smell objects entering the mouth. The former may require complex simulations of the physics of virtual objects in the case of virtual environments, and pressure/temperature sensors for simulacra. ==== Haptics ==== The haptic senses of touch, proprioception, and balance are crucial for performing skilled actions in real and virtual environments (Robles‐De‐La‐Torre, 2006). Tactile sensation relates both to the forces affecting the skin (and hair) and to how they change as objects or the body are moved. To simulate touch, collision detection is needed to calculate the forces on the skin (and possibly deformations), as well as the vibrations generated when it is moved over a surface or explores it with a hard object (Klatzky, Lederman et al., 2003).
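The touch pipeline described above (collision detection, then forces on simulated skin) can be pictured as a fixed-rate loop. A minimal sketch, assuming a penalty-based contact model and an invented skin stiffness:

```python
def contact_force(penetration: float, stiffness: float = 1000.0) -> float:
    """Penalty-based contact: force grows linearly with penetration
    depth; the stiffness value (N/m) is purely illustrative."""
    return stiffness * max(penetration, 0.0)

def haptic_loop(depths, rate_hz: int = 1000):
    """A 1 kHz haptic update loop: each tick converts a penetration
    depth sample into a force that would drive a simulated
    mechanoreceptor (and, eventually, a sensory neuron)."""
    dt = 1.0 / rate_hz
    return [(i * dt, contact_force(d)) for i, d in enumerate(depths)]

samples = haptic_loop([0.0, 0.001, 0.002])  # three 1 ms ticks
```

The kilohertz rate matters: as noted below, haptic rendering needs far faster updates than vision to feel stable and solid.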
To achieve realistic haptic rendering, updates in the kilohertz range may be necessary (Lin and Otaduy, 2005). In environments with deformable objects, various nonlinearities in response and restitution have to be taken into account (Mahvash and Hayward, 2004). Proprioception, the sense of how far muscles and tendons are stretched (and, by inference, of limb location), is important for maintaining posture and orientation. Unlike the other senses, proprioceptive signals would be generated by the body model internally. Simulated Golgi organs, muscle spindles, and pulmonary stretch receptors would then convert body states into nerve impulses. The balance signals from the inner ear appear relatively simple to simulate, since they depend only on the fluid velocity and pressure in the semicircular canals (which can likely be assumed to be laminar and homogeneous) and on gravity effects on the utricle and saccule. Compared to the other senses, the computational demands are minuscule. Thermoreception could presumably be simulated by giving each object in the virtual environment a temperature, activating thermoreceptors in contact with the object. Nociception (pain) would be simulated by activating the receptors in the presence of excessive forces or temperatures; the ability to experience pain from simulated inflammatory responses may be unnecessary verisimilitude. === The Environment Model === ==== Embodiment ==== ==== Virtual Worlds ==== [[File:broken thing.jpg|thumb|left|[[Second Life]]'s forever-in-beta engine shows how awful it would be to live in a simulation.
Also it's full of weirdos.]] == Map of Technological Capabilities == [[File:WBE_Capabilities_map.png|thumb|center|400px]] '''Driving forces for the development of the technology:''' * <span style="background-color:#33FF99;">Moore's Law</span> * <span style="background-color:#CCFF66;">Commercial</span> * <span style="background-color:#FF9933;">Research</span> * <span style="background-color:#CC3300;">WBE Specific?</span> {| border="1" align="center" style="text-align:center;" class=wikitable ! colspan="3" | Capability || Description || Status |- |rowspan="5" | Scanning || colspan="2" | Preprocessing/fixation || align="left" | Preparing brains appropriately, retaining relevant microstructure and state. || align="left" | See [[Cryonics]], [[Plastination]]; overall fairly good. |- | colspan="2" | Physical handling || align="left" | Methods of manipulating fixed brains and tissue pieces before, during, and after scanning. || align="left" | ATLUM automates slicing, and there has been some work on automating the feeding of the tape into a machine that sprays contrast metals on it and then passes it under an electron microscope. |- | rowspan="3" | Imaging || Volume || align="left" | Capability to scan entire brain volumes in reasonable time and at reasonable expense. || align="left" | Massively parallel ATLUMs and massively parallel scanning electron microscopes. The latter are being developed by the semiconductor industry. |- | Resolution || align="left" | Scanning at enough resolution to enable reconstruction. || align="left" | Electron microscopy has provided sub-nanometer resolution for decades, though microscopes at that resolution may cost well above half a million dollars. |- | Functional information || align="left" | Scanning is able to detect the functionally relevant properties of tissue.
|| align="left" | The software for translating electron micrographs into abstract models is currently in very early stages, capable of tracing the shape of cells, but not much more. |- |rowspan="10"| Translation || rowspan="4" | Image processing || Geometric adjustment || align="left" | Handling distortions due to scanning imperfections. || align="left" | Unlikely to be a major problem (some basic image registration to detect overlap and match the electron micrographs). |- |Data interpolation || align="left" | Handling missing data (using surrounding data to interpolate what should be placed in missing spots). || align="left" | Unknown. |- | Noise removal || align="left" | Improving scan quality. || align="left" | Unlikely to be a major problem. |- | Tracing || align="left" | Detecting structure and processing it into a consistent 3D model of the tissue. || align="left" | Doable right now. Shape tracing is possibly the simplest, cheapest part of the whole process. |- | rowspan="4" | Scan interpretation || Cell type identification || align="left" | Identifying cell types. || align="left" | The software for translating electron micrographs into abstract models is currently in very early stages, capable of tracing the shape of cells, but not much more. |- |Synapse identification || align="left" | Identifying synapses and their connectivity. || align="left" | The software for translating electron micrographs into abstract models is currently in very early stages, capable of tracing the shape of cells, but not much more. |- | Parameter estimation || align="left" | Estimating functionally relevant parameters of cells, synapses, and other entities. || align="left" | The software for translating electron micrographs into abstract models is currently in very early stages, capable of tracing the shape of cells, but not much more. |- | Databasing || align="left" | Storing the resulting inventory in an efficient way. || align="left" | This is essentially a hardware problem.
The scan of a mere nematode produces whole terabytes of electron micrographs. These have to be stored for interpretation (unless one interprets them during the scan), but the abstract model may be much lighter and easier to store. |- |rowspan="2" | Software model of neural system || Mathematical model || align="left" | Model of entities and their behaviour (the simulator itself). || align="left" | Pick one. |- | Efficient implementation || align="left" | A final, fast implementation of the model (for example, neuromorphic hardware or dedicated chips where the algorithms are implemented directly in the hardware). || align="left" | The [[Izhikevich model of spiking neurons|Izhikevich model]], with considerations. (Actually, Izhikevich already analyzed implementing the model in MEMS.) |- |rowspan="6"| Emulation || colspan="2" | Storage || align="left" | Storage of the original model and current state (and whatever snapshots may be made). || align="left" | Again, a hardware problem. |- | colspan="2" | Bandwidth || align="left" | Efficient inter‐processor communication (long-range data buses to implement long-range axons). || align="left" | Unsure. |- | colspan="2" | CPU || align="left" | Processor power to run the simulation. Moore's Law cannot be expected to continue beyond the first half of this century; processors cannot improve exponentially forever, so alternative computing has to be used: instead of running the simulation as a program on a universal computer (an ordinary computer), it should be done with neuromorphic hardware, where the model is implemented directly as hardware: one chip, one neuron (or one compartment, or one minicolumn, as the case may be), mounted on some kind of routing system. || align="left" | Not even close...
Well, technically we can already emulate an entire brain of Izhikevich neurons, and the original emulation was done on a Beowulf cluster with 27 processors, so a bigger machine could probably bring the slowdown factor into the acceptable range. |- | colspan="2" | Body model || align="left" | Simulation of a body enabling interaction with a virtual environment or through a robot. || align="left" | Unlikely to be a major problem. |- | colspan="2" | Environment model || align="left" | Virtual environment for the virtual body. || align="left" | Unlikely to be a major problem if it's visual-only. A collision checker may provide some basic pressure input to sensory neurons. |- | colspan="2" | Exoself || align="left" | A software object that holds the simulation, maps sensory/motor neurons to the body model, ties the model to a virtual body in the virtual environment or to a telepresence robot, and handles communication with the network and the operating system. || align="left" | Once the above are ready, write a wrapper that puts them all together, and you have an Exoself. |}
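The Exoself row in the table can be pictured as a thin orchestration layer over the other components. A toy sketch with invented interfaces (none of these class or method names come from the source; the `_Echo`, `_World`, and `_Body` stand-ins exist only to make the loop runnable):

```python
class Exoself:
    """Hypothetical wrapper (all names illustrative): owns the neural
    simulation, the sensory/motor neuron map, the body model, and the
    environment, and shuttles signals between them every tick."""

    def __init__(self, brain, neuron_map, body, environment):
        self.brain = brain
        self.neuron_map = neuron_map  # {"sensory": {...}, "motor": {...}}
        self.body = body
        self.environment = environment

    def step(self, dt):
        # 1. environment -> virtual sensors -> sensory neurons
        for sensor, value in self.environment.read(self.body).items():
            self.brain.inject(self.neuron_map["sensory"][sensor], value)
        # 2. advance the neural simulation one tick
        self.brain.advance(dt)
        # 3. motor neurons -> virtual muscles -> environment
        for muscle, neuron in self.neuron_map["motor"].items():
            self.body.actuate(muscle, self.brain.output(neuron))
        self.environment.advance(self.body, dt)

class _Echo:  # minimal brain stand-in: echoes injected values back out
    def __init__(self): self.state = {}
    def inject(self, neuron, value): self.state[neuron] = value
    def advance(self, dt): pass
    def output(self, neuron): return self.state.get(neuron, 0.0)

class _World:  # minimal environment: one skin sensor always touching
    def read(self, body): return {"skin_0": 1.0}
    def advance(self, body, dt): pass

class _Body:  # minimal body: records motor commands
    def __init__(self): self.commands = {}
    def actuate(self, muscle, value): self.commands[muscle] = value

exo = Exoself(
    brain=_Echo(),
    neuron_map={"sensory": {"skin_0": 101}, "motor": {"biceps": 101}},
    body=_Body(),
    environment=_World(),
)
exo.step(0.001)  # the skin pulse ends up as a motor command via neuron 101
```

The real component behind each stub would be the corresponding row of the table: the neural simulator, the sensory/motor map, the body model, and the environment model.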