
We have always been interested in the notion of the consciousness fact, which is, for us, the fact that an individual endowed with a brain can think of something related to his position in the world, right here and right now. What matters is not the continuity, the performance, or the profundity of the thought, but the fact of thinking of something in a knowable manner, something that can be specified from a linguistic or mathematical angle, without it being an automatic, predefined response to a given situation. By analogy with the notion long investigated by philosophers, psychologists, and neurobiologists, we pose the question of artificial consciousness: how can the fact of thinking of something be transposed into the computable domain, so that an artificial system, founded on computational processes, would be able to generate consciousness facts in an observable manner? The system would have intentions, emotions, and ideas about things and events related to itself. It would have to have a body that it could direct and which would constrain it. It would also have to have a history, and intentions to act and, above all, to think. It would have to have knowledge, notably linguistic knowledge. It would have to have emotions, intentions, and finally a certain consciousness of itself. We can call this system, by sheer semantic analogy, an artificial brain. However, we will see that its architecture is quite different from that of living brains. The concern is transposing the effects and the movements; certainly not reproducing the components such as neurons

and glial cells. We should keep in mind principally one characteristic of the process of thinking as it unfolds in a brain: there is a complex movement of neural, biochemical, and electrical activation taking place. This movement is coupled to a similar one, of a different mode, in the nervous system deployed throughout the body. By selective emergence, on reaching a particular configuration, this complex movement generates what we call a thought about something. This thought rapidly leads to motor or language activity and then gives way to the following thought, which may be similar or different. This is the very complex phenomenon that has to be transposed into the computable domain. Hence, we should approach the sudden appearance of thoughts in brains at the level of the complex dynamics of a system building and reconfiguring recurrent, temporized flows. We can transpose this into architectures of computational processes carrying symbolic meaning, and we should make the whole geometrically self-controlled. Two reasonable hypotheses are made for this transposition: (1) an analogy between the geometrical dynamics of the real brain and of the artificial brain; in one, the flows are complex, almost continuous images, while in the other they are dynamical graphs whose deformations are evaluated topologically; (2) a reduction of the combinatorial complexity of the real brain in the computable domain, by working at a symbolic, pre-language level. The basic elements are completely different; they are not of the same scale.
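To make the second hypothesis slightly more concrete, here is a purely illustrative sketch, not the authors' system: a toy dynamical graph that rewires itself as activation flows through it, with its deformation between configurations scored by a simple structural distance. All names and update rules here are hypothetical.

```python
# A toy "dynamical graph": nodes carry activation, edges carry weights.
# The graph rewires itself as activity flows, and the deformation between
# two configurations is scored by a crude topological distance.
# Purely illustrative; names and rules are hypothetical.

def propagate(edges, activity, threshold=0.5):
    """One activation step: each node sums weighted input from active sources."""
    new_activity = {n: 0.0 for n in activity}
    for (src, dst), w in edges.items():
        if activity[src] > threshold:
            new_activity[dst] += w
    return new_activity

def rewire(edges, activity, reinforce=0.1, decay=0.05, threshold=0.5):
    """Edges between co-active nodes strengthen; the rest decay and may vanish."""
    new_edges = {}
    for (src, dst), w in edges.items():
        if activity[src] > threshold and activity[dst] > threshold:
            w += reinforce
        else:
            w -= decay
        if w > 0:
            new_edges[(src, dst)] = w
    return new_edges

def deformation(edges_a, edges_b):
    """Fraction of edges present in one configuration but not the other
    (Jaccard distance on edge sets)."""
    a, b = set(edges_a), set(edges_b)
    return 1 - len(a & b) / len(a | b) if (a | b) else 0.0

activity = {"n1": 1.0, "n2": 0.0, "n3": 0.0}
edges = {("n1", "n2"): 0.8, ("n2", "n3"): 0.6, ("n3", "n1"): 0.4}
for _ in range(5):
    activity = propagate(edges, activity)
    new_edges = rewire(edges, activity)
    print("deformation:", round(deformation(edges, new_edges), 3))
    edges = new_edges
```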

However, once these hypotheses are made, one should not immediately start to develop an architecture that operates its own control from the aspects of its changing geometry. One first needs to ask the proper question about the generation of consciousness facts. A philosopher, M. Heidegger, asked the proper question decades ago: what brings us to think about this thing, right here, right now? The answer to this question, which is quite elaborate, will lead to a choice of system architecture that takes us away from reactive or deductive systems. The system will generate its consciousness facts intentionally, intention as P. Ricoeur understood it. There are no consciousness facts without the intention to think. This settles the question, often considered formidable, of the freedom to think. One thinks of everything according to one's memory and one's intuition of the moment, but only insofar as it is expressible as a thought by the system producing thoughts. Some might see something infinite in this process; that is not our view. A finite set of components whose movements occur in a finite space has only a finite number of states it can occupy. Moreover, since the permanence of the physical real, apprehensible by the senses, is very strong, man's preoccupation with thinking remains quite limited across his civilizations. Let us point out that artificial systems that think artificially will be able to communicate directly at the level of the forms of their ideas, without using a language mediator, and hence could be co-active while being numerous in space. For different reasons, numerous people think that the

path of artificial consciousness investigation should not be taken at all. I feel differently, because discoveries have been the very root of our existence, from fire to the mighty F-16. The mind is a work of art moulded in mystery, and any effort to unlock its doors should be encouraged because, I am sure, its discovery is only going to help us respect the great architect more.

Can you please summarize (in words)?


The brain is fundamentally different from, and complementary to, today's computers. The brain can exhibit awe-inspiring functions of sensation, perception, action, interaction, and cognition. It can deal with ambiguity and interact with real-world, complex environments in a context-dependent fashion. And yet, it consumes less power than a light bulb and occupies less space than a 2-liter bottle of soda. Our long-term mission is to discover and demonstrate the algorithms of the brain and deliver cool, compact cognitive computers that complement today's von Neumann computers and approach mammalian-scale intelligence. We are pursuing a combination of computational neuroscience, supercomputing, and nanotechnology to achieve this vision. Towards this end, we are announcing two major milestones. First, using the Dawn Blue Gene/P supercomputer at Lawrence Livermore National Lab, with 147,456 processors and 144 TB of main memory, we achieved a simulation with 1 billion spiking neurons and 10 trillion individual learning synapses. This is equivalent to 1,000 cognitive computing chips, each with 1 million neurons and 10 billion synapses, and exceeds the scale of cat cerebral cortex. The simulation ran 100 to 1,000 times slower than real-time. Second, we have developed a new algorithm, BlueMatter, that exploits the Blue Gene supercomputing architecture to noninvasively measure and map the connections between all cortical and sub-cortical locations within the human brain using magnetic resonance diffusion-weighted imaging. Mapping the wiring diagram of the brain is crucial to untangling its vast communication network and understanding how it represents and processes information. These milestones will provide a unique workbench for exploring a vast number of hypotheses about the structure and computational dynamics of the brain, and further our quest to build a cool, compact cognitive computing chip.

Why do we need cognitive computing? How could cognitive computing help build a smarter planet?
As the amount of digital data that we create continues to grow massively and the world becomes more instrumented and interconnected, there is a need for new kinds of computing systems imbued with a new intelligence that can spot hard-to-find patterns in vastly varied kinds of data, both digital and sensory; analyze and integrate information in real time in a context-dependent way; and deal with the ambiguity found in complex, real-world environments. Cognitive computing offers the promise of entirely new computing architectures, system designs, and programming paradigms that will meet the needs of the instrumented and interconnected world of tomorrow.

What is the goal of the DARPA SyNAPSE project?


The goal of the DARPA SyNAPSE program is to create new electronics hardware and architecture that can understand, adapt and respond to an informative environment in ways that extend traditional computation to include fundamentally different capabilities found in biological brains.

Who is on your SyNAPSE team?


Stanford University: Brian A. Wandell, H.-S. Philip Wong

Cornell University: Rajit Manohar

Columbia University Medical Center: Stefano Fusi

University of Wisconsin-Madison: Giulio Tononi

University of California-Merced: Christopher Kello

IBM Research: Rajagopal Ananthanarayanan, Leland Chang, Daniel Friedman, Christoph Hagleitner, Bulent Kurdi, Chung Lam, Paul Maglio, Stuart Parkin, Bipin Rajendran, Raghavendra Singh

The Cat is Out of the Bag

What advantages does Blue Gene provide to enable these simulations?


Mammalian-scale simulations place tremendous demands on the memory, processor, and communication capabilities of any computer system. The Blue Gene architecture provides the best match for these resource requirements by supporting hundreds of terabytes of memory and hundreds of thousands of processors. This is augmented with outstanding communication capabilities in terms of bisection and point-to-point bandwidth, very low communication latency, and highly efficient broadcast and reduce networks, some of which have dedicated hardware resources, thus allowing truly parallel exploitation of the processors and their memory.
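For a rough sense of why collective networks matter for this workload, here is a minimal sketch, assuming mpi4py is available, of the kind of per-timestep spike exchange a distributed neural simulator performs; it is not C2's actual communication scheme, and all names are illustrative.

```python
# Minimal sketch of per-timestep spike exchange in a distributed neural
# simulator, using MPI collectives of the kind Blue Gene accelerates in
# hardware. Assumes mpi4py; this is NOT C2's actual communication scheme.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

local_neurons = 1000                     # neurons simulated on this rank
rng = np.random.default_rng(rank)

for t in range(10):                      # simulation timesteps
    # Each rank computes which of its local neurons fired this step.
    fired = (rng.random(local_neurons) < 0.01).astype(np.int32)

    # Allgather the spike bitmaps so every rank can deliver spikes to the
    # synapses it owns; on Blue Gene such collectives can use dedicated
    # broadcast/reduce networks rather than the point-to-point fabric.
    all_fired = np.empty(size * local_neurons, dtype=np.int32)
    comm.Allgather(fired, all_fired)

    # A reduce cheaply computes global statistics, e.g. total spike count.
    total_spikes = comm.allreduce(int(fired.sum()), op=MPI.SUM)
    if rank == 0:
        print(f"step {t}: {total_spikes} spikes globally")
```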

What role do large-scale cortical simulations play in the SyNAPSE project?


Please note that the cat-scale cortical simulation is equivalent to 1,000 cool, compact cognitive computing chips, each with 1 million neurons and 10 billion synapses, and compares very favorably to DARPA's published metrics. The simulations in C2 will help guide the design of features in the SyNAPSE chip and the overall architecture of the hardware. C2 supports customizable components, in which hardware neurons and synapses can be used instead of the default biologically inspired phenomenological neurons and synapses. Thus, C2 enables a functional simulation of the hardware and helps choose between alternative hardware implementations.

Can you place the cat-scale simulation in the context of your past work?
For past work on rat-scale simulations, please see here, and for mouse-scale simulations, please see here.

December 2006: Blue Gene/L at IBM Research - Almaden, with 4,096 CPUs and 1 TB of memory. 40% mouse-scale, with 8 million neurons and 50 billion synapses. 10 times slower than real-time at 1 ms simulation resolution.

April 2007: Blue Gene/L at IBM Research - Watson, with 32,768 CPUs and 8 TB of memory. Rat-scale, with 56 million neurons and 448 billion synapses. 10 times slower than real-time at 1 ms simulation resolution.

March 2009: Blue Gene/P on the KAUST-IBM Shaheen machine, with 32,768 CPUs and 32 TB of memory. 1% of human-scale, with 200 million neurons and 2 trillion synapses. 100 to 1,000 times slower than real-time at 0.1 ms simulation resolution.

SC09 (this announcement): Blue Gene/P Dawn at LLNL, with 147,456 CPUs and 144 TB of memory. Cat-scale, with 1 billion neurons and 10 trillion synapses. 100 to 1,000 times slower than real-time at 0.1 ms simulation resolution. Neuroscience details: neuron dynamics, synapse dynamics, individual learning synapses, biologically realistic thalamocortical connectivity, axonal delays.

Prediction: in 2019, using a supercomputer with 1 Exaflop/s and 4 PB of main memory, a near real-time human-scale simulation may become possible.

Summary: progress in large-scale cortical simulations. Each of the four charts above details recent achievements in the simulation of networks of single-compartment, phenomenological neurons with connectivity based on statistics derived from mammalian cortex. Simulations were run on Blue Gene supercomputers with progressively larger amounts of main memory. The number of synapses in the models varied from 5,485 to 10,000 synapses per neuron, reflecting construction from different sets of biological measurements. First: simulations on a Blue Gene/L supercomputer of a 40% mouse-scale cortical model with 8 million neurons and 52 billion synapses, employing 4,096 processors and 1 TB of main memory. Second: simulations on a Blue Gene/L supercomputer culminating in a rat-scale cortical model with 58 million neurons and 461 billion synapses, using 32,768 processors and 8 TB of main memory. Third: simulations on a Blue Gene/P supercomputer culminating in a one-percent human-scale cortical model with 200 million neurons and 1.97 trillion synapses, employing 32,768 processors and 32 TB of main memory. Fourth: simulations on a Blue Gene/P supercomputer culminating in a cat-scale cortical model with 1.62 billion neurons and 8.61 trillion synapses, using 147,456 processors and 144 TB of main memory. The largest simulations performed on this machine correspond to approximately 4.5% of human cerebral cortex.

When will human-scale simulations become possible?

The figure shows the progress that has been made in supercomputing since the early 90s. At each time point, the green line shows the 500th-fastest supercomputer, the dark blue line the fastest supercomputer, and the light blue line the summed power of the top 500 machines. These lines show a clear trend, which we've extrapolated out 10 years. The IBM team's latest simulation results represent a model about 4.5% the scale of the human cerebral cortex, which was run at 1/83 of real time. The machine used provided 144 TB of memory and 0.5 PFlop/s. Turning to the future, you can see that running human-scale cortical simulations will probably require 4 PB of memory, and running these simulations in real time will require over 1 EFlop/s. If current trends in supercomputing continue, it seems that human-scale simulations will be possible in the not-too-distant future.
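These requirements can be sanity-checked with a back-of-envelope calculation using only the figures quoted in this document; note that the bytes-per-synapse value below is an inferred average from the cat-scale run, not a published constant.

```python
# Back-of-envelope extrapolation from the figures quoted above.
# The bytes-per-synapse value is inferred from the cat-scale run,
# not a published constant.

cat_synapses = 8.61e12          # cat-scale model, synapses
cat_memory_bytes = 144e12       # 144 TB used for that run

bytes_per_synapse = cat_memory_bytes / cat_synapses   # ~16.7 bytes

# The cat-scale model is stated to be ~4.5% of human cerebral cortex.
human_synapses = cat_synapses / 0.045                 # ~1.9e14
human_memory_pb = human_synapses * bytes_per_synapse / 1e15

# Speed: 0.5 PFlop/s ran the model at 1/83 of real time, so real time
# at human scale needs roughly 0.5 * 83 / 0.045 PFlop/s.
human_eflops = 0.5e15 * 83 / 0.045 / 1e18

print(f"memory  ~ {human_memory_pb:.1f} PB")   # ~3.2 PB, near the 4 PB quoted
print(f"compute ~ {human_eflops:.1f} EFlop/s") # ~0.9, near the 1 EFlop/s quoted
```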

What aspects of the brain does the model include?


The model reproduces a number of physiological and anatomical features of the mammalian brain. The key functional elements of the brain, neurons, and the connections between them, called synapses, are simulated using biologically derived models. The neuron models include such key functional features as input integration, spike generation, and firing-rate adaptation, while the simulated synapses reproduce the time- and voltage-dependent dynamics of four major synaptic channel types found in cortex. Furthermore, the synapses are plastic, meaning that the strength of connections between neurons can change according to certain rules, which many neuroscientists believe is crucial to learning and memory formation. At an anatomical level, the model includes sections of cortex, a dense body of connected neurons where much of the brain's high-level processing occurs, as well as the thalamus, an important relay center that mediates communication to and from cortex. Much of the connectivity within the model follows a statistical map derived from the most detailed study to date of the circuitry within the cat cerebral cortex.
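For intuition, here is a minimal sketch of a single-compartment, phenomenological neuron of the general kind described above (input integration, spike generation, firing-rate adaptation), together with a toy spike-timing-dependent plasticity rule. The specific equations and constants are illustrative assumptions, not C2's published parameters.

```python
# Minimal sketch of a single-compartment phenomenological neuron with
# input integration, spike generation, and firing-rate adaptation, plus a
# toy spike-timing-dependent plasticity (STDP) update. Constants and
# equations are illustrative assumptions, not C2's published parameters.
import numpy as np

dt, v_rest, v_thresh, v_reset = 0.1, -65.0, -50.0, -65.0  # ms, mV

def step_neuron(v, adapt, input_current):
    """Leaky integration; the adaptation current grows with each spike,
    slowing the firing rate under sustained drive."""
    v += dt * (-(v - v_rest) / 20.0 + input_current - adapt)
    adapt *= np.exp(-dt / 100.0)          # adaptation decays slowly
    spiked = v >= v_thresh
    if spiked:
        v, adapt = v_reset, adapt + 2.0   # reset and strengthen adaptation
    return v, adapt, spiked

def stdp(weight, dt_pre_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Toy STDP: pre-before-post potentiates, post-before-pre depresses."""
    if dt_pre_post > 0:                   # pre fired before post
        weight += a_plus * np.exp(-dt_pre_post / tau)
    else:
        weight -= a_minus * np.exp(dt_pre_post / tau)
    return max(weight, 0.0)

v, adapt = v_rest, 0.0
for t in range(500):                      # 50 ms of constant drive
    v, adapt, spiked = step_neuron(v, adapt, input_current=16.0)
    if spiked:
        print(f"spike at {t * dt:.1f} ms")
print("potentiated weight:", round(stdp(0.5, 5.0), 4))
```

Running this shows inter-spike intervals lengthening over time, the firing-rate adaptation the answer refers to.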

What do the simulations demonstrate?


We are able to observe activity in our model at many scales, ranging from global electrical activity levels, to activity levels in specific populations, to topographic activity dynamics, to individual neuronal membrane potentials. In these measurements, we have observed the model reproduce activity in cortex as measured by neuroscientists using the corresponding techniques: electroencephalography, local field potential recordings, optical imaging with voltage-sensitive dyes, and intracellular recordings. Specifically, we were able to deliver a stimulus to the model and then watch as it propagated within and between different populations of neurons. We found that this propagation showed a spatiotemporal pattern remarkably similar to what has been observed in experiments with real brains. In other simulations, we also observed oscillations between active and quiet periods, as is often observed in the brain during sleep or quiet waking. In all our simulations, we are able to simultaneously record from billions of individual model components, compared to cutting-edge neuroscience techniques that might allow simultaneous recording from a few hundred brain regions, thus providing us with an unprecedented picture of circuit dynamics.

Can I see the simulator in action?


Yes, if you can download a 150 MB movie. The following is a frame from the movie. An earlier frame showing the input is here, and a later frame is here. To understand the figure and the movie, it is helpful to study Figure 1 in the paper.

Caption: Like the surface of a still lake reacting to the impact of a pebble, the neurons in IBM's cortical simulator C2 respond to stimuli. Resembling a travelling wave, the activity propagates through different cortical layers and cortical regions. The simulator is an indispensable tool that enables researchers to bring static structural brain networks to life, to probe the mystery of cognition, and to pave the path to cool, compact cognitive computing systems. Please note that the simulator is demonstrating how information percolates and propagates. It is NOT learning the IBM logo.

How close is the model to producing high level cognitive function?


Please note that the rat (-scale simulation) does not sniff cheese, and the cat (-scale simulation) does not chase the rat. Up to this point, our efforts have primarily focused on developing the simulator as a tool of scientific discovery that incorporates many neuroscientific details to produce large-scale thalamocortical simulations as a means of studying behavior and dynamics within the brain. While diligent researchers have made tremendous strides in improving our understanding of the brain over the past 100 years, neuroscience has not yet reached the point where it can provide us with a recipe of how to wire up a cognitive system. Our hope is that by incorporating many of the ingredients that neuroscientists think may be important to cognition in the brain, such as a general statistical connectivity pattern and plastic synapses, we may be able to use the model as a tool to help understand how the brain produces cognition.

What do you see on the horizon for this work in thalamocortical simulations?
We are interested in expanding our model both in scale and in the details it incorporates. In terms of scale, as the amount of memory available in cutting-edge supercomputers continues to increase, we foresee that simulations at the scale of monkey cerebral cortex, and eventually human cerebral cortex, will soon be within reach. As supercomputing speed increases, we also expect the speed of our simulations to approach real-time. In terms of the details in our simulations, we are currently working on differentiating our cortical region into specific areas (such as primary visual cortex or motor cortex) and providing the long-range connections that form the circuitry between these areas in the mammalian brain. For this work, we are drawing from many studies describing the structure and input/output patterns of these areas, as well as a study recently performed within IBM that collates a very large number of individual measurements of white matter, the substrate of long-range connectivity within the brain.

How will this affect neuroscience?


Within neuroscience, there is a rich history of using brain simulations as a means of developing models based on experimental observations, testing those models and then using those models to form predictions that can be tested through further experiments. A major limitation of such efforts is computational power, forcing models to make major sacrifices in terms of detail or scale. Through our work, we have developed and demonstrated a tool that enables simulations at very large-scales on cutting edge supercomputers. We believe that as this tool continues to grow, it will serve as a crucial test bed for testing hypotheses about brain function through simulations at a scale and level of detail never before possible.

BlueMatter

What does BlueMatter mean?


BlueMatter is a highly parallelized algorithm for identifying white-matter projectomes, written to take advantage of the Blue Gene supercomputing architecture; hence the name BlueMatter.

Can you please provide more details on BlueMatter?


Our software, BlueMatter, provides unique visualization and measurement of the long-range circuitry (interior white matter) that allows geographically separated regions of the brain to communicate. The labels or colors of the fibers represent divisions of these fibrous networks that we are measuring. The colors and names are as follows:

Red - Interhemispheric fibers projecting between the corpus callosum and frontal cortex.

Green - Interhemispheric fibers projecting between primary visual cortex and the corpus callosum.

Yellow - Interhemispheric fibers projecting from the corpus callosum and not Red or Green.

Brown - Fibers of the superior longitudinal fasciculus, connecting regions critical for language processing.

Orange - Fibers of the inferior longitudinal fasciculus and uncinate fasciculus, connecting regions to cortex responsible for memory.

Purple - Projections between the parietal lobe and lateral cortex.

Blue - Fibers connecting local regions of the frontal cortex.

High-resolution version (2MB) The figure displays results from BlueMatter, a parallel algorithm for white matter projection measurement. Recent advances in diffusion-weighted magnetic resonance imaging (DW-MRI) have provided the unprecedented ability to non-invasively measure the human white matter network across the entire brain. DW-MRI acquires an aggregate description of the diffusion of water molecules, which act as microscopic probes of the dense packing of axon bundles within the white matter. Understanding the architecture of all white matter projections (the projectome) may be crucial for understanding brain function, and has already led to fundamental discoveries in normal and pathological brains. The figure displays a view from the top of the brain (top) and a view from the left hemisphere (bottom). The cortical surface is shown (gray), as well as the brain stem (pink), in context with a subset of BlueMatter's projectome estimate coursing through the core of the white matter in the left hemisphere. Leveraging the Blue Gene/L supercomputing architecture, BlueMatter creates a massive database of 180 billion candidate pathways using multiple DW-MRI tracing algorithms, and then employs a global optimization algorithm to select a subset of these candidates as the projectome. The estimated projectome accounts for 72 million projections per square centimeter of cortex and is the highest resolution projectome of the human brain.
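To make the "candidate database plus global selection" idea concrete, here is a heavily simplified sketch: candidate pathways are scored against the diffusion data and a subset is chosen greedily under a shared volume budget per white-matter voxel. The scoring, data structures, and greedy rule are illustrative assumptions; BlueMatter's actual global optimization over 180 billion candidates is far more sophisticated.

```python
# Heavily simplified sketch of "generate candidates, then select a subset
# globally". Candidate pathways are scored against diffusion data and
# chosen greedily under a shared volume budget per white-matter voxel.
# All structures and the greedy rule are illustrative assumptions.
from collections import defaultdict

def select_projectome(candidates, voxel_capacity):
    """candidates: list of (score, [voxel ids the pathway passes through]).
    Higher score = better agreement with the diffusion measurements."""
    usage = defaultdict(int)              # volume already consumed per voxel
    selected = []
    # Greedy global selection: best-supported pathways first.
    for score, voxels in sorted(candidates, reverse=True):
        if all(usage[v] < voxel_capacity[v] for v in voxels):
            for v in voxels:
                usage[v] += 1             # each pathway consumes unit volume
            selected.append((score, voxels))
    return selected

# Tiny worked example with three voxels and four candidate pathways.
capacity = {"v1": 2, "v2": 1, "v3": 2}
candidates = [
    (0.9, ["v1", "v2"]),
    (0.8, ["v2", "v3"]),   # rejected: v2 is already full
    (0.7, ["v1", "v3"]),
    (0.4, ["v1", "v3"]),   # rejected: v1 is already full
]
for score, path in select_projectome(candidates, capacity):
    print(score, path)
```

The volume budget is what makes the selection global rather than per-tract: accepting one pathway can rule out another elsewhere in the brain, which is exactly the constraint discussed in the next answer.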

What role will BlueMatter play in the SyNAPSE project?


Long term, we hope that our work will lead to insights on how to wire together a system of cognitive computing chips. Short term, we are incorporating data from BlueMatter into our cortical simulations.

What makes all the computational power necessary?


Because of the relatively low resolution of the data compared with the white matter tissue, there are many possible sets of curves one may draw to estimate the projectome and compare against a global error metric, as we have done. Searching this space leads to a combinatorial explosion of possibilities. This has led many researchers to focus on individual tract estimation at the cost of ignoring global constraints, such as the volume consumption of the tracts. Rather than simplify our model, we have addressed the computational challenge with an algorithm designed specifically to leverage the Blue Gene supercomputing architecture.

What are the next steps?


We are interested in using our technique to make measurements of the projectome and of communication between brain areas that can generate hypotheses about brain function, hypotheses that may be validated with behavioral results or perhaps functional imaging, and that can be integrated with large-scale simulations.

Future

How will your current project to design a computer similar to the human brain change the everyday computing experience?
While we have algorithms and computers to deal with structured data (for example, age, salary, etc.) and semi-structured data (for example, text and web pages), no mechanisms exist that parallel the brain's uncanny ability to act in a context-dependent fashion while integrating ambiguous information across different senses (for example, sight, hearing, touch, taste, and smell) and coordinating multiple motor modalities. Success in cognitive computing will allow us to mine the boundary between the digital and physical worlds, where raw sensory information abounds. Imagine, for example, instrumenting the world's oceans with temperature, pressure, wave height, humidity, and turbidity sensors, and imagine streaming this information in real time to a cognitive computer that may be able to detect spatiotemporal correlations, much like we can pick out a face in a crowd. We think that cognitive computing has the ability to profoundly transform the world and bring about entirely new computing architectures and, possibly, even industries.

What is the ultimate goal?


Cognitive computing seeks to engineer the mind by reverse engineering the brain. The mind arises from the brain, which is made up of billions of neurons that are linked by an Internet-like network. An emerging discipline, cognitive computing is about building the mind by understanding the brain. It synthesizes neuroscience, computer science, psychology, philosophy, and mathematics to understand and mechanize mental processes. Cognitive computing will lead to a universal computing platform that can handle a wide variety of spatio-temporally varying sensor streams.
