An individual endowed with a brain can think about something related to his position in the world, here and now. What matters is not the continuity, the performance, or the profundity of the thought, but the fact of thinking about something in a knowable way, one that can be specified linguistically or mathematically, without being an automatic, predefined response to a given situation. By analogy with this notion, long investigated by philosophers, psychologists and neurobiologists, we pose the question of artificial consciousness: how can the fact of thinking about something be transposed into the computable domain, so that an artificial system, built on computational processes, can generate facts of consciousness in an observable way? Such a system would have intentions, emotions and ideas about things and events related to itself. It would need a body that it could direct and that would, in turn, constrain it. It would need a history, and intentions to act and, above all, to think. It would need knowledge, notably knowledge of language. It would need emotions, intentions and, finally, a certain consciousness of itself. By sheer semantic analogy, we can call such a system an artificial brain, though we will see that its architecture is quite different from that of living brains. The concern is to transpose the effects and the movements, certainly not to reproduce components such as neurons
and glial cells. One characteristic of the thinking process unfolding in a brain should be kept firmly in mind: a complex movement of neural, biochemical and electrical activation takes place, coupled to a similar movement, of a different mode, in the nervous system deployed throughout the body. By selective emergence, on reaching a particular configuration, this complex movement generates what we call a thought about something. That thought rapidly leads to motor or language activity and then gives way to the next thought, which may be similar or different. This is the very complex phenomenon that has to be transposed into the computable domain. The sudden appearance of thoughts in brains should therefore be approached at the level of the complex dynamics of a system that builds and reconfigures recurrent, temporized flows. We can transpose this into an architecture of computer processes carrying symbolic meaning, made geometrically self-controlled. Two reasonable hypotheses underpin this transposition. First, an analogy between the geometrical dynamics of the real brain and of the artificial one: in the former, the flows are complex, almost continuous images; in the latter, they are dynamical graphs whose deformations are evaluated topologically. Second, a reduction of the real brain's combinatorial complexity in the computable domain, by working at a symbolic, pre-language level. The basic elements are completely different; they are not of the same scale.
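As a purely illustrative sketch of the first hypothesis, dynamical graphs whose deformations are evaluated, one could represent successive configurations of the system's activity as edge sets and score each reconfiguration by how much the edge set changes. The representation, node names and distance measure below are assumptions for illustration, not the architecture described in the text.

```python
# Toy sketch: the system's activity as a graph that reconfigures over time,
# with each "deformation" quantified as the change in its edge set.
# Nodes, edges and the metric are illustrative assumptions.

def edge_deformation(g_prev, g_next):
    """Size of the symmetric difference between two edge sets."""
    return len(set(g_prev) ^ set(g_next))

# Two successive configurations of a small, hypothetical activation graph.
t0 = {("percept", "memory"), ("memory", "intent")}
t1 = {("percept", "memory"), ("intent", "language")}

print(edge_deformation(t0, t1))  # 2 edges changed between the configurations
```

A real system would of course use a far richer notion of deformation (topological invariants of the evolving graph rather than raw edge differences), but the sketch shows the shape of the idea: the control loop watches the geometry of its own flow.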
Once these hypotheses are made, however, one should not simply start developing an architecture that operates its own control through the aspects of its changing geometry. One first needs to ask the proper question about the generation of facts of consciousness. A philosopher, M. Heidegger, asked it decades ago: what brings us to think about this thing, here and now? The answer, quite elaborate, leads to an architectural choice that takes us away from reactive or deductive systems. The system will generate its facts of consciousness intentionally, intention as P. Ricoeur understood it: there are no facts of consciousness without the intention to think. This settles the question, often considered formidable, of the freedom to think. One thinks of anything according to one's memory and one's intuition of the moment, but only if it is expressible as a thought by the system producing thoughts. Some might see something infinite in this process; we do not: a finite set of components whose movements occur in a finite space can only be in a finite number of states. Moreover, since the permanence of the physical reality apprehensible by the senses is very strong, man's preoccupations in thinking are, across his civilizations, quite limited. Let us point out that artificial systems that think artificially will be able to communicate directly at the level of the forms of ideas, without a language mediator, and could therefore be co-active while being numerous in space. For various reasons, many people think that the
path of investigating artificial consciousness should not be taken at all. I feel differently: discoveries have been at the very root of our existence, from fire to the mighty F-16. The mind is a work of art moulded in mystery, and any effort to unlock its doors should be encouraged because, I am sure, its discovery will only help us respect the great architect more.
Why do we need cognitive computing? How could cognitive computing help build a smarter planet?
As the amount of digital data that we create continues to grow massively and the world becomes more instrumented and interconnected, there is a need for new kinds of computing systems imbued with a new intelligence that can spot hard-to-find patterns in vastly varied kinds of data, both digital and sensory; analyze and integrate information in real time, in a context-dependent way; and deal with the ambiguity found in complex, real-world environments. Cognitive computing offers the promise of entirely new computing architectures, system designs and programming paradigms that will meet the needs of the instrumented and interconnected world of tomorrow.
Cornell University: Rajit Manohar
Columbia University Medical Center: Stefano Fusi
University of Wisconsin-Madison: Giulio Tononi
University of California-Merced: Christopher Kello
IBM Research: Rajagopal Ananthanarayanan, Leland Chang, Daniel Friedman, Christoph Hagleitner, Bulent Kurdi, Chung Lam, Paul Maglio, Stuart Parkin, Bipin Rajendran, Raghavendra Singh
Can you place the cat-scale simulation in the context of your past work?
For past work on rat-scale simulations, please see here, and for mouse-scale simulations, please see here.
December 2006: Blue Gene/L at IBM Research - Almaden, with 4,096 CPUs and 1 TB of memory. 40% mouse-scale: 8 million neurons, 50 billion synapses. Ten times slower than real time at 1 ms simulation resolution.
April 2007: Blue Gene/L at IBM Research - Watson, with 32,768 CPUs and 8 TB of memory. Rat-scale: 56 million neurons, 448 billion synapses. Ten times slower than real time at 1 ms simulation resolution.
March 2009: Blue Gene/P on the KAUST-IBM "Shaheen" machine at IBM Research - Watson, with 32,768 CPUs and 32 TB of memory. 1% of human scale: 200 million neurons, 2 trillion synapses. 100-1000 times slower than real time at 0.1 ms simulation resolution.
SC09 (this announcement): Blue Gene/P "DAWN" at LLNL, with 147,456 CPUs and 144 TB of memory. Cat-scale: 1 billion neurons, 10 trillion synapses. 100-1000 times slower than real time at 0.1 ms simulation resolution. Neuroscience details: neuron dynamics, synapse dynamics, individual learning synapses, biologically realistic thalamocortical connectivity, axonal delays.

Prediction: in 2019, using a supercomputer with 1 Exaflop/s and 4 PB of main memory, a near real-time human-scale simulation may become possible.

Summary: progress in large-scale cortical simulations. Each of the four charts above details recent achievements in the simulation of networks of single-compartment, phenomenological neurons with connectivity based on statistics derived from mammalian cortex. Simulations were run on Blue Gene supercomputers with progressively larger amounts of main memory. The number of synapses per neuron in the models varied from 5,485 to 10,000, reflecting construction from different sets of biological measurements. First: simulations on a Blue Gene/L supercomputer of a 40% mouse-scale cortical model with 8 million neurons and 52 billion synapses, employing 4,096 processors and 1 TB of main memory. Second: simulations on a Blue Gene/L supercomputer culminating in a rat-scale cortical model with 58 million neurons and 461 billion synapses, using 32,768 processors and 8 TB of main memory. Third: simulations on a Blue Gene/P supercomputer culminating in a one-percent human-scale cortical model with 200 million neurons and 1.97 trillion synapses, employing 32,768 processors and 32 TB of main memory. Fourth: simulations on a Blue Gene/P supercomputer culminating in a cat-scale cortical model with 1.62 billion neurons and 8.61 trillion synapses, using 147,456 processors and 144 TB of main memory. The largest simulations performed on this machine correspond to approximately 4.5% of human cerebral cortex.
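A back-of-the-envelope calculation makes these memory figures concrete. The figure of 16 bytes of state per synapse, and the assumption of roughly 20 billion cortical neurons at 10,000 synapses each for the human-scale case, are illustrative assumptions, not numbers taken from the C2 simulator.

```python
# Rough memory estimate for synaptic state in large-scale simulations.
# ASSUMPTION: 16 bytes of state per synapse (illustrative only; not the
# actual per-synapse footprint of IBM's C2 simulator).

BYTES_PER_SYNAPSE = 16

def synapse_memory_tb(total_synapses, bytes_per_synapse=BYTES_PER_SYNAPSE):
    """Approximate memory (TB) needed to hold synaptic state."""
    return total_synapses * bytes_per_synapse / 1e12

cat_tb = synapse_memory_tb(8.61e12)          # cat-scale: 8.61 trillion synapses
human_tb = synapse_memory_tb(20e9 * 10_000)  # assumed 20e9 neurons x 10,000 synapses

print(f"cat-scale:   ~{cat_tb:.0f} TB")      # ~138 TB (144 TB machine was used)
print(f"human-scale: ~{human_tb:.0f} TB")    # ~3,200 TB, on the order of the ~4 PB prediction
```

Under these assumptions, synaptic state alone nearly fills the 144 TB of the cat-scale run and lands in the same ballpark as the 4 PB predicted for a human-scale simulation, which is why memory, not just flops, dominates the scaling argument.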
The figure shows the progress that has been made in supercomputing since the early 90s. At each time point, the green line shows the 500th-fastest supercomputer, the dark blue line the fastest supercomputer, and the light blue line the summed power of the top 500 machines. These lines show a clear trend, which we've extrapolated out 10 years. The IBM team's latest simulation results represent a model about 4.5% the scale of the human cerebral cortex, which was run at 1/83 of real time. The machine used provided 144 TB of memory and 0.5 PFlop/s. Turning to the future, you can see that running human-scale cortical simulations will probably require 4 PB of memory, and running them in real time will require over 1 EFlop/s. If the current trends in supercomputing continue, it seems that human-scale simulations will be possible in the not too distant future.
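The extrapolation above amounts to extending an exponential trend. The sketch below does the same arithmetic; the baseline (~1 PFlop/s in 2009) and the growth rate (tenfold roughly every 3.3 years) are assumptions chosen for illustration, not values fitted to the TOP500 data.

```python
# Illustrative extrapolation of an exponential supercomputing trend.
# ASSUMPTIONS: ~1 PFlop/s peak in 2009, growing tenfold every ~3.3 years.

def projected_pflops(year, base_year=2009, base_pflops=1.0, tenfold_every=3.3):
    """Project peak performance (PFlop/s) under steady exponential growth."""
    return base_pflops * 10 ** ((year - base_year) / tenfold_every)

# Find the first year the trend crosses 1 EFlop/s (= 1000 PFlop/s).
year = 2009
while projected_pflops(year) < 1000.0:
    year += 1
print(f"trend crosses 1 EFlop/s around {year}")  # under these assumptions: 2019
```

Any growth rate in the historical range puts the exaflop crossing around the end of the 2010s, which is what makes the 2019 human-scale prediction plausible if the trend holds.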
Caption: Like the surface of a still lake reacting to the impact of a pebble, the neurons in IBM's cortical simulator C2 respond to stimuli. Resembling a travelling wave, the activity propagates through different cortical layers and cortical regions. The simulator is an indispensable tool that enables researchers to bring static structural brain networks to life, to probe the mystery of cognition, and to pave the path to cool, compact cognitive computing systems. Please note that the simulator is demonstrating how information percolates and propagates. It is NOT learning the IBM logo.
What do you see on the horizon for this work in thalamocortical simulations?
We are interested in expanding our model both in scale and in the details it incorporates. In terms of scale, as the amount of memory available in cutting-edge supercomputers continues to increase, we foresee that simulations at the scale of the monkey cerebral cortex, and eventually the human cerebral cortex, will soon be within reach. As supercomputing speed increases, we also see the speed of our simulations approaching real time. In terms of detail, we are currently working on differentiating our cortical region into specific areas (such as primary visual cortex or motor cortex) and providing the long-range connections that form the circuitry between these areas in the mammalian brain. For this work, we are drawing on many studies describing the structure and input/output patterns of these areas, as well as a study recently performed within IBM that collates a very large number of individual measurements of white matter, the substrate of long-range connectivity within the brain.
High-resolution version (2MB) The figure displays results from BlueMatter, a parallel algorithm for white matter projection measurement. Recent advances in diffusion-weighted magnetic resonance imaging (DW-MRI) have provided the unprecedented ability to non-invasively measure the human white matter network across the entire brain. DW-MRI acquires an aggregate description of the diffusion of water molecules, which act as microscopic probes of the dense packing of axon bundles within the white matter. Understanding the architecture of all white matter projections (the projectome) may be crucial for understanding brain function, and has already led to fundamental discoveries in normal and pathological brains. The figure displays a view from the top of the brain (top) and a view from the left hemisphere (bottom). The cortical surface is shown (gray) as well as the brain stem (pink), in context with a subset of BlueMatter's projectome estimate coursing through the core of the white matter in the left hemisphere. Leveraging the Blue Gene/L supercomputing architecture, BlueMatter creates a massive database of 180 billion candidate pathways using multiple DW-MRI tracing algorithms, and then employs a global optimization algorithm to select a subset of these candidates as the projectome. The estimated projectome accounts for 72 million projections per square centimeter of cortex and is the highest-resolution projectome of the human brain.
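BlueMatter's generate-then-select strategy can be caricatured as a coverage-style selection problem: many candidate pathways are proposed, and a subset is chosen to explain the measured data. Everything in this toy sketch, the voxel-set representation, the greedy rule and the data, is a hypothetical illustration; BlueMatter's actual global optimization over 180 billion candidates is a different and much harder computation.

```python
# Toy "generate candidates, then select a subset" sketch. Each candidate
# pathway is modeled as the set of voxels it passes through; we greedily
# keep pathways that explain the most still-unexplained measured voxels.
# NOT BlueMatter's algorithm -- an illustration of the two-stage idea only.

def select_projectome(candidates, measured_voxels, max_paths):
    """Greedy coverage: pick pathways covering the most unexplained voxels."""
    unexplained = set(measured_voxels)
    chosen = []
    for _ in range(max_paths):
        best = max(candidates, key=lambda p: len(unexplained & set(p)), default=None)
        if best is None or not unexplained & set(best):
            break  # nothing left that explains new data
        chosen.append(best)
        unexplained -= set(best)
    return chosen

candidates = [
    ("v1", "v2", "v3"),  # hypothetical pathways as tuples of voxel ids
    ("v3", "v4"),
    ("v5",),
]
print(select_projectome(candidates, {"v1", "v2", "v3", "v4"}, max_paths=10))
```

Greedy coverage is a natural first caricature because it captures the key property of the real selection step: a candidate is kept only if it accounts for measurements that the already-selected pathways do not.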
Future
How will your current project to design a computer similar to the human brain change the everyday computing experience?
While we have algorithms and computers to deal with structured data (for example, age, salary, etc.) and semi-structured data (for example, text and web pages), no mechanisms exist that parallel the brain's uncanny ability to act in a context-dependent fashion while integrating ambiguous information across different senses (for example, sight, hearing, touch, taste, and smell) and coordinating multiple motor modalities. The success of cognitive computing will allow us to mine the boundary between the digital and physical worlds, where raw sensory information abounds. Imagine, for example, instrumenting the world's oceans with temperature, pressure, wave height, humidity and turbidity sensors, and imagine streaming this information in real time to a cognitive computer that may be able to detect spatiotemporal correlations, much like we can pick out a face in a crowd. We think that cognitive computing has the ability to profoundly transform the world and bring about entirely new computing architectures and, possibly, even industries.