
An assignment report on Supercomputers

Information Technology for Business
Department of Business and Industrial Management

What is a Supercomputer?
A supercomputer is a computer at the frontline of contemporary processing capacity, particularly in speed of calculation: a large computer, or a collection of computers acting as one, capable of processing enormous amounts of data. Supercomputers are used for very complex jobs such as nuclear research or collecting and calculating weather patterns. One example is the Linux-based supercomputer at the William R. Wiley Environmental Molecular Sciences Laboratory (Pacific Northwest National Laboratory), which is composed of nearly 2,000 processors. Supercomputers are the bodybuilders of the computer world. They boast tens of thousands of times the computing power of a desktop and cost tens of millions of dollars. They fill enormous rooms, which are chilled to prevent their thousands of microprocessor cores from overheating, and they perform trillions, or even thousands of trillions, of calculations per second. All of that power means supercomputers are perfect for tackling big scientific problems, from uncovering the origins of the universe to delving into the patterns of protein folding that make life possible.

A supercomputer performs at or near the highest operational rate currently achieved by computers, and is typically used for scientific and engineering applications that must handle very large databases, do a great amount of computation, or both. At any given time there are usually a few well-publicized supercomputers operating at extremely high speeds, though the term is also sometimes applied to far slower (but still impressively fast) machines. Most supercomputers are really multiple computers that perform parallel processing. In general, there are two parallel processing approaches: symmetric multiprocessing (SMP) and massively parallel processing (MPP).
IBM's Roadrunner was, at its 2008 debut, the fastest supercomputer in the world, twice as fast as Blue Gene and six times as fast as any other supercomputer of the time; as of November 2013, China's Tianhe-2 is the fastest at 33.86 petaFLOPS. At the lower end of supercomputing, a trend called clustering takes more of a build-it-yourself approach: the Beowulf Project offers guidance on how to put together a number of off-the-shelf personal computer processors, using Linux operating systems, and interconnecting the processors with Fast Ethernet. Applications must be written to manage the parallel processing.

Perhaps the best-known builder of supercomputers has been Cray Research, acquired by Silicon Graphics in 1996 (the business later became the independent Cray Inc.). In September 2008, Cray and Microsoft launched the CX1, a $25,000 personal supercomputer aimed at markets such as aerospace, automotive, academic, financial services and life sciences; the CX1 runs Windows HPC (High Performance Computing) Server 2008. In the United States, some supercomputer centres are interconnected on an Internet backbone known as the vBNS (the successor to NSFNET). This network is the foundation for an evolving network infrastructure known as the National Technology Grid; Internet2 is a university-led project that is part of this initiative.

Supercomputers were introduced in the 1960s, made initially and, for decades, primarily by Seymour Cray at Control Data Corporation (CDC), Cray Research and subsequent companies bearing his name or monogram. While the supercomputers of the 1970s used only a few processors, in the 1990s machines with thousands of processors began to appear and, by the end of the 20th century, massively parallel supercomputers with tens of thousands of "off-the-shelf" processors were the norm.

Systems with massive numbers of processors generally take one of two paths. In one approach (e.g., in distributed computing), a large number of discrete computers (e.g., laptops) distributed across a network (e.g., the internet) devote some or all of their time to solving a common problem; each individual computer (client) receives and completes many small tasks, reporting the results to a central server which integrates the task results from all the clients into the overall solution. In another approach, a large number of dedicated processors are placed in close proximity to each other (e.g., in a computer cluster); this saves considerable time moving data around and makes it possible for the processors to work together (rather than on separate tasks), for example in mesh and hypercube architectures. The use of multi-core processors combined with centralization is an emerging trend; one can think of this as a small cluster (the multi-core processor in a smartphone, tablet, laptop, etc.) that both depends upon and contributes to the cloud.
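The distributed scatter-gather pattern described above (clients completing many small tasks, a central server integrating the results) can be sketched in miniature. This is only an illustration: threads within one process stand in for networked client machines, and the "problem" is just summing a large range in slices.

```python
from concurrent.futures import ThreadPoolExecutor

def client_task(task_id):
    # Each "client" completes one small piece of the common problem:
    # summing one 1000-element slice of the range 0..9999.
    start = task_id * 1000
    return sum(range(start, start + 1000))

# The "server" hands out tasks and integrates the partial results.
with ThreadPoolExecutor(max_workers=4) as clients:
    partials = list(clients.map(client_task, range(10)))
overall = sum(partials)
print(overall)  # 49995000, i.e. sum(range(10000))
```

The same shape scales up: in real distributed computing the tasks travel over a network and the clients are independent machines, but the decomposition into small independent work units is identical.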


Supercomputers play an important role in the field of computational science, and are used for a wide range of computationally intensive tasks in various fields, including quantum mechanics, weather forecasting, climate research, oil and gas exploration, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulations of the early moments of the universe, airplane and spacecraft aerodynamics, the detonation of nuclear weapons, and nuclear fusion). Throughout their history, they have been essential in the field of cryptanalysis.

Distinguishing features
Supercomputers have certain distinguishing features. Unlike conventional computers, they usually have more than one CPU (central processing unit), each containing circuits for interpreting program instructions and executing arithmetic and logic operations in proper sequence. The use of several CPUs to achieve high computational rates is necessitated by the physical limits of circuit technology. Electronic signals cannot travel faster than the speed of light, which thus constitutes a fundamental speed limit for signal transmission and circuit switching. This limit has almost been reached, owing to miniaturization of circuit components, dramatic reduction in the length of wires connecting circuit boards, and innovation in cooling techniques (e.g., in various supercomputer systems, processor and memory circuits are immersed in a cryogenic fluid to achieve the low temperatures at which they operate fastest).

Rapid retrieval of stored data and instructions is required to support the extremely high computational speed of CPUs. Therefore, most supercomputers have a very large storage capacity, as well as a very fast input/output capability. Still another distinguishing characteristic of supercomputers is their use of vector arithmetic, i.e., they are able to operate on pairs of lists of numbers rather than on mere pairs of numbers. For example, a typical supercomputer can multiply a list of hourly wage rates for a group of factory workers by a list of hours worked by members of that group to produce a list of dollars earned by each worker, in roughly the same time that it takes a regular computer to calculate the amount earned by just one worker.

Supercomputers were originally used in applications related to national security, including nuclear weapons design and cryptography. Today they are also routinely employed by the aerospace, petroleum, and automotive industries.
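The wage-list example can be sketched with array operations; here NumPy's elementwise multiply stands in for a hardware vector unit, and the numbers are made up for illustration:

```python
import numpy as np

# Hypothetical hourly wage rates and hours worked for four factory workers
wages = np.array([15.0, 18.5, 22.0, 17.25])
hours = np.array([40, 38, 45, 40])

# A vector operation multiplies the two lists element by element in one
# step, instead of looping over the workers one at a time.
earnings = wages * hours
print(earnings.tolist())  # [600.0, 703.0, 990.0, 690.0]
```

On a vector machine, a single instruction would process the whole pair of lists; a scalar machine would need one multiply per worker.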
In addition, supercomputers have found wide application in areas involving engineering or scientific research, as, for example, in studies of the structure of subatomic particles and of the origin and nature of the universe. Supercomputers have become an indispensable tool in weather forecasting: predictions are now based on numerical models. As the cost of supercomputers declined, their use spread to the world of online gaming. In particular, the 5th through 10th fastest Chinese supercomputers in 2007 were owned by a company with online rights in China to the electronic game World of Warcraft, which sometimes had more than a million people playing together in the same gaming world.

Characteristics which make supercomputers different from ordinary computers

Supercomputers differ from ordinary computers above all in speed: they offer the highest processing performance available at any given time and can manipulate massive amounts of data in a short time. Compared with ordinary computers, they:

- are much faster
- are generally used for scientific calculations
- use much more power
- give off more heat
- are much more expensive



Hardware and Architecture


While the supercomputers of the 1970s used only a few processors, in the 1990s machines with thousands of processors began to appear, and by the end of the 20th century massively parallel supercomputers with tens of thousands of "off-the-shelf" processors were the norm. Supercomputers of the 21st century can use over 100,000 processors (some of them graphics units) connected by fast interconnects. Systems with a massive number of processors generally take one of two paths: in one approach, known as grid computing, the processing power of a large number of computers in distributed, diverse administrative domains is opportunistically used whenever a computer is available. In another approach, a large number of processors are used in close proximity to each other, e.g. in a computer cluster. In such a centralized massively parallel system the speed and flexibility of the interconnect becomes very important, and modern supercomputers have used various approaches ranging from enhanced InfiniBand systems to three-dimensional torus interconnects. The use of multi-core processors combined with centralization is an emerging direction, e.g. as in the Cyclops64 system.


Operating system
Since the end of the 20th century, supercomputer operating systems have undergone major transformations, as sea changes have taken place in supercomputer architecture. While early operating systems were custom-tailored to each supercomputer to gain speed, the trend has been to move away from in-house operating systems toward the adaptation of generic software such as Linux. Given that modern massively parallel supercomputers typically separate computations from other services by using multiple types of nodes, they usually run different operating systems on different nodes, e.g. using a small and efficient lightweight kernel such as CNK or CNL on compute nodes, but a larger system such as a Linux derivative on server and I/O nodes. While in a traditional multi-user computer system job scheduling is in effect a tasking problem for processing and peripheral resources, in a massively parallel system the job management system needs to manage the allocation of both computational and communication resources, as well as gracefully deal with inevitable hardware failures when tens of thousands of processors are present. Although most modern supercomputers use the Linux operating system, each manufacturer has made its own specific changes to the Linux derivative it uses, and no industry standard exists, partly because differences in hardware architectures require changes to optimize the operating system to each hardware design.

Software tools and message passing

The parallel architectures of supercomputers often dictate the use of special programming techniques to exploit their speed. Software tools for distributed processing include standard APIs such as MPI, PVM and VTL, and open-source software solutions such as Beowulf. In the most common scenario, environments such as PVM and MPI are used for loosely connected clusters, and OpenMP for tightly coordinated shared-memory machines. Significant effort is required to optimize an algorithm for the interconnect characteristics of the machine it will run on; the aim is to prevent any of the CPUs from wasting time waiting on data from other nodes. GPGPUs have hundreds of processor cores and are programmed using programming models such as CUDA. Moreover, it is quite difficult to debug and test parallel programs; special techniques need to be used for testing and debugging such applications.


Performance metrics

(Figure: top supercomputer speeds on a logarithmic scale over 60 years.)

In general, the speed of supercomputers is measured and benchmarked in FLOPS (floating-point operations per second), not in MIPS (instructions per second), as is the case with general-purpose computers. These measurements are commonly used with an SI prefix such as tera-, combined into the shorthand TFLOPS (10^12 FLOPS, pronounced teraflops), or peta-, combined into the shorthand PFLOPS (10^15 FLOPS, pronounced petaflops). "Petascale" supercomputers can process one quadrillion (10^15) FLOPS. Exascale is computing performance in the exaFLOPS range; an exaFLOP is one quintillion (10^18) FLOPS (one million teraflops).
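The prefix relationships above reduce to powers of ten and can be checked with a few lines of arithmetic (Tianhe-2's figure is taken from earlier in this report):

```python
# SI prefixes used for FLOPS, as powers of ten
TERA = 10**12   # teraFLOPS
PETA = 10**15   # petaFLOPS
EXA = 10**18    # exaFLOPS

# One petaFLOP is a thousand teraFLOPS; one exaFLOP is a million teraFLOPS.
print(PETA // TERA)  # 1000
print(EXA // TERA)   # 1000000

# Tianhe-2's 33.86 petaFLOPS expressed in raw floating-point ops per second
tianhe2_flops = 33.86 * PETA
```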

Applications of supercomputers
The stages of supercomputer application may be summarized in the following table:

Decade   Uses and computer involved
1970s    Weather forecasting, aerodynamic research (Cray-1)
1980s    Probabilistic analysis, radiation shielding modelling (CDC Cyber)
1990s    Brute force code breaking (EFF DES cracker)
2000s    3D nuclear test simulations, as a substitute for testing restricted by the Nuclear Non-Proliferation Treaty (ASCI Q)
2010s    Molecular dynamics simulation (Tianhe-1A)
The IBM Blue Gene/P computer has been used to simulate a number of artificial neurons equivalent to approximately one percent of a human cerebral cortex, containing 1.6 billion neurons with approximately 9 trillion connections. The same research group also succeeded in using a supercomputer to simulate a number of artificial neurons equivalent to the entirety of a rat's brain. Modern-day weather forecasting also relies on supercomputers. The National Oceanic and Atmospheric Administration uses supercomputers to crunch hundreds of millions of observations to help make weather forecasts more accurate. In 2011, the challenges and difficulties in pushing the envelope in supercomputing were underscored by IBM's abandonment of the Blue Waters petascale project.


Supercomputer Operating Systems


Early systems

(Figure: the first Cray-1 was delivered to the customer without an operating system.)

The CDC 6600, generally considered the first supercomputer in the world, ran the Chippewa Operating System, which was then deployed on various other CDC 6000 series computers. Chippewa was a rather simple job-control-oriented system derived from the earlier CDC 3000, but it influenced the later KRONOS and SCOPE systems.

The first Cray-1 was delivered to the Los Alamos Lab without an operating system, or any other software. Los Alamos developed not only the application software for it, but also the operating system. The main timesharing system for the Cray-1, the Cray Time Sharing System (CTSS), was then developed at the Livermore labs as a direct descendant of the Livermore Time Sharing System (LTSS), which had run on the CDC 6600 twenty years earlier. The rising cost of software in developing a supercomputer soon became dominant, as evidenced by the fact that in the 1980s the cost of software development at Cray came to equal what was spent on hardware. That trend was partly responsible for a move away from the in-house Cray Operating System to the UNICOS system based on Unix. In 1985, the Cray-2 was the first system to ship with the UNICOS operating system.

Around the same time, the EOS operating system was developed by ETA Systems for use in their ETA10 supercomputers. Written in Cybil, a Pascal-like language from Control Data Corporation, EOS highlighted the problems of developing stable operating systems for supercomputers, and eventually a Unix-like system was offered on the same machine. The lessons learned from the development of ETA system software included the high level of risk associated with developing a new supercomputer operating system, and the advantages of using Unix with its large existing base of system software libraries. By the mid-1990s, despite the existing investment in older operating systems, the trend was towards the use of Unix-based systems, which also facilitated the use of interactive user interfaces for scientific computing across multiple platforms. The move towards a "commodity OS" was not without its opponents, who cited the fast pace and focus of Linux development as a major obstacle to adoption; as one author wrote, "Linux will likely catch up, but we have large-scale systems now". Nevertheless, the trend continued to build momentum, and by 2005 virtually all supercomputers used some Unix-like OS. These variants of Unix included AIX from IBM, the open-source Linux system, and other adaptations such as UNICOS from Cray. Linux was eventually estimated to command the highest share of the supercomputing pie.

Modern approaches

(Figure: the Blue Gene/P supercomputer at Argonne National Laboratory.)

The IBM Blue Gene supercomputer uses the CNK operating system on the compute nodes, but uses a modified Linux-based kernel called INK (for I/O Node Kernel) on the I/O nodes. CNK is a lightweight kernel that runs on each node and supports a single application running for a single user on that node. For the sake of efficient operation, the design of CNK was kept simple and minimal, with physical memory being statically mapped and the CNK neither needing nor providing scheduling or context switching. CNK does not even implement file I/O on the compute node, but delegates that to dedicated I/O nodes. However, given that on the Blue Gene multiple compute nodes share a single I/O node, the I/O node operating system does require multi-tasking, hence the selection of the Linux-based operating system.


While in traditional multi-user computer systems and early supercomputers job scheduling was in effect a scheduling problem for processing and peripheral resources, in a massively parallel system the job management system needs to manage the allocation of both computational and communication resources. The need to tune task scheduling, and to tune the operating system, in different configurations of a supercomputer is essential. A typical parallel job scheduler has a master scheduler which instructs a number of slave schedulers to launch, monitor and control parallel jobs, and periodically receives reports from them about the status of job progress. Some, but not all, supercomputer schedulers attempt to maintain locality of job execution. The PBS Pro scheduler used on the Cray XT3 and Cray XT4 systems does not attempt to optimize locality on its three-dimensional torus interconnect, but simply uses the first available processor. On the other hand, IBM's scheduler on the Blue Gene supercomputers aims to exploit locality and minimize network contention by assigning tasks from the same application to one or more midplanes of an 8x8x8 node group. The SLURM scheduler uses a best-fit algorithm and performs Hilbert curve scheduling in order to optimize locality of task assignments. A number of modern supercomputers, such as the Tianhe-1A, use the SLURM job scheduler, which arbitrates contention for resources across the system. SLURM is open source, Linux-based and quite scalable, and can manage thousands of nodes in a computer cluster with a sustained throughput of over 100,000 jobs per hour.
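The "best fit" idea mentioned for SLURM can be illustrated with a toy allocator. This sketches only the general best-fit heuristic, not SLURM's actual implementation, which also accounts for network topology via Hilbert curve ordering:

```python
def best_fit(free_blocks, request):
    """Return the size of the smallest contiguous block of free nodes
    that still satisfies the request, or None if no block is big enough.
    A toy version of the best-fit allocation heuristic."""
    candidates = [size for size in free_blocks if size >= request]
    return min(candidates) if candidates else None

# Free contiguous blocks of 4, 16 and 8 nodes; a job asks for 6 nodes.
# Best fit picks the 8-node block: it wastes the least and leaves the
# 16-node block intact for larger jobs.
print(best_fit([4, 16, 8], 6))  # 8
print(best_fit([4, 2], 6))      # None: the job must wait in the queue
```

A first-fit scheduler (like the PBS Pro behavior described above) would instead grab the first block that is large enough, trading locality and packing quality for simplicity.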


Applications of supercomputers


Recreating the Big Bang
It takes big computers to look into the biggest question of all: what is the origin of the universe? The "Big Bang", the initial expansion of all energy and matter in the universe, happened more than 13 billion years ago at trillion-degree-Celsius temperatures, but supercomputer simulations make it possible to observe what went on during the universe's birth. Researchers at the Texas Advanced Computing Center (TACC) at the University of Texas at Austin have used supercomputers to simulate the formation of the first galaxy, while scientists at NASA's Ames Research Center in Mountain View, Calif., have simulated the creation of stars from cosmic dust and gas. Supercomputer simulations also make it possible for physicists to answer questions about the unseen universe of today. Invisible dark matter makes up about 25 percent of the universe, and dark energy makes up more than 70 percent, but physicists know little about either. Using powerful supercomputers like IBM's Roadrunner at Los Alamos National Laboratory, researchers can run models that require upward of a thousand trillion calculations per second, allowing for the most realistic models of these cosmic mysteries yet.

Understanding earthquakes
Other supercomputer simulations hit closer to home. By modeling the three-dimensional structure of the Earth, researchers can predict how earthquake waves will travel both locally and globally. It's a problem that seemed intractable two decades ago, says Princeton geophysicist Jeroen Tromp, but by using supercomputers, scientists can solve very complex equations that mirror real life. "We can basically say, if this is your best model of what the earth looks like in a 3-D sense, this is what the waves look like," Tromp said. By comparing any remaining differences between simulations and real data, Tromp and his team are perfecting their images of the earth's interior. The resulting techniques can be used to map the subsurface for oil exploration or carbon sequestration, and can help researchers understand the processes occurring deep in the Earth's mantle and core.

Folding Proteins
In 1999, IBM announced plans to build the fastest supercomputer the world had ever seen. The first challenge for this technological marvel, dubbed "Blue Gene"? Unravelling the mysteries of protein folding. Proteins are made of long strands of amino acids folded into complex three-dimensional shapes. Their function is driven by their form. When a protein misfolds, there can be serious consequences, including disorders like cystic fibrosis, mad cow disease and Alzheimer's disease. Finding out how proteins fold, and how folding can go wrong, could be the first step in curing these diseases. Blue Gene isn't the only supercomputer to work on this problem, which requires massive amounts of power to simulate mere microseconds of folding time. Using simulations, researchers have uncovered the folding strategies of several proteins, including one found in the lining of the mammalian gut. Meanwhile, the Blue Gene project has expanded: as of November 2009, a Blue Gene system in Germany was ranked as the fourth most powerful supercomputer in the world, with a maximum processing speed of a thousand trillion calculations per second.


Mapping the blood stream


Think you have a pretty good idea of how your blood flows? Think again. The total length of all of the veins, arteries and capillaries in the human body is between 60,000 and 100,000 miles. To map blood flow through this complex system in real time, Brown University professor of applied mathematics George Karniadakis works with multiple laboratories and multiple computer clusters. In a 2009 paper in the journal Philosophical Transactions of the Royal Society, Karniadakis and his team describe the flow of blood through the brain of a typical person compared with blood flow in the brain of a person with hydrocephalus, a condition in which cranial fluid builds up inside the skull. The results could help researchers better understand strokes, traumatic brain injury and other vascular brain diseases, the authors write.

Modeling swine flu


Potential pandemics like the H1N1 swine flu require a fast response on two fronts: First, researchers have to figure out how the virus is spreading. Second, they have to find drugs to stop it. Supercomputers can help with both. During the recent H1N1 outbreak, researchers at Virginia Polytechnic Institute and State University in Blacksburg, Va., used an advanced model of disease spread called EpiSimdemics to predict the transmission of the flu. The program, which is designed to model populations up to 300 million strong, was used by the U.S. Department of Defense during the outbreak, according to a May 2009 report in IEEE Spectrum magazine.


Meanwhile, researchers at the University of Illinois at Urbana-Champaign and the University of Utah were using supercomputers to peer into the virus itself. Using the Ranger supercomputer at the TACC in Austin, Texas, the scientists unraveled the structure of swine flu. They figured out how drugs would bind to the virus and simulated the mutations that might lead to drug resistance. The results showed that the virus was not yet resistant, but would be soon, according to a report by the TeraGrid computing resources center. Such simulations can help doctors prescribe drugs that won't promote resistance.

Testing nuclear weapons


Since 1992, the United States has banned the testing of nuclear weapons. But that doesn't mean the nuclear arsenal is out of date. The Stockpile Stewardship program uses non-nuclear lab tests and, yes, computer simulations to ensure that the country's cache of nuclear weapons is functional and safe. In 2012, IBM unveiled a new supercomputer, Sequoia, at Lawrence Livermore National Laboratory in California. According to IBM, Sequoia is a 20-petaflop machine, meaning it is capable of performing twenty thousand trillion calculations each second. Sequoia's prime directive is to create better simulations of nuclear explosions and to do away with real-world nuclear testing.


Forecasting hurricanes
With Hurricane Ike bearing down on the Gulf Coast in 2008, forecasters turned to Ranger for clues about the storm's path. This supercomputer, with its cowboy moniker and its 579 trillion calculations per second of processing power, resides at the TACC in Austin, Texas. Using data directly from National Oceanic and Atmospheric Administration airplanes, Ranger calculated likely paths for the storm. According to a TACC report, Ranger improved the five-day hurricane forecast by 15 percent. Simulations are also useful after a storm: when Hurricane Rita hit Texas in 2005, Los Alamos National Laboratory in New Mexico lent manpower and computer power to model vulnerable electrical lines and power stations, helping officials make decisions about evacuation, power shutoff and repairs.

Predicting climate change


The challenge of predicting global climate is immense. There are hundreds of variables, from the reflectivity of the earth's surface (high for icy spots, low for dark forests) to the vagaries of ocean currents. Dealing with these variables requires supercomputing capabilities. Computer power is so coveted by climate scientists that the U.S. Department of Energy gives out access to its most powerful machines as a prize. The resulting simulations both map out the past and look into the future. Models of the ancient past can be matched with fossil data to check for reliability, making future predictions stronger. New variables, such as the effect of cloud cover on climate, can be explored. One model, created in 2008 at Brookhaven National Laboratory in New York, mapped the aerosol particles and turbulence of clouds to a resolution of 30 square feet. These maps will have to become much more detailed before researchers truly understand how clouds affect climate over time.

Building brains
So how do supercomputers stack up to humans? Well, they're really good at computation: it would take 120 billion people with 120 billion calculators 50 years to do what the Sequoia supercomputer will be able to do in a day. But when it comes to the brain's ability to process information in parallel by doing many calculations simultaneously, even supercomputers lag behind. Dawn, a supercomputer at Lawrence Livermore National Laboratory, can simulate the brain power of a cat, though 100 to 1,000 times slower than a real cat brain. Nonetheless, supercomputers are useful for modeling the nervous system. In 2006, researchers at the École Polytechnique Fédérale de Lausanne in Switzerland successfully simulated a 10,000-neuron chunk of a rat brain called a neocortical unit. With enough of these units, the scientists on this so-called "Blue Brain" project hope to eventually build a complete model of the human brain. The brain would not be an artificial intelligence system, but rather a working neural circuit that researchers could use to understand brain function and test virtual psychiatric treatments. But Blue Brain could be even better than artificial intelligence, lead researcher Henry Markram told The Guardian newspaper in 2007: "If we build it right, it should speak."


How is a supercomputer different from other computers?


Mainframe computers were introduced in 1975. A mainframe computer is a large computer in terms of price, power and speed; it is more powerful than a minicomputer. A mainframe can serve up to 50,000 users simultaneously, and its price ranges from $5,000 to $5 million. These computers can store large amounts of data, information and instructions, and users access them through a terminal or personal computer. A typical mainframe computer can execute 16 million instructions per second. Qualified operators and programmers are required to use these computers. Mainframe computers can accept all types of high-level languages, and different types of peripheral devices can be attached to them. Examples: the IBM 4381, NEC 610 and DEC 10.

Supercomputers are the biggest in size and the most expensive in price of any class of computer. The supercomputer is the most sophisticated, complex and advanced computer; it has a very large storage capacity and can process trillions of instructions in one second. Prices range from $500,000 to $350 million. Supercomputers use high-speed facilities such as satellite links for online processing, and can handle large amounts of scientific computation. A supercomputer is maintained in a special room and can be tens of thousands of times faster than the microcomputers that are very common nowadays. The cost associated with a supercomputer is roughly $20 million; due to this high cost it is not used for domestic or office-level work. Examples: the Cray X-MP and ETA-10.


Supercomputers are used in areas such as defence and weaponry systems, weather forecasting and scientific research. They were first used for defence purposes, keeping record information on war weapons and allied products. For example, George and David Chudnovsky broke the world record for pi calculation by using two supercomputers to compute pi to 480 million decimal places (pi is the commonly used mathematical constant based on the relationship of a circle's circumference to its diameter). From there onwards, such extended values of pi became popular for geometry-related calculations. In the next few years, more and more large industries will start using supercomputers such as parallel computers, which have hundreds or even thousands of processors.

Mainframe computers are used in large-scale organizations, whereas a supercomputer is defined by its speed, regardless of its size. A mainframe is a form of computer system that is generally more powerful than typical mini systems; mainframes are used in large organizations for large-scale jobs, vary widely in cost and capability, and are the kind traditionally used as the main record-keeper and data processor for large businesses and government facilities. "Supercomputer", by contrast, is a term for very fast computers, regardless of their physical size. It used to be that a computer that could perform more than one gigaflop (one billion floating-point operations per second) was considered a supercomputer; now most high-end personal computers operate at that speed. The largest, fastest and most expensive computers in the world are supercomputers, used for biomedical research, weather forecasting, chemical analysis in laboratories and similar work. NEC's Earth Simulator in Japan was for several years the world's fastest computer.
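The Chudnovsky brothers' record rested on their own rapidly converging series for pi, in which every term contributes roughly 14 more correct digits. Below is a minimal sketch of that series using Python's standard decimal module; the real 480-million-digit runs used supercomputers and far more sophisticated arithmetic, and the function name here is purely illustrative.

```python
from decimal import Decimal, getcontext

def chudnovsky_pi(digits):
    """Approximate pi with the Chudnovsky series (~14 digits per term)."""
    getcontext().prec = digits + 10              # working precision + guard digits
    C = 426880 * Decimal(10005).sqrt()
    M, L, X, K = 1, 13591409, 1, 6               # exact integer term state
    S = Decimal(13591409)                        # running series sum
    for i in range(1, digits // 14 + 2):
        M = (K**3 - 16 * K) * M // i**3          # next multinomial term (stays integer)
        L += 545140134
        X *= -262537412640768000
        S += Decimal(M * L) / X
        K += 12
    return +(C / S)                              # unary + rounds to current precision

print(chudnovsky_pi(30))   # 3.14159265358979323846264338...
```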


Top 10 Supercomputers in the World


The following table gives the Top 10 positions of the supercomputers as of November 18, 2013 (Rmax and Rpeak in Pflops).

Rank | Rmax / Rpeak | Name | Computer design (processor, interconnect) | Vendor | Site (country, year)
1 | 33.863 / 54.902 | Tianhe-2 | NUDT, Xeon E5-2692 + Xeon Phi 31S1P, TH Express-2 | NUDT | National Supercomputing Center in Guangzhou (China, 2013)
2 | 17.590 / 27.113 | Titan | Cray XK7, Opteron 6274 + Tesla K20X, Cray Gemini interconnect | Cray | Oak Ridge National Laboratory (United States, 2012)
3 | 17.173 / 20.133 | Sequoia | Blue Gene/Q, PowerPC A2, custom interconnect | IBM | Lawrence Livermore National Laboratory (United States, 2013)
4 | 10.510 / 11.280 | K computer | Fujitsu, SPARC64 VIIIfx, Tofu interconnect | Fujitsu | RIKEN (Japan, 2011)
5 | 8.586 / 10.066 | Mira | Blue Gene/Q, PowerPC A2, custom interconnect | IBM | Argonne National Laboratory (United States, 2013)
6 | 5.168 / 8.520 | Stampede | PowerEdge C8220, Xeon E5-2680 + Xeon Phi, Infiniband | Dell | Texas Advanced Computing Center (United States, 2013)
7 | 5.008 / 5.872 | JUQUEEN | Blue Gene/Q, PowerPC A2, custom interconnect | IBM | Forschungszentrum Juelich (Germany, 2013)
8 | 4.293 / 5.033 | Vulcan | Blue Gene/Q, PowerPC A2, custom interconnect | IBM | Lawrence Livermore National Laboratory (United States, 2013)


The table continues with positions 9 and 10.

Rank | Rmax / Rpeak | Name | Computer design (processor, interconnect) | Vendor | Site (country, year)
9 | 2.897 / 3.185 | SuperMUC | iDataPlex DX360M4, Xeon E5-2680, Infiniband | IBM | Leibniz-Rechenzentrum (Germany, 2012)
10 | 2.566 / 4.701 | Tianhe-1A | NUDT, Xeon X5670 + Tesla M2050, proprietary interconnect | NUDT | National Supercomputing Center in Tianjin (China, 2010)
Rank: In the TOP500 list, the computers are ordered first by their Rmax value; in the case of equal Rmax values for different computers, the order is by Rpeak.
Rmax: The highest score measured using the LINPACK benchmark suite. This is the number used to rank the computers, measured in quadrillions of floating-point operations per second, i.e. Pflops.
Rpeak: The theoretical peak performance of the system, also measured in Pflops.
Name: Some supercomputers are unique, at least at their location, and are therefore christened by their owners.
Computer: The computing platform as it is marketed.
Processor cores: The number of processor cores actively used while running LINPACK, followed by the processor architecture of those cores.
Vendor: The manufacturer of the platform and hardware.
Site: The name of the facility operating the supercomputer.
Country: The country in which the computer is situated.
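The ordering rule just described (sort by Rmax, break ties with Rpeak) can be sketched directly against the Pflops figures from the table in this report. The efficiency column printed below (Rmax divided by Rpeak) is a derived figure, not a value published in the TOP500 list itself.

```python
# Top 10 as (name, Rmax, Rpeak) in Pflops, transcribed from the table above.
top10 = [
    ("Tianhe-2",   33.863, 54.902),
    ("Titan",      17.590, 27.113),
    ("Sequoia",    17.173, 20.133),
    ("K computer", 10.510, 11.280),
    ("Mira",        8.586, 10.066),
    ("Stampede",    5.168,  8.520),
    ("JUQUEEN",     5.008,  5.872),
    ("Vulcan",      4.293,  5.033),
    ("SuperMUC",    2.897,  3.185),
    ("Tianhe-1A",   2.566,  4.701),
]

# The TOP500 ordering rule: descending Rmax first, Rpeak breaks ties.
ranked = sorted(top10, key=lambda s: (-s[1], -s[2]))
for rank, (name, rmax, rpeak) in enumerate(ranked, start=1):
    # Rmax/Rpeak is the fraction of theoretical peak that LINPACK achieves.
    print(f"{rank:2d}. {name:<11} {rmax:7.3f} / {rpeak:7.3f} Pflops "
          f"({rmax / rpeak:.0%} of peak)")
```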

1. Tianhe-2 (MilkyWay-2)

Country: China
Site: National University of Defence Technology (NUDT)
Manufacturer: NUDT
Cores: 3,120,000
Linpack Performance (Rmax): 33,862.7 TFlop/s
Theoretical Peak (Rpeak): 54,902.4 TFlop/s
Power: 17,808.00 kW
Memory: 1,024,000 GB
Interconnect: TH Express-2
Operating System: Kylin Linux
Compiler: ICC
Math Library: Intel MKL-11.0.0
MPI: MPICH2 with a customized GLEX channel

2. Titan

Country: U.S.
Site: DOE/SC/Oak Ridge National Laboratory
System URL: http://www.olcf.ornl.gov/titan/
Manufacturer: Cray Inc.
Cores: 560,640
Linpack Performance (Rmax): 17,590.0 TFlop/s
Theoretical Peak (Rpeak): 27,112.5 TFlop/s
Power: 8,209.00 kW
Memory: 710,144 GB
Interconnect: Cray Gemini interconnect
Operating System: Cray Linux Environment


3. Sequoia

Country: U.S.
Site: DOE/NNSA/LLNL
Manufacturer: IBM
Cores: 1,572,864
Linpack Performance (Rmax): 17,173.2 TFlop/s
Theoretical Peak (Rpeak): 20,132.7 TFlop/s
Power: 7,890.00 kW
Memory: 1,572,864 GB
Interconnect: Custom interconnect
Operating System: Linux


4. K computer

Country: Japan
Site: RIKEN Advanced Institute for Computational Science (AICS)
Manufacturer: Fujitsu
Cores: 705,024
Linpack Performance (Rmax): 10,510.0 TFlop/s
Theoretical Peak (Rpeak): 11,280.4 TFlop/s
Power: 12,659.89 kW
Memory: 1,410,048 GB
Interconnect: Custom interconnect
Operating System: Linux


5. Mira

Country: U.S.
Site: DOE/SC/Argonne National Laboratory
Manufacturer: IBM
Cores: 786,432
Linpack Performance (Rmax): 8,586.6 TFlop/s
Theoretical Peak (Rpeak): 10,066.3 TFlop/s
Power: 3,945.00 kW
Interconnect: Custom interconnect
Operating System: Linux


6. Stampede

Country: U.S.
Site: Texas Advanced Computing Center/Univ. of Texas, Austin
System URL: http://www.tacc.utexas.edu/stampede
Manufacturer: Dell
Cores: 462,462
Linpack Performance (Rmax): 5,168.1 TFlop/s
Theoretical Peak (Rpeak): 8,520.1 TFlop/s
Power: 4,510.00 kW
Memory: 192,192 GB
Interconnect: Infiniband FDR
Operating System: Linux
Compiler: Intel
Math Library: MKL

7. JUQUEEN

Country: Germany
Site: Forschungszentrum Juelich (FZJ)
System URL: http://www.fzjuelich.de/ias/jsc/EN/Expertise/Supercomputers/JUQUEEN/JUQUEEN_node.html
Manufacturer: IBM
Cores: 458,752
Linpack Performance (Rmax): 5,008.9 TFlop/s
Theoretical Peak (Rpeak): 5,872.0 TFlop/s
Power: 2,301.00 kW
Memory: 458,752 GB
Interconnect: Custom interconnect
Operating System: Linux


8. Vulcan

Country: U.S.
Site: DOE/NNSA/LLNL
Manufacturer: IBM
Cores: 393,216
Linpack Performance (Rmax): 4,293.3 TFlop/s
Theoretical Peak (Rpeak): 5,033.2 TFlop/s
Power: 1,972.00 kW
Memory: 393,216 GB
Interconnect: Custom interconnect
Operating System: Linux


9. SuperMUC

Country: Germany
Site: Leibniz Rechenzentrum
System URL: http://www.lrz.de/services/compute/supermuc/
Manufacturer: IBM
Cores: 147,456
Linpack Performance (Rmax): 2,897.0 TFlop/s
Theoretical Peak (Rpeak): 3,185.1 TFlop/s
Power: 3,422.67 kW
Interconnect: Infiniband FDR
Operating System: Linux


10. Tianhe-1A (MilkyWay-1A)

Country: China
Site: National Supercomputing Center in Tianjin
Manufacturer: NUDT
Cores: 186,368
Linpack Performance (Rmax): 2,566.0 TFlop/s
Theoretical Peak (Rpeak): 4,701.0 TFlop/s
Power: 4,040.00 kW
Memory: 229,376 GB
Interconnect: Proprietary
Operating System: Linux
Compiler: ICC
MPI: MPICH2 with a custom GLEX channel
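The Rmax and Power figures on the preceding ten pages allow a rough performance-per-watt comparison. The sketch below derives Gflops per watt in the spirit of the Green500 ranking; the ratio itself is computed here, not quoted in this report.

```python
# (Rmax in TFlop/s, power in kW), transcribed from the detail pages above.
systems = {
    "Tianhe-2":   (33862.7, 17808.00),
    "Titan":      (17590.0,  8209.00),
    "Sequoia":    (17173.2,  7890.00),
    "K computer": (10510.0, 12659.89),
    "Mira":       ( 8586.6,  3945.00),
    "Stampede":   ( 5168.1,  4510.00),
    "JUQUEEN":    ( 5008.9,  2301.00),
    "Vulcan":     ( 4293.3,  1972.00),
    "SuperMUC":   ( 2897.0,  3422.67),
    "Tianhe-1A":  ( 2566.0,  4040.00),
}
for name, (rmax_tflops, power_kw) in systems.items():
    # Tflop/s -> Gflop/s is *1000; kW -> W is *1000, so the factors cancel.
    gflops_per_watt = rmax_tflops / power_kw
    print(f"{name:<10} {gflops_per_watt:5.2f} Gflops/W")
```

Note how the Blue Gene/Q machines (Sequoia, Mira, JUQUEEN, Vulcan) cluster near the top of this metric even though they are spread across the Rmax ranking.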

Supercomputing in India
India's supercomputer program was started in the late 1980s because Cray supercomputers were denied for import, due to an arms embargo imposed on India: supercomputing was a dual-use technology that could also serve the development of nuclear weapons. PARAM 8000 is considered India's first supercomputer. It was indigenously built in 1990 by the Centre for Development of Advanced Computing (C-DAC), and was replicated and installed at ICAD Moscow in 1991 under a Russian collaboration.

India's Rank in Top500 super computers


As of June 2013, India had 11 systems on the Top500 list, at ranks 36, 69, 89, 95, 174, 245, 291, 309, 310, 311 and 439.

Rank | Site | Name | Rmax (TFlop/s) | Rpeak (TFlop/s)
36 | Indian Institute of Tropical Meteorology | iDataPlex DX360M4 | 719.2 | 790.7
69 | Centre for Development of Advanced Computing | PARAM Yuva - II | 386.7 | 529.4
89 | National Centre for Medium Range Weather Forecasting | iDataPlex DX360M4 | 318.4 | 350.1
95 | CSIR Centre for Mathematical Modelling and Computer Simulation | Cluster Platform 3000 BL460c Gen8 | 303.9 | 360.8
174 | Vikram Sarabhai Space Centre, ISRO | SAGA Z24XX/SL390s Cluster | 188.7 | 394.8
245 | Manufacturing Company India | Cluster Platform 3000 BL460c Gen8 | 149.2 | 175.7
291 | Computational Research Laboratories | EKA - Cluster Platform 3000 BL460c | 132.8 | 172.6
309 | Semiconductor Company (F) | Cluster Platform 3000 BL460c Gen8 | 129.2 | 182.0
310 | Semiconductor Company (F) | Cluster Platform 3000 BL460c Gen8 | 129.2 | 182.0
311 | Network Company | Cluster Platform 3000 BL460c Gen8 | 128.8 | 179.7
439 | IT Services Provider (B) | Cluster Platform 3000 BL460c Gen8 | 104.2 | 199.7


PARAM SERIES
After being denied Cray supercomputers as a result of a technology embargo, India started a program to develop indigenous supercomputers and supercomputing technologies. Supercomputers were considered a double-edged weapon, capable of assisting in the development of nuclear weapons.[5] To achieve self-sufficiency in the field, the Centre for Development of Advanced Computing (C-DAC) was set up in 1988 by the then Department of Electronics, with Dr. Vijay Bhatkar as its director. The project was given an initial run of three years and initial funding of ₹300,000,000, since roughly the same amount of money and time was usually expended to purchase a supercomputer from the US. In 1990, a prototype was produced and benchmarked at the 1990 Zurich Supercomputing Show, where it surpassed most other systems, placing India second after the US.

The final result of the effort was the PARAM 8000, which was installed in 1991. It is considered India's first supercomputer.

PARAM 8000
Unveiled in 1991, PARAM 8000 used Inmos T800 transputers. Transputers were, at the time, a fairly new and innovative microprocessor architecture designed for parallel processing. The machine was a distributed-memory MIMD architecture with a reconfigurable interconnection network, and had 64 CPUs.

PARAM 8600
PARAM 8600 was an improvement over the PARAM 8000. It was a 256-CPU computer; for every four Inmos T800 transputers, it employed an Intel i860 coprocessor. The result was over 5 GFLOPS at peak for vector processing. Several of these models were exported.

PARAM 9900/SS
PARAM 9900/SS was designed as an MPP system using the SuperSPARC II processor. The design was made modular so that newer processors could easily be accommodated. Typically it used 32 to 40 processors, but it could be scaled up to 200 CPUs using a Clos network topology. PARAM 9900/US was the UltraSPARC variant, and PARAM 9900/AA was the DEC Alpha variant.


PARAM 10000
In 1998, the PARAM 10000 was unveiled. It used several independent nodes, each based on the Sun Enterprise 250 server with two 400 MHz UltraSPARC II processors. The base configuration had three compute nodes and a server node, with a peak speed of 6.4 GFLOPS. A typical system would contain 160 CPUs and be capable of 100 GFLOPS, but it was easily scalable to the TFLOPS range.
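The PARAM 10000 figures above are easy to sanity-check, assuming each 400 MHz UltraSPARC II retires two floating-point operations per cycle. That per-cycle rate is an assumption chosen to be consistent with the quoted 6.4 GFLOPS base system, not a figure from the text.

```python
# Peak-speed arithmetic for the PARAM 10000 (sketch; assumptions noted above).
CLOCK_HZ = 400e6             # 400 MHz UltraSPARC II
FLOPS_PER_CYCLE = 2          # assumed peak floating-point rate per cycle
per_cpu = CLOCK_HZ * FLOPS_PER_CYCLE           # 0.8 Gflops per CPU

base_cpus = 4 * 2            # 3 compute nodes + 1 server node, 2 CPUs each
print(f"base peak: {base_cpus * per_cpu / 1e9:.1f} GFLOPS")        # 6.4, as quoted

typical_cpus = 160
print(f"160-CPU peak: {typical_cpus * per_cpu / 1e9:.1f} GFLOPS")  # 128 at peak;
# the quoted 100 GFLOPS for a 160-CPU system is plausibly a sustained figure
```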

PARAM Padma
PARAM Padma (Padma means "lotus" in Sanskrit) was introduced in April 2003. It had a peak speed of 1024 GFLOPS (about 1 TFLOPS) and peak storage of 1 TB. It used 248 IBM POWER4 CPUs of 1 GHz each, ran IBM AIX 5.1L, and used PARAMnet II as its primary interconnect. It was the first Indian supercomputer to break the 1 TFLOPS barrier.

PARAM Yuva
PARAM Yuva (Yuva means "youth" in Sanskrit) was unveiled in November 2008. It has a maximum sustained speed (Rmax) of 38.1 TFLOPS and a peak speed (Rpeak) of 54 TFLOPS.[10] It contains 4608 cores, based on Intel 73XX processors of 2.9 GHz each, has a storage capacity of 25 TB, scalable up to 200 TB, and uses PARAMnet 3 as its primary interconnect.

PARAM Yuva II
PARAM Yuva II was built by the Centre for Development of Advanced Computing in a period of three months, at a cost of ₹16 crore (US$2 million), and was unveiled on 8 February 2013. It performs at a peak of 524 teraflops and consumes 35% less energy than PARAM Yuva. It delivers a sustained performance of 360.8 teraflops on the community-standard Linpack benchmark, and would have been ranked 62nd in the November 2012 TOP500 list. In terms of power efficiency, it would have been ranked 33rd in the November 2012 Green500 list. It is the first Indian supercomputer to achieve more than 500 teraflops.
