
Wednesday, July 23, 2008

ALL ABOUT SUPER COMPUTERS

A supercomputer is a computer that is at the front line of processing capacity, particularly speed of calculation. The term "Super Computing" was first used by New York World newspaper in 1929 to refer to large custom-built tabulators that IBM had made for Columbia University.
Supercomputers introduced in the 1960s were designed primarily by Seymour Cray at Control Data Corporation (CDC), and led the market into the 1970s until Cray left to form his own company, Cray Research. He then took over the supercomputer market with his new designs, holding the top spot in supercomputing for five years (1985–1990). Cray himself never used the word "supercomputer"; a little-remembered fact is that he acknowledged only the word "computer". In the 1980s a large number of smaller competitors entered the market, in a parallel to the creation of the minicomputer market a decade earlier, but many of these disappeared in the mid-1990s "supercomputer market crash". Today, supercomputers are typically one-of-a-kind custom designs produced by "traditional" companies such as Cray, IBM and HP, who purchased many of the 1980s companies to gain their experience.
Common uses
Supercomputers are used for highly calculation-intensive tasks such as problems involving quantum mechanical physics, weather forecasting, climate research (including research into global warming), molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), physical simulations (such as simulation of airplanes in wind tunnels, simulation of the detonation of nuclear weapons, and research into nuclear fusion), cryptanalysis, and the like. Major universities, military agencies and scientific research laboratories are heavy users.
A particular class of problems, known as Grand Challenge problems, are problems whose full solution requires computing resources far beyond what is currently practical.
Relevant here is the distinction between capability computing and capacity computing. Capability computing is typically thought of as using the maximum computing power to solve a large problem in the shortest amount of time. Often a capability system is able to solve a problem of a size or complexity that no other computer can. Capacity computing in contrast is typically thought of as using efficient cost-effective computing power to solve somewhat large problems or many small problems or to prepare for a run on a capability system.
Hardware and software design
Supercomputers are designed to perform complex calculations at faster speeds than other computers, and their designers rely on two main techniques to enhance performance. The first is called pipelining: the stages of an operation are overlapped, with operands grouped and passed to the CPU in an orderly stream, so that the circuits in the CPU perform operations continuously while new data is still being entered.
The second technique is parallelism: many calculations proceed at the same time rather than strictly one after another. A common way to achieve this is to connect several CPUs that work on the problem together, each carrying out the instructions it is assigned on its own piece of the data.
All supercomputers use pipelining or parallelism separately, or combine them, to enhance processing speed. However, growing demand for calculating power brought about the creation of massively parallel processing (MPP) supercomputers, which consist of many machines connected together to attain a very high degree of parallelism.
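The data-parallel approach described above can be sketched in a few lines of Python. This is a minimal illustration, not how a real MPP machine is programmed: the array is split into chunks, each worker process computes a partial sum, and the partial results are combined (the chunk size and worker count here are arbitrary choices for the example).

```python
# Sketch of data parallelism: split an array across worker processes,
# let each compute a partial result, then combine the partial results.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    """Work done by one worker on its own piece of the data."""
    return sum(chunk)

def parallel_sum(data, workers=4):
    # Divide the data into one chunk per worker.
    size = (len(data) + workers - 1) // workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # Each chunk is processed in a separate process, in parallel.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum(list(range(1_000_000))))
```

On a real supercomputer the "workers" are separate nodes communicating over an interconnect rather than processes on one host, but the pattern of partitioning data and combining partial results is the same.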
Supercomputers using custom CPUs traditionally gained their speed over conventional computers through the use of innovative designs that allow them to perform many tasks in parallel, as well as complex detail engineering. They tend to be specialized for certain types of computation, usually numerical calculations, and perform poorly at more general computing tasks. Their memory hierarchy is very carefully designed to ensure the processor is kept fed with data and instructions at all times — in fact, much of the performance difference between slower computers and supercomputers is due to the memory hierarchy. Their I/O systems tend to be designed to support high bandwidth, with latency less of an issue, because supercomputers are not used for transaction processing.
As with all highly parallel systems, Amdahl's law applies, and supercomputer designs devote great effort to eliminating software serialization, and using hardware to address the remaining bottlenecks.
Supercomputer challenges, technologies
• A supercomputer generates large amounts of heat and must be cooled. Cooling most supercomputers is a major HVAC problem.
• Information cannot move faster than the speed of light between two parts of a supercomputer. For this reason, a supercomputer that is many meters across must have latencies between its components measured at least in the tens of nanoseconds. Seymour Cray's supercomputer designs attempted to keep cable runs as short as possible for this reason: hence the cylindrical shape of his Cray range of computers. In modern supercomputers built of many conventional CPUs running in parallel, latencies of 1-5 microseconds to send a message between CPUs are typical.
• Supercomputers consume and produce massive amounts of data in a very short period of time. According to Ken Batcher, "A supercomputer is a device for turning compute-bound problems into I/O-bound problems." Much work on external storage bandwidth is needed to ensure that this information can be transferred quickly and stored/retrieved correctly.
Technologies developed for supercomputers include:
• Vector processing
• Liquid cooling
• Non-Uniform Memory Access (NUMA)
• Striped disks (the first instance of what was later called RAID)
• Parallel filesystems
THE DIFFERENCES BETWEEN SUPER COMPUTERS AND NORMAL COMPUTERS
Supercomputers, just like any other typical computer, have two basic parts. The first is the CPU, which executes the instructions it is given. The other is the memory, which stores data. A key difference between an ordinary computer and a supercomputer is that a supercomputer's CPUs operate on a much faster clock than those of standard computers; the length of this clock cycle determines how quickly the CPU can work. By building circuits from complex, state-of-the-art materials, supercomputer designers optimize the functions of the machine. They also keep circuit paths as short as possible so that information from the memory reaches the CPU in less time.
Measuring supercomputer speed
The speed of a supercomputer is generally measured in "FLOPS" (FLoating point Operations Per Second), commonly used with an SI prefix such as tera-, combined into the shorthand "TFLOPS" (10^12 FLOPS, pronounced teraflops), or peta-, combined into the shorthand "PFLOPS" (10^15 FLOPS, pronounced petaflops).
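The unit can be illustrated with a rough back-of-the-envelope measurement: time a fixed number of floating-point operations and divide by the elapsed time. Note this is only a sketch of the idea; pure Python is orders of magnitude slower than the optimized LINPACK benchmark actually used to rank supercomputers, so the number it prints says nothing about real hardware capability.

```python
# Rough FLOPS estimate: time a loop of floating-point operations
# and divide the operation count by the elapsed wall-clock time.
import time

def measure_flops(n=1_000_000):
    x = 1.000001
    acc = 0.0
    start = time.perf_counter()
    for _ in range(n):
        acc = acc + x * x          # two floating-point ops per iteration
    elapsed = time.perf_counter() - start
    return (2 * n) / elapsed       # operations per second

flops = measure_flops()
print(f"{flops / 1e6:.1f} MFLOPS")
```

Scaling the unit up: a machine sustaining 1 PFLOPS performs 10^15 such operations every second, roughly a billion times what an interpreted loop like this achieves.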
Current fastest supercomputer system
On June 8, 2008, the Cell/AMD Opteron-based IBM Roadrunner at the Los Alamos National Laboratory (LANL) was announced as the fastest operational supercomputer, with a sustained processing rate of 1.026 PFLOPS. However, Roadrunner was then taken out of service to be shipped to its new home.
India unleashes 4th fastest super computer
For the first time in recent years there is no made-in-India machine in the semi-annual ranking of the world's "Top 500" supercomputers. Constantly improving performance has shifted the entry point into the 500 fastest computers to 1 teraflop per second (TFLOP/s) or faster. A teraflop is a trillion (a million million) computations, or floating-point operations, per second.
"Desi" machines such as the Chennai-based Institute of Mathematical Sciences' cluster-computer "Kabru" or the Pune-based Centre for Development of Advanced Computing's (CDAC) "Param" were short of the teraflop mark when they made the Top 500 and have not upgraded their systems significantly since then.
However, India still plays host to 8 U.S.-made supercomputers in the list released by Mannheim University in Germany and the Lawrence Berkeley National Laboratory together with the University of Tennessee, both in the U.S. Of these, a 2-teraflop Hewlett-Packard cluster is housed at the Institute of Genomics and Integrative Biology, Delhi, while the others are HP or IBM machines operated by private IT players and geophysical exploration companies.
India has now officially broken into the top ten supercomputers in the world.
For the first time ever, India placed a system in the Top 10. The Computational Research Laboratories, a wholly owned subsidiary of Tata Sons Ltd. in Pune, India, installed a Hewlett-Packard Cluster Platform 3000 BL460c system. They integrated this system with their own innovative routing technology and achieved 117.9 TFlop/s performance.
The twice-yearly TOP500 list of the world’s fastest supercomputers, already a closely watched event in the world of high performance computing, is expected to become an even hotter topic of discussion as the latest list shows five new entrants in the Top 10, which includes sites in the United States, Germany, India and Sweden.
OTHER Fastest Computers
USA - BlueGene/L - eServer Blue Gene Solution
Germany - JUGENE - Blue Gene/P Solution
USA - SGI Altix ICE 8200, Xeon quad core 3.0 GHz
India - Cluster Platform 3000 BL460c, Xeon 53xx 3GHz, Infiniband
Sweden - Cluster Platform 3000 BL460c, Xeon 53xx 2.66GHz, Infiniband
