Conventional Hybrid Super-computer Reaches 1,000 Trillion CPS

December 8, 2008 in Human Health, Hyper-convergence paradigm, Off-Earth Advances by Joseph Robertson

A hybrid super-computer has reached the astounding speed of 1,000 trillion calculations per second, a speed termed a petaflop. The Roadrunner super-computer at Los Alamos National Laboratory operates on a conventional computing paradigm, meaning it runs on semiconductors and established systems of computer circuitry, not quantum computing innovations or molecular processors.
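As a quick check on the unit itself: 1,000 trillion is 10^15, the “peta-” prefix, which is where the name comes from. A trivial sketch in Python:

    # Unit check: 1,000 trillion calculations per second is 10**15 per
    # second, the "peta-" prefix, hence "petaflop".
    calcs_per_second = 1_000 * 1_000_000_000_000   # 1,000 x 1 trillion
    assert calcs_per_second == 10**15
    print(f"{calcs_per_second:.1e} calculations per second = 1 petaflop")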

The Roadrunner, like other hybrid super-computers, is made up of thousands of distributed computing “nodes”, each with its own microprocessor and separate memory store. There is a time lag between retrieving data from memory and the moment the processor can act on it, which means researchers have to come up with creative ways to narrow the ever-widening gap between computation time and memory-retrieval time, a barrier that is difficult to overcome because of the physical limitations of the raw materials.
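To make that trade-off concrete, here is a minimal, hypothetical model of the gap in Python; the chunk count and timings are invented for illustration and are not Roadrunner measurements. It contrasts a naive fetch-then-compute schedule with a double-buffered one that overlaps the next memory fetch with the current computation, one standard way of hiding memory latency:

    # Illustrative model of the compute/memory gap; all numbers are assumed.
    FETCH_TIME = 4.0     # time units to fetch one chunk from memory (assumed)
    COMPUTE_TIME = 1.0   # time units to process one chunk (assumed)
    CHUNKS = 100

    # Naive schedule: fetch a chunk, process it, repeat.
    naive_total = CHUNKS * (FETCH_TIME + COMPUTE_TIME)

    # Double-buffered schedule: while the processor works on chunk i,
    # chunk i+1 is already being fetched, so each steady-state step costs
    # only the slower of the two activities.
    overlapped_total = (FETCH_TIME                              # first fetch
                        + (CHUNKS - 1) * max(FETCH_TIME, COMPUTE_TIME)
                        + COMPUTE_TIME)                         # final compute

    print(naive_total)       # 500.0
    print(overlapped_total)  # 401.0

Even with perfect overlap, the fetch time still dominates the schedule, which is exactly the gap the paragraph above describes.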

Roadrunner’s specific hybrid design is a breakthrough because it allows some improvement on this front. According to the Los Alamos website:

Roadrunner is a cluster of approximately 3,250 compute nodes interconnected by an off-the-shelf parallel-computing network. Each compute node consists of two AMD Opteron dual-core microprocessors, with each of the Opteron cores internally attached to one of four enhanced Cell microprocessors. This enhanced Cell does double-precision arithmetic faster and can access more memory than can the original Cell in a PlayStation 3. The entire machine will have almost 13,000 Cells and half as many dual-core Opterons.
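Those numbers are internally consistent, and a quick back-of-the-envelope calculation suggests why the petaflop mark is within reach. The per-Cell performance figure below is an assumption for illustration (roughly 100 gigaflops of double-precision arithmetic per enhanced Cell), not something stated in the quote:

    # Back-of-the-envelope check of the quoted node counts.
    nodes = 3_250               # "approximately 3,250 compute nodes"
    cells_per_node = 4          # one enhanced Cell per Opteron core
    opterons_per_node = 2       # two dual-core Opterons per node

    total_cells = nodes * cells_per_node        # 13,000 -- "almost 13,000 Cells"
    total_opterons = nodes * opterons_per_node  # 6,500  -- "half as many" Opterons

    # Assumed: ~100 gigaflops of double-precision arithmetic per enhanced
    # Cell (a rough figure, not given in the article). The Cells alone
    # then land near the petaflop mark.
    peak_flops = total_cells * 100e9
    print(total_cells, total_opterons)   # 13000 6500
    print(f"{peak_flops:.2e} flops")     # 1.30e+15, i.e. ~1.3 petaflops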

It is believed the petaflop speed will allow Roadrunner to be useful in calculating the rapid evolution of supernovae, the massive explosions that sometimes result from dying stars. Understanding that process can help astronomers, physicists and cosmologists explain not only how the details of cosmic radiation have played out, but also what can be expected in the evolution and decay of certain star systems, what that means for the physics of stars and galaxies, forces like gravity and the nature of black holes, and, ultimately, can provide some of the information necessary for testing sweeping theories about the beginnings of our universe.

The Milagro Cosmic Ray Observatory at Los Alamos, which uses specially designed code to trace the fluctuation and transmission of radiation, to map the celestial background and to study the effects of interstellar radiation on the Earth and near-Earth objects, would be the forum through which such applications for Roadrunner would be explored. Of this kind of specially designed code to study radiation physics, Los Alamos itself reports:

The major application areas addressed were radiation transport (how radiation deposits energy in and moves through matter), neutron transport (how neutrons move through matter), molecular dynamics (how matter responds at the molecular level to shock waves and other extreme conditions), fluid turbulence, and the behavior of plasmas (ionized gases) in relation to fusion experiments at the National Ignition Facility at Lawrence Livermore National Laboratory.
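To give a flavor of what a radiation- or neutron-transport code actually computes, here is a toy Monte Carlo kernel in the same spirit: particles enter a one-dimensional slab of matter, travel exponentially distributed distances between collisions, and at each collision are either absorbed or scattered. The slab thickness, collision rate and absorption probability are invented for illustration and bear no relation to any Los Alamos code:

    # Toy 1-D Monte Carlo particle transport; all physics parameters are
    # illustrative assumptions, not values from any Los Alamos code.
    import math
    import random

    random.seed(42)

    SIGMA_T = 1.0        # total collision rate per unit length (assumed)
    P_ABSORB = 0.3       # chance a collision absorbs the particle (assumed)
    THICKNESS = 3.0      # slab thickness in the same length units (assumed)
    N_PARTICLES = 100_000

    transmitted = reflected = absorbed = 0
    for _ in range(N_PARTICLES):
        x, mu = 0.0, 1.0                 # enter at the surface, heading inward
        while True:
            # Sample the distance to the next collision from an
            # exponential distribution with mean 1 / SIGMA_T.
            x += mu * (-math.log(1.0 - random.random()) / SIGMA_T)
            if x < 0.0:
                reflected += 1           # escaped back out the front face
                break
            if x > THICKNESS:
                transmitted += 1         # passed all the way through the slab
                break
            if random.random() < P_ABSORB:
                absorbed += 1            # collision deposited the particle's energy
                break
            mu = random.uniform(-1.0, 1.0)   # isotropic scatter in 1-D

    total = N_PARTICLES
    print(f"transmitted: {transmitted / total:.3f}")
    print(f"reflected:   {reflected / total:.3f}")
    print(f"absorbed:    {absorbed / total:.3f}")

Production transport codes track three-dimensional geometry, energy-dependent material properties and billions of particles, which is precisely where a petaflop machine earns its keep.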

It is also expected that the petaflop speed will be useful in testing medical advances, potentially projecting how cells react to chemical treatments, radiation innovations, gene therapy and other complex metabolic interventions that could adversely affect or significantly improve patient prognoses. John Turner, a Los Alamos researcher, says his team expects “proposals in cosmology, antibiotic drug design, HIV vaccine development, astrophysics, ocean or climate modeling, turbulence, and we hope many others”.

The Lab’s website also reports plans to use Roadrunner, starting in 2010, to test means of improving nuclear weapons technology, enhancing performance, and facilitating higher levels of maintenance and security, with a stated goal of “maintaining confidence in the nation’s nuclear weapons stockpile without actual nuclear testing”.

Molecular computing innovations, like 16-bit, 128-bit or 1,024-bit simultaneous molecular processing hubs, could allow processor speeds to accelerate exponentially once such technologies are developed and can be specialized, mass-produced and widely distributed. Research into nano-scale molecular chemical brains, or chemical computational network nodes, suggests that “nano-chemical computation may soon be possible, ushering in a new era in super-light, super-fast, more versatile computer processing capabilities and, by extension, robotics.”

Computing speed is relevant not only to improving the performance of super-computers, and later of commercial microprocessors, enabling more advanced research, but also to the practical application of computational solutions: new zero-emissions models of energy capture, storage and distribution, distributed cloud-computing platforms, the next generation of hyper-convergent online services, neural nets and artificial intelligence.