The Modern Day Supercomputer


Supercomputers

A supercomputer is a computer with computing power well beyond the range of an average machine.

Computer scientists measure the power of a supercomputer with something called floating-point operations per second (FLOPS). FLOPS can be a complicated concept. I’ll discuss how it works below, but the important thing to remember is that more FLOPS means a faster computer.

A supercomputer is valuable because of its ability to complete tasks that require massive amounts of computation. In the past, this might have been something like modeling weather. One of the primary uses of modern supercomputers is to run simulation or modeling programs for complex systems, which is discussed below.

This field is often referred to as computational science.

Early History and Seymour Cray

The supercomputer as we think of it now became a reality in the 1960s. From then until the 1990s, Seymour Cray dominated the field. To tell the story of supercomputing is to begin with the story of Cray.

Cray was born in Wisconsin in 1925. He is considered the father of supercomputing. He designed and created the fastest supercomputers in the world for decades. His company, Cray Research, continues to be a leader in the field.

Cray served as a radio operator in World War II. After the war he studied electrical engineering and applied mathematics, graduating with his Master’s in 1951. After a few years of working in the industry, he co-founded Control Data Corporation (CDC) with William Norris, a co-worker.

First Commercial Supercomputer

It was there that he would build the CDC 6600, the world’s first commercial supercomputer.

He solved critical design problems by focusing on making the entire system faster, not just the CPU. By pairing the central processor with ten smaller peripheral processors that handled input, output, and housekeeping tasks, the machine kept its fast central processor focused solely on computation, drastically increasing the speed of the overall system.
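
To make that division of labor concrete, here is a toy sketch in modern Python (not, of course, how the 6600's hardware actually worked): a "peripheral processor" thread handles the slow work of feeding data in, while the "central processor" does nothing but arithmetic.

```python
import threading
import queue

data_queue = queue.Queue()

def peripheral_processor(n_items):
    """Simulate the peripheral processors: feed raw values to the CPU."""
    for i in range(n_items):
        data_queue.put(float(i))   # stand-in for slow reads from disk or tape
    data_queue.put(None)           # sentinel: no more data

def central_processor():
    """Consume values and do nothing but arithmetic."""
    total = 0.0
    while (item := data_queue.get()) is not None:
        total += item * item       # the actual computation
    return total

io_thread = threading.Thread(target=peripheral_processor, args=(1000,))
io_thread.start()
result = central_processor()
io_thread.join()
print(result)
```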

He followed the 6600 with the 7600, which was 5 times faster. He left CDC to found Cray Research because of a difference of opinion with Norris over the design of the 8600. The split was amicable though, and Norris invested in Cray Research.

Cray Research

Funding might have been a concern for the new company, but it turned out otherwise: Wall Street was happy to back the increasingly well-known Cray. The Cray 1, released in 1976, beat everything on the market by a wide margin and was a major financial success. Ultimately, Cray would leave Cray Research as well. He never quite achieved the same level of success with later companies.

However, he continued to work on pushing the limits of computing until his sudden death in a car accident in 1996. At the time, he had just begun work on massively parallel systems with SRC Computers. Both Cray Research and SRC Computers survived him.

It is rumored that when someone told him that Apple Computer had bought a Cray 1 to design the new Macintosh, Cray said he had just bought a Mac to design the next Cray.

FLOPS

FLOPS, mentioned earlier, stands for floating-point operations per second. Essentially, floating-point arithmetic uses a particular method to represent numbers. It does this because the calculations involved require very small or very large numbers. It’s called “floating-point” because the decimal point, or “radix point,” is moved, or floats; its position is represented by an exponent. In this way, it can be thought of as a kind of scientific notation.
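
As a rough illustration, Python's math module can split a float into its fraction (mantissa) and exponent, showing the scientific-notation-like structure described above:

```python
import math

# A float is stored as a fraction (mantissa) and an exponent:
# value = mantissa * 2 ** exponent, much like scientific notation.
value = 6.25
mantissa, exponent = math.frexp(value)   # decompose the float
print(mantissa, exponent)                # 0.78125 3
print(math.ldexp(mantissa, exponent))    # reassemble: 6.25
```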

This is necessary because of the nature of the computations completed by supercomputers. A supercomputer often deals with massive numbers, like the distance between two galaxies, or incredibly small ones, like the diameter of an electron. Standardizing these numbers into one uniform format makes calculations simpler and faster. In a loose sense this resembles hashing: both map values of wildly different sizes into a fixed-size representation.

The standard used to represent floating-point numbers by most modern supercomputers is IEEE 754, although there are others. This standard was created by the Institute of Electrical and Electronics Engineers (IEEE) in 1985 and revised in 2008.
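
For the curious, here is a small Python sketch that exposes the IEEE 754 layout of a standard double-precision float: one sign bit, an 11-bit biased exponent, and a 52-bit fraction.

```python
import struct

def double_bits(x: float) -> str:
    """Return the 64 IEEE 754 bits of a Python float as a bit string."""
    (raw,) = struct.unpack(">Q", struct.pack(">d", x))  # reinterpret double as uint64
    return f"{raw:064b}"

bits = double_bits(6.25)
sign, exponent, fraction = bits[0], bits[1:12], bits[12:]
print(sign)      # 0           -> positive
print(exponent)  # 10000000001 -> 1025; subtract the 1023 bias to get 2
print(fraction)  # 1001000...  -> with the implicit leading 1: 1.5625 * 2**2 = 6.25
```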

Vector Computing and the Transition to Massively Parallel Systems

In the 1990s, the supercomputing industry began moving from vector computing into massively parallel systems. Cray thought this method was ridiculous, although it came to be the industry standard. He was working on a project in this field shortly before his death.

Vector computing works by giving the computer fewer instructions. The computer groups data into a vector, and a single instruction operates on the whole vector. Imagine you need to mow your lawn, but someone has to tell you to mow each blade of grass individually. You can only cut a blade once the person tells you to, so mowing the lawn would take hours. But if you put all the blades of grass into a single vector (the “yard”), it’s much faster. It still takes time to mow, but you only need to be given one instruction. That’s vector computing.
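
Here is a rough sketch of the contrast using NumPy, whose whole-array operations are a software-level analogue of vector instructions (the million-element array is our "yard"):

```python
import numpy as np

blades = np.random.rand(1_000_000)        # a million "blades of grass"

# Blade-by-blade: one instruction issued per element.
total = 0.0
for blade in blades:
    total += blade * 0.5                  # "cut" each blade individually

# Vector style: one operation over the whole array at once.
total_vec = (blades * 0.5).sum()

print(np.isclose(total, total_vec))       # same answer, far fewer instructions
```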

In contrast, massively parallel systems have many processors operating in coordination. Think millions. In our yard example, imagine a thousand people: 500 cutting grass and 500 giving orders. The yard might get cut in minutes. As processors became cheaper and better, this became the faster option.
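
And a minimal sketch of the parallel version, using Python's multiprocessing module to split the yard across several workers (the worker count here is arbitrary):

```python
from multiprocessing import Pool

def cut(section):
    """Each worker mows its own section of the yard independently."""
    return sum(blade * 0.5 for blade in section)

if __name__ == "__main__":
    yard = [list(range(1000)) for _ in range(100)]  # 100 sections of "grass"
    with Pool(processes=8) as pool:                 # 8 workers mowing at once
        totals = pool.map(cut, yard)                # scatter sections, gather results
    print(sum(totals))
```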


Fastest Computer in the World

The fastest supercomputers in the world are ranked on the TOP500 list, which has been published since 1993. The U.S. and China have a considerable rivalry over the top spots.

The fastest supercomputer in the world currently is the Sunway TaihuLight in China at 93 PFLOPS (petaFLOPS). The previous record holder, the Tianhe-2, is also a Chinese supercomputer. The TaihuLight uses no hardware from the U.S. and was the first Chinese supercomputer to achieve this.

China now has more computers on the TOP500 list than the United States, but U.S.-built computers hold a higher percentage of the top spots.

This is all the more impressive considering China didn’t have a single supercomputer of any magnitude in 1997.

Titan is currently the fastest U.S. supercomputer. It was built, perhaps unsurprisingly, by Cray. However, supercomputers used by U.S. defense and intelligence agencies aren’t listed on the TOP500.

Titan is nowhere near as powerful as the Sunway TaihuLight. It’s only half as powerful as Tianhe-2, but it uses less than half the electricity and only a fifth of the cores. In other words, it does a lot more with less.
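
Using the approximate TOP500 figures reported at the time (assumed here purely for illustration: roughly 17.6 PFLOPS, 560,640 cores, and 8.2 MW for Titan versus 33.9 PFLOPS, 3,120,000 cores, and 17.8 MW for Tianhe-2), a quick back-of-the-envelope calculation bears this out:

```python
# Approximate figures of the era; treat these as illustrative, not official.
machines = {
    "Titan":    {"pflops": 17.6, "cores": 560_640,   "megawatts": 8.2},
    "Tianhe-2": {"pflops": 33.9, "cores": 3_120_000, "megawatts": 17.8},
}

for name, m in machines.items():
    per_core = m["pflops"] * 1e15 / m["cores"]               # FLOPS per core
    per_watt = m["pflops"] * 1e15 / (m["megawatts"] * 1e6)   # FLOPS per watt
    print(f"{name}: {per_core:.2e} FLOPS/core, {per_watt:.2e} FLOPS/watt")
```

Titan comes out roughly three times faster per core and somewhat more efficient per watt.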

Other Considerations for a Supercomputer

One of the biggest issues with supercomputers is heat. Not only can excessive heat endanger the whole system, but long-term exposure to high temperatures can degrade components quickly. The Thor Data Center in Reykjavik, Iceland uses the local cold climate to combat heat. Paired with renewable energy, it houses the world’s first zero-emissions supercomputer.

Graphics processors (GPUs) have begun to replace CPUs in some systems because their price-performance and energy efficiency have improved dramatically.

There are also special-purpose systems that are designed for certain problems. Deep Blue is a famous supercomputer used to play chess. Gravity Pipe was built for astrophysics and MDGRAPE-3 for molecular modeling of proteins.

Applications of the Supercomputer

In the 1970s and 1980s, supercomputers were often used for weather forecasting, aerodynamic research, and probability analysis. As time went on, supercomputers focused on code-breaking, 3D nuclear test simulations, and molecular modeling. Essentially, supercomputers are most useful for taking complex situations and modeling them. When a system, like weather, has millions of variables and billions of data points, only a supercomputer can handle the simulation.
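
As a toy illustration of what “modeling” means here, the sketch below steps a small NumPy grid forward in time, updating each cell from its neighbors. Real weather models perform the same kind of stencil update across millions of cells and many physical variables at once, which is exactly where supercomputers earn their keep.

```python
import numpy as np

grid = np.random.rand(200, 200)    # e.g., a temperature field

for step in range(100):
    # Replace each cell with the average of itself and its four
    # neighbors (a simple diffusion step with wrap-around edges).
    grid = 0.2 * (
        grid
        + np.roll(grid, 1, axis=0) + np.roll(grid, -1, axis=0)
        + np.roll(grid, 1, axis=1) + np.roll(grid, -1, axis=1)
    )

print(grid.mean())                 # the field smooths toward its average
```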

It’s expected that supercomputers will reach 1 EFLOPS (1000 PFLOPS) by 2018.

A huge number of computationally intensive tasks would be possible with a supercomputer that powerful, and some entirely new tasks would become feasible. For example, some theorize that a zettaFLOPS supercomputer could achieve full weather modeling, accurately predicting the weather across a two-week span. Such a system could be built by 2030.

Supercomputers are also used for modeling quantum mechanics.

Final Thoughts

The supercomputer is a crucial tool, vital for U.S. scientific, academic, and educational research efforts. Currently, researchers compete for time slots so that they can use a supercomputer to run their complex simulations. There’s a great need to expand overall supercomputing capacity to meet the needs of researchers. This would allow more simultaneous research to occur and would drive the speed of discovery. These are necessary competitive advantages for the U.S.

In 2017, the U.S. Department of Energy announced $258 million in grants to six U.S. companies, including HP and Intel, to develop more powerful supercomputers to model complex systems such as the climate and biological processes. Such efforts continue to fuel the China / U.S. rivalry.

The supercomputer will be at the heart of our continuing quest to understand the world around us for many years to come.
