Supercomputer Evolution: Over 60 Years of History

Brian Wood Blog

The U.S. Army’s ENIAC Supercomputer circa 1945 (computerhistory.org)

The building of supercomputers began in earnest at the end of WWII with the United States Army’s huge 1945 machine, ENIAC (Electronic Numerical Integrator and Computer), which collected and processed information like never before. The sheer size of ENIAC was impressive as well: it weighed 30 tons, took up 1,800 square feet of floor space, required six full-time technicians to keep it fully operational, and performed about 5,000 operations per second.

Flash forward a few years. The first supercomputers outside military and government use were introduced in the 1960s by computer designer Seymour Cray at the Control Data Corporation (CDC) and slowly began to gain popularity with a handful of large corporations. While the supercomputers of the 1970s used only a few processors, machines with thousands of processors began to appear in the 1990s, and by the end of the 20th century, massively parallel supercomputers with tens of thousands of “off-the-shelf” processors were the norm. Supercomputers of the 21st century can use over 100,000 processors (some of them graphics units) connected by ultra-fast interconnects (Hoffman, 1989; Hill, Jouppi, Sohi, 2000).

Supercomputers are rapidly evolving, and although more advances have been made since the start of the new millennium than at any other time in history, we thought it would be interesting to look back at the history of supercomputers beginning in the 1960s, when computer use became more prevalent. Below is a decade-by-decade breakdown of the major milestones since then.

1960s:

Computers were primarily used by government agencies up until the early 1960s. They were large “mainframes” housed in separate rooms (today, we call them “datacenters”). Most of these computers cost about $5 million each and could be rented for approximately $17,000 per month.

Later in the ’60s, as commercial computer use developed and machines began to be shared by multiple parties, American Airlines and IBM teamed up to develop a reservation program known as the Sabre system. It was installed on two IBM 7090 computers located in New York and processed 84,000 telephone calls per day (rackspace.com).

Computer memory slowly began to move away from magnetic-core devices and into solid-state static and dynamic semiconductor memory. This greatly reduced the size, cost, and power consumption of computers.

1970s:

Intel released the world’s first commercial microprocessor, the 4004, in 1971.

Around this same time, datacenters in the U.S. began documenting formal disaster recovery plans for their computer-based business operations (in 1978, SunGard established the first commercial disaster recovery business, located in Philadelphia).

In 1973, the Xerox Alto minicomputer, a forerunner of later workstations and servers, became a landmark step in the development of personal computers because of its graphical user interface, bit-mapped high-resolution screen, large internal and external memory storage, mouse, and special software (rackspace.com).

The world’s first commercially available local area network, ARCnet, was put into service in 1977 at Chase Manhattan Bank in New York. At the time, it was the simplest and least expensive type of local area network, using a token-passing architecture to support data rates of 2.5 Mbps and connect up to 255 computers.
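
For the curious, here is a minimal Python sketch of the token-passing idea behind a network like ARCnet: a single token circulates among the nodes, and only the node currently holding it may transmit, so collisions never occur. The node IDs, frame handling, and scheduling below are purely illustrative assumptions, not ARCnet’s actual protocol.

```python
from collections import deque

class Node:
    """A hypothetical network station on a shared medium."""

    def __init__(self, node_id):
        self.node_id = node_id      # ARCnet-style IDs range from 1 to 255
        self.outbox = deque()       # frames waiting to be transmitted

    def queue_frame(self, dest_id, payload):
        self.outbox.append((dest_id, payload))

    def hold_token(self):
        # Only the token holder may transmit, and only one frame per turn,
        # so two nodes can never talk over each other.
        if self.outbox:
            dest_id, payload = self.outbox.popleft()
            print(f"node {self.node_id} -> node {dest_id}: {payload!r}")

def circulate_token(nodes, rounds=2):
    """Pass the token around in node-ID order for a few rounds."""
    for _ in range(rounds):
        for node in sorted(nodes, key=lambda n: n.node_id):
            node.hold_token()

# Example: three stations sharing the medium.
stations = [Node(1), Node(42), Node(255)]
stations[0].queue_frame(255, "hello")
stations[2].queue_frame(1, "ack")
circulate_token(stations)
```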

Mainframes required special cooling, and during the late 1970s newer, smaller, air-cooled computers moved into offices, essentially eliminating the need for dedicated, specially cooled computer rooms.

1980s:

The 1980s were highlighted by the microcomputer boom (the forerunners of today’s servers), driven by the birth of the IBM personal computer (PC).

Computers gained popularity with the public as well as the academic community. Beginning in 1985, IBM provided more than $30 million in products and support over the course of five years to a supercomputer facility established at Cornell University in Ithaca, New York.

In 1988, IBM introduced the AS/400 (Application System/400), which quickly became one of the world’s most popular computing systems, especially in the business realm. As the decade came to a close and information technology operations grew in complexity, companies became aware of the need to control their IT resources.

1990s:

Microcomputers, now called “servers,” started to fill the old computer rooms, which became known as “datacenters.” Companies began assembling server rooms within their own walls, taking advantage of newly inexpensive networking equipment.

The boom of data centers came during the dotcom bubble. “Companies needed fast Internet connectivity and nonstop operation to deploy systems and establish a presence on the Internet.” (rackspace.com).

Many companies started building very large facilities to provide businesses with a wide range of solutions, from systems deployment to ongoing operations. It became a growing (and important) trend, and these “datacenters” eventually became crucial to businesses large and small for a variety of needs, including big data storage, security, and much more.

2000-Now:

As of 2007, the average datacenter consumed as much energy as 25,000 homes. Since then, that number has grown considerably, especially as datacenters rapidly increase in size; in 2013, it was estimated that a single large datacenter could use enough electricity to power roughly 177,000 homes (science.time.com). Datacenters account for approximately 1.5% of all U.S. energy consumption, and demand is growing at roughly 10% per year.
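
To put that growth rate in perspective, here is a quick back-of-the-envelope calculation in Python. The 1.5% share and the roughly 10% annual growth are the figures cited above; holding total U.S. consumption flat is a simplifying assumption made purely for illustration.

```python
# Rough projection of datacenter energy demand at ~10% annual growth.
# The starting share (1.5% of U.S. consumption) and the growth rate are
# the cited estimates; everything else here is a simplification.

share = 1.5      # percent of U.S. energy consumption today
growth = 0.10    # ~10% growth per year

for years_out in (5, 10, 15):
    projected = share * (1 + growth) ** years_out
    print(f"in {years_out:2d} years: ~{projected:.1f}% of U.S. consumption, all else equal")
```

At that pace, demand roughly doubles about every seven years.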

Online data is indeed growing exponentially, and roughly 6 million new servers are deployed every year, a far cry from twenty, or even five, years ago. Microsoft alone has over 1 million servers, and Google has approximately 900,000 (extremetech.com).

Many datacenters have recently stepped up their energy efficiency efforts and are starting to “go green.” For example, in 2011 Facebook launched the Open Compute Project, publishing the specifications of its Oregon datacenter, which uses 38% less energy to do the same amount of work as its other facilities. It also saves money, costing Facebook 24% less to run.

As online data grows exponentially, there is both an opportunity and a need to run more efficient datacenters. The future looks promising as companies like Facebook and others explore ways to make their operations more energy efficient and more cost-effective as well.

—–

Sources:
rackspace.com
wikipedia.org
time.com/science/
extremetech.com
computerhistory.org