A Computer Cluster Or System Cluster | Part 1

What is a Cluster of Computers or Systems?

A computer cluster, or simply a cluster, is a collection of computers connected through a computer network. The purpose of a cluster is to distribute a very complex task among its various nodes: a problem that requires a great deal of processing is decomposed into separate sub-problems, which are then solved in parallel. This greatly increases the computational power of the system.
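
To make the idea concrete, here is a minimal single-machine sketch of the decompose-and-solve-in-parallel pattern, written in Python. It uses local processes in place of cluster nodes, and the function and numbers are purely illustrative, not part of any particular cluster software.

    # Split a large job into sub-problems and solve them in parallel.
    # On a real cluster the workers would be separate machines reached
    # over the network; here they are local processes.
    from multiprocessing import Pool

    def solve_subproblem(chunk):
        # Stand-in for an expensive computation on one slice of the data.
        lo, hi = chunk
        return sum(i * i for i in range(lo, hi))

    if __name__ == "__main__":
        n, workers = 10_000_000, 4
        step = n // workers
        chunks = [(i * step, (i + 1) * step) for i in range(workers)]
        with Pool(workers) as pool:
            partial_results = pool.map(solve_subproblem, chunks)  # solved in parallel
        print(sum(partial_results))  # recombine the sub-results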

Requirements to form a cluster of computers

To obtain a computer system operating as a cluster, you need:

1. an operating system capable of making the computers operate as a cluster (for example GNU/Linux with openMosix);
2. high-performance network hardware;
3. a parallelizable algorithm.

Types of cluster computing

There are three types of clusters: fail-over, load balancing, and high-performance computing (HPC), of which the first two are probably the most common:

* Fail-over cluster: the operation of the machines is continuously monitored, and when one of the hosts stops working another machine takes over. The aim is to guarantee continuous service;

* Load-balancing cluster: a system in which work requests are sent to the machine with the least load (a minimal sketch of this idea follows the list);

* HPC cluster: the computers are configured to deliver extremely high performance. A job's processes are broken down and distributed across multiple machines in order to gain performance. The salient feature is that the processes run in parallel: routines that can run on different machines are distributed to them instead of waiting to be executed one after another. HPC clusters are especially common in data centers.
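
To make the load-balancing idea concrete, here is a minimal sketch of a dispatcher that always routes the next request to the least-loaded node. The node names and the in-memory load table are illustrative assumptions, not a real cluster API.

    # Each incoming request is routed to the node currently handling the
    # fewest requests. Node names and the load table are illustrative.
    nodes = {"node-a": 0, "node-b": 0, "node-c": 0}  # node -> current load

    def dispatch(request_id):
        target = min(nodes, key=nodes.get)  # pick the least-loaded node
        nodes[target] += 1                  # record the extra work
        print(f"request {request_id} -> {target}")
        return target

    def complete(node):
        nodes[node] -= 1                    # work finished, free capacity

    for r in range(5):
        dispatch(r)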

The use of this technology is widespread: for example, Ferrari and DreamWorks use clusters (based on the GNU/Linux operating system) to run computationally expensive programs for rendering and computational fluid dynamics simulation.

Applications of high-performance cluster computing

The TOP500 organization publishes a list of the 500 fastest computers in the world twice a year, and this list usually includes many clusters.

TOP500 is a collaboration between the University of Mannheim, the University of Tennessee, and the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory.

In November 2006, the fastest supercomputer was the U.S. Department of Energy's IBM Blue Gene/L, with a performance of 280.6 TFlops.

Using clusters can provide significant performance gains while containing costs.

In June 2006, Virginia Tech's System X supercomputer was the twenty-eighth most powerful supercomputer on earth [1]. It is a 12.25 TFlops cluster made up of 1,100 dual-processor 2.3 GHz Apple Xserve G5 machines (4 GB RAM, 80 GB SATA HD) running Mac OS X and interconnected via InfiniBand. The cluster initially consisted of Power Mac G5s, which were later sold. The Xserve machines are stackable and less bulky than desktop Macs, allowing for a more compact cluster. The total cost of the original Power Mac cluster was $5.2 million, one tenth of the cost of a slower supercomputer built as a single machine (mainframe).

The central concept of a Beowulf cluster is the use of commodity, off-the-shelf computers to produce a cheap alternative to a traditional supercomputer. A project that took this concept to an extreme was the Stone Soupercomputer.

SETI@home appears to be the largest distributed cluster in existence. It uses about three million personal computers around the world to analyze data from the Arecibo radio telescope in search of evidence of extraterrestrial intelligence.

History of cluster computing

The history of cluster computing is perhaps best summarized by a footnote in Greg Pfister's In Search of Clusters:

Virtually every statement issued by DEC that refers to clusters says: DEC, who invented clusters. IBM did not invent them either. Users invented clusters, as soon as they could not fit all their work on a single computer or needed a backup. The date of the first is unknown, but it was probably during the 1960s, or perhaps even the late 1950s.

The engineering basis of cluster computing, understood as the performance of any kind of work in parallel, was arguably established by Gene Amdahl of IBM, who in 1967 published what came to be regarded as the seminal paper on parallel processing: Amdahl's Law, which mathematically describes the speedup that can be achieved by performing an operation on a parallel architecture.
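
As a rough illustration, Amdahl's Law is commonly written as speedup = 1 / ((1 - p) + p / n), where p is the fraction of the work that can be parallelized and n is the number of processors. The short sketch below simply evaluates that formula; the example numbers are arbitrary.

    # Amdahl's Law: maximum speedup of a job in which a fraction p of the
    # work can be parallelized across n processors.
    def amdahl_speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    # Example: even with 95% of the work parallelizable, 1100 processors
    # give roughly a 20x speedup, because the serial 5% dominates.
    print(round(amdahl_speedup(0.95, 1100), 1))  # ~19.7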

Amdahl's article laid the engineering foundations for both multiprocessor computing and cluster computing. The significant difference between the two is whether inter-processor communication is supported inside the computer (for example, on a customized internal communications bus or network) or outside the computer, on a commodity network.

Consequently, the history of early computer clusters is more or less directly tied to the history of early networks, since one of the principal motivations for developing networks was to link computing resources together, in effect creating a computer cluster.

Packet-switched networks were conceptually invented by the RAND Corporation in 1962. Using the concept of a packet-switched network, the ARPANET project succeeded in 1969 in creating what was perhaps the first computer cluster based on a commodity network, linking four different computing centers (each of which was itself almost a "cluster", though not a commodity cluster).

The ARPANET project grew into the Internet, which can be considered the mother of all computer clusters: in effect, the paradigm of a cluster made up of all the computers in the world.



Santosh is an experienced content writer with good search engine optimisation skills. Santosh works with our marketing department in the creation of articles and how-to guides for our company blog and knowledgebase.