What Is A Computer Network Cluster?
A computer cluster usually designates a number of networked computers that, in many cases, appear from the outside as a single computer. In general, the individual elements of a cluster are connected over a fast network. The aim of clustering is mostly an increase in computing capacity or availability compared to a single computer. The computers in a cluster (also called nodes or servers) are often referred to collectively as a server farm.
The term cluster primarily describes the architecture of the individual components and their interaction. Hardware clusters and software clusters are fundamentally different. The simplest form of hardware cluster is known as active/passive; other variants are known as cascading. With these, interruptions of service must be taken into account.
Software clusters or application clusters, however, are better able to realize continuous operation (e.g., DNS servers). Whether the switchover of a service is handled smoothly depends on the client side of the client/server architecture.
A distinction is made between so-called homogeneous and heterogeneous clusters: homogeneous computer clusters run the same operating system on the same hardware, while in heterogeneous clusters different operating systems or hardware can be used. Well-known Linux cluster software includes HP Serviceguard, Beowulf, and openMosix.
Clusters are used for a number of different purposes:
High Availability Cluster
High-availability clusters (HA clusters) are used to increase availability and improve reliability. If a failure occurs on one node of the cluster, the services running on that node migrate to another node. Most HA clusters have two nodes. There are clusters in which services run continuously on all nodes; these are called active-active or symmetric. If not all nodes are active, the configuration is called active-passive or asymmetric. Both the hardware and the software of an HA cluster must be free of single points of failure (components whose failure would bring down the entire system). Such HA clusters are used in critical environments where a maximum downtime of only a few minutes per year is allowed.
In the context of disaster scenarios, critical computer systems must be secured. For this purpose, the cluster nodes are often located several miles apart in different data centers. In the event of a disaster, the nodes in the unaffected data center can take over the entire load. This type of cluster is also called a "stretched cluster".
Load Balancing Cluster
Server load balancing (SLB) clusters are built for the purpose of distributing load over multiple machines. The load distribution is usually handled by a redundant, central instance. Possible application areas are environments with high demands on computing power. Here the power requirement is covered not by upgrading individual computers but by adding additional computers. Not least among the reasons for their use is the ability to employ low-cost standard computers (COTS components) instead of expensive specialized machines.
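The central instance described above can distribute requests with a very simple policy. The following is a minimal sketch, assuming a round-robin strategy; the node names are invented for illustration.

```python
# A toy round-robin distributor like the one an SLB cluster's central
# instance might run. Node names are hypothetical placeholders.
from itertools import cycle

class RoundRobinBalancer:
    """Hands out backend nodes in strict rotation."""

    def __init__(self, nodes):
        self._pool = cycle(nodes)

    def pick_node(self):
        # Each call returns the next node in the rotation.
        return next(self._pool)

balancer = RoundRobinBalancer(["node-a", "node-b", "node-c"])
assignments = [balancer.pick_node() for _ in range(6)]
print(assignments)  # each node receives every third request
```

Real load balancers also weight nodes by capacity and remove failed nodes from the rotation; this sketch shows only the distribution principle.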
High Performance Computing Cluster
High-performance computing clusters (HPC clusters) are used for processing computational problems. These computing tasks are split among multiple nodes: either the tasks are divided into different packages that run simultaneously on multiple nodes, or the computing tasks (called jobs) are distributed to individual nodes. The allocation of jobs is usually handled by a job management system. HPC clusters are often found in science. The so-called render farms also fall into this category.
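The splitting of one computing task into packages can be illustrated with a small sketch. Here worker threads stand in for cluster nodes, an assumption made only so the example is self-contained; on a real HPC cluster the chunks would run on separate machines.

```python
# Decompose one large summation "job" into chunks and process the chunks
# in parallel. Threads play the role of cluster nodes in this sketch.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each "node" computes its share of the overall job.
    lo, hi = chunk
    return sum(range(lo, hi))

n, workers = 1_000_000, 4
step = n // workers
chunks = [(i * step, (i + 1) * step) for i in range(workers)]
with ThreadPoolExecutor(max_workers=workers) as pool:
    partials = list(pool.map(partial_sum, chunks))
total = sum(partials)  # combine the partial results
print(total)
```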
Computer cluster — History
The first commercially available cluster product was ARCnet, developed in 1977 by Datapoint. The first real success came in 1983, when DEC introduced the product VAXcluster for its VAX computer system. The product supported not only parallel computing on the cluster nodes but also the sharing of file systems and devices among all participating nodes. These properties are still not found in many free and commercial products. VAXcluster is still available today from HP as "VMScluster" for the operating system OpenVMS on Alpha and Itanium processors.
HA Cluster – Technology
The failover functionality is usually not provided by the operating system itself. The takeover of services can be achieved, for example, through the automatic migration of IP addresses or by using a multicast address.
In general, a distinction is made between shared-nothing and shared-all architectures. A typical representative of an active-active cluster with a shared-nothing architecture is DB2 EEE (pronounced "triple E"). Here, each cluster node holds its own data partition. A performance gain is achieved by partitioning the data and the associated distributed processing. Reliability, however, is not guaranteed by this alone.
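The data partitioning behind a shared-nothing design can be sketched briefly. This is an illustrative hash-routing scheme, not DB2's actual algorithm; the node count and keys are invented.

```python
# Route each row to exactly one node's private partition by hashing its
# key. No node ever touches another node's storage (shared nothing).
from collections import defaultdict
from zlib import crc32

NODES = 3  # hypothetical three-node cluster

def node_for(key: str) -> int:
    # crc32 gives a deterministic mapping that is stable across runs.
    return crc32(key.encode()) % NODES

partitions = defaultdict(list)
for customer in ["alice", "bob", "carol", "dave"]:
    partitions[node_for(customer)].append(customer)

# Every row lives on exactly one node, enabling parallel per-partition work.
stored = sum(len(rows) for rows in partitions.values())
print(stored)
```

Because each key maps to exactly one node, a query coordinator can send work only to the partitions that hold the relevant data.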
The "shared-all" cluster behaves differently. This architecture enables concurrent access to shared storage, so that all cluster nodes can access the entire data set. In addition to scalability and performance, this architecture provides additional reliability: if a node fails, other nodes take over its role(s). A typical representative of the shared-all architecture is Oracle Real Application Clusters (RAC).
HA cluster computers can boot without any local disk, directly from a Storage Area Network (SAN), as a "single system image". Such diskless shared-root clusters simplify the exchange of cluster nodes, which in this configuration contribute only their computing power and I/O bandwidth.
Services must be specifically programmed for use on a cluster. A service is called "cluster aware" if it reacts to special events (such as the failure of a cluster node) and handles them appropriately. Cluster software can be implemented in the form of scripts or integrated into the operating system kernel.
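The reaction to a node failure described above usually rests on a heartbeat: the passive node promotes itself when the active node misses too many consecutive heartbeats. The following is a minimal sketch of that logic; the class name and threshold are illustrative, not taken from any particular cluster product.

```python
# Toy active-passive failover logic driven by heartbeats. In a real HA
# cluster, failover would also migrate the service IP address and start
# the protected services on the surviving node.
MISSED_LIMIT = 3  # illustrative threshold

class PassiveNode:
    def __init__(self):
        self.role = "passive"
        self.missed = 0

    def on_heartbeat(self, received: bool):
        if received:
            self.missed = 0  # peer is alive; reset the counter
        else:
            self.missed += 1
            if self.missed >= MISSED_LIMIT and self.role == "passive":
                self.failover()

    def failover(self):
        # Promote this node; it now runs the clustered services.
        self.role = "active"

node = PassiveNode()
for beat in [True, True, False, False, False]:
    node.on_heartbeat(beat)
print(node.role)  # the node takes over after three missed heartbeats
```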
In HPC clusters, the task at hand, the "job", is often divided by a decomposition program into smaller parts and then distributed among the nodes.
Communication between nodes working on different parts of the same job is usually done using the Message Passing Interface (MPI), since fast communication between individual processes is desired. For this purpose, the nodes are coupled with a fast network such as InfiniBand.
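Running MPI itself requires a cluster launcher such as mpirun, so the send/receive pattern is imitated here with threads and a queue. The mapping of "ranks" to threads is an assumption made purely for illustration.

```python
# Imitate MPI-style point-to-point messaging: each worker "rank" sends its
# partial result to rank 0, which then reduces (combines) the messages.
import queue
import threading

to_root = queue.Queue()

def worker(rank: int, data):
    # Analogous to MPI_Send: ship the local partial result to rank 0.
    to_root.put((rank, sum(data)))

threads = [
    threading.Thread(target=worker, args=(r, range(r * 10, r * 10 + 10)))
    for r in (1, 2)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Analogous to a reduce at rank 0: collect and combine the messages.
received = [to_root.get() for _ in range(2)]
total = sum(value for _, value in received)
print(total)  # sum of 10..29
```

In real MPI programs the same structure appears as `MPI_Send`/`MPI_Recv` pairs or a single collective call such as `MPI_Reduce`.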
A common method of distributing jobs on an HPC cluster is a job-scheduling program that can distribute them according to different categories, such as Load Sharing Facility (LSF) or Network Queueing System (NQS).
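Category-based dispatching can be sketched with a priority queue. The category names and priorities below are invented for the example and are not the actual queue classes of LSF or NQS.

```python
# Toy category-based job scheduler in the spirit of batch systems:
# jobs are dispatched by category priority, FIFO within a category.
import heapq

PRIORITY = {"interactive": 0, "batch": 1, "background": 2}  # hypothetical

class Scheduler:
    def __init__(self):
        self._heap = []
        self._seq = 0  # preserves submission order within a category

    def submit(self, name, category):
        heapq.heappush(self._heap, (PRIORITY[category], self._seq, name))
        self._seq += 1

    def dispatch(self):
        # Returns the next job a free node should run.
        return heapq.heappop(self._heap)[2]

sched = Scheduler()
sched.submit("render-frame-42", "background")
sched.submit("compile", "batch")
sched.submit("shell", "interactive")
order = [sched.dispatch() for _ in range(3)]
print(order)  # interactive jobs run first, background jobs last
```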
Recently, more and more Linux clusters have appeared in the TOP500 list of supercomputers, not least because cheap COTS hardware can be used even for the most demanding computing tasks.
Source: From Wikipedia, the free encyclopedia. The text is available under the Creative Commons license.