Network Load Balanced Windows Servers
What is Network Load Balancing (NLB)?
Network Load Balancing (NLB) combines the resources of two or more servers running Windows Server 2008 into a single virtual cluster. All servers in an NLB cluster are addressed by the same set of cluster IP addresses; however, each server in the cluster also maintains its own set of unique, dedicated IP addresses. Once network load balancing is implemented, all servers in the cluster function collectively, and if any one of them fails or goes offline, the load is automatically redistributed among the servers that are still operating.
Each server in a single cluster can run a separate copy of the desired server applications, such as applications for Web, FTP, and Telnet servers. The minimum number of servers required in an NLB environment is two, while the maximum can depend on the number of applications to be load balanced. Once configured, NLB distributes incoming client requests across the servers in the cluster. The load or traffic to be handled by each server can be defined as necessary. You can also add more servers to the cluster dynamically if the load or traffic increases in the future. NLB is also capable of directing all traffic to a designated single server, referred to as the default host.
When a server fails or goes offline unexpectedly, active connections to that server are lost. However, if you intend to take a server in the cluster down intentionally, you can use the drainstop command to let all active connections complete before bringing the server offline. Once the offline server is back online, it can transparently rejoin the cluster and regain its share of the workload, allowing the other servers in the cluster to handle less traffic thereafter.
All servers in an NLB cluster exchange heartbeat messages to maintain consistent communication between them. By default, a server that fails to send heartbeat messages within five seconds is considered to have failed. In such a scenario, the remaining servers in the cluster converge and do the following:
- Establish which servers are still active within the cluster.
- Elect the server with the highest priority as the new default host.
- Ensure that all new client requests are handled by the remaining active servers.
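The heartbeat timeout and convergence steps above can be sketched in Python. This is an illustrative model, not actual NLB internals: the `Node` class and `converge` function are hypothetical names, the five-second timeout mirrors the default described above, and in NLB the lowest priority number denotes the highest-priority host.

```python
import time

HEARTBEAT_TIMEOUT = 5.0  # seconds, matching the NLB default described above

class Node:
    def __init__(self, name, priority):
        self.name = name
        self.priority = priority          # lower number = higher NLB host priority
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        self.last_heartbeat = time.monotonic()

def converge(nodes, now=None):
    """Return (active_nodes, default_host) after a convergence pass."""
    now = time.monotonic() if now is None else now
    # Step 1: establish which servers are still active.
    active = [n for n in nodes if now - n.last_heartbeat < HEARTBEAT_TIMEOUT]
    # Step 2: elect the highest-priority survivor as the new default host.
    default_host = min(active, key=lambda n: n.priority) if active else None
    return active, default_host
```

For example, if `SRV2` stops sending heartbeats, a convergence pass drops it from the active set and `SRV1` (priority 1) remains the default host that handles new client requests.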
NLB includes the following features:
Scalability
Scalability within an NLB cluster refers to the ability to add one or more systems to an existing cluster when the overall load of the cluster exceeds its capabilities. The following list details the scalability features of NLB:
- Balances load requests across the NLB cluster for individual TCP/IP services
- Supports up to 32 computers in a single cluster
- Balances multiple server load requests (from either the same client or from several clients) across multiple systems in the cluster
- Supports the ability to add more servers to the cluster as the load goes up, without bringing the cluster down
- Supports the ability to remove servers from the cluster when the load goes down
- Enables high performance and low overhead through fully pipelined implementation. Pipelining allows requests to be sent to the NLB cluster without waiting for response to the previously sent request.
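The core distribution idea — mapping each client deterministically to one of up to 32 hosts — can be sketched as a simple hash in Python. This is illustrative only: NLB's actual algorithm and affinity settings differ, and `pick_host` is a hypothetical helper.

```python
import hashlib

def pick_host(client_ip, hosts):
    """Map a client to one host in the cluster.

    The same client IP always lands on the same host (similar in spirit
    to single-affinity hashing, not NLB's real algorithm).
    """
    digest = hashlib.md5(client_ip.encode()).digest()
    return hosts[int.from_bytes(digest[:4], "big") % len(hosts)]
```

Because the mapping depends only on the client address and the host list, requests distribute across the cluster without any shared state between the hosts.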
High Availability
A highly available system is capable of providing an acceptable and reliable level of service with minimal or no downtime. NLB includes built-in features that provide high availability:
- Detecting and recovering from a server that fails or goes offline.
- Balancing the network load when servers are added or removed.
- Ability to recover and redistribute the workload within ten seconds.
- NLB is installed as a standard Windows networking driver component.
- NLB requires no hardware changes to enable and run.
- NLB Manager enables creation of new NLB clusters.
- NLB Manager enables configuration and management of multiple clusters and all of the cluster's servers from a single remote or local computer.
- NLB lets clients access the cluster by using a single, logical Internet name and virtual IP address—known as the cluster IP address (it also retains individual names for each computer).
- If any server fails and then is subsequently brought back online, NLB can be configured to automatically add that server to the cluster. The added server will then be able to start handling new requests from clients.
- A server can be taken offline for preventive maintenance without disturbing cluster operations on the other servers.
Other features of NLB:
- Runs on the servers that are to be load-balanced, rather than on a separate device.
- Presents a Virtual Internet Protocol (VIP) TCP/IP address to the clients.
- Distributes incoming TCP connections and User Datagram Protocol (UDP) datagrams among up to 32 servers, scaling the performance of the cluster.
- Detects hosts that have become unavailable and automatically redistributes traffic between the remaining servers within seconds, ensuring high availability.
- Permits full remote control from any Microsoft Windows NT 4.0-based, Microsoft Windows 2000-based, or Microsoft Windows Server 2003-based computer.
- Inherently supports Secure Sockets Layer (SSL) sessions.
Minimum Hardware Requirements:
- Minimum 2 hardware nodes
- 2 Network Interface cards for public and private network communication
- Minimum 2 GB of RAM on each node
- 300 GB Disk
- Shared storage (optional)
Software Requirements:
- OS requirements: any one of the following operating systems on all the nodes:
- Microsoft Windows Server 2008 Web Edition
- Microsoft Windows Server 2008 Standard Edition
- Microsoft Windows Server 2008 Enterprise Edition
- Microsoft Windows Server 2008 Datacenter Edition
Applications supported in NLB:
- IIS Web Server Application can be load balanced.
- DNS Service application can be load balanced.
- Terminal Services Session Broker can be load balanced.
SQL Server Replication
Replication uses a publish-subscribe model, allowing a primary server, referred to as the Publisher, to distribute data to one or more secondary servers, or Subscribers. Replication provides real-time availability and scalability across these servers. It supports filtering to provide a subset of data at Subscribers, and also allows partitioned updates. Subscribers remain online and available for reporting or other functions. SQL Server offers three types of replication: snapshot, transactional, and merge.
Snapshot Replication Overview:
Snapshot replication distributes data exactly as it appears at a specific moment in time and does not monitor for updates to the data. When synchronization occurs, the entire snapshot is generated and sent to Subscribers. Using snapshot replication by itself is most appropriate when one or more of the following is true:
- Data changes infrequently.
- It is acceptable to have copies of data that are out of date with respect to the Publisher for a period of time.
- You are replicating small volumes of data.
- A large volume of changes occurs over a short period of time.
Snapshot replication is most appropriate when data changes are substantial but infrequent. For example, if a sales organization maintains a product price list and the prices are all updated at the same time once or twice each year, replicating the entire snapshot of data after it has changed is recommended. Given certain types of data, more frequent snapshots may also be appropriate. For example, if a relatively small table is updated at the Publisher during the day, but some latency is acceptable, changes can be delivered nightly as a snapshot.
Snapshot replication has a lower continuous overhead on the Publisher than transactional replication, because incremental changes are not tracked. However, if the data set being replicated is very large, generating and applying the snapshot will require substantial resources. Consider the size of the entire data set and the frequency of changes to the data when evaluating whether to use snapshot replication.
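The behaviour described above — copying the entire data set as it exists at one moment, with Subscribers going stale until the next snapshot — can be sketched in Python. This is a toy model with hypothetical `Publisher` and `Subscriber` classes; real snapshot replication works through the Snapshot and Distribution Agents.

```python
import copy

class Publisher:
    def __init__(self, table):
        self.table = table            # e.g. a price list: {product: price}

    def generate_snapshot(self):
        # The *entire* data set is copied, regardless of what changed.
        return copy.deepcopy(self.table)

class Subscriber:
    def __init__(self):
        self.table = {}

    def apply_snapshot(self, snapshot):
        self.table = snapshot         # previous contents are fully replaced
```

Note that between snapshots the Subscriber keeps serving its (possibly stale) copy, which is exactly the "out of date for a period of time" trade-off listed above.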
Transactional Replication Overview:
Transactional replication provides the lowest latency and is most commonly used for high availability. Transactional replication typically starts with a snapshot of the publication database objects and data. As soon as the initial snapshot is taken, subsequent data changes and schema modifications made at the Publisher are usually delivered to the Subscriber as they occur (in near real time). The data changes are applied to the Subscriber in the same order and within the same transaction boundaries as they occurred at the Publisher; therefore, within a publication, transactional consistency is guaranteed.
Transactional replication is typically used in server-to-server environments and is appropriate in each of the following cases:
- You want incremental changes to be propagated to Subscribers as they occur.
- The application requires low latency between the time changes are made at the Publisher and the changes arrive at the Subscriber.
- The application requires access to intermediate data states. For example, if a row changes five times, transactional replication allows an application to respond to each change (such as firing a trigger), not simply the net data change to the row.
- The Publisher has a very high volume of insert, update, and delete activity.
- The Publisher or Subscriber is a non-SQL Server database, such as Oracle.
By default, Subscribers to transactional publications should be treated as read-only, because changes are not propagated back to the Publisher. However, transactional replication does offer options that allow updates at the Subscriber.
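The ordered, per-change delivery that distinguishes transactional replication — every intermediate state is applied, not just the net result — can be modelled in Python. This is an illustrative sketch with hypothetical names; real transactional replication is driven by the Log Reader and Distribution Agents.

```python
class TransactionalSubscriber:
    """Applies each change in commit order, so intermediate states are
    visible (e.g. a trigger could respond to every change)."""
    def __init__(self):
        self.rows = {}
        self.history = []                      # every intermediate value seen

    def apply(self, change):
        op, key, value = change
        if op == "DELETE":
            self.rows.pop(key, None)
        else:                                  # INSERT or UPDATE
            self.rows[key] = value
        self.history.append((key, value))

def replicate(subscriber, log):
    # Changes arrive in the same order as they occurred at the Publisher,
    # which is what guarantees transactional consistency in this model.
    for change in log:
        subscriber.apply(change)
```

If a row changes three times at the Publisher, the Subscriber sees three changes, not one, matching the "intermediate data states" bullet above.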
Merge Replication Overview:
Merge replication, like transactional replication, typically starts with a snapshot of the publication database objects and data. Subsequent data changes and schema modifications made at the Publisher and Subscribers are tracked with triggers. The Subscriber synchronizes with the Publisher when connected to the network and exchanges all rows that have changed between the Publisher and Subscriber since the last time synchronization occurred.
Merge replication is typically used in server-to-client environments. Merge replication is appropriate in any of the following situations:
- Multiple Subscribers might update the same data at various times and propagate those changes to the Publisher and to other Subscribers.
- Subscribers need to receive data, make changes offline, and later synchronize changes with the Publisher and other Subscribers.
- Each Subscriber requires a different partition of data.
- Conflicts might occur and, when they do, you need the ability to detect and resolve them.
- The application requires net data change rather than access to intermediate data states. For example, if a row changes five times at a Subscriber before it synchronizes with a Publisher, the row will change only once at the Publisher to reflect the net data change (that is, the fifth value).
Merge replication allows various sites to work autonomously and later merge updates into a single, uniform result. Because updates are made at more than one node, the same data may have been updated by the Publisher and by more than one Subscriber. Therefore, conflicts can occur when updates are merged and merge replication provides a number of ways to handle conflicts.
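A toy version of this net-change merge with conflict detection might look as follows in Python. This is a last-writer-wins sketch under assumed per-row timestamps; real merge replication tracks changes with triggers and offers several configurable conflict resolvers.

```python
def merge(publisher_rows, subscriber_rows):
    """Merge a Subscriber's net changes into the Publisher's copy.

    Rows are {key: (value, timestamp)}. Only the net change per row is
    exchanged, and when both sides hold different values for the same
    row, the newer timestamp wins (a stand-in for a real resolver).
    Returns the list of conflicting row keys.
    """
    conflicts = []
    for key, (value, ts) in subscriber_rows.items():
        if key in publisher_rows:
            pub_value, pub_ts = publisher_rows[key]
            if pub_value != value:
                conflicts.append(key)          # both sides changed this row
            if ts > pub_ts:
                publisher_rows[key] = (value, ts)
        else:
            publisher_rows[key] = (value, ts)  # row new to the Publisher
    return conflicts
```

The Publisher ends up with a single, uniform result, and the caller can inspect the returned conflict list, mirroring the detect-and-resolve requirement above.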
Database Mirroring
Microsoft SQL Server database mirroring is a primarily software-based solution for increasing database availability. It is implemented on a per-database basis and works only with databases that use the full recovery model, at any supported database compatibility level. Database mirroring maintains two copies of a single database that must reside on different server instances of the SQL Server Database Engine. Typically, these server instances reside on computers in different locations. One server instance serves the database to clients (the principal server).
The other instance acts as a hot or warm standby server (the mirror server), depending on the configuration and state of the mirroring session. When a database mirroring session is synchronized, database mirroring provides a hot standby server that supports rapid failover without a loss of data from committed transactions. When the session is not synchronized, the mirror server is typically available as a warm standby server (with possible data loss). Mirroring provides availability support by sending transactions directly from a principal database and server to a mirror database and server when the transaction log buffer for the principal database is written to disk. High-availability database mirroring involves three server instances: a principal, a mirror, and a witness. The witness server enables SQL Server to automatically fail over from the principal server to the mirror server in case the principal server goes down. Failover from the principal database to the mirror database typically takes only seconds.
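The synchronized (high-safety) commit path described above — a transaction's log record is sent to the mirror when the principal's log buffer is hardened, so no committed transaction is lost on failover — can be modelled in Python. This is a conceptual sketch with hypothetical `Principal` and `Mirror` classes, not SQL Server internals.

```python
class Mirror:
    def __init__(self):
        self.log = []

    def harden(self, record):
        # In a real mirroring session this writes the record to the
        # mirror's transaction log before acknowledging.
        self.log.append(record)
        return True                    # acknowledgement back to the principal

class Principal:
    def __init__(self, mirror):
        self.mirror = mirror
        self.log = []

    def commit(self, record):
        self.log.append(record)        # write to the principal's own log
        # High-safety (synchronous) mode: the commit does not complete
        # until the mirror has hardened the same log record.
        return self.mirror.harden(record)
```

Because both logs always contain the same committed records, the mirror can serve as a hot standby that fails over in seconds without losing committed transactions.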
Benefits of Database Mirroring:
SQL Database mirroring is a simple solution that offers the following benefits:
Increases availability of a database.
In the event of a disaster, high-safety mode with automatic failover quickly brings the standby copy of the database online (without data loss) on the mirror server. In the forced-service operating mode, the database administrator has the option of forcing service (with possible data loss) to the standby copy of the database. Manual failover also works in high-safety mode; it requires both servers to be connected to each other, and the database must already be synchronized.
Increases data protection.
Database mirroring provides complete or near complete redundancy of the data, depending on whether the operating mode is high-safety or high-performance. A database mirroring partner running on SQL Server 2008 Enterprise or later versions automatically tries to resolve certain types of errors that prevent reading a data page. The partner that is unable to read a page requests a fresh copy from the other partner. If this request succeeds, the unreadable page is replaced by the copy, which usually resolves the error.
Improves the availability of the production database during upgrades.
To minimize downtime for a mirrored database, the instances of SQL Server that are hosting the failover partners can be upgraded sequentially. This will incur the downtime of only a single failover.
Pre-Requisites of SQL Database Mirroring
The following are the pre-requisites for database mirroring.
- Three physical servers: a principal server, a mirror server, and a witness server.
- SQL Server Standard, Enterprise, or Developer edition.
- The principal database involved in mirroring must use the full recovery model.
SQL Server Clustering
This type of server clustering provides redundancy and fault tolerance for mission-critical databases. Unlike load-balanced clustering, where a group of servers functions together to increase availability and scalability, SQL Server clustering involves two database servers in an active/passive configuration, so that one server provides backup resources for the other.
If the active database server encounters errors or fails, the passive server becomes active and assumes control over the database resources until the failed server is back online. In such a scenario, the database service fails over and restores data connections to the new active server and enables uninterrupted functioning.
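The active/passive failover just described can be sketched in Python. This is a hypothetical two-node model (`FailoverCluster` is an invented name); in practice the takeover is driven by the Windows Cluster Service, not application code.

```python
class FailoverCluster:
    """Two-node active/passive sketch: all requests go to the active
    node; when it fails, the passive node assumes control."""
    def __init__(self, active, passive):
        self.active, self.passive = active, passive
        self.healthy = {active: True, passive: True}

    def fail(self, node):
        self.healthy[node] = False

    def serve(self, query):
        if not self.healthy[self.active]:
            # Failover: the passive node takes over the database
            # resources and begins answering connections.
            self.active, self.passive = self.passive, self.active
        return (self.active, query)
```

Clients keep issuing the same queries; only the node answering them changes, which is why the failover appears as uninterrupted service.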
SQL Clustering Features
- Fault Tolerance + High Availability.
- For Microsoft SQL Server 2008.
- Software solution for high availability.
- Increased data protection, availability, upgrades availability.
- SQL HA option without the high-end hardware requirement.
- Automatic Failover.
Managed Microsoft SQL Clustering is ideal for applications that require a high-availability database with automatic failover. In addition to the vital database protection provided by SQL Clustering, you also get a fully managed service backed by 24x7 customer support. A managed SQL Clustering solution answers the most important data-safety questions your business may face, giving you the peace of mind of knowing that your mission-critical databases are safe and protected.
Do you want your site to be always available just like Google, Yahoo or Microsoft? We at eUKhost have made this possible for you on Windows Server Hosting Platform.
Configuration of the Cluster
- At least 10 IP addresses in the same subnet.
- 2 Web servers with Windows 2008 Web Edition with same hardware specification.
- 2 SQL servers with Windows 2003 Enterprise Edition with exactly the same hardware.
- Microsoft SQL Server Standard Edition installed on both SQL Servers.
- iSCSI cards on SQL servers to attach the SAN partitions.
- 4 logical partitions on a SAN system attached in Active/Passive mode to both SQL servers.
- 2 Ethernet NIC on all servers.
- An internal LAN setup on second NIC between the servers.
- This setup requires at least 4 servers in total; it is a bit expensive, but it provides a very robust way to keep your site always available to your clients. 2 servers will run Windows 2008 with IIS 7 under the Network Load Balance Cluster Service, and the other 2 servers will run MySQL and MSSQL on Windows 2003 Enterprise Edition under the Windows Cluster Service.
- The reason to choose Windows 2008 for web servers is because it provides 2 major features that are not available with Windows 2003:
- 1. It enables us to have more than one dedicated IP address on a single node.
- 2. It provides a built-in Robust File Copy (robocopy) tool to copy data between the web servers.
What is Network Load Balancing Service?
Network Load Balancing clusters enable you to manage a group of independent servers as a single system for greater scalability, increased availability, and easier manageability. You can use Network Load Balancing to implement enterprise-wide scalable solutions for the delivery of TCP/IP-based services and applications. It is not a service-based application; it only redirects traffic on a particular protocol so that the load is easily distributed between the servers.
What is SQL Clustering Service?
Cluster Service acts as a back-end cluster; it provides high availability for applications such as databases, messaging, and file and print services. Multiple servers (nodes) in a cluster remain in constant communication. If one of the nodes in a cluster becomes unavailable as a result of failure or maintenance, another node immediately begins providing service, a process known as 'failover'. MSCS attempts to minimize the effect of failure on the system when any node (a server in the cluster) fails or is taken offline.
A cluster connects two or more web servers together so that they appear as a single computer to clients. Connecting servers in a cluster allows for workload sharing, enables a single point of operation/management, and provides a path for scaling to meet increased demand. Thus, clustering gives you the ability to produce high availability applications.
How does setup work?
The figure below will help you understand the 100% uptime setup that eUKhost offers:
Network Load Balanced Cluster Architecture
In this figure, WEB-SERVER1 and WEB-SERVER2 are set up in a Network Load Balancing (NLB) environment with 2 NIC cards each. One NIC is configured as public for external connections and the other on an internal VLAN for internal connections, which will be used to check the availability of the web servers, since we will be setting up the servers in unicast mode. Both servers send small packets to each other over the private network at an interval to make sure the web servers are available to accept traffic, known as the heartbeat method.
We will also use the internal VLAN to replicate data from one web server to another using either Robust File Copy (robocopy) or rsync (a Linux utility), whichever the client is familiar with. This allows high-speed transfer with 0% packet loss while transferring data, and above all it keeps the external connection free for incoming traffic to the websites.
Once the servers are set up in NLB, they share a virtual IP address on a floating ARP (Address Resolution Protocol) entry between the nodes. So if we add an IP to one node in the cluster, it automatically gets added to the public NIC of the other nodes. We can add multiple IPs for a cluster, which can then be used to assign the websites.
In the same figure, the other 2 servers are the SQL Servers configured in a domain with Windows Cluster Service provided by Windows 2003 Enterprise edition. SQL Standard/Enterprise edition allows us to configure a failover SQL cluster service.
Both SQL servers share common storage space on a SAN (Storage Area Network) device attached using iSCSI cards, which stores the SQL Server files as well as the databases. The network drive on the SAN device is available on the active server only, and as soon as the server running the SQL service goes down for any reason, the other node in the cluster takes over the SQL service along with the network drive, causing no downtime.
These servers also share a common virtual IP address for the SQL server; the IP is assigned to the node that is running the SQL service, and the same IP should be used in scripts to connect to the SQL server.
These servers will also use the internal VLAN to check the availability of each node in the cluster with the heartbeat method.
This entire cluster is protected with a set of IP security policies designed by eUKhost to make sure the servers are secure and protected from any sort of network attack. The setup provided by eUKhost is tested and used by some leading companies as well as government agencies, who enjoy 100% uptime for their websites.