SQL Server 2005 Clustering

This post is for anyone who wants to install, configure, and maintain a solid SQL Server 2005 cluster. Think of it as a checklist for clustering success.

The first step is to think carefully about why you want a clustered environment: what you have today and what you expect a cluster to give you.
Planning
•Planning in detail is essential!
•You should spend more time planning than you do in the actual implementation of SQL Server Clustering.
•Don’t wing it!
Risk Analysis
•Identify all the potential risks you could encounter as the result of using a cluster.
•Remember, a cluster is only a small part of a larger business continuity plan or disaster recovery plan.
•Plan how to deal with all identified risks.

General Practices
•A two-node Active/Passive configuration is preferred over an Active/Active configuration.
•If you build a 3-node or larger cluster, one node should be dedicated to failover, while the rest of the nodes run a single instance of SQL Server.
•Running multiple instances of SQL Server on the same node is not recommended.
•SQL Server clusters should be dedicated to SQL Server. Don’t run other software on these nodes.
•Cluster nodes should be member servers, not domain controllers.
•Cluster nodes must be all in the same domain and have access to multiple domain controllers.
•Before you begin to install Cluster Services or SQL Server Clustering, determine your virtual names and IP addresses.
•After each major step, check all logs for potential problems before moving on to the next step.
•Before installing SQL Server Clustering, ensure that the Windows Cryptographic Service Provider and the Task Scheduler are on.
•Once Clustering Services and SQL Server Clustering have been installed, give the domain controllers some time to replicate the virtual server names before clients try to access them (a quick resolution check is sketched below).
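
A quick way to verify that last point, once replication has had time to complete, is from a command prompt on any member server. VSQLCLUS and 192.168.1.25 below are placeholders for your own virtual server name and IP address:

    REM Confirm the virtual server name resolves in DNS (VSQLCLUS is a placeholder)
    nslookup VSQLCLUS

    REM Confirm the virtual IP address answers on the public network (placeholder address)
    ping -n 4 192.168.1.25

    REM Confirm clients can reach the cluster by its virtual name
    ping -n 4 VSQLCLUS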

Hardware Practices
•All cluster hardware must be on the Microsoft Windows 2003 Clustering Hardware Compatibility List (HCL).
•Server and array must be approved to work together.
•Each cluster node should have identical hardware, firmware, drivers, software, etc.
•Ensure that all firmware and drivers are the most current and are approved for clustering.
•Ensure that each node’s local drive uses the same drive letter (e.g., “C:”) and is mirrored.
•Consider future disk space needs when sizing the shared array. It is much easier to add capacity now than later.
•SQL Server clustering is not designed to protect data. Shared arrays must incorporate their own fault tolerant technology. RAID 10 is preferred for best protection and performance.
•Consider options for storing data off site using mirroring or log shipping.
•If you are using a Smart UPS for any node of the cluster, remove it before installing Cluster Services, and then re-add it.
•If your server nodes have APM/ACPI power-saving features, turn them off. This includes network cards, drives, etc., because their activation can cause a failover.

Software Practices
•Use Windows Server 2003 Enterprise Edition or Datacenter Edition.
•Use the latest OS service pack.
•Test service packs and patches before applying them to a cluster.

Configuration Practices
•Each node of the OS cluster should have identical OS and software configuration settings.
•Don’t locate the paging file on the shared array.

Clustering Hardware Practices
Memory
•If you are using extended memory (AWE) in your cluster nodes, each node must have exactly the same amount of RAM, configured exactly the same way on every node (a configuration sketch follows this list).
•Even if you are not using extended memory, it is a good idea to have the same amount of RAM in each node.
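
If you do enable extended memory, configure it identically from the start so that a failover never lands the instance on a node with a different memory setup. Below is a minimal sketch, assuming a default instance reached through the placeholder virtual name VSQLCLUS and an example memory cap of 6144 MB; adjust both to your environment:

    REM Turn on AWE and cap max server memory on the clustered instance
    REM (VSQLCLUS and 6144 MB are placeholder values)
    sqlcmd -S VSQLCLUS -E -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sp_configure 'awe enabled', 1; EXEC sp_configure 'max server memory (MB)', 6144; RECONFIGURE;"

Note that AWE also requires the SQL Server service account to hold the Lock pages in memory right, and the setting takes effect only after the instance restarts (or fails over).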
Network
•Use two separate (virtual or physical) networks, one each for the public and private links. The public and private networks should be on separate subnets.
•The nodes of a cluster may be on different physical networks, and if they are, both the private and public network connections must appear as a single, non-routed LAN using a VLAN.
•Label the network connections, from within the OS, as Public and Private for each node in the cluster. Use the same names for each node.
•Use a separate network card for the Public and Private network connections. Don’t use multi-port NICs.
•NICs must use identical drivers and configurations.
•Each NIC’s speed and duplex should be manually set.
•To keep the Private network as fault tolerant as possible, use a crossover cable in a 2-node cluster; or a VLAN or hub for a larger cluster.
•Use static IP addresses for both the Public and Private networks (a netsh sketch follows this list).
•For the Private network, use a private class A, B, or C address, not a public address.
•For the Public network cards, configure at least two DNS servers (for fault tolerance).
•These DNS servers must support dynamic updates; otherwise, failover may not work.
•Do not use IPSec for clustered applications; it is not cluster-aware.
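
A minimal netsh sketch for the static addressing described above, assuming the connections have already been renamed Public and Private; the addresses (192.168.1.x public, 10.10.10.x private heartbeat) and the two DNS servers are placeholders:

    REM Public connection: static address, mask, default gateway, gateway metric
    netsh interface ip set address "Public" static 192.168.1.11 255.255.255.0 192.168.1.1 1

    REM At least two DNS servers on the Public connection for fault tolerance
    netsh interface ip set dns "Public" static 192.168.1.5
    netsh interface ip add dns "Public" 192.168.1.6 index=2

    REM Private heartbeat: static private-class address, no gateway and no DNS
    netsh interface ip set address "Private" static 10.10.10.1 255.255.255.0

Repeat on the second node with its own addresses (for example 192.168.1.12 and 10.10.10.2).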

Disk Array or RAID
•Prefer Fibre Channel arrays over SCSI.
•When using Fibre Channel-attached storage, use two host bus adapters (HBAs) in each node, with each one connecting to a different switch.
•If using SAN zoning, hardware zoning is preferred over software zoning.
•Be sure the shared array is successfully running before installing clustering.
•Be sure the shared array and HBAs have the latest firmware updates and drivers.
•Arrays must be set up as Basic disks, not Dynamic.
•Before installing clustering services, be sure that only one node of the cluster is on at a time.
•One at a time, each node of the cluster must be made to see the shared array.
•Use Disk Management to recognize and format the array. An array only has to be formatted once.
•Drives must be formatted NTFS, not compressed.
•All drives on all nodes of the cluster must access the shared array using the same drive letters.
•Name the Quorum drive “Q” for ease of identification.
•Generally speaking, set the size of the logical drives to match the physical size of the drives. Fewer logical volumes contribute to higher performance.
•The Quorum drive would be an exception to the above.
Quorum Drive
•Should be on its own logical drive, dedicated for this purpose.
•Quorum drive should be fault tolerant.
•Do not use a single node cluster quorum.
•The recommended minimum size is 500 MB, which is the smallest size for an efficient NTFS partition (a quick verification sketch follows).
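
Once the cluster is up, the quorum resource and the file system on the shared drives can be confirmed from the command line; Q: and S: below are the quorum drive and an example data drive letter:

    REM Show which resource, drive, and path currently hold the quorum
    cluster /quorum

    REM Confirm the shared drives are formatted NTFS
    fsutil fsinfo volumeinfo Q:
    fsutil fsinfo volumeinfo S:

    REM List the physical disk resources and the groups that own them
    cluster resource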
Infrastructure Protection
•An expensive cluster is not of much value if there are weak links in the infrastructure that supports it, such as backup power, routers/switches, domain controllers, DNS servers, and so on.
•If the infrastructure is not fault tolerant, then using a cluster is not buying you much.

Installing Cluster Service
•After the Clustering Wizard analyzes your configuration, review all alerts and take any necessary action.
•Virtual server names must be unique and 15 characters or less.
•Choose a “Typical” installation. Generally, only choose “Advanced” if you have a Fibre Channel switched fabric with multiple switches or a similarly complex configuration.

Testing Cluster Service
•Once the cluster is built, test it extensively before it goes into production. Test for the following:

•Group moves.
•Initiate manual failover (see the cluster.exe sketch after this list).
•Turn each node off.
•Disconnect network connections from each node.
•Disconnect the shared array connection from each node (this sometimes can be interesting).
•Consider stress testing your disk array for several days using a tool such as SQLIOStress or SQLIO to help identify potential array issues.
•Ideally, before installing new Service Packs or patches, test on a test cluster before moving into production.
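
A minimal cluster.exe sketch for the status checks and the manual group-move/failover tests, assuming two nodes with the placeholder names NODE1 and NODE2:

    REM Confirm both nodes are up before testing
    cluster node NODE1 /status
    cluster node NODE2 /status

    REM List all groups and the node that currently owns each one
    cluster group

    REM Manually move the default cluster group to the other node, then back
    cluster group "Cluster Group" /move:NODE2
    cluster group "Cluster Group" /move:NODE1

Re-run "cluster group" after each move (and after each pull-the-plug test) to confirm the group came online on the surviving node.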

Before SQL Server 2005 Clustering
•Ensure that each node works as expected and that the OS clustering works as expected. Check all logs and fix any problems before you install SQL Server Clustering.
•Select a node of the cluster to be your active node. This can be any node, but it is good to do this for your frame of reference, and it prevents potential confusion.
•Do not install applications into the default cluster group.
•Be sure that you have created a cluster resource group for SQL Server using Cluster Administrator before installing SQL Server Clustering (a cluster.exe sketch follows this list).
•In addition, be sure that all of the shared array drives that SQL Server will use are in this group.
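
The resource group can be created in Cluster Administrator or from the command line; here is a sketch using cluster.exe, with "SQL Group" as a placeholder name (moving the shared disks into the group is usually easiest from Cluster Administrator):

    REM Create an empty resource group for SQL Server
    cluster group "SQL Group" /create

    REM Confirm the group exists and note which node owns it
    cluster group "SQL Group" /status

    REM List the disk resources so you can confirm the SQL data and log disks
    REM are in "SQL Group" before running SQL Server Setup
    cluster resource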

Installing SQL Server Clustering
•Check logs after each major step of the installation, looking for potential errors.
•When the Clustering Wizard analyzes your configuration, be sure to carefully read all alerts and be sure that they do not apply to you. If they do, then correct them before continuing the installation.

After SQL Server Clustering
Once the SQL Server cluster is built, test it extensively before it goes into production. Test for the following:

•Group moves.
•Initiate manual failover.
•Turn each node off.
•Disconnect network connections from each node.
•Disconnect the shared array connection from each node (this sometimes can be interesting).
•Use your SQL Server application (or SQL Server Management Studio) as a client to verify that you can still access SQL Server after each failover (see the sqlcmd sketch after this list).
•Ideally, before installing new Service Packs or patches, test on a test cluster before moving into production.
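
A convenient client-side check during these tests is to ask the instance which physical node it is currently running on; sqlcmd ships with SQL Server 2005, and VSQLCLUS below is a placeholder for your virtual server name (add \INSTANCENAME for a named instance):

    REM Connect through the virtual name and report the node that owns the instance
    sqlcmd -S VSQLCLUS -E -Q "SELECT SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS CurrentNode, @@SERVERNAME AS VirtualName"

Run it again after each failover test: CurrentNode should change while VirtualName stays the same and the connection still succeeds.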

Upgrading Practices
•Ideally, don’t upgrade in place. Instead, build a brand-new cluster on new hardware and move the SQL Server databases from the old cluster to the new one (a backup/restore sketch follows this list).
•If you have to upgrade in place, perform a rolling upgrade and use the SQL Server 2005 Upgrade Advisor.
•Have a carefully thought-out back out plan should the upgrade fail.
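
Here is a minimal sketch of the database move itself, using plain backup and restore through sqlcmd. OLDVSQL, NEWVSQL, MyAppDB, the logical file names, and all paths are placeholders, and the backup file must sit on storage both clusters can reach:

    REM Back up the database on the old cluster
    sqlcmd -S OLDVSQL -E -Q "BACKUP DATABASE MyAppDB TO DISK = '\\fileserver\sqlmove\MyAppDB.bak' WITH INIT"

    REM Restore on the new cluster, relocating the files onto the new shared array
    sqlcmd -S NEWVSQL -E -Q "RESTORE DATABASE MyAppDB FROM DISK = '\\fileserver\sqlmove\MyAppDB.bak' WITH MOVE 'MyAppDB' TO 'S:\SQLData\MyAppDB.mdf', MOVE 'MyAppDB_log' TO 'T:\SQLLogs\MyAppDB_log.ldf'"

Remember that logins, jobs, and other server-level objects live in master and msdb and have to be scripted and moved separately.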

Security Practices
•Cluster nodes and storage should be physically secure.
•Cluster should be behind a firewall.
•Do not install antivirus or antispyware on your cluster. Instead, run scans remotely on a daily basis.
•The cluster service and SQL Server service accounts need to be a member of the Local Administrators group of each node, but they should not be a member of the Domain Administrators group.

•The above accounts must be domain accounts and members of the Domain Users group.
•The passwords should be set not to expire.
•When the Cluster Service is installed and configured, the Setup Wizard automatically grants the Cluster Service account the additional local user rights listed below:

  •Act as part of the operating system (required on Windows 2000).
  •Back up files and directories.
  •Increase quotas.
  •Increase scheduling priority.
  •Load and unload device drivers.
  •Lock pages in memory.
  •Log on as a service.
  •Restore files and directories.
•Do not manually remove these privileges; otherwise, the Cluster Service will not function properly.
•Use the same cluster service and SQL Server service accounts for all clusters in the same domain.
•The cluster service and SQL Server service accounts should only be used for their own specific purposes.
•Don’t create any file shares on the shared array, other than those needed for replication.
•Only trained cluster administrators should have access and permission to use Cluster Administrator.
•By default, all local administrators of cluster nodes can use Cluster Administrator. This means that all Domain Administrators are SQL Server Administrators by default.
•Do not let any non-SQL Server Administrators have access to any nodes of the cluster. Or at least, minimize any outside access.
•Do not remove the Local Administrators group from the Cluster Service Security Configuration.
•Do not use local accounts on clustered nodes. Always use domain accounts for clusters. This is because during a failover, local account information is not failed over.
•If the BUILTIN\Administrators group is removed from SQL Server, ensure that the account the Cluster Service runs under can still log in to SQL Server, because that login is used for the IsAlive check; if it cannot connect, the IsAlive check fails.
•The MSCS Cluster Service account must have sysadmin rights in SQL Server (a quick check is sketched after this list).
•Don’t access e-mail or browse the Web from a cluster node.
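
A quick sqlcmd check for the IsAlive login and sysadmin requirements above; DOMAIN\clustersvc and VSQLCLUS are placeholders for your Cluster Service account and virtual server name:

    REM Returns 1 if the cluster service account is in the sysadmin role,
    REM 0 if it is not, and NULL if the login does not exist at all
    sqlcmd -S VSQLCLUS -E -Q "SELECT IS_SRVROLEMEMBER('sysadmin', 'DOMAIN\clustersvc') AS IsSysadmin"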

Cluster Administration Practices
•Do not delete or rename the default cluster group, or remove any of the resources from this group.
•Do not delete or rename any resources from the SQL Server resource group.
•If you must make a change, then uninstall SQL Server and/or Cluster Services and reinstall them.
•Although it is not mandatory, all SQL Server services should be stopped and started through Cluster Administrator.
•If you are running a cluster, keep in mind that after a failover a single node has to carry the extra load; the combined workload should not exceed what one node can handle (100%).
•Review logs on a daily basis.
•Set up alerts to be sent to you automatically. Don’t install an e-mail client on cluster nodes for this.
•If you are going to implement replication from a SQL Server cluster, and the SQL Server cluster participates as a Publisher and a Distributor, use a file share located on the cluster disk resource as the snapshot folder. This way, replication will failover when SQL Server fails over. Limit access to this share point.
•To ensure that failover will work when needed, schedule periodic test failovers.
•Don’t run regrebld, which is used to rebuild SQL Server’s registry, on a clustered SQL Server.
•To prevent the mssqlserver and sqlserveragent services from being failed over because a less critical service on the server fails, configure the less critical resources so that their failure does not affect the group.
•When you install SQL Server Clustering, the Full-Text Search service is installed. If you do not plan to use this service, configure it so that its failure will not cause SQL Server to fail over (see the cluster.exe sketch after this list).
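
A cluster.exe sketch for the last few points; the resource names "SQL Server" and "SQL Server Fulltext" are the usual names for a default instance but are assumptions here, so list the resources first to get the exact names:

    REM See the exact resource names in the SQL Server group first
    cluster resource

    REM Stop and start SQL Server through the cluster rather than the Services applet
    cluster resource "SQL Server" /offline
    cluster resource "SQL Server" /online

    REM Let the Full-Text resource restart on failure without failing over the
    REM whole group (RestartAction 1 = restart but do not affect the group)
    cluster resource "SQL Server Fulltext" /prop RestartAction=1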

When building your cluster, you may want to adapt this checklist to meet your own unique needs. Let me know if you have any further questions.

Khan sqldba – mcts
