Summary
Customers using software-only installations of Greenplum can configure mirrors to fit their availability and performance needs. This post describes an approach for maximizing availability and performance when a Greenplum cluster spans servers on 2 or more racks.
Rack/Server setup
In a typical data center, customers usually have servers spread across more than one rack. To mitigate the risk of a platform service (such as GPDB) going down because of a node or rack failure, it’s recommended to configure the cluster on servers in multiple racks. By default, GPDB comes with a standby node for the master services (in active-passive mode). This standby should be hosted on a different rack than the master node. In addition, GPDB provides mirroring for protection against data loss due to node failures. These mirrors can be configured to also protect against rack-level failures (power failures, ToR switch issues, etc.), which enhances cluster availability by a large degree. The following picture depicts how we did this at one of our customer sites:
- GPDB Rack setup
Important things to notice:
- Master and standby nodes are on separate racks; a VIP in front of them directs traffic to the ‘live’ master
- Segments on a worker node in a given rack have their mirrors spread across segment hosts on other racks, using the spread mirroring scheme (arrows depict the primary -> mirror relationship)
- Every node in the cluster hosts the same number of mirrors, so the performance degradation after a node failure is the same regardless of which node fails (e.g., in a setup with 4 mirrors per segment host as depicted in the diagram, when a node fails, each of its mirror hosts takes over exactly one of its mirrors); a verification sketch follows this list
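These placement properties can be checked directly against the cluster catalog. The following is a minimal sketch in Python using psycopg2 that reads gp_segment_configuration and reports the master/standby hosts, the mirror count per host, and any primary/mirror pair that ended up on the same rack. The connection settings, hostnames, and the hostname-to-rack mapping are illustrative assumptions, not values from the actual customer environment.

```python
# Sketch: verify the rack-aware layout against gp_segment_configuration.
# Connection settings and the hostname-to-rack mapping are placeholders;
# adjust them for your environment.
from collections import Counter
import psycopg2

# Hypothetical mapping of hosts to racks (matches the 3-rack diagram above).
RACK_OF = {
    "mdw": "rack1", "smdw": "rack2",
    "sdw1": "rack1", "sdw2": "rack1",
    "sdw3": "rack2", "sdw4": "rack2",
    "sdw5": "rack3", "sdw6": "rack3",
}

conn = psycopg2.connect(host="mdw", port=5432, dbname="postgres", user="gpadmin")
cur = conn.cursor()

# Master entries (content = -1): the acting master has role 'p', the standby 'm'.
cur.execute("SELECT role, hostname FROM gp_segment_configuration WHERE content = -1")
for role, host in cur.fetchall():
    label = "master" if role == "p" else "standby"
    print(f"{label}: {host} ({RACK_OF.get(host, '?')})")

# Segment entries (content >= 0): pair each primary with its mirror by content id.
cur.execute("""
    SELECT content, preferred_role, hostname
    FROM gp_segment_configuration
    WHERE content >= 0
""")
primaries, mirrors = {}, {}
for content, pref_role, host in cur.fetchall():
    (primaries if pref_role == "p" else mirrors)[content] = host

# Every segment host should carry the same number of mirrors.
print("mirrors per host:", Counter(mirrors.values()))

# No mirror should share a rack with its primary.
for content, p_host in primaries.items():
    m_host = mirrors.get(content)
    if m_host and RACK_OF.get(p_host) == RACK_OF.get(m_host):
        print(f"WARNING: segment {content} primary ({p_host}) and mirror ({m_host}) share a rack")

cur.close()
conn.close()
```

Using preferred_role rather than role keeps the check about the designed layout, so a temporary failover does not make the placement look wrong.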
More details…
We used separate ports for the master node (for incoming connection requests) and the standby node (for replication traffic). In addition, an F5 hardware load balancer detected which host was the ‘real’ master by probing the port the master is supposed to be listening on (a sketch of such a probe follows the test list below). Finally, we had to manually move the mirrors into this layout using the ‘gpmovemirrors’ utility with a custom configuration file. We tested the setup in 3 separate ways:
- Remove rack 3 by rebooting all of its servers in the live cluster; verify that those segments fail over to their mirrors with minimal disruption.
- Remove rack 2 by rebooting all of its servers in the live cluster; the cluster should still be available.
- Remove rack 1 by rebooting all of its servers in the live cluster; fail the master over to the standby on rack 2 and bring the cluster up. It should work just fine.
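As a rough illustration of the port probe mentioned above, the standalone sketch below mimics the kind of TCP check the load balancer monitor performed: try the master port on both candidate hosts and report which one is currently accepting connections. The hostnames and port are placeholder assumptions, and a production setup would use the F5’s own health monitor rather than a script like this.

```python
# Sketch: mimic the load balancer's master-port probe with a plain TCP connect.
# Hostnames and port are placeholders for the master/standby pair.
import socket

MASTER_PORT = 5432
CANDIDATES = ["mdw", "smdw"]   # master on rack 1, standby on rack 2

def accepts_connections(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if host:port completes a TCP handshake within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in CANDIDATES:
    state = "listening (acting master)" if accepts_connections(host, MASTER_PORT) else "not listening"
    print(f"{host}:{MASTER_PORT} -> {state}")
```

A stricter monitor could run a lightweight query instead of a bare TCP handshake, since an open port alone is not proof that the database is healthy.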