author    Jeff Darcy <jdarcy@redhat.com>    2014-04-22 15:37:09 +0000
committer Jeff Darcy <jdarcy@redhat.com>    2014-04-22 15:37:09 +0000
commit    a827c5eab32a43ade5551259ea56a6a1af7e861b (patch)
tree      e6707df68f72baa8645210ba931272285116ad85 /doc/features/server-quorum.md
parent    46d333783a968ab39e0beade9c7a1eec8035f8b1 (diff)
parent    99bfc2a2a1689da1e173cb2f8ef54d2b09ef3a5d (diff)
Merge branch 'upstream'
Conflicts:
    glusterfs.spec.in
    xlators/mgmt/glusterd/src/Makefile.am
    xlators/mgmt/glusterd/src/glusterd-utils.c
    xlators/mgmt/glusterd/src/glusterd.h

Change-Id: I27bdcf42b003cfc42d6ad981bd2bf8180176806d
Diffstat (limited to 'doc/features/server-quorum.md')
-rw-r--r--    doc/features/server-quorum.md    44
1 file changed, 44 insertions(+), 0 deletions(-)
diff --git a/doc/features/server-quorum.md b/doc/features/server-quorum.md
new file mode 100644
index 000000000..7b20084ce
--- /dev/null
+++ b/doc/features/server-quorum.md
@@ -0,0 +1,44 @@
+# Server Quorum
+
+Server quorum is a feature intended to reduce the occurrence of "split brain"
+after a brick failure or network partition. Split brain happens when different
+sets of servers are allowed to process different sets of writes, leaving data
+in a state that cannot be reconciled automatically. The key to avoiding split
+brain is to ensure that there can be only one set of servers - a quorum - that
+can continue handling writes. Server quorum does this by the brutal but
+effective means of forcing down all brick daemons on cluster nodes that can no
+longer reach enough of their peers to form a majority. Because there can only
+be one majority, there can be only one set of bricks remaining, and thus split
+brain cannot occur.
+
+## Options
+
+Server quorum is controlled by two parameters:
+
+ * **cluster.server-quorum-type**
+
+ This value may be "server" to indicate that server quorum is enabled, or
+ "none" to mean it's disabled.
+
+ * **cluster.server-quorum-ratio**
+
+ This is the percentage of cluster nodes that must be up to maintain quorum.
+ More precisely, this percentage of nodes *plus one* must be up.
+
+Note that these are cluster-wide flags. All volumes served by the cluster will
+be affected. Once these values are set, quorum actions - starting or stopping
+brick daemons in response to node or network events - will be automatic.
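+
+For example, the options might be set roughly as follows (a sketch only: the
+volume name `myvol` is hypothetical and the 51% figure is merely illustrative):
+
+```
+# Enable server quorum for the volume "myvol".
+gluster volume set myvol cluster.server-quorum-type server
+
+# Set the cluster-wide quorum ratio; because it applies to the whole cluster,
+# it is set on the special volume name "all".
+gluster volume set all cluster.server-quorum-ratio 51%
+```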
+
+## Best Practices
+
+If a cluster with an even number of nodes is split exactly down the middle,
+neither half can have quorum (which requires **more than** half of the total).
+This is particularly important when N=2, in which case the loss of either node
+leads to loss of quorum. Therefore, it is highly advisable to ensure that the
+cluster size is three or greater. The "extra" node in this case need not have
+any bricks or serve any data. It need only be present so that a majority
+smaller than the entire cluster membership remains possible, allowing the
+cluster to survive the loss of a single node without losing quorum.
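+
+As a minimal sketch of that arrangement (the hostname below is hypothetical),
+the extra node is simply probed into the trusted pool from an existing node
+and is never given a brick:
+
+```
+# Add a third, brick-less node so quorum can survive a single node failure.
+gluster peer probe quorum-node.example.com
+
+# Verify that all peers are connected; each one counts toward quorum.
+gluster peer status
+```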
+
+
+