author     Richard Wareing <rwareing@fb.com>   2014-07-08 20:07:54 -0700
committer  Kevin Vigor <kvigor@fb.com>         2016-12-27 12:16:06 -0800
commit     88ef24b83f49c7d670720d59832d4e0f09efbe78 (patch)
tree       1ec9c5b77308d8af57baa5ced91f916039e9cf5c /tests
parent     3bb25b0882964b6c9c1623593f3a81902ff69aa0 (diff)
Add option to toggle x-halo fail-over
Summary:
- Adds a "halo-failover-enabled" option to enable/disable failing over to a brick outside of the defined halo in order to satisfy min-replicas (a usage sketch follows this list)
- There are use-cases where failing over to an out-of-region brick is undesirable. In such cases we will more than likely opt to keep more replicas within the region, so the loss of a single replica there can be tolerated without losing quorum.
- Also fixes a quorum accounting problem: the mount now correctly goes read-only (RO) when we lose a brick and cannot swap another one in (because fail-over is disabled or for any other reason)
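A rough usage sketch of the new option (the volume name "myvol" is illustrative, not part of this change); these are the same cluster options exercised by the tests below:

  # Keep writes inside the defined halo: never swap in an out-of-region brick
  gluster volume set myvol cluster.halo-failover-enabled off
  # With fixed quorum, losing an in-region brick then leaves the mount RO
  gluster volume set myvol cluster.quorum-type fixed
  gluster volume set myvol cluster.quorum-count 2

  # Allow an out-of-halo brick to be swapped in to satisfy min-replicas
  gluster volume set myvol cluster.halo-failover-enabled on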
Test Plan:
- run prove -v tests/basic/halo.t
- run prove -v tests/basic/halo-disable.t
- run prove -v tests/basic/halo-failover-enabled.t
- run prove -v tests/basic/halo-failover-disabled.t
Reviewers: dph, cjh, jackl, mmckeen
Reviewed By: mmckeen
Conflicts:
xlators/cluster/afr/src/afr.h
xlators/mount/fuse/utils/mount.glusterfs.in
Change-Id: Ia3ebf83f34b53118ca4491a3c4b66a178cc9795e
Signed-off-by: Kevin Vigor <kvigor@fb.com>
Reviewed-on: http://review.gluster.org/16275
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Shreyas Siravara <sshreyas@fb.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Diffstat (limited to 'tests')
-rw-r--r--   tests/basic/halo-failover-disabled.t                                             67
-rw-r--r--   tests/basic/halo-failover-enabled.t (renamed from tests/basic/halo-failover.t)    24
2 files changed, 81 insertions(+), 10 deletions(-)
diff --git a/tests/basic/halo-failover-disabled.t b/tests/basic/halo-failover-disabled.t
new file mode 100644
index 00000000000..05ccd7e822a
--- /dev/null
+++ b/tests/basic/halo-failover-disabled.t
@@ -0,0 +1,67 @@
+#!/bin/bash
+#
+# Tests that fail-over works correctly for Halo Geo-replication
+#
+# 1. Create a volume @ 3x replication w/ halo + quorum enabled
+# 2. Write some data, background it & fail a brick
+# 3. The expected result is that the writes fail-over to the 3rd
+#    brick immediatelly, and md5s will show they are equal once
+#    the write completes.
+# 4. The mount should also be RW after the brick is killed as
+#    quorum will be immediately restored by swapping in the
+#    other brick.
+#
+. $(dirname $0)/../include.rc
+. $(dirname $0)/../volume.rc
+
+cleanup;
+
+TEST glusterd
+TEST pidof glusterd
+TEST $CLI volume create $V0 replica 3 $H0:$B0/${V0}{0,1,2}
+TEST $CLI volume set $V0 cluster.background-self-heal-count 0
+TEST $CLI volume set $V0 cluster.shd-max-threads 1
+TEST $CLI volume set $V0 cluster.halo-enabled True
+TEST $CLI volume set $V0 cluster.halo-max-latency 9999
+TEST $CLI volume set $V0 cluster.halo-shd-max-latency 9999
+TEST $CLI volume set $V0 cluster.halo-max-replicas 2
+TEST $CLI volume set $V0 cluster.halo-failover-enabled off
+TEST $CLI volume set $V0 cluster.quorum-type fixed
+TEST $CLI volume set $V0 cluster.quorum-count 2
+TEST $CLI volume set $V0 cluster.heal-timeout 5
+TEST $CLI volume set $V0 cluster.entry-self-heal on
+TEST $CLI volume set $V0 cluster.data-self-heal on
+TEST $CLI volume set $V0 cluster.metadata-self-heal on
+TEST $CLI volume set $V0 cluster.self-heal-daemon on
+TEST $CLI volume set $V0 cluster.eager-lock off
+# Use a large ping time here so the spare brick is not marked up
+# based on the ping time. The only way it can get marked up is
+# by being swapped in via the down event (which is what we are disabling).
+TEST $CLI volume set $V0 network.ping-timeout 1000
+TEST $CLI volume set $V0 cluster.choose-local off
+TEST $CLI volume start $V0
+TEST glusterfs --volfile-id=/$V0 --volfile-server=$H0 $M0 --attribute-timeout=0 --entry-timeout=0
+cd $M0
+
+# Write some data to the mount
+dd if=/dev/urandom of=$M0/test bs=1k count=200 oflag=sync &> /dev/null &
+
+sleep 0.5
+# Kill the first brick, fail-over to 3rd
+TEST kill_brick $V0 $H0 $B0/${V0}0
+
+# Test that quorum should fail and the mount is RO, the reason here
+# is that although there _is_ another brick running which _could_
+# take the failed bricks place, it is not marked "up" so quorum
+# will not be fullfilled. If we waited 1000 second the brick would
+# indeed be activated based on ping time, but for our test we want
+# the decision to be solely "down event" driven, not ping driven.
+TEST ! dd if=/dev/urandom of=$M0/test_rw bs=1M count=1
+
+TEST $CLI volume start $V0 force
+sleep 2
+
+# Test that quorum should be restored and the file is writable
+TEST dd if=/dev/urandom of=$M0/test_rw bs=1M count=1
+
+cleanup
diff --git a/tests/basic/halo-failover.t b/tests/basic/halo-failover-enabled.t
index 220fa1f2207..e897d076813 100644
--- a/tests/basic/halo-failover.t
+++ b/tests/basic/halo-failover-enabled.t
@@ -22,6 +22,7 @@ TEST $CLI volume create $V0 replica 3 $H0:$B0/${V0}{0,1,2}
 TEST $CLI volume set $V0 cluster.background-self-heal-count 0
 TEST $CLI volume set $V0 cluster.shd-max-threads 1
 TEST $CLI volume set $V0 cluster.halo-enabled True
+TEST $CLI volume set $V0 cluster.halo-failover-enabled on
 TEST $CLI volume set $V0 cluster.halo-max-replicas 2
 TEST $CLI volume set $V0 cluster.quorum-type fixed
 TEST $CLI volume set $V0 cluster.quorum-count 2
@@ -38,26 +39,29 @@ TEST glusterfs --volfile-id=/$V0 --volfile-server=$H0 $M0 --attribute-timeout=0
 cd $M0

 # Write some data to the mount
-dd if=/dev/urandom of=$M0/test bs=1k count=200 oflag=sync &> /dev/null &
+dd if=/dev/urandom of=$M0/test bs=1k count=200 conv=fsync
+
+# Calulate the MD5s on the two up volumes.
+MD5_B0=$(md5sum $B0/${V0}0/test | cut -d' ' -f1)
+MD5_B1=$(md5sum $B0/${V0}1/test | cut -d' ' -f1)
+
+# Verify they are the same
+TEST [ "$MD5_B0" == "$MD5_B1" ]

 sleep 0.5
 # Kill the first brick, fail-over to 3rd
 TEST kill_brick $V0 $H0 $B0/${V0}0

 # Test the mount is still RW (i.e. quorum works)
-TEST dd if=/dev/urandom of=$M0/test_rw bs=1M count=1
-
-# Wait for the dd to finish
-wait
-sleep 3
+TEST dd if=/dev/urandom of=$M0/test_rw bs=1M count=1 conv=fsync

 # Calulate the MD5s
-MD5_B0=$(md5sum $B0/${V0}0/test | cut -d' ' -f1)
-MD5_B1=$(md5sum $B0/${V0}1/test | cut -d' ' -f1)
-MD5_B2=$(md5sum $B0/${V0}2/test | cut -d' ' -f1)
+MD5_B0=$(md5sum $B0/${V0}0/test_rw | cut -d' ' -f1)
+MD5_B1=$(md5sum $B0/${V0}1/test_rw | cut -d' ' -f1)
+MD5_B2=$(md5sum $B0/${V0}2/test_rw | cut -d' ' -f1)

 # Verify they are the same
-TEST [ "$MD5_B1" == "$MD5_B2" ]
+TEST [ x"$MD5_B1" == x"$MD5_B2" ]

 # Verify the failed brick has a different MD5
 TEST [ x"$MD5_B0" != x"$MD5_B1" ]