author     Ravishankar N <ravishankar@redhat.com>    2015-04-15 22:22:08 +0530
committer  Vijay Bellur <vbellur@redhat.com>         2015-05-05 00:06:34 -0700
commit     25e8e74eb7b81ccd114a9833371a3f72d284c48d (patch)
tree       0c82b2ad9133c55c74852ce65d027bc36ed6cac5 /tests
parent     6d7428d2018c061ca2791443bd90980f9755ded3 (diff)
afr: add arbitration support
Add logic in afr to work in conjunction with the arbiter xlator when a
replica 3 arbiter volume is created. More specifically, this patch:
* Enables full locks for afr data transaction for such volumes.
* Removes the upfront marking of pending xattrs at the time of pre-op
and defers it to post-op. (This is an arbiter-independent change and is
made for all afr transactions.)
* After pre-op stage, check if we can proceed with the fop stage without
ending up in split-brain by examining the changelog xattrs.
* Unwinds the fop with failure if only one source was available at the
time of pre-op and the fop happened to fail on that particular source brick.
* Skips data self-heal if arbiter brick is the only source available.
* Adds the arbiter-count option to the shd graph.
This patch is part of the arbiter logic implementation for 3-way AFR,
details of which can be found at http://review.gluster.org/#/c/9656/
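As a rough illustration of the source-selection check described above, the sketch below models how examining changelog (pending) xattrs can decide up front whether a fop may proceed or must be unwound to avoid split-brain. This is a hypothetical, simplified Python model, not the actual afr C implementation; the matrix layout, function names, and the fixed arbiter index are invented for illustration.

```python
def sources(pending, up):
    """Return bricks that no *up* brick blames via its pending xattrs.

    pending[i][j] is the count of operations brick i recorded as still
    pending on brick j; a nonzero count means i blames j as stale.
    """
    n = len(pending)
    blamed = set()
    for i in range(n):
        if not up[i]:
            continue  # xattrs on a down brick cannot be read
        for j in range(n):
            if i != j and pending[i][j] > 0:
                blamed.add(j)
    return [j for j in range(n) if j not in blamed]


def can_proceed(pending, up, arbiter=2):
    """Fail the fop up front if it could only end in split-brain:
    no up-and-unblamed source remains, or the arbiter is the lone source
    (the arbiter holds no data, so it cannot serve data fops)."""
    srcs = [s for s in sources(pending, up) if up[s]]
    if not srcs:
        return False  # every brick is blamed: split-brain
    if srcs == [arbiter]:
        return False  # arbiter is the only source: data I/O must fail
    return True


# Brick 1 is down; bricks 0 and 2 blame it, but brick 0 still holds data.
print(can_proceed([[0, 1, 0], [0, 0, 0], [0, 1, 0]],
                  [True, False, True]))  # -> True
```

The model mirrors the test below: once only the arbiter remains as a source, `can_proceed` returns False for data fops, while metadata fops (not modeled here) can still be served.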
Change-Id: I9603db9d04de5626eb2f4d8d959ef5b46113561d
BUG: 1199985
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: http://review.gluster.org/10258
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Diffstat (limited to 'tests')
-rw-r--r--  tests/basic/afr/arbiter.t | 64
1 file changed, 64 insertions(+), 0 deletions(-)
diff --git a/tests/basic/afr/arbiter.t b/tests/basic/afr/arbiter.t
new file mode 100644
index 00000000000..a9d485cd7b4
--- /dev/null
+++ b/tests/basic/afr/arbiter.t
@@ -0,0 +1,64 @@
+#!/bin/bash
+
+. $(dirname $0)/../../include.rc
+. $(dirname $0)/../../volume.rc
+. $(dirname $0)/../../afr.rc
+cleanup;
+
+TEST glusterd;
+TEST pidof glusterd
+
+# Non arbiter replica 3 volumes should not have arbiter-count option enabled.
+TEST $CLI volume create $V0 replica 3 $H0:$B0/${V0}{0,1,2}
+TEST $CLI volume start $V0
+TEST glusterfs --volfile-id=$V0 --volfile-server=$H0 --entry-timeout=0 $M0;
+TEST ! stat $M0/.meta/graphs/active/$V0-replicate-0/options/arbiter-count
+TEST umount $M0
+TEST $CLI volume stop $V0
+TEST $CLI volume delete $V0
+
+# Create and mount a replica 3 arbiter volume.
+TEST $CLI volume create $V0 replica 3 arbiter 1 $H0:$B0/${V0}{0,1,2}
+TEST $CLI volume set $V0 performance.write-behind off
+TEST $CLI volume set $V0 cluster.self-heal-daemon off
+TEST $CLI volume start $V0
+TEST glusterfs --volfile-id=$V0 --volfile-server=$H0 --entry-timeout=0 $M0;
+TEST stat $M0/.meta/graphs/active/$V0-replicate-0/options/arbiter-count
+EXPECT "1" cat $M0/.meta/graphs/active/$V0-replicate-0/options/arbiter-count
+
+# Write data and metadata
+TEST `echo hello >> $M0/file`
+TEST setfattr -n user.name -v value1 $M0/file
+
+# Data I/O will fail if arbiter is the only source.
+TEST kill_brick $V0 $H0 $B0/${V0}0
+TEST `echo "B0 is down, B1 and B2 are sources" >> $M0/file`
+TEST setfattr -n user.name -v value2 $M0/file
+TEST $CLI volume start $V0 force
+EXPECT_WITHIN $PROCESS_UP_TIMEOUT "1" afr_child_up_status $V0 0
+TEST kill_brick $V0 $H0 $B0/${V0}1
+TEST `echo "B2 is down, B3 is the only source, writes will fail" >> $M0/file`
+TEST ! cat $M0/file
+# Metadata I/O should still succeed.
+TEST getfattr -n user.name $M0/file
+TEST setfattr -n user.name -v value3 $M0/file
+
+# shd should not data self-heal from arbiter to the sinks.
+TEST $CLI volume set $V0 cluster.self-heal-daemon on
+EXPECT_WITHIN $PROCESS_UP_TIMEOUT "Y" glustershd_up_status
+TEST $CLI volume heal $V0
+EXPECT_WITHIN $HEAL_TIMEOUT '1' echo $(count_sh_entries $B0/$V0"1")
+EXPECT_WITHIN $HEAL_TIMEOUT '1' echo $(count_sh_entries $B0/$V0"2")
+
+TEST $CLI volume start $V0 force
+EXPECT_WITHIN $PROCESS_UP_TIMEOUT "Y" glustershd_up_status
+EXPECT_WITHIN $PROCESS_UP_TIMEOUT "1" afr_child_up_status $V0 1
+TEST $CLI volume heal $V0
+EXPECT 0 afr_get_pending_heal_count $V0
+
+# I/O can resume again.
+TEST cat $M0/file
+TEST getfattr -n user.name $M0/file
+TEST `echo append>> $M0/file`
+TEST umount $M0
+cleanup