author | Niels de Vos <ndevos@redhat.com> | 2014-12-26 12:57:48 +0100
---|---|---
committer | Vijay Bellur <vbellur@redhat.com> | 2015-01-06 03:24:24 -0800
commit | 64954eb3c58f4ef077e54e8a3726fd2d27419b12 (patch) |
tree | 52cd5a39bbfda7442a5f0955ac2800b74a45b58a /tests/bugs/glusterfs/bug-873962.t |
parent | c4ab37c02e9edc23d0637e23d6f2b42d0827dad2 (diff) |
tests: move all test-cases into component subdirectories
There are around 300 regression tests, with about 250 of them in
tests/bugs. Running a partial set of tests/bugs is not easy because it
is a single flat directory holding almost all of the tests.
It would be valuable to make partial runs of tests/bugs easier, and to
allow the use of multiple build hosts for a single commit, each running
a subset of the tests for a quicker result.
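With the tests grouped into component subdirectories, selecting a subset
becomes plain shell globbing. A minimal sketch of what a per-component run
could look like (illustrative only; the project's actual regression harness
may invoke the .t files differently):

```bash
#!/bin/bash
# Run only the tests for one component; any other directory under
# tests/bugs/ (afr, quota, glusterd, ...) can be substituted.
for t in tests/bugs/glusterfs/*.t; do
    echo "=== running $t ==="
    bash "$t" || exit 1    # stop at the first failing test case
done
```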
Additional changes made:
- correct the include path for *.rc shell libraries and *.py utils
- make the testcases pass checkpatch
- the arequal-checksum check in afr/self-heal.t was never executed; now it is
- include.rc now complains loudly if it fails to find env.rc (a sketch of
  such a guard follows below)
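A fail-loudly guard of that sort might look like the following sketch; the
exact message wording and the relative path to env.rc are assumptions, and
only the abort-with-a-clear-error pattern is the point:

```bash
# Hypothetical sketch of the env.rc guard; path and wording are assumed.
env_rc="$(dirname $0)/env.rc"
if [ ! -f "$env_rc" ]; then
    echo "FATAL: $env_rc not found, run ./configure to generate it" >&2
    exit 1
fi
. "$env_rc"
```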
Change-Id: I26ffd067e9853d3be1fd63b2f37d8aa0fd1b4fea
BUG: 1178685
Reported-by: Emmanuel Dreyfus <manu@netbsd.org>
Reported-by: Atin Mukherjee <amukherj@redhat.com>
URL: http://www.gluster.org/pipermail/gluster-devel/2014-December/043414.html
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/9353
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Reviewed-by: Emmanuel Dreyfus <manu@netbsd.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Diffstat (limited to 'tests/bugs/glusterfs/bug-873962.t')
-rwxr-xr-x | tests/bugs/glusterfs/bug-873962.t | 107
1 file changed, 107 insertions, 0 deletions
```diff
diff --git a/tests/bugs/glusterfs/bug-873962.t b/tests/bugs/glusterfs/bug-873962.t
new file mode 100755
index 00000000000..492d0285497
--- /dev/null
+++ b/tests/bugs/glusterfs/bug-873962.t
@@ -0,0 +1,107 @@
+#!/bin/bash
+
+#AFR TEST-IDENTIFIER SPLIT-BRAIN
+. $(dirname $0)/../../include.rc
+. $(dirname $0)/../../volume.rc
+
+cleanup;
+
+TEST glusterd
+TEST pidof glusterd
+TEST $CLI volume info;
+
+B0_hiphenated=`echo $B0 | tr '/' '-'`
+TEST $CLI volume create $V0 replica 2 $H0:$B0/${V0}{1,2}
+
+# If we allow self-heal to happen in the background, we'll get spurious
+# failures - especially at the point labeled "FAIL HERE" but
+# occasionally elsewhere. This behavior is very timing-dependent. It
+# doesn't show up in Jenkins, but it does on JD's and KP's machines, and
+# it got sharply worse because of an unrelated fsync change (6ae6f3d)
+# which changed timing. Putting anything at the FAIL HERE marker tends
+# to make it go away most of the time on affected machines, even if the
+# "anything" is unrelated.
+#
+# What's going on is that the I/O on the first mountpoint is allowed to
+# complete even though self-heal is still in progress and the state on
+# disk does not reflect its result. In fact, the state changes during
+# self-heal create the appearance of split brain when the second I/O
+# comes in, so that fails even though we haven't actually been in split
+# brain since the manual xattr operations. By disallowing background
+# self-heal, we ensure that the second I/O can't happen before self-heal
+# is complete, because it has to follow the first I/O which now has to
+# follow self-heal.
+TEST $CLI volume set $V0 cluster.background-self-heal-count 0
+
+#Make sure self-heal is not triggered when the bricks are re-started
+TEST $CLI volume set $V0 cluster.self-heal-daemon off
+TEST $CLI volume set $V0 performance.stat-prefetch off
+TEST $CLI volume start $V0
+TEST glusterfs --entry-timeout=0 --attribute-timeout=0 -s $H0 --volfile-id=$V0 $M0 --direct-io-mode=enable
+TEST touch $M0/a
+TEST touch $M0/b
+TEST touch $M0/c
+TEST touch $M0/d
+echo "1" > $M0/b
+echo "1" > $M0/d
+TEST kill_brick $V0 $H0 $B0/${V0}2
+echo "1" > $M0/a
+echo "1" > $M0/c
+TEST setfattr -n trusted.mdata -v abc $M0/b
+TEST setfattr -n trusted.mdata -v abc $M0/d
+TEST $CLI volume start $V0 force
+EXPECT_WITHIN $CHILD_UP_TIMEOUT "1" afr_child_up_status $V0 1
+TEST kill_brick $V0 $H0 $B0/${V0}1
+echo "2" > $M0/a
+echo "2" > $M0/c
+TEST setfattr -n trusted.mdata -v def $M0/b
+TEST setfattr -n trusted.mdata -v def $M0/d
+TEST $CLI volume start $V0 force
+EXPECT_WITHIN $CHILD_UP_TIMEOUT "1" afr_child_up_status $V0 0
+EXPECT_WITHIN $CHILD_UP_TIMEOUT "1" afr_child_up_status $V0 1
+
+TEST glusterfs --entry-timeout=0 --attribute-timeout=0 -s $H0 --volfile-id=$V0 $M1 --direct-io-mode=enable
+
+#Files are in split-brain, so open should fail
+TEST ! cat $M0/a;
+TEST ! cat $M1/a;
+TEST cat $M0/b;
+TEST cat $M1/b;
+
+#Reset split-brain status
+TEST setfattr -n trusted.afr.$V0-client-1 -v 0x000000000000000000000000 $B0/${V0}1/a;
+TEST setfattr -n trusted.afr.$V0-client-1 -v 0x000000000000000000000000 $B0/${V0}1/b;
+
+#The operations should do self-heal and give correct output
+EXPECT "2" cat $M0/a;
+# FAIL HERE - see comment about cluster.background-self-heal-count above.
+EXPECT "2" cat $M1/a;
+TEST dd if=$M0/b of=/dev/null bs=1024k
+EXPECT "def" getfattr -n trusted.mdata --only-values $M0/b 2>/dev/null
+EXPECT "def" getfattr -n trusted.mdata --only-values $M1/b 2>/dev/null
+
+EXPECT_WITHIN $UMOUNT_TIMEOUT "Y" force_umount $M0
+EXPECT_WITHIN $UMOUNT_TIMEOUT "Y" force_umount $M1
+
+TEST $CLI volume set $V0 cluster.data-self-heal off
+TEST $CLI volume set $V0 cluster.metadata-self-heal off
+
+TEST glusterfs --entry-timeout=0 --attribute-timeout=0 -s $H0 --volfile-id=$V0 $M0 --direct-io-mode=enable
+TEST glusterfs --entry-timeout=0 --attribute-timeout=0 -s $H0 --volfile-id=$V0 $M1 --direct-io-mode=enable
+
+#Files are in split-brain, so open should fail
+TEST ! cat $M0/c
+TEST ! cat $M1/c
+TEST cat $M0/d
+TEST cat $M1/d
+
+TEST setfattr -n trusted.afr.$V0-client-1 -v 0x000000000000000000000000 $B0/${V0}1/c
+TEST setfattr -n trusted.afr.$V0-client-1 -v 0x000000000000000000000000 $B0/${V0}1/d
+
+#The operations should NOT do self-heal but give correct output
+EXPECT "2" cat $M0/c
+EXPECT "2" cat $M1/c
+EXPECT "1" cat $M0/d
+EXPECT "1" cat $M1/d
+
+cleanup;
```