| author | karthik-us <ksubrahm@redhat.com> | 2018-08-13 16:29:49 +0530 |
|---|---|---|
| committer | Pranith Kumar Karampuri <pkarampu@redhat.com> | 2018-08-14 04:30:25 +0000 |
| commit | e4015ece284ad3f5de8a1632984594480533e0a0 | |
| tree | 0d787d81698e057d567c98bc334ac897ab7ab430 | |
| parent | 58d2c13c7996d6d192cc792eca372538673f808e | |
tests: Fix for gfid-mismatch-resolution-with-fav-child-policy.t failure
This test was retried once on build
https://build.gluster.org/job/regression-on-demand-multiplex/174/
(logs for the first try are not available with this build).
The test case failed at line #47, which checks that the pending heal
count is 0. Line #51 passed, which means the gfid split-brain on the
file was resolved and both bricks had the same gfid. (A sketch of the
kind of checks involved follows.)
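For context, such checks follow the stock test-framework pattern shown
below. This is a sketch only, assuming the `get_pending_heal_count` and
`gf_get_gfid_xattr` helpers from the framework's .rc includes; the file
name `file` under the brick roots is illustrative, not the test's actual
path, and the real assertions at lines #47 and #51 live in the test file.

```bash
# Line #47-style check: wait for the pending heal count to drop to 0.
EXPECT_WITHIN $HEAL_TIMEOUT "0" get_pending_heal_count $V0

# Line #51-style check: both bricks must report the same gfid for the file.
gfid_brick0=$(gf_get_gfid_xattr $B0/${V0}0/file)
gfid_brick1=$(gf_get_gfid_xattr $B0/${V0}1/file)
TEST [ "$gfid_brick0" == "$gfid_brick1" ]
```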
It failed again at line #54, which checks that the md5sums of the file
match on both bricks. At that point, the md5sum on the brick where the
file was impunged matched that of a newly created empty file, which
means data heal had not happened for the file.
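The md5sum comparison is of this shape (a minimal sketch; the brick
paths and the file name are assumptions for illustration, since the
checksums are taken directly on the brick backends):

```bash
# Compare the file's content on the two brick backends; after a
# successful data heal both copies must produce the same checksum.
md5_brick0=$(md5sum $B0/${V0}0/file | awk '{print $1}')
md5_brick1=$(md5sum $B0/${V0}1/file | awk '{print $1}')
TEST [ "$md5_brick0" == "$md5_brick1" ]
```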
At line #64, enabling granular-entry-heal failed, but without the logs
it is not possible to debug this issue.
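For reference, granular entry heal is toggled through the heal CLI,
roughly as below (a sketch of the command family; the exact assertion
at line #64 is in the test file):

```bash
# Enable granular entry self-heal on the volume.
TEST $CLI volume heal $V0 granular-entry-heal enable
```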
Change-Id: I56d854dbb9e188cafedfd24a9d463603ae79bd06
fixes: bz#1615331
Signed-off-by: karthik-us <ksubrahm@redhat.com>
Diffstat (limited to 'tests')
| -rw-r--r-- | tests/basic/afr/gfid-mismatch-resolution-with-fav-child-policy.t | 1 |
|---|---|---|

1 file changed, 1 insertion, 0 deletions
```diff
diff --git a/tests/basic/afr/gfid-mismatch-resolution-with-fav-child-policy.t b/tests/basic/afr/gfid-mismatch-resolution-with-fav-child-policy.t
index 8d80e5e3527..f4aa351e461 100644
--- a/tests/basic/afr/gfid-mismatch-resolution-with-fav-child-policy.t
+++ b/tests/basic/afr/gfid-mismatch-resolution-with-fav-child-policy.t
@@ -10,6 +10,7 @@ TEST pidof glusterd
 TEST $CLI volume create $V0 replica 2 $H0:$B0/${V0}{0,1}
 TEST $CLI volume start $V0
 TEST $GFS --volfile-id=/$V0 --volfile-server=$H0 $M0
+TEST $CLI volume set $V0 cluster.heal-timeout 5
 TEST $CLI volume set $V0 self-heal-daemon off
 TEST $CLI volume set $V0 cluster.data-self-heal off
 TEST $CLI volume set $V0 cluster.metadata-self-heal off
```
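The fix itself is the single added line: `cluster.heal-timeout` is the
interval, in seconds, at which the self-heal daemon re-crawls its heal
indices (default 600). Dropping it to 5 means that once the daemon is
turned back on later in the test, pending heals are picked up almost
immediately instead of racing the EXPECT_WITHIN window. A minimal
sketch of the intended interaction, assuming the standard framework
helpers (the surrounding steps are illustrative, not the test's exact
sequence):

```bash
# Crawl every 5 seconds instead of the default 600.
TEST $CLI volume set $V0 cluster.heal-timeout 5

# ... later in the test, once heals are expected ...
TEST $CLI volume set $V0 self-heal-daemon on
# The shd now re-crawls within 5s, so this converges inside the window.
EXPECT_WITHIN $HEAL_TIMEOUT "0" get_pending_heal_count $V0
```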