| author | Mohit Agrawal <moagrawa@redhat.com> | 2016-08-19 10:33:50 +0530 | 
|---|---|---|
| committer | Raghavendra G <rgowdapp@redhat.com> | 2016-09-12 00:38:29 -0700 | 
| commit | c4e9ec653c946002ab6d4c71ee8e6df056438a04 (patch) | |
| tree | f8a2e31e1c686098c2b340b651f36596fc22c92b /tests | |
| parent | 801cd07a4c6ec65ff930b2ae6bb5e405ccd03334 (diff) | |
dht: "replica.split-brain-status" attribute value is not correct
Problem: In a distributed-replicate volume, the attribute
         "replica.split-brain-status" does not report a split-brain
         condition even though the directory is in split-brain.
         If a directory is in split-brain on multiple replica pairs,
         the attribute does not show the full list of replica pairs.
Solution: Update the dht_aggregate code to correctly aggregate the
          xattr values from all subvolumes for this key.
Fix:      1) The function getChoices extracts the choices from the
             split-brain status string.
          2) The function add_opt appends the choices to a local buffer
             for storage in the dictionary.
          3) For the key "replica.split-brain-status", dht_aggregate
             calls dht_aggregate_split_brain_xattr to prepare the list
             (see the inspection sketch after this list).
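The aggregation itself happens in DHT's C code, which is outside this
tests-only diffstat, but its effect is visible from any client mount.
A minimal inspection sketch, assuming a placeholder mount point and the
usual <volname>-client-N choice naming (both assumptions, not part of
the patch):

```bash
# Sketch only: MNT is a placeholder mount point, not from the patch.
MNT=/mnt/glusterfs
# --only-values prints the raw xattr value without the name= wrapper.
status=$(getfattr -n replica.split-brain-status --only-values $MNT/tmp1)
# After the fix, a directory in split-brain on several replica pairs
# should list the choices from every affected pair in this one value.
echo "$status" | grep -o '[a-zA-Z0-9_-]*-client-[0-9]*' | sort -u
```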
Test:     To verify the patch, follow the steps below
          1) Create a distributed-replicate volume and mount it
          2) Stop the self-heal daemon
          3) Create files and directories on the mount point
             mkdir test{1..5}; touch tmp{1..5}
          4) Kill the brick process on one node of a replica set
             pkill -9 glusterfsd
          5) Change the permissions of the directories on the mount point
             chmod 755 test{1..5}
          6) Restart the brick process on that node with the force option
          7) Kill the brick process on the other node of the same replica set
          8) Change the permissions of the directories again on the mount point
             chmod 766 test{1..5}
          9) Repeat steps 4-8 on the other replica sets as well
             (a condensed reproduction sketch follows this list)
          10) Checking the heal status on the server now shows the
              directories in split-brain on all replica sets
          11) Checking the replica.split-brain-status attribute on the
              mount point shows an incorrect split-brain status
          12) After applying the patch, the attribute shows the correct value
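For reference, here is a condensed, hypothetical transcript of the manual
steps above; the volume name testvol and the mount point are placeholders,
and the brick kills must be run on the appropriate nodes:

```bash
# Hypothetical reproduction sketch of steps 1-11; adjust names/paths.
gluster volume set testvol cluster.self-heal-daemon off  # step 2
cd /mnt/glusterfs
mkdir test{1..5}; touch tmp{1..5}                        # step 3
pkill -9 glusterfsd                                      # step 4: one brick of a replica set
chmod 755 test{1..5}                                     # step 5
gluster volume start testvol force                       # step 6
pkill -9 glusterfsd                                      # step 7: run on the other node of the set
chmod 766 test{1..5}                                     # step 8
gluster volume start testvol force
# step 9: repeat steps 4-8 for each remaining replica set
gluster volume heal testvol info split-brain             # step 10
getfattr -n replica.split-brain-status test1             # step 11
```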
BUG: 1368312
Change-Id: Icdfd72005a4aa82337c342762775a3d1761bbe4a
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
Reviewed-on: http://review.gluster.org/15201
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Diffstat (limited to 'tests')
| -rw-r--r-- | tests/bugs/bug-1368312.t | 84 | 
1 file changed, 84 insertions(+), 0 deletions(-)
```diff
diff --git a/tests/bugs/bug-1368312.t b/tests/bugs/bug-1368312.t
new file mode 100644
index 00000000000..135048f448e
--- /dev/null
+++ b/tests/bugs/bug-1368312.t
@@ -0,0 +1,84 @@
+#!/bin/bash
+. $(dirname $0)/../include.rc
+. $(dirname $0)/../volume.rc
+cleanup;
+
+function compare_get_split_brain_status {
+        local path=$1
+        local choice=$2
+        echo `getfattr -n replica.split-brain-status $path` | cut -f2 -d"=" | sed -e 's/^"//'  -e 's/"$//' | grep $choice
+        if [ $? -ne 0 ]
+        then
+                echo 1
+        else
+                echo 0
+        fi
+
+}
+
+TEST glusterd
+TEST pidof glusterd
+TEST $CLI volume create $V0 replica 2 $H0:$B0/${V0}{0,1,2,3,4,5}
+TEST $CLI volume start $V0
+
+#Disable self-heal-daemon
+TEST $CLI volume set $V0 cluster.self-heal-daemon off
+
+TEST glusterfs --volfile-id=$V0 --volfile-server=$H0 --entry-timeout=0 $M0;
+
+TEST mkdir $M0/tmp1
+
+#Create metadata split-brain
+TEST kill_brick $V0 $H0 $B0/${V0}0
+TEST chmod 666 $M0/tmp1
+TEST $CLI volume start $V0 force
+TEST kill_brick $V0 $H0 $B0/${V0}1
+EXPECT_WITHIN $PROCESS_UP_TIMEOUT "1" afr_child_up_status $V0 0
+
+TEST chmod 757 $M0/tmp1
+
+TEST $CLI volume start $V0 force
+EXPECT_WITHIN $PROCESS_UP_TIMEOUT "1" afr_child_up_status $V0 0
+EXPECT_WITHIN $PROCESS_UP_TIMEOUT "1" afr_child_up_status $V0 1
+
+EXPECT 2 get_pending_heal_count $V0
+
+
+TEST kill_brick $V0 $H0 $B0/${V0}2
+TEST chmod 755 $M0/tmp1
+TEST $CLI volume start $V0 force
+TEST kill_brick $V0 $H0 $B0/${V0}3
+EXPECT_WITHIN $PROCESS_UP_TIMEOUT "1" afr_child_up_status $V0 2
+
+TEST chmod 766 $M0/tmp1
+
+TEST $CLI volume start $V0 force
+EXPECT_WITHIN $PROCESS_UP_TIMEOUT "1" afr_child_up_status $V0 2
+EXPECT_WITHIN $PROCESS_UP_TIMEOUT "1" afr_child_up_status $V0 3
+
+EXPECT 4 get_pending_heal_count $V0
+
+TEST kill_brick $V0 $H0 $B0/${V0}4
+TEST chmod 765 $M0/tmp1
+TEST $CLI volume start $V0 force
+TEST kill_brick $V0 $H0 $B0/${V0}5
+EXPECT_WITHIN $PROCESS_UP_TIMEOUT "1" afr_child_up_status $V0 4
+
+TEST chmod 756 $M0/tmp1
+
+TEST $CLI volume start $V0 force
+EXPECT_WITHIN $PROCESS_UP_TIMEOUT "1" afr_child_up_status $V0 4
+EXPECT_WITHIN $PROCESS_UP_TIMEOUT "1" afr_child_up_status $V0 5
+
+EXPECT 6 get_pending_heal_count $V0
+
+cd $M0
+EXPECT 0 compare_get_split_brain_status ./tmp1 patchy-client-0
+EXPECT 0 compare_get_split_brain_status ./tmp1 patchy-client-1
+EXPECT 0 compare_get_split_brain_status ./tmp1 patchy-client-2
+EXPECT 0 compare_get_split_brain_status ./tmp1 patchy-client-3
+EXPECT 0 compare_get_split_brain_status ./tmp1 patchy-client-4
+EXPECT 0 compare_get_split_brain_status ./tmp1 patchy-client-5
+
+cd -
+cleanup
```
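To run the committed regression test locally, the usual entry points
should work, assuming a built glusterfs source tree with the standard
test harness in place (prove comes from Perl's Test::Harness):

```bash
# From the root of a glusterfs source checkout.
prove -vf tests/bugs/bug-1368312.t
# Alternatively, the repository's wrapper script can run a single test:
./run-tests.sh tests/bugs/bug-1368312.t
```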
