author     Poornima G <pgurusid@redhat.com>        2017-04-13 16:20:29 +0530
committer  Raghavendra G <rgowdapp@redhat.com>     2017-04-18 02:16:11 -0400
commit     94196dee1f1b0e22faab69cd9b1b1c70ba3d2f6f (patch)
tree       a49dae1cd1da8079de34d5bfa5be3589927c774a /tests/bugs/readdir-ahead/bug-1436090.t
parent     a9b5333d7bae6e20ffef07dffcda49eaf9d6823b (diff)
dht: Add readdir-ahead in rebalance graph if parallel-readdir is on
Issue:
The value of the linkto xattr is generally the name of dht's
next subvol; this requires that the next subvol of dht does not
change for the lifetime of the volume. But with parallel
readdir enabled, the readdir-ahead xlator loaded below dht is
optional.
The linkto xattr for the first subvol, when:
- parallel readdir is enabled : "<volname>-readdir-head-0"
- plain distribute volume : "<volname>-client-0"
- distribute replicate volume : "<volname>-afr-0"
The value of the linkto xattr is "<volname>-readdir-head-0"
when parallel readdir is enabled, and "<volname>-client-0" when
it is disabled. But dht_lookup takes care of healing if it
cannot identify which subvol the linkto xattr points to.
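
As an illustration only (not part of this patch), the linkto
value can be inspected directly on a brick. This is a minimal
sketch: the brick path and file name are hypothetical
placeholders, assuming the standard xattr name
trusted.glusterfs.dht.linkto that dht stores on link files:

    # Run on the brick that holds the zero-byte link file created by dht.
    # The value names the hashed subvol in the client graph, e.g.
    # "<volname>-client-0", or "<volname>-readdir-head-0" when
    # parallel-readdir is enabled.
    getfattr -n trusted.glusterfs.dht.linkto -e text /bricks/brick1/dir1/foo
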
In dht_lookup_cbk, if the linkto xattr is found to be
"<volname>-client-0" and parallel readdir is enabled, dht cannot
interpret the value "<volname>-client-0" because it expects
"<volname>-readdir-head-0". In that case, dht_lookup_everywhere
is issued, and the linkto file is unlinked and recreated with
the right linkto xattr. The issue arises when parallel readdir
is enabled and the mount point accesses a file that is currently
being migrated. Since the rebalance process doesn't have the
parallel-readdir feature, it expects "<volname>-client-0",
whereas the mount expects "<volname>-readdir-head-0". Thus, at
some point, either the mount or the rebalance will fail.
Solution:
Enable parallel-readdir for rebalance as well, and then do not
allow enabling/disabling parallel-readdir while a rebalance is
in progress.
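
As a rough sketch of the resulting CLI behaviour, mirroring the
test added below (the volume name "testvol" is a placeholder):

    # While a rebalance is running, toggling the option is rejected.
    gluster volume rebalance testvol start force
    gluster volume set testvol parallel-readdir on    # expected to fail here
    # Once the rebalance completes, the same command succeeds.
    gluster volume set testvol parallel-readdir on
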
Change-Id: I241ab966bdd850e667f7768840540546f5289483
BUG: 1436090
Signed-off-by: Poornima G <pgurusid@redhat.com>
Reviewed-on: https://review.gluster.org/17056
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Diffstat (limited to 'tests/bugs/readdir-ahead/bug-1436090.t')
-rwxr-xr-x  tests/bugs/readdir-ahead/bug-1436090.t  44
1 files changed, 44 insertions, 0 deletions
diff --git a/tests/bugs/readdir-ahead/bug-1436090.t b/tests/bugs/readdir-ahead/bug-1436090.t
new file mode 100755
index 00000000000..58e9093f1c3
--- /dev/null
+++ b/tests/bugs/readdir-ahead/bug-1436090.t
@@ -0,0 +1,44 @@
+#!/bin/bash
+
+. $(dirname $0)/../../include.rc
+. $(dirname $0)/../../volume.rc
+. $(dirname $0)/../../cluster.rc
+
+cleanup;
+
+TEST launch_cluster 2;
+TEST $CLI_1 peer probe $H2;
+EXPECT_WITHIN $PROBE_TIMEOUT 1 peer_count
+
+$CLI_1 volume create $V0 $H1:$B1/$V0 $H2:$B2/$V0
+EXPECT 'Created' cluster_volinfo_field 1 $V0 'Status';
+
+$CLI_1 volume start $V0
+EXPECT 'Started' cluster_volinfo_field 1 $V0 'Status';
+
+TEST glusterfs -s $H1 --volfile-id $V0 $M0;
+TEST mkdir $M0/dir1
+
+# Create a large file (3.2 GB), so that rebalance takes time
+# Reading from /dev/urandom is slow, so we will cat it together
+dd if=/dev/urandom of=/tmp/FILE2 bs=64k count=10240
+for i in {1..5}; do
+    cat /tmp/FILE2 >> $M0/dir1/foo
+done
+
+TEST mv $M0/dir1/foo $M0/dir1/bar
+
+TEST $CLI_1 volume rebalance $V0 start force
+TEST ! $CLI_1 volume set $V0 parallel-readdir on
+EXPECT_WITHIN $REBALANCE_TIMEOUT "completed" cluster_rebalance_status_field 1 $V0
+EXPECT_WITHIN $REBALANCE_TIMEOUT "completed" cluster_rebalance_status_field 2 $V0
+TEST $CLI_1 volume set $V0 parallel-readdir on
+TEST mv $M0/dir1/bar $M0/dir1/foo
+
+EXPECT_WITHIN $UMOUNT_TIMEOUT "Y" force_umount $M0
+TEST glusterfs -s $H1 --volfile-id $V0 $M0;
+TEST $CLI_1 volume rebalance $V0 start force
+TEST ln $M0/dir1/foo $M0/dir1/bar
+EXPECT_WITHIN $REBALANCE_TIMEOUT "completed" cluster_rebalance_status_field 1 $V0
+EXPECT_WITHIN $REBALANCE_TIMEOUT "completed" cluster_rebalance_status_field 2 $V0
+cleanup;