author    | Pranith Kumar K <pkarampu@redhat.com> | 2015-01-07 20:48:51 +0530
committer | Raghavendra G <rgowdapp@redhat.com>   | 2015-01-11 21:19:28 -0800
commit    | 119bedfa06c3a6bb38089dafbcd6e0c2bd9b26cf (patch)
tree      | 56a5dcf254fa988d8126304868febd641af0e862 /tests/basic/quota-ancestry-building.t
parent    | 12022fc87ee6b12c07bff0381701e2977e722382 (diff)
tests: ancestry building quota tests on fuse mount
quota-anon-fd-nfs.t essentially tests the ancestry-building code path and
quota limit enforcement. Since running the NFS client and server on the same
machine can lead to deadlocks, it is better to trigger these code paths from a
fuse mount. Simply stopping the volume and starting it again wipes the inode
table clean; performing writes after that triggers ancestry building plus
quota checks.
Change-Id: I2d37a8662040a638d3fac3f9535d32498a5b434d
BUG: 1163543
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/9408
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Tested-by: Raghavendra G <rgowdapp@redhat.com>
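The test added by this patch waits for both AFR children with the harness's
EXPECT_WITHIN before issuing the writes that must fail. A minimal sketch of
that polling idiom, assuming a hypothetical stand-in check function
(`child_up` below is illustrative; the real test uses `afr_child_up_status`
from volume.rc, and this is not the actual include.rc implementation):

```shell
#!/bin/bash
# Simplified sketch of the EXPECT_WITHIN pattern from the Gluster test
# harness: poll a check command until it prints the expected value or a
# timeout expires. Illustrative re-implementation, not include.rc code.

expect_within() {
    local timeout=$1 expected=$2
    shift 2
    local deadline=$((SECONDS + timeout))
    while (( SECONDS < deadline )); do
        if [ "$("$@")" = "$expected" ]; then
            return 0            # check matched within the window
        fi
        sleep 1
    done
    return 1                    # timed out without a match
}

# Hypothetical check, standing in for 'afr_child_up_status $V0 0'.
child_up() { echo 1; }

expect_within 5 "1" child_up && echo PASS
```

The real harness macro also records the result for the TAP output; the loop
above only captures the retry-until-deadline behaviour.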
Diffstat (limited to 'tests/basic/quota-ancestry-building.t')
-rwxr-xr-x | tests/basic/quota-ancestry-building.t | 63
1 file changed, 63 insertions(+), 0 deletions(-)
diff --git a/tests/basic/quota-ancestry-building.t b/tests/basic/quota-ancestry-building.t
new file mode 100755
index 00000000000..e3817b2bd8c
--- /dev/null
+++ b/tests/basic/quota-ancestry-building.t
@@ -0,0 +1,63 @@
+#!/bin/bash
+
+. $(dirname $0)/../include.rc
+. $(dirname $0)/../volume.rc
+. $(dirname $0)/../fileio.rc
+
+cleanup;
+# This tests quota enforcing on an inode without any path information.
+# This should cover anon-fd type of workload as well.
+
+TESTS_EXPECTED_IN_LOOP=8
+TEST glusterd
+TEST pidof glusterd
+TEST $CLI volume info;
+
+TEST $CLI volume create $V0 replica 2 $H0:$B0/brick1 $H0:$B0/brick2;
+
+TEST $CLI volume set $V0 performance.write-behind off
+TEST $CLI volume set $V0 cluster.self-heal-daemon off
+TEST $CLI volume start $V0;
+TEST glusterfs --volfile-id=/$V0 --volfile-server=$H0 $M0 --attribute-timeout=0 --entry-timeout=0
+EXPECT 'Started' volinfo_field $V0 'Status';
+
+TEST $CLI volume quota $V0 enable
+TEST $CLI volume quota $V0 limit-usage / 1
+TEST $CLI volume quota $V0 soft-timeout 0
+TEST $CLI volume quota $V0 hard-timeout 0
+
+deep=/0/1/2/3/4/5/6/7/8/9
+TEST mkdir -p $M0/$deep
+
+TEST touch $M0/$deep/file1 $M0/$deep/file2 $M0/$deep/file3 $M0/$deep/file4
+
+TEST fd_open 3 'w' "$M0/$deep/file1"
+TEST fd_open 4 'w' "$M0/$deep/file2"
+TEST fd_open 5 'w' "$M0/$deep/file3"
+TEST fd_open 6 'w' "$M0/$deep/file4"
+
+# consume all quota
+TEST ! dd if=/dev/zero of="$M0/$deep/file" bs=1MB count=1
+
+# simulate name-less lookups for re-open where the parent information is lost.
+# Stopping and starting the bricks will trigger client re-open which happens on
+# a gfid without any parent information. Since no operations are performed on
+# the fds {3..6} every-xl will be under the impression that they are good fds
+
+TEST $CLI volume stop $V0
+TEST $CLI volume start $V0 force
+EXPECT_WITHIN $PROCESS_UP_TIMEOUT "1" afr_child_up_status $V0 0
+EXPECT_WITHIN $PROCESS_UP_TIMEOUT "1" afr_child_up_status $V0 1
+
+for i in $(seq 3 6); do
+# failing writes indicate that we are enforcing quota set on /
+TEST_IN_LOOP ! fd_write $i "content"
+TEST_IN_LOOP sync
+done
+
+exec 3>&-
+exec 4>&-
+exec 5>&-
+exec 6>&-
+
+cleanup;
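The test holds fds 3 through 6 open across the volume restart via the
`fd_open`/`fd_write` helpers from fileio.rc, which are built on bash's `exec`
redirection. A minimal standalone sketch of that pattern (these are
simplified illustrative stand-ins, not the actual fileio.rc implementations):

```shell
#!/bin/bash
# Simplified stand-ins for the fd_open / fd_write helpers from fileio.rc.
# Illustrative re-implementations using bash exec redirection.

fd_open() {
    local fd=$1 mode=$2 file=$3
    case $mode in
        w) eval "exec $fd>\"\$file\"" ;;  # open $file for writing on $fd
        r) eval "exec $fd<\"\$file\"" ;;  # open $file for reading on $fd
    esac
}

fd_write() {
    local fd=$1
    shift
    echo "$@" >&"$fd"
}

tmp=$(mktemp)
fd_open 3 w "$tmp"        # analogous to: TEST fd_open 3 'w' "$M0/$deep/file1"
fd_write 3 "content"      # analogous to: fd_write $i "content"
exec 3>&-                 # close, as the test does with 'exec 3>&-'
result=$(cat "$tmp")
echo "$result"            # prints: content
rm -f "$tmp"
```

Because `exec` redirections persist in the shell process, the fd stays open
between helper calls; in the real test this is what lets the writes after the
brick restart exercise the anonymous-fd re-open path.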