path: root/tests/basic/quota-ancestry-building.t
author     vmallika <vmallika@redhat.com>              2015-05-25 13:35:48 +0530
committer  Raghavendra G <rgowdapp@redhat.com>         2015-05-25 11:34:22 -0700
commit     225ff553106396066d68d8c757e5c001f5d9ab15 (patch)
tree       34acc904eb69ec0ee5507ab3cb9e2632bb34a426 /tests/basic/quota-ancestry-building.t
parent     b51ee5f8d1f80d66effffc06c1e49099c04014a4 (diff)
Quota: fix testcases not to send parallel writes for accurate quota enforcement

Currently the quota enforcer does not account for parallel writes, so quota can
exceed its limit when there is a high rate of parallel writes. Bug# 1223658
tracks that issue. This patch fixes the spurious test failures by not sending
parallel writes: the test writer uses the O_SYNC and O_APPEND flags and a block
size of no more than 256k (for larger block sizes the NFS client splits the
block into 256k chunks and issues parallel writes).

Change-Id: I297c164b030cecb87ce5b494c02b09e8b073b276
BUG: 1223798
Signed-off-by: vmallika <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/10878
Tested-by: NetBSD Build System
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Tested-by: Raghavendra G <rgowdapp@redhat.com>
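
For reference, a minimal sketch in C of what such a serialized test writer could look like. This is not the actual quota.c from the GlusterFS tree; the argument order and units (file, block size in KB, block count) are assumptions based on the invocation "$QDD $M0/$deep/file 256 4" in the patched test. The point it illustrates is the one in the commit message: opening with O_SYNC | O_APPEND makes each write complete before the next is issued, so no parallel writes are in flight and the quota enforcer sees usage grow one block at a time.

/*
 * Hypothetical sketch of a serialized quota test writer (not the actual
 * quota.c helper). Usage: ./quota <file> <block-kb> <count>
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
        if (argc != 4) {
                fprintf(stderr, "usage: %s <file> <block-kb> <count>\n", argv[0]);
                return 1;
        }

        size_t block_size = (size_t)atoi(argv[2]) * 1024;
        long   count      = atol(argv[3]);

        /* O_SYNC + O_APPEND: each write is committed before the next one
         * starts, so the client never has parallel writes in flight. */
        int fd = open(argv[1], O_CREAT | O_WRONLY | O_APPEND | O_SYNC, 0644);
        if (fd < 0) {
                perror("open");
                return 1;
        }

        char *buf = calloc(1, block_size);
        if (!buf) {
                perror("calloc");
                close(fd);
                return 1;
        }

        for (long i = 0; i < count; i++) {
                if (write(fd, buf, block_size) < 0) {
                        perror("write"); /* expected once the quota limit is hit */
                        free(buf);
                        close(fd);
                        return 1;
                }
        }

        free(buf);
        close(fd);
        return 0;
}

With arguments such as "$M0/$deep/file 256 4", a writer of this shape would issue four sequential, synchronous 256k writes, staying within the 256k block-size ceiling the commit message describes so the NFS client does not split the I/O into parallel chunks.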
Diffstat (limited to 'tests/basic/quota-ancestry-building.t')
-rwxr-xr-x  tests/basic/quota-ancestry-building.t  8
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/tests/basic/quota-ancestry-building.t b/tests/basic/quota-ancestry-building.t
index e86e1e250ee..5824db37879 100755
--- a/tests/basic/quota-ancestry-building.t
+++ b/tests/basic/quota-ancestry-building.t
@@ -8,6 +8,10 @@ cleanup;
# This tests quota enforcing on an inode without any path information.
# This should cover anon-fd type of workload as well.
+QDD=$(dirname $0)/quota
+# compile the test write program and run it
+build_tester $(dirname $0)/quota.c -o $QDD
+
TESTS_EXPECTED_IN_LOOP=8
TEST glusterd
TEST pidof glusterd
@@ -37,7 +41,7 @@ TEST fd_open 5 'w' "$M0/$deep/file3"
TEST fd_open 6 'w' "$M0/$deep/file4"
# consume all quota
-TEST ! dd if=/dev/zero of="$M0/$deep/file" bs=1000000 count=1
+TEST ! $QDD $M0/$deep/file 256 4
# simulate name-less lookups for re-open where the parent information is lost.
# Stopping and starting the bricks will trigger client re-open which happens on
@@ -62,4 +66,6 @@ exec 6>&-
TEST $CLI volume stop $V0
EXPECT "1" get_aux
+
+rm -f $QDD
cleanup;