author     Mohit Agrawal <moagrawal@redhat.com>    2019-03-29 11:48:32 +0530
committer  Mohit Agrawal <moagrawal@redhat.com>    2019-04-15 20:50:50 +0530
commit     26a19d9da3ab5604db02d4ca02ce868fb57193a4 (patch)
tree       a82d9ed8aa17b1b86ecab4b2f945f99f521d6bc1 /tests/bugs
parent     f316c8b797283818bd800569771870a4b9bf1310 (diff)
glusterd: Optimize glusterd handshaking code path
Problem: During handshaking, glusterd populates volume
data in a dictionary. When more than 1500 volumes are
configured, glusterd takes more than 10 minutes to generate
the data. Because of this delay, RPC requests time out and
RPC starts bailing out call frames.
Solution: To optimize the code path, make the changes below:
1) Spawn multiple threads to populate volume data in bulk
into separate dictionaries, and introduce an option
glusterd.brick-dict-thread-count to configure the number of
threads used to populate volume data (see the sketch after
the commit message).
2) Populate tier data only when the volume type is tier.
3) Compare snap data only when snap_count is non-zero.
Fixes: bz#1699339
Change-Id: I38dc71970c049217f9d1a06fc0aaf4c26eab18f5
Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
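To make the first change concrete, here is a minimal sketch of the fan-out/fan-in pattern the commit describes, written with plain pthreads. It is illustrative only, not glusterd's actual implementation: volume_t, chunk_arg_t, populate_chunk, and the flat string buffers standing in for glusterd's dictionaries are all hypothetical names, and the chunk-size constant mirrors the glusterd.vol_count_per_thread option exercised by the test below.

```c
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define VOL_COUNT_PER_THREAD 5  /* stand-in for glusterd.vol_count_per_thread */

typedef struct { char name[32]; } volume_t;

typedef struct {
    volume_t *vols; /* slice of the global volume array */
    int base;       /* global index of the first volume in the slice */
    int count;      /* number of volumes in the slice */
    char *buf;      /* private output table filled by this worker */
} chunk_arg_t;

/* Each worker serializes its slice into a private buffer: no locks needed. */
static void *populate_chunk(void *data)
{
    chunk_arg_t *arg = data;
    size_t off = 0;

    arg->buf = calloc(arg->count, 64);
    for (int i = 0; i < arg->count; i++)
        off += sprintf(arg->buf + off, "volume%d.name=%s\n",
                       arg->base + i, arg->vols[i].name);
    return NULL;
}

int main(void)
{
    enum { NVOLS = 17 };
    volume_t vols[NVOLS];
    for (int i = 0; i < NVOLS; i++)
        snprintf(vols[i].name, sizeof(vols[i].name), "vol%02d", i);

    /* Fan out: one worker per chunk of VOL_COUNT_PER_THREAD volumes. */
    int nthreads = (NVOLS + VOL_COUNT_PER_THREAD - 1) / VOL_COUNT_PER_THREAD;
    pthread_t tids[nthreads];
    chunk_arg_t args[nthreads];

    for (int t = 0; t < nthreads; t++) {
        int base = t * VOL_COUNT_PER_THREAD;
        int count = NVOLS - base;
        if (count > VOL_COUNT_PER_THREAD)
            count = VOL_COUNT_PER_THREAD;
        args[t] = (chunk_arg_t){ .vols = vols + base, .base = base,
                                 .count = count };
        pthread_create(&tids[t], NULL, populate_chunk, &args[t]);
    }

    /* Fan in: merge every private table into the single response. */
    for (int t = 0; t < nthreads; t++) {
        pthread_join(tids[t], NULL);
        fputs(args[t].buf, stdout);
        free(args[t].buf);
    }
    return 0;
}
```

Because each worker writes only to its private buffer, population itself needs no locking; contention is deferred to the single merge at the end.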
Diffstat (limited to 'tests/bugs')
-rw-r--r--  tests/bugs/glusterd/bug-1699339.t | 69
1 file changed, 69 insertions(+), 0 deletions(-)
diff --git a/tests/bugs/glusterd/bug-1699339.t b/tests/bugs/glusterd/bug-1699339.t
new file mode 100644
index 00000000000..3e950f48432
--- /dev/null
+++ b/tests/bugs/glusterd/bug-1699339.t
@@ -0,0 +1,69 @@
+#!/bin/bash
+
+. $(dirname $0)/../../include.rc
+. $(dirname $0)/../../volume.rc
+. $(dirname $0)/../../cluster.rc
+
+cleanup;
+
+NUM_VOLS=15
+
+
+get_brick_base () {
+    printf "%s/vol%02d" $B0 $1
+}
+
+function count_up_bricks {
+    vol=$1;
+    $CLI_1 --xml volume status $vol | grep '<status>1' | wc -l
+}
+
+create_volume () {
+
+    local vol_name=$(printf "%s-vol%02d" $V0 $1)
+
+    TEST $CLI_1 volume create $vol_name replica 3 $H1:$B1/${vol_name} $H2:$B2/${vol_name} $H3:$B3/${vol_name}
+    TEST $CLI_1 volume start $vol_name
+}
+
+TEST launch_cluster 3
+TEST $CLI_1 volume set all cluster.brick-multiplex on
+
+# The option accepts values in the range from 5 to 200
+TEST ! $CLI_1 volume set all glusterd.vol_count_per_thread 210
+TEST ! $CLI_1 volume set all glusterd.vol_count_per_thread 4
+
+TEST $CLI_1 volume set all glusterd.vol_count_per_thread 5
+
+TEST $CLI_1 peer probe $H2;
+EXPECT_WITHIN $PROBE_TIMEOUT 1 peer_count
+
+TEST $CLI_1 peer probe $H3;
+EXPECT_WITHIN $PROBE_TIMEOUT 2 peer_count
+
+# Our infrastructure can't handle an arithmetic expression here. The formula
+# is (NUM_VOLS-1)*2 because it sees each TEST/EXPECT once but needs the other
+# NUM_VOLS-1, and there are 2 such TEST statements in each iteration.
+TESTS_EXPECTED_IN_LOOP=28
+for i in $(seq 1 $NUM_VOLS); do
+    starttime="$(date +%s)";
+    create_volume $i
+done
+
+TEST kill_glusterd 1
+
+vol1=$(printf "%s-vol%02d" $V0 1)
+TEST $CLI_2 volume set $vol1 performance.readdir-ahead on
+vol2=$(printf "%s-vol%02d" $V0 2)
+TEST $CLI_2 volume set $vol2 performance.readdir-ahead on
+
+# Bring back 1st glusterd
+TEST $glusterd_1
+EXPECT_WITHIN $PROBE_TIMEOUT 2 peer_count
+
+EXPECT_WITHIN $PROBE_TIMEOUT "on" volinfo_field_1 $vol1 performance.readdir-ahead
+
+vol_name=$(printf "%s-vol%02d" $V0 2)
+EXPECT_WITHIN $PROBE_TIMEOUT "on" volinfo_field_1 $vol2 performance.readdir-ahead
+
+cleanup
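The two TEST ! lines above verify that out-of-range values for glusterd.vol_count_per_thread are rejected. A bounds check of that shape might look like the sketch below; the function name, constant names, and error reporting are assumptions, with only the 5-200 range taken from the test's comment.

```c
#include <stdio.h>

#define VOL_PER_THREAD_MIN 5    /* assumed lower bound, per the test comment */
#define VOL_PER_THREAD_MAX 200  /* assumed upper bound, per the test comment */

/* Hypothetical validator: rejects values outside [5, 200]. */
static int validate_vol_count_per_thread(int value)
{
    if (value < VOL_PER_THREAD_MIN || value > VOL_PER_THREAD_MAX) {
        fprintf(stderr, "vol_count_per_thread %d out of range [%d, %d]\n",
                value, VOL_PER_THREAD_MIN, VOL_PER_THREAD_MAX);
        return -1; /* "volume set" would fail, as the TEST ! lines expect */
    }
    return 0;
}

int main(void)
{
    /* Mirrors the three settings attempted in bug-1699339.t. */
    int attempts[] = { 210, 4, 5 };
    for (int i = 0; i < 3; i++)
        printf("%3d -> %s\n", attempts[i],
               validate_vol_count_per_thread(attempts[i]) ? "rejected"
                                                          : "accepted");
    return 0;
}
```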