path: root/tests/basic
author Avra Sengupta <asengupt@redhat.com> 2014-02-06 13:03:58 +0530
committer Vijay Bellur <vbellur@redhat.com> 2014-02-10 23:25:40 -0800
commit 97ce783de326b51fcba65737f07db2c314d1e218 (patch)
tree 70ee9d9c1a4abd4c28a236920121d1bfbc56912b /tests/basic
parent b000b934aff4b452cf1c35c42272482a7738506e (diff)
glusterd: Volume locks and transaction specific opinfos
With this patch we replace the existing cluster-wide lock, taken on glusterds across the cluster, with volume locks which are also taken on glusterds across the cluster but are volume specific. With volume locks we can perform more than one gluster operation at the same time, as long as the operations are being performed on different volumes.

We maintain a global list of volume locks (using a dict for a list) where the key is the volume name and the value saved is the uuid of the originator glusterd. These locks are held and released per volume transaction.

In order to achieve multiple gluster operations occurring at the same time, this patch also separates opinfos in the op-state-machine. To do so, we generate a unique transaction-id (uuid) per gluster transaction. An opinfo is then associated with this transaction-id, which is used throughout the transaction. We maintain a run-time global list (using a dict) of transaction-ids and their respective opinfos.

Upstream Feature Page: http://www.gluster.org/community/documentation/index.php/Features/glusterd-volume-locks

Change-Id: Iaad505a854bac8de8f83beec0357eb6cde3f7ea8
BUG: 1011470
Signed-off-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-on: http://review.gluster.org/5994
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
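The two structures described above are plain key/value maps. As a rough illustration of the volume-lock half (the transaction-id to opinfo map follows the same pattern), here is a minimal, self-contained C sketch. It stands in a fixed-size array and string uuids for glusterd's actual dict_t and uuid handling, and every name in it (vol_lock_acquire, vol_lock_release, and so on) is hypothetical, not glusterd API:

/* Illustrative sketch only: a per-volume lock table keyed by volume
 * name, where the value is the uuid of the originator glusterd.
 * The real patch keeps this in a dict_t; this uses a flat array. */
#include <stdio.h>
#include <string.h>

#define MAX_LOCKS 16
#define UUID_LEN  37   /* string form of a uuid */
#define NAME_LEN  64

struct vol_lock {
    char volname[NAME_LEN];   /* key: volume name */
    char owner[UUID_LEN];     /* uuid of the originator glusterd */
    int  in_use;
};

static struct vol_lock locks[MAX_LOCKS];

/* Take the lock for 'volname' on behalf of 'owner'. Fails only if
 * this volume is already locked; locks on other volumes never
 * conflict, which is what allows concurrent operations on
 * different volumes. */
int vol_lock_acquire(const char *volname, const char *owner)
{
    int free_slot = -1;
    for (int i = 0; i < MAX_LOCKS; i++) {
        if (locks[i].in_use && strcmp(locks[i].volname, volname) == 0)
            return -1;            /* already locked by some originator */
        if (!locks[i].in_use && free_slot < 0)
            free_slot = i;
    }
    if (free_slot < 0)
        return -1;                /* table full */
    snprintf(locks[free_slot].volname, NAME_LEN, "%s", volname);
    snprintf(locks[free_slot].owner, UUID_LEN, "%s", owner);
    locks[free_slot].in_use = 1;
    return 0;
}

/* Release the lock, but only for the originator that holds it. */
int vol_lock_release(const char *volname, const char *owner)
{
    for (int i = 0; i < MAX_LOCKS; i++) {
        if (locks[i].in_use &&
            strcmp(locks[i].volname, volname) == 0 &&
            strcmp(locks[i].owner, owner) == 0) {
            locks[i].in_use = 0;
            return 0;
        }
    }
    return -1;
}

int main(void)
{
    /* Different volumes: both acquisitions succeed, so the two
     * transactions can run in parallel. */
    printf("%d\n", vol_lock_acquire("patchy",  "uuid-node-1"));  /* 0  */
    printf("%d\n", vol_lock_acquire("patchy2", "uuid-node-2"));  /* 0  */
    /* Same volume from a second originator: rejected. */
    printf("%d\n", vol_lock_acquire("patchy",  "uuid-node-2"));  /* -1 */
    vol_lock_release("patchy", "uuid-node-1");
    return 0;
}

The property the test below exercises is visible in vol_lock_acquire: two different volume names never contend, while a second originator on the same volume is turned away rather than crashing anything.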
Diffstat (limited to 'tests/basic')
-rwxr-xr-x  tests/basic/volume-locks.t  106
1 file changed, 106 insertions, 0 deletions
diff --git a/tests/basic/volume-locks.t b/tests/basic/volume-locks.t
new file mode 100755
index 000000000..b9e94b7e1
--- /dev/null
+++ b/tests/basic/volume-locks.t
@@ -0,0 +1,106 @@
+#!/bin/bash
+
+. $(dirname $0)/../include.rc
+. $(dirname $0)/../cluster.rc
+
+function check_peers {
+ $CLI_1 peer status | grep 'Peer in Cluster (Connected)' | wc -l
+}
+
+function volume_count {
+ local cli=$1;
+ if [ "$cli" -eq 1 ] ; then
+ $CLI_1 volume info | grep 'Volume Name' | wc -l;
+ else
+ $CLI_2 volume info | grep 'Volume Name' | wc -l;
+ fi
+}
+
+function volinfo_field()
+{
+ local vol=$1;
+ local field=$2;
+
+ $CLI_1 volume info $vol | grep "^$field: " | sed 's/.*: //';
+}
+
+function two_diff_vols_create {
+ # Both volume creates should be successful
+ $CLI_1 volume create $V0 $H1:$B1/$V0 $H2:$B2/$V0 $H3:$B3/$V0 &
+ $CLI_2 volume create $V1 $H1:$B1/$V1 $H2:$B2/$V1 $H3:$B3/$V1
+}
+
+function two_diff_vols_start {
+ # Both volume starts should be successful
+ $CLI_1 volume start $V0 &
+ $CLI_2 volume start $V1
+}
+
+function two_diff_vols_stop_force {
+ # Force the stops so that, if a rebalance kicked off by the
+ # earlier remove-bricks is still in progress, the stops can
+ # still go ahead. Both volume stops should be successful.
+ $CLI_1 volume stop $V0 force &
+ $CLI_2 volume stop $V1 force
+}
+
+function same_vol_remove_brick {
+
+ # Running two remove-brick commands on the same volume at the same
+ # time can result in two successes, two failures, or one success and
+ # one failure, all of which are valid. The only thing that shouldn't
+ # happen is a glusterd crash.
+
+ local vol=$1
+ local brick=$2
+ $CLI_1 volume remove-brick $vol $brick start &
+ $CLI_2 volume remove-brick $vol $brick start
+}
+
+cleanup;
+
+TEST launch_cluster 3;
+TEST $CLI_1 peer probe $H2;
+TEST $CLI_1 peer probe $H3;
+
+EXPECT_WITHIN 20 2 check_peers
+
+two_diff_vols_create
+EXPECT 'Created' volinfo_field $V0 'Status';
+EXPECT 'Created' volinfo_field $V1 'Status';
+
+two_diff_vols_start
+EXPECT 'Started' volinfo_field $V0 'Status';
+EXPECT 'Started' volinfo_field $V1 'Status';
+
+same_vol_remove_brick $V0 $H2:$B2/$V0
+# Check that glusterd did not crash after running the same
+# remove-brick on both nodes.
+EXPECT_WITHIN 20 2 check_peers
+
+same_vol_remove_brick $V1 $H2:$B2/$V1
+# Check that glusterd did not crash after running the same
+# remove-brick on both nodes.
+EXPECT_WITHIN 20 2 check_peers
+
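+# Set a volume option on both volumes concurrently, then kill the
+# third glusterd and verify that the remaining peers still report
+# both volumes as started.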
+$CLI_1 volume set $V0 diagnostics.client-log-level DEBUG &
+$CLI_1 volume set $V1 diagnostics.client-log-level DEBUG
+kill_glusterd 3
+$CLI_1 volume status $V0
+$CLI_2 volume status $V1
+$CLI_1 peer status
+EXPECT_WITHIN 20 1 check_peers
+EXPECT 'Started' volinfo_field $V0 'Status';
+EXPECT 'Started' volinfo_field $V1 'Status';
+
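+# Restart the third glusterd and confirm that status queries on both
+# volumes still succeed.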
+TEST $glusterd_3
+$CLI_1 volume status $V0
+$CLI_2 volume status $V1
+$CLI_1 peer status
+#EXPECT_WITHIN 20 2 check_peers
+#EXPECT 'Started' volinfo_field $V0 'Status';
+#EXPECT 'Started' volinfo_field $V1 'Status';
+#two_diff_vols_stop_force
+#EXPECT_WITHIN 20 2 check_peers
+cleanup;