| author | Jeff Darcy <jdarcy@redhat.com> | 2013-02-20 14:11:36 -0500 |
|---|---|---|
| committer | Anand Avati <avati@redhat.com> | 2013-02-21 17:27:56 -0800 |
| commit | 1dbe9a05feac5032990457058f7cef686a293973 (patch) | |
| tree | a66b6420dd244b27f6195d570335df0e3120ae18 /tests | |
| parent | 673287ae4d265f67a445dedb8ace38b06e72dff7 (diff) | |
glusterd: allow multiple instances of glusterd on one machine
This is needed to support automated testing of cluster-communication
features such as probing and quorum. In order to use this, you need to
do the following preparatory steps (a sketch follows the list).
* Copy /var/lib/glusterd to another directory for each virtual host
* Ensure that each virtual host has a different UUID in its glusterd.info
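
A minimal sketch of those two preparatory steps for one extra instance, assuming a hypothetical state directory /var/lib/glusterd-node2 (the path is illustrative, not part of the commit):

```bash
# Illustrative only: clone the glusterd state directory for a second instance
# and give that copy its own UUID in glusterd.info.
cp -a /var/lib/glusterd /var/lib/glusterd-node2
sed -i "s/^UUID=.*/UUID=$(uuidgen)/" /var/lib/glusterd-node2/glusterd.info
```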
Now you can start each copy of glusterd with the following xlator options (see the example after the list).
* management.transport.socket.bind-address=$ip_address
* management.working-directory=$unique_working_directory
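
Put together, starting one extra instance would look roughly like this (the address and directory are placeholders carried over from the sketch above; the vglusterd helper in the test below does the same thing):

```bash
# Illustrative invocation: bind a second glusterd to its own loopback address
# and point it at its own working directory.
glusterd \
    --xlator-option management.transport.socket.bind-address=127.0.0.101 \
    --xlator-option management.working-directory=/var/lib/glusterd-node2
```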
You can use 127.x.y.z addresses for binding without needing to assign them
to interfaces explicitly. Note that you must use addresses, not names,
because of a limitation in the socket code that is not worth fixing just
for this usage; after that you can use names in /etc/hosts instead.
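
For example, once the peers have been probed by address, an /etc/hosts mapping along these lines (the node names are invented for illustration) lets later commands refer to the instances by name:

```bash
# Illustrative: map made-up names onto the loopback addresses used for binding.
cat >> /etc/hosts <<'EOF'
127.0.0.100  gd-node0
127.0.0.101  gd-node1
127.0.0.102  gd-node2
EOF
```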
At this point you can issue CLI commands to a specific glusterd using
the --remote-host option. So far probe, volume create/start/stop,
mount, and basic I/O all seem to work as expected with multiple
instances.
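
For instance, to drive the first instance from the sketches above (the addresses are illustrative; the test below wraps the same pattern in its $VCLI variable):

```bash
# Talk to one specific glusterd instance from the CLI.
gluster --remote-host=127.0.0.100 peer probe 127.0.0.101
gluster --remote-host=127.0.0.100 peer status
```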
Change-Id: I1beabb44cff8763d2774bc208b2ffcda27c1a550
BUG: 913555
Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-on: http://review.gluster.org/4556
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Anand Avati <avati@redhat.com>
Diffstat (limited to 'tests')
| -rwxr-xr-x | tests/bugs/bug-913555.t | 66 |

1 file changed, 66 insertions, 0 deletions
diff --git a/tests/bugs/bug-913555.t b/tests/bugs/bug-913555.t
new file mode 100755
index 00000000000..0e08bd377ae
--- /dev/null
+++ b/tests/bugs/bug-913555.t
@@ -0,0 +1,66 @@
+#!/bin/bash
+
+# Test that a volume becomes unwritable when the cluster loses quorum.
+
+. $(dirname $0)/../include.rc
+. $(dirname $0)/../volume.rc
+
+function vglusterd {
+    wd=$1/wd-$2
+    cp -r /var/lib/glusterd $wd
+    rm -rf $wd/peers/* $wd/vols/*
+    echo -n "UUID=$(uuidgen)\noperating-version=1\n" > $wd/glusterd.info
+    opt1="management.transport.socket.bind-address=127.0.0.$2"
+    opt2="management.working-directory=$wd"
+    glusterd --xlator-option $opt1 --xlator-option $opt2
+}
+
+function check_fs {
+    df $1 &> /dev/null
+    echo $?
+}
+
+function check_peers {
+    $VCLI peer status | grep 'Peer in Cluster (Connected)' | wc -l
+}
+
+cleanup;
+
+topwd=$(mktemp -d)
+trap "rm -rf $topwd" EXIT
+
+vglusterd $topwd 100
+VCLI="$CLI --remote-host=127.0.0.100"
+vglusterd $topwd 101
+TEST $VCLI peer probe 127.0.0.101
+vglusterd $topwd 102
+TEST $VCLI peer probe 127.0.0.102
+
+EXPECT_WITHIN 20 2 check_peers
+
+create_cmd="$VCLI volume create $V0"
+for i in $(seq 100 102); do
+    mkdir -p $B0/$V0$i
+    create_cmd="$create_cmd 127.0.0.$i:$B0/$V0$i"
+done
+
+TEST $create_cmd
+TEST $VCLI volume set $V0 cluster.server-quorum-type server
+TEST $VCLI volume start $V0
+TEST glusterfs --volfile-server=127.0.0.100 --volfile-id=$V0 $M0
+
+# Kill one pseudo-node, make sure the others survive and volume stays up.
+kill -9 $(ps -ef | grep gluster | grep 127.0.0.102 | awk '{print $2}')
+EXPECT_WITHIN 20 1 check_peers
+fs_status=$(check_fs $M0)
+nnodes=$(pidof glusterfsd | wc -w)
+TEST [ "$fs_status" = 0 -a "$nnodes" = 2 ]
+
+# Kill another pseudo-node, make sure the last one dies and volume goes down.
+kill -9 $(ps -ef | grep gluster | grep 127.0.0.101 | awk '{print $2}')
+EXPECT_WITHIN 20 0 check_peers
+fs_status=$(check_fs $M0)
+nnodes=$(pidof glusterfsd | wc -w)
+TEST [ "$fs_status" = 1 -a "$nnodes" = 0 ]
+
+cleanup