| field | value | date / path |
|---|---|---|
| author | Niels de Vos <ndevos@redhat.com> | 2014-12-26 12:57:48 +0100 |
| committer | Vijay Bellur <vbellur@redhat.com> | 2015-01-06 03:24:24 -0800 |
| commit | 64954eb3c58f4ef077e54e8a3726fd2d27419b12 (patch) | |
| tree | 52cd5a39bbfda7442a5f0955ac2800b74a45b58a | tests/bugs/glusterd/bug-1112559.t |
| parent | c4ab37c02e9edc23d0637e23d6f2b42d0827dad2 (diff) | |
tests: move all test-cases into component subdirectories
There are around 300 regression tests, 250 of them in tests/bugs. Running
a partial set of tests/bugs is not easy because it is a flat directory
with almost all of the tests inside.
It would be valuable to make running a subset of tests/bugs easier, and
to allow the use of multiple build hosts for a single commit, each
running a subset of the tests for a quicker result.
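To illustrate the intent (this sketch is not part of the commit), with
component subdirectories each build host can pick one subset. The use of
the `prove` TAP harness and the exact directory names are assumptions:

```bash
# Hypothetical per-host invocations after the move; each host runs one
# component subdirectory for a quicker combined result.
prove -f tests/bugs/glusterd/*.t    # host 1: glusterd bug tests only
prove -f tests/bugs/replicate/*.t   # host 2: replication bug tests only
```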
Additional changes made:
- correct the include path for *.rc shell libraries and *.py utils
- make the testcases pass checkpatch
- arequal-checksum in afr/self-heal.t was never executed; now it is
- include.rc now complains loudly if it fails to find env.rc (see the sketch below)
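As a hedged illustration of that last item (not the actual code from this
commit), a guard like the following could make include.rc fail loudly when
env.rc is missing. Apart from the env.rc name, every detail here, including
the hint to run ./configure, is an assumption:

```bash
# Hypothetical sketch of a loud failure when env.rc cannot be found.
# The real include.rc may locate and report the file differently.
env_rc="$(dirname "$0")/../../env.rc"
if [ ! -f "$env_rc" ]; then
    echo "FATAL: $env_rc not found; run ./configure to generate it" >&2
    exit 1
fi
. "$env_rc"
```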
Change-Id: I26ffd067e9853d3be1fd63b2f37d8aa0fd1b4fea
BUG: 1178685
Reported-by: Emmanuel Dreyfus <manu@netbsd.org>
Reported-by: Atin Mukherjee <amukherj@redhat.com>
URL: http://www.gluster.org/pipermail/gluster-devel/2014-December/043414.html
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/9353
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Reviewed-by: Emmanuel Dreyfus <manu@netbsd.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Diffstat (limited to 'tests/bugs/glusterd/bug-1112559.t')
| mode | file | lines added |
|---|---|---|
| -rwxr-xr-x | tests/bugs/glusterd/bug-1112559.t | 61 |

1 file changed, 61 insertions(+), 0 deletions(-)
diff --git a/tests/bugs/glusterd/bug-1112559.t b/tests/bugs/glusterd/bug-1112559.t
new file mode 100755
index 00000000000..f318db61b8a
--- /dev/null
+++ b/tests/bugs/glusterd/bug-1112559.t
@@ -0,0 +1,61 @@
+#!/bin/bash
+
+. $(dirname $0)/../../include.rc
+. $(dirname $0)/../../cluster.rc
+. $(dirname $0)/../../volume.rc
+. $(dirname $0)/../../snapshot.rc
+
+function check_peers {
+    $CLI_1 peer status | grep 'Peer in Cluster (Connected)' | wc -l
+}
+
+function check_snaps_status {
+    $CLI_1 snapshot status | grep 'Snap Name : ' | wc -l
+}
+
+function check_snaps_bricks_health {
+    $CLI_1 snapshot status | grep 'Brick Running : Yes' | wc -l
+}
+
+
+SNAP_COMMAND_TIMEOUT=40
+NUMBER_OF_BRICKS=2
+
+cleanup;
+TEST verify_lvm_version
+TEST launch_cluster 3
+TEST setup_lvm 3
+
+TEST $CLI_1 peer probe $H2
+EXPECT_WITHIN $PROBE_TIMEOUT 1 peer_count
+
+TEST $CLI_1 volume create $V0 $H1:$L1 $H2:$L2
+
+TEST $CLI_1 volume start $V0
+
+#Create a snapshot and add a peer at the same time
+$CLI_1 snapshot create ${V0}_snap1 ${V0} &
+PID_1=$!
+$CLI_1 peer probe $H3
+wait $PID_1
+
+#The snapshot should be created and present in the snap list
+TEST snapshot_exists 1 ${V0}_snap1
+
+#Not being paranoid! Just checking the status of the snapshot.
+#During the testing of the bug the snapshot would be listed but
+#not actually created, therefore check the health of the snapshot.
+EXPECT_WITHIN $SNAP_COMMAND_TIMEOUT 1 check_snaps_status
+
+#Disabling the check of the snap brick status; investigation of the
+#snap brick port bind failure will continue.
+#EXPECT_WITHIN $SNAP_COMMAND_TIMEOUT $NUMBER_OF_BRICKS check_snaps_bricks_health
+
+#Check whether the peer was added successfully
+EXPECT_WITHIN $PROBE_TIMEOUT 2 peer_count
+
+TEST $CLI_1 snapshot delete ${V0}_snap1
+
+cleanup;
+
+
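To run just this relocated test case, something like the following should
work; the path comes from the diff above, while the `prove` invocation and
the need for a built tree are assumptions:

```bash
# Hypothetical: execute the single moved test via the TAP harness.
# Needs a built tree so that env.rc and the *.rc libraries resolve.
prove -vf tests/bugs/glusterd/bug-1112559.t
```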
