author    Atin Mukherjee <amukherj@redhat.com>  2015-08-21 10:54:39 +0530
committer Raghavendra Talur <rtalur@redhat.com> 2015-08-26 01:53:28 -0700
commit    27b455dc369f60d36d83c6bcbc10245dfe733f46 (patch)
tree      f003ffdb57fbdf9a304b763e66f27fb0848aae83 /tests/basic
parent    ddae88755280da5747d8600025159e0257e63abe (diff)
tests: remove unwanted tests from volume-snapshot.t
Backport of http://review.gluster.org/#/c/11972/

volume-snapshot.t fails spuriously because of additional test cases that restart glusterd, and these are not really needed as far as test coverage is concerned. glusterd currently has no mechanism to indicate whether volume handshaking has completed, so even after peer handshaking finishes and all peers are back in the cluster, any command that accesses the volume structure might still end up in corruption while volume handshaking is in progress. This is because the volume list has not yet been made URCU-protected.

Change-Id: Id8669c22584384f988be5e0a5a0deca7708a277d
BUG: 1255636
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/11975
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
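The removed checks relied on the harness's EXPECT_WITHIN-style polling (wait up to a timeout for `peer_count` to report the expected value after a glusterd restart). As a minimal sketch of that poll-until-match pattern, with hypothetical helper names (the real helpers live in the Gluster test harness, e.g. tests/include.rc, not here):

```shell
#!/bin/bash
# expect_within: hypothetical stand-in for the harness's EXPECT_WITHIN.
# Polls the given command once per second until its output equals
# EXPECTED, or TIMEOUT seconds have elapsed.
expect_within() {
    local timeout=$1 expected=$2
    shift 2
    local i
    for ((i = 0; i < timeout; i++)); do
        if [ "$("$@")" = "$expected" ]; then
            echo "OK"
            return 0
        fi
        sleep 1
    done
    echo "FAIL"
    return 1
}

# Demo: a fake peer_count that reaches 2 after a couple of polls,
# simulating peers rejoining the cluster after a glusterd restart.
count_file=$(mktemp)
echo 0 > "$count_file"
peer_count() {
    local n
    n=$(cat "$count_file")
    echo $((n + 1)) > "$count_file"
    echo "$n"
}

expect_within 5 2 peer_count
rm -f "$count_file"
```

Note that this only verifies an externally observable count; as the commit message explains, it cannot tell whether volume handshaking has actually completed, which is why the restart-based checks were flaky and were removed.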
Diffstat (limited to 'tests/basic')
-rwxr-xr-x  tests/basic/volume-snapshot.t | 12 -
1 file changed, 0 insertions(+), 12 deletions(-)
diff --git a/tests/basic/volume-snapshot.t b/tests/basic/volume-snapshot.t
index a5e6d2d1b56..794ab4944b0 100755
--- a/tests/basic/volume-snapshot.t
+++ b/tests/basic/volume-snapshot.t
@@ -114,20 +114,8 @@ activate_snapshots
EXPECT 'Started' snapshot_status ${V0}_snap;
EXPECT 'Started' snapshot_status ${V1}_snap;
-#testing handshake with glusterd (bugid:1122064)
-
-TEST kill_glusterd 2
deactivate_snapshots
-TEST start_glusterd 2
-#Wait for glusterd handsahke complete/check status of cluster.
-EXPECT_WITHIN $PROBE_TIMEOUT 2 peer_count
-EXPECT_WITHIN $PROCESS_UP_TIMEOUT "Success" snapshot_snap_status ${V0}_snap "Brick\ Running" "No"
-TEST kill_glusterd 2
activate_snapshots
-TEST start_glusterd 2
-#Wait for glusterd handsahke complete/check status of cluster.
-EXPECT_WITHIN $PROBE_TIMEOUT 2 peer_count
-EXPECT_WITHIN $PROCESS_UP_TIMEOUT "Success" snapshot_snap_status ${V0}_snap "Brick\ Running" "Yes"
TEST snapshot_exists 1 ${V0}_snap
TEST snapshot_exists 1 ${V1}_snap