path: root/tests/functional/glusterd
Commit log (subject, author, date; files changed, lines +added/-removed):
...
* [Test] Add TC to check SEL context on glusterfs.xml file (Leela Venkaiah G, 2020-04-22; 1 file, +75/-0)
    Test Steps:
    1. Check the existence of '/usr/lib/firewalld/services/glusterfs.xml'
    2. Validate the owner of this file as 'glusterfs-server'
    3. Validate SELinux label context as 'system_u:object_r:lib_t:s0'

    Change-Id: I55bfb3b51a9188e2088459eaf5304b8b73f2834a
    Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
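These steps translate almost one-to-one into remote commands. A minimal sketch, assuming glusto's g.run() helper and a reachable server node; the shell commands are illustrative, not the test's actual code:

```
from glusto.core import Glusto as g

def check_glusterfs_xml_sel_context(server):
    path = '/usr/lib/firewalld/services/glusterfs.xml'

    # 1. The file must exist on the server.
    ret, _, _ = g.run(server, 'test -f %s' % path)
    assert ret == 0, '%s does not exist' % path

    # 2. rpm -qf reports the package that owns the file.
    ret, out, _ = g.run(server, 'rpm -qf %s' % path)
    assert ret == 0 and 'glusterfs-server' in out, 'unexpected owner package'

    # 3. stat -c %C prints the file's SELinux security context.
    ret, out, _ = g.run(server, 'stat -c %%C %s' % path)
    assert out.strip() == 'system_u:object_r:lib_t:s0', 'wrong SELinux context'
```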
* [py2py3] Fixing a bunch of python3 incompatibilities (kshithijiyer, 2020-04-20; 2 files, +9/-3)
    Problem:
    There are two python2-to-python3 incompatibilities present in
    test_add_brick_when_quorum_not_met.py and
    test_add_identical_brick_new_node.py.

    In test_add_brick_when_quorum_not_met.py the testcase fails with:

    > for node in range(num_of_nodes_to_bring_down, num_of_servers):
    E TypeError: 'float' object cannot be interpreted as an integer

    This is because a = 10 / 5 returns a float in python3 but an int
    in python2, as shown below:

    Python 2.7.15 (default, Oct 15 2018, 15:26:09)
    >>> a = 10/5
    >>> type(a)
    <type 'int'>

    Python 3.7.3 (default, Mar 27 2019, 13:41:07)
    >>> a = 10/5
    >>> type(a)
    <class 'float'>

    In test_add_identical_brick_new_node.py the testcase fails with:

    > add_bricks.append(string.replace(bricks_list[0], self.servers[0], self.servers[1]))
    E AttributeError: module 'string' has no attribute 'replace'

    This is because the module-level function string.replace() was
    removed in python3; the str method should be used instead.

    Solution:
    For the first issue, change a = 10/5 to a = 10//5, which returns an
    int on both python versions. For the second issue, a try/except
    block as shown below suffices:

    except AttributeError:
        add_bricks.append(str.replace(bricks_list[0], self.servers[0],
                                      self.servers[1]))

    Change-Id: I9ec325760b279032af3748101bd2bfc58589d57d
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [testfix] Add sleep after glusterd restart (Sri Vignesh, 2020-04-20; 1 file, +7/-1)
    Add a sleep after glusterd restart, which runs async on the
    servers, to avoid an 'another transaction is in progress' failure
    in the testcase.

    Change-Id: I514c24813dc7c102b807a582ae2b0d19069e0d34
    Signed-off-by: Sri Vignesh <sselvan@redhat.com>
* [testfix] Add steps in teardown to wait for all bricks online (Sri Vignesh, 2020-04-14; 1 file, +7/-1)
    Add a wait_for_bricks_to_be_online step in teardown, after glusterd
    is started in the test steps.

    Change-Id: Id30a3d870c6ba7c77b0e79604521ec41fe624822
    Signed-off-by: Sri Vignesh <sselvan@redhat.com>
* [Test]: Add checks for peer detach of offline volumes (nchilaka, 2020-03-17; 1 file, +60/-43)
    Changes done in this patch include:
    1. Reduced runtime of test by removing multiple volume configs
    2. Added extra validation for a node that is already peer detached
    3. Added test steps to cover peer detach when volume is offline

    Change-Id: I80413594e90b59dc63b7f4f52e6e348ddb7a9fa0
    Signed-off-by: nchilaka <nchilaka@redhat.com>
* [Testfix] Remove python version dependency (Part 5) (kshithijiyer, 2020-03-09; 4 files, +11/-18)
    Please refer to the commit message of patch [1].

    [1] https://review.gluster.org/#/c/glusto-tests/+/24140/

    Change-Id: I5319ce497ca3359e0e7dbd9ece481bada1ee2205
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test]: Check peer probe behavior when glusterd is down (nchilaka, 2020-03-05; 1 file, +160/-0)
    BZ#1257394 - Provide meaningful errors on peer probe and peer detach

    Test Steps:
    1. Check the current peer status.
    2. Detach one of the valid nodes which is already part of the cluster.
    3. Stop glusterd on that node.
    4. Try to attach the above node to the cluster; this must fail with
       a transport endpoint error.
    5. Recheck the test using the hostname; the same result is expected.
    6. Start glusterd on that node.
    7. Halt/reboot the node.
    8. Try to peer probe the halted node; this must fail again.
    9. The only error accepted is:
       "peer probe: failed: Probe returned with Transport endpoint is
       not connected"
    10. Check peer status and make sure no other nodes are in peer
        reject state.

    Change-Id: Ic0a083d5cb150275e927723d960e89fe1a5528fb
    Signed-off-by: nchilaka <nchilaka@redhat.com>
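The core failure expectation of steps 2-4 can be sketched with the glustolibs peer and glusterd helpers; peer_probe, peer_detach and stop_glusterd are existing glustolibs calls, but the exact assertions and placeholder names (mnode, victim) here are illustrative:

```
from glustolibs.gluster.gluster_init import stop_glusterd
from glustolibs.gluster.peer_ops import peer_detach, peer_probe

ERR = 'Probe returned with Transport endpoint is not connected'

def check_probe_fails_when_glusterd_down(mnode, victim):
    # Detach a node that is part of the cluster, then stop its glusterd.
    ret, _, _ = peer_detach(mnode, victim)
    assert ret == 0, 'peer detach failed'
    assert stop_glusterd(victim), 'could not stop glusterd'

    # The probe must now fail with the documented transport error.
    ret, out, err = peer_probe(mnode, victim)
    assert ret != 0, 'probe unexpectedly succeeded'
    assert ERR in out + err, 'unexpected probe error message'
```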
* [testfix] Add timeout to fix failures (Sri Vignesh, 2020-03-03; 4 files, +31/-11)
    Add extra time for beaker machines to validate the testcases. For
    test_rebalance_spurious.py, added cleanup in teardown because the
    fix-layout patch is still not merged.

    Change-Id: I7ee8324ff136bbdb74600b730b4b802d86116427
    Signed-off-by: Sri Vignesh <sselvan@redhat.com>
* [Testfix] Remove python version dependency (Part 4) (kshithijiyer, 2020-02-26; 7 files, +21/-41)
    Please refer to the commit message of patch [1].

    [1] https://review.gluster.org/#/c/glusto-tests/+/24140/

    Change-Id: I25d30f7bdb20f0825709c4c852140e1906870ce7
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Testfix] Modify test to work with new performance.io-cache default (kshithijiyer, 2020-02-25; 1 file, +8/-3)
    Problem:
    The default value of performance.io-cache was ON before gluster
    6.0; in gluster 6.0 it was changed to OFF.

    Solution:
    Adding code to check the gluster version and then check whether
    the option is ON or OFF accordingly:

    ```
    if get_gluster_version(self.mnode) >= 6.0:
        self.assertIn("off", ret['performance.io-cache'],
                      "io-cache value is not correct")
    else:
        self.assertIn("on", ret['performance.io-cache'],
                      "io-cache value is not correct")
    ```

    CentOS-CI failure analysis:
    This patch is expected to fail on nightly builds, where
    `gluster --version` returns output like the following:

    ```
    # gluster --version
    glusterfs 20200220.a0e0890
    Repository revision: git://git.gluster.org/glusterfs.git
    Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
    GlusterFS comes with ABSOLUTELY NO WARRANTY.
    It is licensed to you under your choice of the GNU Lesser General
    Public License, version 3 or any later version (LGPLv3 or later),
    or the GNU General Public License, version 2 (GPLv2), in all cases
    as published by the Free Software Foundation.
    ```

    This output can't be parsed by the get_gluster_version() function,
    which this patch uses to get the gluster version and check
    performance.io-cache's default value accordingly.

    Change-Id: I00b652a9d5747cbf3006825bb17b9ca2f69cb9cd
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [testfix] Add steps to stabilise glusterd (Sri Vignesh, 2020-02-25; 6 files, +78/-54)
    Moved steps from teardown class to teardown and removed the
    unwanted teardown class. Also rectified the testcase failing at
    "wait for io to complete" by removing that step: after validate_io
    the sub process terminates, so the wait results in failure.

    Change-Id: I2eaf05680b817b681aff8b48683fc9dac88896b0
    Signed-off-by: Sri Vignesh <sselvan@redhat.com>
* [testfix] Add steps to add peer_probe_servers in cleanup (Sri Vignesh, 2020-02-20; 10 files, +60/-89)
    Change-Id: I0fa6bbacda16fb97d3454a8510a937442b5755a4
    Signed-off-by: Sri Vignesh <sselvan@redhat.com>
* [testfix] Removed the rmdir which was merged in baseclass (Sri Vignesh, 2020-02-19; 8 files, +4/-47)
    Change-Id: I04f7b7c894d48d0188379028412d9c6b48eac210
    Signed-off-by: Sri Vignesh <sselvan@redhat.com>
* [testfix] Add steps to stabilise content in glusterd - part 2 (Sri Vignesh, 2020-02-19; 12 files, +57/-93)
    Used the wait-for-peer-to-connect and wait-for-glusterd-to-connect
    functions in testcases, added fixes to check that files exist, and
    increased the timeout value for failure cases.

    Change-Id: I9d5692f635ed324ffe7dac9944ec9b8f3b933fd1
    Signed-off-by: Sri Vignesh <sselvan@redhat.com>
* Add steps to stabilize the existing content in glusterd (Sri Vignesh, 2020-02-18; 14 files, +143/-153)
    Added the wait_for_io_to_complete function to testcases, and used
    the wait_for_glusterd and wait_for_peer_connect functions.

    Change-Id: I4811848aad8cca4198cc93d8e200dfc47ae7ac9b
    Signed-off-by: Sri Vignesh <sselvan@redhat.com>
* [libfix][testfix] Add waiter function for glusterd and peer connected library files (Sri Vignesh, 2020-01-21; 1 file, +12/-32)
    Moving waiters from testcases and adding them as functions in the
    gluster_init and peer_ops library modules.

    Change-Id: I5ab1e42a5a0366fadb399789da1c156d8d96ec18
    Signed-off-by: Sri Vignesh <sselvan@redhat.com>
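The waiters described here reduce to a poll-until-timeout loop around an existing status check. A sketch of the glusterd-side waiter, assuming glustolibs' is_glusterd_running(); the function name mirrors the library's, but the body and timeout default are illustrative:

```
import time

from glustolibs.gluster.gluster_init import is_glusterd_running

def wait_for_glusterd_to_start(servers, timeout=80):
    """Poll until glusterd is up on all servers or the timeout expires."""
    if not isinstance(servers, list):
        servers = [servers]
    end_time = time.time() + timeout
    while time.time() < end_time:
        # is_glusterd_running() returns 0 when glusterd runs on every node.
        if is_glusterd_running(servers) == 0:
            return True
        time.sleep(2)
    return False
```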
* [TC] Validation of brick status after node reboot (Bala Konda Reddy M, 2020-01-07; 1 file, +178/-0)
    Change-Id: I0c20652d598c198b58871724e354f2fe803c1243
    Signed-off-by: Bala Konda Reddy M <bmekala@redhat.com>
* [Fix] Remove variable script_local_path (Part 2) (kshithijiyer, 2020-01-07; 11 files, +11/-33)
    Please refer to the commit message of the below patch:
    https://review.gluster.org/#/c/glusto-tests/+/23902/

    Change-Id: I0d2eeb978c6757d6d910ebfe21b07811bf74b80a
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [py2to3] Add py3 support in 'tests/functional/glusterd'. Part 2 (Valerii Ponomarov, 2019-12-18; 30 files, +125/-109)
    Change-Id: I1395e14d8d0aa0cc6097e51c64262fb481f36f05
    Signed-off-by: Valerii Ponomarov <kiparis.kh@gmail.com>
* [py2to3] Add py3 support in 'tests/functional/glusterd'. Part 1 (Valerii Ponomarov, 2019-12-18; 32 files, +94/-95)
    Change-Id: Ib414b8496ca65a48bbe42936e32a863c9c1072e4
    Signed-off-by: Valerii Ponomarov <kiparis.kh@gmail.com>
* [py2to3] Refactor gluster_base_class.py (Valerii Ponomarov, 2019-11-28; 1 file, +2/-2)
    Following changes were implemented:
    - Delete unused imports and place used ones in alphabetical order.
      Imports are split into 3 groups: built-ins, third-parties and
      local modules/libs.
    - Make changes to support py3 in addition to py2.
    - Minimize the number of code lines, keeping the same behaviour and
      improving readability.
    - Add the possibility to get 'bound' (cls) methods using the
      'get_super_method' staticmethod from the base class. Before, it
      was possible to call only unbound (self) methods.
    - Update the 'test_add_brick.py' module as PoC for running base
      class bound methods in both py2 and py3. Now this module is
      py2/3 compatible.

    Change-Id: I1b66b3a91084b2487c26bec8763ab2b4e12ac482
    Signed-off-by: Valerii Ponomarov <kiparis.kh@gmail.com>
* [py2to3] Add method to the base class for proper calling of its methods (Valerii Ponomarov, 2019-11-22; 1 file, +2/-3)
    Lots of test classes are wrapped by the 'runs_on' decorator. Under
    py3, this decorator replaces the original test class with a copy
    whose parent class is the original test class. This situation makes
    it impossible to use the following approach in py3:

    super(SomeClass, some_class_instance).some_method()

    And the above approach is the py2/3-compatible way of calling a
    parent class's methods. The problem we face is that we fall into
    unexpected recursion here.

    So, add 'get_super_method' to the base class, which detects such a
    situation and returns the proper method of the proper parent class.
    Also, fix the test class located in the 'glusterd/test_peer_status.py'
    module as proof of concept. With this change the
    'test_peer_probe_status' test case becomes completely py2/3
    compatible.

    Example of new method usage:

    @runs_on([['distributed'], ['glusterfs']])
    class TestDecoratedClass(GlusterBaseClass):
        ...
        def setUp(self):
            self.get_super_method(self, 'setUp')()
        ...

    This approach must be used instead of the existing calls of the
    'im_func' function if we want to support both python2 and python3
    at once.

    Signed-off-by: Valerii Ponomarov <kiparis.kh@gmail.com>
    Change-Id: I23f4462b64f9d4dd90812273f08fb756d073ab76
* [TC] Resubmitting testcase test_glusterd_quorum_validation after bug fix (kshithijiyer, 2019-11-21; 1 file, +301/-0)
    As the below mentioned bug is fixed, resubmitting the testcase:
    https://bugzilla.redhat.com/show_bug.cgi?id=1690753

    Test case:
    -> Create two volumes and start them; stop the second volume
    -> Set the server quorum and set the ratio to 90
    -> Stop glusterd on one of the nodes, so the quorum won't be met
    -> Peer probing a new node should fail
    -> Volume stop will fail
    -> Volume delete will fail
    -> Volume reset will fail
    -> Start glusterd on the node where it was stopped
    -> Volume stop, start and delete will succeed once quorum is met

    Change-Id: Ic9dea44364d4cb84b6170eb1f1cfeff1398b7a9b
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
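A sketch of the quorum setup behind these expectations, assuming glustolibs' set_volume_options(); the option names are the standard gluster ones, while mnode, volname and the ratio are placeholders:

```
from glustolibs.gluster.volume_ops import set_volume_options

def enable_server_quorum(mnode, volname, ratio='90%'):
    # Server-side quorum is switched on per volume ...
    ok = set_volume_options(mnode, volname,
                            {'cluster.server-quorum-type': 'server'})
    # ... while the quorum ratio is a cluster-wide ('all') option.
    # With the ratio at 90% and one glusterd down, quorum is lost, so
    # volume stop/delete/reset and peer probe are all expected to fail.
    ok &= set_volume_options(mnode, 'all',
                             {'cluster.server-quorum-ratio': ratio})
    return ok
```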
* Test rebalance operation when quorum not met (Rajesh Madaka, 2019-10-16; 1 file, +161/-0)
    -> Create volume
    -> Stop the volume
    -> Enable server quorum
    -> Start the volume
    -> Set server quorum ratio to 95%
    -> Stop glusterd on any one of the nodes
    -> Perform rebalance operation
    -> Check gluster volume status
    -> Start glusterd

    Change-Id: I3bb42a83414dbcabdc61178e11d584eaf90c3b40
    Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
* [Fix] Adding count++ to prevent test from running in infinite loop (yinkui, 2019-10-15; 2 files, +4/-0)
    In a few glusterd testcases count++ is missing, due to which the
    testcase runs in an infinite loop. Fixing that and sending the
    patch.

    Change-Id: I56a355f6ea3ae79231e09d7aee80031da3ebec52
    Signed-off-by: yinkui <13965432176@163.com>
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
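The bug class here is a retry loop whose counter never advances. A minimal before/after sketch; the names are illustrative, not the affected tests' code:

```
import time

# Broken: count is never incremented, so a condition that stays False
# spins forever.
#
#     count = 0
#     while count < 60:
#         if is_condition_met():
#             break
#         time.sleep(2)        # count++ missing here
#
# Fixed: the counter advances every iteration, bounding the loop.
def wait_for_condition(is_condition_met, attempts=60, interval=2):
    count = 0
    while count < attempts:
        if is_condition_met():
            return True
        time.sleep(interval)
        count += 1
    return False
```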
* Checking quota daemon and self heal daemon process after reboot (kshithijiyer, 2019-09-11; 1 file, +167/-0)
    Each node should have one self heal daemon and one quota daemon up
    and running, which means the total number of self heal daemon and
    quota daemon processes expected to be up and running is
    (number of nodes * 2). The code checks that the count equals
    (number of nodes * 2).

    Change-Id: I79d40467edc255a479a369f19a6fd1fec9111f53
    Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
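A sketch of that count check, assuming glusto's g.run(); the pgrep patterns are illustrative:

```
from glusto.core import Glusto as g

def count_shd_and_quotad(servers):
    """Count self-heal and quota daemon processes across all nodes."""
    total = 0
    for server in servers:
        # Both daemons run as glusterfs processes whose command line
        # contains 'glustershd' or 'quotad'. The [g]/[q] bracket trick
        # stops pgrep -f from also matching the remote shell that is
        # running this very pipeline.
        for pattern in ('[g]lustershd', '[q]uotad'):
            ret, out, _ = g.run(server, "pgrep -f '%s' | wc -l" % pattern)
            total += int(out.strip())
    return total

# One self-heal daemon and one quota daemon expected per node:
# assert count_shd_and_quotad(servers) == len(servers) * 2
```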
* Added a library for daemon reload and fixing testcase (Bala Konda Reddy M, 2019-09-04; 1 file, +26/-8)
    After changing the log level in the unit file from INFO to DEBUG,
    perform a daemon reload. Earlier, commands were run continuously to
    generate debug messages; instead of running them continuously,
    glusterd is restarted on one of the nodes so that the logs produced
    during the handshake will be in DEBUG mode. After validating,
    revert the unit file back to INFO and daemon-reload again.

    Change-Id: I8c99407eff2ea98a836f37fc2d89bb99f7eeccb7
    Signed-off-by: Bala Konda Reddy M <bmekala@redhat.com>
* glusterd test cases: Enabling and disabling shared storage (kshithijiyer, 2019-08-19; 1 file, +192/-0)
    Steps:
    -> Enable shared storage
    -> Disable shared storage
    -> Create a volume of any type with the name gluster_shared_storage
    -> Disable the shared storage
    -> Check that the volume created in step 3 is not deleted
    -> Delete the volume
    -> Enable the shared storage
    -> Check that a volume with the name gluster_shared_storage is created
    -> Disable the shared storage

    Change-Id: I1fd29d51e32cadd7978771f4a37ac87176d90372
    Signed-off-by: Gaurav Yadav <gyadav@redhat.com>
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
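Shared storage is driven by a single cluster-wide volume option. A sketch assuming glustolibs' set_volume_options(); the volume name gluster_shared_storage is created and removed by gluster itself in response to the option:

```
from glustolibs.gluster.volume_ops import set_volume_options

def set_shared_storage(mnode, state):
    """state is 'enable' or 'disable'. Enabling auto-creates a volume
    named 'gluster_shared_storage' mounted on all nodes; disabling
    removes it again."""
    return set_volume_options(
        mnode, 'all', {'cluster.enable-shared-storage': state})
```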
* Adding test case to mount volume, remove /var/log/glusterfs/ and remount the volume (kshithijiyer, 2019-08-06; 1 file, +60/-0)
    Test case:
    1. Create all types of volumes and start them.
    2. Mount all volumes on clients.
    3. Delete the /var/log/glusterfs folder on the client.
    4. Run IO on all the mount points.
    5. Unmount and remount all volumes.
    6. Check if logs are regenerated or not.

    Change-Id: I4f90d709c4da6e1c73cf95f4075c50aa44cdd811
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Optimized test case: tests/functional/glusterd/test_remove_brick_scenarios.py (hadarsharon, 2019-08-05; 1 file, +28/-16)
    Improved the I/O performance of the test (writing 100k files to a
    mounted volume) with the following changes:
    1. Modified the touch command to write as many files as possible
       per process, thus requiring fewer processes to write the 100k
       files.
    2. Used threads to parallelize the touch processes from within the
       test, for better efficiency.

    Change-Id: Id969f387f4b7b8e88daf688f7bada950cff2c412
    Signed-off-by: hadarsharon <hsharon@redhat.com>
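A sketch of the threaded batching described above, assuming glusto's g.run() and an existing mount point; the batch sizes, paths and helper name are illustrative:

```
from multiprocessing.pool import ThreadPool

from glusto.core import Glusto as g

def create_files(client, mountpoint, total=100000, batches=10):
    per_batch = total // batches

    def touch_batch(batch):
        start = batch * per_batch
        # One shell loop per batch creates many files per process.
        cmd = ('cd %s; for i in $(seq %d %d); do touch file$i; done'
               % (mountpoint, start, start + per_batch - 1))
        return g.run(client, cmd)

    # Each thread drives one remote touch process; batches run in parallel.
    pool = ThreadPool(batches)
    results = pool.map(touch_batch, range(batches))
    pool.close()
    pool.join()
    return all(ret == 0 for ret, _, _ in results)
```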
* Adding test case to enable brickmux, create, start and stop 3 volumes (kshithijiyer, 2019-07-29; 1 file, +115/-0)
    Test Case:
    1. Set cluster.brick-multiplex to enabled.
    2. Create three 1x3 replica volumes.
    3. Start all the three volumes.
    4. Stop the three volumes one by one.

    Change-Id: Ibf3e81e7424d6a429da0aa12efeae7fffd3338f2
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Adding testcase to remove /var/log/glusterfs and mounting volume (kshithijiyer, 2019-07-23; 1 file, +125/-0)
    Test Case:
    1. Create all types of volumes.
    2. Start all volumes.
    3. Delete the /var/log/glusterfs folder on the client.
    4. Mount all the volumes one by one.
    5. Run IO on all the mount points.
    6. Check if logs are generated in /var/log/glusterfs/.

    Change-Id: I7a3275aad940116c3506b22b13a670e455d9ef00
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Adding code to not stop glusterd on mnode (kshithijiyer, 2019-07-08; 1 file, +28/-6)
    This test case was failing in a test run because mnode was not
    removed from the list self.servers, so there were runs where
    glusterd was stopped on mnode and the command was then executed on
    mnode. Also adding code to check and start glusterd on the node in
    instances where the test case fails.

    Change-Id: Id203102d3f0ec82af0ac215f0ecaf7ae22b630f5
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Modifying test_enabling_gluster_debug_mode to do more operations (kshithijiyer, 2019-07-02; 1 file, +7/-3)
    While running test_enabling_gluster_debug_mode through jenkins it
    was observed that running a volume operation once wasn't generating
    enough logs by the time the logs were checked, which led to failure
    of the test case in the jenkins run. So modifying the
    log-generating logic to run the operation in a loop, producing a
    good amount of logs.

    Change-Id: Id7a12c86a04dc86d4856dbe30d945e70e64ea4f7
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Adding code to cleanup bricks in test_add_brick.py (kshithijiyer, 2019-06-13; 1 file, +18/-9)
    While executing the test suite it was observed that the test case
    test_add_remove_brick was failing due to remains from the test case
    test_add_brick_functionality. Hence adding code to clean all the
    bricks post test in test_add_brick.py.

    Change-Id: Iace9e51582ab4fa1f0f184283e6205aa6140b4a2
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Optimized test case: tests/functional/glusterd/test_add_brick (hadarsharon, 2019-06-09; 1 file, +54/-65)
    Worked on the following:
    1. Removed redundant throwaway variables (ret, _, _)
    2. More consistent exceptions
    3. Added comments within code
    4. Clarified error messages in case of assertion errors

    Change-Id: I8ca0acce848bd9a8a5d217b5a4e247590177154d
    Signed-off-by: hadarsharon <hsharon@redhat.com>
* glusto-tests/glusterd: enable glusterd in debug mode (kshithijiyer, 2019-05-29; 1 file, +169/-0)
    In this test case we enable glusterd in debug mode and check the
    glusterd log for the debug messages.

    Steps followed:
    1. Stop glusterd.
    2. Change log level to DEBUG in
       /usr/local/lib/systemd/system/glusterd.service.
    3. Remove the glusterd log.
    4. Start glusterd.
    5. Issue some gluster commands.
    6. Check for debug messages in the glusterd log.

    Change-Id: Id1173be6da2ef1c2233459fb23f4b27308c923f2
    Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
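A sketch of steps 1-4, assuming glusto's g.run() and the unit-file path quoted above; the sed edit, the LOG_LEVEL token it rewrites, and the command sequence are illustrative:

```
from glusto.core import Glusto as g

UNIT = '/usr/local/lib/systemd/system/glusterd.service'

def set_glusterd_log_level(server, level='DEBUG'):
    # Stop glusterd, flip LOG_LEVEL in the unit file, reload systemd's
    # view of the unit, clear the old log and start fresh.
    cmds = (
        'systemctl stop glusterd',
        "sed -i 's/LOG_LEVEL=INFO/LOG_LEVEL=%s/' %s" % (level, UNIT),
        'systemctl daemon-reload',
        'rm -f /var/log/glusterfs/glusterd.log',
        'systemctl start glusterd',
    )
    for cmd in cmds:
        ret, _, _ = g.run(server, cmd)
        if ret != 0:
            return False
    return True
```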
* Adding the assert statement removed during changes (kshithijiyer, 2019-05-23; 1 file, +2/-0)
    It seems an assert statement got missed during the recent changes.
    Adding back the assert and submitting a patch.

    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
    Change-Id: Ic8ab0d6e54da510faf479cd09cf122ccf8cedfbb
* Changing systemctl to service to fix jira issue RHGSQE-197 (kshithijiyer, 2019-05-23; 1 file, +4/-2)
    Bug https://bugzilla.redhat.com/show_bug.cgi?id=1690254 has to be
    fixed before merging this patch.

    Change-Id: I90e669269fafa9d0a064a64883c3e4b88080d25f
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Changing error messages to be checked as per new messages (kshithijiyer, 2019-05-08; 1 file, +18/-2)
    The error message displayed when peer detach is issued while bricks
    are present on the node being detached has changed. Adding logic to
    handle both the new and the old error message.

    Old msg:
    peer detach: failed: Brick(s) with the peer <my_server> exist in
    cluster

    New msg:
    peer detach: failed: Peer <my_server> hosts one or more bricks. If
    the peer is in not recoverable state then use either replace-brick
    or remove-brick command with force to remove all bricks from the
    peer and attempt the peer detach again.

    Change-Id: I3d8fdac2c33638ecc2a8b5782c68caebbf17cf41
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
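The compatibility logic amounts to accepting either wording. A sketch assuming glustolibs' peer_detach(); the message substrings are taken from the errors quoted above, the assertions are illustrative:

```
from glustolibs.gluster.peer_ops import peer_detach

OLD_MSG = 'Brick(s) with the peer'
NEW_MSG = 'hosts one or more bricks'

def check_detach_error(mnode, server):
    # Detach must fail while the peer still hosts bricks; accept both
    # the pre-change and post-change wording of the CLI error.
    ret, out, err = peer_detach(mnode, server)
    assert ret != 0, 'peer detach unexpectedly succeeded'
    msg = out + err
    assert OLD_MSG in msg or NEW_MSG in msg, 'unexpected detach error'
```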
* Changing error message to be checked in test_volume_get (kshithijiyer, 2019-05-06; 1 file, +2/-0)
    The error message displayed when we do a gluster v get for options
    which don't exist has changed. Adding if-based logic which can
    check for the old as well as the new error message.

    Old msg:
    volume get option: failed: Did you mean auth.allow or ...reject?

    New msg:
    volume get option: failed: Did you mean ctime.noatime?

    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
    Change-Id: I9496d391a7da9dba64d3426a024c2b1b68455f20
* Adding test to validate output of profile info (kshithijiyer, 2019-05-03; 1 file, +223/-0)
    Test Case:
    1. Create a volume and start it.
    2. Mount the volume on a client and start IO.
    3. Start profile info on the volume.
    4. Run profile info with different parameters and see if all bricks
       are present or not.
    5. Stop profile on the volume.
    6. Create another volume.
    7. Start profile without starting the volume.

    Change-Id: I6e8ec9285d48c1c828cd1d20bff6ea8f3de064f7
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
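The profile lifecycle in steps 3-5 maps to a handful of CLI calls. A sketch via glusto's g.run() using the raw gluster CLI; the particular subcommand list is illustrative:

```
from glusto.core import Glusto as g

def run_profile_cycle(mnode, volname):
    # Start profiling, dump stats with a few of the documented info
    # variants (peek/incremental/cumulative/clear also exist), then
    # stop profiling again.
    for sub in ('start', 'info', 'info peek', 'info incremental', 'stop'):
        ret, out, _ = g.run(mnode, 'gluster volume profile %s %s'
                            % (volname, sub))
        assert ret == 0, 'profile %s failed' % sub
```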
* Adding test for profile operations with one node down (kshithijiyer, 2019-04-29; 1 file, +220/-0)
    Test Case:
    1. Create a volume and start it.
    2. Mount the volume on a client and start IO.
    3. Start profile info on the volume.
    4. Stop glusterd on one node.
    5. Run profile info with different parameters and see if all bricks
       are present or not.
    6. Stop profile on the volume.

    Change-Id: Ie573414816362ebbe30d2c419fd0e348522ceaec
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Test case to detach node used for mounting (kshithijiyer, 2019-04-17; 1 file, +219/-0)
    Test case:
    1. Create a 1x3 volume with only 3 nodes from the cluster.
    2. Mount the volume on a client node using the IP of the fourth node.
    3. Write IOs to the volume.
    4. Detach node N4 from the cluster.
    5. Create a new directory on the mount point.
    6. Create a few files using the same command used in step 3.
    7. Add three more bricks to make the volume 2x3 using the add-brick
       command.
    8. Do a gluster volume rebalance on the volume.
    9. Create more files from the client on the mount point.
    10. Check for files on bricks from both replica sets.
    11. Create a new directory from the client on the mount point.
    12. Check for the directory in both replica sets.

    Change-Id: I228b79955dca565a40994919b2903e59cad7d8f5
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Test to profile start when quorum not met (Bala Konda Reddy M, 2019-04-16; 1 file, +147/-0)
    1. Create a volume
    2. Set the quorum type to server and the ratio to 90
    3. Stop glusterd randomly on one of the nodes
    4. Start profile on the volume
    5. Start glusterd on the node where it was stopped
    6. Start profile on the volume
    7. Stop profile on the volume where it was started

    Change-Id: Ifeb9fddf6f1a14c9df73ed2f0453636d2853e944
    Signed-off-by: Bala Konda Reddy M <bmekala@redhat.com>
* Adding test to run gluster commands when glusterd is down on one node (kshithijiyer, 2019-04-12; 1 file, +102/-0)
    Change-Id: Ibf41c11a4e98baeaad658ee10ba8a807318504be
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Validating whether peers are connected or not before volume creation (Bala Konda Reddy M, 2019-04-12; 1 file, +17/-1)
    In jenkins this case fails with "peers are not connected" during
    volume creation. Now adding a check before creating the volume to
    make sure the peers are in the cluster and in connected state after
    peer probe.

    Change-Id: I8aa9d2c4d1669475dd8867d42752a31604ff572f
    Signed-off-by: Bala Konda Reddy M <bmekala@redhat.com>
* Adding code for cleanup of all bricks on each server (kshithijiyer, 2019-04-09; 1 file, +18/-2)
    Change-Id: I405843e0093ddb7138ee0a8afbfd4cd2f91e6284
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Checking if peers are connected after peer probe (kshithijiyer, 2019-04-09; 1 file, +24/-0)
    Change-Id: I252ab0c0f6248b9a5c1d7977146c15876e144b38
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Adding code to check if peers are connected in test_spurious_rebalance (kshithijiyer, 2019-03-29; 1 file, +13/-1)
    Change-Id: I4a1097fbdebd49555fffcfa5fe609f4070e39182
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>