| Commit message | Author | Age | Files | Lines |
Steps:
1. Create a volume and start it.
2. Fetch the brick list
3. Bring any one brick down by unmounting the brick
4. Force start the volume and check that not all of the bricks are online
5. Remount the brick and bring it back online
6. Force start the volume and check if all the bricks are online
Change-Id: I464d3fe451cb7c99e5f21835f3f44f0ea112d7d2
Signed-off-by: nik-redhat <nladha@redhat.com>
Fix:
Added more volume types to the tests and
optimized the code for a better flow.
Change-Id: I8249763161f30109d068da401504e0a24cde4d78
Signed-off-by: nik-redhat <nladha@redhat.com>
Adding a check to verify that 'gluster volume status'
doesn't cause any error messages in the glusterd logs
Change-Id: I5666aa7fb7932a7b61a56afa7d60341ef66a978e
Signed-off-by: Pranav <prprakas@redhat.com>
Steps-
1. Create a volume and start it.
2. Mount volume on the client and start IO.
3. Start profile on the volume
4. Run profile info and check whether all bricks
are present
5. Create another volume and start it.
6. Run profile info without starting profile.
7. Run profile info with all possible options
without starting profile.
Change-Id: I0eb2424f385197c45bc0c4e3084c053a9498ae7d
Signed-off-by: nik-redhat <nladha@redhat.com>
Steps-
1. Create a volume and start it.
2. Set the quorum-type to 'server' and verify it.
3. Set the quorum-type to 'none' and verify it.
4. Set the quorum-ratio to some value and verify it.
Change-Id: I08715972c13fc455cee25f25bdda852b92a48e10
Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
Steps-
1. Create a volume and start it.
2. Set storage.reserve limit on the created volume and verify
3. Reset storage.reserve limit on the created volume and verify
Change-Id: I6592d19463696ba2c43efbb8f281024fc610d18d
Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
Test to validate gluster peer probe scenarios using IP address,
hostname and FQDN, verifying each against the peer status output,
pool list and cmd_history.log
Change-Id: I77512cfcf62b28e70682405c47014646be71593c
Signed-off-by: Pranav <prprakas@redhat.com>
Steps-
1) Create a volume and start it.
2) Fetch the brick list
3) Remove any brick path
4) Check that the number of bricks online is equal
to the number of bricks in the volume
Change-Id: I4c3a6692fc88561a47a7d2564901f21dfe0073d4
Signed-off-by: nik-redhat <nladha@redhat.com>
1. Check for the presence of the /var/lib/glusterd/glusterd.info file
2. Get the UUID of the current node
3. Check the value of the UUID returned by executing the command
"gluster system:: uuid get"
4. Check the UUID value shown by the other node in the cluster
for the same node: "gluster peer status"
on one node will give the UUID of the other node
Change-Id: I61dfb227e37b87e889577b77283d65eda4b3cd29
Signed-off-by: “Milind” <mwaykole@redhat.com>
Currently, there is no validation of whether shared storage is
mounted or not post reboot. Added the validation for the reboot scenario
and made the testcase modular for future updates to the test.
Change-Id: I9d39beb3c6718e648eabe15a409c4b4985736645
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
Problem: ValueError: invalid literal for int()
with base 10: 'N/A'
Solution: Wait for 5 seconds so that the brick gets the
port
Change-Id: Idf518392ba5584d09e81e76fca6e29037ac43e90
Signed-off-by: “Milind” <mwaykole@redhat.com>
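The wait described above can be sketched in Python (the status callback, retry count and delay are hypothetical stand-ins, not the actual glusto-tests helpers):

```python
import time

def get_brick_port(read_port, retries=5, delay=1):
    """Poll a brick's port field until glusterd assigns a real port.

    `read_port` returns the raw string from 'gluster volume status',
    which stays 'N/A' until the brick has been given its port."""
    for _ in range(retries):
        value = read_port()
        if value != 'N/A':
            return int(value)   # safe now: no more 'N/A'
        time.sleep(delay)
    raise ValueError("brick port still 'N/A' after %d retries" % retries)
```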
- `replace` function is used to forgo the version check
- `unicode` is not being recognized from `builtins` in py2
- `replace` seems a better alternative than fixing `unicode`
Change-Id: Ieb9b5ad283e1a31d65bd8a9715b80f9deb0c05fe
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
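A minimal illustration of why `replace` sidesteps the version check (the sample brick string is hypothetical):

```python
def swap_server(brick, old, new):
    # str.replace exists on both py2 and py3 string types, so no
    # `unicode` handling or interpreter version check is needed
    return brick.replace(old, new)

print(swap_server("server1:/bricks/brick0", "server1", "server2"))
# → server2:/bricks/brick0
```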
- The translate function is available on `unicode` strings in Python 2
Change-Id: I6aa01606acc73b18d889a965f1c01f9a393c2c46
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
Problem:
The testcase test_volume_create_with_glusterd_restarts
consists of an asynchronous loop of glusterd restarts
which fails in the latest runs due to patches [1]
and [2] added to glusterfs, which limit the
glusterd restarts to 6.
Fix:
Add `systemctl reset-failed glusterd` to the
asynchronous loop.
Links:
[1] https://review.gluster.org/#/c/glusterfs/+/23751/
[2] https://review.gluster.org/#/c/glusterfs/+/23970/
Change-Id: Idd52bfeb99c0c43afa45403d71852f5f7b4514fa
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Test Steps:
1) Create and start a distributed-replicated volume.
2) Give different inputs to the storage.reserve volume set options
3) Validate the command behaviour on wrong inputs
Change-Id: I4bbad81cbea9b3b9e59a61fcf7f2b70eac19b216
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
Testcase steps:
1. Form a gluster cluster by peer probing and create a volume
2. Unmount the brick used to create the volume
3. Run 'gluster get-state' and validate absence of error 'Failed to get
daemon state. Check glusterd log file for more details'
4. Create another volume and start it using different bricks which are
not used to create above volume
5. Run 'gluster get-state' and validate the absence of above error.
Change-Id: Ib629b53c01860355e5bfafef53dcc3233af071e3
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
Test case verifies whether the gluster get-state shows the proper brick status in the output.
The test case checks the brick status when the brick is up and also after killing the brick process.
It also verifies whether the other bricks are up when a particular brick process is killed.
Change-Id: I9801249d25be2817104194bb0a8f6a16271d662a
Signed-off-by: Pranav <prprakas@redhat.com>
Test Steps:
1. Check the existence of '/usr/lib/firewalld/services/glusterfs.xml'
2. Validate the owner of this file as 'glusterfs-server'
3. Validate SELinux label context as 'system_u:object_r:lib_t:s0'
Change-Id: I55bfb3b51a9188e2088459eaf5304b8b73f2834a
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
Problem:
There are two python2 to python3 incompatibilities
present in test_add_brick_when_quorum_not_met.py
and test_add_identical_brick_new_node.py.
In test_add_brick_when_quorum_not_met.py the testcase
fails with the below error:
> for node in range(num_of_nodes_to_bring_down, num_of_servers):
E TypeError: 'float' object cannot be interpreted as an integer
This is because a = 10 / 5 returns a float in Python 3
but returns an int in Python 2, as shown below:
Python 2.7.15 (default, Oct 15 2018, 15:26:09)
[GCC 8.2.1 20180801 (Red Hat 8.2.1-2)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> a = 10/5
>>> type(a)
<type 'int'>
Python 3.7.3 (default, Mar 27 2019, 13:41:07)
[GCC 8.3.1 20190223 (Red Hat 8.3.1-2)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> a = 10/5
>>> type(a)
<class 'float'>
In test_add_identical_brick_new_node.py the testcase
fails with the below error:
> add_bricks.append(string.replace(bricks_list[0],
self.servers[0], self.servers[1]))
E AttributeError: module 'string' has no attribute 'replace'
This is because the string module's replace function was
removed in Python 3; str.replace is used instead.
Solution:
For the first issue we need to change
a = 10/5 to a = 10//5, as // behaves the same
across both Python versions.
For the second issue, adding a try/except
block as shown below would suffice:
except AttributeError:
add_bricks.append(str.replace(bricks_list[0],
self.servers[0],
self.servers[1]))
Change-Id: I9ec325760b279032af3748101bd2bfc58589d57d
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
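The floor-division part of the fix can be verified in isolation (a quick sketch with made-up server counts, not the testcase itself):

```python
# In Python 3, / always produces a float while // keeps the int
# semantics that Python 2's integer division had
num_of_servers = 10
num_of_nodes_to_bring_down = num_of_servers // 2   # int on both versions

# range() only accepts the result when it is an int, so the
# TypeError from the commit message disappears
nodes = list(range(num_of_nodes_to_bring_down, num_of_servers))
```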
Add a sleep after the glusterd restart that is run asynchronously
on the servers, to avoid an 'another transaction in
progress' failure in the testcase
Change-Id: I514c24813dc7c102b807a582ae2b0d19069e0d34
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Add wait_for_bricks_to_be_online steps in teardown after
glusterd is started in the test steps
Change-Id: Id30a3d870c6ba7c77b0e79604521ec41fe624822
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Changes done in this patch include:
1. reduced runtime of test by removing multiple volume configs
2. added extra validation for node already peer detached
3. added test steps to cover peer detach when volume is offline
Change-Id: I80413594e90b59dc63b7f4f52e6e348ddb7a9fa0
Signed-off-by: nchilaka <nchilaka@redhat.com>
Please refer to the commit message of patch [1].
[1] https://review.gluster.org/#/c/glusto-tests/+/24140/
Change-Id: I5319ce497ca3359e0e7dbd9ece481bada1ee2205
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
BZ#1257394 - Provide meaningful errors on peer probe and peer detach
Test Steps:
1. Check the current peer status
2. Detach one of the valid nodes which is already part of the cluster
3. Stop glusterd on that node
4. Try to attach the above node to the cluster, which must fail with
a 'Transport endpoint' error
5. Recheck the test using the hostname, expecting the same result
6. Start glusterd on that node
7. Halt/reboot the node
8. Try to peer probe the halted node, which must fail again
9. The only error accepted is as below:
"peer probe: failed: Probe returned with Transport endpoint is not
connected"
10. Check peer status and make sure no other nodes are in peer reject state
Change-Id: Ic0a083d5cb150275e927723d960e89fe1a5528fb
Signed-off-by: nchilaka <nchilaka@redhat.com>
Add extra time for beaker machines to validate
the testcases.
For test_rebalance_spurious.py, added cleanup in
teardown because the fix-layout patch is still not
merged.
Change-Id: I7ee8324ff136bbdb74600b730b4b802d86116427
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Please refer to the commit message of patch [1].
[1] https://review.gluster.org/#/c/glusto-tests/+/24140/
Change-Id: I25d30f7bdb20f0825709c4c852140e1906870ce7
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Problem:
The default value of performance.io-cache was ON
before gluster 6.0; in gluster 6.0 it was set to OFF.
Solution:
Adding code to check the gluster version and then check
whether it is ON or OFF, as shown below:
```
if get_gluster_version(self.mnode) >= 6.0:
self.assertIn("off", ret['performance.io-cache'],
"io-cache value is not correct")
else:
self.assertIn("on", ret['performance.io-cache'],
"io-cache value is not correct")
```
CentOS-CI failure analysis:
This patch is expected to fail because running `gluster --version`
on nightly builds returns output as shown below:
```
# gluster --version
glusterfs 20200220.a0e0890
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.
```
This output can't be parsed by the get_gluster_version() function,
which is used in this patch to get the gluster version
and check performance.io-cache's default value accordingly.
Change-Id: I00b652a9d5747cbf3006825bb17b9ca2f69cb9cd
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
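The parsing limitation can be sketched like this (the regex and the None-on-nightly convention are assumptions for illustration, not the real get_gluster_version()):

```python
import re

def parse_gluster_version(output):
    """Pull a comparable X.Y version out of `gluster --version` output.

    Nightly builds report a date-stamped string such as
    'glusterfs 20200220.a0e0890', which carries no X.Y release
    version and therefore cannot be parsed."""
    match = re.search(r'glusterfs (\d+\.\d+)', output.splitlines()[0])
    if match is None:
        return None          # nightly / unparsable build string
    return float(match.group(1))
```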
Moved steps from teardown class to teardown, removed the
unwanted teardown class, and fixed the testcase that was
failing on wait-for-IO-to-complete by removing that step,
because after validating IO the sub-process terminates and
results in failure.
Change-Id: I2eaf05680b817b681aff8b48683fc9dac88896b0
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Change-Id: I0fa6bbacda16fb97d3454a8510a937442b5755a4
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Change-Id: I04f7b7c894d48d0188379028412d9c6b48eac210
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
and used wait-for-peer-to-connect and
wait-for-glusterd-to-connect functions in testcases;
added fixes to check that files exist and
increased the timeout value for failure cases
Change-Id: I9d5692f635ed324ffe7dac9944ec9b8f3b933fd1
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Added wait_for_io_to_complete function to testcases
used wait_for_glusterd function
and wait_for_peer_connect function
Change-Id: I4811848aad8cca4198cc93d8e200dfc47ae7ac9b
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
library files
Moving waiters from testcases and adding them as functions in the library, in gluster_init and peer_ops.
Change-Id: I5ab1e42a5a0366fadb399789da1c156d8d96ec18
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Change-Id: I0c20652d598c198b58871724e354f2fe803c1243
Signed-off-by: Bala Konda Reddy M <bmekala@redhat.com>
Please refer to the commit message of the below patch:
https://review.gluster.org/#/c/glusto-tests/+/23902/
Change-Id: I0d2eeb978c6757d6d910ebfe21b07811bf74b80a
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Change-Id: I1395e14d8d0aa0cc6097e51c64262fb481f36f05
Signed-off-by: Valerii Ponomarov <kiparis.kh@gmail.com>
Change-Id: Ib414b8496ca65a48bbe42936e32a863c9c1072e4
Signed-off-by: Valerii Ponomarov <kiparis.kh@gmail.com>
Following changes were implemented:
- Delete unused imports and place used ones in alphabetical order.
Imports are split into 3 groups: built-ins, third-parties and
local modules/libs.
- Make changes to support py3 in addition to py2.
- Minimize the number of code lines, keeping the same behaviour and
improving readability.
- Add possibility to get 'bound' (cls) methods using 'get_super_method'
staticmethod from base class. Before it was possible to call only
unbound (self) methods.
- Update 'test_add_brick.py' module as PoC for running base class bound
methods in both py2 and py3. Now this module is py2/3 compatible.
Change-Id: I1b66b3a91084b2487c26bec8763ab2b4e12ac482
Signed-off-by: Valerii Ponomarov <kiparis.kh@gmail.com>
Lots of test classes are wrapped by the 'runs_on' decorator.
This decorator replaces the original test class with its copy, whose
parent class is the original test class, if we use py3.
Such a situation makes it impossible to use the following approach in
py3:
super(SomeClass, some_class_instance).some_method()
And the above approach is the py2/3 compatible way of calling a
parent class's methods.
The problem we face is that we fall into unexpected recursion here.
So, add 'get_super_method' to the base class, which detects such
situation and returns proper method of a proper parent class.
Also, fix test class located at 'glusterd/test_peer_status.py' module
to have proof of concept.
With this change 'test_peer_probe_status' test case becomes completely
py2/3 compatible.
Example of new method usage:
@runs_on([['distributed'], ['glusterfs']])
class TestDecoratedClass(GlusterBaseClass):
...
def setUp(self):
self.get_super_method(self, 'setUp')()
...
This approach must be used instead of existing calls of 'im_func'
function if we want to support both at once - python2 and python3.
Signed-off-by: Valerii Ponomarov <kiparis.kh@gmail.com>
Change-Id: I23f4462b64f9d4dd90812273f08fb756d073ab76
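The recursion guard can be sketched as follows (a simplified model: the real runs_on takes volume/mount parameters and the real base class has far more machinery, so every name here is illustrative):

```python
def runs_on(cls):
    # simplified decorator: returns a subclass that keeps the original
    # class's name, shadowing it in the module namespace (the py3 case)
    return type(cls.__name__, (cls,), {})

class GlusterBaseClass(object):
    def setUp(self):
        self.calls = getattr(self, 'calls', []) + ['base']

    @staticmethod
    def get_super_method(obj, method_name):
        # Skip decorator-made copies that share the test class's name,
        # then bind the method from the first "real" parent class
        mro = obj.__class__.__mro__
        index = 0
        while mro[index + 1].__name__ == mro[0].__name__:
            index += 1
        return getattr(mro[index + 1], method_name).__get__(obj)

@runs_on
class TestDecoratedClass(GlusterBaseClass):
    def setUp(self):
        # py2/3-safe alternative to super(TestDecoratedClass, self).setUp(),
        # which would recurse because the name now refers to the copy
        self.get_super_method(self, 'setUp')()
        self.calls.append('child')
```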
As the below mentioned bug is fixed resubmitting testcase:
https://bugzilla.redhat.com/show_bug.cgi?id=1690753
Test case:
-> Creating two volumes and starting them, stop the second volume
-> Set the server quorum and set the ratio to 90
-> Stop the glusterd in one of the node, so the quorum won't meet
-> Peer probing a new node should fail
-> Volume stop will fail
-> Volume delete will fail
-> Volume reset will fail
-> Start the glusterd on the node where it is stopped
-> Volume stop, start, delete will succeed once quorum is met
Change-Id: Ic9dea44364d4cb84b6170eb1f1cfeff1398b7a9b
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
-> Create volume
-> Stop the volume
-> Enable server quorum
-> Start the volume
-> Set the server quorum ratio to 95%
-> Stop glusterd on any one of the nodes
-> Perform a rebalance operation
-> Check gluster volume status
-> Start glusterd
Change-Id: I3bb42a83414dbcabdc61178e11d584eaf90c3b40
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
In a few glusterd testcases count++ is missing, due to which the
testcase goes into an infinite loop. Fixing that and sending the patch.
Change-Id: I56a355f6ea3ae79231e09d7aee80031da3ebec52
Signed-off-by: yinkui <13965432176@163.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
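The class of bug being fixed, in miniature (names and timings are illustrative, not taken from the testcases):

```python
import time

def wait_until(check, timeout=20, interval=1):
    """Poll `check` until it succeeds or `timeout` elapses."""
    count = 0
    while count <= timeout:
        if check():
            return True
        time.sleep(interval)
        count += interval   # without this increment the loop never exits
    return False
```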
Each node should have one self-heal daemon and
one quota daemon up and running;
that means the total number of self-heal daemons and quota daemons
to be up and running is (number of nodes * 2).
In the code I am checking that the count is
equal to (number of nodes * 2).
Change-Id: I79d40467edc255a479a369f19a6fd1fec9111f53
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
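The counting logic described above amounts to the following sketch (the pid lists would come from parsed volume status output; the helper names are hypothetical):

```python
def expected_daemon_count(num_nodes):
    # one self-heal daemon + one quota daemon per node
    return num_nodes * 2

def all_daemons_running(shd_pids, quotad_pids, num_nodes):
    # every node must contribute one pid to each list
    return len(shd_pids) + len(quotad_pids) == expected_daemon_count(num_nodes)
```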
After changing the log level of the unit file from INFO
to DEBUG, performing a daemon reload. Earlier, commands
were run continuously to generate debug
messages; instead of running continuously, restarted
glusterd on one of the nodes so that during handshake
the logs will be in debug mode. After validating,
reverting the unit file back to INFO and doing a daemon
reload.
Change-Id: I8c99407eff2ea98a836f37fc2d89bb99f7eeccb7
Signed-off-by: Bala Konda Reddy M <bmekala@redhat.com>
Steps:
-> Enable a shared storage
-> Disable a shared storage
-> Create volume of any type with
name gluster_shared_storage
-> Disable the shared storage
-> Check, volume created in step-3 is
not deleted
-> Delete the volume
-> Enable the shared storage
-> Check volume with name gluster_shared_storage
is created
-> Disable the shared storage
Change-Id: I1fd29d51e32cadd7978771f4a37ac87176d90372
Signed-off-by: Gaurav Yadav <gyadav@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
volume.
Test case:
1. Create all types of volumes and start them.
2. Mount all volumes on clients.
3. Delete /var/log/glusterfs folder on client.
4. Run IO on all the mount points.
5. Unmount and remount all volumes.
6. Check if logs are regenerated or not.
Change-Id: I4f90d709c4da6e1c73cf95f4075c50aa44cdd811
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Worked on the following:
Improved the I/O performance of the test (writing 100k files to a mounted volume) by applying the following changes:
1. Modified the touch command to write as many files as possible per process, thus requiring fewer processes to write the 100k files
2. Used threads to parallelize the touch processes from within the test, for better efficiency
Change-Id: Id969f387f4b7b8e88daf688f7bada950cff2c412
Signed-off-by: hadarsharon <hsharon@redhat.com>
Test Case:
1.Set cluster.brick-multiplex to enabled.
2.Create three 1x3 replica volumes.
3.Start all the three volumes.
4.Stop three volumes one by one.
Change-Id: Ibf3e81e7424d6a429da0aa12efeae7fffd3338f2
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Test Case:
1. Create all types of volumes.
2. Start all volumes.
3. Delete /var/log/glusterfs folder on the client.
4. Mount all the volumes one by one.
5. Run IO on all the mount points.
6. Check if logs are generated in /var/log/glusterfs/.
Change-Id: I7a3275aad940116c3506b22b13a670e455d9ef00
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
This test case was failing in a test run as the
mnode was not removed from the list self.servers,
because of which there were runs where glusterd
was stopped instead and the command was executed
on mnode. Also adding code to check and
start glusterd on the node in instances where
the test case fails.
Change-Id: Id203102d3f0ec82af0ac215f0ecaf7ae22b630f5
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>