| Commit message | Author | Age | Files | Lines |
Add wait_for_bricks_to_be_online steps in teardown after
glusterd is started in the test steps.
Change-Id: Id30a3d870c6ba7c77b0e79604521ec41fe624822
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Steps:
1. Check replace-brick and data integrity after it completes
2. Check replace-brick while I/Os are in progress
Change-Id: Idfc801fde50967924696b2e909633b9ca95ac721
Signed-off-by: ubansal <ubansal@redhat.com>
Problem:
Line 135 is missing `()`, which leads to the traceback
below when the testcase fails:
```
Traceback (most recent call last):
File "/usr/lib64/python2.7/logging/__init__.py", line 851, in emit
msg = self.format(record)
File "/usr/lib64/python2.7/logging/__init__.py", line 724, in format
return fmt.format(record)
File "/usr/lib64/python2.7/logging/__init__.py", line 464, in format
record.message = record.getMessage()
File "/usr/lib64/python2.7/logging/__init__.py", line 328, in getMessage
msg = msg % self.args
TypeError: not all arguments converted during string formatting
Logged from file test_volume_start_stop_while_rebalance_in_progress.py, line 135
```
Solution:
Add the missing `()` brackets on line 135.
Change-Id: I318a5b838f01840afee5d4109645cc7dcd86c8fa
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
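The failure mode above can be reproduced outside the test suite. This is an illustration (not the actual test code): %-style formatting raises TypeError when the arguments don't match the placeholders, and because the logging module applies the "%" lazily inside getMessage(), the error surfaces in the handler's traceback rather than at the call site.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("demo")

# One "%s" placeholder but two arguments: the % operator raises TypeError.
# When this mismatch sits inside a g.log.* call, the logging handler hits
# the error in getMessage(), producing the traceback quoted above.
try:
    message = "Volume %s" % ("testvol", "extra-arg")
except TypeError as err:
    print(err)  # not all arguments converted during string formatting
```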
Problem:
Currently the code supports both service and systemctl
commands, but it fails on the latest platforms with
the below error:
```
service glusterd reload
Redirecting to /bin/systemctl reload glusterd.service
Failed to reload glusterd.service: Job type reload is
not applicable for unit glusterd.service.
```
This is because the latest platforms use systemctl
instead of service to reload the daemon processes:
```
systemctl daemon-reload
```
Solution:
The present code doesn't work properly because the check
is specific to only one platform, hence it fails.
The fix is to check for older platforms and run the
service command there; on all other platforms, run the
systemctl command.
Change-Id: I19b24652b96c4794553d3659eaf0301395929bca
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
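A minimal sketch of the described check, assuming a hypothetical helper (this is not the library's actual API): pick the reload command from the platform's major version, treating systemd-era platforms uniformly.

```python
def reload_daemon_cmd(os_major_version: int) -> str:
    # Hypothetical sketch: older platforms (e.g. RHEL/CentOS 6) still use
    # SysV-style "service"; newer platforms are systemd-based, where
    # "systemctl daemon-reload" is the supported way to reload daemons.
    if os_major_version <= 6:
        return "service glusterd reload"
    return "systemctl daemon-reload"
```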
Problem:
`g.rpyc_get_connection()` has a limitation where it can't
convert python2 calls to python3 calls. Due to this, a large
number of testcases fail when executed from a python2 machine
on a python3-only setup, or vice versa, with the below stack trace:
```
E ========= Remote Traceback (1) =========
E Traceback (most recent call last):
E File "/root/tmp.tL8Eqx7d8l/rpyc/core/protocol.py", line 323, in _dispatch_request
E res = self._HANDLERS[handler](self, *args)
E File "/root/tmp.tL8Eqx7d8l/rpyc/core/protocol.py", line 591, in _handle_inspect
E if hasattr(self._local_objects[id_pack], '____conn__'):
E File "/root/tmp.tL8Eqx7d8l/rpyc/lib/colls.py", line 110, in __getitem__
E return self._dict[key][0]
E KeyError: (b'rpyc.core.service.SlaveService', 94282642994712, 140067150858560)
```
Solution:
The solution here is to modify the code to not use
`g.rpyc_get_connection()`. The following changes are done
to accomplish it:
1) Remove code which uses g.rpyc_get_connection() and use generic
logic in functions:
   a. do_bricks_exist_in_shd_volfile()
   b. get_disk_usage()
   c. mount_volume()
   d. list_files()
   e. append_string_to_file()
2) Create files which can be uploaded and executed on
clients/servers to avoid rpc calls in functions:
   a. calculate_hash()
   b. validate_files_in_dir()
3) Modify setup.py to push the below files to
`/usr/share/glustolibs/scripts/`:
   a. compute_hash.py
   b. walk_dir.py
Change-Id: I00a81a88382bf3f8b366753eebdb2999260788ca
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
BZ#1702298 - Custom xattrs are not healed on newly added brick
Test Steps:
1) Create a volume.
2) Mount the volume using FUSE.
3) Create 100 directories on the mount point.
4) Set the xattr on the directories.
5) Add bricks to the volume and trigger rebalance.
6) Wait for rebalance to complete.
7) After rebalance completes, check if all the bricks have healed.
8) Check the xattr for dirs on the newly added bricks.
Change-Id: If83f65ea163ccf16f9024d6b3a867ba7b35773f0
Signed-off-by: sayaleeraut <saraut@redhat.com>
Add docleanup and docleanupclass to the base class.
They call the function fresh_setup_cleanup, which
restores the nodes to a fresh setup when the flag is
set to true or whenever the testcase fails.
Change-Id: I951ff59cc3959ede5580348b7f93b57683880a23
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Problem:
Testcase test_ec_version was failing with the
below traceback:
```
Traceback (most recent call last):
  File "/usr/lib64/python2.7/logging/__init__.py", line 851, in emit
    msg = self.format(record)
  File "/usr/lib64/python2.7/logging/__init__.py", line 724, in format
    return fmt.format(record)
  File "/usr/lib64/python2.7/logging/__init__.py", line 464, in format
    record.message = record.getMessage()
  File "/usr/lib64/python2.7/logging/__init__.py", line 328, in getMessage
    msg = msg % self.args
TypeError: %d format: a number is required, not str
```
This was due to a missing 's' in the log message on line 233.
Solution:
Add the missing 's' in the log message on line 233, as
shown below:
g.log.info('Brick %s is offline successfully', brick_b2_down)
Also rename the file to better reflect what the
testcase does.
Change-Id: I626fbe23dfaab0dd6d77c75329664a81a120c638
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
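The root cause is easy to demonstrate in isolation (an illustration, not the testcase itself): a numeric placeholder with a string argument raises the same TypeError the logging handler reported, while the corrected "%s" placeholder works for any value.

```python
# "%d" demands a number; passing a brick-path string raises TypeError
# (e.g. "%d format: a number is required, not str").
try:
    "Brick %d is offline successfully" % "brick-host:/bricks/brick0"
except TypeError as err:
    print(err)

# With the corrected "%s" placeholder the formatting succeeds:
print("Brick %s is offline successfully" % "brick-host:/bricks/brick0")
```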
Testcase steps: (file access)
- Rename the file so that the hashed and cached subvols are different
- Make sure the file can be accessed as long as the cached subvol is up
Also fixes a library issue in find_new_hashed()
Change-Id: Id81264848d6470b9fe477b50290f5ecf917ceda3
Co-authored-by: Susant Palai <spalai@redhat.com>
Signed-off-by: Susant Palai <spalai@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Case 1:
1. mkdir srcdir and dstdir (such that srcdir and
dstdir hash to different subvols)
2. Bring down srcdir's hashed subvol
3. mv srcdir dstdir (should fail)
Case 2:
1. mkdir srcdir dstdir
2. Bring down srcdir's hashed subvol
3. Bring down dstdir's hashed subvol
4. mv srcdir dstdir (should fail)
Case 3:
1. mkdir srcdir dstdir
2. Bring down dstdir's hashed subvol
3. mv srcdir dstdir (should fail)
Additional library fix details:
Also fix the library function to work with distributed-disperse volumes
by removing `if oldhashed._host != brickdir._host:`, as the same node
can host multiple bricks of the same volume.
Change-Id: Iaa472d1eb304b547bdec7a8e6b62c1df1a0ce591
Co-authored-by: Susant Palai <spalai@redhat.com>
Signed-off-by: Susant Palai <spalai@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Changes done in this patch include:
1. reduced runtime of test by removing multiple volume configs
2. added extra validation for node already peer detached
3. added test steps to cover peer detach when volume is offline
Change-Id: I80413594e90b59dc63b7f4f52e6e348ddb7a9fa0
Signed-off-by: nchilaka <nchilaka@redhat.com>
Earlier, brick creation was carried out based on the difference of used
and unused bricks. This is a bottleneck for implementing brick
multiplexing testcases; moreover, we couldn't create more than 10 volumes.
With this library, implement a way to create bricks on top of the
existing servers in a cyclic way, so that there is an equal number of
bricks on each brick partition on each server.
Added a parameter in the setup_volume function: if the multi_vol flag is
set, it will fetch bricks in a cyclic manner (using
form_bricks_for_multi_vol); otherwise it will fetch using the old
mechanism.
Added a bulk_volume_creation function, to create the multiple volumes
the user has specified.
Change-Id: I2103ec6ce2be4e091e0a96b18220d5e3502284a0
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
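The cyclic placement described above can be sketched as follows. This is a simplified stand-in (the hypothetical name form_bricks_cyclically, not the library's form_bricks_for_multi_vol): walk the server list round-robin so every server receives an (almost) equal share of the requested bricks.

```python
from itertools import cycle

def form_bricks_cyclically(servers, num_bricks):
    # Round-robin over the servers: brick i lands on server i % len(servers),
    # so brick counts per server differ by at most one.
    server_cycle = cycle(servers)
    return [next(server_cycle) for _ in range(num_bricks)]
```

For example, three servers and six bricks yields exactly two bricks per server.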
Add sleep in test_snap_delete_original_volume.py:
after cloning a volume, sleep and then start
the volume. Changed IO to write to one mount point,
as otherwise we see issues with validate IO.
Removed baseclass cleanup because the original volume is
already cleaned up in the testcase.
Change-Id: I7bf9686384e238e1afe8491013a3058865343eee
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Testcase steps:
1. Create a directory on the mount point and write files/dirs
2. Create another set of files (1K files)
3. While creation of files/dirs is in progress, kill one brick
4. Remove the contents of the killed brick (simulating disk replacement)
5. While the I/Os are still in progress, restart glusterd on the nodes
where we simulated disk replacement, to bring the bricks back online
6. Start volume heal
7. Wait for I/Os to complete
8. Verify whether the files are self-healed
9. Calculate arequals of the mount point and all the bricks
CentOS-CI failure due to the following bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1807384
Change-Id: I9e9f58a16a7950fd7d6493cbb5c4f5483892851e
Co-authored-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Please refer to commit message of patch [1].
[1] https://review.gluster.org/#/c/glusto-tests/+/24140/
Change-Id: I5319ce497ca3359e0e7dbd9ece481bada1ee2205
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Testcase steps:
1. Create a single-brick volume
2. Add some files and directories
3. Get arequal from the mountpoint
4. Add bricks such that the volume becomes a 1x3 replica volume
5. Start heal full
6. Make sure heal is completed
7. Get arequals from all bricks and
compare with the arequal from the mountpoint
Change-Id: I4ef140b326b3d9edcbd5b1f0b7d9c43f38ccfe66
Co-authored-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
BZ#1257394 - Provide meaningful errors on peer probe and peer detach
Test Steps:
1. Check the current peer status
2. Detach one of the valid nodes which is already part of the cluster
3. Stop glusterd on that node
4. Try to attach the above node to the cluster, which must fail with
a transport endpoint error
5. Recheck the test using the hostname; the same result is expected
6. Start glusterd on that node
7. Halt/reboot the node
8. Try to peer probe the halted node, which must fail again
9. The only error accepted is as below:
"peer probe: failed: Probe returned with Transport endpoint is not
connected"
10. Check peer status and make sure no other nodes are in peer reject state
Change-Id: Ic0a083d5cb150275e927723d960e89fe1a5528fb
Signed-off-by: nchilaka <nchilaka@redhat.com>
Add extra time for Beaker machines to validate
the testcases.
For test_rebalance_spurious.py, added cleanup in
teardown because the fix-layout patch is still not
merged.
Change-Id: I7ee8324ff136bbdb74600b730b4b802d86116427
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Problem:
Due to patch [1], the framework broke and
was failing for all the testcases with the below
backtrace:
```
> mount_dict['server'] = cls.snode
E AttributeError: type object 'VolumeAccessibilityTests_cplex_replicated_glusterf'
has no attribute 'snode'
```
Solution:
This was because mnode_slave was accidentally written as snode. Also,
cls.geo_rep_info wasn't a safe condition operator, hence it was
changed to cls.slaves.
Testcase results with patch:
test_cvt.py::TestGlusterHealSanity_cplex_replicated_glusterfs::test_self_heal_when_io_in_progress PASSED
test_cvt.py::TestGlusterExpandVolumeSanity_cplex_distributed-dispersed_glusterfs::test_expanding_volume_when_io_in_progress PASSED
test_cvt.py::TestGlusterHealSanity_cplex_dispersed_glusterfs::test_self_heal_when_io_in_progress PASSED
test_cvt.py::TestGlusterExpandVolumeSanity_cplex_distributed_nfs::test_expanding_volume_when_io_in_progress PASSED
test_cvt.py::TestQuotaSanity_cplex_replicated_nfs::test_quota_enable_disable_enable_when_io_in_progress PASSED
test_cvt.py::TestSnapshotSanity_cplex_distributed-dispersed_glusterfs::test_snapshot_basic_commands_when_io_in_progress PASSED
test_cvt.py::TestGlusterReplaceBrickSanity_cplex_distributed-replicated_glusterfs::test_replace_brick_when_io_in_progress PASSED
test_cvt.py::TestQuotaSanity_cplex_distributed_nfs::test_quota_enable_disable_enable_when_io_in_progress PASSED
test_cvt.py::TestGlusterExpandVolumeSanity_cplex_dispersed_glusterfs::test_expanding_volume_when_io_in_progress PASSED
test_cvt.py::TestGlusterShrinkVolumeSanity_cplex_distributed-dispersed_glusterfs::test_shrinking_volume_when_io_in_progress PASSED
test_cvt.py::TestGlusterExpandVolumeSanity_cplex_dispersed_nfs::test_expanding_volume_when_io_in_progress PASSED
test_cvt.py::TestGlusterShrinkVolumeSanity_cplex_distributed_glusterfs::test_shrinking_volume_when_io_in_progress PASSED
test_cvt.py::TestGlusterExpandVolumeSanity_cplex_distributed_glusterfs::test_expanding_volume_when_io_in_progress PASSED
test_cvt.py::TestQuotaSanity_cplex_dispersed_nfs::test_quota_enable_disable_enable_when_io_in_progress PASSED
test_cvt.py::TestSnapshotSanity_cplex_distributed-replicated_glusterfs::test_snapshot_basic_commands_when_io_in_progress PASSED
test_cvt.py::TestSnapshotSanity_cplex_dispersed_glusterfs::test_snapshot_basic_commands_when_io_in_progress PASSED
test_cvt.py::TestQuotaSanity_cplex_distributed-replicated_glusterfs::test_quota_enable_disable_enable_when_io_in_progress PASSED
test_cvt.py::TestSnapshotSanity_cplex_replicated_glusterfs::test_snapshot_basic_commands_when_io_in_progress PASSED
test_cvt.py::TestGlusterShrinkVolumeSanity_cplex_distributed-dispersed_nfs::test_shrinking_volume_when_io_in_progress PASSED
test_cvt.py::TestGlusterShrinkVolumeSanity_cplex_distributed_nfs::test_shrinking_volume_when_io_in_progress PASSED
links:
[1] https://review.gluster.org/#/c/glusto-tests/+/24029/
Change-Id: If7b329e232ab61df9f9d38f5491c58693336dd48
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Adding the code for the following:
1. Adding the function setup_master_and_slave_volumes() to geo_rep_libs.
2. Adding variables for master_mounts, slave_mounts, master_volume
and slave_volume to gluster_base_class.py.
3. Adding the class method
setup_and_mount_geo_rep_master_and_slave_volumes to
gluster_base_class.py.
Change-Id: Ic8ae1cb1c8b5719d4774996c3e9e978551414b44
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Testcase steps:
1. Create a volume and mount it.
2. Create a directory on mount and check whether all the bricks have
the same gfid.
3) Now delete the gfid attr from all but one of the backend bricks.
4) Do a lookup from the mount.
5. Check whether all the bricks have the same gfid assigned.
Failing in CentOS-CI due to the following bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1696075
Change-Id: I4eebc247b15c488cfa24599e0afec2fa5671656f
Co-authored-by: Anees Patel <anepatel@redhat.com>
Signed-off-by: Anees Patel <anepatel@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
The new function volume_type() will check if the volume under test
is of pure Replicated/Disperse/Arbiter type and return the result
as a string.
The functions run_layout_tests() and validate_files_in_dir() have
been modified to check the Gluster version and volume type in order
to fix the issues caused by DHT pass-through.
Change-Id: Ie7ad259883907c1fdc0b54e6743636fdab793272
Signed-off-by: sayaleeraut <saraut@redhat.com>
The issue earlier was that whenever a TC called the _get_layout()
and _is_complete() methods, it failed on Replicate/Arbiter/Disperse
volume types because of DHT pass-through.
The functions get_layout() and is_complete() have been modified to
check the Gluster version and volume type before running, in
order to fix the issue.
About DHT pass-through, please refer to
https://github.com/gluster/glusterfs/issues/405
for the details.
Change-Id: I0b0dc0ac3cbdef070a20854fbc89442fee1da8b6
Signed-off-by: sayaleeraut <saraut@redhat.com>
Problem:
The current timeout for reboot given in
test_heal_full_node_reboot is about 350 seconds,
which works with most hardware configurations.
However, when the reboot is done on slower systems which
take time to come up, this logic fails, due to
which this testcase and the subsequent testcases
fail.
Solution:
Change the timeout for reboot from 350 to 700. This
wouldn't affect the testcase's performance on good
hardware configurations, as the timeout is only a
maximum; if the node comes up earlier, the wait
exits anyway.
Change-Id: I60d05236e8b08ba7d0fec29657a93f2ae53404d4
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
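The "timeout is only a maximum" argument can be sketched as a polling loop (a hypothetical helper, not the library's actual wait function): the loop returns as soon as the node responds, so raising the bound from 350 to 700 costs nothing on fast hardware.

```python
import time

def wait_for_node_to_be_up(is_up, timeout=700, interval=5):
    # Poll `is_up()` until it succeeds or `timeout` seconds elapse.
    # The timeout is an upper bound only: the loop exits on the first
    # successful probe, so fast nodes are not penalised by a large value.
    deadline = time.time() + timeout
    while time.time() < deadline:
        if is_up():
            return True
        time.sleep(interval)
    return False
```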
Please refer to commit message of patch [1].
[1] https://review.gluster.org/#/c/glusto-tests/+/24140/
Change-Id: I25d30f7bdb20f0825709c4c852140e1906870ce7
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Please refer to commit message of patch [1].
[1] https://review.gluster.org/#/c/glusto-tests/+/24140/
Change-Id: Ib357d5690bb28131d788073b80a088647167fe80
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Please refer to commit message of patch [1].
[1] https://review.gluster.org/#/c/glusto-tests/+/24140/
Change-Id: Ic0b3b1333ac7b1ae02f701943d49510e6d46c259
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
The sys library was added to all the testcases to fetch
`sys.version_info.major`, which gives the version
of python with which glusto and glusto-tests are installed,
and to run the I/O script (file_dir_ops.py) with that
version of python. But this creates a problem: older jobs
running on older platforms won't run the way they used to.
For example, if the older platform has python2 by default and
we run the tests from a slave which
has python3, it fails, and vice versa.
The problem is introduced by the below code:
```
cmd = ("/usr/bin/env python%d %s create_deep_dirs_with_files "
"--dirname-start-num 10 --dir-depth 1 --dir-length 1 "
"--max-num-of-dirs 1 --num-of-files 5 %s" % (
sys.version_info.major, self.script_upload_path,
self.mounts[0].mountpoint))
```
The solution to this problem is to change `python%d`
to `python`, which enables the code to run with
whatever version of python is available on that client.
This enables us to run any version of the framework
on both older and newer platforms.
Change-Id: I7c8200a7578f03c482f0c6a91832b8c0fdb33e77
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
The script sometimes fails at expand-volume with an
"Already part of volume" error; fixed with this patch.
Change-Id: I628bbdb268e5a42112f68d9148da6bdb775acd26
Co-authored-by: Prasad Desala <tdesala@redhat.com>,
Milind Waykole <milindwaykole96@gmail.com>
Signed-off-by: Prasad Desala <tdesala@redhat.com>
Signed-off-by: Milind Waykole <milindwaykole96@gmail.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Problem:
The performance.io-cache option was ON by default
before Gluster 6.0; in Gluster 6.0 its default was set to OFF.
Solution:
Adding code to check the Gluster version and then check
whether the option is ON or OFF, as shown below:
```
if get_gluster_version(self.mnode) >= 6.0:
self.assertIn("off", ret['performance.io-cache'],
"io-cache value is not correct")
else:
self.assertIn("on", ret['performance.io-cache'],
"io-cache value is not correct")
```
CentOS-CI failure analysis:
This patch is expected to fail because, on
nightly builds, running `gluster --version` returns the output shown below:
```
# gluster --version
glusterfs 20200220.a0e0890
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.
```
This output can't be parsed by the get_gluster_version() function,
which is used in this patch to get the gluster version
and check performance.io-cache's default value accordingly.
Change-Id: I00b652a9d5747cbf3006825bb17b9ca2f69cb9cd
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
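Why the nightly output breaks version detection can be sketched as follows (a hypothetical simplification, not the real get_gluster_version()): the second token of `gluster --version` is read as a major.minor number, and the date-stamped nightly string has no numeric minor part, so parsing fails.

```python
def parse_gluster_version(version_output):
    # Take the second token of the first line, e.g. "6.0" from
    # "glusterfs 6.0". A nightly build reports "glusterfs 20200220.a0e0890",
    # whose second dotted component is a git hash, so float() fails and we
    # return None -- mirroring why the patch can't work on CentOS-CI nightlies.
    token = version_output.split()[1]
    try:
        return float(".".join(token.split(".")[:2]))
    except ValueError:
        return None
```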
Moved steps from teardown class to teardown, and removed
the unwanted teardown class. Rectified the testcase
failing at wait-for-IO-to-complete by removing that step,
because after validate IO the sub-process terminates, which
results in a failure.
Change-Id: I2eaf05680b817b681aff8b48683fc9dac88896b0
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Change-Id: I0fa6bbacda16fb97d3454a8510a937442b5755a4
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Change-Id: I04f7b7c894d48d0188379028412d9c6b48eac210
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Used the wait-for-peer-to-connect and
wait-for-glusterd-to-connect functions in testcases.
Added fixes to check that files exist, and
increased the timeout value for failure cases.
Change-Id: I9d5692f635ed324ffe7dac9944ec9b8f3b933fd1
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
As distributed-arbiter and arbiter weren't present before patch [1],
arbiter and distributed-arbiter volumes were created by the hack shown
below, where a distributed-replicated or replicated volume's configuration
was modified to create an arbiter volume.
```
@runs_on([['replicated', 'distributed-replicated'],
['glusterfs', 'nfs']])
class TestSelfHeal(GlusterBaseClass):
.................
@classmethod
def setUpClass(cls):
...............
# Overriding the volume type to specifically test the volume
# type Change from distributed-replicated to arbiter
if cls.volume_type == "distributed-replicated":
cls.volume['voltype'] = { 'type': 'distributed-replicated',
'dist_count': 2,
'replica_count': 3,
'arbiter_count': 1,
'transport': 'tcp'}
```
Now this code is updated: the code which was used to
override the volume configuration is removed, and
arbiter or distributed-arbiter is simply added to `@runs_on([],[])`,
as shown below:
```
@runs_on([['replicated', 'distributed-arbiter'],
['glusterfs', 'nfs']])
class TestSelfHeal(GlusterBaseClass):
```
Links:
[1] https://github.com/gluster/glusto-tests/commit/08b727842bc66603e3b8d1160ee4b15051b0cd20
Change-Id: I4c44c2f3506bd0183fd991354fb723f8ec235a4b
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Instead of calling g.log.error, we were calling g.log.err.
Due to this, instead of logging the right error message (say,
when doing volume cleanup), it threw an ambiguous traceback.
Change-Id: I39887ce08756eaf29df2d99f73cc7795a4d2c065
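The symptom is easy to reproduce with the standard logging module (an illustration of the typo, not the glusto code itself): Logger has no `err` method, so the call raises AttributeError and the intended message is lost behind an unrelated traceback.

```python
import logging

log = logging.getLogger("demo")

# A typo like g.log.err (instead of g.log.error) never reaches the
# handlers: attribute lookup fails first, raising AttributeError.
try:
    log.err("volume cleanup failed")
except AttributeError as err:
    print(err)
```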
Earlier the method get_volume_type() passed when run in the interpreter,
but when the method was called in a TC, it failed at the condition (Line:
2235). For example:
the value of brickdir_path is "dhcp47-3.lab.eng.blr.redhat.com:/bricks/
brick2/testvol_replicated_brick2/" and it tries to find that value
in the list ['10.70.46.172:/bricks/brick2/testvol_replicated_brick0',
'10.70.46.195:/bricks/brick2/testvol_replicated_brick1',
'10.70.47.3:/bricks/brick2/testvol_replicated_brick2'] returned by
get_all_bricks(), which will fail.
Now, with the fix, it runs successfully, as it checks whether, for host
dhcp47-3.lab.eng.blr.redhat.com, the brick
/bricks/brick2/testvol_replicated_brick2 is present in the list
brick_paths[], which consists of only the paths and not the IP addresses
of the bricks present on that host.
Change-Id: Ie595faba1e92c559293ddd04f46b85065b23dfc5
Signed-off-by: sayaleeraut <saraut@redhat.com>
Change-Id: I2ba0c81dad41bdac704007bd1780b8a98cb50358
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Testcase steps:
1. Set the volume options
"metadata-self-heal": "off"
"entry-self-heal": "off"
"data-self-heal": "off"
"self-heal-daemon": "off"
2. Bring down all brick processes from the selected set
3. Create IO (50k files)
4. Get arequal before getting bricks online
5. Bring bricks online
6. Set the volume option
"self-heal-daemon": "on"
7. Check for daemons
8. Start healing
9. Check if heal is completed
10. Check for split-brain
11. Get arequal after getting bricks online and compare with
arequal before getting bricks online
12. Add bricks to the volume
13. Do rebalance and wait for it to complete
14. Get arequal after adding bricks and compare with
arequal after getting bricks online
Change-Id: I1598c4d6cf98ce99249e85fc377b9db84886f284
Co-authored-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Added steps to reset volume and resolved teardown class
cleanup failures.
Change-Id: I06b0ed8810c9b064fd2ee7c0bfd261928d8c07db
Used library functions to wait for glusterd to start
and for peers to connect, and modified the teardown
to rectify statements to the correct values.
Change-Id: I40b4362ae1491acf75681c7623c16c53213bb1b9
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Added the wait_for_io_to_complete function to testcases;
used the wait_for_glusterd function
and the wait_for_peer_connect function.
Change-Id: I4811848aad8cca4198cc93d8e200dfc47ae7ac9b
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Add steps to bring offline bricks back online and to
reset the volume in failure scenarios.
Change-Id: I9bdadd8a80ded81cf7cb4e324a18321400bfcc4c
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Earlier the elements passed in the list for the volume type check were
('replicate', 'disperse', 'arbiter'), but the volume type
returned by get_volume_type() is in the format 'Replicate',
'Disperse', 'Arbiter', and string comparisons are case sensitive;
these changes make sure the check matches.
Change-Id: Ic73ca946cd9c06bfa5b92605dbeba74d6ffa83d9
Signed-off-by: sayaleeraut <saraut@redhat.com>
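The case-sensitivity issue above can be shown in two lines (a standalone illustration; `volume_type` stands in for the value get_volume_type() returns):

```python
# get_volume_type() returns capitalised names such as "Replicate".
volume_type = "Replicate"

# Membership tests on tuples compare strings exactly, so the old
# lowercase list never matches, while the capitalised list does.
old_check = volume_type in ('replicate', 'disperse', 'arbiter')
new_check = volume_type in ('Replicate', 'Disperse', 'Arbiter')
print(old_check, new_check)  # False True
```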
The function get_volume_type() will return the type of volume (as
distributed/replicate/disperse/arbiter/distributed-replicated/
distributed-dispersed/distributed-arbiter) under test.
Change-Id: Ib23ae1ad18ef65d0520fe041a5f80211030a034b
Signed-off-by: sayaleeraut <saraut@redhat.com>
The DHT pass-through functionality was introduced in Gluster
6, due to which the TCs were failing for Replicate, Disperse and
Arbiter volume types whenever the function to get the hashrange was
called.
With this fix, first the Gluster version and then the volume
type is checked before calling the function to get the
hashrange. If the Gluster version is greater than or equal to
6, the layout will not be checked for pure AFR/Arbiter/EC
volumes.
About DHT pass-through option : The distribute xlator now skips
unnecessary checks and operations when the distribute count is one
for a volume, resulting in improved performance. Comes into play
when there is only 1 brick or it is a pure replicate or pure
disperse or pure arbiter volume.
Change-Id: I55634f495a54e3c9909b1e1c716990b9ee9834a3
Signed-off-by: sayaleeraut <saraut@redhat.com>
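The gating described above can be sketched as a small predicate (a hypothetical helper, not the library's real function): check the Gluster version first, then the volume type, and skip the hashrange/layout check for pure volumes on Gluster 6 and later.

```python
def should_check_layout(gluster_version, volume_type):
    # On Gluster >= 6, DHT pass-through removes the distribute layer for
    # pure AFR/Arbiter/EC volumes (distribute count of one), so there is
    # no layout to verify; everywhere else the check still applies.
    pure_types = ('Replicate', 'Arbiter', 'Disperse')
    if gluster_version >= 6 and volume_type in pure_types:
        return False
    return True
```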
Reboot cases are failing with the current timeout value,
therefore increasing the timeout value in the function.
Change-Id: I262120e87d36b2d5cc7244b37d5f6e051c964f0f
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
Change-Id: I1eacfd74c730d28e36bb8f7e3a1f574edc3d13c7
Co-authored-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Testcase 1: Test entry transaction crash consistency : create
- Create IO
- Calculate arequal before creating snapshot
- Create snapshot
- Modify the data
- Stop the volume
- Restore snapshot
- Start the volume
- Get arequal after restoring snapshot
- Compare arequals
Testcase 2: Test entry transaction crash consistency : delete
- Create IO of 50 files
- Delete 20 files
- Calculate arequal before creating snapshot
- Create snapshot
- Delete 20 files more
- Stop the volume
- Restore snapshot
- Start the volume
- Get arequal after restoring snapshot
- Compare arequals
Testcase 3: Test entry transaction crash consistency : rename
- Create IO of 50 files
- Rename 20 files
- Calculate arequal before creating snapshot
- Create snapshot
- Rename 20 files more
- Stop the volume
- Restore snapshot
- Start the volume
- Get arequal after restoring snapshot
- Compare arequals
Change-Id: I7cb9182f91ae50c47d5ae9b3f8031413b2bbfbbf
Co-authored-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Adding function collect_bricks_arequal() to lib_utils.py
to collect arequal-checksum on all the bricks of all the
nodes used to create a volume using the below command:
```
arequal-checksum -p <BrickPath> -i .glusterfs -i .landfill -i .trashcan
```
Usage:
```
>>> all_bricks = get_all_bricks(self.mnode, self.volname)
>>> ret, arequal = collect_bricks_arequal(all_bricks)
>>> ret
True
```
Change-Id: Id42615469be18d84e5691c982369634c436ed0cf
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>