Commit message  Author  Age  Files  Lines
* [TestFix] Performing cluster options reset.  srijan-sivakumar  2020-12-04  1  -1/+15
    Issue: The cluster options set during a TC aren't reset, causing them to affect subsequent TC runs.
    Fix: Adding volume_reset() in the tearDown of a TC to perform a cleanup of the cluster options.
    Change-Id: I00da5837d2a4260b4d414cc3c8083f83d8f6fadd
    Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
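A minimal sketch of the reset-in-tearDown idea described above; the actual fix calls the glustolibs volume_reset() helper, so the volume name, test class and subprocess calls below are assumptions for illustration only:

    # Hedged sketch: clear options set during a test case so they cannot
    # leak into subsequent TC runs (the real code uses glustolibs helpers).
    import subprocess

    VOLNAME = "testvol"  # assumed volume name

    def reset_volume_options(volname=VOLNAME):
        # 'gluster volume reset <volname> force' clears all reconfigured
        # options; 'gluster volume reset all' does the same for cluster-wide
        # options set with 'gluster volume set all'.
        subprocess.run(["gluster", "volume", "reset", volname, "force"],
                       check=True)

    class SomeGlusterTest(object):  # hypothetical test class
        def tearDown(self):
            reset_volume_options()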
* [Test]: Add a tc to check default max bricks per process  nik-redhat  2020-12-04  1  -0/+100
    Test steps:
    1) Create a volume and start it.
    2) Fetch the max bricks per process value.
    3) Reset the volume options.
    4) Fetch the max bricks per process value.
    5) Compare the value fetched in the last step with the initial value.
    6) Enable brick-multiplexing in the cluster.
    7) Fetch the max bricks per process value.
    8) Compare the value fetched in the last step with the initial value.
    Change-Id: I20bdefd38271d1e12acf4699b4fe5d0da5463ab3
    Signed-off-by: nik-redhat <nladha@redhat.com>
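A rough sketch of fetching and comparing the option value; cluster.max-bricks-per-process and cluster.brick-multiplex are real gluster options, while the helper and parsing below are illustrative assumptions:

    # Hedged sketch: fetch cluster.max-bricks-per-process before and after
    # enabling brick multiplexing and compare with the initial value.
    import subprocess

    def get_max_bricks_per_process():
        # 'gluster volume get all <option>' prints "<option>  <value>" rows.
        out = subprocess.run(
            ["gluster", "volume", "get", "all",
             "cluster.max-bricks-per-process"],
            capture_output=True, text=True, check=True).stdout
        for line in out.splitlines():
            if "cluster.max-bricks-per-process" in line:
                return line.split()[-1]
        return None

    initial = get_max_bricks_per_process()
    subprocess.run(["gluster", "volume", "set", "all",
                    "cluster.brick-multiplex", "on"], check=True)
    # Per the steps above, the default value is expected to be unchanged.
    assert get_max_bricks_per_process() == initial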
* [TestFix] Adding cluster options reset.  srijan-sivakumar  2020-12-04  1  -0/+7
    The cluster options, once set, aren't reset, which would cause problems for subsequent TCs. Hence resetting the options at teardown.
    Change-Id: Ifd1df2632a25ca7788a6bb4f765b3f6583ab06d6
    Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
* [Test] Check performance of ls on distributed volumes  kshithijiyer  2020-12-03  1  -0/+105
    Test case:
    1. Create a volume of type distributed-replicated, distributed-arbiter or distributed-dispersed and start it.
    2. Mount the volume to clients and create 2000 directories and 10 files inside each directory.
    3. Wait for I/O to complete on the mount point and perform ls (ls should complete within 10 seconds).
    Change-Id: I5c08c185f409b23bd71de875ad1d0236288b0dcc
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
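A small timing check in the spirit of step 3; the mount path is an assumption and the 10-second threshold comes from the test description:

    # Hedged sketch: 'ls' on the mount point should finish within 10 seconds
    # once directory/file creation has completed.
    import subprocess
    import time

    MOUNTPOINT = "/mnt/testvol"  # assumed fuse mount path

    start = time.time()
    subprocess.run("ls %s > /dev/null" % MOUNTPOINT, shell=True, check=True)
    elapsed = time.time() - start
    assert elapsed < 10, "ls took %.1fs, expected < 10s" % elapsed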
* [Test] Add tc to verify the vol status vol/all --xml dump  Milind  2020-12-02  1  -0/+106
    1. Stop one of the volumes, i.e. gluster volume stop <vol-name>
    2. Get the status of the volumes with --xml dump, i.e. gluster volume status all --xml
    The XML dump should be consistent.
    Signed-off-by: Milind <mwaykole@redhat.com>
    Change-Id: I3e7af6d1bc45b73ed8302bf3277e3613a6b1100f
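A hedged sketch of the consistency check: the --xml output should still parse cleanly and report success even with one volume stopped; the parsing harness itself is illustrative:

    # Hedged sketch: 'gluster volume status all --xml' must stay well-formed
    # even when one of the volumes is stopped.
    import subprocess
    import xml.etree.ElementTree as ET

    out = subprocess.run(["gluster", "volume", "status", "all", "--xml"],
                         capture_output=True, text=True, check=True).stdout
    root = ET.fromstring(out)  # raises ParseError if the dump is broken
    op_ret = root.find("opRet")
    assert op_ret is not None and op_ret.text == "0", "inconsistent XML dump"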
* [TestFix] Move mem_leak TC to resource_leak dir  Pranav  2020-12-02  1  -0/+0
    Moving the gluster mem_leak test case to resource_leak dir.
    Change-Id: I8189dc9b509a09f793fe8ca2be53e8546babada7
    Signed-off-by: Pranav <prprakas@redhat.com>
* [TestFix]: Fixed the grep pattern for epoll thread count  nik-redhat  2020-11-27  1  -10/+10
    Modified the command from 'grep epoll_wait' to 'grep -i sys_epoll_wait' to address the changes in the epoll functionality in newer versions of Linux.
    Details of the changes can be found here:
    https://github.com/torvalds/linux/commit/791eb22eef0d077df4ddcf633ee6eac038f0431e
    Change-Id: I1671a74e538d20fe5dbf951fca6f8edabe0ead7f
    Signed-off-by: nik-redhat <nladha@redhat.com>
* [Test] XML dump of gluster volume status during rebalance  Milind  2020-11-27  1  -0/+185
    1. Create a trusted storage pool by peer probing the node.
    2. Create a distributed-replicated volume.
    3. Start the volume, fuse mount the volume and start IO.
    4. Create another replicated volume, start it and stop it.
    5. Start rebalance on the volume.
    6. While rebalance is in progress, stop glusterd on one of the nodes in the trusted storage pool.
    7. Get the status of the volumes with --xml dump.
    Change-Id: I581b7713d7f9bfdd7be00add3244578b84daf94f
    Signed-off-by: Milind <mwaykole@redhat.com>
* [LibFix] Adding retry for start_glusterd  srijan-sivakumar  2020-11-27  1  -4/+11
    Issue: Glusterd start fails after repeated start and stop (due to the cap on a maximum of 6 starts of the service within an hour).
    Fix: Hence it is prudent to add the retry option, similar to that of restart_glusterd, so as to run `systemctl reset-failed glusterd` on the servers.
    Change-Id: Ic0378934623dfa6dc5ab265246c746269f6995bc
    Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
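A minimal sketch of the retry idea; the real change lives in the glustolibs gluster_init library, so this standalone loop is only illustrative:

    # Hedged sketch: systemd refuses to start a unit that hit its start
    # limit, so issue 'systemctl reset-failed glusterd' before retrying.
    import subprocess
    import time

    def start_glusterd_with_retry(retries=5, delay=5):
        for _ in range(retries):
            if subprocess.run(["systemctl", "start", "glusterd"]).returncode == 0:
                return True
            # Clear the failed/start-limit state before the next attempt.
            subprocess.run(["systemctl", "reset-failed", "glusterd"])
            time.sleep(delay)
        return False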
* [Test] Add test to verify memory leak with ssl enabled  Pranav  2020-11-27  1  -0/+128
    This test is to verify BZ:1785577 (https://bugzilla.redhat.com/show_bug.cgi?id=1785577), i.e. that there are no memory leaks when SSL is enabled.
    Change-Id: I1f44de8c65b322ded76961253b8b7a7147aca76a
    Signed-off-by: Pranav <prprakas@redhat.com>
* [Test] Add tc to check vol set when glusterd is stopped on one node  nik-redhat  2020-11-24  1  -0/+193
    Test Steps:
    1) Setup and mount a volume on client.
    2) Stop glusterd on a random server.
    3) Start IO on mount points.
    4) Set an option on the volume.
    5) Start glusterd on the stopped node.
    6) Verify all the bricks are online after starting glusterd.
    7) Check if the volume info is synced across the cluster.
    Change-Id: Ia2982ce4e26f0d690eb2bc7516d463d2a71cce86
    Signed-off-by: nik-redhat <nladha@redhat.com>
* [Test]: Add tc for default ping-timeout and epoll thread count  nik-redhat  2020-11-24  1  -0/+87
    Test Steps:
    1. Start glusterd.
    2. Check that the ping timeout value in glusterd.vol is 0.
    3. Create a test script for epoll thread count.
    4. Source the test script.
    5. Fetch the pid of glusterd.
    6. Check that the epoll thread count of glusterd is 1.
    Change-Id: Ie3bbcb799eb1776004c3db4922d7ee5f5993b100
    Signed-off-by: nik-redhat <nladha@redhat.com>
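A hedged sketch of the two checks; /etc/glusterfs/glusterd.vol is the standard config path, while the epoll thread-count command is an assumption (reading /proc/<pid>/task/*/stack needs root, and the script shipped with the test may differ):

    # Hedged sketch: default ping-timeout in glusterd.vol and the epoll
    # thread count of the running glusterd process.
    import subprocess

    # 1. ping-timeout in glusterd.vol should default to 0.
    with open("/etc/glusterfs/glusterd.vol") as f:
        assert "option ping-timeout 0" in f.read()

    # 2. Count glusterd threads parked in sys_epoll_wait (assumed mechanism).
    pid = subprocess.run(["pidof", "glusterd"], capture_output=True,
                         text=True, check=True).stdout.split()[0]
    cmd = "grep -il sys_epoll_wait /proc/%s/task/*/stack | wc -l" % pid
    count = int(subprocess.run(cmd, shell=True, capture_output=True,
                               text=True).stdout.strip())
    assert count == 1, "expected a single epoll thread by default"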
* [Test] Rebalance with add brick and lookup on mount  srijan-sivakumar  2020-11-18  1  -0/+113
    Steps-
    1. Create a distributed-replicated volume, start and mount it.
    2. Create deep dirs (200) and create some 100 files on the deepest directory.
    3. Expand volume.
    4. Start rebalance.
    5. Once rebalance is completed, do a lookup on the mount and log the time taken.
    Change-Id: I3a55d2670cc6bda7670f97f0cd6208dc9e36a5d6
    Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
* [Test] Pro-active metadata self heal on open fd  Arthy Loganathan  2020-11-16  1  -0/+244
    Change-Id: I626914130554cccf1008ab43158d7063d131b870
    Signed-off-by: Arthy Loganathan <aloganat@redhat.com>
* [Test] Add test to add brick, replace brick and fix layout  kshithijiyer  2020-11-12  1  -0/+124
    Test case:
    1. Create a volume, start it and mount it.
    2. Create files and dirs on the mount point.
    3. Add bricks to the volume.
    4. Replace 2 old bricks of the volume.
    5. Trigger rebalance fix-layout and wait for it to complete.
    6. Check layout on all the bricks through trusted.glusterfs.dht.
    Change-Id: Ibc8ded6ce2a54b9e4ec8bf0dc82436fcbcc25f56
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Add test with filled bricks + add brick + rebalance  kshithijiyer  2020-11-12  1  -0/+120
    Test case:
    1. Create a volume, start it and mount it.
    2. Create a data set on the client node such that all the available space is used and the "No space left on device" error is generated.
    3. Set cluster.min-free-disk to 30%.
    4. Add bricks to the volume, trigger rebalance and wait for rebalance to complete.
    Change-Id: I69c9d447b4713b107f15b4801f4371c33f5fb2fc
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Add tests for add brick with hard links and sticky bit  kshithijiyer  2020-11-12  1  -0/+171
    Scenarios:
    ----------
    Test case 1:
    1. Create a volume, start it and mount it using fuse.
    2. Create 50 files on the mount point and create 50 hardlinks for the files.
    3. After the files and hard links creation is complete, add bricks to the volume and trigger rebalance on the volume.
    4. Wait for rebalance to complete and check if files are skipped or not.
    5. Trigger rebalance on the volume with force and repeat step 4.
    Test case 2:
    1. Create a volume, start it and mount it using fuse.
    2. Create 50 files on the mount point and set sticky bit to the files.
    3. After the files creation and sticky bit addition is complete, add bricks to the volume and trigger rebalance on the volume.
    4. Wait for rebalance to complete.
    5. Check for data corruption by comparing arequal before and after.
    Change-Id: I61bcf14185b0fe31b44e9d2b0a58671f21752633
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
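A small illustration of the data-set preparation used by the two scenarios (hard links for test case 1, sticky bit for test case 2); the mount path and file names are assumptions:

    # Hedged sketch: 50 files plus a hard link each (scenario 1) and the
    # sticky bit set on each file (scenario 2).
    import os
    import stat

    MOUNTPOINT = "/mnt/testvol"  # assumed fuse mount path

    for i in range(50):
        path = os.path.join(MOUNTPOINT, "file_%d" % i)
        with open(path, "w") as f:
            f.write("data %d\n" % i)
        os.link(path, path + "_hardlink")                     # scenario 1
        os.chmod(path, os.stat(path).st_mode | stat.S_ISVTX)  # scenario 2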
* [Test] Add test for full brick + add brick + remove brick  kshithijiyer  2020-11-12  1  -0/+111
    Test case:
    1. Create a volume, start it and mount it.
    2. Fill a few bricks till min.free.limit is reached.
    3. Add brick to the volume.
    4. Set cluster.min-free-disk to 30%.
    5. Remove bricks from the volume. (Remove brick should pass without any errors.)
    6. Check for data loss by comparing arequal before and after.
    Change-Id: I0033ec47ab2a2958178ce23c9d164939c9bce2f3
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Add test to check kill brick with remove brick running  kshithijiyer  2020-11-12  1  -0/+128
    Test case:
    1. Create a volume, start it and mount it.
    2. Create some data on the volume.
    3. Start remove-brick on the volume.
    4. When remove-brick is in progress, kill the brick process of a brick which is being removed.
    5. Remove-brick should complete without any failures.
    Change-Id: I8b8740d0db82d3345279dee3f0f5f6e17160df47
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Add tests to check remove brick with different options  kshithijiyer  2020-11-12  1  -0/+113
    Test scenarios:
    ===============
    Test case 1:
    1. Create a volume, start it and mount it.
    2. Create some data on the volume.
    3. Run remove-brick start, status and finally commit.
    4. Check if there is any data loss or not.
    Test case 2:
    1. Create a volume, start it and mount it.
    2. Create some data on the volume.
    3. Run remove-brick with force.
    4. Check if bricks are still seen on the volume or not.
    Change-Id: I2cfd324093c0a835811a682accab8fb0a19551cb
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Add test to add & remove bricks with lookups & I/O running  kshithijiyer  2020-11-12  1  -0/+162
    Test case:
    1. Enable brickmux on cluster, create a volume, start it and mount it.
    2. Start the below I/O from 4 clients:
       From client-1: run script to create folders and files continuously
       From client-2: start linux kernel untar
       From client-3: while true;do find;done
       From client-4: while true;do ls -lRt;done
    3. Kill brick process on one of the nodes.
    4. Add brick to the volume.
    5. Remove bricks from the volume.
    6. Validate if I/O was successful or not.
    Skip reason: Test case skipped due to bug 1571317.
    Change-Id: I48bdb433230c0b13b0738bbebb5bb71a95357f57
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Check that heal info does not hang  Ravishankar N  2020-11-12  1  -0/+162
    Check that heal info completes successfully when there are pending heals and healing and I/O are going on.
    Change-Id: I7b00c5b6446d6ec722c1c48a50e5293272df0fdf
    Signed-off-by: Ravishankar N <ravishankar@redhat.com>
* [Test] Add test to check remove brick with open fd  kshithijiyer  2020-11-11  1  -0/+107
    Test case:
    1. Create volume, start it and mount it.
    2. Open file datafile on the mount point and start copying /etc/passwd line by line (make sure that the copy is slow).
    3. Start remove-brick of the subvol to which datafile is hashed.
    4. Once remove-brick is complete, compare the checksum of /etc/passwd and datafile.
    Change-Id: I278e819731af03094dcee93963ec1da115297bef
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [TestFix] Fixing typo glusterd  Milind  2020-11-10  1  -1/+1
    Change-Id: I080328dfbcde5652f9ab697f8751b87bf96e8245
    Signed-off-by: Milind <mwaykole@redhat.com>
* [Test] Brick status offline when quorum not met.  srijan-sivakumar  2020-11-09  1  -0/+125
    Steps-
    1. Create a volume and mount it.
    2. Set the quorum type to 'server'.
    3. Bring some nodes down such that quorum isn't met.
    4. Brick status in the node which is up should be offline.
    5. Restart glusterd in this node.
    6. Brick status in the restarted node should be offline.
    Change-Id: If6885133848d77ec803f059f7a056dc3aeba7eb1
    Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
* [Test] Add test to add brick rebal with one brick full  kshithijiyer  2020-11-09  1  -0/+139
    Test case:
    1. Create a pure distribute volume with 3 bricks.
    2. Start it and mount it on client.
    3. Fill one disk of the volume till it's full.
    4. Add brick to volume, start rebalance and wait for it to complete.
    5. Check that the arequal checksums before and after add brick are the same.
    6. Check if link files are present on bricks or not.
    Change-Id: I4645a3eea33fefe78d48805a3794556b81b189bc
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Add tc to check the default log level of CLI  nik-redhat  2020-11-09  1  -0/+97
    Test Steps:
    1) Create and start a volume.
    2) Run volume info command.
    3) Run volume status command.
    4) Run volume stop command.
    5) Run volume start command.
    6) Check the default log level of cli.log.
    Change-Id: I871d83500b2a3876541afa348c49b8ce32169f23
    Signed-off-by: nik-redhat <nladha@redhat.com>
* [Test] Add tc to check updates in 'options' file on quorum changes  nik-redhat  2020-11-05  1  -0/+94
    Test Steps:
    1. Create and start a volume.
    2. Check the output of the '/var/lib/glusterd/options' file.
    3. Store the value of 'global-option-version'.
    4. Set server-quorum-ratio to 70%.
    5. Check the output of the '/var/lib/glusterd/options' file.
    6. Compare the value of 'global-option-version' and check if the value of 'server-quorum-ratio' is set to 70%.
    Change-Id: I5af40a1e05eb542e914e5766667c271cbbe126e8
    Signed-off-by: nik-redhat <nladha@redhat.com>
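A hedged sketch of steps 2-6; the options file path and CLI command are standard, while the key names inside the file are assumed to mirror the CLI option names:

    # Hedged sketch: 'global-option-version' in /var/lib/glusterd/options
    # should increase and 'server-quorum-ratio' should be recorded as 70%.
    import subprocess

    def read_options(path="/var/lib/glusterd/options"):
        opts = {}
        with open(path) as f:
            for line in f:
                if "=" in line:
                    key, value = line.strip().split("=", 1)
                    opts[key] = value
        return opts

    before = read_options()
    subprocess.run(["gluster", "volume", "set", "all",
                    "cluster.server-quorum-ratio", "70%"], check=True)
    after = read_options()
    assert after.get("cluster.server-quorum-ratio") == "70%"
    assert int(after["global-option-version"]) > int(before["global-option-version"])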
* [Test] Add tc to validate auth.allow and auth.reject options on volume  nik-redhat  2020-11-05  1  -0/+162
    Test Steps:
    1. Create and start a volume.
    2. Disable brick multiplex.
    3. Set the auth.allow option on the volume for the client address on which the volume is to be mounted.
    4. Mount the volume on the client and then unmount it.
    5. Reset the volume.
    6. Set the auth.reject option on the volume for the client address on which the volume is to be mounted.
    7. Mounting the volume should fail.
    8. Reset the volume and mount it on the client.
    9. Repeat steps 3-8 with brick multiplex enabled.
    Change-Id: I26d88a217c03f1b4732e4bdb9b8467a9cd608bae
    Signed-off-by: nik-redhat <nladha@redhat.com>
* [Test] Add test to verify the posix storage.reserve option  Milind  2020-11-02  1  -0/+79
    1) Create a distributed-replicated volume and start it.
    2) Enable the storage.reserve option on the volume using 'gluster volume set'; let's say, set it to a value of 50.
    3) Mount the volume on a client.
    4) Check the df -h output of the mount point and backend bricks.
    Change-Id: I74f891ce5a92e1a4769ec47c64fc5469b6eb9224
    Signed-off-by: Milind <mwaykole@redhat.com>
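An illustrative version of steps 2-4; storage.reserve is the real option, while the volume name and mount path are assumptions:

    # Hedged sketch: reserve 50% on the volume, then look at the usable size
    # reported on the mount point, which should shrink accordingly.
    import os
    import subprocess

    VOLNAME = "testvol"          # assumed volume name
    MOUNTPOINT = "/mnt/testvol"  # assumed fuse mount path

    subprocess.run(["gluster", "volume", "set", VOLNAME,
                    "storage.reserve", "50"], check=True)
    st = os.statvfs(MOUNTPOINT)
    total_gb = st.f_blocks * st.f_frsize / 1024.0 ** 3
    avail_gb = st.f_bavail * st.f_frsize / 1024.0 ** 3
    print("total: %.1f GiB, available after 50%% reserve: %.1f GiB"
          % (total_gb, avail_gb))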
* [Test] - Check self heal with data-self-heal-algorithm set to diff  karthik-us  2020-10-30  1  -0/+162
    Steps:
    1. Create a replicated/distributed-replicate volume and mount it.
    2. Set data/metadata/entry-self-heal to off and data-self-heal-algorithm to diff.
    3. Create few files inside a directory with some data.
    4. Check arequal of the subvol and all the bricks in the subvol should have the same checksum.
    5. Bring down a brick from the subvol and validate it is offline.
    6. Modify the data of existing files under the directory.
    7. Bring back the brick online and wait for heal to complete.
    8. Check arequal of the subvol and all the bricks in the same subvol should have the same checksum.
    Change-Id: I568a932c6e1db4a9084c01556c5fcca7c8e24a49
    Signed-off-by: karthik-us <ksubrahm@redhat.com>
* [Lib] Add get_usable_size_per_disk() to library  kshithijiyer  2020-10-29  2  -5/+24
    Changes done in this patch:
    1. Adding get_usable_size_per_disk() to lib_utils.py.
    2. Removing the redundant code from dht/test_rename_with_brick_min_free_limit_crossed.py.
    Change-Id: I80c1d6124b7f0ce562d8608565f7c46fd8612d0d
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Add test to copy huge file with remove-brick  kshithijiyer  2020-10-29  1  -0/+111
    Test case:
    1. Create a volume, start it and mount it.
    2. Create files and dirs on the mount point.
    3. Start remove-brick and copy a huge file when remove-brick is in progress.
    4. Commit remove-brick and check the checksum of the original and copied file.
    Change-Id: I487ca05114c1f36db666088f06cf5512671ee7d7
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Validate creation of different file types  sayaleeraut  2020-10-28  1  -0/+494
    This test script covers the below scenarios:
    1) Creation of various file types - regular, block, character and pipe file
    2) Hard link create, validate
    3) Symbolic link create, validate
    Issue: Fails on CI due to https://github.com/gluster/glusterfs/issues/1461
    Change-Id: If50b8d697115ae7c23b4d30e0f8946e9fe705ece
    Signed-off-by: sayaleeraut <saraut@redhat.com>
* [Test] Self-heal, add-brick on replicated volume types  Bala Konda Reddy M  2020-10-23  1  -0/+199
    1. Create a replicated/distributed-replicate volume and mount it.
    2. Start IO from the clients.
    3. Bring down a brick from the subvol and validate it is offline.
    4. Bring back the brick online and wait for heal to complete.
    5. Once the heal is completed, expand the volume.
    6. Trigger rebalance and wait for rebalance to complete.
    7. Validate IO, no errors during the steps performed from step 2.
    8. Check arequal of the subvol and all the bricks in the same subvol should have the same checksum.
    Note: This test is clearly for replicated volume types.
    Change-Id: I2286e75cbee4f22a0ed14d6c320a4496dc3c3905
    Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
* [Test]: Add tc to check profile simultaneously on 2 different nodes  nik-redhat  2020-10-22  1  -0/+185
    Test Steps:
    1) Create a volume and start it.
    2) Mount volume on client and start IO.
    3) Start profile on the volume.
    4) Create another volume.
    5) Start profile on the volume.
    6) Run volume status in a loop for 100 times on one node.
    7) Run profile info for the new volume on one of the other nodes.
    8) Run profile info for the new volume in a loop for 100 times on the other node.
    Change-Id: I1c32a938bf434a88aca033c54618dca88623b9d1
    Signed-off-by: nik-redhat <nladha@redhat.com>
* [Test] Add TC to check glusterd config file  Milind  2020-10-22  1  -0/+29
    1. Check the location of the glusterd socket file (glusterd.socket): ls /var/run/ | grep -i glusterd.socket
    2. systemctl is-enabled glusterd -> enabled
    Change-Id: I6557c27ffb7e91482043741eeac0294e171a0925
    Signed-off-by: Milind <mwaykole@redhat.com>
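The two checks above translated into a short script; the socket path and unit name come from the commit message itself:

    # Hedged sketch: glusterd.socket should exist under /var/run and the
    # glusterd service should be enabled.
    import os
    import subprocess

    assert os.path.exists("/var/run/glusterd.socket"), "glusterd.socket missing"
    state = subprocess.run(["systemctl", "is-enabled", "glusterd"],
                           capture_output=True, text=True).stdout.strip()
    assert state == "enabled", "glusterd is not enabled: %s" % state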
* [Test] Add 2 memory leak tests and fix library issues  kshithijiyer  2020-10-21  5  -54/+337
    Scenarios added:
    ----------------
    Test case:
    1. Create a volume, start it and mount it.
    2. Start I/O from mount point.
    3. Check if there are any memory leaks and OOM killers.
    Test case:
    1. Create a volume, start it and mount it.
    2. Set features.cache-invalidation to ON.
    3. Start I/O from mount point.
    4. Run gluster volume heal command in a loop.
    5. Check if there are any memory leaks and OOM killers on servers.
    Design change:
    --------------
    - self.id() is moved into the test class as it was hitting bound errors in the original logic.
    - Logic changed for checking leaks on fuse.
    - Fixed breakage in methods wherever needed.
    Change-Id: Icb600d833d0c08636b6002abb489342ea1f946d7
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Default volume behavior and quorum options  srijan-sivakumar  2020-10-20  1  -0/+129
    Steps-
    1. Create and start volume.
    2. Check that the quorum options aren't coming up in the vol info.
    3. Kill two glusterd processes.
    4. There shouldn't be any effect on the glusterfsd processes.
    Change-Id: I40e6ab5081e723ae41417f1e5a6ece13c65046b3
    Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
* [Test] Add test that gluster does not release posix lock with multiple clients  Milind  2020-10-19  1  -0/+91
    Steps:
    1. Create all types of volumes.
    2. Mount the volume on two clients.
    3. Prepare the same script to do flock on the two nodes; while running this script it should not hang.
    4. Wait till 300 iterations complete on both the nodes.
    Change-Id: I53e5c8b3b924ac502e876fb41dee34e9b5a74ff7
    Signed-off-by: Milind <mwaykole@redhat.com>
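A hedged sketch of the flock script run from each client mount; the lock file path is an assumption and the 300 iterations follow the steps above:

    # Hedged sketch: run the same loop from two client mounts; with POSIX
    # locks handled correctly neither run should hang.
    import fcntl
    import time

    LOCKFILE = "/mnt/testvol/lockfile"  # assumed shared file on the mount

    for _ in range(300):  # 300 iterations as in the test steps
        with open(LOCKFILE, "w") as f:
            fcntl.flock(f, fcntl.LOCK_EX)  # blocks while the other client holds it
            time.sleep(0.1)                # hold the lock briefly
            fcntl.flock(f, fcntl.LOCK_UN)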
* [Test] Test that eager lock reduces the number of locks during write.  Sheetal  2020-10-19  1  -0/+161
    Steps-
    1. Create a disperse volume and start it.
    2. Set the eager lock option.
    3. Mount the volume and create a file.
    4. Check the profile info of the volume for the inodelk count.
    5. Check xattrs of the file for the dirty bit.
    6. Reset the eager lock option and check the attributes again.
    Change-Id: I0ef1a0e89c1bc202e5df4022c6d98ad0de0c1a68
    Signed-off-by: Sheetal <spamecha@redhat.com>
* [TestFix] Changing the assert statement  Milind  2020-10-19  1  -1/+5
    Changed from
    `self.validate_vol_option('storage.reserve', '1 (DEFAULT)')`
    to
    `self.validate_vol_option('storage.reserve', '1')`
    Change-Id: If75820b4ab3c3b04454e232ea1eccc4ee5f7be0b
    Signed-off-by: Milind <mwaykole@redhat.com>
* [Test] Test mountpoint ownership post volume restart.  srijan-sivakumar  2020-10-19  1  -0/+109
    Steps-
    1. Create a volume and mount it.
    2. Set ownership permissions on the mountpoint and validate it.
    3. Restart the volume.
    4. Validate the permissions set on the mountpoint.
    Change-Id: I1bd3f0b5181bc93a7afd8e77ab5244224f2f4fed
    Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
* [Test] Add test to check glusterd crash when firewall ports are not opened  Pranav  2020-10-12  1  -0/+140
    Add test to verify whether a glusterd crash is found while performing a peer probe with firewall services removed.
    Change-Id: If68c3da2ec90135a480a3cb1ffc85a6b46b1f3ef
    Signed-off-by: Pranav <prprakas@redhat.com>
* [Test]: Add tc to check volume status with brick removal  nik-redhat  2020-10-12  1  -12/+69
    Steps:
    1. Create a volume and start it.
    2. Fetch the brick list.
    3. Bring any one brick down by unmounting the brick.
    4. Force start the volume and check that not all the bricks are online.
    5. Remount the removed brick and bring the brick back online.
    6. Force start the volume and check if all the bricks are online.
    Change-Id: I464d3fe451cb7c99e5f21835f3f44f0ea112d7d2
    Signed-off-by: nik-redhat <nladha@redhat.com>
* [Test] Add test to fill brick and perform rename  kshithijiyer  2020-10-12  1  -0/+85
    Test case:
    1. Create a volume, start it and mount it.
    2. Calculate the usable size and fill till it reaches the min free limit.
    3. Rename the file.
    4. Try to perform I/O from the mount point. (This should fail.)
    Change-Id: Iaee9944b6ba676157ee2453d734a4335aac27811
    Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
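A rough sketch of step 2, sizing the fill file from statvfs; the mount path and the min-free-disk percentage are assumptions for illustration:

    # Hedged sketch: write one large file that consumes the usable space
    # above an assumed 10% min-free-disk limit.
    import os

    MOUNTPOINT = "/mnt/testvol"  # assumed fuse mount path
    MIN_FREE = 0.10              # assumed min-free-disk of 10%

    st = os.statvfs(MOUNTPOINT)
    usable = int(st.f_bavail * st.f_frsize - st.f_blocks * st.f_frsize * MIN_FREE)
    chunk = b"\0" * (1024 * 1024)
    remaining = max(usable, 0)
    with open(os.path.join(MOUNTPOINT, "fill.file"), "wb") as f:
        while remaining > 0:
            f.write(chunk[:remaining] if remaining < len(chunk) else chunk)
            remaining -= len(chunk)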
* [Test] Rebalance preserves / and user subdirs permissions  Tamar Shacked  2020-10-12  1  -0/+192
    Test case:
    1. Create a volume, start it and mount it on the client.
    2. Set full permission on the mount point.
    3. Add a new user to the client.
    4. As the new user, create dirs/files.
    5. Compute arequal checksum and verify permission on / and subdir.
    6. Add brick into the volume and start rebalance.
    7. After rebalance is completed:
       7.1 Check arequal checksum.
       7.2 Verify no change in permission on / and sub dir.
       7.3 As the new user, create and delete file/dir.
    Change-Id: Iacd829c0714c28e231c9fc52df6526200cb53041
    Signed-off-by: Tamar Shacked <tshacked@redhat.com>
* [TestFix]: Add tc to check volume status with bricks absent  nik-redhat  2020-10-09  1  -45/+30
    Fix: Added more volume types to perform tests and optimized the code for a better flow.
    Change-Id: I8249763161f30109d068da401504e0a24cde4d78
    Signed-off-by: nik-redhat <nladha@redhat.com>
* [TestFix] Add check to verify glusterd Error  Pranav  2020-10-07  1  -0/+27
    Adding check to verify gluster volume status doesn't cause any error msg in glusterd logs.
    Change-Id: I5666aa7fb7932a7b61a56afa7d60341ef66a978e
    Signed-off-by: Pranav <prprakas@redhat.com>
* [TestFix] Add check vol size after bringing min brick down  Pranav  2020-10-07  1  -37/+67
    Added check to verify the behavior after bringing down the smallest brick. The available volume size should not be greater than the initial vol size.
    Test skipped due to bug: https://bugzilla.redhat.com/show_bug.cgi?id=1883429
    Change-Id: I00c0310210f6fe218cedd23e055dfaec3632ec8d
    Signed-off-by: Pranav <prprakas@redhat.com>