path: root/tests/functional/dht
Commit message | Author | Age | Files | Lines
* [Test] Check performance of ls on distributed volumes | kshithijiyer | 2020-12-03 | 1 | -0/+105

Test case:
1. Create a volume of type distributed-replicated, distributed-arbiter or distributed-dispersed and start it.
2. Mount the volume on the clients and create 2000 directories with 10 files inside each directory.
3. Wait for I/O to complete on the mount point and perform ls (ls should complete within 10 seconds; see the timing sketch below).

Change-Id: I5c08c185f409b23bd71de875ad1d0236288b0dcc
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
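A minimal sketch of the step-3 timing check, assuming a hypothetical mount point /mnt/testvol; it simply times a plain ls of the mount and asserts the 10-second bound:

```python
import subprocess
import time

MOUNT = "/mnt/testvol"  # hypothetical mount point

start = time.time()
# Listing the 2000 top-level directories is the operation the test bounds.
subprocess.run(["ls", MOUNT], check=True, stdout=subprocess.DEVNULL)
elapsed = time.time() - start
assert elapsed < 10, "ls took %.2fs, expected < 10s" % elapsed
```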
* [Test] Rebalance with add brick and lookup on mounts | srijan-sivakumar | 2020-11-18 | 1 | -0/+113

Steps:
1. Create a distributed-replicated volume, start and mount it.
2. Create deep dirs (200) and create some 100 files in the deepest directory.
3. Expand the volume.
4. Start rebalance.
5. Once rebalance is completed, do a lookup on the mount and log the time taken.

Change-Id: I3a55d2670cc6bda7670f97f0cd6208dc9e36a5d6
Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
* [Test] Add test to add brick, replace brick and fix layout | kshithijiyer | 2020-11-12 | 1 | -0/+124

Test case:
1. Create a volume, start it and mount it.
2. Create files and dirs on the mount point.
3. Add bricks to the volume.
4. Replace 2 old bricks of the volume.
5. Trigger a rebalance fix-layout and wait for it to complete.
6. Check the layout on all the bricks through trusted.glusterfs.dht (steps 5 and 6 are sketched below).

Change-Id: Ibc8ded6ce2a54b9e4ec8bf0dc82436fcbcc25f56
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
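Steps 5 and 6 map onto the gluster CLI and getfattr; a sketch assuming a hypothetical volume name testvol and a hypothetical brick directory /bricks/brick0/dir1:

```python
import subprocess

VOLNAME = "testvol"                # hypothetical volume name
BRICK_DIR = "/bricks/brick0/dir1"  # hypothetical directory on a brick

# Step 5: fix-layout recomputes directory layouts without migrating data.
subprocess.run(["gluster", "volume", "rebalance", VOLNAME,
                "fix-layout", "start"], check=True)

# Step 6: read the DHT layout xattr straight off the brick; each brick
# should hold a non-overlapping hash range for the directory.
out = subprocess.run(["getfattr", "-n", "trusted.glusterfs.dht", "-e", "hex",
                      BRICK_DIR], check=True, capture_output=True, text=True)
print(out.stdout)
```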
* [Test] Add test with filled bricks + add brick + rebalance | kshithijiyer | 2020-11-12 | 1 | -0/+120

Test case:
1. Create a volume, start it and mount it.
2. Create a data set on the client node such that all the available space is used and a "No space left on device" error is generated.
3. Set cluster.min-free-disk to 30% (see the sketch below).
4. Add bricks to the volume, trigger rebalance and wait for rebalance to complete.

Change-Id: I69c9d447b4713b107f15b4801f4371c33f5fb2fc
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
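Step 3 is a single volume-set call; a sketch assuming the hypothetical volume name testvol. Once set, DHT steers new files away from bricks with less than 30% free space by placing linkto files there instead:

```python
import subprocess

VOLNAME = "testvol"  # hypothetical volume name

subprocess.run(["gluster", "volume", "set", VOLNAME,
                "cluster.min-free-disk", "30%"], check=True)
```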
* [Test] Add tests for add brick with hard links and sticky bit | kshithijiyer | 2020-11-12 | 1 | -0/+171

Scenarios:
----------
Test case 1:
1. Create a volume, start it and mount it using fuse.
2. Create 50 files on the mount point and create 50 hardlinks for the files (file preparation sketched below).
3. After the files and hard links creation is complete, add bricks to the volume and trigger rebalance on the volume.
4. Wait for rebalance to complete and check if files are skipped or not.
5. Trigger rebalance on the volume with force and repeat step 4.

Test case 2:
1. Create a volume, start it and mount it using fuse.
2. Create 50 files on the mount point and set the sticky bit on the files.
3. After the file creation and sticky bit addition is complete, add bricks to the volume and trigger rebalance on the volume.
4. Wait for rebalance to complete.
5. Check for data corruption by comparing arequal before and after.

Change-Id: I61bcf14185b0fe31b44e9d2b0a58671f21752633
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
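A sketch of the file preparation, assuming the hypothetical mount point /mnt/testvol. The real cases use separate file sets; both operations are shown on one set here for brevity. Hard-linked and sticky-bit files are exactly what a plain rebalance skips:

```python
import os

MOUNT = "/mnt/testvol"  # hypothetical mount point

for i in range(50):
    path = os.path.join(MOUNT, "file_%d" % i)
    with open(path, "w") as f:
        f.write("data\n")
    # Test case 1: a second hard link makes rebalance skip the file.
    os.link(path, path + "_hardlink")
    # Test case 2: the sticky bit (the leading 1 in 0o1644) does the same.
    os.chmod(path, 0o1644)
```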
* [Test] Add test for full brick + add brick + remove brick | kshithijiyer | 2020-11-12 | 1 | -0/+111

Test case:
1. Create a volume, start it and mount it.
2. Fill a few bricks till min.free.limit is reached.
3. Add a brick to the volume.
4. Set cluster.min-free-disk to 30%.
5. Remove bricks from the volume. (Remove-brick should pass without any errors.)
6. Check for data loss by comparing arequal before and after.

Change-Id: I0033ec47ab2a2958178ce23c9d164939c9bce2f3
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Add test to check kill brick with remove brick running | kshithijiyer | 2020-11-12 | 1 | -0/+128

Test case:
1. Create a volume, start it and mount it.
2. Create some data on the volume.
3. Start remove-brick on the volume.
4. While remove-brick is in progress, kill the brick process of a brick which is being removed.
5. Remove-brick should complete without any failures.

Change-Id: I8b8740d0db82d3345279dee3f0f5f6e17160df47
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Add tests to check remove brick with different options | kshithijiyer | 2020-11-12 | 1 | -0/+113

Test scenarios:
===============
Test case 1:
1. Create a volume, start it and mount it.
2. Create some data on the volume.
3. Run remove-brick start, status and finally commit.
4. Check if there is any data loss or not.

Test case 2:
1. Create a volume, start it and mount it.
2. Create some data on the volume.
3. Run remove-brick with force (both modes are sketched below).
4. Check if the bricks are still seen on the volume or not.

Change-Id: I2cfd324093c0a835811a682accab8fb0a19551cb
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
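The two cases exercise the two remove-brick modes: start/status/commit migrates data off the bricks first, while force drops them immediately. A sketch with hypothetical volume and brick names:

```python
import subprocess

VOLNAME = "testvol"               # hypothetical volume name
BRICK = "server1:/bricks/brick3"  # hypothetical brick

def remove_brick(action):
    # --mode=script suppresses the CLI's interactive confirmation prompt.
    subprocess.run(["gluster", "--mode=script", "volume", "remove-brick",
                    VOLNAME, BRICK, action], check=True)

# Test case 1: graceful removal; data is migrated off the brick first.
remove_brick("start")
remove_brick("status")   # a real test polls this until "completed"
remove_brick("commit")

# Test case 2: forced removal skips migration, so files living only on
# the removed brick vanish from the volume namespace.
# remove_brick("force")
```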
* [Test] Add test to add & remove bricks with lookups & I/O running | kshithijiyer | 2020-11-12 | 1 | -0/+162

Test case:
1. Enable brickmux on the cluster, create a volume, start it and mount it.
2. Start the below I/O from 4 clients:
   From client-1: run a script to create folders and files continuously
   From client-2: start linux kernel untar
   From client-3: while true;do find;done
   From client-4: while true;do ls -lRt;done
3. Kill the brick process on one of the nodes.
4. Add a brick to the volume.
5. Remove bricks from the volume.
6. Validate if I/O was successful or not.

Skip reason: Test case skipped due to bug 1571317.

Change-Id: I48bdb433230c0b13b0738bbebb5bb71a95357f57
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Add test to check remove brick with open fd | kshithijiyer | 2020-11-11 | 1 | -0/+107

Test case:
1. Create a volume, start it and mount it.
2. Open a file named datafile on the mount point and start copying /etc/passwd into it line by line (make sure that the copy is slow; see the sketch below).
3. Start remove-brick of the subvol to which datafile is hashed.
4. Once remove-brick is complete, compare the checksums of /etc/passwd and datafile.

Change-Id: I278e819731af03094dcee93963ec1da115297bef
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
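A sketch of the slow open-fd copy in step 2, assuming the hypothetical mount point /mnt/testvol; the sleep keeps the fd open long enough for remove-brick to migrate the file underneath it:

```python
import time

MOUNT = "/mnt/testvol"  # hypothetical mount point

# Keep one fd open for the whole copy so the file migrates while open.
with open("/etc/passwd") as src, open(MOUNT + "/datafile", "w") as dst:
    for line in src:
        dst.write(line)
        dst.flush()
        time.sleep(0.5)  # deliberately slow, so remove-brick overlaps it
```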
* [Test] Add test to add brick rebal with one brick full | kshithijiyer | 2020-11-09 | 1 | -0/+139

Test case:
1. Create a pure distribute volume with 3 bricks.
2. Start it and mount it on the client.
3. Fill one disk of the volume till it's full.
4. Add a brick to the volume, start rebalance and wait for it to complete.
5. Check that the arequal checksums before and after add-brick are the same.
6. Check if link files are present on the bricks or not.

Change-Id: I4645a3eea33fefe78d48805a3794556b81b189bc
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Lib] Add get_usable_size_per_disk() to library | kshithijiyer | 2020-10-29 | 1 | -5/+2

Changes done in this patch:
1. Adding get_usable_size_per_disk() to lib_utils.py.
2. Removing the redundant code from dht/test_rename_with_brick_min_free_limit_crossed.py.

Change-Id: I80c1d6124b7f0ce562d8608565f7c46fd8612d0d
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Add test to copy huge file with remove-brick | kshithijiyer | 2020-10-29 | 1 | -0/+111

Test case:
1. Create a volume, start it and mount it.
2. Create files and dirs on the mount point.
3. Start remove-brick and copy a huge file while remove-brick is in progress.
4. Commit remove-brick and compare the checksums of the original and copied file.

Change-Id: I487ca05114c1f36db666088f06cf5512671ee7d7
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Validate creation of different file types | sayaleeraut | 2020-10-28 | 1 | -0/+494

This test script covers the below scenarios:
1) Creation of various file types - regular, block, character and pipe file
2) Hard link create, validate
3) Symbolic link create, validate

Issue: Fails on CI due to https://github.com/gluster/glusterfs/issues/1461

Change-Id: If50b8d697115ae7c23b4d30e0f8946e9fe705ece
Signed-off-by: sayaleeraut <saraut@redhat.com>
* [Test] Add test to fill brick and perform rename | kshithijiyer | 2020-10-12 | 1 | -0/+85

Test case:
1. Create a volume, start it and mount it.
2. Calculate the usable size and fill the volume till it reaches the min-free limit.
3. Rename the file.
4. Try to perform I/O from the mount point. (This should fail.)

Change-Id: Iaee9944b6ba676157ee2453d734a4335aac27811
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Rebalance preserves / and user subdirs permissions | Tamar Shacked | 2020-10-12 | 1 | -0/+192

Test case:
1. Create a volume, start it and mount it on the client.
2. Set full permission on the mount point.
3. Add a new user to the client.
4. As the new user, create dirs/files.
5. Compute the arequal checksum and verify permissions on / and the subdirs.
6. Add a brick into the volume and start rebalance.
7. After rebalance is completed:
   7.1 check the arequal checksum
   7.2 verify no change in permissions on / and the subdirs
   7.3 as the new user, create and delete files/dirs

Change-Id: Iacd829c0714c28e231c9fc52df6526200cb53041
Signed-off-by: Tamar Shacked <tshacked@redhat.com>
* [Test] Add tests to check rebalance of files with holes | kshithijiyer | 2020-09-30 | 1 | -0/+128

Scenarios:
---------
Test case 1:
1. Create a volume, start it and mount it using fuse.
2. On the volume root, create files with holes (hole creation sketched below).
3. After the file creation is complete, add bricks to the volume.
4. Trigger rebalance on the volume.
5. Wait for rebalance to complete.

Test case 2:
1. Create a volume, start it and mount it using fuse.
2. On the volume root, create files with holes.
3. After the file creation is complete, remove-brick from the volume.
4. Wait for remove-brick to complete.

Change-Id: Icf512685ed8d9ceeb467fb694d3207797aa34e4c
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
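A sketch of step 2, creating a file with a hole, assuming the hypothetical mount point /mnt/testvol; seeking past EOF before writing leaves an unallocated region that rebalance must carry over:

```python
import os

MOUNT = "/mnt/testvol"  # hypothetical mount point

path = os.path.join(MOUNT, "holey_file")
with open(path, "wb") as f:
    f.write(b"head")
    f.seek(10 * 1024 * 1024)  # 10 MB hole: no blocks allocated in between
    f.write(b"tail")

# For a sparse file, st_blocks * 512 is far smaller than st_size.
st = os.stat(path)
print(st.st_size, st.st_blocks * 512)
```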
* [Test] Validate copy of file | sayaleeraut | 2020-09-29 | 1 | -0/+336

This test script covers the following scenarios:
1) Sub-volume is down, copy file where source and destination files are on the up sub-volume
2) Sub-volume is down, copy file where source - hashed down, cached up, destination - hashed down
3) Sub-volume is down, copy file where source - hashed down, cached up, destination hashed to up
4) Sub-volume is down, copy file where source and destination files are hashing to the down sub-volume
5) Sub-volume is down, copy file where the source file is stored on the down sub-volume and the destination file is stored on the up sub-volume
6) Sub-volume is down, copy file where the source file is stored on the up sub-volume and the destination file is stored on the down sub-volume

Change-Id: I2765857950723aa8907456364aee9159f9a529ed
Signed-off-by: sayaleeraut <saraut@redhat.com>
* [Test] Add test to check invalid mem read after freed | kshithijiyer | 2020-09-25 | 1 | -0/+102

Test case:
1. Create a volume and start it.
2. Mount the volume using FUSE.
3. Create multiple levels of dirs and files inside every dir.
4. Rename files such that linkto files are created.
5. From the mount point do an rm -rf * and check if all files are deleted or not from the mount point as well as the backend bricks.

Change-Id: I658f67832715dde7260827cc0a27b005b6df5fe3
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Add test to check for data loss with readdirp off | kshithijiyer | 2020-09-24 | 1 | -0/+103

Test case:
1. Create a 2 x (4+2) disperse volume and start it.
2. Disable performance.force-readdirp and dht.force-readdirp.
3. Mount the volume on one client and create 8 directories.
4. Do a lookup on the mount using the same mount point; the number of directories should be 8.
5. Mount the volume again on a different client and check if the number of directories is the same or not.

Change-Id: Id94db2bc9200ab2ce4ca2fb604f38ca4525e6ed1
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Add test rm -rf * with self pointing linkto files | kshithijiyer | 2020-09-24 | 1 | -0/+140

Test case:
1. Create a pure distribute volume with 2 bricks, start and mount it.
2. Create dir dir0/dir1/dir2, inside which create 1000 files and rename all the files.
3. Start a remove-brick operation on the volume.
4. Check remove-brick status till the status is completed.
5. When remove-brick status is completed, stop it.
6. Go to the brick used for remove-brick and perform a lookup on the files.
7. Change the linkto xattr value for every file in the brick used for remove-brick to point to itself (see the sketch below).
8. Perform rm -rf * from the mount point.

Change-Id: Ic4a5e0ff93485c9c7d9a768093a52e1d34b78bdf
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
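Step 7 can be done with setfattr directly on the brick. A sketch with hypothetical paths; the value format <volname>-client-<N> is an assumption about how the subvolume holding this brick is named in this volume's graph:

```python
import subprocess

BRICK_FILE = "/bricks/brick1/dir0/dir1/dir2/file1"  # hypothetical path
SELF_SUBVOL = "testvol-client-1"  # hypothetical: the subvol of this brick

# trusted.glusterfs.dht.linkto normally names the subvol holding the real
# data; making it point back at the file's own subvol forms the loop
# that rm -rf must survive.
subprocess.run(["setfattr", "-n", "trusted.glusterfs.dht.linkto",
                "-v", SELF_SUBVOL, BRICK_FILE], check=True)
```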
* [Test] Add test to create and delete sparse files | kshithijiyer | 2020-09-24 | 1 | -0/+156

Test case:
1. Create a volume with 5 sub-volumes, start and mount it.
2. Check df -h for the available size.
3. Create 2 sparse files, one from /dev/null and one from /dev/zero.
4. Find out the size of the files and compare them through du and ls. (They shouldn't match.)
5. Check df -h for the available size. (It should be less than in step 2.)
6. Remove the files using rm -rf.

CentOS-CI failure analysis:
The testcase fails on CentOS-CI on distributed-disperse volumes as it requires 30 bricks, which aren't available on CentOS-CI.

Change-Id: Ie53b2531cf6105117625889d21c6e27ad2c10667
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Add test to nuke happy path | kshithijiyer | 2020-09-22 | 1 | -0/+95

Test case:
1. Create a distributed volume, start and mount it.
2. Create 1000 dirs and 1000 files under a directory, say 'dir1'.
3. Set the xattr glusterfs.dht.nuke to "test" for dir1 (see the sketch below).
4. Validate that dir1 is not seen from the mount point.
5. Validate that the entry is moved to '/brickpath/.glusterfs/landfill' and deleted eventually.

Change-Id: I6359ee3c39df4e9e024a1536c95d861966f78ce5
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
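Step 3 is a single setxattr from the mount point; a sketch assuming the hypothetical mount point /mnt/testvol. glusterfs.dht.nuke is a virtual xattr that asks the brick to move the directory into .glusterfs/landfill for background deletion:

```python
import subprocess

DIR1 = "/mnt/testvol/dir1"  # hypothetical mount-point directory

# Setting the virtual xattr triggers the nuke; the value is arbitrary.
subprocess.run(["setfattr", "-n", "glusterfs.dht.nuke",
                "-v", "test", DIR1], check=True)
```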
* [Test] Add test to check directory permissions wipe out | kshithijiyer | 2020-09-22 | 1 | -0/+132

Test case:
1. Create a 1 brick pure distributed volume.
2. Start the volume and mount it on a client node using FUSE.
3. Create a directory on the mount point.
4. Check the trusted.glusterfs.dht xattr on the backend brick.
5. Add a brick to the volume using force.
6. Do a lookup from the mount point.
7. Check the directory permissions from the backend bricks.
8. Check the trusted.glusterfs.dht xattr on the backend bricks.
9. From the mount point, cd into the directory.
10. Check the directory permissions from the backend bricks.
11. Check the trusted.glusterfs.dht xattr on the backend bricks.

Change-Id: I1ba2c07560bf4bdbf7de5d3831e5de71173b64a2
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Rebalance with quota on mountpoint | srijan-sivakumar | 2020-09-21 | 1 | -0/+188

Steps:
1. Create a volume of type distribute.
2. Set a quota limit on the root directory (see the sketch below).
3. Do some IO to reach the hard limit.
4. After IO ends, compute the arequal checksum.
5. Add bricks to the volume.
6. Start rebalance.
7. After rebalance is completed, check the arequal checksum.

Change-Id: I1cffafbe90dd30013e615c353d6fd7daa5990a86
Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
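Step 2's hard limit maps to the quota CLI; a sketch assuming the hypothetical volume name testvol and an illustrative 1GB limit (the limit the test actually uses is not stated here):

```python
import subprocess

VOLNAME = "testvol"  # hypothetical volume name

subprocess.run(["gluster", "volume", "quota", VOLNAME, "enable"],
               check=True)
# Hard limit on the root directory; "/" is relative to the volume,
# not to the client's filesystem.
subprocess.run(["gluster", "volume", "quota", VOLNAME,
                "limit-usage", "/", "1GB"], check=True)
```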
* [Test] Rebalance with special files | srijan-sivakumar | 2020-09-21 | 1 | -0/+158

Steps:
1. Create and start a volume.
2. Create some special files on the mount point.
3. Once it is complete, start some IO.
4. Add a brick into the volume and start rebalance.
5. All IO should be successful.

Failing on centos-ci due to: https://github.com/gluster/glusterfs/issues/1461

Change-Id: If91886afb3f44d5ede09dfc84e966f66c89ff709
Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
* [Test] Rebalance with quota on subdirectory | srijan-sivakumar | 2020-09-18 | 1 | -0/+195

Steps:
1. Create a volume of type distribute.
2. Set a quota limit on a subdirectory.
3. Do some IO to reach the hard limit.
4. After IO ends, compute the arequal checksum.
5. Add bricks to the volume.
6. Start rebalance.
7. After rebalance is completed, check the arequal checksum.

Change-Id: I0a431ffb5d1c957e8d11817dd8142d9551323a65
Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
* [Test] Rename Files after Rebalance | srijan-sivakumar | 2020-09-18 | 1 | -0/+181

Steps:
1. Create a volume.
2. Create directories or files.
3. Calculate the checksum using arequal.
4. Add a brick and start rebalance.
5. While rebalance is running, rename the files or directories.
6. After rebalance is completed, calculate the checksum.
7. Compare the checksums.

Change-Id: I59f80b06a23f6b4c406907673d71b254d054461d
Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
* [Test] Check heal of custom xattr on directory | sayaleeraut | 2020-09-18 | 1 | -0/+332

This test script covers the below scenarios:
1) Sub-volume is down - Directory - verify extended attribute creation, display, modification and removal.
2) Directory self-heal of the extended custom attribute when the sub-volume is up again.
3) Sub-volume is down - create a new directory - verify extended attribute creation, display, modification and removal.
4) New directory self-heal of the extended custom attribute when the sub-volume is up again.

Change-Id: I35f8772d7758c2e9c02558b46301681d6c0f319b
Signed-off-by: sayaleeraut <saraut@redhat.com>
* [Test] Brick removal with Quota in Distribute volume | srijan-sivakumar | 2020-09-18 | 1 | -0/+160

Steps:
1. Create a distribute volume.
2. Set a quota limit on a directory on the mount.
3. Do IO to reach the hard limit on the directory.
4. After IO is completed, remove a brick.
5. Check if quota is validated, i.e. hard-limit exceeded is still true after the rebalance.

Change-Id: I8408cc31f70019c799df91e1c3faa7dc82ee5519
Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
* [Test] Rebalance with brick down in replica | srijan-sivakumar | 2020-09-18 | 1 | -0/+171

Steps:
1. Create a replica volume.
2. Bring down one of the bricks in the replica pair.
3. Do some IO and create files on the mount point.
4. Add a pair of bricks to the volume.
5. Initiate rebalance.
6. Bring back the brick which was down.
7. After self-heal happens, all the files should be present.

Change-Id: I78a42866d585b00c40a2712c4ae8f2ab3552adca
Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
* [Testfix] Increase timeouts and fix I/O errors | kshithijiyer | 2020-09-14 | 5 | -6/+11

Problem:
--------
Problem 1:
In the latest runs the following testcases fail with a wait timeout, mostly on rebalance, with the exception of test_stack_overflow which fails on layout:
1. functional.dht.test_stack_overflow.TestStackOverflow_cplex_dispersed_glusterfs.test_stack_overflow
2. functional.dht.test_rebalance_dir_file_from_multiple_clients.RebalanceValidation_cplex_dispersed_glusterfs.test_expanding_volume_when_io_in_progress
3. functional.dht.test_restart_glusterd_after_rebalance.RebalanceValidation_cplex_dispersed_glusterfs.test_restart_glusterd_after_rebalance
4. functional.dht.test_stop_glusterd_while_rebalance_in_progress.RebalanceValidation_cplex_dispersed_glusterfs.test_stop_glusterd_while_rebalance_in_progress
5. functional.dht.test_rebalance_with_hidden_files.RebalanceValidation_cplex_dispersed_glusterfs.test_rebalance_with_hidden_files

This is mostly observed on disperse volumes, which is expected as in most cases disperse volumes take more time than pure replicated or distributed volumes due to their design.

Problem 2:
Another issue observed was test_rebalance_with_hidden_files failing on I/O with the distributed volume type, with the below stack trace:

    Traceback (most recent call last):
      File "/usr/share/glustolibs/io/scripts/file_dir_ops.py", line 1246, in <module>
        rc = args.func(args)
      File "/usr/share/glustolibs/io/scripts/file_dir_ops.py", line 374, in create_files
        base_file_name, file_types)
      File "/usr/share/glustolibs/io/scripts/file_dir_ops.py", line 341, in _create_files
        ret = pool.map(lambda file_tuple: _create_file(*file_tuple), files)
      File "/usr/lib64/python2.7/multiprocessing/pool.py", line 250, in map
        return self.map_async(func, iterable, chunksize).get()
      File "/usr/lib64/python2.7/multiprocessing/pool.py", line 554, in get
        raise self._value
    IOError: [Errno 17] File exists: '/mnt/testvol_distributed_glusterfs/.1.txt'

Solution:
--------
Problem 1: Increase or add timeouts so that wait timeouts are not observed.
Problem 2: Add counter logic to fix the I/O failure.

Change-Id: I917137abdeb2e3844ee666258235f6ccc854ee9f
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Validate copy of directory | sayaleeraut | 2020-09-11 | 1 | -0/+308

This test script verifies the below scenarios:
1) Sub-volume is down, copy directory
2) Sub-volume is down, copy directory - destination dir hashes to an up sub-volume
3) Sub-volume is down, copy a newly created directory - destination dir hashes to an up sub-volume
4) Sub-volume is down, copy a newly created directory - destination dir hashes to the down sub-volume

Change-Id: I22b9bf79ef4775b1128477fb858c509a719efb4a
Signed-off-by: sayaleeraut <saraut@redhat.com>
* [Test] Add test to add brick with IO & rsync running | kshithijiyer | 2020-09-07 | 1 | -0/+151

Test case:
1. Create, start and mount a volume.
2. Create a directory on the mount point and start a linux kernel untar.
3. Create another directory on the mount point and start an rsync of the linux untar directory.
4. Add bricks to the volume.
5. Trigger rebalance on the volume.
6. Wait for rebalance to complete on the volume.
7. Wait for I/O to complete.
8. Validate that the checksums of the untar and the rsync copy are the same.

Change-Id: I008c65b1783d581129b4c35f3ff90642fffe29d8
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Testfix] Fix hostname issue with special file cases | kshithijiyer | 2020-09-04 | 1 | -3/+3

Problem:
The code fails if we give a hostname in the glusto-tests config file. This is because conversion logic present in the testcase converts IPs to hostnames.

Solution:
Add code to check whether the value is an IP, and only then run the conversion code.

Change-Id: I3bb1a566d469a4c32161c91fa610da378d46e77e
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Add basic tests for different device files | kshithijiyer | 2020-09-02 | 1 | -0/+328

Adding the following testcases for block, character and pipe files:

Test case 1:
1. Create a distributed volume with 5 sub-volumes, start and mount it.
2. Create character and block device files (creation and pathinfo check sketched after this entry).
3. Check the filetype of the files from the mount point.
4. Verify that the files are stored only on the bricks mentioned in the trusted.glusterfs.pathinfo xattr.
5. Verify the stat output from the mount point and the bricks.

Test case 2:
1. Create a distributed volume with 5 sub-volumes, start and mount it.
2. Create character and block device files.
3. Check the filetype of the files from the mount point.
4. Verify that the files are stored on only one brick, which is mentioned in the trusted.glusterfs.pathinfo xattr.
5. Delete the files.
6. Verify if the files are deleted from all the bricks.

Test case 3:
1. Create a distributed volume with 5 sub-volumes, start and mount it.
2. Create character and block device files.
3. Check the filetype of the files from the mount point.
4. Set a custom xattr for the files.
5. Verify that the xattr for the files is displayed on the mount point and the bricks.
6. Modify the custom xattr value and verify that the xattr for the files is displayed on the mount point and the bricks.
7. Remove the xattr and verify that the custom xattr is not displayed.
8. Verify that the mount point and bricks show the pathinfo xattr properly.

Test case 4:
1. Create a distributed volume with 5 sub-volumes, start and mount it.
2. Create a pipe file.
3. Check the filetype of the file from the mount point.
4. Verify that the file is stored only on the bricks mentioned in the trusted.glusterfs.pathinfo xattr.
5. Verify the stat output from the mount point and the bricks.
6. Write data to the fifo file and read the data from the fifo file from another instance of the same client.

Upstream bug: https://github.com/gluster/glusterfs/issues/1461

Change-Id: I0e72246ba3d6d20a5de95a95d51271337b6b5a57
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
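A sketch of the device/pipe file creation plus the pathinfo check from the test cases above, assuming the hypothetical mount point /mnt/testvol and root privileges (mknod of device files requires them); the major/minor numbers are arbitrary:

```python
import os
import stat
import subprocess

MOUNT = "/mnt/testvol"  # hypothetical mount point

# Character and block device files.
os.mknod(MOUNT + "/char_dev", 0o600 | stat.S_IFCHR, os.makedev(13, 42))
os.mknod(MOUNT + "/block_dev", 0o600 | stat.S_IFBLK, os.makedev(13, 42))
# A named pipe needs no special privileges.
os.mkfifo(MOUNT + "/pipe_file")

# trusted.glusterfs.pathinfo is a virtual xattr reporting which brick(s)
# actually hold the file.
out = subprocess.run(["getfattr", "-n", "trusted.glusterfs.pathinfo",
                      MOUNT + "/char_dev"],
                     check=True, capture_output=True, text=True)
print(out.stdout)
```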
* [Testfix] Fix wrong comparison in test_create_file | kshithijiyer | 2020-08-31 | 1 | -1/+1

Problem:
brickdir.hashrange_contains_hash() returns True or False. However, test_create_file checks whether ret == 1 or not.

Fix:
Change ret == 1 to ret.

Change-Id: I53655794f10fc5d778790bdffbe65563907bef6d
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Libfix] Fix python3 getfattr() issues | kshithijiyer | 2020-08-17 | 2 | -16/+22

Problem:
Patch [1], which was sent for issue #24, causes a large number of testcases to fail or get stuck in the latest DHT run.

Solution:
Make changes so that the getfattr command sends back its output as text wherever needed.

Links:
[1] https://review.gluster.org/#/c/glusto-tests/+/24841/

Change-Id: I6390e38130b0699ceae652dee8c3b2db2ef3f379
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Testfix] Change g.log.error() to self.assertTrue() | kshithijiyer | 2020-08-17 | 1 | -2/+2

Problem:
Testcase test_volume_start_stop_while_rebalance_is_in_progress throws the below traceback when run:
```
Traceback (most recent call last):
  File "/usr/lib64/python2.7/logging/__init__.py", line 851, in emit
    msg = self.format(record)
  File "/usr/lib64/python2.7/logging/__init__.py", line 724, in format
    return fmt.format(record)
  File "/usr/lib64/python2.7/logging/__init__.py", line 464, in format
    record.message = record.getMessage()
  File "/usr/lib64/python2.7/logging/__init__.py", line 328, in getMessage
    msg = msg % self.args
TypeError: not all arguments converted during string formatting
Logged from file test_volume_start_stop_while_rebalance_in_progress.py, line 135
```
This is because g.log.error() was used instead of self.assertTrue().

Solution:
Change to self.assertTrue().

Change-Id: If926eb834c0128a4e507da9fdd805916196432cb
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Testfix] Fix missing param in test_rmdir_subvol_down.py | Pranav | 2020-08-07 | 1 | -1/+2

The assertIsNotNone call is missing its param.

Change-Id: Iddff9b203672b2edf702ada624bfac1892641712
Signed-off-by: Pranav <prprakas@redhat.com>
* [Test] Validate data integrity | sayaleeraut | 2020-08-05 | 1 | -0/+169

Description:
Checks that there is no data loss when a remove-brick operation is stopped and then new bricks are added to the volume.

Steps:
1) Create a volume.
2) Mount the volume using FUSE.
3) Create files and dirs on the mount-point.
4) Calculate the arequal-checksum on the mount-point.
5) Start a remove-brick operation on the volume.
6) While migration is in progress, stop the remove-brick operation.
7) Add bricks to the volume and trigger rebalance.
8) Wait for rebalance to complete.
9) Calculate the arequal-checksum on the mount-point.

Change-Id: I96a7311f5acd0ae19b17d7b7c7da4d3899cdef77
Signed-off-by: sayaleeraut <saraut@redhat.com>
* [Test] Validate readdirp with rebalance | sayaleeraut | 2020-07-27 | 1 | -0/+173

Description:
Check that all directories are read and listed while rebalance is still in progress.

Steps:
1) Create a volume.
2) Mount the volume using FUSE.
3) Create a dir "master" on the mount-point.
4) Create 8000 empty dirs (dir1 to dir8000) inside dir "master".
5) Now inside a few dirs (e.g. dir1 to dir10), create deep dirs and inside every dir, create 50 files.
6) Collect the number of dirs present in /mnt/<volname>/master.
7) Change the rebalance throttle to lazy (see the sketch below).
8) Add bricks to the volume (at least 3 replica sets).
9) Start rebalance using the "force" option on the volume.
10) List the directories in dir "master".

Change-Id: I4d04b3e2be93b5c25b5ed70516bb99d99fb1fb8a
Signed-off-by: sayaleeraut <saraut@redhat.com>
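Step 7's throttle change is one volume-set call; a sketch assuming the hypothetical volume name testvol. The lazy level slows migration down so that the step-10 listing overlaps an in-progress rebalance:

```python
import subprocess

VOLNAME = "testvol"  # hypothetical volume name

# Valid throttle levels are lazy, normal and aggressive; lazy uses fewer
# migration threads, keeping the rebalance running longer.
subprocess.run(["gluster", "volume", "set", VOLNAME,
                "cluster.rebal-throttle", "lazy"], check=True)
```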
* [Test] Validate open file migration | sayaleeraut | 2020-07-14 | 1 | -0/+131

Description:
Checks that files with open fds are migrated successfully.

Steps:
1) Create a volume.
2) Mount the volume using FUSE.
3) Create files on the volume mount.
4) Open fds for the files and keep doing read/write operations on these files.
5) While the fds are open, add bricks to the volume and trigger rebalance.
6) Wait for rebalance to complete.
7) Wait for the writes on the open fds to complete.
8) Check for any data loss during rebalance.
9) Check if rebalance has any failures.

Change-Id: I9345827ae36eb6d2c264d0e0874738211aadc55e
Signed-off-by: sayaleeraut <saraut@redhat.com>
* [TestFix] Fix failures in test_dht_create_dir.py | sayaleeraut | 2020-07-09 | 1 | -80/+44

Added the following changes:
1) The test script consists of 2 test cases, hence changed setUpClass(cls) to setUp(self).
2) Changed the code that checks whether the symlink points to the correct location in test_create_link_for_directory(self), as it was earlier failing with "AssertionError: sym link does not point to correct location" because the output of the 'stat' command for a symlink file varies per platform.

Change-Id: I43f98a0d60b3ebf30236ff7e702667373a39a0e1
Signed-off-by: sayaleeraut <saraut@redhat.com>
* [Test] Add file rename test when dest exists in diff subvol combinations | Pranav | 2020-07-08 | 1 | -0/+919

Tests to validate the behaviour of rename cases when the destination file exists and is hashed or cached to different subvol combinations.

Change-Id: I44752a444d9c112d590efd66c48ff095c22fcecd
Signed-off-by: Pranav <prprakas@redhat.com>
* [Test] Add file rename tests when dest file in src hashed/cached subvol | Pranav | 2020-06-25 | 1 | -0/+639

Tests to validate the behaviour of different file rename scenarios when the destination file initially exists and is hashed to the source file's hashed or cached subvol.

Change-Id: Iec12d33c459cb966861d2efac2bae85103555cc1
Signed-off-by: Pranav <prprakas@redhat.com>
* [TestFix] Change method name | sayaleeraut | 2020-06-24 | 1 | -1/+1

Changing the method name from test_readdirp_with_rebalance(self) to test_access_file_with_stale_linkto_xattr(self).

Change-Id: I5503e301d65f96e38aa135827d8bc698a0371281
Signed-off-by: sayaleeraut <saraut@redhat.com>
* [Test] Check access file with stale linkto xattr | sayaleeraut | 2020-06-24 | 1 | -0/+169

Description:
The test script verifies that a file with a stale linkto xattr can be accessed by a non-root user.

Steps:
1) Create a volume and start it.
2) Mount the volume on a client node using FUSE.
3) Create a file.
4) Enable performance.parallel-readdir and performance.readdir-ahead on the volume.
5) Rename the file in order to create a linkto file.
6) Force the linkto xattr values to become stale by changing the dht subvols in the graph.
7) Log in as a non-root user and access the file.

Change-Id: I4f275dedd47a851c2c4839f51cf1867638a66667
Signed-off-by: sayaleeraut <saraut@redhat.com>
* [TestFix] Remove tier related kwarg from test | Bala Konda Reddy M | 2020-06-24 | 1 | -1/+1

Removing the 'add_to_hot_tier' parameter as it defaults to False and is not needed for the add-brick operation in the test, as the volume type is not tier.

Change-Id: I4a697a453e368197dfaf143d344a623d449e2614
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
* [Test] Validate delete file picked for migration | sayaleeraut | 2020-06-19 | 1 | -0/+165

Description:
The test script verifies that if a file picked for migration is deleted, it is removed successfully.

Steps:
1) First create a big data file of 10GB.
2) Rename that file, such that after the rename a linkto file is created (we do this to make sure that the file is picked for migration).
3) Add bricks to the volume and trigger rebalance using the force option.
4) When the file has been picked for migration, delete that file from the mount point.
5) Check whether the file has been deleted or not on the mount point as well as the backend bricks.

Change-Id: I137512a1d94a89aa811a3a9d61a9fb4002bf26be
Signed-off-by: sayaleeraut <saraut@redhat.com>