path: root/tests
Commit message | Author | Age | Files | Lines
...
* [Test] xml Dump of gluster volume status during rebalance | “Milind | 2020-11-27 | 1 | -0/+185
  1. Create a trusted storage pool by peer probing the node.
  2. Create a distributed-replicated volume.
  3. Start the volume, fuse mount it and start IO.
  4. Create another replicated volume, start it and then stop it.
  5. Start rebalance on the volume.
  6. While rebalance is in progress, stop glusterd on one of the nodes in the trusted storage pool.
  7. Get the status of the volumes with --xml dump.
  Change-Id: I581b7713d7f9bfdd7be00add3244578b84daf94f
  Signed-off-by: “Milind <“mwaykole@redhat.com”>
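The check in step 7 amounts to asking the CLI for an XML dump and making sure it parses while rebalance is running. A minimal standalone sketch (not the glusto-tests helper used by the test), with "testvol" as a placeholder volume name:

```python
# Standalone sketch: fetch the XML status dump of a volume and confirm it
# parses cleanly while a rebalance is in progress.
import subprocess
import xml.etree.ElementTree as ET

def volume_status_xml(volname="testvol"):
    out = subprocess.run(
        ["gluster", "volume", "status", volname, "--xml"],
        capture_output=True, text=True, check=True).stdout
    root = ET.fromstring(out)      # raises ParseError if the dump is truncated
    ret = root.findtext("opRet")   # "0" means the CLI reported success
    return ret == "0", out

if __name__ == "__main__":
    ok, dump = volume_status_xml()
    print("status --xml parsed cleanly:", ok)
```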
* [Test] Add test to verify memory leak with ssl enabled | Pranav | 2020-11-27 | 1 | -0/+128
  This test verifies BZ 1785577
  (https://bugzilla.redhat.com/show_bug.cgi?id=1785577):
  there should be no memory leaks when SSL is enabled.
  Change-Id: I1f44de8c65b322ded76961253b8b7a7147aca76a
  Signed-off-by: Pranav <prprakas@redhat.com>
* [Test] Add tc to check vol set when glusterd is stopped on one node | nik-redhat | 2020-11-24 | 1 | -0/+193
  Test Steps:
  1) Setup and mount a volume on a client.
  2) Stop glusterd on a random server.
  3) Start IO on the mount points.
  4) Set an option on the volume.
  5) Start glusterd on the stopped node.
  6) Verify all the bricks are online after starting glusterd.
  7) Check if the volume info is synced across the cluster.
  Change-Id: Ia2982ce4e26f0d690eb2bc7516d463d2a71cce86
  Signed-off-by: nik-redhat <nladha@redhat.com>
* [Test]: Add tc for default ping-timeout and epoll thread count | nik-redhat | 2020-11-24 | 1 | -0/+87
  Test Steps:
  1. Start glusterd.
  2. Check that the ping-timeout value in glusterd.vol is 0.
  3. Create a test script for the epoll thread count.
  4. Source the test script.
  5. Fetch the pid of glusterd.
  6. Check that the epoll thread count of glusterd is 1.
  Change-Id: Ie3bbcb799eb1776004c3db4922d7ee5f5993b100
  Signed-off-by: nik-redhat <nladha@redhat.com>
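Steps 2 and 6 can be approximated outside the framework as below; the glusterd.vol path is the usual default, and matching thread names on the substring "epoll" is an assumption to verify against the actual thread names of the build under test:

```python
# Hedged sketch: read ping-timeout from glusterd.vol and count glusterd
# threads whose names mention "epoll".
import re
import subprocess
from pathlib import Path

def default_ping_timeout(conf="/etc/glusterfs/glusterd.vol"):
    for line in Path(conf).read_text().splitlines():
        m = re.search(r"ping-timeout\s+(\d+)", line)
        if m:
            return int(m.group(1))
    return None

def glusterd_epoll_threads():
    pid = subprocess.run(["pidof", "glusterd"], capture_output=True,
                         text=True, check=True).stdout.split()[0]
    # Thread names live in /proc/<pid>/task/<tid>/comm; the "epoll" substring
    # match is an assumption about how the event threads are named.
    return sum("epoll" in comm.read_text()
               for comm in Path(f"/proc/{pid}/task").glob("*/comm"))

if __name__ == "__main__":
    print("ping-timeout:", default_ping_timeout())     # expected: 0
    print("epoll threads:", glusterd_epoll_threads())  # expected: 1
```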
* [Test] Rebalance with add brick and lookup on mount | srijan-sivakumar | 2020-11-18 | 1 | -0/+113
  Steps:
  1. Create a distributed-replicated volume, start and mount it.
  2. Create deep dirs (200) and create some 100 files in the deepest directory.
  3. Expand the volume.
  4. Start rebalance.
  5. Once rebalance is completed, do a lookup on the mount and log the time taken.
  Change-Id: I3a55d2670cc6bda7670f97f0cd6208dc9e36a5d6
  Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
* [Test] Pro-active metadata self heal on open fd | Arthy Loganathan | 2020-11-16 | 1 | -0/+244
  Change-Id: I626914130554cccf1008ab43158d7063d131b870
  Signed-off-by: Arthy Loganathan <aloganat@redhat.com>
* [Test] Add test to add brick, replace brick and fix layout | kshithijiyer | 2020-11-12 | 1 | -0/+124
  Test case:
  1. Create a volume, start it and mount it.
  2. Create files and dirs on the mount point.
  3. Add bricks to the volume.
  4. Replace 2 old bricks of the volume.
  5. Trigger rebalance fix-layout and wait for it to complete.
  6. Check the layout on all the bricks through trusted.glusterfs.dht.
  Change-Id: Ibc8ded6ce2a54b9e4ec8bf0dc82436fcbcc25f56
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Add test with filled bricks + add brick + rebalance | kshithijiyer | 2020-11-12 | 1 | -0/+120
  Test case:
  1. Create a volume, start it and mount it.
  2. Create a data set on the client node such that all the available space is
     used and a "No space left on device" error is generated.
  3. Set cluster.min-free-disk to 30%.
  4. Add bricks to the volume, trigger rebalance and wait for rebalance to complete.
  Change-Id: I69c9d447b4713b107f15b4801f4371c33f5fb2fc
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Add tests for add brick with hard links and sticky bit | kshithijiyer | 2020-11-12 | 1 | -0/+171
  Scenarios:
  ----------
  Test case 1:
  1. Create a volume, start it and mount it using fuse.
  2. Create 50 files on the mount point and create 50 hardlinks for the files.
  3. After the file and hard link creation is complete, add bricks to the
     volume and trigger rebalance on the volume.
  4. Wait for rebalance to complete and check if files are skipped or not.
  5. Trigger rebalance on the volume with force and repeat step 4.

  Test case 2:
  1. Create a volume, start it and mount it using fuse.
  2. Create 50 files on the mount point and set the sticky bit on the files.
  3. After the file creation and sticky bit addition is complete, add bricks to
     the volume and trigger rebalance on the volume.
  4. Wait for rebalance to complete.
  5. Check for data corruption by comparing arequal before and after.
  Change-Id: I61bcf14185b0fe31b44e9d2b0a58671f21752633
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Add test for full brick + add brick + remove brick | kshithijiyer | 2020-11-12 | 1 | -0/+111
  Test case:
  1. Create a volume, start it and mount it.
  2. Fill a few bricks till min-free-limit is reached.
  3. Add a brick to the volume.
  4. Set cluster.min-free-disk to 30%.
  5. Remove bricks from the volume (remove-brick should pass without any errors).
  6. Check for data loss by comparing arequal before and after.
  Change-Id: I0033ec47ab2a2958178ce23c9d164939c9bce2f3
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Add test to check kill brick with remove brick running | kshithijiyer | 2020-11-12 | 1 | -0/+128
  Test case:
  1. Create a volume, start it and mount it.
  2. Create some data on the volume.
  3. Start remove-brick on the volume.
  4. While remove-brick is in progress, kill the brick process of a brick which
     is being removed.
  5. Remove-brick should complete without any failures.
  Change-Id: I8b8740d0db82d3345279dee3f0f5f6e17160df47
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Add tests to check remove brick with different options | kshithijiyer | 2020-11-12 | 1 | -0/+113
  Test scenarios:
  ===============
  Test case 1:
  1. Create a volume, start it and mount it.
  2. Create some data on the volume.
  3. Run remove-brick start, status and finally commit.
  4. Check if there is any data loss or not.

  Test case 2:
  1. Create a volume, start it and mount it.
  2. Create some data on the volume.
  3. Run remove-brick with force.
  4. Check if the bricks are still seen on the volume or not.
  Change-Id: I2cfd324093c0a835811a682accab8fb0a19551cb
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Add test to add & remove bricks with lookups & I/O runningkshithijiyer2020-11-121-0/+162
| | | | | | | | | | | | | | | | | | | | Test case: 1. Enable brickmux on cluster, create a volume, start it and mount it. 2. Start the below I/O from 4 clients: From client-1 : run script to create folders and files continuously From client-2 : start linux kernel untar From client-3 : while true;do find;done From client-4 : while true;do ls -lRt;done 3. Kill brick process on one of the nodes. 4. Add brick to the volume. 5. Remove bricks from the volume. 6. Validate if I/O was successful or not. Skip reason: Test case skipped due to bug 1571317. Change-Id: I48bdb433230c0b13b0738bbebb5bb71a95357f57 Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* check that heal info does not hang | Ravishankar N | 2020-11-12 | 1 | -0/+162
  Check that when there are pending heals and healing and I/O are going on,
  heal info completes successfully.
  Change-Id: I7b00c5b6446d6ec722c1c48a50e5293272df0fdf
  Signed-off-by: Ravishankar N <ravishankar@redhat.com>
* [Test] Add test to check remove brick with open fd | kshithijiyer | 2020-11-11 | 1 | -0/+107
  Test case:
  1. Create a volume, start it and mount it.
  2. Open a file "datafile" on the mount point and start copying /etc/passwd
     line by line (make sure that the copy is slow).
  3. Start remove-brick of the subvol to which datafile is hashed.
  4. Once remove-brick is complete, compare the checksums of /etc/passwd and datafile.
  Change-Id: I278e819731af03094dcee93963ec1da115297bef
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [TestFix] Fixing typo glusterd | “Milind | 2020-11-10 | 1 | -1/+1
  Change-Id: I080328dfbcde5652f9ab697f8751b87bf96e8245
  Signed-off-by: “Milind <“mwaykole@redhat.com”>
* [Test] Brick status offline when quorum not met. | srijan-sivakumar | 2020-11-09 | 1 | -0/+125
  Steps:
  1. Create a volume and mount it.
  2. Set the quorum type to 'server'.
  3. Bring some nodes down such that quorum isn't met.
  4. Brick status in the node which is up should be offline.
  5. Restart glusterd in this node.
  6. Brick status in the restarted node should be offline.
  Change-Id: If6885133848d77ec803f059f7a056dc3aeba7eb1
  Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
* [Test] Add test to add brick rebal with one brick full | kshithijiyer | 2020-11-09 | 1 | -0/+139
  Test case:
  1. Create a pure distribute volume with 3 bricks.
  2. Start it and mount it on the client.
  3. Fill one disk of the volume till it's full.
  4. Add a brick to the volume, start rebalance and wait for it to complete.
  5. Arequal checksums taken before and after add-brick should be the same.
  6. Check if link files are present on the bricks or not.
  Change-Id: I4645a3eea33fefe78d48805a3794556b81b189bc
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Add tc to check the default log level of CLI | nik-redhat | 2020-11-09 | 1 | -0/+97
  Test Steps:
  1) Create and start a volume.
  2) Run the volume info command.
  3) Run the volume status command.
  4) Run the volume stop command.
  5) Run the volume start command.
  6) Check the default log level of cli.log.
  Change-Id: I871d83500b2a3876541afa348c49b8ce32169f23
  Signed-off-by: nik-redhat <nladha@redhat.com>
* [Test] Add tc to check updates in 'options' file on quorum changes | nik-redhat | 2020-11-05 | 1 | -0/+94
  Test Steps:
  1. Create and start a volume.
  2. Check the output of the '/var/lib/glusterd/options' file.
  3. Store the value of 'global-option-version'.
  4. Set server-quorum-ratio to 70%.
  5. Check the output of the '/var/lib/glusterd/options' file.
  6. Compare the value of 'global-option-version' and check if the value of
     'server-quorum-ratio' is set to 70%.
  Change-Id: I5af40a1e05eb542e914e5766667c271cbbe126e8
  Signed-off-by: nik-redhat <nladha@redhat.com>
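The comparison in steps 3 and 6 can be sketched as reading the key=value pairs from /var/lib/glusterd/options before and after the quorum change; the key names used here follow the usual layout of that file but should be confirmed on the test node:

```python
# Hedged sketch: parse the glusterd options file and check that the quorum
# change bumped global-option-version and recorded the new ratio.
from pathlib import Path

def read_options(path="/var/lib/glusterd/options"):
    opts = {}
    for line in Path(path).read_text().splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            opts[key.strip()] = value.strip()
    return opts

def quorum_ratio_updated(before, after):
    # global-option-version should change and the new ratio should be stored
    return (after["global-option-version"] != before["global-option-version"]
            and after.get("cluster.server-quorum-ratio", "").startswith("70"))

# Usage: snapshot = read_options(); run the volume-set; compare with read_options().
```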
* [Test] Add tc to validate auth.allow and auth.reject options on volume | nik-redhat | 2020-11-05 | 1 | -0/+162
  Test Steps:
  1. Create and start a volume.
  2. Disable brick multiplex.
  3. Set the auth.allow option on the volume for the client address on which
     the volume is to be mounted.
  4. Mount the volume on the client and then unmount it.
  5. Reset the volume.
  6. Set the auth.reject option on the volume for the client address on which
     the volume is to be mounted.
  7. Mounting the volume should fail.
  8. Reset the volume and mount it on the client.
  9. Repeat steps 2-8 with brick multiplex enabled.
  Change-Id: I26d88a217c03f1b4732e4bdb9b8467a9cd608bae
  Signed-off-by: nik-redhat <nladha@redhat.com>
* [Test] Add test for the posix storage.reserve option | “Milind” | 2020-11-02 | 1 | -0/+79
  1) Create a distributed-replicated volume and start it.
  2) Enable the storage.reserve option on the volume using
     "gluster volume set <volname> storage.reserve <value>"; for example, set it to 50.
  3) Mount the volume on a client.
  4) Check the df -h output of the mount point and the backend bricks.
  Change-Id: I74f891ce5a92e1a4769ec47c64fc5469b6eb9224
  Signed-off-by: “Milind” <mwaykole@redhat.com>
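A rough standalone sketch of steps 2-4, assuming a placeholder volume "testvol" mounted at /mnt/testvol:

```python
# Set storage.reserve on a volume and compare df output before and after.
import subprocess

def set_reserve(volname="testvol", percent=50):
    subprocess.run(["gluster", "volume", "set", volname,
                    "storage.reserve", str(percent)], check=True)

def df_output(path="/mnt/testvol"):
    return subprocess.run(["df", "-h", path], capture_output=True,
                          text=True, check=True).stdout

if __name__ == "__main__":
    before = df_output()
    set_reserve()
    after = df_output()   # available space should shrink by roughly the reserved %
    print(before, after, sep="\n")
```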
* [Test] - Check self heal with data-self-heal-algorithm set to diff | karthik-us | 2020-10-30 | 1 | -0/+162
  Steps:
  1. Create a replicated/distributed-replicate volume and mount it.
  2. Set data/metadata/entry-self-heal to off and data-self-heal-algorithm to diff.
  3. Create a few files inside a directory with some data.
  4. Check the arequal of the subvol; all the bricks in the subvol should have
     the same checksum.
  5. Bring down a brick from the subvol and validate that it is offline.
  6. Modify the data of the existing files under the directory.
  7. Bring the brick back online and wait for heal to complete.
  8. Check the arequal of the subvol; all the bricks in the same subvol should
     have the same checksum.
  Change-Id: I568a932c6e1db4a9084c01556c5fcca7c8e24a49
  Signed-off-by: karthik-us <ksubrahm@redhat.com>
* [Lib] Add get_usable_size_per_disk() to library | kshithijiyer | 2020-10-29 | 1 | -5/+2
  Changes done in this patch:
  1. Adding get_usable_size_per_disk() to lib_utils.py.
  2. Removing the redundant code from
     dht/test_rename_with_brick_min_free_limit_crossed.py.
  Change-Id: I80c1d6124b7f0ce562d8608565f7c46fd8612d0d
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
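For illustration only (this is not the implementation added to lib_utils.py), a helper of this shape could report the usable size of the filesystem backing a brick path:

```python
# Illustrative guess at such a helper, using statvfs on the brick path.
import os

def usable_size_per_disk_gb(brick_path):
    """Return the free space (in GB) of the filesystem under brick_path."""
    st = os.statvfs(brick_path)
    return (st.f_bavail * st.f_frsize) / (1024 ** 3)

# Example: usable_size_per_disk_gb("/bricks/brick1")
```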
* [Test] Add test to copy huge file with remove-brick | kshithijiyer | 2020-10-29 | 1 | -0/+111
  Test case:
  1. Create a volume, start it and mount it.
  2. Create files and dirs on the mount point.
  3. Start remove-brick and copy a huge file while remove-brick is in progress.
  4. Commit remove-brick and compare the checksums of the original and copied file.
  Change-Id: I487ca05114c1f36db666088f06cf5512671ee7d7
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Validate creation of different file types | sayaleeraut | 2020-10-28 | 1 | -0/+494
  This test script covers the below scenarios:
  1) Creation of various file types - regular, block, character and pipe files
  2) Hard link creation and validation
  3) Symbolic link creation and validation

  Issue: Fails on CI due to
  https://github.com/gluster/glusterfs/issues/1461
  Change-Id: If50b8d697115ae7c23b4d30e0f8946e9fe705ece
  Signed-off-by: sayaleeraut <saraut@redhat.com>
* [Test] Self-heal, add-brick on replicated volume types | Bala Konda Reddy M | 2020-10-23 | 1 | -0/+199
  1. Create a replicated/distributed-replicate volume and mount it.
  2. Start IO from the clients.
  3. Bring down a brick from the subvol and validate that it is offline.
  4. Bring the brick back online and wait for heal to complete.
  5. Once the heal is completed, expand the volume.
  6. Trigger rebalance and wait for rebalance to complete.
  7. Validate IO; there should be no errors during the steps performed from step 2.
  8. Check the arequal of the subvol; all the bricks in the same subvol should
     have the same checksum.
  Note: This test is clearly for replicated volume types.
  Change-Id: I2286e75cbee4f22a0ed14d6c320a4496dc3c3905
  Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
* [Test]: Add tc to check profile simultaneously on 2 different nodes | nik-redhat | 2020-10-22 | 1 | -0/+185
  Test Steps:
  1) Create a volume and start it.
  2) Mount the volume on a client and start IO.
  3) Start profile on the volume.
  4) Create another volume.
  5) Start profile on the new volume.
  6) Run volume status in a loop 100 times on one node.
  7) Run profile info for the new volume on one of the other nodes.
  8) Run profile info for the new volume in a loop 100 times on the other node.
  Change-Id: I1c32a938bf434a88aca033c54618dca88623b9d1
  Signed-off-by: nik-redhat <nladha@redhat.com>
* [Test] Add TC to check glusterd config file | “Milind” | 2020-10-22 | 1 | -0/+29
  1. Check the location of the glusterd socket file (glusterd.socket):
     ls /var/run/ | grep -i glusterd.socket
  2. Check that glusterd is enabled:
     systemctl is-enabled glusterd -> enabled
  Change-Id: I6557c27ffb7e91482043741eeac0294e171a0925
  Signed-off-by: “Milind” <mwaykole@redhat.com>
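Both checks can be scripted directly on a server node; a small hedged sketch (not the test's own code):

```python
# Check for the glusterd socket file and the systemd enablement state.
import subprocess
from pathlib import Path

def socket_file_present():
    # The usual location of the glusterd socket file on most installs
    return any(p.name == "glusterd.socket" for p in Path("/var/run").iterdir())

def glusterd_enabled():
    out = subprocess.run(["systemctl", "is-enabled", "glusterd"],
                         capture_output=True, text=True).stdout.strip()
    return out == "enabled"

if __name__ == "__main__":
    print("glusterd.socket present:", socket_file_present())
    print("glusterd enabled:", glusterd_enabled())
```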
* [Test] Add 2 memory leak tests and fix library issues | kshithijiyer | 2020-10-21 | 3 | -0/+237
  Scenarios added:
  ----------------
  Test case:
  1. Create a volume, start it and mount it.
  2. Start I/O from the mount point.
  3. Check if there are any memory leaks and OOM killers.

  Test case:
  1. Create a volume, start it and mount it.
  2. Set features.cache-invalidation to ON.
  3. Start I/O from the mount point.
  4. Run the gluster volume heal command in a loop.
  5. Check if there are any memory leaks and OOM killers on the servers.

  Design change:
  --------------
  - self.id() is moved into the test class as it was hitting bound errors in
    the original logic.
  - Logic changed for checking fuse leaks.
  - Fixed breakage in methods wherever needed.
  Change-Id: Icb600d833d0c08636b6002abb489342ea1f946d7
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Default volume behavior and quorum options | srijan-sivakumar | 2020-10-20 | 1 | -0/+129
  Steps:
  1. Create and start a volume.
  2. Check that the quorum options aren't coming up in the vol info.
  3. Kill two glusterd processes.
  4. There shouldn't be any effect on the glusterfsd processes.
  Change-Id: I40e6ab5081e723ae41417f1e5a6ece13c65046b3
  Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
* [Test] Add test that gluster does not release posix lock with multiple clients | “Milind” | 2020-10-19 | 1 | -0/+91
  Steps:
  1. Create all types of volumes.
  2. Mount the volume on two clients.
  3. Prepare the same script to do flock on the two nodes; while running this
     script it should not hang.
  4. Wait till 300 iterations complete on both the nodes.
  Change-Id: I53e5c8b3b924ac502e876fb41dee34e9b5a74ff7
  Signed-off-by: “Milind” <mwaykole@redhat.com>
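The per-client flock loop in step 3 could look roughly like the sketch below, run simultaneously on both clients against the same file; the mount path and iteration count are placeholders:

```python
# Take and release an exclusive flock on the same fuse-mounted file in a loop.
import fcntl
import time

LOCK_FILE = "/mnt/testvol/flock_test"   # same file on both client mounts

def flock_loop(iterations=300):
    for i in range(iterations):
        with open(LOCK_FILE, "w") as f:
            fcntl.flock(f, fcntl.LOCK_EX)   # should never hang indefinitely
            f.write(f"iteration {i}\n")
            time.sleep(0.1)
            fcntl.flock(f, fcntl.LOCK_UN)

if __name__ == "__main__":
    flock_loop()
```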
* [Test] Test Eager lock reduces the number of locks during write. | Sheetal | 2020-10-19 | 1 | -0/+161
  Steps:
  1. Create a disperse volume and start it.
  2. Set the eager lock option.
  3. Mount the volume and create a file.
  4. Check the profile info of the volume for the inodelk count.
  5. Check the xattrs of the file for the dirty bit.
  6. Reset the eager lock option and check the attributes again.
  Change-Id: I0ef1a0e89c1bc202e5df4022c6d98ad0de0c1a68
  Signed-off-by: Sheetal <spamecha@redhat.com>
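Steps 4 and 5 reduce to reading profile info and the file's dirty xattr on a brick. In the sketch below the volume name, the brick path and the trusted.ec.dirty xattr name are assumptions to confirm against the real test:

```python
# Count profile-info lines that mention INODELK and read the dirty xattr
# of a file directly from a brick.
import subprocess

def inodelk_lines(volname="testvol"):
    out = subprocess.run(["gluster", "volume", "profile", volname, "info"],
                         capture_output=True, text=True, check=True).stdout
    return sum("INODELK" in line for line in out.splitlines())

def dirty_xattr(brick_file="/bricks/brick1/testvol/file1"):
    # trusted.ec.dirty is assumed to be the disperse dirty-bit xattr
    return subprocess.run(["getfattr", "-n", "trusted.ec.dirty", "-e", "hex",
                           brick_file], capture_output=True, text=True).stdout

if __name__ == "__main__":
    print("INODELK lines in profile info:", inodelk_lines())
    print(dirty_xattr())
```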
* [TestFix] Changing the assert statement | “Milind” | 2020-10-19 | 1 | -1/+5
  Changed from
  `self.validate_vol_option('storage.reserve', '1 (DEFAULT)')`
  to
  `self.validate_vol_option('storage.reserve', '1')`
  Change-Id: If75820b4ab3c3b04454e232ea1eccc4ee5f7be0b
  Signed-off-by: “Milind” <mwaykole@redhat.com>
* [Test] Test mountpoint ownership post volume restart. | srijan-sivakumar | 2020-10-19 | 1 | -0/+109
  Steps:
  1. Create a volume and mount it.
  2. Set ownership permissions on the mountpoint and validate them.
  3. Restart the volume.
  4. Validate the permissions set on the mountpoint.
  Change-Id: I1bd3f0b5181bc93a7afd8e77ab5244224f2f4fed
  Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
* [Test] Add test to check glusterd crash when firewall ports are not opened | Pranav | 2020-10-12 | 1 | -0/+140
  Add test to verify whether a glusterd crash occurs while performing a peer
  probe with firewall services removed.
  Change-Id: If68c3da2ec90135a480a3cb1ffc85a6b46b1f3ef
  Signed-off-by: Pranav <prprakas@redhat.com>
* [Test]: Add tc to check volume status with brick removal | nik-redhat | 2020-10-12 | 1 | -12/+69
  Steps:
  1. Create a volume and start it.
  2. Fetch the brick list.
  3. Bring any one brick down and umount the brick.
  4. Force start the volume and check that not all the bricks are online.
  5. Remount the removed brick and bring the brick back online.
  6. Force start the volume and check if all the bricks are online.
  Change-Id: I464d3fe451cb7c99e5f21835f3f44f0ea112d7d2
  Signed-off-by: nik-redhat <nladha@redhat.com>
* [Test] Add test to fill brick and perform rename | kshithijiyer | 2020-10-12 | 1 | -0/+85
  Test case:
  1. Create a volume, start it and mount it.
  2. Calculate the usable size and fill the volume till it reaches the min free limit.
  3. Rename the file.
  4. Try to perform I/O from the mount point (this should fail).
  Change-Id: Iaee9944b6ba676157ee2453d734a4335aac27811
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Rebalance preserves / and user subdirs permissions | Tamar Shacked | 2020-10-12 | 1 | -0/+192
  Test case:
  1. Create a volume, start it and mount it on the client.
  2. Set full permission on the mount point.
  3. Add a new user to the client.
  4. As the new user, create dirs/files.
  5. Compute the arequal checksum and verify permissions on / and the subdir.
  6. Add a brick to the volume and start rebalance.
  7. After rebalance is completed:
     7.1 Check the arequal checksum.
     7.2 Verify there is no change in permissions on / and the sub dir.
     7.3 As the new user, create and delete a file/dir.
  Change-Id: Iacd829c0714c28e231c9fc52df6526200cb53041
  Signed-off-by: Tamar Shacked <tshacked@redhat.com>
* [TestFix]: Add tc to check volume status with bricks absent | nik-redhat | 2020-10-09 | 1 | -45/+30
  Fix: Added more volume types to perform tests and optimized the code for a
  better flow.
  Change-Id: I8249763161f30109d068da401504e0a24cde4d78
  Signed-off-by: nik-redhat <nladha@redhat.com>
* [TestFix] Add check to verify glusterd Error | Pranav | 2020-10-07 | 1 | -0/+27
  Adding a check to verify that gluster volume status doesn't cause any error
  messages in the glusterd logs.
  Change-Id: I5666aa7fb7932a7b61a56afa7d60341ef66a978e
  Signed-off-by: Pranav <prprakas@redhat.com>
* [TestFix] Add check vol size after bringing min brick down | Pranav | 2020-10-07 | 1 | -37/+67
  Added a check to verify the behavior after bringing down the smallest brick.
  The available volume size should not be greater than the initial volume size.
  Test skipped due to bug:
  https://bugzilla.redhat.com/show_bug.cgi?id=1883429
  Change-Id: I00c0310210f6fe218cedd23e055dfaec3632ec8d
  Signed-off-by: Pranav <prprakas@redhat.com>
* [Test] Volume profile info without starting profile | nik-redhat | 2020-10-06 | 1 | -0/+188
  Steps:
  1. Create a volume and start it.
  2. Mount the volume on the client and start IO.
  3. Start profile on the volume.
  4. Run profile info and see if all bricks are present or not.
  5. Create another volume and start it.
  6. Run profile info without starting profile.
  7. Run profile info with all possible options without starting profile.
  Change-Id: I0eb2424f385197c45bc0c4e3084c053a9498ae7d
  Signed-off-by: nik-redhat <nladha@redhat.com>
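A hedged sketch of steps 3-6 using the gluster CLI directly; "vol1" and "vol2" are placeholder volume names, and profile info on a volume where profiling was never started is expected to return a non-zero exit code:

```python
# Run profile start/info on one volume and profile info on an unprofiled one.
import subprocess

def gluster(*args):
    return subprocess.run(["gluster", "volume", *args],
                          capture_output=True, text=True)

gluster("profile", "vol1", "start")
started = gluster("profile", "vol1", "info")      # should list every brick
not_started = gluster("profile", "vol2", "info")  # expected to error out

print("profile info (started) rc:", started.returncode)
print("profile info (not started) rc:", not_started.returncode)
```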
* [Test] Replica 3 to arbiter conversion with ongoing IO's | Arthy Loganathan | 2020-10-06 | 1 | -13/+106
  Change-Id: I3920be66ac84fe700c4d0d6a1d2c1750efb43335
  Signed-off-by: Arthy Loganathan <aloganat@redhat.com>
* [Test] multiple clients dd on same-file | Arthy Loganathan | 2020-10-01 | 1 | -9/+14
  Change-Id: I465fefeae36a5b700009bb1d6a3c6639ffafd6bd
  Signed-off-by: Arthy Loganathan <aloganat@redhat.com>
* [Test] Add tests to check rebalance of files with holes | kshithijiyer | 2020-09-30 | 1 | -0/+128
  Scenarios:
  ----------
  Test case:
  1. Create a volume, start it and mount it using fuse.
  2. On the volume root, create files with holes.
  3. After the file creation is complete, add bricks to the volume.
  4. Trigger rebalance on the volume.
  5. Wait for rebalance to complete.

  Test case:
  1. Create a volume, start it and mount it using fuse.
  2. On the volume root, create files with holes.
  3. After the file creation is complete, remove a brick from the volume.
  4. Wait for remove-brick to complete.
  Change-Id: Icf512685ed8d9ceeb467fb694d3207797aa34e4c
  Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* [Test] Glusterfind --full --type option | Shwetha K Acharya | 2020-09-30 | 1 | -0/+175
  * Create a volume
  * Create a session on the volume
  * Create various files on the mount point
  * Create various directories on the mount point
  * Perform glusterfind pre with --full --type f --regenerate-outfile
  * Check the contents of the outfile
  * Perform glusterfind pre with --full --type d --regenerate-outfile
  * Check the contents of the outfile
  * Perform glusterfind pre with --full --type both --regenerate-outfile
  * Check the contents of the outfile
  * Perform glusterfind query with --full --type f
  * Check the contents of the outfile
  * Perform glusterfind query with --full --type d
  * Check the contents of the outfile
  * Perform glusterfind query with --full --type both
  * Check the contents of the outfile
  Change-Id: I5c4827ff2052a90613de7bd38d61aaf23cb3284b
  Signed-off-by: Shwetha K Acharya <sacharya@redhat.com>
* [Test] Validate copy of file | sayaleeraut | 2020-09-29 | 1 | -0/+336
  This test script covers the following scenarios:
  1) Sub-volume is down; copy a file where the source and destination files are
     on the up sub-volume.
  2) Sub-volume is down; copy a file where the source is hashed down, cached up,
     and the destination is hashed down.
  3) Sub-volume is down; copy a file where the source is hashed down, cached up,
     and the destination is hashed to up.
  4) Sub-volume is down; copy a file where the source and destination files are
     hashing to the down sub-volume.
  5) Sub-volume is down; copy a file where the source file is stored on the down
     sub-volume and the destination file is stored on the up sub-volume.
  6) Sub-volume is down; copy a file where the source file is stored on the up
     sub-volume and the destination file is stored on the down sub-volume.
  Change-Id: I2765857950723aa8907456364aee9159f9a529ed
  Signed-off-by: sayaleeraut <saraut@redhat.com>
* [Test] Test Quorum specific CLI commands. | srijan-sivakumar | 2020-09-29 | 1 | -0/+97
  Steps:
  1. Create a volume and start it.
  2. Set the quorum-type to 'server' and verify it.
  3. Set the quorum-type to 'none' and verify it.
  4. Set the quorum-ratio to some value and verify it.
  Change-Id: I08715972c13fc455cee25f25bdda852b92a48e10
  Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
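The CLI calls behind steps 2-4 look roughly like the following; "testvol" is a placeholder, and cluster.server-quorum-ratio is a cluster-wide option set via "all":

```python
# Set and read back the server-quorum options through the gluster CLI.
import subprocess

def vol_set(target, option, value):
    subprocess.run(["gluster", "volume", "set", target, option, value], check=True)

def vol_get(volname, option):
    return subprocess.run(["gluster", "volume", "get", volname, option],
                          capture_output=True, text=True, check=True).stdout

vol_set("testvol", "cluster.server-quorum-type", "server")
print(vol_get("testvol", "cluster.server-quorum-type"))
vol_set("testvol", "cluster.server-quorum-type", "none")
vol_set("all", "cluster.server-quorum-ratio", "51%")
```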
* [Test] Test set and reset of storage.reserve limit on glusterd | srijan-sivakumar | 2020-09-29 | 1 | -0/+91
  Steps:
  1. Create a volume and start it.
  2. Set a storage.reserve limit on the created volume and verify it.
  3. Reset the storage.reserve limit on the created volume and verify it.
  Change-Id: I6592d19463696ba2c43efbb8f281024fc610d18d
  Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>