Test case:
1. Create volume, start it and mount it.
2. Open file datafile on the mount point and start copying /etc/passwd
line by line (make sure that the copy is slow).
3. Start remove-brick of the subvol to which datafile is hashed.
4. Once remove-brick is complete, compare the checksum of /etc/passwd
and datafile.
Change-Id: I278e819731af03094dcee93963ec1da115297bef
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
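A minimal sketch of the flow above, assuming a hypothetical volume 'testvol' mounted at /mnt/testvol and a brick server1:/bricks/brick1 chosen to match the subvol that datafile hashes to; this is illustrative, not the patch's actual test code.

    import hashlib
    import subprocess
    import time

    run = lambda c: subprocess.run(c, shell=True, check=True,
                                   capture_output=True, text=True).stdout

    # Kick off remove-brick of the subvol that datafile hashes to (async).
    run("gluster volume remove-brick testvol server1:/bricks/brick1 start")

    # Copy /etc/passwd line by line, deliberately slowly, while migration runs.
    with open("/etc/passwd") as src, open("/mnt/testvol/datafile", "w") as dst:
        for line in src:
            dst.write(line)
            dst.flush()
            time.sleep(0.5)

    # Wait for the remove-brick to finish, then commit it.
    while "completed" not in run(
            "gluster volume remove-brick testvol server1:/bricks/brick1 status"):
        time.sleep(5)
    run("gluster --mode=script volume remove-brick testvol "
        "server1:/bricks/brick1 commit")

    # The copied file must still match the source.
    md5 = lambda p: hashlib.md5(open(p, "rb").read()).hexdigest()
    assert md5("/etc/passwd") == md5("/mnt/testvol/datafile")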
Change-Id: I080328dfbcde5652f9ab697f8751b87bf96e8245
Signed-off-by: "Milind" <mwaykole@redhat.com>
Steps-
1. Create a volume and mount it.
2. Set the quorum type to 'server'.
3. Bring some nodes down such that quorum isn't met.
4. Brick status in the node which is up should be offline.
5. Restart glusterd in this node.
6. Brick status in the restarted node should be offline.
Change-Id: If6885133848d77ec803f059f7a056dc3aeba7eb1
Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
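A rough CLI sketch of the quorum check above, assuming a hypothetical volume 'testvol'; breaking quorum (stopping glusterd on enough peer nodes) is done out of band.

    import subprocess

    run = lambda c: subprocess.run(c, shell=True, capture_output=True, text=True).stdout

    # Enforce server-side quorum on the volume.
    run("gluster volume set testvol cluster.server-quorum-type server")

    # ...stop glusterd on enough peer nodes (systemctl stop glusterd) so that
    # quorum is lost, then on the surviving node:

    # Bricks on this node should be reported offline while quorum is not met.
    print(run("gluster volume status testvol"))

    # Restarting glusterd on this node must not bring the bricks back online.
    run("systemctl restart glusterd")
    print(run("gluster volume status testvol"))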
Test case:
1. Create a pure distribute volume with 3 bricks.
2. Start it and mount it on client.
3. Fill one disk of the volume till it is full.
4. Add brick to volume, start rebalance and wait for it to complete.
5. Check that the arequal checksum before and after add-brick is the same.
6. Check if link files are present on the bricks or not.
Change-Id: I4645a3eea33fefe78d48805a3794556b81b189bc
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
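A minimal sketch of steps 4-6, assuming a hypothetical volume 'testvol' mounted at /mnt/testvol, a spare brick on server4, and the arequal-checksum utility used by the gluster tests.

    import subprocess
    import time

    run = lambda c: subprocess.run(c, shell=True, check=True,
                                   capture_output=True, text=True).stdout

    # Checksum of the mount before expanding the volume.
    before = run("arequal-checksum -p /mnt/testvol")

    # Add a brick and rebalance the layout.
    run("gluster volume add-brick testvol server4:/bricks/newbrick")
    run("gluster volume rebalance testvol start")
    while "completed" not in run("gluster volume rebalance testvol status"):
        time.sleep(10)

    # Data must be unchanged, i.e. the arequal checksum stays the same.
    assert before == run("arequal-checksum -p /mnt/testvol")

    # Link (linkto) files, if any, show up as ---------T entries when
    # listing the backend bricks (brick path is illustrative).
    print(run("ls -l /bricks/brick1"))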
Test Steps:
1) Create and start a volume
2) Run volume info command
3) Run volume status command
4) Run volume stop command
5) Run volume start command
6) Check the default log level of cli.log
Change-Id: I871d83500b2a3876541afa348c49b8ce32169f23
Signed-off-by: nik-redhat <nladha@redhat.com>
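A small sketch of the log-level check, assuming the stock CLI log at /var/log/glusterfs/cli.log with its usual "] <severity-letter> [" line format and a hypothetical volume 'testvol'.

    import subprocess

    run = lambda c: subprocess.run(c, shell=True, check=True,
                                   capture_output=True, text=True).stdout

    # Exercise the CLI so cli.log gets fresh entries.
    run("gluster volume info testvol")
    run("gluster volume status testvol")
    run("gluster --mode=script volume stop testvol")
    run("gluster volume start testvol")

    # The default CLI log level is INFO, so no DEBUG/TRACE lines are expected.
    log = open("/var/log/glusterfs/cli.log").read()
    assert " D [" not in log and " T [" not in log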
Test Steps:
1. Create and start a volume
2. Check the output of '/var/lib/glusterd/options' file
3. Store the value of 'global-option-version'
4. Set server-quorum-ratio to 70%
5. Check the output of '/var/lib/glusterd/options' file
6. Compare the value of 'global-option-version' and check
if the value of 'server-quorum-ratio' is set to 70%
Change-Id: I5af40a1e05eb542e914e5766667c271cbbe126e8
Signed-off-by: nik-redhat <nladha@redhat.com>
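A sketch of the options-file check, assuming /var/lib/glusterd/options keeps its usual key=value layout; the quorum ratio is a cluster-wide option, hence 'volume set all'.

    import subprocess

    run = lambda c: subprocess.run(c, shell=True, check=True,
                                   capture_output=True, text=True).stdout

    def read_options():
        # Cluster-wide options are persisted as key=value pairs in this file.
        with open("/var/lib/glusterd/options") as f:
            return dict(line.strip().split("=", 1) for line in f if "=" in line)

    version_before = read_options().get("global-option-version")
    run("gluster volume set all cluster.server-quorum-ratio 70%")
    opts = read_options()

    # The option version should have been bumped and the new ratio persisted.
    assert opts.get("global-option-version") != version_before
    assert opts.get("cluster.server-quorum-ratio", "").startswith("70")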
Test Steps:
1. Create and start a volume
2. Disable brick multiplex
3. Set auth.allow option on volume for the client address on which
volume is to be mounted
4. Mount the volume on client and then unmount it.
5. Reset the volume
6. Set auth.reject option on volume for the client address on which
volume is to be mounted
7. Mounting the volume should fail
8. Reset the volume and mount it on client.
9. Repeat steps 3-8 with brick multiplex enabled
Change-Id: I26d88a217c03f1b4732e4bdb9b8467a9cd608bae
Signed-off-by: nik-redhat <nladha@redhat.com>
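A condensed sketch of one pass of the auth.allow/auth.reject flow (brick multiplex off), assuming a hypothetical volume 'testvol' served by server1 and a client at 10.0.0.11.

    import subprocess

    def run(cmd):
        # Return only the exit status; mount failure is an expected outcome below.
        return subprocess.run(cmd, shell=True, capture_output=True, text=True).returncode

    client_ip = "10.0.0.11"

    run("gluster volume set all cluster.brick-multiplex off")

    # auth.allow for this client: the mount should succeed.
    run("gluster volume set testvol auth.allow %s" % client_ip)
    assert run("mount -t glusterfs server1:/testvol /mnt/testvol") == 0
    run("umount /mnt/testvol")
    run("gluster --mode=script volume reset testvol")

    # auth.reject for the same client: the mount should now fail.
    run("gluster volume set testvol auth.reject %s" % client_ip)
    assert run("mount -t glusterfs server1:/testvol /mnt/testvol") != 0
    run("gluster --mode=script volume reset testvol")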
1) Create a distributed-replicated volume and start it.
2) Enable the storage.reserve option on the volume using
`gluster volume set <volname> storage.reserve <value>`;
let's say, set it to a value of 50.
3) Mount the volume on a client
4) Check df -h output of the mount point and backend bricks.
Change-Id: I74f891ce5a92e1a4769ec47c64fc5469b6eb9224
Signed-off-by: "Milind" <mwaykole@redhat.com>
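A small sketch of the df comparison, assuming a hypothetical volume 'testvol' mounted at /mnt/testvol with a backend brick at /bricks/brick1; with storage.reserve set to 50, the mount should advertise roughly half of the brick capacity while the brick filesystem still shows its full size.

    import subprocess

    run = lambda c: subprocess.run(c, shell=True, check=True,
                                   capture_output=True, text=True).stdout

    # Reserve 50% of every brick for internal use.
    run("gluster volume set testvol storage.reserve 50")

    # Compare what the client sees with what the backend really has.
    print(run("df -h /mnt/testvol"))
    print(run("df -h /bricks/brick1"))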
Steps:
1. Create a replicated/distributed-replicate volume and mount it
2. Set data/metadata/entry-self-heal to off and
data-self-heal-algorithm to diff
3. Create few files inside a directory with some data
4. Check arequal of the subvol; all the bricks in the subvol should
have the same checksum
5. Bring down a brick from the subvol and validate it is offline
6. Modify the data of existing files under the directory
7. Bring back the brick online and wait for heal to complete
8. Check arequal of the subvol; all the bricks in the same subvol
should have the same checksum
Change-Id: I568a932c6e1db4a9084c01556c5fcca7c8e24a49
Signed-off-by: karthik-us <ksubrahm@redhat.com>
Changes done in this patch:
1. Adding get_usable_size_per_disk() to lib_utils.py.
2. Removing the redundant code from
dht/test_rename_with_brick_min_free_limit_crossed.py.
Change-Id: I80c1d6124b7f0ce562d8608565f7c46fd8612d0d
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Test case:
1. Create a volume, start it and mount it.
2. Create files and dirs on the mount point.
3. Start remove-brick and copy huge file when remove-brick is
in progress.
4. Commit remove-brick and check the checksums of the original and copied file.
Change-Id: I487ca05114c1f36db666088f06cf5512671ee7d7
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
This test script covers below scenarios:
1) Creation of various file types - regular, block,
character and pipe file
2) Hard link create, validate
3) Symbolic link create, validate
Issue: Fails on CI due to
https://github.com/gluster/glusterfs/issues/1461
Change-Id: If50b8d697115ae7c23b4d30e0f8946e9fe705ece
Signed-off-by: sayaleeraut <saraut@redhat.com>
1. Create a replicated/distributed-replicate volume and mount it
2. Start IO from the clients
3. Bring down a brick from the subvol and validate it is offline
4. Bring back the brick online and wait for heal to complete
5. Once the heal is completed, expand the volume.
6. Trigger rebalance and wait for rebalance to complete
7. Validate IO, no errors during the steps performed from step 2
8. Check arequal of the subvol; all the bricks in the same subvol
should have the same checksum
Note: This test is specifically for replicated volume types.
Change-Id: I2286e75cbee4f22a0ed14d6c320a4496dc3c3905
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
Test Steps:
1) Create a volume and start it.
2) Mount volume on client and start IO.
3) Start profile on the volume.
4) Create another volume.
5) Start profile on the volume.
6) Run volume status in a loop 100 times on one node.
7) Run profile info for the new volume on one of the other nodes
8) Run profile info for the new volume in a loop 100 times on
the other node
Change-Id: I1c32a938bf434a88aca033c54618dca88623b9d1
Signed-off-by: nik-redhat <nladha@redhat.com>
1. Check the location of the glusterd socket file (glusterd.socket):
ls /var/run/ | grep -i glusterd.socket
2. Check that glusterd is enabled: systemctl is-enabled glusterd -> enabled
Change-Id: I6557c27ffb7e91482043741eeac0294e171a0925
Signed-off-by: "Milind" <mwaykole@redhat.com>
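The same two checks expressed as a small sketch; the paths and the unit name are the ones given in the steps above.

    import subprocess

    run = lambda c: subprocess.run(c, shell=True, capture_output=True, text=True).stdout

    # 1. The glusterd unix socket should live under /var/run.
    assert "glusterd.socket" in run("ls /var/run/ | grep -i glusterd.socket")

    # 2. The glusterd systemd service should be enabled.
    assert run("systemctl is-enabled glusterd").strip() == "enabled"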
Scenarios added:
----------------
Test case:
1. Create a volume, start it and mount it.
2. Start I/O from mount point.
3. Check if there are any memory leaks and OOM killers.
Test case:
1. Create a volume, start it and mount it.
2. Set features.cache-invalidation to ON.
3. Start I/O from mount point.
4. Run gluster volume heal command in a loop
5. Check if there are any memory leaks and OOM killers on servers.
Design change:
--------------
- self.id() is moved into the test class as it was hitting bound
errors in the original logic.
- Logic changed for checking fuse leaks.
- Fixed breakage in methods wherever needed.
Change-Id: Icb600d833d0c08636b6002abb489342ea1f946d7
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
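A rough sketch of the leak/OOM check on one server, assuming the kernel log is readable via dmesg and that resident-set growth of the gluster daemons is an acceptable leak proxy (the 100 MB threshold is arbitrary).

    import subprocess

    run = lambda c: subprocess.run(c, shell=True, capture_output=True, text=True).stdout

    def gluster_rss_kb():
        # Sum the resident memory of all gluster daemons on this node.
        out = run("ps -C glusterfs,glusterfsd,glusterd -o rss=")
        return sum(int(x) for x in out.split())

    rss_before = gluster_rss_kb()
    # ... run the I/O workload (and the heal-command loop) here ...
    rss_after = gluster_rss_kb()

    # No gluster process should have been OOM-killed during the run.
    assert "gluster" not in run("dmesg | grep -i 'killed process'")

    # And the daemons should not have grown unreasonably.
    assert rss_after - rss_before < 100 * 1024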
Steps-
1. Create and start volume.
2. Check that the quorum options aren't coming up in the vol info.
3. Kill two glusterd processes.
4. There shouldn't be any effect on the glusterfsd processes.
Change-Id: I40e6ab5081e723ae41417f1e5a6ece13c65046b3
Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
Steps:
1. Create all types of volumes.
2. Mount the volume on two clients
3. Prepare the same script to do flock on the two nodes;
while running this script it should not hang
4. Wait till 300 iterations complete on both the nodes
Change-Id: I53e5c8b3b924ac502e876fb41dee34e9b5a74ff7
Signed-off-by: "Milind" <mwaykole@redhat.com>
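A minimal version of the flock script from step 3, meant to be run simultaneously from both client mounts against the same file (the path is hypothetical); if locking on the volume is healthy, both copies finish all 300 iterations without hanging.

    import fcntl
    import time

    LOCKFILE = "/mnt/testvol/flock_test"

    for i in range(300):
        with open(LOCKFILE, "w") as f:
            # Blocks while the other client holds the exclusive lock.
            fcntl.flock(f, fcntl.LOCK_EX)
            f.write("iteration %d\n" % i)
            time.sleep(0.1)
            fcntl.flock(f, fcntl.LOCK_UN)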
Steps-
1. Create a disperse volume and start it.
2. Set the eager lock option
3. Mount the volume and create a file
4. Check the profile info of the volume for inodelk count.
5. Check xattrs of the file for the dirty bit.
6. Reset the eager lock option and check the attributes again.
Change-Id: I0ef1a0e89c1bc202e5df4022c6d98ad0de0c1a68
Signed-off-by: Sheetal <spamecha@redhat.com>
changed from
`self.validate_vol_option('storage.reserve', '1 (DEFAULT)')`
to
`self.validate_vol_option('storage.reserve', '1')`
Change-Id: If75820b4ab3c3b04454e232ea1eccc4ee5f7be0b
Signed-off-by: "Milind" <mwaykole@redhat.com>
Steps-
1. Create a volume and mount it.
2. Set ownership permissions on the mountpoint and validate it.
3. Restart the volume.
4. Validate the permissions set on the mountpoint.
Change-Id: I1bd3f0b5181bc93a7afd8e77ab5244224f2f4fed
Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
Add test to verify whether the glusterd crash is found while
performing a peer probe with firewall services removed.
Change-Id: If68c3da2ec90135a480a3cb1ffc85a6b46b1f3ef
Signed-off-by: Pranav <prprakas@redhat.com>
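A sketch of the probe-with-missing-firewall-rules scenario, assuming firewalld with its stock 'glusterfs' service definition and a hypothetical peer hostname; the point is only that glusterd must survive the probe, whether or not the probe itself succeeds.

    import subprocess

    run = lambda c: subprocess.run(c, shell=True, capture_output=True, text=True)

    # Remove the gluster firewall service and reload the rules.
    run("firewall-cmd --zone=public --remove-service=glusterfs --permanent")
    run("firewall-cmd --reload")

    # Probe the peer; it may fail, but glusterd must not crash.
    run("gluster peer probe peer2.example.com")
    assert run("pidof glusterd").returncode == 0
    print(run("gluster peer status").stdout)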
Steps:
1. Create a volume and start it.
2. Fetch the brick list
3. Bring any one brick down and unmount the brick
4. Force start the volume and check that not all the bricks are online
5. Remount the removed brick and bring back the brick online
6. Force start the volume and check if all the bricks are online
Change-Id: I464d3fe451cb7c99e5f21835f3f44f0ea112d7d2
Signed-off-by: nik-redhat <nladha@redhat.com>
Test case:
1. Create a volume, start it and mount it.
2. Calculate the usable size and fill the volume till it reaches the min free limit
3. Rename the file
4. Try to perform I/O from the mount point (this should fail).
Change-Id: Iaee9944b6ba676157ee2453d734a4335aac27811
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Test case:
1. Create a volume, start it and mount it on the client.
2. Set full permission on the mount point.
3. Add new user to the client.
4. As the new user create dirs/files.
5. Compute arequal checksum and verify permissions on / and subdir.
6. Add brick into the volume and start rebalance.
7. After rebalance is completed:
7.1 Check arequal checksum
7.2 Verify no change in permissions on / and sub dir
7.3 As the new user create and delete file/dir.
Change-Id: Iacd829c0714c28e231c9fc52df6526200cb53041
Signed-off-by: Tamar Shacked <tshacked@redhat.com>
Fix:
Added more volume types to perform tests and
optimized the code for a better flow.
Change-Id: I8249763161f30109d068da401504e0a24cde4d78
Signed-off-by: nik-redhat <nladha@redhat.com>
Adding check to verify that gluster volume status
does not cause any error messages in glusterd logs
Change-Id: I5666aa7fb7932a7b61a56afa7d60341ef66a978e
Signed-off-by: Pranav <prprakas@redhat.com>
Added check to verify the behavior after bringing down the
smallest brick. The available volume size should not be
greater than the initial vol size
Test skipped due to bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1883429
Change-Id: I00c0310210f6fe218cedd23e055dfaec3632ec8d
Signed-off-by: Pranav <prprakas@redhat.com>
Steps-
1. Create a volume and start it.
2. Mount volume on the client and start IO.
3. Start profile on the volume
4. Run profile info and see if all bricks
are present or not
5. Create another volume and start it.
6. Run profile info without starting profile.
7. Run profile info with all possible options
without starting profile.
Change-Id: I0eb2424f385197c45bc0c4e3084c053a9498ae7d
Signed-off-by: nik-redhat <nladha@redhat.com>
Change-Id: I3920be66ac84fe700c4d0d6a1d2c1750efb43335
Signed-off-by: Arthy Loganathan <aloganat@redhat.com>
Change-Id: I465fefeae36a5b700009bb1d6a3c6639ffafd6bd
Signed-off-by: Arthy Loganathan <aloganat@redhat.com>
Scenarios:
---------
Test case:
1. Create a volume, start it and mount it using fuse.
2. On the volume root, create files with holes.
3. After the file creation is complete, add bricks to the volume.
4. Trigger rebalance on the volume.
5. Wait for rebalance to complete.
Test case:
1. Create a volume, start it and mount it using fuse.
2. On the volume root, create files with holes.
3. After the file creation is complete, remove-brick from volume.
4. Wait for remove-brick to complete.
Change-Id: Icf512685ed8d9ceeb467fb694d3207797aa34e4c
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
* Create a volume
* Create a session on the volume
* Create various files on mount point
* Create various directories on the mount point
* Perform glusterfind pre with --full --type f --regenerate-outfile
* Check the contents of outfile
* Perform glusterfind pre with --full --type d --regenerate-outfile
* Check the contents of outfile
* Perform glusterfind pre with --full --type both --regenerate-outfile
* Check the contents of outfile
* Perform glusterfind query with --full --type f
* Check the contents of outfile
* Perform glusterfind query with --full --type d
* Check the contents of outfile
* Perform glusterfind query with --full --type d
* Check the contents of outfile
Change-Id: I5c4827ff2052a90613de7bd38d61aaf23cb3284b
Signed-off-by: Shwetha K Acharya <sacharya@redhat.com>
This test script covers following scenarios:
1) Sub-volume is down copy file where source and destination files
are on up sub-volume
2) Sub-volume is down copy file where source - hashed down,
cached up, destination - hashed down
3) Sub-volume is down copy file where source - hashed down,
cached up, destination hashed to up
4) Sub-volume is down copy file where source and destination files
are hashing to down sub-volume
5) Sub-volume is down copy file where source file is stored on
down sub-volume and destination file is stored on up sub-volume
6) Sub-volume is down copy file where source file is stored on up
sub-volume and destination file is stored on down sub-volume
Change-Id: I2765857950723aa8907456364aee9159f9a529ed
Signed-off-by: sayaleeraut <saraut@redhat.com>
Steps-
1. Create a volume and start it.
2. Set the quorum-type to 'server' and verify it.
3. Set the quorum-type to 'none' and verify it.
4. Set the quorum-ratio to some value and verify it.
Change-Id: I08715972c13fc455cee25f25bdda852b92a48e10
Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
Steps-
1. Create a volume and start it.
2. Set storage.reserve limit on the created volume and verify
3. Reset storage.reserve limit on the created volume and verify
Change-Id: I6592d19463696ba2c43efbb8f281024fc610d18d
Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
Test to validate gluster peer probe scenarios using IP address,
hostname and FQDN by verifying each with peer status output,
pool list and cmd_history.log
Change-Id: I77512cfcf62b28e70682405c47014646be71593c
Signed-off-by: Pranav <prprakas@redhat.com>
Steps-
1) Create a volume and start it.
2) Fetch the brick list
3) Remove any brickpath
4) Check that the number of bricks online is equal
to the number of bricks in the volume
Change-Id: I4c3a6692fc88561a47a7d2564901f21dfe0073d4
Signed-off-by: nik-redhat <nladha@redhat.com>
1. Check for the presence of /var/lib/glusterd/glusterd.info file
2. Get the UUID of the current NODE
3. Check the value of the UUID returned by executing the command
"gluster system:: uuid get"
4. Check the UUID value shown by the other nodes in the cluster
for the same node; "gluster peer status"
on one node will give the UUID of the other node
Change-Id: I61dfb227e37b87e889577b77283d65eda4b3cd29
Signed-off-by: "Milind" <mwaykole@redhat.com>
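A small sketch of the UUID cross-check on one node, assuming glusterd.info keeps its usual UUID=<uuid> line and that 'gluster system:: uuid get' echoes the same value.

    import subprocess

    run = lambda c: subprocess.run(c, shell=True, check=True,
                                   capture_output=True, text=True).stdout

    # UUID recorded locally by glusterd.
    with open("/var/lib/glusterd/glusterd.info") as f:
        info = dict(l.strip().split("=", 1) for l in f if "=" in l)
    local_uuid = info["UUID"]

    # The CLI must report the same UUID on this node.
    assert local_uuid in run("gluster system:: uuid get")

    # Other peers list this node with the same UUID in 'gluster peer status'.
    print(run("gluster peer status"))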
Reason: The cd will change the working directory to root,
and renames and softlink creations for subsequent files will
fail, as seen in the glusto logs.
Change-Id: I174ac11007dc301ba6ec8ccddaeb919a181b1c30
Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
Test case:
1. Create a volume and start it.
2. Mount the volume using FUSE.
3. Create multiple level of dirs and files inside every dir.
4. Rename files such that linkto files are created.
5. From the mount point do an rm -rf * and check if all files
are deleted or not from the mount point as well as the backend bricks.
Change-Id: I658f67832715dde7260827cc0a27b005b6df5fe3
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Test case:
1. Create a 2 x (4+2) disperse volume and start it.
2. Disable performance.force-readdirp and dht.force-readdirp.
3. Mount the volume on one client and create 8 directories.
4. Do a lookup on the mount using the same mount point,
number of directories should be 8.
5. Mount the volume again on a different client and check
if number of directories is the same or not.
Change-Id: Id94db2bc9200ab2ce4ca2fb604f38ca4525e6ed1
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Test case:
1. Create a pure distribute volume with 2 bricks,
start and mount it.
2. Create dir dir0/dir1/dir2 inside which create 1000
files and rename all the files.
3. Start remove-brick operation on the volume.
4. Check remove-brick status till status is completed.
5. When remove-brick status is completed stop it.
6. Go to the brick used for remove-brick and perform lookup
on the files.
7. Change the linkto xattr value for every file in the brick
used for remove-brick to point to itself.
8. Perform rm -rf * from the mount point.
Change-Id: Ic4a5e0ff93485c9c7d9a768093a52e1d34b78bdf
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Test case:
1. Create volume with 5 sub-volumes, start and mount it.
2. Check df -h for available size.
3. Create 2 sparse files, one from /dev/null and one from /dev/zero.
4. Find out size of files and compare them through du and ls.
(They shouldn't match.)
5. Check df -h for available size.(It should be less than step 2.)
6. Remove the files using rm -rf.
CentOS-CI failure analysis:
The testcase fails on CentOS-CI on distributed-disperse
volumes as it requires 30 bricks, which aren't available
on CentOS-CI.
Change-Id: Ie53b2531cf6105117625889d21c6e27ad2c10667
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
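A sketch of the sparse-file part of the test (steps 2-6), assuming a hypothetical mount at /mnt/testvol; the /dev/null file relies on dd's seek trick to stay sparse.

    import subprocess

    run = lambda c: subprocess.run(c, shell=True, check=True,
                                   capture_output=True, text=True).stdout

    mnt = "/mnt/testvol"
    print(run("df -h %s" % mnt))                        # available size before

    # Seeking past EOF with no input data leaves a hole: a 1 GiB sparse file.
    run("dd if=/dev/null of=%s/sparse_null bs=1M seek=1024" % mnt)
    # Writing real zeroes allocates the full 1 GiB.
    run("dd if=/dev/zero of=%s/file_zero bs=1M count=1024" % mnt)

    # ls shows the apparent size, du the allocated blocks; they should
    # differ for the sparse file.
    print(run("ls -lh %s/sparse_null %s/file_zero" % (mnt, mnt)))
    print(run("du -h %s/sparse_null %s/file_zero" % (mnt, mnt)))

    print(run("df -h %s" % mnt))                        # less space than before
    run("rm -rf %s/sparse_null %s/file_zero" % (mnt, mnt))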
Test case:
1. Create a distributed volume, start and mount it
2. Create 1000 dirs and 1000 files under a directory say 'dir1'
3. Set xattr glusterfs.dht.nuke to "test" for dir1
4. Validate dir1 is not seen from the mount point
5. Validate if the entry is moved to '/brickpath/.glusterfs/landfill'
and deleted eventually.
Change-Id: I6359ee3c39df4e9e024a1536c95d861966f78ce5
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
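A sketch of the nuke flow, assuming a hypothetical volume mounted at /mnt/testvol with a backend brick at /bricks/brick1; the xattr name and value are the ones given in the steps above.

    import os
    import subprocess

    run = lambda c: subprocess.run(c, shell=True, check=True,
                                   capture_output=True, text=True).stdout

    # Setting glusterfs.dht.nuke on a directory asks DHT to wipe it server-side.
    run("setfattr -n glusterfs.dht.nuke -v test /mnt/testvol/dir1")

    # The directory should no longer be visible from the mount...
    assert not os.path.exists("/mnt/testvol/dir1")

    # ...and on the bricks it is parked under .glusterfs/landfill until it
    # is removed for good.
    print(run("ls /bricks/brick1/.glusterfs/landfill"))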
Extending the existing validation by adding
node restart as a method to bring back
offline bricks, along with the existing volume start
approach.
Change-Id: I1291b7d9b4a3c299859175b4cdcd2952339c48a4
Signed-off-by: Pranav <prprakas@redhat.com>
Test case:
1. Create a 1 brick pure distributed volume.
2. Start the volume and mount it on a client node using FUSE.
3. Create a directory on the mount point.
4. Check trusted.glusterfs.dht xattr on the backend brick.
5. Add brick to the volume using force.
6. Do lookup from the mount point.
7. Check the directory permissions from the backend bricks.
8. Check trusted.glusterfs.dht xattr on the backend bricks.
9. From mount point cd into the directory.
10. Check the directory permissions from backend bricks.
11. Check trusted.glusterfs.dht xattr on the backend bricks.
Change-Id: I1ba2c07560bf4bdbf7de5d3831e5de71173b64a2
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
Steps-
1. Create Volume of type distribute
2. Set Quota limit on the root directory
3. Do some IO to reach the Hard limit
4. After IO ends, compute arequal checksum
5. Add bricks to the volume.
6. Start rebalance
7. After rebalance is completed, check arequal checksum
Change-Id: I1cffafbe90dd30013e615c353d6fd7daa5990a86
Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
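A sketch of the quota plus rebalance flow, assuming a hypothetical distribute volume 'testvol' mounted at /mnt/testvol, a spare brick on server4, and the arequal-checksum utility; the 1GB limit stands in for whatever hard limit the test uses.

    import subprocess
    import time

    run = lambda c: subprocess.run(c, shell=True, check=True,
                                   capture_output=True, text=True).stdout

    # Enable quota and cap the volume root.
    run("gluster volume quota testvol enable")
    run("gluster volume quota testvol limit-usage / 1GB")

    # ... write data on the mount until the hard limit is reached ...
    before = run("arequal-checksum -p /mnt/testvol")

    # Expand the volume and rebalance it.
    run("gluster volume add-brick testvol server4:/bricks/newbrick")
    run("gluster volume rebalance testvol start")
    while "completed" not in run("gluster volume rebalance testvol status"):
        time.sleep(10)

    # The data, and therefore the arequal checksum, must be unchanged.
    assert before == run("arequal-checksum -p /mnt/testvol")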
Steps-
1. Create and start volume.
2. Create some special files on mount point.
3. Once it is complete, start some IO.
4. Add brick into the volume and start rebalance.
5. All IO should be successful.
Failing on CentOS-CI due to: https://github.com/gluster/glusterfs/issues/1461
Change-Id: If91886afb3f44d5ede09dfc84e966f66c89ff709
Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
Steps-
1. Create Volume of type distribute
2. Set Quota limit on subdirectory
3. Do some IO to reach the Hard limit
4. After IO ends, compute arequal checksum
5. Add bricks to the volume.
6. Start rebalance
7. After rebalance is completed, check arequal checksum
Change-Id: I0a431ffb5d1c957e8d11817dd8142d9551323a65
Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>