This test case checks whether a directory with a null gfid gets the
gfids assigned on all the subvols of a dist-rep volume when a lookup
comes on that directory from the mount point.
Change-Id: Ie68cd0e8b293e9380532e2ccda3d53659854de9b
Signed-off-by: karthik-us <ksubrahm@redhat.com>
|
This test case validates snapshot scheduler behaviour
when we enable/disable the scheduler.
Change-Id: Ia6f01a9853aaceb05155bfc92cccba686d320e43
Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
|
Change-Id: I14807c51fb534e5b729da6de69eb062601e80b42
Signed-off-by: Manisha Saini <msaini@redhat.com>
|
Change-Id: I29eefb9ba5bbe46ba79267b85fb8814a14d10b00
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
|
1. Form a trusted storage pool of 2 nodes
2. Create a distributed volume with 2 bricks
3. Start the volume
4. Mount the volume
5. Add some data files on the mount point
6. Start rebalance with force
7. Stop glusterd on the 2nd node
8. Check the rebalance status; it should not hang
9. Issue a volume related command
Change-Id: Ie3e809e5fe24590eec070607ee99417d0bea0aa0
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
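
A minimal sketch of the steps above as Python over the gluster CLI, assuming a volume named testvol and a second node reachable as node2 over passwordless ssh; all names are illustrative, not taken from the test:

    import subprocess

    def gluster(*args):
        # Run a gluster CLI command; --mode=script disables interactive prompts.
        return subprocess.run(["gluster", "--mode=script", *args],
                              capture_output=True, text=True)

    gluster("volume", "rebalance", "testvol", "start", "force")
    # Stop glusterd on the second node while the rebalance is running.
    subprocess.run(["ssh", "node2", "systemctl", "stop", "glusterd"])
    # The status query should return promptly instead of hanging.
    print(gluster("volume", "rebalance", "testvol", "status").stdout)
    # Other volume related commands should also keep responding.
    print(gluster("volume", "info", "testvol").stdout)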
|
When the IOs are done with server side heal disabled,
they should not hang.
The ec_check_heal_comp function will fail because of
bug 1593224 - client side heal is not removing the dirty
flag for some of the files.
While this bug has been raised and is being investigated by
dev, this patch is doing its job of testing the
target functionality.
RHG3-11097
Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
Signed-off-by: Ashish Pandey <aspandey@redhat.com>
Change-Id: I841285c9b1a747f5800ec8cdd29a099e5fcc08c5
Signed-off-by: Ashish Pandey <aspandey@redhat.com>
|
This test case validates USS behaviour when we
enable USS on the volume while a brick is down.
Change-Id: I9be021135c1f038a0c6949ce2484b47cd8634c1e
Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
|
Change-Id: Ica3d1175ee5d2c6a45e7b7d6513885ee2b84d960
Signed-off-by: Bala Konda Reddy Mekala <bmekala@redhat.com>
|
Steps followed are:
1. Create and start a volume
2. Set cluster.server-quorum-type as server
3. Set cluster.server-quorum-ratio as 95%
4. Bring down glusterd on half of the nodes
5. Confirm that quorum is not met by checking whether the bricks are down
6. Perform an add-brick operation, which should fail
7. Check whether the added brick is part of the volume
Change-Id: I93e3676273bbdddad4d4920c46640e60c7875964
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
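
A minimal sketch of the quorum check above, assuming a volume named testvol and a spare brick node1:/bricks/new; the names are illustrative:

    import subprocess

    def gluster(*args):
        return subprocess.run(["gluster", *args], capture_output=True, text=True)

    gluster("volume", "set", "testvol", "cluster.server-quorum-type", "server")
    # server-quorum-ratio is a cluster-wide option, so it is set on "all".
    gluster("volume", "set", "all", "cluster.server-quorum-ratio", "95%")
    # ... bring down glusterd on half of the nodes here ...
    ret = gluster("volume", "add-brick", "testvol", "node1:/bricks/new")
    assert ret.returncode != 0, "add-brick must fail when quorum is not met"
    # The rejected brick must not appear in the volume info output.
    assert "/bricks/new" not in gluster("volume", "info", "testvol").stdout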
|
- Enable Quota
- Set quota limit of 1 GB on /
- Create 10 directories inside volume
- Set quota limit of 100 MB on directories
- Fill data inside the directories till quota limit is reached
- Validate the size fields using quota list
Change-Id: I917da8cdf0d78afd6eeee22b6cf6a4d580ac0c9f
Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
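
A minimal sketch of the quota setup, assuming a volume named testvol whose mount already contains directories d0..d9; the names and sizes mirror the entry but are illustrative:

    import subprocess

    def gluster(*args):
        return subprocess.run(["gluster", *args], capture_output=True, text=True)

    gluster("volume", "quota", "testvol", "enable")
    gluster("volume", "quota", "testvol", "limit-usage", "/", "1GB")
    for i in range(10):
        gluster("volume", "quota", "testvol", "limit-usage", "/d%d" % i, "100MB")
    # After filling the directories up to their limits, check the size fields.
    print(gluster("volume", "quota", "testvol", "list").stdout)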
|
Change-Id: Iaaa78c071bd7ee3ad3ed222957e71aec61f80045
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
|
subdir
Change-Id: I8c71470a67fef17d54d5fdfbcf0d36eb156c07dd
Signed-off-by: ubansal <ubansal@redhat.com>
|
Change-Id: I84c4375c38ef7322e65f113db6c6229620c57214
Signed-off-by: ubansal <ubansal@redhat.com>
|
-> Set global options and other volume specific options on the volume
-> gluster volume set VOL nfs.rpc-auth-allow 1.1.1.1
-> gluster volume set VOL nfs.addr-namelookup on
-> gluster volume set VOL cluster.server-quorum-type server
-> gluster volume set VOL network.ping-timeout 20
-> gluster volume set VOL nfs.port 2049
-> gluster volume set VOL performance.nfs.write-behind on
-> Peer probe for a new node
Change-Id: Ifc06159b436e74ae4865ffcbe877b84307d517fd
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
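
A minimal sketch of the option changes, assuming the volume is literally named VOL as in the entry and the new peer is reachable as newnode (an illustrative name):

    import subprocess

    def gluster(*args):
        return subprocess.run(["gluster", *args], capture_output=True, text=True)

    options = {
        "nfs.rpc-auth-allow": "1.1.1.1",
        "nfs.addr-namelookup": "on",
        "cluster.server-quorum-type": "server",
        "network.ping-timeout": "20",
        "nfs.port": "2049",
        "performance.nfs.write-behind": "on",
    }
    for key, value in options.items():
        gluster("volume", "set", "VOL", key, value)
    # Finally probe a new node into the pool.
    gluster("peer", "probe", "newnode")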
|
Change-Id: I5dd80e8b1ec8a0e3ab7f565c478be368c2e7c73d
Signed-off-by: ubansal <ubansal@redhat.com>
|
Test quota limit-usage by setting limits of various values,
big, small and decimal, e.g. 1GB, 10GB, 2.5GB, etc.,
and validate the limits by creating more data than the
hard limits (after reaching the hard limit, data creation
should stop).
Addressed review comments.
Change-Id: If2801cf13ea22c253b22ecb41fc07f2f1705a6d7
Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
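
A minimal sketch of the hard-limit validation, assuming a volume testvol mounted at /mnt/testvol with a 2.5GB limit set on /; names and sizes are illustrative:

    import subprocess

    subprocess.run(["gluster", "volume", "quota", "testvol",
                    "limit-usage", "/", "2.5GB"])
    # Writing ~3GB should start failing once the hard limit is reached,
    # so dd is expected to exit non-zero with a quota-exceeded style error.
    ret = subprocess.run(["dd", "if=/dev/zero", "of=/mnt/testvol/bigfile",
                          "bs=1M", "count=3072"])
    assert ret.returncode != 0, "writes beyond the hard limit should fail"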
|
-> Create a volume
-> Mount the volume
-> Set 'read-only on' on the volume
-> Perform some I/Os on the mount point
-> Set 'read-only off' on the volume
-> Perform some I/Os on the mount point
Change-Id: Iab980b1fd51edd764ef38b329275d72f875bf3c0
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
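
A minimal sketch of the read-only toggle, assuming a volume testvol mounted at /mnt/testvol and that the test flips the features.read-only option (an assumption; names are illustrative):

    import subprocess

    def gluster(*args):
        return subprocess.run(["gluster", *args], capture_output=True, text=True)

    def touch(path):
        # A small stand-in for "perform some I/O on the mount point".
        return subprocess.run(["touch", path])

    gluster("volume", "set", "testvol", "features.read-only", "on")
    # Once the new client graph is in effect, writes should be refused.
    print(touch("/mnt/testvol/f1").returncode)
    gluster("volume", "set", "testvol", "features.read-only", "off")
    # Writes should succeed again after the option is switched off.
    print(touch("/mnt/testvol/f2").returncode)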
|
Rebalance should fail on a pure distribute volume when glusterd is down
on one of the nodes.
Change-Id: I5a871a7783b434ef61f0f1cf4b262db9f5148af6
Signed-off-by: Prasad Desala <tdesala@redhat.com>
|
-> Create a volume
-> Set the quorum type
-> Set the quorum ratio to 95%
-> Start the volume
-> Stop glusterd on one node
-> Now quorum is not met
-> Check whether all bricks went offline
-> Perform a replace-brick operation
-> Start glusterd on the same node which was stopped
-> Check whether all bricks are back online
-> Verify in volume info that the old brick was not replaced with the new brick
Change-Id: Iab84df9449feeaba66ff0df2d0acbddb6b4e7591
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
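
A minimal sketch of the replace-brick attempt, assuming a volume testvol with an existing brick node1:/bricks/b1 and a replacement node1:/bricks/b1_new; names are illustrative:

    import subprocess

    def gluster(*args):
        # --mode=script disables interactive prompts.
        return subprocess.run(["gluster", "--mode=script", *args],
                              capture_output=True, text=True)

    gluster("volume", "set", "testvol", "cluster.server-quorum-type", "server")
    gluster("volume", "set", "all", "cluster.server-quorum-ratio", "95%")
    # ... stop glusterd on one node so that quorum is lost ...
    ret = gluster("volume", "replace-brick", "testvol",
                  "node1:/bricks/b1", "node1:/bricks/b1_new", "commit", "force")
    assert ret.returncode != 0, "replace-brick must fail while quorum is not met"
    # After glusterd is started again, volume info must still show the old brick.
    assert "node1:/bricks/b1_new" not in gluster("volume", "info", "testvol").stdout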
|
While remove-brick operation is in-progress on a volume, glusterd
should not allow add-brick operation on the same volume.
Change-Id: Iddcbbdb1a5a444ea88995f176c0a18df932dea41
Signed-off-by: Prasad Desala <tdesala@redhat.com>
|
If a rebalance is in-progress on a volume, glusterd should fail a
remove-brick operation on the same volume.
Change-Id: I2f15023870f342c98186b1860b960cb3c04c0572
Signed-off-by: Prasad Desala <tdesala@redhat.com>
|
the cluster
In this test case, we are setting some volume options when one of the
nodes in the cluster is down and then, after the node is up, checking whether
the volume info is synced. We are also trying to peer probe a new
node while at the same time bringing down glusterd on some node in the cluster.
After the node is up, we check whether the peer status has correct information.
Steps followed are:
1. Create a cluster
2. Create a 2x3 distributed-replicated volume
3. Start the volume
4. From N1 issue 'gluster volume set <vol-name> stat-prefetch on'
5. At the same time as Step 4, bring down glusterd on N2
6. Start glusterd on N2
7. Verify that the volume info is synced
8. From N1, issue 'gluster peer probe <new-host>'
9. At the same time as Step 8, bring down glusterd on N2
10. Start glusterd on N2
11. Check that the peer status has correct information across the cluster
Change-Id: Ib95268a3fe11cfbc5c76aa090658133ecc8a0517
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
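
A minimal sketch of steps 4-7, assuming nodes reachable as N1 and N2 over passwordless ssh and a volume testvol; concurrency is simplified to sequential calls:

    import subprocess

    def on(node, *cmd):
        # Run a command on a node over ssh.
        return subprocess.run(["ssh", node] + list(cmd),
                              capture_output=True, text=True)

    # Steps 4-5: set an option from N1 while glusterd on N2 goes down.
    on("N1", "gluster", "volume", "set", "testvol", "stat-prefetch", "on")
    on("N2", "systemctl", "stop", "glusterd")
    # Step 6: bring glusterd on N2 back up.
    on("N2", "systemctl", "start", "glusterd")
    # Step 7: volume info should show the same option value on both nodes.
    print(on("N1", "gluster", "volume", "info", "testvol").stdout)
    print(on("N2", "gluster", "volume", "info", "testvol").stdout)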
|
-> Detach the node from the peer
-> Check for any error messages related to peer detach
   in the glusterd log file
-> No errors should be present in the glusterd log file
Change-Id: I481df5b15528fb6fd77cd1372110d7d23dd5cdef
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
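
A minimal sketch of the detach and log check, assuming the detached node is reachable as node2 and that glusterd logs to the default /var/log/glusterfs/glusterd.log path (an assumption):

    import subprocess

    ret = subprocess.run(["gluster", "--mode=script", "peer", "detach", "node2"],
                         capture_output=True, text=True)
    print(ret.stdout)
    # Scan the glusterd log for error-level messages mentioning the detach.
    with open("/var/log/glusterfs/glusterd.log") as log:
        errors = [line for line in log
                  if " E " in line and "detach" in line.lower()]
    assert not errors, "no peer-detach errors expected in glusterd.log"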
|
This test tries to create and validate EC volume
with various combinations of input parameters.
RHG3-12926
Change-Id: Icfc15e069d04475ca65b4d7c1dd260434f104cdb
Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
|
Change-Id: Icd5c423ad1b2fee770680cc66d9919c930c4780f
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
|
Steps followed are:
1. Create and start a volume
2. Set cluster.server-quorum-type as server
3. Set cluster.server-quorum-ratio as 95%
4. Bring down glusterd on half of the nodes
5. Confirm that quorum is not met by checking whether the bricks are down
6. Perform a remove-brick operation, which should fail
Change-Id: I69525651727ec92dce2f346ad706ab0943490a2d
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
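
A minimal sketch of the remove-brick attempt under lost quorum, assuming a volume testvol with a brick node1:/bricks/b1; names are illustrative:

    import subprocess

    def gluster(*args):
        return subprocess.run(["gluster", "--mode=script", *args],
                              capture_output=True, text=True)

    gluster("volume", "set", "testvol", "cluster.server-quorum-type", "server")
    gluster("volume", "set", "all", "cluster.server-quorum-ratio", "95%")
    # ... stop glusterd on half of the nodes so that quorum is lost ...
    ret = gluster("volume", "remove-brick", "testvol", "node1:/bricks/b1", "start")
    assert ret.returncode != 0, "remove-brick must fail while quorum is not met"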
|
Self heal should heal the files even if the quota object limit
is exceeded on a directory.
Change-Id: Icc63b1794f82aef708832d0b207ded5f13391b85
Signed-off-by: karthik-us <ksubrahm@redhat.com>
|
Change-Id: I94b67fe9a810f020fef36ec9ab00ce7182c9e5c0
Signed-off-by: Manisha Saini <msaini@redhat.com>
|
sub-directory level using both IP and hostname of clients.
Change-Id: I3822b2cfd0fbadcdcbc679f046b299d84e741f19
|
Change-Id: I8770aa4fdfd4bf94ecdda3e80a79c6717e2974dd
|
Activated snaps should get listed in .snap directory while deactivated
snap should not.
Change-Id: I04a61c49dcbc9510d60cc8ee6b1364742271bbf0
Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
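
A minimal sketch of the listing check, assuming a volume testvol mounted at /mnt/testvol, USS exposing the default .snaps directory, and snapshots being created deactivated by default; all of these are assumptions and the names are illustrative:

    import os
    import subprocess

    def gluster(*args):
        # --mode=script disables interactive prompts.
        return subprocess.run(["gluster", "--mode=script", *args],
                              capture_output=True, text=True)

    gluster("snapshot", "create", "snap_act", "testvol", "no-timestamp")
    gluster("snapshot", "create", "snap_deact", "testvol", "no-timestamp")
    # Activate only one of the two snapshots.
    gluster("snapshot", "activate", "snap_act")
    gluster("volume", "set", "testvol", "features.uss", "enable")

    listed = os.listdir("/mnt/testvol/.snaps")
    assert "snap_act" in listed and "snap_deact" not in listed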
|
Test case objective:
Restarting glusterd should not restart a completed
rebalance operation.
Change-Id: I52b808d91d461048044ac742185ddf4696bf94a3
Signed-off-by: Prasad Desala <tdesala@redhat.com>
|
In this test case we will set the auth.allow option with more than 4096
characters and restart glusterd. glusterd should restart successfully.
Steps followed:
1. Create and start a volume
2. Set auth.allow with <4096 characters
3. Restart glusterd; it should succeed
4. Set auth.allow with >4096 characters
5. Restart glusterd; it should succeed
6. Confirm that glusterd is running on the restarted node
Change-Id: I7a5a8e49a798238bd88e5da54a8f4857c039ca07
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
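
A minimal sketch of the long auth.allow value, assuming a volume named testvol; the address list is synthetic padding whose only purpose is to exceed 4096 characters:

    import subprocess

    # Build an auth.allow value longer than 4096 characters from dummy addresses.
    long_allow = ",".join("192.168.%d.%d" % (i // 256, i % 256) for i in range(400))
    assert len(long_allow) > 4096

    subprocess.run(["gluster", "volume", "set", "testvol", "auth.allow", long_allow])
    ret = subprocess.run(["systemctl", "restart", "glusterd"])
    assert ret.returncode == 0, "glusterd should restart cleanly"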
|
1. Create a Dist volume on Node 1
2. Bring down a brick on Node 1
3. Peer probe N2 from N1
4. Add an identical brick on the newly added node
5. Check the volume status
Change-Id: I17c4769df6e4ec2f11b7d948ca48a006cf301073
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
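
A minimal sketch of the probe/add-brick flow, assuming hosts N1 and N2 with bricks under /bricks; how the brick is brought down is left as a placeholder, and all names are illustrative:

    import subprocess

    def gluster(*args):
        return subprocess.run(["gluster", *args], capture_output=True, text=True)

    gluster("volume", "create", "testvol", "N1:/bricks/b1", "force")
    gluster("volume", "start", "testvol")
    # ... bring down the brick on N1 here ...
    gluster("peer", "probe", "N2")
    gluster("volume", "add-brick", "testvol", "N2:/bricks/b1", "force")
    # Volume status should list both bricks and reflect the downed one.
    print(gluster("volume", "status", "testvol").stdout)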
|
Create 10 clones of snapshot and verify gluster volume list
information
Change-Id: Ibd813680d1890e239deaf415469f7f4dccfa6867
Signed-off-by: srivickynesh <sselvan@redhat.com>
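
A minimal sketch of the clone loop, assuming a volume testvol and a snapshot snap1 that is activated before cloning; names are illustrative:

    import subprocess

    def gluster(*args):
        # --mode=script disables interactive prompts.
        return subprocess.run(["gluster", "--mode=script", *args],
                              capture_output=True, text=True)

    gluster("snapshot", "create", "snap1", "testvol", "no-timestamp")
    gluster("snapshot", "activate", "snap1")
    for i in range(10):
        gluster("snapshot", "clone", "clone%d" % i, "snap1")
    # Every clone is a regular volume, so each must appear in the volume list.
    volumes = gluster("volume", "list").stdout
    assert all("clone%d" % i in volumes for i in range(10))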
|
Change-Id: I2ba674b8ea97964040f2e7d47a169c1e41808116
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
|
-> Create a volume
-> Mount the volume on 2 clients
-> Run I/Os on the mount point
-> While I/Os are in progress, perform 'gluster volume status fd' repeatedly
-> List all files and dirs
Change-Id: I2d979dd79fa37ad270057bd87d290c84569c4a3d
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
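
A minimal sketch of the status polling, assuming a volume testvol and a client mount at /mnt/c1 while I/O runs on the clients; names and the polling interval are illustrative:

    import subprocess
    import time

    def gluster(*args):
        return subprocess.run(["gluster", *args], capture_output=True, text=True)

    # Poll the fd status repeatedly while the clients keep the volume busy.
    for _ in range(30):
        out = gluster("volume", "status", "testvol", "fd")
        assert out.returncode == 0, "status fd should keep succeeding under I/O"
        time.sleep(2)
    # Afterwards, list everything on the mount.
    print(subprocess.run(["ls", "-R", "/mnt/c1"],
                         capture_output=True, text=True).stdout)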
|
Change-Id: I40f41c03e5ea8130a7374579b249bdd113b4a842
|
Change-Id: I624e041271d3b776e243aebfab43e081ccfd7946
Signed-off-by: Manisha Saini <msaini@redhat.com>
|
Change-Id: Iaad1dcb4339aa752a45e39d7bca338d1fdc87da0
|
This test case enables quota on a directory of the volume, renames the
directory, and checks whether the quota list shows the renamed directory.
Incorporated the changes made to quota_ops and quota_libs.
Change-Id: I7166a9810614c966a4a656b5e8976df55b102c01
Signed-off-by: venkata edara <redara@redhat.com>
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
Signed-off-by: Vinayak Papnoi <vpapnoi@redhat.com>
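
A minimal sketch of the rename check, assuming a volume testvol mounted at /mnt/testvol with a directory dir1 renamed to dir2; names are illustrative:

    import os
    import subprocess

    def gluster(*args):
        return subprocess.run(["gluster", *args], capture_output=True, text=True)

    gluster("volume", "quota", "testvol", "enable")
    gluster("volume", "quota", "testvol", "limit-usage", "/dir1", "1GB")
    os.rename("/mnt/testvol/dir1", "/mnt/testvol/dir2")
    # The quota list is path based, so it should now report /dir2.
    print(gluster("volume", "quota", "testvol", "list").stdout)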
|
-> Create a distributed-replicated volume
-> Add 6 bricks to the volume
-> Mount the volume
-> Perform some I/Os on the mount point
-> Unmount the volume
-> Stop and delete the volume
-> Create another volume using the bricks of the deleted volume
Change-Id: I263d2f0a359ccb0409dba620363a39d92ea8d2b9
Signed-off-by: Rajesh Madaka <rmadaka@redhat.com>
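
A minimal sketch of reusing the bricks of a deleted volume, assuming hosts n1 and n2 with bricks under /bricks; names are illustrative, and depending on the gluster version the leftover volume-id xattrs on the bricks may need to be cleared before the second create succeeds:

    import subprocess

    def gluster(*args):
        # --mode=script disables interactive prompts (stop/delete/replica 2).
        return subprocess.run(["gluster", "--mode=script", *args],
                              capture_output=True, text=True)

    bricks = ["n1:/bricks/b1", "n2:/bricks/b2", "n1:/bricks/b3",
              "n2:/bricks/b4", "n1:/bricks/b5", "n2:/bricks/b6"]
    gluster("volume", "create", "oldvol", "replica", "2", *bricks, "force")
    gluster("volume", "start", "oldvol")
    # ... mount, run some I/O, unmount ...
    gluster("volume", "stop", "oldvol")
    gluster("volume", "delete", "oldvol")
    # Reuse the same bricks for a new volume.
    gluster("volume", "create", "newvol", "replica", "2", *bricks, "force")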
|
Change-Id: Iff0e832ebcad14968328c7d7575d120ba8152252
Signed-off-by: Vitalii Koriakov <vkoriako@redhat.com>
|
This testcase verifies rebalance behaviour while IO is in-progress from
multiple clients
Change-Id: Id87472a8194d31e5de181827cfcf30ccacc346c0
Signed-off-by: Prasad Desala <tdesala@redhat.com>
|
Change-Id: I3ad5486c5b507fa82ac2f4c0b7c0bdadfc523220
|
Test Cases in this module tests the snapshot information after
glusterd is restarted.
Change-Id: I7c5e761d8a8cd261841d064dbd94093e1c5b6edd
Signed-off-by: srivickynesh <sselvan@redhat.com>
|
Change-Id: I3fbb764925fb19b3e4808711eadbf51090ed98b3
Signed-off-by: Manisha Saini <msaini@redhat.com>
|
Self heal should heal the files even if the quota limit on a
directory is reached.
Change-Id: I336b78eb55cd5c7ec6b3236f95ce9f0cb8423667
Signed-off-by: karthik-us <ksubrahm@redhat.com>
|
Deletion of a file on the source bricks must be reflected on the sink brick
after bringing it up (a conservative merge must NOT happen) when quota is enabled.
Change-Id: I8c3f55ddd1eee9a211674c8759b94aa801f6f174
|
In this test case, we check gluster volume status and gluster
volume status --xml from a node which is part of the cluster but does not
host any bricks of the volume.
Steps followed are:
1. Create a two node cluster
2. Create a distributed volume with one brick (assume the brick belongs to N1)
3. From the node which does not have any bricks, i.e. N2, check gluster v status,
   which should fail saying the volume is not started
4. From N2, check gluster v status --xml. It should fail because the volume
   is not started yet
5. Start the volume
6. From N2, check gluster v status; this should succeed
7. From N2, check gluster v status --xml; this should succeed
Change-Id: I1a230b82c0628c66c16f25f89dd4e6d1d0b3f443
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
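
A minimal sketch of the checks run from N2, assuming a volume named testvol; names are illustrative:

    import subprocess

    def status(*extra):
        return subprocess.run(["gluster", "volume", "status", "testvol", *extra],
                              capture_output=True, text=True)

    # Before the volume is started, both forms should fail from N2.
    assert status().returncode != 0
    assert status("--xml").returncode != 0

    subprocess.run(["gluster", "volume", "start", "testvol"])

    # After the start, both forms should succeed from N2.
    assert status().returncode == 0
    assert status("--xml").returncode == 0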