| Commit message | Author | Age | Files | Lines |
|
Test to verify whether the lock is being granted to two different
clients at the same time.
- Take lock from client 1 => lock is acquired
- Try taking lock from client 2
- Release lock from client 1
- Take lock from client 2
- Again try taking lock from client 1
Also verifies the behaviour with the eager-lock and other-eager-lock
options set to on and off.
Change-Id: Ie839f893f7a4f9b2c6fc9375cdf9ee8a27fad13b
Signed-off-by: Pranav <prprakas@redhat.com>
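A minimal client-side sketch of the acquire/contend/release sequence
above, assuming the volume is FUSE-mounted at two hypothetical paths
(the test itself drives gluster's internal inodelk via eager-lock;
flock here is only an illustration of the pattern):

import fcntl

# Hypothetical FUSE mount points of the same volume on two clients
f1 = open("/mnt/client1/lockfile", "w")
f2 = open("/mnt/client2/lockfile", "w")

fcntl.flock(f1, fcntl.LOCK_EX)                      # client 1 takes the lock
try:
    fcntl.flock(f2, fcntl.LOCK_EX | fcntl.LOCK_NB)  # client 2 must not get it
except BlockingIOError:
    print("lock correctly denied to client 2")
fcntl.flock(f1, fcntl.LOCK_UN)                      # client 1 releases
fcntl.flock(f2, fcntl.LOCK_EX | fcntl.LOCK_NB)      # client 2 now acquires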
|
Steps:
1. Create a disperse volume and start it.
2. Set the eager lock option.
3. Mount the volume and create a file.
4. Check the profile info of the volume for the inodelk count.
5. Check the xattrs of the file for the dirty bit.
6. Reset the eager lock option and check the attributes again.
Change-Id: I0ef1a0e89c1bc202e5df4022c6d98ad0de0c1a68
Signed-off-by: Sheetal <spamecha@redhat.com>
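Steps 4 and 5 sketched with Glusto's g.run helper; the server, volume
name and backend brick path are hypothetical:

from glusto.core import Glusto as g

# Step 4: profiling must be started before 'info' lists per-brick
# fop counts, including INODELK
g.run("server1.example.com", "gluster volume profile testvol start")
g.run("server1.example.com", "gluster volume profile testvol info")

# Step 5: the EC dirty bit is the trusted.ec.dirty xattr on the
# backend brick copy of the file (an all-zero value means clean)
g.run("server1.example.com",
      "getfattr -n trusted.ec.dirty -e hex /bricks/brick0/testvol/file1")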
|
Added a check to verify the behavior after bringing down the
smallest brick. The available volume size should not be
greater than the initial volume size.
Test skipped due to bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1883429
Change-Id: I00c0310210f6fe218cedd23e055dfaec3632ec8d
Signed-off-by: Pranav <prprakas@redhat.com>
|
Reason: The cd will change the working directory to root,
and renames and softlink creations for subsequent files will
fail, as seen in the glusto logs.
Change-Id: I174ac11007dc301ba6ec8ccddaeb919a181b1c30
Signed-off-by: srijan-sivakumar <ssivakum@redhat.com>
|
Extending the existing validation by adding
node restart as a method to bring back
offline bricks, along with the existing volume
start approach.
Change-Id: I1291b7d9b4a3c299859175b4cdcd2952339c48a4
Signed-off-by: Pranav <prprakas@redhat.com>
|
Test Steps:
1. Create a pure EC volume (say 1x(4+2))
2. Mount the volume on two clients
3. Create some files and dirs from both mounts
4. Add bricks, in this case (4+2), i.e. 6 bricks
5. Create a new dir (common_dir) and in that directory create a distinct
   directory (using the hostname as dirname) for each client and pump IO
   from the clients (dd)
6. While IO is in progress, replace any of the bricks
7. Check for any errors collected after step 6
Change-Id: I3125fc5906b5d5e0bc40477e1ed88825f53fa758
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
|
Problem: Test fails with the below traceback
when run with python3 as the default:
Traceback (most recent call last):
  File "<string>", line 1, in <module>
TypeError: a bytes-like object is required, not 'str'
Solution:
Added ''.encode(), which fixes the issue when run
with both python2 and python3.
Also added a check for a core file on the client node.
Change-Id: I8f800f5fad97c3b7591db79ea51203e5293a1f69
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
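The failure and fix in miniature (path illustrative): under python3,
writing a str to a raw file descriptor raises exactly this TypeError,
while .encode() works on both interpreter versions:

import os

fd = os.open("/tmp/example", os.O_CREAT | os.O_WRONLY)
# os.write(fd, "some data")         # python3: TypeError: a bytes-like
#                                   # object is required, not 'str'
os.write(fd, "some data".encode())  # works on python2 and python3
os.close(fd)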
|
- Remove the decimal before passing to the `head` command
- Break up the sparse file into chunks of ~half the brick size
- The whole test has to be skipped due to BZ #1339144
Change-Id: I7a9ae25798b442c74248954023dd821c3442f8f9
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
|
Problem: Creating a third mount object works for the glusterfs
protocol, but future runs against nfs/cifs might hit
complications and the test might fail.
Solution: Skip the test unless three clients are provided.
Also removes redundant logging and includes minor fixes.
Change-Id: Ie657975a46b6989cb9f057f5cc337333bbf1010d
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
|
Test steps:
1. Create a volume, start and mount it on a client
2. Bring down redundant bricks in the subvol
3. Create a file on the volume using "touch"
4. Truncate the file using "O_TRUNC"
5. Bring the brick online
6. Write data on the file and wait for heal completion
7. Check for crashes and coredumps
Change-Id: Ie02a56ab5180f6a88e4499c8cf6e5fe5019e8df1
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
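Step 4 in miniature, with a hypothetical mount path; opening with
O_TRUNC truncates the existing file to zero length:

import os

fd = os.open("/mnt/testvol/file1", os.O_WRONLY | os.O_TRUNC)
os.close(fd)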
|
Test Steps:
1. Create a volume and mount it on 3 clients.
2. Bring two bricks offline in each subvol.
3. On client1: under dir1, create files f{1..10000}, run in background
4. On client2: under /, touch x{1..1000}
5. On client3: start creating x{1001..10000}
6. Bring the offline bricks back online (all the bricks which
   were down, 2 in each of the two subvols)
7. While the IO on client1 and client3 is still running, on client2 move
   all the x* files into dir1
8. Perform a lookup from client3
Change-Id: Ib72648af783535557e20cea7e64ea68036b23121
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
|
Steps:
1. Create an EC volume and mount it
2. Run different types of IO
3. Take an arequal of the mountpoint
4. Bring down redundant bricks
5. Take an arequal of the mountpoint
6. Bring down another set of redundant bricks
7. Take an arequal of the mountpoint
Change-Id: If253cdfe462c6671488e858871ec904fbb2f9ead
Signed-off-by: ubansal <ubansal@redhat.com>
|
- Tests to check that EIO changes to EDQUOT errors on reaching the quota
- Scenarios covered are:
  - Redundant bricks are down in a volume
  - Multiple IO sessions running from several clients
  - A single IO session from one client
Change-Id: Ie15244231dae7fe2e61cc6df0d7f35d2231d9bdf
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
|
Test Steps:
- Create, start and mount an EC volume on two clients
- Create multiple files and directories, including all file types, in
  one directory from client 1
- Take an arequal checksum of the above data
- Create another folder and pump different fops from client 2
- Fail and bring up redundant bricks in a cyclic fashion in all of
  the subvols, maintaining a minimum delay between operations
- In every cycle, create a new dir while a brick is down and wait for heal
- Validate that heal info on the volume errors out instantly while a
  brick is down
- Validate the arequal on bringing the brick offline
Change-Id: Ied5e0787eef786e5af7ea70191f5521b9d5e34f6
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
|
Change-Id: Ib39894e9f44c41f5539377c5c124ad45a786cbb3
Signed-off-by: ubansal <ubansal@redhat.com>
|
On setting the disperse quorum count to 5, at least 5 bricks
should be online for writes on the volume to succeed.
Steps:
1. Set the disperse quorum count to 5
2. Write and read IO
3. Bring down the 1st brick
4. Writes and reads are successful
5. Bring down the 2nd brick
6. Writes should fail and reads be successful
7. Write and read again
8. Writes should fail and reads be successful
9. Rebalance should fail as quorum is not met
10. Reset the volume
11. Write and read IO and validate it
12. Bring down redundant bricks
13. Write and read IO and validate it
Change-Id: Ib825783f01a394918c9016808cc62f6530fe8c67
Signed-off-by: ubansal <ubansal@redhat.com>
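Steps 1 and 10 sketched with g.run; the server and volume names are
hypothetical, and this assumes a glusterfs build that supports the
disperse.quorum-count option:

from glusto.core import Glusto as g

# Step 1: require at least 5 bricks of the 4+2 volume for writes
g.run("server1.example.com",
      "gluster volume set testvol disperse.quorum-count 5")

# Step 10: drop back to the default quorum behaviour
g.run("server1.example.com",
      "gluster volume reset testvol disperse.quorum-count")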
|
On setting the disperse quorum count to 6, all bricks should
be online for writes on the volume to succeed.
Steps:
1. Set the disperse quorum count to 6
2. Write and read IO
3. Bring down 1 brick
4. Writes should fail and reads be successful
5. Write and read again
6. Writes should fail and reads be successful
7. Rebalance should fail as quorum is not met
8. Reset the volume
9. Write and read IO and validate it
10. Bring down redundant bricks
11. Write and read IO and validate it
Change-Id: I93d418fd75d75fa3563d23f52fdd5aed71cfe540
Signed-off-by: ubansal <ubansal@redhat.com>
|
Test Steps:
1. Create a volume and mount it on 3 clients: c1 (client1),
   c2 (client2) and c3 (client3)
2. On c1, mkdir /c1/dir
3. On c2, create 4000 files on the mount point, i.e. "/"
4. After step 3, create the next 4000 files on c2 on the mount point,
   i.e. "/"
5. On c1, create 10000 files on /dir/
6. On c3, start moving the 4000 files created in step 3 from the mount
   point to /dir/
7. On c3, start ls in a loop for 20 iterations
Note: Upload scripts are used in setUpClass, as one more test is to be
added in the same file.
Change-Id: Ibab74433cbec4d6a4f9b494f257b3e517b8fbfbc
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
|
Description:
This script tests the Disperse (EC) eager-lock default values
and the performance impact on lookups with the eager-lock
and other-eager-lock default values.
Change-Id: Ia083d0d00f99a42865fb6f06eda75ecb18ff474f
Signed-off-by: nchilaka <nchilaka@redhat.com>
|
Testcase Steps:
1. Create an EC volume
2. Set the eager lock option by turning
   on disperse.eager-lock using different inputs:
   - Try non-boolean values (must fail)
   - Try boolean values
Change-Id: Iec875ce9fb4c8f7c68b012ede98bd94b82d04d7e
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
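A sketch of the boolean validation, assuming hypothetical volume and
server names; setting a non-boolean value should make the CLI return a
non-zero exit code:

from glusto.core import Glusto as g

node = "server1.example.com"

# Non-boolean values must be rejected by the option's validation
for value in ("maybe", "10", "!!"):
    ret, _, _ = g.run(node,
        "gluster volume set testvol disperse.eager-lock %s" % value)
    assert ret != 0, "invalid value '%s' was accepted" % value

# Boolean values must be accepted
for value in ("on", "off"):
    ret, _, _ = g.run(node,
        "gluster volume set testvol disperse.eager-lock %s" % value)
    assert ret == 0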
|
- Test is designed to run on EC volumes only
Change-Id: Ice6a77422695ebabbec6b9cfd910e453e5b2c81a
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
|
Steps:
1. Create a volume and mount it
2. Create a directory dir1 and run different types of IO
3. Create a directory dir2
4. Bring down redundant bricks
5. Write IO to directory dir2
6. Create a directory dir3 and run IO (read, write, append)
7. Bring up the bricks
8. Monitor heal
9. Check the data integrity of dir1
Change-Id: I9a7e366084bb46dcfc769b1d98b89b303fc16150
Signed-off-by: ubansal <ubansal@redhat.com>
|
Test Steps:
1. Create an EC volume
2. Mount the volume using FUSE on two different clients
3. Create ~9 files from one of the clients
4. Create ~9 dirs with ~9 files each from the other client
5. Create soft-links and hard-links for file{4..6} and file{7..9}
6. Create soft-links for dir{4..6}
7. Begin renaming the files, in multiple iterations
8. Bring down a brick while renaming the files
9. Bring the brick online after renaming some of the files
10. Wait for the renaming of the files
11. Validate that there is no data loss and files are renamed successfully
Change-Id: I6d98c00ff510cb473978377bb44221908555681e
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
|
Steps:
1. Open a file descriptor while a brick is down
2. Write to the file descriptor after the brick has come up and
   check whether healing is complete
Change-Id: I721cedf4dc6a420f0c153d4232b046f780da201b
Signed-off-by: ubansal <ubansal@redhat.com>
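The file-descriptor pattern in miniature (mount path hypothetical): the
descriptor opened while the brick is down stays valid, and the later
write on it is what exercises the heal path:

import os

fd = os.open("/mnt/testvol/fd_file", os.O_CREAT | os.O_WRONLY)  # brick down
# ... bring the brick back online, wait for heal ...
os.write(fd, "post-heal write".encode())   # write on the same open fd
os.close(fd)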
|
Test Steps:
- Create and mount an EC volume (4+2)
- Start appending to a file from a client
- Bring down one of the bricks (say b1)
- Wait for ~a minute and bring down another brick (say b2)
- After ~a minute, bring up the first brick (b1)
- Check the xattrs 'ec.size' and 'ec.version'
- The xattrs of the online bricks should be the same, which serves as
  the indication for heal
Change-Id: I81a5bad4a91dd891fbbc9d93ae3f76610237789e
Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
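Reading the two xattrs from the backend brick copies, sketched with
hypothetical servers and brick paths; in a healthy state every online
brick reports identical values:

from glusto.core import Glusto as g

# trusted.ec.size / trusted.ec.version are stored on the brick backend
for node, brick in [("server1.example.com", "/bricks/brick0/testvol"),
                    ("server2.example.com", "/bricks/brick1/testvol")]:
    g.run(node, "getfattr -n trusted.ec.size -e hex %s/file1" % brick)
    g.run(node, "getfattr -n trusted.ec.version -e hex %s/file1" % brick)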
|
Test Steps:
1. Create a volume, start and mount it
2. Create directories and files
3. Rename files and change their permissions
4. Create hardlinks and softlinks and run different types of IO
5. Delete all the data
6. Check that no heals are pending
7. Check that all bricks are empty
Change-Id: Ic8f5dad1a44de71688a6b0a2fcfb4a25cef435ba
Signed-off-by: ubansal <ubansal@redhat.com>
|
Test steps:
1. Create a volume, start it and mount it on one client.
2. Enable the metadata-cache (md-cache) options on the volume.
3. Touch a file and create a hardlink for it.
4. Read data from the hardlink.
5. Read data from the actual file.
Change-Id: Ibf4b8757262707fcfb4d09b4b031ff9dea166570
Signed-off-by: Bala Konda Reddy M <bala12352@gmail.com>
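Step 2 sketched via the CLI; the volume and server names are
hypothetical, and this assumes the stock 'metadata-cache' group profile
shipped with glusterfs:

from glusto.core import Glusto as g

# Apply the md-cache related options in one shot via the group profile
g.run("server1.example.com",
      "gluster volume set testvol group metadata-cache")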
|
Change-Id: I3f77dc73044a5bc59a26319c55e8e024e2edf449
Signed-off-by: ubansal <ubansal@redhat.com>
|
Steps:
1. Enable USS, create a snapshot, then list and delete it
2. Create a snapshot with the same name and list it
GitHub issue for the CentOS-CI failure:
https://github.com/gluster/glusterfs/issues/1203
Testcase failing due to:
https://bugzilla.redhat.com/show_bug.cgi?id=1828820
Change-Id: I829e6b340dfb4963355b445259fcb011b62ba057
Signed-off-by: ubansal <ubansal@redhat.com>
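The USS and snapshot CLI calls involved, with hypothetical volume and
snapshot names:

from glusto.core import Glusto as g

node = "server1.example.com"
g.run(node, "gluster volume set testvol features.uss enable")
g.run(node, "gluster snapshot create snap1 testvol no-timestamp")
g.run(node, "gluster snapshot list")
g.run(node, "gluster snapshot delete snap1")
# Re-creating the same name exercises the duplicate-name path
g.run(node, "gluster snapshot create snap1 testvol no-timestamp")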
|
Change-Id: Id94870735b26fbeab2bf448d4f80341c92beb5ba
Signed-off-by: ubansal <ubansal@redhat.com>
|
This test verifies remove-brick operations on a disperse
volume.
Change-Id: If4be3ffc39a8b58e4296d58b288e3843a218c468
Co-authored-by: Sunil Kumar Acharya <sheggodu@redhat.com>
Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
Signed-off-by: Sri Vignesh <sselvan@redhat.com>
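The remove-brick flow sketched with hypothetical bricks; on a disperse
volume the removed set must form a whole EC subvolume:

from glusto.core import Glusto as g

node = "server1.example.com"
# On a 2x(4+2) volume, one full (4+2) subvol (6 bricks) goes at once
bricks = " ".join("server%d:/bricks/b1" % i for i in range(1, 7))

g.run(node, "gluster volume remove-brick testvol %s start" % bricks)
g.run(node, "gluster volume remove-brick testvol %s status" % bricks)
# Commit only after status shows the data migration has completed
g.run(node, "gluster volume remove-brick testvol %s commit" % bricks)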
|
Test Steps:
1. Create a volume, set the volume option
   'diagnostics.client-log-level' to DEBUG and mount the volume on one
   client.
2. Create a directory
3. Validate the number of lookups for the directory creation from the
   log file.
4. Perform a new lookup of the directory
5. No new lookups should have happened on the directory; validate from
   the log file.
6. Bring down one subvol of the volume and repeat steps 4 and 5
7. Bring down one brick from the online bricks and repeat steps 4 and 5
8. Start the volume with force and wait for all processes to be online.
Change-Id: I162766837fd7e61625238a669c4050c2ec9c8a8b
Signed-off-by: Bala Konda Reddy M <bmekala@redhat.com>
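Steps 1 and 3 sketched; the client log path shown is hypothetical (the
name is derived from the mount path):

from glusto.core import Glusto as g

server, client = "server1.example.com", "client1.example.com"
g.run(server,
      "gluster volume set testvol diagnostics.client-log-level DEBUG")

# Count LOOKUP entries for the new directory in the client mount log
ret, count, _ = g.run(client,
    "grep -c 'LOOKUP.*dir1' /var/log/glusterfs/mnt-testvol.log")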
|
Steps:
1. Check replace-brick and data integrity after it
2. Check replace-brick while IO is in progress
Change-Id: Idfc801fde50967924696b2e909633b9ca95ac721
Signed-off-by: ubansal <ubansal@redhat.com>
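The replace-brick invocation, with hypothetical brick paths; on recent
glusterfs releases 'commit force' is the supported mode, after which
self-heal populates the new brick:

from glusto.core import Glusto as g

g.run("server1.example.com",
      "gluster volume replace-brick testvol "
      "server2:/bricks/old_brick server4:/bricks/new_brick commit force")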
|
Problem:
Testcase test_ec_version was failing with the
below traceback:
Traceback (most recent call last):
  File "/usr/lib64/python2.7/logging/__init__.py", line 851, in emit
    msg = self.format(record)
  File "/usr/lib64/python2.7/logging/__init__.py", line 724, in format
    return fmt.format(record)
  File "/usr/lib64/python2.7/logging/__init__.py", line 464, in format
    record.message = record.getMessage()
  File "/usr/lib64/python2.7/logging/__init__.py", line 328, in getMessage
    msg = msg % self.args
TypeError: %d format: a number is required, not str
Logged from file test_ec_version_healing_whenonebrickdown.py, line 233
This was due to the log message on line 233 using a '%d' format
specifier for its string argument.
Solution:
Use '%s' in the log message on line 233, as shown below:
g.log.info('Brick %s is offline successfully', brick_b2_down)
Also renaming the file to make it clearer what the
testcase does.
Change-Id: I626fbe23dfaab0dd6d77c75329664a81a120c638
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
|
Problem:
The current timeout for reboot given in
test_heal_full_node_reboot is about 350 seconds,
which works with most hardware configurations.
However, when the reboot is done on slower systems
that take time to come up, this logic fails, due to
which this testcase and the preceding testcases
fail.
Solution:
Change the timeout for reboot from 350 to 700. This
doesn't affect the testcase's performance on good
hardware configurations, as the timeout is a maximum:
if the node is up before it expires, the wait exits
anyway.
Change-Id: I60d05236e8b08ba7d0fec29657a93f2ae53404d4
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
|
Please refer to the commit message of patch [1].
[1] https://review.gluster.org/#/c/glusto-tests/+/24140/
Change-Id: Ic0b3b1333ac7b1ae02f701943d49510e6d46c259
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
|
Change-Id: I33e75fe773ee26a2d205f5ebd29198968bfe6c59
Signed-off-by: ubansal <ubansal@redhat.com>
|
Removing script_local_path, as both script_local_path and
cls.script_upload_path hold the same value, which makes
each script slower than necessary. This will help decrease
the execution time of the test suite.
PoC:
$ cat test.py
a = ("/usr/share/glustolibs/io/scripts/"
     "file_dir_ops.py")
b = ("/usr/share/glustolibs/io/scripts/"
     "file_dir_ops.py")
$ time python test.py
real 0m0.063s
user 0m0.039s
sys  0m0.019s
$ cat test.py
a = ("/usr/share/glustolibs/io/scripts/"
     "file_dir_ops.py")
$ time python test.py
real 0m0.013s
user 0m0.009s
sys  0m0.003s
Code changes needed:
From:
script_local_path = ("/usr/share/glustolibs/io/scripts/"
                     "file_dir_ops.py")
cls.script_upload_path = ("/usr/share/glustolibs/io/scripts/"
                          "file_dir_ops.py")
ret = upload_scripts(cls.clients, script_local_path)
To:
cls.script_upload_path = ("/usr/share/glustolibs/io/scripts/"
                          "file_dir_ops.py")
ret = upload_scripts(cls.clients, cls.script_upload_path)
Change-Id: I7908b3b418bbc929b7cc3ff81e3675310eecdbeb
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
|
Change-Id: I2e85670e50e3dab8727295c34aa6ec4f1326c19d
Signed-off-by: Valerii Ponomarov <kiparis.kh@gmail.com>
|
This test case verifies that disruption during a full heal
doesn't result in data corruption.
Testcase steps:
1. Create IO from the mountpoint.
2. Calculate the arequal from the mount.
3. Delete data from the backend bricks of the EC volume.
4. Trigger a full heal.
5. Disable heal.
6. Enable heal again and trigger a full heal.
7. Reboot a node.
8. Calculate the arequal checksum and compare it.
Change-Id: I1fac53df30106ff98fdd270b210aca90a53a1ac5
Co-authored-by: Sunil Kumar Acharya <sheggodu@redhat.com>
Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
Signed-off-by: Karan Sandha <ksandha@redhat.com>
Signed-off-by: kshithijiyer <kshithij.ki@gmail.com>
|
Change-Id: Iafa09988617e2e29942aa6ceb003eac2ddf2b561
Signed-off-by: ubansal <ubansal@redhat.com>
|
Change-Id: I308f95d16ac18ec80c5c78aac9152d9ae41449bb
Signed-off-by: Valerii Ponomarov <kiparis.kh@gmail.com>
|
Change-Id: I97bcbc3f9b75129be833ffa7def1b00cfd32a474
Signed-off-by: ubansal <ubansal@redhat.com>
|
Tests volume creation with a negative redundancy count, a
negative disperse count (in different permutations), and with
the disperse data count equal to the disperse count.
Change-Id: I761851c64833256532464f56a9a78e20ceb8a4e1
Signed-off-by: ubansal <ubansal@redhat.com>
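One invalid permutation sketched (all names hypothetical); the CLI is
expected to reject a negative redundancy count:

from glusto.core import Glusto as g

bricks = " ".join("server%d:/bricks/b0" % i for i in range(1, 7))

# A negative redundancy count must be rejected by the CLI
ret, _, _ = g.run("server1.example.com",
    "gluster volume create testvol disperse 6 redundancy -2 %s" % bricks)
assert ret != 0, "volume create accepted a negative redundancy count"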
|
Added a line to change the permissions of the directory so that
client-side healing happens for the directory as well.
Change-Id: If4a24f2dbd6c9c85d4cb2944d1ad4795dbc39adb
Signed-off-by: ubansal <ubansal@redhat.com>
|
Change-Id: Ib1aff1c1bf843dddac5862e55a049d7b47603049
Signed-off-by: ubansal <ubansal@redhat.com>
|
Change-Id: Id8cfc0dd31cf4f6f381ec7bb07d4aba06d52b43e
Signed-off-by: ubansal <ubansal@redhat.com>
|
Change-Id: I640f5c554fab791aa5f196415c5204f7cbca83a4
Signed-off-by: ubansal <ubansal@redhat.com>
|
Operations like chmod are performed.
Change-Id: I797253cd4454359bd8f0596c322b2eb71a8a4751
|
Change-Id: I35dd3c387c7b6eb3957c5a790af9ff8693403202
|