Problem: In this test case a file is created which exceeds the
quota limit. Once the limit is reached, that file is deleted.
At the same moment we are testing inode-quota. It can happen
that before the marker updates the accounting for the deleted
file, a new file-creation operation arrives and sees that the
quota limit is still exceeded.
Solution: Introduce a check to verify that the marker update
completed successfully.
Updated all the test cases which have a similar mechanism, and
also moved the "usage" function to a common place, "volume.rc".
A sketch of the pattern is shown below.
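A minimal sketch of the idea, assuming the test framework's
EXPECT_WITHIN helper; the "usage" helper, its awk field, and
MARKER_UPDATE_TIMEOUT are illustrative assumptions, not the
committed code:

# Report the space consumed under a quota path, as printed by
# "gluster volume quota ... list" (the field number is an assumption).
function usage()
{
        local QUOTA_PATH=$1
        $CLI volume quota $V0 list $QUOTA_PATH | grep "$QUOTA_PATH" \
                | awk '{print $4}'
}

TEST rm -f $M0/bigfile
# Poll until the marker has accounted for the deletion, instead of
# creating the next file immediately.
EXPECT_WITHIN $MARKER_UPDATE_TIMEOUT "0Bytes" usage "/"
TEST touch $M0/newfile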
Change-Id: I36ddbc5ebbf1b74c9d326a0d1d5f3b32f20a906a
BUG: 1229297
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Signed-off-by: vmallika <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/11125
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
1) Avoid hangs on unmounting NFS on NetBSD
NetBSD umount(8) on an NFS mount whose server is gone will wait forever,
because umount(8) calls realpath(3) and tries to access the mount before
it calls unmount(2). The non-portable, NetBSD-specific umount -R flag
prevents that behavior.
We therefore introduce UMOUNT_F, defined as "umount -f" on Linux and
"umount -f -R" on NetBSD, to take care of forced unmounts, especially
in the NFS case.
2) Enforce usage of the force_umount wrapper with a timeout
Whenever umount is used it should be wrapped in force_umount with
timeout handling. That saves us from timing issues, and it handles the
NetBSD NFS case.
3) Clean up the kernel cache flush.
We used (cd $M0 && umount $M0) as a portable kernel-cache-flush
trick, but it does not flush everything we need on Linux. Introduce
a drop_cache() shell function that reverts to the previously used
echo 3 > /proc/sys/vm/drop_caches on Linux, and keeps
(cd $M0 && umount $M0) on other systems. A sketch of both helpers
follows.
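A minimal sketch of the two helpers described above; the exact
definitions here are an assumption, not the committed code:

case $(uname -s) in
NetBSD) UMOUNT_F="umount -f -R" ;;  # -R avoids the realpath(3) access
*)      UMOUNT_F="umount -f" ;;
esac

function drop_cache()
{
        local mnt=$1
        case $(uname -s) in
        Linux) echo 3 > /proc/sys/vm/drop_caches ;;  # pagecache+dentries+inodes
        *)     ( cd $mnt && umount $mnt ) ;;         # portable fallback
        esac
}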
BUG: 1129939
Change-Id: Iab1f5a023405f1f7270c42b595573702ca1eb6f3
Signed-off-by: Emmanuel Dreyfus <manu@netbsd.org>
Reviewed-on: http://review.gluster.org/11114
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
If a file is under migration, any xattrs created on it are lost
post migration of the file. This is because the xattrs are set only
on the cached subvol of the source and, as the source is under
migration, it becomes a linkto file post migration. A repro sketch
is shown below.
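A hedged repro sketch of the race; TEST and EXPECT_WITHIN come from
the test framework, and rebalance_status_field is an assumed helper
name:

TEST $CLI volume rebalance $V0 start
# Set an xattr through the mount while the file may be mid-migration ...
TEST setfattr -n user.testattr -v testvalue $M0/bigfile
EXPECT_WITHIN $REBALANCE_TIMEOUT "completed" rebalance_status_field $V0
# ... and verify it survived migration to the destination subvol.
TEST getfattr -n user.testattr $M0/bigfile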
Change-Id: Ib8e233b519cf954e7723c6e26b38fa8f9b8c85c0
BUG: 1193636
Signed-off-by: Nithya Balachandran <nbalacha@redhat.com>
Reviewed-on: http://review.gluster.org/10212
Tested-by: NetBSD Build System
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Tested-by: Raghavendra G <rgowdapp@redhat.com>
Fix spurious test failures in quota enforcement.
Currently the quota enforcer doesn't consider parallel writes
and allows quota to exceed the limit when there is a high rate
of parallel writes. Bug# 1223658 tracks the issue.
This patch fixes the spurious failures by not sending
parallel writes: it uses the O_SYNC and O_APPEND flags and a
block size of not more than 256k (for a higher block size the
NFS client splits the block into 256k chunks and does parallel
writes). A dd sketch of such a write is shown below.
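A hedged example of the kind of write this implies (GNU dd flags;
the path and sizes are placeholders):

# sync+append makes every 256k block a single ordered WRITE, so the
# NFS client cannot issue the blocks in parallel.
dd if=/dev/zero of=$N0/testfile bs=256k count=400 oflag=sync,append conv=notrunc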
Change-Id: I297c164b030cecb87ce5b494c02b09e8b073b276
BUG: 1223798
Signed-off-by: vmallika <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/10878
Tested-by: NetBSD Build System
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Tested-by: Raghavendra G <rgowdapp@redhat.com>
The key concept here is to determine whether a directory is "clean" by
comparing its last-known-good topology to the current one for the
volume. These are stored as "commit hashes" on the directory and the
volume root respectively. The volume's commit hash changes whenever a
brick is added or removed and a fix-layout is done. A directory's
commit hash changes only when a full rebalance (not just fix-layout)
is done on it. If all bricks are present and have a directory
commit hash that matches the volume commit hash, then we can assume
that every file is in its "proper" place. Therefore, if we look for
a file in that proper place and don't find it, we can assume it's not
on any other subvolume and *safely* skip the global (broadcast to all)
lookup.
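Assuming the option name and value this change deals with (an
assumption on our part), the behavior could be exercised like this:

# Let DHT decide per directory when the broadcast lookup is safe to skip ...
gluster volume set $V0 cluster.lookup-unhashed auto
# ... and run a full rebalance so each directory's commit hash matches
# the volume's commit hash again.
gluster volume rebalance $V0 start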
Change-Id: Id6ce4593ba1f7daffa74cfab591cb45960629ae3
BUG: 1219637
Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
Signed-off-by: Shyam <srangana@redhat.com>
Reviewed-on: http://review.gluster.org/7702
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Change-Id: If57d08f3446755ea41f66ca258efcc8ea5a89063
BUG: 1217701
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/10480
Tested-by: NetBSD Build System
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
- Use REBALANCE_TIMEOUT in EXPECT_WITHIN
- Use fdatasync to prevent write-behind from returning success before
  the write reaches the brick (sketched below)
- Add a logfile to glupy
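A minimal sketch of the first two items; rebalance_status_field is an
assumed helper name:

# Wait using the framework timeout instead of a hard-coded number.
EXPECT_WITHIN $REBALANCE_TIMEOUT "completed" rebalance_status_field $V0
# conv=fdatasync forces dd to flush, so write-behind cannot report
# success before the data actually lands on the brick.
dd if=/dev/zero of=$M0/testfile bs=1M count=4 conv=fdatasync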
Change-Id: I51ab51644aaa4aa9d49f185e7b8959bb58be966b
BUG: 1217766
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/10487
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Previously, when a user started a remove-brick operation on a volume,
giving a non-existent brick to the remove-brick status/stop command
would still show remove-brick status / stop the remove-brick operation
on the volume.
With this fix the bricks given to the remove-brick status/stop command
are validated: if the bricks are part of the volume it will show the
statistics of the remove-brick operation, otherwise it will show the
error "Incorrect brick <brick_name> for <volume_name>". An example is
shown below.
Change-Id: I151284ef78c25f52d1b39cdbd71ebfb9eb4b8471
BUG: 1121584
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/9681
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
1. Expected behavior of the get_real_filename feature.
A getxattr on an existing dir with glusterfs.get_real_filename:<filename>
as the key should result in one of the following:
a. A value returned for that key holding the real filename (a file whose
name is a case-insensitive match to the filename passed in the key).
b. op_ret = -1 and errno set to ENOENT, meaning that no such file exists
under the specified dir in any case.
c. op_ret = -1 and errno set to ENODATA. This case assumes that no
xlator interprets the glusterfs.get_real_filename key and it gets
passed down to the posix xlator. Naturally, the posix xlator would not
find any xattr with this key and would return ENODATA. This is
interpreted specially by the caller as the feature not being supported
by the underlying glusterfs.
2. What assumptions were wrong?
Initially the key used to be user.glusterfs.get_real_filename.
In that case, when the posix xlator did a getxattr call it would have
received ENODATA as the error. However, the key has now changed to
glusterfs.get_real_filename, which leads to an EOPNOTSUPP error instead.
Considering the above information, this is a rewrite of the
get_real_filename logic in dht. An example query is sketched below.
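A hedged illustration of the query from a client mount (the filename
and directory are placeholders):

# Ask the dir for the on-disk name that matches "readme.txt"
# case-insensitively; prints e.g. "README.txt" on a match and
# fails with ENOENT when no match exists.
getfattr --only-values -n glusterfs.get_real_filename:readme.txt $M0/dir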
Change-Id: I012e9150047fc8563be91b0d112a368ac1cbf598
BUG: 1204140
Signed-off-by: Raghavendra Talur <rtalur@redhat.com>
Reviewed-on: http://review.gluster.org/9956
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: N Balachandran <nbalacha@redhat.com>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
This test runs file renames in a loop in the background and
expects them to be done within 75 seconds. On slower VMs the
operation takes about 75-80 seconds to complete, causing the
test to fail randomly. Increased the timeout to 120 seconds.
Change-Id: I103e630c5a1bcea1fb4c7842892a2e67714c3fbb
BUG: 1163543
Signed-off-by: Nithya Balachandran <nbalacha@redhat.com>
Reviewed-on: http://review.gluster.org/10111
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
This is the second leading cause of spurious failures, including those
in tests for other spurious-regression-failure fixes (creating a bit of
a catch-22). While these failures have been hard to reproduce except
during full regression-test runs, two changes have been made that might
make this test more resilient to certain types of failures.
* Use a specific "ls" instead of a general "find" to list/count only
the files we're interested in, without (possibly) including transient
artifacts from the "remove-brick" command.
* Retry the file count up to five times (see the sketch after this
list), just in case there are other transient conditions causing it to
yield the wrong result.
Also, "inlining" some of the functions for removing the brick might help
to highlight exactly which command within those functions was failing.
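A minimal sketch of the retry idea; the directory and the expected
count are placeholders:

# Retry the count so a transient wrong answer (e.g. a remove-brick
# artifact still visible) does not fail the test outright.
function count_files()
{
        local tries count
        for tries in 1 2 3 4 5; do
                count=$(ls $M0/dir | wc -l)
                [ "$count" -eq "$expected" ] && break
                sleep 1
        done
        echo $count
}
EXPECT "$expected" count_files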
Change-Id: I5a462b91fb4e04d9e9a53cc60f9db11b89101107
Signed-off-by: Jeff Darcy <jdarcy@redhat.com>
Reviewed-on: http://review.gluster.org/10013
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Change-Id: I0327a48ba5a1a217f54557386b1ae1b986702340
BUG: 1178685
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/9962
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
The d_off returned by readdir now encodes the subvolume as an absolute
position in the graph rather than relative (local) to a particular
translator.
Encoding the volume in this way allows a single translator to manage
which brick is currently being scanned for directory entries. Using a
single translator minimizes allocated bits in the d_off. It also allows
multiple DHT translators in the same graph to have a common frame of
reference (the graph position) for which brick is being read. Multiple
DHT translators are needed for the Tiering feature.
The fix builds off a previous change (9332) which removed subvolume
encoding from AFR. The fix makes an equivalent change to the EC
translator.
More background can be found in fix 9332 and gluster-dev discussions [1].
DHT and AFR/EC are responsible (as before) for choosing which brick to
enumerate directory entries in over the readdir lifecycle.
The client translator receiving the readdir fop encodes the d_off. It
is referred to as the "leaf node" in the graph and corresponds to the
brick being scanned.
When DHT decodes the d_off, it translates the leaf node to a local
subvolume, which represents the next node in the graph leading to
the brick.
Tracking of leaf nodes is done in common utility functions. Leaf node
counts and positional information are updated on a graph switch.
[1] www.gluster.org/pipermail/gluster-devel/2015-January/043592.html
Change-Id: Iaf0ea86d7046b1ceadbad69d88707b243077ebc8
BUG: 1190734
Signed-off-by: Dan Lambright <dlambrig@redhat.com>
Reviewed-on: http://review.gluster.org/9688
Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Vijay Bellur <vbellur@redhat.com>
This test runs file renames in a loop in the background and
writes to a status file to indicate that it is done.
However, the status file was also created in the background,
and was sometimes not created in time for the test which
checked its contents. A sketch of the fix is shown below.
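A hedged sketch of the pattern; the file name and the do_renames
helper are placeholders:

# Create the status file up front, in the foreground ...
echo "running" > $statusfile
# ... so the background job only ever updates an existing file.
( do_renames ; echo "done" > $statusfile ) &
EXPECT_WITHIN 120 "done" cat $statusfile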
Change-Id: Ida29456fbdc006f1da84a5f25a629cc6fa9830f4
BUG: 1163543
Signed-off-by: Nithya Balachandran <nbalacha@redhat.com>
Reviewed-on: http://review.gluster.org/9798
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
Change-Id: I8caab03531d74c64dcfa05c35a7daeee646cd2fa
BUG: 1075417
Signed-off-by: Sakshi Bansal <sabansal@redhat.com>
Reviewed-on: http://review.gluster.org/9507
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Reviewed-on: http://review.gluster.org/9548
Tested-by: Vijay Bellur <vbellur@redhat.com>
There are around 300 regression tests, 250 of which live in tests/bugs.
Running a partial set of tests/bugs is not easy, because this is a flat
directory with almost all tests inside.
It would be valuable to make partial tests/bugs runs easier, and to
allow the use of multiple build hosts for a single commit, each running
a subset of the tests for a quicker result (see the example after this
list).
Additional changes made:
- correct the include path for *.rc shell libraries and *.py utils
- make the testcases pass checkpatch
- arequal-checksum in afr/self-heal.t was never executed, now it is
- include.rc now complains loudly if it fails to find env.rc
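Assuming tests are now grouped per component under tests/bugs/, a
subset could be run like this (the directory name and the harness
arguments are assumptions):

# Run only the quota-related regression tests ...
prove -rf tests/bugs/quota/
# ... or hand one subdirectory per build host to the harness.
./run-tests.sh tests/bugs/quota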
Change-Id: I26ffd067e9853d3be1fd63b2f37d8aa0fd1b4fea
BUG: 1178685
Reported-by: Emmanuel Dreyfus <manu@netbsd.org>
Reported-by: Atin Mukherjee <amukherj@redhat.com>
URL: http://www.gluster.org/pipermail/gluster-devel/2014-December/043414.html
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/9353
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Reviewed-by: Emmanuel Dreyfus <manu@netbsd.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>