Issue: Event value_overwrite: Overwriting previous write to "ret"
with value "-1".
Fix: An "if" condition is added to check the value of "ret" before it
is overwritten.
Change-Id: I7b6bd4f20f73fa85eb8a5169644e275c7b56af51
BUG: 789278
Signed-off-by: Subha sree Mohankumar <smohanku@redhat.com>
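A minimal compilable sketch of the Coverity pattern and the shape of
the fix; first_step/second_step are hypothetical stand-ins, not the
patched code:

    #include <stdio.h>

    static int first_step(void)  { return -1; } /* pretend this can fail */
    static int second_step(void) { return 0; }

    /* Previously the result of first_step() was unconditionally
     * overwritten with -1, which Coverity flags as value_overwrite;
     * the "if" keeps the overwrite conditional. */
    static int do_work(void)
    {
        int ret = first_step();

        if (ret != 0)             /* fix: only overwrite ret on failure */
            ret = -1;
        else
            ret = second_step();

        return ret;
    }

    int main(void)
    {
        printf("do_work() = %d\n", do_work());
        return 0;
    }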
---
Problem: In a distributed volume, a custom extended attribute set on
a directory does not show the correct value after a brick is
stopped/started or newly added. If any extended attribute
(user|acl|quota) is set on a directory after the brick was
stopped/added, the attribute value is not updated on that brick after
it is started.
Solution: First store the hashed subvol, or the subvol that has the
internal xattr, in the inode ctx and consider it the MDS subvol. When
updating a custom xattr (user, quota, acl, selinux) on a directory,
first check for the MDS in the inode ctx; if no MDS is present in the
inode ctx, return EINVAL to the application, otherwise set the xattr
on the MDS subvol with an internal xattr value of -1 and then try to
update the attribute on the other, non-MDS subvols as well. If the
MDS subvol is down, return the error "Transport endpoint is not
connected". In dht_dir_lookup_cbk|dht_revalidate_cbk|
dht_discover_complete, call dht_call_dir_xattr_heal to heal the
custom extended attribute.
In the case of a gnfs server, if no hashed subvol is found based on
the loc, wind the call on all subvols to update the xattr.
Fix: 1) Save the MDS subvol in the inode ctx
2) Check whether the MDS subvol is present in the inode ctx
3) If the MDS subvol is down, unwind with the error ENOTCONN; if it
   is up, set the new xattr "GF_DHT_XATTR_MDS" to -1 and wind the
   call on the other subvols
4) If the setxattr fop is successful on the non-MDS subvols,
   increment the value of the internal xattr by 1
5) At the time of directory lookup, check the value of the new xattr
   GF_DHT_XATTR_MDS
6) If the value is not 0 in dht_lookup_dir_cbk (or the other cbk
   functions), call the heal function to heal the user xattrs
7) Call syncop_setxattr on the hashed subvol to reset the xattr value
   to 0 once the heal is successful on all subvols
Test: To reproduce the issue, follow these steps:
1) Create a distributed volume and a mount point
2) Create some directories from the mount point: mkdir tmp{1..5}
3) Kill any one brick of the volume
4) Set an extended attribute on the directories from the mount point:
   setfattr -n user.foo -v "abc" ./tmp{1..5}
   It will return the error "Transport endpoint is not connected" for
   those directories whose hashed subvol is down
5) Start the volume with the force option to restart the brick
   process
6) Run getfattr on the mount point for the directories
7) Check the extended attribute on the brick:
   getfattr -n user.foo <volume-location>/tmp{1..5}
   It shows the correct value for the directories on which the xattr
   fop executed successfully.
Note: The patch resolves the xattr healing problem only for fuse
mounts, not for nfs mounts.
BUG: 1371806
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
Change-Id: I4eb137eace24a8cb796712b742f1d177a65343d5
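A hedged, single-update model of the marker arithmetic in steps 3)-7)
above; the names and flow are my reading of the message, not the
actual DHT code:

    #include <stdbool.h>
    #include <stdio.h>

    /* The MDS copy of the internal xattr is set to -1 before the
     * custom xattr is fanned out, incremented back on success, and a
     * non-zero value seen during lookup means the non-MDS copies may
     * be stale and must be healed. */
    int main(void)
    {
        int mds_xattr = 0;       /* steady state: all subvols in sync   */

        mds_xattr = -1;          /* step 3: mark an update in progress  */

        bool non_mds_ok = true;  /* pretend the non-MDS setxattr worked */
        if (non_mds_ok)
            mds_xattr += 1;      /* step 4: acknowledge the update      */

        if (mds_xattr != 0)      /* steps 5-7: lookup triggers the heal */
            printf("heal user xattrs, then reset the marker to 0\n");
        else
            printf("all subvols consistent\n");
        return 0;
    }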
---
With this change, enabling choose-local (i.e. transitioning its state
from "off" to "on") will take effect after the first gfid-lookup on
"/" following the volume-set.
Change-Id: Ibab292ba705d993b475cd0303fb3318211fb2500
BUG: 1480525
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
---
Problem: The ctx pointer could be NULL.
Solution: Updated the code to verify the ctx pointer before using it.
BUG: 789278
Change-Id: I25e07a07c6ebe2f630c99ba3aa9a61656fbaa981
Signed-off-by: Akarsha Rai <akrai@redhat.com>
---
Problem:
cbk could be NULL.
Solution:
Return NULL when memory could not be allocated for cbk.
BUG: 789278
Change-Id: Iea9128e0f3b95100deca560f690f9baaae226abf
Signed-off-by: Akarsha Rai <akrai@redhat.com>
---
Problem: Unreachable assignment statement at dht-rebalance.c:1040.
Fix: Delete line dht-rebalance.c:1040.
The goto statements at lines 1037 and 1031 are also deleted, since
both branches of the if statement fall through to the same
immediately-following label anyway.
Change-Id: I5f47ea99244cae2a0a9f2aec7284faadf2ea286a
BUG: 789278
Signed-off-by: Kamal Mohanan <kmohanan@redhat.com>
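A hedged before/after model of this kind of dead-code cleanup (not
the actual dht-rebalance.c code):

    #include <stdio.h>

    /* Both branches of the if used to jump to the label that
     * immediately followed them, so the gotos, and the assignment
     * sandwiched between them and the label, were dead code. */
    static int check(int cond)
    {
        int ret = 0;

        if (cond)
            ret = 1;
        /* previously: goto out; in both branches, then an unreachable
         * assignment to ret, then the label "out:" right here */

        return ret;
    }

    int main(void)
    {
        printf("%d %d\n", check(0), check(1));
        return 0;
    }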
---
Problem: The pool pointer could be NULL while destroying it.
Solution: Verify the pointer before destroying it.
BUG: 789278
Change-Id: I497d1310aa47cb749a4c992aa961bd4dfa23ee48
Signed-off-by: Akarsha Rai <akrai@redhat.com>
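A hedged sketch of the guard; this mem_pool_destroy() is a stand-in
for the real teardown routine, not GlusterFS code:

    #include <stdlib.h>

    struct mem_pool {
        void *mem;
    };

    static void mem_pool_destroy(struct mem_pool *pool)
    {
        if (!pool)              /* the fix: tolerate a NULL pool pointer */
            return;
        free(pool->mem);
        free(pool);
    }

    int main(void)
    {
        mem_pool_destroy(NULL); /* safe no-op instead of a crash */
        return 0;
    }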
---
... for AFR_METADATA_TRANSACTION, and just mark the source and sinks
if the metadata is the same.
Change-Id: I69e55d3c842c7636e3538d1b57bc4deca67bed05
BUG: 1491670
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
---
Issue: Event check_return: Calling "ec_dict_set_number" without
checking the return value.
Fix: Cast the return value of "ec_dict_set_number" to void.
Change-Id: Id97034f9b1b8591536d63dca680ca7c7a9c4fcc3
BUG: 789278
Signed-off-by: Subha sree Mohankumar <smohanku@redhat.com>
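A hedged sketch of the (void)-cast idiom; ec_dict_set_number() is
stubbed out here since only the cast pattern matters:

    #include <stdio.h>

    static int ec_dict_set_number(const char *key, long value)
    {
        printf("%s = %ld\n", key, value);
        return 0;
    }

    int main(void)
    {
        /* the cast documents that the return value is ignored on
         * purpose, which silences Coverity's check_return event */
        (void) ec_dict_set_number("version", 2);
        return 0;
    }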
---
Problem: dht_frame_return was being called without checking the
return value.
Solution: Typecast the value returned by the function to void.
Change-Id: Idfc6a7ed467d1c8f5f8d09ec26d9059f3d23b760
BUG: 789278
Signed-off-by: Kamal Mohanan <kmohanan@redhat.com>
---
Problems:
As described in BZ 1491670, renaming hardlinks can result in
data/mdata split-brain of the DHT link-to files (T files) without any
mismatch of data and metadata.
As described in BZ 1486063, for a zero-byte file with only dirty bits
set, the arbiter brick will likely be chosen as the source brick.
Fix:
For zero-byte files in split-brain, pick the first brick as
a) the data source if the file size is zero on all bricks.
b) the metadata source if the metadata is the same on all bricks.
In the arbiter case, if the file size is zero on all bricks and there
are no pending afr xattrs, pick the first brick as the data source.
Change-Id: I0270a9a2f97c3b21087e280bb890159b43975e04
BUG: 1491670
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reported-by: Rahul Hinduja <rhinduja@redhat.com>
Reported-by: Mabi <mabi@protonmail.ch>
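A hedged model of the zero-byte data-source policy; sizes[] stands in
for the per-brick sizes of one file, and brick 0 is "the first
brick":

    #include <stdbool.h>
    #include <stdio.h>

    static int pick_data_source(const long long *sizes, int nbricks)
    {
        bool all_zero = true;

        for (int i = 0; i < nbricks; i++)
            if (sizes[i] != 0)
                all_zero = false;

        return all_zero ? 0 : -1; /* first brick, or no automatic pick */
    }

    int main(void)
    {
        long long sizes[3] = {0, 0, 0};
        printf("data source brick: %d\n", pick_data_source(sizes, 3));
        return 0;
    }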
---
Change-Id: I6580351b245d5f868e9ddc6a4eb4dd6afa3bb6ec
BUG: 1493539
Signed-off-by: karthik-us <ksubrahm@redhat.com>
---
Address comments on https://review.gluster.org/18067 (Change-Id
I86e15d12939c610c99f5f96c551bb870df20f4b4), which was posted as an
RFC as an example of a possible alternative fix to
https://review.gluster.org/17860 (Change-Id
I28a3bdd4a357526dba0cf84c262919c05cfa173e):
an alternative fix that preserves the unsignedness of the indexes
throughout, obviating the need to check their value before using them
to shift. (A shift by a negative number is undefined, as is a shift
by more bits than are in the type.)
BUG: 1474309
Change-Id: I46fe9cec140d3397463780748f6876251acb06dd
Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
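A hedged illustration of the two shift hazards named above, in plain
C:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Shifting by a negative count or by >= the bit width of the type
     * is undefined behaviour. Keeping the index unsigned removes the
     * first hazard; an explicit bound removes the second. */
    int main(void)
    {
        uint32_t mask = 0;
        unsigned int idx = 7;    /* unsigned throughout: never negative */

        if (idx < 32)            /* bounded by the bit width of mask    */
            mask |= (uint32_t)1 << idx;

        printf("mask = 0x%08" PRIx32 "\n", mask);
        return 0;
    }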
---
Problem:
bug-797171.t loaded the error-gen xlator on the brick, which sent
EBADF for a non-fd-based fop, namely setattr. This caused
dht_check_and_open_fd_on_subvol_task() to crash as local->fd was
NULL.
Fix:
Call dht_check_and_open_fd_on_subvol_task() from dht_file_setattr_cbk
only for dht_fsetattr, not for dht_setattr or dht_setattr2.
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Change-Id: Iab4999e213bf2065804f3f8237e470ad454e3c99
BUG: 1488399
Reviewed-on: https://review.gluster.org/18208
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Susant Palai <spalai@redhat.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-by: N Balachandran <nbalacha@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
---
Addresses review comments in commit 468ca877807625817b72921d1e9585036687b640
Change-Id: I04b1bd3b00abfd6758798d6272954e36a24249a9
BUG: 1473636
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: https://review.gluster.org/18187
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
---
...in various self-heal code paths.
Originally found by Pranith in __afr_selfheal_name_impunge().
Also change __afr_selfheal_assign_gfid() to send the lookup only on
those bricks that don't have a gfid matching that of the source.
Change-Id: I70a2ccd750a2af92c5fc36e0eefb2b6125404b4a
BUG: 1482923
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: https://review.gluster.org/18065
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
---
There was no easy way to find out which files were
skipped during a rebalance.
Rebalance now logs a message for every skipped file
using msgid 109126, making it easier to find
all files that were skipped.
Change-Id: I2cac7db7285e2f82354251f3ea4094827b0daf3e
BUG: 1480445
Signed-off-by: N Balachandran <nbalacha@redhat.com>
Reviewed-on: https://review.gluster.org/18021
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: hari gowtham <hari.gowtham005@gmail.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
---
If dht_discover finds data files on more than one subvol, racing
calls to dht_discover_cbk could end up calling dht_aggregate_xattr,
which could delete dictionary data that is being accessed by higher
layer translators.
Fixed to call dht_aggregate_xattr only for directories and to
consider only the first file found.
Change-Id: I4f3d2a405ec735d4f1bb33a04b7255eb2d179f8a
BUG: 1484709
Signed-off-by: N Balachandran <nbalacha@redhat.com>
Reviewed-on: https://review.gluster.org/18137
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
---
In order to generate statedumps per glusterfs_ctx_t, all memory pools
need to be placed in a structure that the context can reach. The
'struct mem_pool' has been extended with a 'list_head owner' that is
linked into the glusterfs_ctx_t->mempool_list.
All callers of mem_pool_new() have been updated to pass the current
glusterfs_ctx_t along. This context is needed to add the new memory
pool to the list and to grab ctx->lock while updating the
glusterfs_ctx_t->mempool_list.
Updates: #307
Change-Id: Ia9384424d8d1630ef3efc9d5d523bf739c356c6e
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: https://review.gluster.org/18075
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
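A hedged sketch of the shape of this change, with a minimal linked
list and mutex standing in for gluster's list_head machinery and the
real structures:

    #include <pthread.h>
    #include <stdlib.h>

    struct mem_pool {
        struct mem_pool *next;       /* stands in for 'list_head owner' */
    };

    struct glusterfs_ctx {
        pthread_mutex_t lock;        /* guards mempool_list             */
        struct mem_pool *mempool_list;
    };

    /* mem_pool_new() now takes the ctx so every new pool is registered
     * where per-ctx statedump code can find it. */
    static struct mem_pool *mem_pool_new(struct glusterfs_ctx *ctx)
    {
        struct mem_pool *pool = calloc(1, sizeof(*pool));

        if (!pool)
            return NULL;

        pthread_mutex_lock(&ctx->lock);
        pool->next = ctx->mempool_list;
        ctx->mempool_list = pool;
        pthread_mutex_unlock(&ctx->lock);

        return pool;
    }

    int main(void)
    {
        struct glusterfs_ctx ctx = { PTHREAD_MUTEX_INITIALIZER, NULL };

        return mem_pool_new(&ctx) ? 0 : 1;
    }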
---
This is how I would like to see this fixed. It passes (eliminates the
warning in) Coverity.
The use of uintptr_t as a bitmask is a problem IMO, especially on
32-bit clients.
Change-Id: I86e15d12939c610c99f5f96c551bb870df20f4b4
Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
Reviewed-on: https://review.gluster.org/18067
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
---
Earlier, rebalance performed a fix-layout on a directory before
healing its subdirectories. If there were a lot of subdirs, it could
take a while before all subdirs were created on the newly added
bricks. As dht_readdirp only lists dirs from their hashed subvol,
dirs that hashed to the newly added bricks but had not yet been
created on them were not listed.
Now, the child dirs are listed and processed before the layout of the
parent is fixed. This introduces a change in behaviour: files in
subdirs are migrated before those in parent directories.
Credit: Shyam <srangana@redhat.com>
Github issue: #239
Change-Id: I8ae7f24a510754cd8d1b31e5d608bcf1928599e2
BUG: 1248393
Signed-off-by: N Balachandran <nbalacha@redhat.com>
Reviewed-on: https://review.gluster.org/18045
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
---
During a graph switch, if fuse sends nameless (gfid) lookups, afr
takes the discover code path to serve them. If there are pending
metadata heals, they do not happen unless an inode refresh happens as
part of discover (which is not guaranteed to always happen).
This patch fixes that by attempting metadata heal as part of
discover, just as is done in the lookup code path.
Also removed the creation of superfluous heal frames when launching
heal.
Change-Id: I49868649361ebe5d70b6ea150f4686169b6c3070
BUG: 1473636
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: https://review.gluster.org/17850
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Karthik U S <ksubrahm@redhat.com>
---
Add EBADF handling for dht_fremovexattr and dht_fsetxattr.
Change-Id: Ide0d5812dae79655d2565157e5baabcd753b4309
BUG: 1476665
Signed-off-by: N Balachandran <nbalacha@redhat.com>
Reviewed-on: https://review.gluster.org/17999
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
---
DHT fd-based fops used to check whether the fd was open on the cached
subvol before winding the call. However, this introduced a
performance regression of about 30% for reads.
The check was introduced to handle cases where files were migrated
while IO was happening. As this is not the common case, dht will now
check whether the fd is open on the cached subvol only if the call
fails with EBADF.
This prevents a performance hit when a rebalance is not running.
Change-Id: I2035a858d63c3fcd22bb634055bbb0ad01686808
BUG: 1476665
Signed-off-by: N Balachandran <nbalacha@redhat.com>
Reviewed-on: https://review.gluster.org/17976
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Susant Palai <spalai@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
---
Change-Id: I5acb8bd0a19fc4e764d61e349bb690b5236ee610
BUG: 1478297
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: https://review.gluster.org/17981
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Karthik U S <ksubrahm@redhat.com>
---
This reverts commit 91c9f4a19fde4894576b398252c77f730832a26a.
This patch needs to be reworked.
Change-Id: I4c24f647c2b1abc68fc4e9fe6eb810418e2033aa
BUG: 1476665
Signed-off-by: N Balachandran <nbalacha@redhat.com>
Reviewed-on: https://review.gluster.org/17970
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
---
Change-Id: Id91ef35f890055cd42b9a94462f92297c77f1fff
Bug: 1475282
Signed-off-by: Susant Palai <spalai@redhat.com>
Reviewed-on: https://review.gluster.org/17868
Tested-by: Raghavendra G <rgowdapp@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
---
To calculate the available space on a subvolume, we used to do the
following in __dht_check_free_space:
post_availspace = (dst_statfs.f_bavail * dst_statfs.f_frsize) - stbuf->ia_size
Subtracting the file size from the available space is tricky here:
sometimes the available space is less than the file size, and since
all the participating members of the calculation are unsigned
integers, the result is a huge number (integer overflow).
Solution: We do not need to subtract the file size from the available
space, since fallocate has already reserved the file's space.
Change-Id: I4f724358c44b9911933742ff3ff8d55b3dfda1cb
BUG: 1475282
Signed-off-by: Susant Palai <spalai@redhat.com>
Reviewed-on: https://review.gluster.org/17876
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-by: N Balachandran <nbalacha@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
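A hedged demonstration of the underflow with made-up numbers:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* When the file is larger than the free space, the unsigned
     * subtraction wraps around to a huge value instead of going
     * negative. */
    int main(void)
    {
        uint64_t f_bavail = 100, f_frsize = 4096;   /* 400 KiB free */
        uint64_t ia_size  = 1024 * 1024;            /* 1 MiB file   */

        uint64_t bogus = f_bavail * f_frsize - ia_size; /* wraps */
        printf("bogus available space:  %" PRIu64 "\n", bogus);

        /* the fix: do not subtract ia_size at all, since fallocate
         * has already reserved the file's space on the destination */
        uint64_t avail = f_bavail * f_frsize;
        printf("actual available space: %" PRIu64 "\n", avail);
        return 0;
    }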
---
DHT fd-based fops will now check whether the fd is open on the cached
subvol only if the call fails with EBADF.
This improves performance in scenarios where a rebalance is not
running, which is most of the time.
Change-Id: Idfaeb8927af769c6110d07a165a0fe2307369239
BUG: 1476665
Signed-off-by: N Balachandran <nbalacha@redhat.com>
Reviewed-on: https://review.gluster.org/17922
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
---
Problem:
In the case of truncate, if a writev or open fails on a brick, in
some cases the failure is not marked on lock->good_mask. This causes
the size and version to be updated on all the bricks even though the
fop failed on one of them, which ultimately causes data corruption.
Solution:
In the callbacks of such writev and open calls, mark fop->good for
the parent too.
Thanks to Pranith Kumar K <pkarampu@redhat.com> for finding the root
cause.
Change-Id: I8a1da2888bff53b91a0d362b8c44fcdf658e7466
BUG: 1476205
Signed-off-by: Ashish Pandey <aspandey@redhat.com>
Reviewed-on: https://review.gluster.org/17906
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
---
Every time all the threads sleep or wake up, we log a message about
the event. This can be noisy when the files eligible for migration
are placed far apart.
Moving these logs to DEBUG.
Change-Id: I4dc2cc9fdf4f42d4001754532a5bc4aeb3f0f959
BUG: 1474639
Signed-off-by: Susant Palai <spalai@redhat.com>
Reviewed-on: https://review.gluster.org/17866
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: N Balachandran <nbalacha@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
---
The calculation of the rebalance estimates will start
after the rebalance operation has been running for 10
minutes. This patch also changes the cli rebalance status
code to use unsigned variables for the time calculations.
Change-Id: Ic76f517c59ad938a407f1cf5e3b9add571690a6c
BUG: 1457985
Signed-off-by: N Balachandran <nbalacha@redhat.com>
Reviewed-on: https://review.gluster.org/17863
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
---
The size of non-migrated files was not added to size_processed,
causing incorrect rebalance estimate calculations. This has been
fixed.
Change-Id: I9f338c44da22b856e9fdc6dc558f732ae9a22f15
BUG: 1467209
Signed-off-by: N Balachandran <nbalacha@redhat.com>
Reviewed-on: https://review.gluster.org/17867
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
---
Corrected the iterator used to loop over the list of decommissioned
bricks while checking whether the new target, chosen because of
min-free-disk values, has been decommissioned.
Change-Id: Iee778547eb7370a8069e954b5d629fcedf54e59b
BUG: 1474318
Signed-off-by: N Balachandran <nbalacha@redhat.com>
Reviewed-on: https://review.gluster.org/17861
Reviewed-by: Susant Palai <spalai@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
---
local->call_cnt was being accessed and updated inside the loop in
which the entries were processed and the calls were wound.
This could end up in a scenario where local->call_cnt became 0 before
the processing was complete, causing a crash when the next entry was
processed.
Change-Id: I930f61f1a1d1948f90d4e58e80b7d6680cf27f2f
BUG: 1472949
Signed-off-by: N Balachandran <nbalacha@redhat.com>
Reviewed-on: https://review.gluster.org/17825
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
---
Set names for the threads on creation, for easier debugging.
Output of top -H -p <PID-OF-GLUSTERFSD>
Before:
19773 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
19774 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
19775 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
19776 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
19777 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
19778 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
19779 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
19780 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
19781 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
19782 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
19783 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
19784 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
19785 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.01 glusterfsd
19786 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.01 glusterfsd
19787 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.01 glusterfsd
19789 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
19790 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
25178 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
5398 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
7881 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
After:
19773 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
19774 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glustertimer
19775 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterfsd
19776 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glustermemsweep
19777 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glustersproc0
19778 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glustersproc1
19779 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterepoll0
19780 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusteridxwrker
19781 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusteriotwr0
19782 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterbrssign
19783 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterbrswrker
19784 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterclogecon
19785 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.01 glusterclogd0
19786 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.01 glusterclogd1
19787 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.01 glusterclogd2
19789 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterposixjan
19790 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterposixfsy
25178 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterepoll1
5398 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterepoll2
7881 root 20 0 1301.3m 12.6m 8.4m S 0.0 0.1 0:00.00 glusterposixhc
Change-Id: Id5f333755c1ba168a2ffaa4fce6e71c375e10703
BUG: 1254002
Updates: #271
Signed-off-by: Raghavendra Talur <rtalur@redhat.com>
Reviewed-on: https://review.gluster.org/11926
Reviewed-by: Prashanth Pai <ppai@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
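For reference, a hedged illustration of setting such names with
glibc's pthread_setname_np(); this is not the gluster code. Names are
capped at 16 bytes including the terminating NUL, which is why the
names above are abbreviated:

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static void *worker(void *arg)
    {
        (void)arg;
        sleep(2);                 /* long enough to inspect with top -H */
        return NULL;
    }

    int main(void)
    {
        pthread_t t;

        pthread_create(&t, NULL, worker, NULL);
        pthread_setname_np(t, "glusterepoll0"); /* 13 chars: fits */
        pthread_join(t, NULL);
        return 0;
    }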
---
Problem:
Currently there is no way for the admin to resolve gfid split-brain
from the CLI based on a policy like choice of brick, mtime, or size.
Fix:
With the existing CLI options based on size, mtime, and choice of
brick, we do a lookup on the parent for the specified file. As part
of the lookup, if we find a gfid mismatch, we resolve it based on the
policy and return. If the file is not in gfid split-brain, we check
for data and metadata split-brain in the getxattr code path, and
resolve them if any.
This works provided the CLI is given the absolute path to the file,
not the gfid of the file. Hence the source-brick policy without any
file path will also not resolve gfid split-brain, since it uses the
gfid of the files. But it can still resolve any other type of
split-brain, skipping the gfid-mismatch resolution with the usual
error message.
Reverting the change https://review.gluster.org/17290; this patch
resolves the issue.
Fixes gluster/glusterfs#135
Change-Id: Iaeba6fc32f184a34255d03be87cda02773130a09
BUG: 1459530
Signed-off-by: karthik-us <ksubrahm@redhat.com>
Reviewed-on: https://review.gluster.org/17485
Reviewed-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Smoke: Gluster Build System <jenkins@build.gluster.org>
---
Problem:
Enabling optimistic changelog on an EC volume did not handle
node-down scenarios appropriately, resulting in volume data
inaccessibility.
Solution:
Update the dirty xattr appropriately on the good bricks whenever
nodes are down. This fixes the metadata information as part of heal
and thus ensures data accessibility.
BUG: 1468261
Change-Id: I08b0d28df386d9b2b49c3de84b4aac1c729ac057
Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
Reviewed-on: https://review.gluster.org/17703
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
---
Problem:
In a 3-way replica, when the source brick does not have pending
xattrs for the sinks, but the 2 sinks blame each other, metadata heal
was not happening because we were not setting all non-sources as
sinks.
Fix: Mark all non-sources as sinks, as is done in data and entry
heal.
Change-Id: I534978940f5087302e307fcc810a48ffe898ce08
BUG: 1468279
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: https://review.gluster.org/17717
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
---
Problem:
To allow parallel writes we shouldn't depend on ia_size being the
same on all the bricks in each write_cbk(). But we need to make sure
the backend size is correct on all the bricks and that no
crashes/manual modifications happened.
Fix:
At the time of get_size_version() we do one check to make sure the
size of the file is the same across the bricks. From then on the FOPs
report the status of the fop, so we rely on this information to track
which bricks are good/bad.
Updates #251
Change-Id: I1df645347e2e9f2e09cfa4411b6cc305d7f4e4e5
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: https://review.gluster.org/17741
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
---
A brief overview of how hardlink migration works:
- Different hardlinks (to the same file) may hash to different
bricks, but their cached subvol will be the same. Rebalance picks up
the first hardlink, calculates its hash (call it TARGET), and sets
the hashed subvolume as an xattr on the data file.
- All the hardlinks that come after this fetch that xattr and create
linkto files on TARGET (all linkto files for the hardlinks are
hardlinks of each other on TARGET).
- When the number of hardlinks on the source equals the number of
hardlinks on TARGET, the data migration happens.
RACE 1:
Since rebalance is multi-threaded, the first lookup (which decides
what the TARGET subvol should be) can be issued by two hardlink
migrations in parallel, and they may end up creating linkto files on
two different TARGET subvols. Hence, the hardlinks won't be migrated.
Fix: Rely on the xattr response of the lookup inside
gf_defrag_handle_hardlink, since it is executed under a synclock.
RACE 2:
The linkto files on TARGET can also be created by other clients doing
lookups on the hardlinks. Consider a scenario with 100 hardlinks.
When rebalance is migrating the 99th hardlink, continuous lookups
from another client make the link count on TARGET equal to the source
link count, so rebalance migrates the data on the 99th hardlink
itself. On the 100th hardlink migration, the hardlink will have
TARGET as its cached subvolume. If its hash is also the same, a
migration will be triggered from TARGET to TARGET, leading to data
loss.
Fix: Make sure, before the final data migration, that the source is
not the same as the destination.
RACE 3:
Since a hardlink can be migrating to a non-hashed subvolume, a lookup
from another client, or even from rebalance itself, might delete the
linkto file on TARGET, leading to hardlinks never getting migrated.
This will be addressed in a separate patch in the future.
Change-Id: If0f6852f0e662384ee3875a2ac9d19ac4a6cea98
BUG: 1469964
Signed-off-by: Susant Palai <spalai@redhat.com>
Reviewed-on: https://review.gluster.org/17755
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: N Balachandran <nbalacha@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
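A hedged model of the RACE 2 guard; migrate() is an illustrative
helper, not the rebalance code:

    #include <stdio.h>
    #include <string.h>

    /* Skip the final data migration when the computed destination
     * equals the cached source, instead of "migrating" the file onto
     * itself and losing data. */
    static void migrate(const char *src, const char *dst)
    {
        if (strcmp(src, dst) == 0) {
            printf("skip: src == dst (%s)\n", src);
            return;
        }
        printf("migrate %s -> %s\n", src, dst);
    }

    int main(void)
    {
        migrate("TARGET", "TARGET"); /* the 100th-hardlink race case */
        migrate("subvol0", "TARGET");
        return 0;
    }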
---
If the target of a file migration was changed because of
min-free-disk limits, the dst_fd was closed but the clean_dst flag
was not set to false. If the file could not be created on the new
target for some reason, the ftruncate call to clean up the dst was
sent on the now-invalid fd, causing the process to deadlock.
Change-Id: I5bfa80f519b04567413d84229cf62d143c6e2f04
BUG: 1469029
Signed-off-by: N Balachandran <nbalacha@redhat.com>
Reviewed-on: https://review.gluster.org/17735
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
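A hedged model of the flag hygiene this fix introduces; the fd and
flag are local stand-ins, not the dht-rebalance structures:

    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Once dst_fd is closed because the target changed, clean_dst
     * must be cleared so a later error path does not ftruncate a
     * stale descriptor. */
    int main(void)
    {
        int dst_fd = dup(1);      /* pretend: an fd on the old target */
        bool clean_dst = true;

        /* the target changed because of min-free-disk: drop the old
         * fd and remember there is nothing left to clean up on it */
        close(dst_fd);
        clean_dst = false;

        /* later failure path: now correctly skipped */
        if (clean_dst)
            (void) ftruncate(dst_fd, 0);

        printf("no cleanup attempted on the stale fd\n");
        return 0;
    }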
---
There is another race, between the cached subvol being updated in the
inode_ctx and the fd being opened on the target:
1. fop1 -> fd1 -> subvol0
2. file migrated from subvol0 to subvol1 and cached_subvol
changed to subvol1 in inode_ctx
3. fop2 -> fd1 -> subvol1 [takes new cached subvol]
4. fop2 -> checks fd ctx (fd not open on subvol1) -> opens fd1 on subvol1
5. fop1 -> checks fd ctx (fd not open on subvol0)
-> tries to open fd1 on subvol0 -> fails with "No such file or directory".
Fix:
If dht_fd_open_on_dst fails with ENOENT or ESTALE, wind to the old
subvol and let the phase1/phase2 checks handle it.
Change-Id: I34f8011574a8b72e3bcfe03b0cc4f024b352f225
BUG: 1465075
Signed-off-by: N Balachandran <nbalacha@redhat.com>
Reviewed-on: https://review.gluster.org/17731
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
---
The earlier approach of using the number of files
to determine when the rebalance would complete did
not work well when file sizes differed widely.
The new approach now gets the total data size and
uses that information to determine how long
the rebalance is expected to take.
Change-Id: I84e80a0893efab72ff06130e4596fa71c9c8c868
BUG: 1467209
Signed-off-by: N Balachandran <nbalacha@redhat.com>
Reviewed-on: https://review.gluster.org/17668
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: MOHIT AGRAWAL <moagrawa@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
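A hedged model of the size-based estimate with invented numbers; the
real rebalance code aggregates these counters per node:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t total_size     = 500ULL << 30;  /* 500 GiB to examine */
        uint64_t size_processed = 100ULL << 30;  /* 100 GiB done       */
        uint64_t elapsed_secs   = 3600;          /* ... in one hour    */

        /* scale the elapsed time by the fraction of data remaining */
        uint64_t remaining_secs =
            elapsed_secs * (total_size - size_processed) / size_processed;

        printf("estimated time left: %" PRIu64 " seconds\n",
               remaining_secs);
        return 0;
    }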
---
Problem:
In a 4+2 EC volume configuration, if a Linux untar is running and we
kill a brick, indices are created for the files/dirs that need to be
healed. ec_shd_index_sweep spawns threads to scan these entries and
start heals. If in the middle of this we kill one more brick, we end
up in a situation where an entry cannot be healed, as only
"ec->fragment" bricks are up. However, the scan continues and keeps
triggering heals for those entries.
Solution:
When a heal is triggered for an entry, check whether it *CAN* be
healed or not. If not, bail out with ENOTCONN.
Change-Id: I305be7701c289f36bd7bde22491b71074771424f
BUG: 1464359
Signed-off-by: Ashish Pandey <aspandey@redhat.com>
Reviewed-on: https://review.gluster.org/17692
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Reviewed-by: Sunil Kumar Acharya <sheggodu@redhat.com>
Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
---
When a SEEK_HOLE was issued near the end of a file, sometimes an
offset beyond the end of the file was returned. Another problem was
that some offsets greater than the end of the file were returned
successfully instead of failing with ENXIO.
Change-Id: I238d2884ba02fd19a78116b0f8f8e8d6338fb3f5
BUG: 1449348
Signed-off-by: Xavier Hernandez <xhernandez@datalab.es>
Reviewed-on: https://review.gluster.org/17228
Smoke: Gluster Build System <jenkins@build.gluster.org>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
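A hedged demonstration of the expected lseek() contract, runnable
against a local filesystem that supports SEEK_HOLE:

    #define _GNU_SOURCE
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* SEEK_HOLE at or past EOF must fail with ENXIO, never return an
     * offset beyond the end of the file. */
    int main(void)
    {
        int fd = open("/tmp/seek-demo", O_CREAT | O_RDWR | O_TRUNC, 0600);
        if (fd < 0)
            return 1;
        (void) write(fd, "data", 4);

        off_t off = lseek(fd, 4096, SEEK_HOLE);   /* offset beyond EOF */
        if (off == (off_t)-1 && errno == ENXIO)
            printf("correct: ENXIO for an offset past EOF\n");
        else
            printf("unexpected: got offset %lld\n", (long long)off);

        close(fd);
        unlink("/tmp/seek-demo");
        return 0;
    }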
---
Plus minor readability improvements.
Reported-by: pmatthaei@debian.org
Change-Id: I5393819a2fc9f240a19811143bb57b127df717cf
BUG: 1466785
Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
Reviewed-on: https://review.gluster.org/17660
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
---
Use a local variable to store the call count in the STACK_WIND
for-loop. Using frame->local is dangerous, as it could be freed while
the loop is still executing.
Change-Id: Ie65cdcfb7868509b4a83bc2a5b5d6304eabfbc8e
BUG: 1466110
Signed-off-by: N Balachandran <nbalacha@redhat.com>
Reviewed-on: https://review.gluster.org/17645
Smoke: Gluster Build System <jenkins@build.gluster.org>
Tested-by: Nigel Babu <nigelb@redhat.com>
Reviewed-by: Amar Tumballi <amarts@redhat.com>
Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
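A hedged, single-threaded model of the bug and the fix; wind()/cbk()
stand in for STACK_WIND and the fop callback:

    #include <stdio.h>
    #include <stdlib.h>

    /* "Winding" a call can run the callback immediately, and the
     * callback frees local when the count drops to zero, so the loop
     * must not keep re-reading the count from the possibly-freed
     * local. */
    struct local {
        int call_cnt;
    };

    static struct local *local;

    static void cbk(void)               /* the fop callback           */
    {
        if (--local->call_cnt == 0) {   /* last reply: free the local */
            free(local);
            local = NULL;
        }
    }

    static void wind(void)              /* synchronous "STACK_WIND"   */
    {
        cbk();
    }

    int main(void)
    {
        local = malloc(sizeof(*local));
        if (!local)
            return 1;
        local->call_cnt = 3;

        int call_cnt = local->call_cnt;     /* the fix: snapshot first  */
        for (int i = 0; i < call_cnt; i++)  /* not: i < local->call_cnt */
            wind();

        printf("all calls wound safely\n");
        return 0;
    }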
---
If an fd is opened on a file, the file is migrated, and the cached
subvol is updated in the inode_ctx before an fd-based fop is sent,
the fop is sent to the dst subvol, on which the fd is not open.
This causes the fop to fail with EBADF.
Now, every fd-based fop will check that the fd has been opened on the
dst subvol before winding it down.
Change-Id: Id92ef5eb7a5b5226688e2d2868b15e383f5f240e
BUG: 1465075
Signed-off-by: N Balachandran <nbalacha@redhat.com>
Reviewed-on: https://review.gluster.org/17630
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-by: Susant Palai <spalai@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
---
Problem:
"gluster v heal <volname> info" takes a long time to respond when a
brick is down.
RCA:
The heal info command does a virtual mount.
EC waits for 10 seconds before sending the UP call to the upper
xlator, in order to get a notification (DOWN or UP) from all the
bricks.
Currently, we increase ec->xl_notify_count based on the current
status of the brick. So, if a DOWN event notification arrives and the
brick is already down, we do not increase ec->xl_notify_count in
ec_handle_down.
Solution:
Handle the DOWN event as a notification irrespective of the current
status of the brick.
Change-Id: I0acac0db7ec7622d4c0584692e88ad52f45a910f
BUG: 1464091
Signed-off-by: Ashish Pandey <aspandey@redhat.com>
Reviewed-on: https://review.gluster.org/17606
Tested-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
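A hedged model of the corrected accounting; the names are
illustrative, not the ec xlator's code:

    #include <stdbool.h>
    #include <stdio.h>

    /* Each brick's first notification must bump xl_notify_count, even
     * a DOWN for a brick that is already down, or the virtual mount
     * always sits out the full 10-second timeout. */
    #define NBRICKS 6

    static bool seen[NBRICKS];
    static int xl_notify_count;

    static void ec_handle_event(int brick)
    {
        if (!seen[brick]) {     /* count the first event per brick,  */
            seen[brick] = true; /* no matter whether it is UP or a   */
            xl_notify_count++;  /* DOWN for an already-down brick    */
        }
    }

    int main(void)
    {
        for (int i = 0; i < NBRICKS; i++)
            ec_handle_event(i);
        ec_handle_event(0);     /* duplicate events are not recounted */

        if (xl_notify_count == NBRICKS)
            printf("all bricks reported: send UP without waiting 10s\n");
        return 0;
    }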