| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
|
|
|
|
| |
ec_manager_open/opendir memsets ctx->loc, which causes a
memory/inode leak, and ec_fheal uses ctx->loc outside of fd->lock,
so loc_copy may copy bad data while ctx->loc is being memset.
This patch skips updating ctx->loc when it is already initialized.
With it, ctx->loc is filled once and never updated.
Change-Id: I3bf5ffce4caf4c1c667f7acaa14b451d37a3550a
fixes: bz#1806843
Signed-off-by: Kinglong Mee <mijinlong@horiscale.com>
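A minimal sketch of the fill-once pattern described above, assuming an EC fd ctx with a loc_t member and the standard loc_copy()/LOCK() helpers; the field names (ctx, fop->loc[0]) are illustrative, not the exact patch:

    /* inside the open/opendir path */
    LOCK(&fd->lock);
    {
        if (ctx->loc.inode == NULL) {
            /* first time only: fill ctx->loc exactly once */
            ret = loc_copy(&ctx->loc, &fop->loc[0]);
        }
        /* already initialized: leave ctx->loc untouched, so ec_fheal,
         * which reads it outside fd->lock, never sees a half-memset loc */
    }
    UNLOCK(&fd->lock);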
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Problem:
In a hyperconverged setup with granular-entry-heal enabled, if a file is
recreated while one of the bricks is down, and an index heal is triggered
(with the brick still down), entry-self heal was doing a spurious heal
with just the 2 good bricks. It was doing a post-op leading to removal
of the filename from .glusterfs/indices/entry-changes as well as
erroneous setting of afr xattrs on the parent. When the brick came up,
the xattrs were cleared, resulting in the renamed file not getting
healed and leading to gfid split-brain and EIO on the mount.
Fix:
Proceed with entry heal only when shd can connect to all bricks of the replica,
just like in data and metadata heal.
fixes: bz#1804591
Change-Id: I916ae26ad1fabf259bc6362da52d433b7223b17e
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
(cherry picked from commit 06453d77d056fbaa393a137ca277a20e38d2f67e)
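A hedged sketch of the "all bricks connected" guard, assuming the usual afr_private_t child_up array; the helper name is hypothetical, and the real change wires this into the entry-heal path the same way data/metadata heal already do:

    /* proceed with entry heal only if shd sees every brick of the replica */
    static gf_boolean_t
    afr_all_children_up(afr_private_t *priv)
    {
        int i = 0;

        for (i = 0; i < priv->child_count; i++) {
            if (!priv->child_up[i])
                return _gf_false; /* at least one brick is down: skip heal */
        }
        return _gf_true;
    }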
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Problem:
When we mount a ta volume, as soon as the 2 data bricks are connected
we consider the mount done and send a lookup/create for the ta file
on the ta node. However, the connection with the ta node might not
have completed yet.
Due to this delay, the ta replica id file will not be created and we
will see ENOTCONN errors in the log file when we do a lookup.
Solution:
Since the ta node can have a higher latency, we should wait a
reasonable time for the connection to happen before sending the
lookup/create on the replica id file.
fixes: bz#1804058
Change-Id: I36f90865afe617e4e84cee57fec832a16f5dd6cc
(cherry picked from commit a7fa54ddea3fe429f143b37e4de06a93b49d776a)
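The solution amounts to a bounded wait; a generic, self-contained sketch of that idea using a condition variable and a timeout (all names here are illustrative, not the actual thin-arbiter code):

    #include <pthread.h>
    #include <time.h>

    /* wait up to 'timeout_sec' for ta_connected to become true before
     * sending the lookup/create on the replica id file */
    static int
    wait_for_ta_connection(pthread_mutex_t *mtx, pthread_cond_t *cond,
                           int *ta_connected, int timeout_sec)
    {
        struct timespec ts;
        int ret = 0;
        int connected = 0;

        clock_gettime(CLOCK_REALTIME, &ts);
        ts.tv_sec += timeout_sec;

        pthread_mutex_lock(mtx);
        while (!*ta_connected && ret == 0)
            ret = pthread_cond_timedwait(cond, mtx, &ts);
        connected = *ta_connected;
        pthread_mutex_unlock(mtx);

        return connected ? 0 : -1; /* -1: proceed knowing ta may be down */
    }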
|
|
|
|
|
|
|
|
|
|
|
| |
The thin-arbiter module uses the translator's 'pending-xattr' name as
the filename which gets created on the thin-arbiter node. By making this
unique, a single thin-arbiter node can host multiple clusters.
Updates: #763
Change-Id: Ib3c732e7e04e6dba229e71ae3e64f1f3cb6d794d
Signed-off-by: Amar Tumballi <amar@kadalu.io>
(cherry picked from commit 8db8202f716fd24c8c52f8ee5f66e169310dc9b1)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Problem:
heal-info code assumes that all indices in the xattrop directory
definitely need heal. There is one corner case.
The very first xattrop on a file adds its gfid to the 'xattrop'
index in the fop path; in the _cbk path it is removed again because
the fop is a zero-xattr xattrop in the success case.
In that window, these gfids can be read by heal-info and shown as needing heal.
Fix:
Check the pending flag to see whether the file definitely needs heal
or not, instead of relying on which index is being crawled at the moment.
fixes: bz#1802449
Change-Id: I79f00dc7366fedbbb25ec4bec838dba3b34c7ad5
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
(cherry picked from commit d27df94016b5526c18ee964d4a47508326329dda)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Problem: When one server node (1x3) comes up after a reboot, the
         client gets unmounted. The client is unmounted because it
         receives an AUTH_FAILED event and calls fini for the graph. The
         client gets AUTH_FAILED because the brick is not attached to a
         graph at that moment.
Solution: To avoid unmounting the client graph, return ENOENT from
          the server if the brick is not attached to the server at
          the time of authenticating clients.
> Credits: Xavi Hernandez <xhernandez@redhat.com>
> Change-Id: Ie6fbd73cbcf23a35d8db8841b3b6036e87682f5e
> Fixes: bz#1793852
> Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
> (cherry picked from commit f6421dff22a6ddaf14134f6894deae219948c89d)
Change-Id: Ie6fbd73cbcf23a35d8db8841b3b6036e87682f5e
Fixes: bz#1794019
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
xlators/features/quota/src/quota.c, quota_log_usage function.
The quota_log_helper() function allocates memory for the path
through inode_path(); it should be freed with GF_FREE().
Upstream Patch:
https://review.gluster.org/#/c/glusterfs/+/24018/
Backport of:
> fixes: bz#1792707
> Change-Id: I33143bdf272bf10837061df4a1b7b2fc146162d5
> Signed-off-by: Xi Jinyu <xijinyu@cmss.chinamobile.com>
> (cherry picked from commit 18549de12bcfafe4ac30fc2e11ad7a3f3c216b38)
fixes: bz#1791154
Change-Id: I33143bdf272bf10837061df4a1b7b2fc146162d5
Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
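The leak boils down to freeing what inode_path() allocates; a minimal sketch of the corrected pattern using the GlusterFS inode_path()/GF_FREE() helpers (the log message itself is illustrative, not the exact quota_log_helper code):

    char *path = NULL;

    if (inode_path(inode, NULL, &path) >= 0) {      /* allocates 'path' */
        gf_log(this->name, GF_LOG_INFO,
               "quota usage message for %s", path);
        GF_FREE(path);                              /* the fix: free it */
    }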
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The glusterfs client process has a memory leak if several files are created under one folder and the folder is then deleted.
According to the statedump, the readdir-ahead ref count in the inode table is bigger than zero. readdir-ahead gets the parent inode via inode_parent in rda_mark_inode_dirty on each rda_writev_cbk; the ref count of the parent folder's inode is increased in inode_parent, but readdir-ahead does not unref it later.
The fix is to unref the parent inode at the end of rda_mark_inode_dirty.
Backport of:
> Change-Id: Iee68ab1089cbc2fbc4185b93720fb1f66ee89524
> Fixes: bz#1779055
> Signed-off-by: HuangShujun <549702281@qq.com>
Change-Id: Iee68ab1089cbc2fbc4185b93720fb1f66ee89524
(cherry picked from commit 99044a5cedcff9a9eec40a07ecb32bd66271cd02)
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Fixes: bz#1789336
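A sketch of the unref described above, assuming the inode_parent()/inode_unref() pair from libglusterfs (inode_parent returns a referenced inode); the function name is hypothetical and the surrounding rda_mark_inode_dirty logic is elided:

    static void
    rda_unref_parent_sketch(inode_t *inode)
    {
        inode_t *parent = NULL;

        parent = inode_parent(inode, NULL, NULL);  /* takes a ref on parent */
        if (parent) {
            /* ... mark the parent's readdir-ahead cache dirty ... */

            inode_unref(parent);  /* the fix: drop the ref taken above */
        }
    }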
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Backport of https://review.gluster.org/#/c/glusterfs/+/23960/
This volume option was not made available to `gluster volume set` CLI.
Reported-by: epolakis(https://github.com/kinsu) in
https://github.com/gluster/glusterfs/issues/781
fixes: bz#1788785
Change-Id: I7141bdd4e53ee99e22b354edde8d023bfc0b2cd7
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
We need to safeguard against possible zeroing out of the iatt
structure in the acl ctx, which can cause many issues.
> fixes: bz#1668286
> Change-Id: Ie81a57d7453a6624078de3be8c0845bf4d432773
> Signed-off-by: Amar Tumballi <amarts@redhat.com>
> (cherry picked from commit 6bf9637a93011298d032332ca93009ba4e377e46)
fixes: bz#1785493
Change-Id: Ie81a57d7453a6624078de3be8c0845bf4d432773
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
A null character string is a valid xattr value in a file system. But for
those xattrs processed by md-cache, it does not update its entries if the
value is null ('\0'). This results in ENODATA when those xattrs are
queried afterwards via getxattr(), causing failures in basic operations
like create, copy etc. in a specially configured Samba setup for Mac OS
clients.
On the other hand, snapview-server internally sets an empty string ("")
as the value for xattrs received as part of listxattr(), and these are
not intended to be cached. Therefore we try to maintain that behaviour
using an additional dictionary key to prevent updating of entries in
getxattr() and fgetxattr() callbacks in md-cache.
Credits: Poornima G <pgurusid@redhat.com>
Change-Id: I7859cbad0a06ca6d788420c2a495e658699c6ff7
Fixes: bz#1785228
Signed-off-by: Anoop C S <anoopcs@redhat.com>
(cherry picked from commit b4b683736367d93daad08a5ee6ca95778c07c5a4)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Changes in locks xlator:
Added support for per-domain inodelk count requests.
Caller needs to set GLUSTERFS_MULTIPLE_DOM_LK_CNT_REQUESTS key in the
dict and then set each key with name
'GLUSTERFS_INODELK_DOM_PREFIX:<domain name>'.
In the response dict, the xlator will send the per domain count as
values for each of these keys.
Changes in AFR:
Replaced afr_selfheal_locked_inspect() with afr_lockless_inspect(). Logic has
been added to make the latter behave the same as the former, thus not
breaking the current heal info output behaviour.
fixes: bz#1783858
Change-Id: Ie9e83c162aa77f44a39c2ba7115de558120ada4d
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
(cherry picked from commit d7e049160a9dea988ded5816491c2234d40ab6b3)
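A hedged sketch of how a caller might build the request dict described above: the key names come from the commit text (assumed here to be string macros), the dict helpers are the standard GlusterFS dict API, and the domain name is just an example:

    dict_t *xdata = dict_new();
    char key[256] = {0};

    /* ask the locks xlator for per-domain inodelk counts */
    dict_set_int32(xdata, GLUSTERFS_MULTIPLE_DOM_LK_CNT_REQUESTS, 1);

    /* one key per domain of interest: '<prefix>:<domain name>' */
    snprintf(key, sizeof(key), "%s:%s", GLUSTERFS_INODELK_DOM_PREFIX,
             "example-heal-domain");
    dict_set_int32(xdata, key, 0);

    /* ... send xdata with the fop; the response dict carries the per-domain
     * count as the value of the same key ... */

    dict_unref(xdata);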
|
|
|
|
|
|
|
|
|
|
|
|
| |
The fd processing loops in the
dht_migration_complete_check_task and the
dht_rebalance_inprogress_task functions were unsafe
and could cause an open to be sent on an already freed
fd. This has been fixed.
Change-Id: I0a3c7d2fba314089e03dfd704f9dceb134749540
fixes: bz#1769315
Signed-off-by: N Balachandran <nbalacha@redhat.com>
|
|
|
|
|
|
|
| |
fixes: bz#1775495
Change-Id: Iea289032a8feecf2945668d3fb44a6a53089fdea
Signed-off-by: Xie Changlong <xiechanglong@cmss.chinamobile.com>
(cherry picked from commit 99d210a704d2e85c95fac5edcf435bd059aad368)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Problem:
In a situation where B1 blames B2, B2 blames B1 and B3 doesn't blame
anything for entry heal, heal will not complete even though we have
clear source and sinks. This will happen because while doing
afr_selfheal_find_direction() only the bricks which are blamed by
non-accused bricks are considered as sinks. Later in
__afr_selfheal_entry_finalize_source() when it tries to mark all the
non-sources as sinks it fails to do so because there won't be any
healed_sinks marked, no witness present and there will be a source.
Fix:
If there is a source and no healed_sinks, then reset all the locked
sources to 0 and healed sinks to 1 to do conservative merge.
Change-Id: If40d8bc95d52a52b2730f55bdcf135109b421548
Fixes: bz#1760699
Signed-off-by: karthik-us <ksubrahm@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
On a rhel8 machine, cleanup of shards is not happening properly for a
sharded file with hard links. The hard-link count needs to be refreshed
to make the cleanup successful.
The problem occurs when a sharded file with hard links gets removed.
When the last link file is removed, all shards need to be cleaned up.
But in the current code structure the shard xlator, instead of sending a lookup
to get the link count, uses stale cached values from the inode ctx, thereby
removing the base shard but not the shards present in the /.shard directory.
This fix makes sure that the first unlink's callback marks the inode ctx
as needing a refresh, so that in the next operation it is refreshed by
looking up the file on disk.
>fixes: bz#1764110
>Change-Id: I81625c7451dabf006c0864d859b1600f3521b648
>Signed-off-by: Sheetal Pamecha <spamecha@redhat.com>
>(Reviewed on upstream link https://review.gluster.org/#/c/glusterfs/+/23585/)
Fixes: bz#1768760
Change-Id: I81625c7451dabf006c0864d859b1600f3521b648
Signed-off-by: Sheetal Pamecha <spamecha@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Ever since we added quorum checks for lookups in afr via commit
bd44d59741bb8c0f5d7a62c5b1094179dd0ce8a4, the split-brain resolution
commands would not work for replica 3 because there would be no
readables for the lookup fop.
The argument was that split-brains do not occur in replica 3 but we do
see (data/metadata) split-brain cases once in a while which indicate that there are
a few bugs/corner cases yet to be discovered and fixed.
Fortunately, commit 8016d51a3bbd410b0b927ed66be50a09574b7982 added
GF_CLIENT_PID_GLFS_HEALD as the pid for all fops made by glfsheal. If we
leverage this and allow lookups in afr when pid is GF_CLIENT_PID_GLFS_HEALD,
split-brain resolution commands will work for replica 3 volumes too.
Likewise, the check is added in shard_lookup as well to permit resolving
split-brains by specifying "/.shard/shard-file.xx" as the file name
(which previously used to fail with EPERM).
Change-Id: I3c543dea79caf7cfbc1633e9089cb1cdd2538ba9
Fixes: bz#1760791
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
(cherry picked from commit 47dbd753187f69b3835d2e42fdbe7485874c4b3e)
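A sketch of the kind of pid check this leverages, assuming the GF_CLIENT_PID_GLFS_HEALD value mentioned above is visible in frame->root->pid; the allow_lookup flag is hypothetical and the surrounding quorum/readables logic is omitted:

    /* in the lookup path: let glfsheal's lookups through even when
     * there are no readable subvolumes */
    if (frame->root->pid == GF_CLIENT_PID_GLFS_HEALD) {
        /* the split-brain resolution CLI is asking; do not fail with EIO,
         * proceed so that the resolution commands can examine the file */
        allow_lookup = _gf_true;
    }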
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Problem : When a file is migrated, dht attempts to re-open all open
fds on the new cached subvol. Earlier, if dht had not opened the fd,
the client xlator would be unable to find the remote fd and would
fall back to using an anon fd for the fop. That behavior changed with
https://review.gluster.org/#/c/glusterfs/+/15804, causing fops to fail
with EBADFD if the fd was not available on the cached subvol.
The client xlator returns EBADFD if the remote fd is not found but
dht only checks for EBADF before re-opening fds on the new cached subvol.
Solution: Handle EBADFD in the dht code path to avoid the issue
> Change-Id: I43c51995cdd48d05b12e4b2889c8dbe2bb2a72d8
> Fixes: bz#1758579
> (cherry picked from commit 9314a9fbf487614c736cf6c4c1b93078d37bb9df)
> (Reviewed on upstream link https://review.gluster.org/23518)
Change-Id: I43c51995cdd48d05b12e4b2889c8dbe2bb2a72d8
Fixes: bz#1761910
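A minimal sketch of the errno check being widened, as described above (variable names such as need_open are illustrative):

    /* dht fd re-open check: the client xlator may return either EBADF
     * or EBADFD when the remote fd is missing on the new cached subvol */
    if (op_ret == -1 && (op_errno == EBADF || op_errno == EBADFD)) {
        /* re-open the fd on the cached subvol instead of failing the fop */
        need_open = _gf_true;
    }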
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
When an iot_worker terminates, its resources are not reaped, which
consumes lots of memory.
Detach iot_worker to automatically release its resources back to the
system.
Change-Id: I71fabb2940e76ad54dc56b4c41aeeead2644b8bb
fixes: bz#1768742
Signed-off-by: Liguang Li <liguang.lee6@gmail.com>
Signed-off-by: N Balachandran <nbalacha@redhat.com>
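The fix is the classic detached-thread pattern; a self-contained sketch of it (not the actual io-threads code):

    #include <pthread.h>

    static void *
    iot_worker(void *arg)
    {
        /* a detached thread is reaped automatically when it exits,
         * so no pthread_join() is needed and nothing accumulates */
        pthread_detach(pthread_self());

        /* ... process queued fops ... */
        return NULL;
    }

    /* alternatively, create the thread detached from the start:
     *   pthread_attr_t attr;
     *   pthread_attr_init(&attr);
     *   pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
     *   pthread_create(&tid, &attr, iot_worker, NULL);
     */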
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
WB saves the wb_inode in frame->local for the truncate and
ftruncate fops. This value is not cleared in case of error
on a conflicting write request. FRAME_DESTROY finds a non-null
frame->local and tries to free it using mem_put. However,
wb_inode is allocated using GF_CALLOC, causing the
process to crash.
credit: vpolakis@gmail.com
Change-Id: I217f61470445775e05145aebe44c814731c1b8c5
fixes: bz#1755678
Signed-off-by: N Balachandran <nbalacha@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
After add-brick and rebalance, the ctime xattr is not present
on rebalanced directories on new brick. This patch fixes the
same.
Note that ctime still doesn't support consistent time across
distribute sub-volume.
This patch also fixes the in-memory inconsistency of time attributes
when metadata is self healed.
Backport of:
> Patch: https://review.gluster.org/23127/
> Change-Id: Ia20506f1839021bf61d4753191e7dc34b31bb2df
> BUG: 1734026
> Signed-off-by: Kotresh HR <khiremat@redhat.com>
(cherry picked from commit 304640e55c0f3c6d15f4e230dc6376e4f5020fea)
Change-Id: Ia20506f1839021bf61d4753191e7dc34b31bb2df
Signed-off-by: Kotresh HR <khiremat@redhat.com>
fixes: bz#1752429
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
A dict was passed to a function that calls dict_unref() without taking
any additional reference. Given that the same dict is also used after
the function returns, this was causing a use-after-free situation.
To fix the issue, we simply take an additional reference before calling
the function.
> Fixes: bz#1723890
> Change-Id: I98c6b76b08fe3fa6224edf281a26e9ba1ffe3017
> Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
> (cherry picked from commit f36086db87aae24c10abde434f081d78b942735e)
Fixes: bz#1752245
Change-Id: I98c6b76b08fe3fa6224edf281a26e9ba1ffe3017
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
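A sketch of the extra-reference pattern the fix uses, with the GlusterFS dict refcount API; callee_that_unrefs() and use_dict_some_more() stand in for the real functions, which are not named in this log:

    /* the callee consumes (unrefs) the dict, but we still need it
     * afterwards, so take our own reference first */
    dict_ref(xdata);
    callee_that_unrefs(xdata);     /* internally calls dict_unref(xdata) */

    /* xdata is still valid here because of our extra ref */
    use_dict_some_more(xdata);

    dict_unref(xdata);             /* drop our reference when done */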
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
We were not passing xattr_req when doing a name self-heal
or a metadata heal. Because of this, some xdata
was missing, which caused i/o errors.
Backport of > https://review.gluster.org/#/c/glusterfs/+/23024/
>Change-Id: Ibfb1205a7eb0195632dc3820116ffbbb8043545f
>Fixes: bz#1728770
>Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Fixes: bz#1749305
Change-Id: Ibfb1205a7eb0195632dc3820116ffbbb8043545f
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
(cherry picked from commit d026f0bcfd301712e4f0671ccf238f43f2e6dd30)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Problem: IPv6 hostname addresses are not parsed correctly in the
         function glusterd_check_brick_order.
Solution: Update the code to parse the hostname address correctly.
> Change-Id: Ifb2f83f9c6e987b2292070e048e97eeb51b728ab
> Fixes: bz#1747746
> Credits: Amgad Saleh <amgad.saleh@nokia.com>
> Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
> (cherry picked from commit 6563ffb04d7ba51a89726e7c5bbb85c7dbc685b5)
Change-Id: Ifb2f83f9c6e987b2292070e048e97eeb51b728ab
Fixes: bz#1749664
Credits: Amgad Saleh <amgad.saleh@nokia.com>
Signed-off-by: Mohit Agrawal <moagrawal@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
get_real_filename is implemented as a virtual extended attribute to help
Samba implement the case-insensitive but case preserving SMB protocol
more efficiently. It is implemented as a getxattr call on the parent directory
with the virtual key of "get_real_filename:<entryname>" by looking for a
spelling with different case for the provided file/dir name (<entryname>)
and returning this correct spelling as a result if the entry is found.
Originally (05aaec645a6262d431486eb5ac7cd702646cfcfb), the
implementation used the ENOENT errno to return the authoritative answer
that <entryname> does not exist in any case folding.
Now this implementation is actually a violation or misuse of the defined
API for the getxattr call which returns ENOENT for the case that the dir
that the call is made against does not exist and ENOATTR (or the synonym
ENODATA) for the case that the xattr does not exist.
This was not a problem until the gluster fuse-bridge was changed
to map ENOENT to ESTALE in 59629f1da9dca670d5dcc6425f7f89b3e96b46bf,
after which the getxattr call for get_real_filename returned
ESTALE instead of ENOENT, breaking the expectation in Samba.
It is an independent problem that ESTALE should not leak out to user
space but is intended to trigger retries between fuse and gluster.
But nevertheless, the semantics seem to be incorrect here and should
be changed.
This patch changes the implementation of the get_real_filename virtual
xattr to correctly return ENOATTR instead of ENOENT if the file/directory
being looked up is not found.
The Samba glusterfs_fuse vfs module which takes advantage of the
get_real_filename over a fuse mount will receive a corresponding change
to map ENOATTR to ENOENT. Without this change, it will still work
correctly, but the performance optimization for nonexisting files is
lost. On the other hand, this change removes the distinction
between the old not-implemented case and the implemented case.
So a Samba that is changed to treat ENOATTR like ENOENT will not work
correctly any more against old servers that don't implement
get_real_filename, i.e. existing files will be reported as non-existing.
Change-Id: I971b427ab8410636d5d201157d9af70e0d075b67
fixes: bz#1745914
Signed-off-by: Michael Adam <obnox@samba.org>
(cherry picked from commit dc1b87fcfef08c9497b0c02b2410c9d18bbc2dba)
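A sketch of the errno change in the virtual-xattr handler, assuming the usual pattern of unwinding getxattr with an op_errno; ENOATTR is a synonym for ENODATA on Linux, and the found/op_ret variables are illustrative:

    /* get_real_filename: no entry matches <entryname> in any case folding */
    if (!found) {
        /* previously: op_errno = ENOENT;  (misuse of getxattr semantics,
         * and fuse now maps ENOENT to ESTALE, breaking Samba) */
        op_errno = ENOATTR;   /* "the xattr does not exist" (ENODATA) */
        op_ret = -1;
    }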
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
...whenever shd is re-enabled after disabling or there is a change in
`cluster.heal-timeout`, without needing to restart shd or waiting for the
current `cluster.heal-timeout` seconds to expire.
See BZ 1743988 for more details.
Change-Id: Ia5ebd7c8e9f5b54cba3199c141fdd1af2f9b9bfe
fixes: bz#1747301
Reported-by: Glen Kiessling <glenk1973@hotmail.com>
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
(cherry picked from commit 600ba94183333c4af9b4a09616690994fd528478)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
On systems that don't support "timespec_get" (e.g., centos6), it
was using "clock_gettime" with "CLOCK_MONOTONIC" to get unix epoch
time, which is incorrect. This patch introduces "timespec_now_realtime",
which uses "clock_gettime" with "CLOCK_REALTIME" and fixes
the issue.
Backport of:
> Patch: https://review.gluster.org/23274/
> Change-Id: I57be35ce442d7e05319e82112b687eb4f28d7612
> Signed-off-by: Kotresh HR <khiremat@redhat.com>
> BUG: 1743652
(cherry picked from commit d14d0749340d9cb1ef6fc4b35f2fb3015ed0339d)
Change-Id: I57be35ce442d7e05319e82112b687eb4f28d7612
Signed-off-by: Kotresh HR <khiremat@redhat.com>
fixes: bz#1746145
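The helper described above is essentially a thin wrapper; a self-contained sketch consistent with the description (the real implementation may add error handling):

    #include <time.h>

    /* wall-clock (unix epoch) time, unlike CLOCK_MONOTONIC which counts
     * from an arbitrary start point */
    void
    timespec_now_realtime(struct timespec *ts)
    {
        clock_gettime(CLOCK_REALTIME, ts);
    }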
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
When atime|mtime is updated via utime family of syscalls,
ctime is not updated. This patch fixes the same.
Backport of:
> Patch: https://review.gluster.org/23177/
> Change-Id: I7f86d8f8a1e06a332c3449b5bbdbf128c9690f25
> BUG: 1738786
> Signed-off-by: Kotresh HR <khiremat@redhat.com>
(cherry picked from commit 95f71df31dc73d85df722b0e7d3a7eb1e0237e7f)
Change-Id: I7f86d8f8a1e06a332c3449b5bbdbf128c9690f25
fixes: bz#1746142
Signed-off-by: Kotresh HR <khiremat@redhat.com>
|
|
|
|
|
|
| |
Change-Id: Id7e003e4a53d0a0057c1c84e1cd704c80a6cb015
Fixes: bz#1744874
Signed-off-by: Csaba Henk <csaba@redhat.com>
|
|
|
|
|
|
|
|
|
|
| |
In posix_gfid_set, the proper error is not captured in one of
the failure cases.
Change-Id: I1c13f0691a15d6893f1037b3a5fe385a99657e00
Fixes: bz#1736481
Signed-off-by: Raghavendra Bhat <raghavendra@redhat.com>
(cherry picked from commit ed7a3793073670e787063c47e55010fc7c963064)
|
|
|
|
|
|
|
|
|
|
| |
When discard/truncate performs write fop, it should do so
after updating lock->good_mask to make sure readv happens
on the correct mask
fixes: bz#1739424
Change-Id: Idfef0bbcca8860d53707094722e6ba3f81c583b7
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Problem:
When a file needs to be re-opened, O_APPEND and O_EXCL
flags are not filtered in EC.
- O_APPEND should be filtered because EC doesn't send O_APPEND below EC for
open, to make sure writes happen on the individual fragments instead of at the
end of the file.
- O_EXCL should be filtered because shd could have created the file, so open
should succeed even when the file exists.
- O_CREAT should be filtered because open happens with the gfid as parameter,
so the open fop would create just the gfid, which leads to problems.
Fix:
Filter out these flags in reopen.
Change-Id: Ia280470fcb5188a09caa07bf665a2a94bce23bc4
Fixes: bz#1739426
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
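A sketch of the flag filtering, self-contained apart from the function name, which is hypothetical; it masks out the three flags for the reasons listed above:

    #include <fcntl.h>

    /* flags that must not be used when EC re-opens an fd internally:
     * O_APPEND (writes go to fragment offsets), O_EXCL and O_CREAT
     * (the file/gfid may already exist or be created by shd) */
    static int
    ec_fix_reopen_flags(int flags)
    {
        return flags & ~(O_APPEND | O_EXCL | O_CREAT);
    }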
|
|
|
|
|
|
|
|
|
|
| |
There are cases where fop->mask may have fop->healing added
and readv shouldn't be wound on fop->healing. To avoid this
always wind readv to lock->good_mask
updates: bz#1739424
Change-Id: I2226ef0229daf5ff315d51e868b980ee48060b87
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
EC doesn't allow concurrent writes on overlapping areas, they are
serialized. However non-overlapping writes are serviced in parallel.
When a write is not aligned, EC first needs to read the entire chunk
from disk, apply the modified fragment and write it again.
The problem appears on sparse files because a write to an offset
implicitly creates data on offsets below it (so, in some way, they
are overlapping). For example, if a file is empty and we read 10 bytes
from offset 10, read() will return 0 bytes. Now, if we write one byte
at offset 1M and retry the same read, the system call will return 10
bytes (all containing 0's).
So if we have two writes, the first one at offset 10 and the second one
at offset 1M, EC will send both in parallel because they do not overlap.
However, the first one will try to read missing data from the first chunk
(i.e. offsets 0 to 9) to recombine the entire chunk and do the final write.
This read will happen in parallel with the write to 1M. What could happen
is that half of the bricks process the write before the read, and the other
half do the read before the write. Some bricks will return 10 bytes of
data while the others will return 0 bytes (because the file on the brick
has not been expanded yet).
When EC tries to recombine the answers from the bricks, it can't, because
it needs more than half consistent answers to recover the data. So this
read fails with EIO error. This error is propagated to the parent write,
which is aborted and EIO is returned to the application.
The issue happened because EC assumed that a write to a given offset
implies that offsets below it exist.
This fix prevents the read of the chunk from bricks if the current size
of the file is smaller than the read chunk offset. This size is
correctly tracked, so this fixes the issue.
Also modified the ec-stripe.t file for Test #13 within it.
In this patch, if the file size is less than the offset we are writing to, we
fill zeros in the head and tail and do not consider it a stripe cache miss.
That makes sense, as we know what data that part holds and there is
no need to read it from the bricks.
Change-Id: Ic342e8c35c555b8534109e9314c9a0710b6225d6
Fixes: bz#1739427
Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
|
|
|
|
|
|
|
|
|
| |
If the lock has info, the fop should inherit the healing mask from it.
Otherwise, the fop cannot inherit the right healing mask when changed_flags is zero.
Change-Id: Ife80c9169d2c555024347a20300b0583f7e8a87f
updates: bz#1739424
Signed-off-by: Kinglong Mee <mijinlong@horiscale.com>
|
|
|
|
|
|
|
| |
Fixes: bz#1741041
Change-Id: I29e338bac62104233a6f80212df8d0fb016affda
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
(cherry picked from commit 8e9c53ebf16705b9a1db2fc486dc24a5cb244ddd)
|
|
|
|
|
|
|
| |
Change-Id: I0cebaaf55c09eb1fb77a274268ff564e871b743b
fixes: bz#1740316
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
(cherry picked from commit 51237eda7c4b3846d08c5d24d1e3fe9b7ffba1d4)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
I hit a crash when using libgfapi.
In libgfapi, glfs_poller() calls event_dispatch() (api/src/glfs.c:721),
and event_dispatch() is defined locally by libglusterfs. The problem is
that the name event_dispatch() is exactly the same as the one from the
OS's libevent package.
For example, with an executable program Foo that uses and links both
libevent and libgfapi at the same time, I can hit the crash, like:
kernel: glfs_glfspoll[68486]: segfault at 1c0 ip 00007fef006fd2b8 sp
00007feeeaffce30 error 4 in libevent-2.0.so.5.1.9[7fef006ed000+46000]
The link for Foo is:
lib_foo_LADD = -levent $(GFAPI_LIBS)
It will crash.
This is because glfs_poller() is calling the event_dispatch() from
libevent, not the one from libglusterfs.
The gfapi link info:
GFAPI_LIBS = -lacl -lgfapi -lglusterfs -lgfrpc -lgfxdr -luuid
If I link Foo like:
lib_foo_LADD = $(GFAPI_LIBS) -levent
it works well without any problem.
And if Foo calls a private lib, such as handler_glfs.so, which links
GFAPI_LIBS directly while Foo doesn't and instead dlopen()s
handler_glfs.so, then the crash is hit every time.
The link info will be:
foo_LADD = -levent
libhandler_glfs_LIBADD = $(GFAPI_LIBS)
I can avoid the crash temporarily by linking GFAPI_LIBS in Foo too:
foo_LADD = $(GFAPI_LIBS) -levent
libhandler_glfs_LIBADD = $(GFAPI_LIBS)
But this is ugly since Foo doesn't use any APIs from GFAPI_LIBS.
And when the --as-needed link option is added (on many dists it is the
default), the crash is back again and the above workaround doesn't work.
Backport of:
> https://review.gluster.org/#/c/glusterfs/+/23110/
> Change-Id: I38f0200b941bd1cff4bf3066fca2fc1f9a5263aa
> Fixes: #699
> Signed-off-by: Xiubo Li <xiubli@redhat.com>
Change-Id: I38f0200b941bd1cff4bf3066fca2fc1f9a5263aa
updates: bz#1740519
Signed-off-by: Xiubo Li <xiubli@redhat.com>
(cherry picked from commit 799edc73c3d4f694c365c6a7c27c9ab8eed5f260)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The fop_wind_count can go negative on the unwind path of the IO when
fencing is enabled, leading to a hang.
Also changed the code so that fop_wind_count needs to be maintained only
till fencing is enabled on the file.
> updates: bz#1717824
> Change-Id: Icd04b42bc16cd3d50eaa581ee57233910194f480
> signed-off-by: Susant Palai <spalai@redhat.com>
(backport of https://review.gluster.org/#/c/glusterfs/+/23088/)
fixes: bz#1740077
Change-Id: Icd04b42bc16cd3d50eaa581ee57233910194f480
Signed-off-by: Susant Palai <spalai@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
For nfs EXCLUSIVE mode, create may set a later time as
mtime (the verifier); this must not be applied to ctime, because
storage/ctime does not allow setting ctime to an earlier time.
/* Earlier, mdata was updated only if the existing time is less
* than the time to be updated. This would fail the scenarios
* where mtime can be set to any time using the syscall. Hence
* just updating without comparison. But the ctime is not
* allowed to changed to older date.
*/
Following the kernel's setattr behaviour, always set ctime at setattr,
and do not set ctime from mtime in storage/ctime.
Backport of:
> Patch: https://review.gluster.org/23154
> Change-Id: I5cfde6cb7f8939da9617506e3dc80bd840e0d749
> BUG: 1737288
> Signed-off-by: Kinglong Mee <kinglongmee@gmail.com>
Change-Id: I5cfde6cb7f8939da9617506e3dc80bd840e0d749
fixes: bz#1739437
Signed-off-by: Kotresh HR <khiremat@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Problem:
Ctime heals the ctime xattr ("trusted.glusterfs.mdata") in lookup
if it's not present. In a multi client scenario, there is a race
which results in updating the ctime xattr to older value.
e.g. Let c1 and c2 be two clients and file1 be the file which
doesn't have the ctime xattr. Let the ctime of file1 be t1.
(ctime heals time attributes from the backend when they are not present).
Now following operations are done on mount
c1 -> ls -l /mnt/file1 | c2 -> ls -l /mnt/file1;echo "append" >> /mnt/file1;
The race is that both c1 and c2 didn't fetch the ctime xattr in lookup,
so both of them tries to heal ctime to time 't1'. If c2 wins the race and
appends the file before c1 heals it, it sets the time to 't1' and updates
it to 't2' (because of append). Now c1 proceeds to heal and sets it to 't1'
which is incorrect.
Solution:
Compare the times during heal and only update the larger time. This is the
general approach used in ctime feature but got missed with healing legacy
files.
Backport of:
> Patch: https://review.gluster.org/23131
> BUG: 1734299
> Change-Id: I930bda192c64c3d49d0aed431ce23d3bc57e51b7
> Signed-off-by: Kotresh HR <khiremat@redhat.com>
fixes: bz#1739436
Change-Id: I930bda192c64c3d49d0aed431ce23d3bc57e51b7
Signed-off-by: Kotresh HR <khiremat@redhat.com>
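The "only update the larger time" rule is a simple comparison; a hedged sketch over second/nanosecond pairs (the function name and fields are illustrative, not the exact mdata structure):

    #include <stdint.h>

    /* keep the newer of two (sec, nsec) timestamps; used when healing the
     * ctime xattr so a slower client cannot overwrite a newer time with an
     * older one */
    static int
    mdata_time_is_newer(uint32_t new_sec, uint32_t new_nsec,
                        uint32_t old_sec, uint32_t old_nsec)
    {
        if (new_sec != old_sec)
            return new_sec > old_sec;
        return new_nsec > old_nsec;
    }

    /* heal path (sketch): only write the xattr when the candidate is newer:
     *   if (mdata_time_is_newer(in_sec, in_nsec, cur_sec, cur_nsec))
     *       update the mdata xattr;
     */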
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Problem:
When frame->local is not null, FRAME_DESTROY calls mem_put on it.
Since the stub is already destroyed in call_resume(), this leads
to a crash.
Fix:
Set frame->local to NULL before calling call_resume()
Backport of:
> Patch: https://review.gluster.org/23091
> BUG: 1593542
> Change-Id: I0f8adf406f4cefdb89d7624ba7a9d9c2eedfb1de
> Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
fixes: bz#1739430
Change-Id: I0f8adf406f4cefdb89d7624ba7a9d9c2eedfb1de
Signed-off-by: Kotresh HR <khiremat@redhat.com>
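A sketch of the ordering fix, using the standard call stub/frame APIs; the type of 'local' and its cleanup are elided:

    /* detach local before resuming: call_resume() destroys the stub, and
     * FRAME_DESTROY would otherwise mem_put() a pointer that is not a
     * mem-pool object (or has already been freed) */
    local = frame->local;
    frame->local = NULL;

    call_resume(stub);

    /* now clean up 'local' with the allocator it actually came from */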
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Problem:
Files created before ctime was enabled will not have the
"trusted.glusterfs.mdata" (stores time attributes) xattr.
Upon fops which modify either ctime or mtime, the xattr
gets created with the latest ctime, mtime and atime, which is
incorrect. It should update only the corresponding time
attribute and take the rest from the backend.
Solution:
Creating the xattr with values from the brick is not possible, as
each brick of a replica set would have different times.
So create the xattr upon successful lookup if it is not
already present.
Note To Reviewers:
The time attributes used to set xattr is got from successful
lookup. Instead of sending the whole iatt over the wire via
setxattr, a structure called mdata_iatt is sent. The mdata_iatt
contains only time attributes.
Backport of:
> Patch: https://review.gluster.org/22936
> Change-Id: I5e535631ddef04195361ae0364336410a2895dd4
> BUG: 1593542
> Signed-off-by: Kotresh HR <khiremat@redhat.com>
Change-Id: I5e535631ddef04195361ae0364336410a2895dd4
updates: bz#1739430
Signed-off-by: Kotresh HR <khiremat@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
| |
>Backport of https://review.gluster.org/#/c/glusterfs/+/22948/
>Change-Id: I0cb5320fea71306e0283509ae47024f23874b53b
>fixes: bz#1723761
Change-Id: I0cb5320fea71306e0283509ae47024f23874b53b
fixes: bz#1739399
Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
(cherry picked from commit 7d8be567f2f904fc74d0990ebce2e8afbedab918)
|
|
|
|
|
|
|
|
|
| |
Fixed a memleak in dht_rename_cbk when creating
a linkto file.
Change-Id: I705adef3cb79e33806520fc2b15558e90e2c211c
fixes: bz#1739337
Signed-off-by: N Balachandran <nbalacha@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Two reasons:
* ping responses from glusterd may not be relevant for Halo
replication. Instead, it might be interested in only knowing whether
the brick itself is responsive.
* When a brick is killed, propagating GF_EVENT_CHILD_PING of ping
response from glusterd results in GF_EVENT_DISCONNECT spuriously
propagated to parent xlators. These DISCONNECT events are from the
connections client establishes with glusterd as part of its
reconnect logic. Without GF_EVENT_CHILD_PING, the last event
propagated to parent xlators would be the first DISCONNECT event
from brick and hence subsequent DISCONNECTS to glusterd are not
propagated as protocol/client prevents same event being propagated
to parent xlators consecutively. propagating GF_EVENT_CHILD_PING for
ping responses from glusterd would change the last_sent_event to
GF_EVENT_CHILD_PING and hence protocol/client cannot prevent
subsequent DISCONNECT events
Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
Fixes: bz#1739334
Change-Id: I50276680c52f05ca9e12149a3094923622d6eaef
(cherry picked from commit 5d66eafec581fb3209af74595784be8854ca40a4)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
We are logging a warning message saying unknown-key for the
tier-enabled key. Although the tier xlator is deprecated,
this key is left behind for handling the peer rejection
issues in a heterogeneous cluster. We need not log if
this key is not found/recognised.
updates: bz#1727008
Change-Id: Ia68661898a618f99a240ca8d8a124ff6a65ebe9d
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
(cherry picked from commit 7d270d05f835d41e32572501246b50181dc9be56)
|
|
|
|
|
|
|
|
|
| |
The current list of snapshots from priv->dirents is obtained outside
the lock.
Change-Id: I8876ec0a38308da5db058397382fbc82cc7ac177
Fixes: bz#1731512
(cherry picked from commit 8e795617fd6f5193d0d52a336059ce1a28108c0e)
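A sketch of moving the read under the lock, assuming a gf_lock_t guarding priv->dirents; the lock and counter member names (priv->lock, priv->num_snaps) are assumptions, as the exact snapview-server fields are not shown in this log:

    /* read the snapshot list only while holding the lock that guards it */
    LOCK(&priv->lock);
    {
        for (i = 0; i < priv->num_snaps; i++) {
            /* copy out what is needed from priv->dirents[i] here,
             * instead of iterating the list after UNLOCK */
        }
    }
    UNLOCK(&priv->lock);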
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Problem:
In case of thin arbiter, before index healer starts crawling the
indices at every heal-timeout interval, even if there is nothing to
be healed it will send an upcall notification to all the clients to
release any AFR_TA_DOM_NOTIFY locks that they hold. SHD will wait
for the upcall to return before proceeding with the heal even though
there is nothing to be healed. This also invalidates the cached
information about the bricks states on the clients which leads to
extra calls on TA from clients for the next reads & writes if needed.
This will impact the IO performance.
Fix:
- Before sending the upcall to the clients, check for any pending heals
on TA without taking any locks.
- If there is nothing marked bad on TA, then continue with the index
crawl to heal any dirty markings present on the files due to any post-op
failure.
- If there is a brick marked as bad on TA, then take the
AFR_TA_DOM_NOTIFY lock on TA from SHD, get the state on TA and
continue with the current healing process.
Change-Id: Ieb477bc6cb18bbdfd4e7a0453c5ed79b574ec9d6
fixes: bz#1729483
Signed-off-by: karthik-us <ksubrahm@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
For a normal volume, we update the pid from the process itself,
either while daemonizing or at the end of init in no-daemon
mode. Along with updating the pid
we also lock the file, to make sure that the process is
running fine.
With brick mux, we were updating the pidfile from glusterd
after an attach/detach request.
There are two problems with this approach.
1) We are not holding a pidlock for any file other than the parent
process's.
2) There is a chance of race conditions with attach/detach.
For example, shd start and a volume stop could race. Let's say
we are starting an shd and it is attached to a volume.
While we are trying to link the pid file to the running process,
the file could already have been deleted by the thread doing the volume stop.
Backport of : https://review.gluster.org/#/c/glusterfs/+/22935/
>Change-Id: I29a00352102877ce09ea3f376ca52affceb5cf1a
>Updates: bz#1722541
>Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Change-Id: I29a00352102877ce09ea3f376ca52affceb5cf1a
Updates: bz#1732668
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
|