<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/features, branch v7.6</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>features/utime: Don't access frame after stack-wind</title>
<updated>2020-04-07T11:22:36+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pkarampu@redhat.com</email>
</author>
<published>2020-04-02T10:00:28+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=b0e61d5a15e61c7defd47781bf05a859b0145124'/>
<id>b0e61d5a15e61c7defd47781bf05a859b0145124</id>
<content type='text'>
Problem:
The frame is accessed after the stack-wind. This can lead to a crash
if the cbk frees the frame.

Fix:
Use a new frame for the wind instead.

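An illustrative sketch of the pattern (hypothetical fop and names, not
a hunk from this patch):

    /* Before: 'frame' is used after the wind.  If the cbk has already
     * fired and freed the frame, this is a use-after-free. */
    STACK_WIND(frame, utime_setattr_cbk, FIRST_CHILD(this),
               FIRST_CHILD(this)-&gt;fops-&gt;setattr, loc, stbuf, valid, xdata);
    local = frame-&gt;local;    /* may crash */

    /* After: wind a copy; the cbk owns and destroys the copy, and the
     * original frame is never touched after the wind. */
    call_frame_t *new_frame = copy_frame(frame);
    STACK_WIND(new_frame, utime_setattr_cbk, FIRST_CHILD(this),
               FIRST_CHILD(this)-&gt;fops-&gt;setattr, loc, stbuf, valid, xdata);
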
Fixes: #832
Change-Id: I64754609f1114b0bbd4d1336fa81a56f2cca6e03
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
</content>
</entry>
<entry>
<title>utime: resolve an issue of permission denied logs</title>
<updated>2020-04-07T11:14:40+00:00</updated>
<author>
<name>Amar Tumballi</name>
<email>amar@kadalu.io</email>
</author>
<published>2020-03-03T19:16:08+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=0a67613d683cd07aea8d91243db7ef2298f25921'/>
<id>0a67613d683cd07aea8d91243db7ef2298f25921</id>
<content type='text'>
In cases where the uid is not 0, the acl xlator can return errors.
So, set `uid = 0;` with a pid indicating that this is set from UTIME
activity.

The message "E [MSGID: 148002] [utime.c:146:gf_utime_set_mdata_setxattr_cbk] 0-dev_SNIP_data-utime: dict set of key for set-ctime-mdata failed [Permission denied]" repeated 2 times between [2019-12-19 21:27:55.042634] and [2019-12-19 21:27:55.047887]
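
Roughly, the fix looks like this (sketch; the exact pid constant used
by the patch may differ):

    /* Run the internal setxattr as root so the acl xlator does not
     * reject it; a special pid marks the fop as UTIME's own. */
    frame-&gt;root-&gt;uid = 0;
    frame-&gt;root-&gt;gid = 0;
    frame-&gt;root-&gt;pid = GF_CLIENT_PID_SET_UTIME;    /* illustrative */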

Change-Id: Ieadf329835a40a13ac0bf908dac776e66954466c
Updates: #832
Signed-off-by: Amar Tumballi &lt;amar@kadalu.io&gt;
(cherry picked from commit eb916c057036db8289b41265797e5dce066d1512)
</content>
</entry>
<entry>
<title>features/shard: Fix crash during shards cleanup in error cases</title>
<updated>2020-04-07T11:11:25+00:00</updated>
<author>
<name>Krutika Dhananjay</name>
<email>kdhananj@redhat.com</email>
</author>
<published>2020-03-23T06:17:10+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=6909b249139679acc2ffb04486b5743b7bef8a10'/>
<id>6909b249139679acc2ffb04486b5743b7bef8a10</id>
<content type='text'>
A crash is seen during a reattempt to clean up shards in the
background upon remount. The crash recurs on every remount, which
means remounting is no workaround.

In such a situation, the in-memory base inode object will not exist
(new process, non-existent base shard).
So local-&gt;resolver_base_inode will be NULL.

In the event of an error (in this case, running out of space), the
process would crash while logging the error in the following line -

        gf_msg(this-&gt;name, GF_LOG_ERROR, local-&gt;op_errno, SHARD_MSG_FOP_FAILED,
               "failed to delete shards of %s",
               uuid_utoa(local-&gt;resolver_base_inode-&gt;gfid));

Fixed that by using local-&gt;base_gfid as the source of gfid when
local-&gt;resolver_base_inode is NULL.
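
Roughly (sketch following the description above; the actual hunk may
differ):

    uuid_t gfid = {0};

    if (local-&gt;resolver_base_inode)
        gf_uuid_copy(gfid, local-&gt;resolver_base_inode-&gt;gfid);
    else
        gf_uuid_copy(gfid, local-&gt;base_gfid);

    gf_msg(this-&gt;name, GF_LOG_ERROR, local-&gt;op_errno, SHARD_MSG_FOP_FAILED,
           "failed to delete shards of %s", uuid_utoa(gfid));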

Change-Id: I0b49f2b58becd0d8874b3d4b14ff8d92a89d02d5
Fixes: #1127
Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
(cherry picked from commit cc43ac8651de9aa508b01cb259b43c02d89b2afc)
</content>
</entry>
<entry>
<title>multiple: fix bad type cast</title>
<updated>2020-03-16T08:21:00+00:00</updated>
<author>
<name>Xavi Hernandez</name>
<email>xhernandez@redhat.com</email>
</author>
<published>2019-12-20T13:14:32+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=dfaaace24d26d8e39f7783e99ac7440eafeced74'/>
<id>dfaaace24d26d8e39f7783e99ac7440eafeced74</id>
<content type='text'>
inode_ctx_get() and inode_ctx_set() expect a 'uint64_t *'. In many
cases, the value to retrieve or store is a pointer, which is smaller
than 64 bits on some architectures (for example, 32-bit ones). In that
case, directly passing the address of the pointer cast to a
'uint64_t *' is wrong and can cause memory corruption.

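For example (illustrative types and names):

    /* Wrong on 32-bit: 'ctx' is 4 bytes, but inode_ctx_get() writes
     * 8 bytes through the pointer, corrupting adjacent memory. */
    my_ctx_t *ctx = NULL;
    inode_ctx_get(inode, this, (uint64_t *)&amp;ctx);

    /* Right: read into a real uint64_t, then convert the value. */
    uint64_t value = 0;
    inode_ctx_get(inode, this, &amp;value);
    ctx = (my_ctx_t *)(uintptr_t)value;
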
Backport of:
&gt; Change-Id: Iae616da9dda528df6743fa2f65ae5cff5ad23258
&gt; Signed-off-by: Xavi Hernandez &lt;xhernandez@redhat.com&gt;
&gt; Fixes: bz#1785611

Change-Id: Iae616da9dda528df6743fa2f65ae5cff5ad23258
Signed-off-by: Xavi Hernandez &lt;xhernandez@redhat.com&gt;
Fixes: bz#1785323
</content>
</entry>
<entry>
<title>Fix possible resource leaks.</title>
<updated>2020-02-10T07:36:46+00:00</updated>
<author>
<name>Xi Jinyu</name>
<email>xijinyu@cmss.chinamobile.com</email>
</author>
<published>2020-01-15T03:29:06+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=f77a946ad947045a04a8be0989379b2c5b7d0e8e'/>
<id>f77a946ad947045a04a8be0989379b2c5b7d0e8e</id>
<content type='text'>
In quota_log_usage() in xlators/features/quota/src/quota.c, the
quota_log_helper() function allocates memory for 'path' through
inode_path(), which should be released with GF_FREE().

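The pattern, in short (sketch, not the literal hunk):

    char *path = NULL;

    inode_path(inode, NULL, &amp;path);    /* allocates 'path' */
    /* ... log the usage ... */
    GF_FREE(path);                     /* the missing cleanup */
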
Upstream Patch:
https://review.gluster.org/#/c/glusterfs/+/24018/

Backport of:

&gt;   fixes: bz#1792707
&gt;   Change-Id: I33143bdf272bf10837061df4a1b7b2fc146162d5
&gt;   Signed-off-by: Xi Jinyu &lt;xijinyu@cmss.chinamobile.com&gt;
&gt;   (cherry picked from commit 18549de12bcfafe4ac30fc2e11ad7a3f3c216b38)

fixes: bz#1791154
Change-Id: I33143bdf272bf10837061df4a1b7b2fc146162d5
Signed-off-by: Sunny Kumar &lt;sunkumar@redhat.com&gt;
</content>
</entry>
<entry>
<title>performance/md-cache: Do not skip caching of null character xattr values</title>
<updated>2019-12-19T12:42:32+00:00</updated>
<author>
<name>Anoop C S</name>
<email>anoopcs@redhat.com</email>
</author>
<published>2019-08-10T05:00:26+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=c1cb279649b3d0c4ba88dbf0ddfd179ce6f5798f'/>
<id>c1cb279649b3d0c4ba88dbf0ddfd179ce6f5798f</id>
<content type='text'>
A null character string is a valid xattr value in a file system. But
for xattrs processed by md-cache, entries are not updated if the value
is null ('\0'). This results in ENODATA when those xattrs are queried
afterwards via getxattr(), causing failures in basic operations like
create and copy in a specially configured Samba setup for Mac OS
clients.

On the other hand, snapview-server internally sets an empty string ("")
as the value for xattrs received as part of listxattr(), and these are
not intended to be cached. Therefore we maintain that behaviour using
an additional dictionary key to prevent the update of entries in the
getxattr() and fgetxattr() callbacks in md-cache.

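Conceptually (sketch; the dictionary key name here is illustrative):

    /* snapview-server side: tag values that md-cache must not store. */
    ret = dict_set_int8(xdata, "glusterfs.skip-cache", 1);

    /* md-cache side, in the (f)getxattr cbk: honour the tag. */
    if (dict_get(xdata, "glusterfs.skip-cache"))
        goto out;    /* skip updating the cache entry */
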
Credits: Poornima G &lt;pgurusid@redhat.com&gt;

Change-Id: I7859cbad0a06ca6d788420c2a495e658699c6ff7
Fixes: bz#1785228
Signed-off-by: Anoop C S &lt;anoopcs@redhat.com&gt;
(cherry picked from commit b4b683736367d93daad08a5ee6ca95778c07c5a4)
</content>
</entry>
<entry>
<title>afr: make heal info lockless</title>
<updated>2019-12-16T05:38:25+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2019-11-07T09:48:30+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=ac108947b0f25293c154707b70ea01eb3774f542'/>
<id>ac108947b0f25293c154707b70ea01eb3774f542</id>
<content type='text'>
Changes in locks xlator:
Added support for per-domain inodelk count requests.
The caller needs to set the GLUSTERFS_MULTIPLE_DOM_LK_CNT_REQUESTS key
in the dict and then set one key per domain, named
'GLUSTERFS_INODELK_DOM_PREFIX:&lt;domain name&gt;'.
In the response dict, the xlator sends the per-domain count as the
value of each of these keys.

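A sketch of a caller's request (key construction as described above;
'dom_name' is illustrative):

    char key[128];

    snprintf(key, sizeof(key), "%s:%s", GLUSTERFS_INODELK_DOM_PREFIX,
             dom_name);
    ret = dict_set_uint32(dict, GLUSTERFS_MULTIPLE_DOM_LK_CNT_REQUESTS, 1);
    ret = dict_set_uint32(dict, key, 1);
    /* locks fills 'key' in the response dict with that domain's
     * inodelk count. */
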
Changes in AFR:
Replaced afr_selfheal_locked_inspect() with afr_lockless_inspect().
Logic has been added to make the latter behave the same as the former,
thus not breaking the current heal info output behaviour.

fixes: bz#1783858
Change-Id: Ie9e83c162aa77f44a39c2ba7115de558120ada4d
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
(cherry picked from commit d7e049160a9dea988ded5816491c2234d40ab6b3)
</content>
</entry>
<entry>
<title>tests/shard: fix tests/bugs/shard/unlinks-and-renames.t failure</title>
<updated>2019-11-13T05:05:54+00:00</updated>
<author>
<name>Sheetal Pamecha</name>
<email>spamecha@redhat.com</email>
</author>
<published>2019-10-22T09:04:06+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=7808a10af0949ee19ba4c0ab000732ba7eb67f02'/>
<id>7808a10af0949ee19ba4c0ab000732ba7eb67f02</id>
<content type='text'>
On a RHEL 8 machine, cleanup of shards does not happen properly for a
sharded file with hard-links. The hard-link count needs to be
refreshed for the cleanup to succeed.

The problem occurs when a sharded file with hard-links gets removed.
When the last link file is removed, all shards need to be cleaned up.
But in the current code, the shard xlator, instead of sending a lookup
to get the link count, uses stale cached values from the inode ctx,
thereby removing the base shard but not the shards present in the
/.shard directory.

This fix marks, in the first unlink's callback, that the inode ctx
needs a refresh, so that the next operation refreshes it by looking up
the file on-disk.

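Sketch of the idea (field and helper names are illustrative):

    /* First unlink's callback: the cached link count is now stale. */
    ctx-&gt;refresh = _gf_true;

    /* Next operation: if marked, look the file up on-disk instead of
     * trusting the cached inode ctx. */
    if (ctx-&gt;refresh)
        shard_lookup_base_file(frame, this, loc, post_fn);
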
&gt;fixes: bz#1764110
&gt;Change-Id: I81625c7451dabf006c0864d859b1600f3521b648
&gt;Signed-off-by: Sheetal Pamecha &lt;spamecha@redhat.com&gt;
&gt;(Reviewed on upstream link https://review.gluster.org/#/c/glusterfs/+/23585/)

Fixes: bz#1768760
Change-Id: I81625c7451dabf006c0864d859b1600f3521b648
Signed-off-by: Sheetal Pamecha &lt;spamecha@redhat.com&gt;
</content>
</entry>
<entry>
<title>afr: support split-brain CLI for replica 3</title>
<updated>2019-11-13T05:04:07+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2019-09-28T03:23:08+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=fc5a3dd8757ffc80869173e4758d068be2cf5d19'/>
<id>fc5a3dd8757ffc80869173e4758d068be2cf5d19</id>
<content type='text'>
Ever since we added quorum checks for lookups in afr via commit
bd44d59741bb8c0f5d7a62c5b1094179dd0ce8a4, the split-brain resolution
commands would not work for replica 3 because there would be no
readables for the lookup fop.

The argument was that split-brains do not occur in replica 3, but we
do see (data/metadata) split-brain cases once in a while, which
indicates that a few bugs/corner cases are yet to be discovered and
fixed.

Fortunately, commit 8016d51a3bbd410b0b927ed66be50a09574b7982 added
GF_CLIENT_PID_GLFS_HEALD as the pid for all fops made by glfsheal. If we
leverage this and allow lookups in afr when pid is GF_CLIENT_PID_GLFS_HEALD,
split-brain resolution commands will work for replica 3 volumes too.
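
The gist of the check (sketch; the patch's actual condition is more
involved):

    /* Let glfsheal's lookups through even when no readables exist,
     * so the split-brain CLI can examine and resolve the file. */
    if (frame-&gt;root-&gt;pid == GF_CLIENT_PID_GLFS_HEALD)
        return _gf_true;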

Likewise, the check is added in shard_lookup to permit resolving
split-brains by specifying "/.shard/shard-file.xx" as the file name
(which previously used to fail with EPERM).

Change-Id: I3c543dea79caf7cfbc1633e9089cb1cdd2538ba9
Fixes: bz#1760791
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
(cherry picked from commit 47dbd753187f69b3835d2e42fdbe7485874c4b3e)
</content>
</entry>
<entry>
<title>ctime: Fix incorrect realtime passed to frame-&gt;root-&gt;ctime</title>
<updated>2019-08-28T05:49:36+00:00</updated>
<author>
<name>Kotresh HR</name>
<email>khiremat@redhat.com</email>
</author>
<published>2019-08-20T10:19:40+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=b85d550a552d485f4a7f1eedbc00bdf1f67d6263'/>
<id>b85d550a552d485f4a7f1eedbc00bdf1f67d6263</id>
<content type='text'>
On systems that don't support "timespec_get" (e.g., CentOS 6), the
code was using "clock_gettime" with "CLOCK_MONOTONIC" to get Unix
epoch time, which is incorrect. This patch introduces
"timespec_now_realtime", which uses "clock_gettime" with
"CLOCK_REALTIME" and fixes the issue.

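In essence (simplified sketch of the new helper):

    /* CLOCK_MONOTONIC measures intervals, not wall-clock time, so it
     * must not be used for epoch timestamps.  Always ask for the
     * real-time clock instead. */
    void
    timespec_now_realtime(struct timespec *ts)
    {
        clock_gettime(CLOCK_REALTIME, ts);
    }
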
Backport of:
 &gt; Patch: https://review.gluster.org/23274/
 &gt; Change-Id: I57be35ce442d7e05319e82112b687eb4f28d7612
 &gt; Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;
 &gt; BUG: 1743652
(cherry picked from commit d14d0749340d9cb1ef6fc4b35f2fb3015ed0339d)

Change-Id: I57be35ce442d7e05319e82112b687eb4f28d7612
Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;
fixes: bz#1746145
</content>
</entry>
</feed>
