<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/tests/bugs, branch master</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>cluster/afr: Heal directory rename without rmdir/mkdir</title>
<updated>2020-10-01T12:03:25+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pkarampu@redhat.com</email>
</author>
<published>2020-04-13T14:01:51+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=9ecbd69127d373ac000e9e1be00c1829e49e64a4'/>
<id>9ecbd69127d373ac000e9e1be00c1829e49e64a4</id>
<content type='text'>
Problem 1:
When a directory is renamed while a brick is down, entry-heal
always did an rm -rf on that directory on the sink at the old
location, then did a mkdir and recreated the directory hierarchy
in the new location. This is inefficient.

Problem 2:
The rename-directory heal order may lead to a scenario where the
directory in the new location is created before it is deleted from
the old location, leaving 2 directories with the same gfid in posix.

Fix:
As part of heal, if the old location is healed first and the entry
is not present on the source brick, always rename it into a hidden
directory inside the sink brick so that when heal is triggered for
the new location, shd can rename it from this hidden directory to
the new location.

If the new-location heal is triggered first and it detects that the
directory already exists on the brick, it should skip healing the
directory until it appears in the hidden directory.
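
Below is a minimal Python sketch of the two-phase scheme described
above, not the actual afr/shd code; the hidden directory name and the
"sink" filesystem helpers are assumptions made only for illustration.

    HIDDEN_DIR = ".heal-staging"   # hypothetical name; shd uses a different one

    def heal_old_location(sink, gfid, old_path, present_on_source):
        # Old location healed first: instead of rm -rf + mkdir, park the
        # directory under the hidden directory, keyed by its gfid.
        if not present_on_source:
            sink.rename(old_path, HIDDEN_DIR + "/" + gfid)

    def heal_new_location(sink, gfid, new_path):
        staged = HIDDEN_DIR + "/" + gfid
        if sink.exists(staged):
            # The old-location heal already parked the directory: reuse it.
            sink.rename(staged, new_path)
        elif sink.has_gfid(gfid):
            # A directory with this gfid still sits elsewhere on the brick:
            # skip until the old-location heal parks it.
            return "skip"
        else:
            sink.mkdir(new_path)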

Credits: Ravi for rename-data-loss.t script

Fixes: #1211
Change-Id: I0cba2006f35cd03d314d18211ce0bd530e254843
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem 1:
When a directory is renamed while a brick is down, entry-heal
always did an rm -rf on that directory on the sink at the old
location, then did a mkdir and recreated the directory hierarchy
in the new location. This is inefficient.

Problem 2:
The rename-directory heal order may lead to a scenario where the
directory in the new location is created before it is deleted from
the old location, leaving 2 directories with the same gfid in posix.

Fix:
As part of heal, if the old location is healed first and the entry
is not present on the source brick, always rename it into a hidden
directory inside the sink brick so that when heal is triggered for
the new location, shd can rename it from this hidden directory to
the new location.

If the new-location heal is triggered first and it detects that the
directory already exists on the brick, it should skip healing the
directory until it appears in the hidden directory.
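
Below is a minimal Python sketch of the two-phase scheme described
above, not the actual afr/shd code; the hidden directory name and the
"sink" filesystem helpers are assumptions made only for illustration.

    HIDDEN_DIR = ".heal-staging"   # hypothetical name; shd uses a different one

    def heal_old_location(sink, gfid, old_path, present_on_source):
        # Old location healed first: instead of rm -rf + mkdir, park the
        # directory under the hidden directory, keyed by its gfid.
        if not present_on_source:
            sink.rename(old_path, HIDDEN_DIR + "/" + gfid)

    def heal_new_location(sink, gfid, new_path):
        staged = HIDDEN_DIR + "/" + gfid
        if sink.exists(staged):
            # The old-location heal already parked the directory: reuse it.
            sink.rename(staged, new_path)
        elif sink.has_gfid(gfid):
            # A directory with this gfid still sits elsewhere on the brick:
            # skip until the old-location heal parks it.
            return "skip"
        else:
            sink.mkdir(new_path)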

Credits: Ravi for rename-data-loss.t script

Fixes: #1211
Change-Id: I0cba2006f35cd03d314d18211ce0bd530e254843
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: Fix Add-brick with increasing replica count failure</title>
<updated>2020-09-24T12:49:15+00:00</updated>
<author>
<name>Sheetal Pamecha</name>
<email>spamecha@redhat.com</email>
</author>
<published>2020-09-23T12:11:46+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=3b8ee834a3b58780721e37767a274e7c9730eba0'/>
<id>3b8ee834a3b58780721e37767a274e7c9730eba0</id>
<content type='text'>
Problem: the add-brick operation fails with a "multiple bricks on the
same server" error when the replica count is increased.

This was happening because of extra iterations in a loop that compares
hostnames: if fewer bricks were supplied than the "replica" count, a
brick would get compared with itself, resulting in the above error.
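
A minimal Python sketch of the comparison with the corrected loop
bound, not the actual glusterd C code; the function and variable names
are invented for illustration only.

    def conflicting_bricks(new_brick_hosts):
        # new_brick_hosts: hostnames of the bricks actually supplied to
        # add-brick. Bounding both loops by this list (rather than by the
        # replica count) ensures a brick is never compared with itself
        # when fewer bricks than the replica count are passed in.
        for i, host in enumerate(new_brick_hosts):
            for j, other in enumerate(new_brick_hosts):
                if i != j and host == other:
                    return True
        return False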

Fixes: #1508
Change-Id: I8668e964340b7bf59728bb838525d2db062197ed
Signed-off-by: Sheetal Pamecha &lt;spamecha@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem: the add-brick operation fails with a "multiple bricks on the
same server" error when the replica count is increased.

This was happening because of extra iterations in a loop that compares
hostnames: if fewer bricks were supplied than the "replica" count, a
brick would get compared with itself, resulting in the above error.
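
A minimal Python sketch of the comparison with the corrected loop
bound, not the actual glusterd C code; the function and variable names
are invented for illustration only.

    def conflicting_bricks(new_brick_hosts):
        # new_brick_hosts: hostnames of the bricks actually supplied to
        # add-brick. Bounding both loops by this list (rather than by the
        # replica count) ensures a brick is never compared with itself
        # when fewer bricks than the replica count are passed in.
        for i, host in enumerate(new_brick_hosts):
            for j, other in enumerate(new_brick_hosts):
                if i != j and host == other:
                    return True
        return False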

Fixes: #1508
Change-Id: I8668e964340b7bf59728bb838525d2db062197ed
Signed-off-by: Sheetal Pamecha &lt;spamecha@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>fuse: fetch arbitrary number of groups from /proc/[pid]/status</title>
<updated>2020-08-21T14:12:09+00:00</updated>
<author>
<name>Csaba Henk</name>
<email>csaba@redhat.com</email>
</author>
<published>2020-07-17T09:33:36+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=91fc32dd3dae1b997f520a6e5303bc275301d6c2'/>
<id>91fc32dd3dae1b997f520a6e5303bc275301d6c2</id>
<content type='text'>
Glusterfs so far constrained itself with an arbitrary limit (32) on
the number of groups read from /proc/[pid]/status (this was the
number of groups shown there prior to Linux commit
v3.7-9553-g8d238027b87e (v3.8-rc1~74^2~59); since that commit, all
groups are shown).

With this change, we'll read groups up to the number Glusterfs
supports in general (64k).
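
A minimal Python sketch of the parsing involved, assuming only the
standard /proc/[pid]/status layout where the supplementary gids appear
on a single "Groups:" line; GROUP_CAP is a stand-in for the ~64k
ceiling mentioned above, not the actual constant name.

    GROUP_CAP = 65536

    def read_aux_groups(pid):
        # Collect every gid on the "Groups:" line, keeping at most
        # GROUP_CAP entries instead of a hard-coded 32.
        with open("/proc/%d/status" % pid) as status:
            for line in status:
                if line.startswith("Groups:"):
                    gids = [int(tok) for tok in line.split()[1:]]
                    return gids[:GROUP_CAP]
        return []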

Note: the actual number of groups made use of in a regular Glusterfs
setup is still capped at ~93 due to limitations of the RPC transport.
To be able to handle more groups than that, brick-side gid resolution
(the server.manage-gids option) can be used along with NIS, LDAP or
another such networked directory service (see
https://github.com/gluster/glusterdocs/blob/5ba15a2/docs/Administrator%20Guide/Handling-of-users-with-many-groups.md#limit-in-the-glusterfs-protocol
).

Also adding some diagnostic messages to frame_fill_groups().

Change-Id: I271f3dc3e6d3c44d6d989c7a2073ea5f16c26ee0
fixes: #1075
Signed-off-by: Csaba Henk &lt;csaba@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Glusterfs so far constrained itself with an arbitrary limit (32) on
the number of groups read from /proc/[pid]/status (this was the
number of groups shown there prior to Linux commit
v3.7-9553-g8d238027b87e (v3.8-rc1~74^2~59); since that commit, all
groups are shown).

With this change, we'll read groups up to the number Glusterfs
supports in general (64k).
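
A minimal Python sketch of the parsing involved, assuming only the
standard /proc/[pid]/status layout where the supplementary gids appear
on a single "Groups:" line; GROUP_CAP is a stand-in for the ~64k
ceiling mentioned above, not the actual constant name.

    GROUP_CAP = 65536

    def read_aux_groups(pid):
        # Collect every gid on the "Groups:" line, keeping at most
        # GROUP_CAP entries instead of a hard-coded 32.
        with open("/proc/%d/status" % pid) as status:
            for line in status:
                if line.startswith("Groups:"):
                    gids = [int(tok) for tok in line.split()[1:]]
                    return gids[:GROUP_CAP]
        return []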

Note: the actual number of groups made use of in a regular Glusterfs
setup is still capped at ~93 due to limitations of the RPC transport.
To be able to handle more groups than that, brick-side gid resolution
(the server.manage-gids option) can be used along with NIS, LDAP or
another such networked directory service (see
https://github.com/gluster/glusterdocs/blob/5ba15a2/docs/Administrator%20Guide/Handling-of-users-with-many-groups.md#limit-in-the-glusterfs-protocol
).

Also adding some diagnostic messages to frame_fill_groups().

Change-Id: I271f3dc3e6d3c44d6d989c7a2073ea5f16c26ee0
fixes: #1075
Signed-off-by: Csaba Henk &lt;csaba@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>tests: provide an option to mark tests as 'flaky'</title>
<updated>2020-08-20T08:01:07+00:00</updated>
<author>
<name>Amar Tumballi</name>
<email>amar@kadalu.io</email>
</author>
<published>2020-08-18T08:38:20+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=097db13c11390174c5b9f11aa0fd87eca1735871'/>
<id>097db13c11390174c5b9f11aa0fd87eca1735871</id>
<content type='text'>
* also add some time gaps in other tests to see if they then behave properly
* create a directory 'tests/000/', which can host any tests that are flaky.
* move all the tests mentioned in the issue to the above directory.
* as the above directory gets tested first, all flaky tests are reported quickly.
* change `run-tests.sh` to continue running tests even if flaky tests fail.

Reference: gluster/project-infrastructure#72
Updates: #1000
Change-Id: Ifdafa38d083ebd80f7ae3cbbc9aa3b68b6d21d0e
Signed-off-by: Amar Tumballi &lt;amar@kadalu.io&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
* also add some time gaps in other tests to see if they then behave properly
* create a directory 'tests/000/', which can host any tests that are flaky.
* move all the tests mentioned in the issue to the above directory.
* as the above directory gets tested first, all flaky tests are reported quickly.
* change `run-tests.sh` to continue running tests even if flaky tests fail.

Reference: gluster/project-infrastructure#72
Updates: #1000
Change-Id: Ifdafa38d083ebd80f7ae3cbbc9aa3b68b6d21d0e
Signed-off-by: Amar Tumballi &lt;amar@kadalu.io&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>features/shard: optimization over shard lookup in case of prealloc</title>
<updated>2020-08-20T06:02:19+00:00</updated>
<author>
<name>Vinayakswami Hariharmath</name>
<email>vharihar@redhat.com</email>
</author>
<published>2020-08-06T09:09:59+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=2ede911d07c6dc07a0f729526ab590ace77341ae'/>
<id>2ede911d07c6dc07a0f729526ab590ace77341ae</id>
<content type='text'>
Assume that we are preallocating a VM of size 1TB with a shard block
size of 64MB; there will then be ~16k shards.

This creation happens in 2 steps in the shard_fallocate() path, i.e.

1. look up the shards, if any are already present, and
2. mknod the shards that do not exist.

But in the case of fresh creation, we don't have to look up all the
shards that are not present, as the file size will be 0. Through this,
we can save a lookup on every shard that is not present. This
optimization is quite useful in the case of preallocating a big VM.

Also, if the file is already present and the call is to extend it to a
bigger size, then we need not look up non-existent shards. Just look
up the preexisting shards, populate the inodes and issue mknod for the
extended size.
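
A minimal Python sketch of the lookup-skipping idea, not the shard
xlator code itself; the function name and the shard numbering are
simplified for illustration.

    def shards_needing_lookup(current_size, shard_block_size):
        # Fresh creation: the file size is 0, so no shard blocks can
        # exist yet and the per-shard lookup step can be skipped.
        if current_size == 0:
            return []
        # Extending an existing file: only shards that may already exist,
        # i.e. those covering the current size, need a lookup (block 0 is
        # the base file, not a separate shard).
        shard_count = (current_size + shard_block_size - 1) // shard_block_size
        return list(range(1, shard_count))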

Fixes: #1425
Change-Id: I60036fe8302c696e0ca80ff11ab0ef5bcdbd7880
Signed-off-by: Vinayakswami Hariharmath &lt;vharihar@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Assume that we are preallocating a VM of size 1TB with a shard block
size of 64MB; there will then be ~16k shards.

This creation happens in 2 steps in the shard_fallocate() path, i.e.

1. look up the shards, if any are already present, and
2. mknod the shards that do not exist.

But in the case of fresh creation, we don't have to look up all the
shards that are not present, as the file size will be 0. Through this,
we can save a lookup on every shard that is not present. This
optimization is quite useful in the case of preallocating a big VM.

Also, if the file is already present and the call is to extend it to a
bigger size, then we need not look up non-existent shards. Just look
up the preexisting shards, populate the inodes and issue mknod for the
extended size.
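
A minimal Python sketch of the lookup-skipping idea, not the shard
xlator code itself; the function name and the shard numbering are
simplified for illustration.

    def shards_needing_lookup(current_size, shard_block_size):
        # Fresh creation: the file size is 0, so no shard blocks can
        # exist yet and the per-shard lookup step can be skipped.
        if current_size == 0:
            return []
        # Extending an existing file: only shards that may already exist,
        # i.e. those covering the current size, need a lookup (block 0 is
        # the base file, not a separate shard).
        shard_count = (current_size + shard_block_size - 1) // shard_block_size
        return list(range(1, shard_count))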

Fixes: #1425
Change-Id: I60036fe8302c696e0ca80ff11ab0ef5bcdbd7880
Signed-off-by: Vinayakswami Hariharmath &lt;vharihar@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: getspec() returns wrong response when volfile not found</title>
<updated>2020-07-23T10:29:26+00:00</updated>
<author>
<name>Tamar Shacked</name>
<email>tshacked@redhat.com</email>
</author>
<published>2020-07-21T06:31:18+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=850b7daf9ba1e7cd28efda3150dd8d11c61e2d27'/>
<id>850b7daf9ba1e7cd28efda3150dd8d11c61e2d27</id>
<content type='text'>
In a cluster environment, getspec() detects that the volfile is not
found, but further on this return code is overwritten by another call,
so the error is lost and not handled.
As a result the server responds with an ambiguous message,
{op_ret = -1, op_errno = 0, ..}, which causes the client to get stuck.

Fix:
Server side: don't override the failure error.

fixes: #1375
Change-Id: Id394954d4d0746570c1ee7d98969649c305c6b0d
Signed-off-by: Tamar Shacked &lt;tshacked@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
In a cluster environment, getspec() detects that the volfile is not
found, but further on this return code is overwritten by another call,
so the error is lost and not handled.
As a result the server responds with an ambiguous message,
{op_ret = -1, op_errno = 0, ..}, which causes the client to get stuck.

Fix:
Server side: don't override the failure error.

fixes: #1375
Change-Id: Id394954d4d0746570c1ee7d98969649c305c6b0d
Signed-off-by: Tamar Shacked &lt;tshacked@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>dht - fixing xattr inconsistency</title>
<updated>2020-07-22T09:10:35+00:00</updated>
<author>
<name>Barak Sason Rofman</name>
<email>sason922@gmail.com</email>
</author>
<published>2020-07-07T17:55:35+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=76017cf65433b7f42e6bfdc2eaddfc36685e2c61'/>
<id>76017cf65433b7f42e6bfdc2eaddfc36685e2c61</id>
<content type='text'>
The scenario of setting an xattr on a dir, killing one of the bricks,
removing the xattr, and bringing back the brick results in an xattr
inconsistency - the downed brick will still have the xattr, but the
rest won't.
This patch adds a mechanism that removes the extra xattrs during
lookup.
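
A minimal Python sketch of the reconciliation step, not the dht xlator
code; treating one copy of the xattrs as the reference set is a
simplification made only for illustration.

    def extra_xattrs(brick_xattrs, reference_xattrs):
        # brick_xattrs / reference_xattrs: dicts mapping user xattr names
        # to values for the out-of-date brick and for the reference copy.
        # Names present only on the out-of-date brick are stale and can be
        # removed during lookup to restore consistency.
        return [name for name in brick_xattrs if name not in reference_xattrs]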

This patch is a modification of a previous patch, based on comments
that were made after it was merged:
https://review.gluster.org/#/c/glusterfs/+/24613/

fixes: #1324
Change-Id: Ifec0b7aea6cd40daa8b0319b881191cf83e031d1
Signed-off-by: Barak Sason Rofman &lt;bsasonro@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
The scenario of setting an xattr on a dir, killing one of the bricks,
removing the xattr, and bringing back the brick results in an xattr
inconsistency - the downed brick will still have the xattr, but the
rest won't.
This patch adds a mechanism that removes the extra xattrs during
lookup.
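
A minimal Python sketch of the reconciliation step, not the dht xlator
code; treating one copy of the xattrs as the reference set is a
simplification made only for illustration.

    def extra_xattrs(brick_xattrs, reference_xattrs):
        # brick_xattrs / reference_xattrs: dicts mapping user xattr names
        # to values for the out-of-date brick and for the reference copy.
        # Names present only on the out-of-date brick are stale and can be
        # removed during lookup to restore consistency.
        return [name for name in brick_xattrs if name not in reference_xattrs]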

This patch is a modification of a previous patch, based on comments
that were made after it was merged:
https://review.gluster.org/#/c/glusterfs/+/24613/

fixes: #1324
Change-Id: Ifec0b7aea6cd40daa8b0319b881191cf83e031d1
Signed-off-by: Barak Sason Rofman &lt;bsasonro@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>dht: Heal missing dir entry on brick in revalidate path</title>
<updated>2020-07-09T04:26:51+00:00</updated>
<author>
<name>Susant Palai</name>
<email>spalai@redhat.com</email>
</author>
<published>2020-06-23T06:43:44+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=1ca15d9a490b57275a37db04fcae8594a1260530'/>
<id>1ca15d9a490b57275a37db04fcae8594a1260530</id>
<content type='text'>
Mark the directory as missing in the layout structure so that it is
healed in dht_selfheal_directory.

fixes: #1327
Change-Id: If2c69294bd8107c26624cfe220f008bc3b952a4e
Signed-off-by: Susant Palai &lt;spalai@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Mark the directory as missing in the layout structure so that it is
healed in dht_selfheal_directory.

fixes: #1327
Change-Id: If2c69294bd8107c26624cfe220f008bc3b952a4e
Signed-off-by: Susant Palai &lt;spalai@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>tests: added volume operations to increase code coverage</title>
<updated>2020-07-06T13:15:18+00:00</updated>
<author>
<name>nik-redhat</name>
<email>nladha@redhat.com</email>
</author>
<published>2020-05-26T13:15:46+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=f77f8c7c603069071d7a609adce3ed9bc1131c37'/>
<id>f77f8c7c603069071d7a609adce3ed9bc1131c37</id>
<content type='text'>
Added tests for volume options like localtime-logging, fixed
enable-shared-storage to include function coverage, and added a few
negative tests for other volume options to increase the code coverage
of the glusterd component.

Change-Id: Ib1706c1fd5bc98a64dcb5c8b15a121d639a597d7
Updates: #1052
Signed-off-by: nik-redhat &lt;nladha@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Added tests for volume options like localtime-logging, fixed
enable-shared-storage to include function coverage, and added a few
negative tests for other volume options to increase the code coverage
of the glusterd component.

Change-Id: Ib1706c1fd5bc98a64dcb5c8b15a121d639a597d7
Updates: #1052
Signed-off-by: nik-redhat &lt;nladha@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>Revert "dht - fixing xattr inconsistency"</title>
<updated>2020-06-25T11:06:08+00:00</updated>
<author>
<name>Barak Sason Rofman</name>
<email>sason922@gmail.com</email>
</author>
<published>2020-06-25T09:51:04+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=cff7804a0604b1b2ab2d91920af163bd95653ae7'/>
<id>cff7804a0604b1b2ab2d91920af163bd95653ae7</id>
<content type='text'>
This reverts commit 620158475f462251c996901a8e24306ef6cb4c42.
The patch being reverted is https://review.gluster.org/#/c/glusterfs/+/24613/
Reverting is required as comments regarding a more efficient
implementation were posted after the patch was merged.
A new patch will be posted to address those comments.

updates: #1324
Change-Id: I59205baefe1cada033c736d41ce9c51b21727d3f
Signed-off-by: Barak Sason Rofman &lt;redhat@gmail.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
This reverts commit 620158475f462251c996901a8e24306ef6cb4c42.
The patch being reverted is https://review.gluster.org/#/c/glusterfs/+/24613/
Reverting is required as comments regarding a more efficient
implementation were posted after the patch was merged.
A new patch will be posted to address those comments.

updates: #1324
Change-Id: I59205baefe1cada033c736d41ce9c51b21727d3f
Signed-off-by: Barak Sason Rofman &lt;redhat@gmail.com&gt;
</pre>
</div>
</content>
</entry>
</feed>
