<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/tests/bugs, branch v4.1.6</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>glusterd: ensure volinfo-&gt;caps is set to correct value</title>
<updated>2018-11-05T20:38:01+00:00</updated>
<author>
<name>Sanju Rakonde</name>
<email>srakonde@redhat.com</email>
</author>
<published>2018-10-03T18:28:37+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=224895148d95742c1f36b48bb79d8b9ef1ff0cd6'/>
<id>224895148d95742c1f36b48bb79d8b9ef1ff0cd6</id>
<content type='text'>
With commit febf5ed4848, during the volume create op, we set
volinfo-&gt;caps to 0 only if a brick belongs to the same node and
brickinfo-&gt;vg[0] is null. Previously, we used to set
volinfo-&gt;caps to 0 when either a brick didn't belong to the same
node or brickinfo-&gt;vg[0] was null.

With this patch, we again set volinfo-&gt;caps to 0 when either a
brick doesn't belong to the same node or brickinfo-&gt;vg[0] is null,
as we did before commit febf5ed4848.
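
Below is a rough sketch of the restored per-brick check (the names
are hypothetical; the actual glusterd code differs):

    /* Sketch, not the actual glusterd code: clear the caps when a
     * brick is not local or has no volume group behind it. */
    #include &lt;string.h&gt;

    struct brick { char uuid[37]; char vg[16]; };

    static int
    should_clear_caps(const struct brick *b, const char *my_uuid)
    {
            return strcmp(b-&gt;uuid, my_uuid) != 0 || b-&gt;vg[0] == '\0';
    }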

&gt; BUG: bz#1635820
&gt; Change-Id: I00a97415786b775fb088ac45566ad52b402f1a49
&gt; Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;

fixes: bz#1643052
Change-Id: I00a97415786b775fb088ac45566ad52b402f1a49
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</content>
</entry>
<entry>
<title>tests: correction in tests/bugs/glusterd/optimized-basic-testcases-in-cluster.t</title>
<updated>2018-10-25T13:19:16+00:00</updated>
<author>
<name>Sanju Rakonde</name>
<email>srakonde@redhat.com</email>
</author>
<published>2018-10-08T14:03:58+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=79806baba7c49d028d1db60dc8aacfae7b202745'/>
<id>79806baba7c49d028d1db60dc8aacfae7b202745</id>
<content type='text'>
Patch https://review.gluster.org/#/c/glusterfs/+/19135/ has
optimised the glusterd test cases by clubbing similar test
cases into a single test case.

https://review.gluster.org/#/c/glusterfs/+/19135/15/tests/bugs/glusterd/bug-1293414-import-brickinfo-uuid.t
test case has been deleted and added as a part of
tests/bugs/glusterd/optimized-basic-testcases-in-cluster.t

In the original test case, we create a volume with two bricks, each
on a separate node (N1 &amp; N2). From another node in the cluster (N3),
we try to detach a node which is hosting bricks. It fails.

In the new test, we created the volume with a single brick on N1,
and from another node in the cluster we tried to detach N1. We
expected the peer detach to fail, but it succeeded even though the
node hosts all the bricks of the volume.

Now, the new test case is changed to cover the original test case
scenario.

Please refer to https://bugzilla.redhat.com/show_bug.cgi?id=1642597#c1 to
understand why the new test case does not fail in centos-regression.

&gt; BUG: bz#1642597

&gt; Change-Id: Ifda12b5677143095f263fbb97a6808573f513234
&gt; Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
(cherry picked from commit 0ca6773eaf5aeb507ebc72d2c2f61902eeff414c)

fixes: bz#1643075
Change-Id: Ifda12b5677143095f263fbb97a6808573f513234
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</content>
</entry>
<entry>
<title>tests: check for shd up status in bug-1637802-arbiter-stale-data-heal-lock.t</title>
<updated>2018-10-22T16:36:03+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2018-10-21T12:02:52+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=b63dfd84fc8b3e08e3f005f71bf493c633452612'/>
<id>b63dfd84fc8b3e08e3f005f71bf493c633452612</id>
<content type='text'>
Problem:
https://review.gluster.org/#/c/glusterfs/+/21427/ seems to be failing
this .t spuriously. On checking one of the failure logs, I see:

22:05:44 Launching heal operation to perform index self heal on volume patchy has been unsuccessful:
22:05:44 Self-heal daemon is not running. Check self-heal daemon log file.
22:05:44 not ok 20 , LINENUM:38

In glusterd log:
[2018-10-18 22:05:44.298832] E [MSGID: 106301] [glusterd-syncop.c:1352:gd_stage_op_phase] 0-management: Staging of operation 'Volume Heal' failed on localhost : Self-heal daemon is not running. Check self-heal daemon log file

But the tests which precede this one check, via a statedump,
whether the shd is connected to the bricks; those checks succeeded
and healing had even started. From glustershd.log:

[2018-10-18 22:05:40.975268] I [MSGID: 108026] [afr-self-heal-common.c:1732:afr_log_selfheal] 0-patchy-replicate-0: Completed data selfheal on 3b83d2dd-4cf2-4ea3-a33e-4275be40f440. sources=[0] 1  sinks=2

So the only reason I can see for the heal launched via the CLI to
fail is a race where the shd has been spawned but glusterd has not
yet updated its in-memory state to note that it is up, and hence
fails the CLI request.

Fix:
Check for shd up status before launching heal via CLI

Change-Id: Ic88abf14ad3d51c89cb438db601fae4df179e8f4
fixes: bz#1641761
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
(cherry picked from commit 3dea105556130abd4da0fd3f8f2c523ac52398d1)
</content>
</entry>
<entry>
<title>afr: prevent winding inodelks twice for arbiter volumes</title>
<updated>2018-10-10T12:55:44+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2018-10-10T12:27:33+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=5b1a94468863451d1762063e954785f4ef065374'/>
<id>5b1a94468863451d1762063e954785f4ef065374</id>
<content type='text'>
Backport of https://review.gluster.org/#/c/glusterfs/+/21380/

Problem:
In an arbiter volume, if there is a pending data heal of a file
only on the arbiter brick, self-heal takes the inodelk twice due to
a code bug but unlocks it only once, leaving behind a stale lock on
the brick. This causes the next write to the file to hang.

Fix:
Fix the code bug to take the lock only once. The bug was introduced
in master by commit eb472d82a083883335bc494b87ea175ac43471ff.
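
As a minimal sketch, a lock-once guard of this shape avoids the
double wind (hypothetical names; not the actual AFR change):

    /* Sketch: track whether the inodelk is already held so the
     * arbiter-specific path does not wind it a second time. */
    struct heal_ctx {
            int locked;             /* 1 while the inodelk is held */
    };

    static int
    take_lock_once(struct heal_ctx *ctx, int (*wind_inodelk)(void))
    {
            if (ctx-&gt;locked)
                    return 0;       /* already held: don't wind again */
            ctx-&gt;locked = 1;
            return wind_inodelk();
    }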

Thanks to Pranith Kumar K &lt;pkarampu@redhat.com&gt; for finding the RCA.

fixes: bz#1637953
Change-Id: I15ad969e10a6a3c4bd255e2948b6be6dcddc61e1
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
</content>
</entry>
<entry>
<title>afr: fix incorrect reporting of directory split-brain</title>
<updated>2018-10-05T14:43:20+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2018-09-27T12:13:34+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=e3e13d2d727bab46ce168c4a3b4cce2d476638ca'/>
<id>e3e13d2d727bab46ce168c4a3b4cce2d476638ca</id>
<content type='text'>
Backport of https://review.gluster.org/#/c/glusterfs/+/21135/

Problem:
When a directory has dirty xattrs due to failed post-ops or when
replace/reset brick is performed, AFR does a conservative merge as
expected, but heal-info reports it as split-brain because there are no
clear sources.

Fix:
Modify pending flag to contain information about pending heals and
split-brains. For directories, if spit-brain flag is not set,just show
them as needing heal and not being in split-brain.
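
A sketch of a flag layout carrying both states as separate bits
(hypothetical names; the actual layout may differ):

    /* Sketch: keep "needs heal" and "genuine split-brain" as
     * separate bits so heal-info can tell them apart. */
    #define PFLAG_PENDING (1 &lt;&lt; 0)  /* a heal is pending */
    #define PFLAG_SBRAIN  (1 &lt;&lt; 1)  /* entry is in split-brain */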

Change-Id: I09ef821f6887c87d315ae99e6b1de05103cd9383
fixes: bz#1633634
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/afr: Delegate metadata heal with pending xattrs to SHD</title>
<updated>2018-09-21T13:27:00+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pkarampu@redhat.com</email>
</author>
<published>2018-08-27T06:16:33+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=00dae0e2ea1260a082af33dac5b06ca6aa68ccc3'/>
<id>00dae0e2ea1260a082af33dac5b06ca6aa68ccc3</id>
<content type='text'>
Problem:
When metadata-self-heal is triggered on the mount, it blocks
lookup until the metadata-self-heal completes. But that can lead
to hangs when a lot of clients are accessing a directory which
needs metadata heal and all of them trigger heals, each waiting
for the other clients to complete their heals.

Fix:
Trigger the metadata heal that could block lookup only when the
heal is needed but the pending xattrs are not set. This is the
only case where, without a heal, different clients may serve
different metadata, which should be avoided; in every other case
the heal is delegated to the self-heal daemon.
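
A rough sketch of the decision (hypothetical names):

    /* Sketch: heal inline from the lookup path only when metadata
     * differs but no pending xattrs blame a brick; with pending
     * xattrs set, the self-heal daemon picks the file up instead. */
    static int
    heal_metadata_inline(int heal_needed, int pending_xattrs_set)
    {
            return heal_needed &amp;&amp; !pending_xattrs_set;
    }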

Updates bz#1625575
Change-Id: I6089e9fda0770a83fb287941b229c882711f4e66
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
</content>
</entry>
<entry>
<title>io-stats: dump io-stats info in /var/run/gluster</title>
<updated>2018-09-06T15:01:36+00:00</updated>
<author>
<name>Amar Tumballi</name>
<email>amarts@redhat.com</email>
</author>
<published>2018-07-24T10:12:28+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=1f46f2ada2d3365aa9252068fd970ae14bcc6d28'/>
<id>1f46f2ada2d3365aa9252068fd970ae14bcc6d28</id>
<content type='text'>
It wouldn't make sense to allow the io-stats dump file to be
written to *any* directory. The formatting makes sure the io-stats
name is appended to the file, so the chance of overwriting an
existing file is slim, but in any case it makes sense to restrict
dumping to one directory.

Below are the sample commands, and files created for the corresponding
values:

 $ setfattr -n trusted.io-stats-dump -v file-for-dump $M0

In this case, the file would be in /var/run/gluster/file-for-dump

 $ setfattr -n trusted.io-stats-dump -v /dir1/dir2/file-for-dump $M0

In this case, the dump file is in /var/run/gluster/dir1-dir2-file-for-dump

Note that the value passed for this virtual xattr is treated as a
file name, and any '/' in the value is changed to '-' for sanity.
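
A minimal sketch of that sanitisation (the helper ios_dump_path is
hypothetical; not the actual io-stats code):

    /* Sketch: strip leading slashes, flatten the rest of the value
     * into one file name, and pin it under /var/run/gluster. */
    #include &lt;limits.h&gt;
    #include &lt;stdio.h&gt;

    static void
    ios_dump_path(const char *value, char *path, size_t size)
    {
            char name[PATH_MAX];
            size_t i;

            while (*value == '/')
                    value++;
            snprintf(name, sizeof(name), "%s", value);
            for (i = 0; name[i] != '\0'; i++)
                    if (name[i] == '/')
                            name[i] = '-';
            snprintf(path, size, "/var/run/gluster/%s", name);
    }

For the second sample command above, this yields
/var/run/gluster/dir1-dir2-file-for-dump.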

Fixes: bz#1625106

Change-Id: Id9ae6a40a190b8937c51662e6e1c2a0f6c86a0e0
Signed-off-by: Amar Tumballi &lt;amarts@redhat.com&gt;
</content>
</entry>
<entry>
<title>storage/posix: Increment trusted.pgfid in posix_mknod</title>
<updated>2018-08-29T03:41:38+00:00</updated>
<author>
<name>N Balachandran</name>
<email>nbalacha@redhat.com</email>
</author>
<published>2018-08-21T15:27:52+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=9c0e55e061dcba0fadd3a381c974f10bb7dc14fd'/>
<id>9c0e55e061dcba0fadd3a381c974f10bb7dc14fd</id>
<content type='text'>
The value of trusted.pgfid.xx was always set to 1
in posix_mknod. This is incorrect if posix_mknod
calls posix_create_link_if_gfid_exists: in that case a link to an
already existing gfid is created, so the count should be
incremented rather than reset.
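
A simplified sketch of such an increment using the raw xattr
syscalls (hypothetical helper; the posix xlator uses its own
wrappers and byte-order handling):

    /* Sketch: read the current link count for this parent gfid and
     * bump it, instead of unconditionally writing 1. */
    #include &lt;stdint.h&gt;
    #include &lt;sys/xattr.h&gt;

    static int
    pgfid_bump(const char *path, const char *key)
    {
            uint32_t count = 0;

            if (lgetxattr(path, key, &amp;count, sizeof(count)) &lt; 0)
                    count = 0;      /* first link under this parent */
            count++;
            return lsetxattr(path, key, &amp;count, sizeof(count), 0);
    }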

Change-Id: Ibe87ca6f155846b9a7c7abbfb1eb8b6a99a5eb68
fixes: bz#1623317
Signed-off-by: N Balachandran &lt;nbalacha@redhat.com&gt;
</content>
</entry>
<entry>
<title>afr: heal gfids when file is not present on all bricks</title>
<updated>2018-07-09T15:18:31+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2018-06-14T07:29:06+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=6e4ca3168696fd85fec5a4ad29ecdbee4d2b2101'/>
<id>6e4ca3168696fd85fec5a4ad29ecdbee4d2b2101</id>
<content type='text'>
commit 20fa80057eb430fd72b4fa31b9b65598b8ec1265 introduced a
regression wherein, if a file is present on only one brick of a
replica *and* doesn't have a gfid associated with it, it doesn't
get healed upon the next lookup from the client. Fix it.

Change-Id: I7d1111dcb45b1b8b8340a7d02558f05df70aa599
fixes: bz#1597117
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
(cherry picked from commit eb472d82a083883335bc494b87ea175ac43471ff)
</content>
</entry>
<entry>
<title>storage/posix: Fix posix_symlinks_match()</title>
<updated>2018-07-02T17:14:07+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pkarampu@redhat.com</email>
</author>
<published>2018-06-26T10:28:02+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=0894e1d3af83228fb02310e61c1dafea2dc56ef9'/>
<id>0894e1d3af83228fb02310e61c1dafea2dc56ef9</id>
<content type='text'>
1) The snprintf into linkname_expected should happen with PATH_MAX.
2) The comparison should be between linkname_actual and the
   complete linkname_expected string (see the sketch below).
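
A sketch of the corrected pattern (hypothetical helper; the real
posix_symlinks_match builds the expected link target differently):

    /* Sketch: size the buffer by PATH_MAX and compare the whole
     * strings instead of a prefix. */
    #include &lt;limits.h&gt;
    #include &lt;stdio.h&gt;
    #include &lt;string.h&gt;

    static int
    symlinks_match(const char *linkname_actual, const char *dir,
                   const char *name)
    {
            char linkname_expected[PATH_MAX];

            snprintf(linkname_expected, PATH_MAX, "%s/%s", dir, name);
            return strcmp(linkname_actual, linkname_expected) == 0;
    }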

fixes: bz#1595524
Change-Id: Ic3b3c362dc6c69c046b9a13e031989be47ecff14
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
(cherry picked from commit 3099d3e6ba81d3e1abf37385b13aabf5837b9c5e)
</content>
</entry>
</feed>
