<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/tests/bugs/replicate, branch heal-info</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>afr: restore timestamp of parent dir during entry-heal</title>
<updated>2019-08-14T06:29:26+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2019-07-30T11:35:22+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=8e9c53ebf16705b9a1db2fc486dc24a5cb244ddd'/>
<id>8e9c53ebf16705b9a1db2fc486dc24a5cb244ddd</id>
<content type='text'>
Fixes: bz#1734370
Change-Id: I29e338bac62104233a6f80212df8d0fb016affda
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Fixes: bz#1734370
Change-Id: I29e338bac62104233a6f80212df8d0fb016affda
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>tests: fix bug-880898.t crash</title>
<updated>2019-08-12T06:08:24+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2019-08-12T05:59:36+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=3cc0113ee436b63f69e16e4e0cd9443f1b3188ec'/>
<id>3cc0113ee436b63f69e16e4e0cd9443f1b3188ec</id>
<content type='text'>
Problem:
https://build.gluster.org/job/centos7-regression/7337/consoleFull
indicates the shd crashing for this .t. Looking at the core, I see
that the crash is at the time of shd init and the glusterfs context is
invalid:

(gdb) bt
(gdb) p ctx
$2 = (glusterfs_ctx_t *) 0xf00000000

The .t kills all gluster processes immediately after volume start,
so it looks like a race between the shd coming up and it being killed.

Fix: Kill gluster processes only after they are up and running.
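
A minimal sketch of the ordering, assuming the test framework's
EXPECT_WITHIN helper and the glustershd_up_status function from
tests/volume.rc:

    TEST $CLI volume start $V0
    # wait until the self-heal daemon is actually up...
    EXPECT_WITHIN $PROCESS_UP_TIMEOUT "Y" glustershd_up_status
    # ...and only then bring the gluster processes down
    pkill gluster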

Fixes: bz#1740017
Change-Id: I7cf589201669bd9f535e968d147015dc99e9a4b6
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem:
https://build.gluster.org/job/centos7-regression/7337/consoleFull
indicates the shd crashing for this .t. Looking at the core, I see
that the crash is at the time of shd init and the glusterfs context is
invalid:

(gdb) bt
(gdb) p ctx
$2 = (glusterfs_ctx_t *) 0xf00000000

The .t kills all gluster processes immediately after volume start,
so it looks like a race between the shd coming up and it being killed.

Fix: Kill gluster processes only after they are up and running.

Fixes: bz#1740017
Change-Id: I7cf589201669bd9f535e968d147015dc99e9a4b6
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>tests: Fix bug-1717819-metadata-split-brain-detection.t failure</title>
<updated>2019-07-15T08:47:49+00:00</updated>
<author>
<name>karthik-us</name>
<email>ksubrahm@redhat.com</email>
</author>
<published>2019-07-15T06:21:58+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=30f54b702a82c45a31aeb58fabe5b79fb62cc37b'/>
<id>30f54b702a82c45a31aeb58fabe5b79fb62cc37b</id>
<content type='text'>
Problem:
tests/bugs/replicate/bug-1717819-metadata-split-brain-detection.t fails
intermittently in test cases #49 &amp; #50, which compare the user-set
xattr values after enabling the heal. We are not waiting for the heal
to complete before comparing those values, which can cause these tests
to fail.

Fix:
Wait until the HEAL-TIMEOUT before comparing the xattr values. Also
check that the shd comes up and that the bricks connect to the shd
process in another case.
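
A hedged sketch of such waits, assuming the usual helpers from
tests/include.rc and tests/volume.rc (EXPECT_WITHIN,
get_pending_heal_count, glustershd_up_status, afr_child_up_status_in_shd):

    # allow up to $HEAL_TIMEOUT for pending heals to drain before
    # comparing the xattr values
    EXPECT_WITHIN $HEAL_TIMEOUT "^0$" get_pending_heal_count $V0
    # make sure the shd is up and brick 0 is connected to it
    EXPECT_WITHIN $PROCESS_UP_TIMEOUT "Y" glustershd_up_status
    EXPECT_WITHIN $CHILD_UP_TIMEOUT "1" afr_child_up_status_in_shd $V0 0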

Change-Id: I0e245b328da9df23ce70c5300278fad1c1d9f7ff
Fixes: bz#1729847
Signed-off-by: karthik-us &lt;ksubrahm@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem:
tests/bugs/replicate/bug-1717819-metadata-split-brain-detection.t fails
intermittently in test cases #49 &amp; #50, which compare the user-set
xattr values after enabling the heal. We are not waiting for the heal
to complete before comparing those values, which can cause these tests
to fail.

Fix:
Wait until the HEAL-TIMEOUT before comparing the xattr values. Also
check that the shd comes up and that the bricks connect to the shd
process in another case.

Change-Id: I0e245b328da9df23ce70c5300278fad1c1d9f7ff
Fixes: bz#1729847
Signed-off-by: karthik-us &lt;ksubrahm@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>cluster/afr: Fix incorrect reporting of gfid &amp; type mismatch</title>
<updated>2019-07-12T10:42:26+00:00</updated>
<author>
<name>karthik-us</name>
<email>ksubrahm@redhat.com</email>
</author>
<published>2019-06-12T06:27:02+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=1a38a6f4853f87ac4e45240579581292290f4f27'/>
<id>1a38a6f4853f87ac4e45240579581292290f4f27</id>
<content type='text'>
Problems:
1. When checking for type and gfid mismatch, if the type or gfid
is unknown because of a missing gfid handle and gfid xattr, it is
reported as a type or gfid mismatch and the heal does not complete.

2. If the source selected during entry heal has a null gfid, the same
is sent to afr_lookup_and_heal_gfid(). In this function, when we try
to assign the gfid on the bricks where it does not exist, we use that
same gfid and try to assign it on those bricks. This fails in
posix_gfid_set() since the gfid sent is null.

Fix:
If the gfid sent to afr_lookup_and_heal_gfid() is null, choose a
valid gfid before proceeding to assign the gfid on the bricks
where it is missing.

In afr_selfheal_detect_gfid_and_type_mismatch(), do not report
type/gfid mismatch if the type/gfid is unknown or not set.

Change-Id: Ia06552e4dc4a9f89cb7f5302833604bd21bbf7da
fixes: bz#1722507
Signed-off-by: karthik-us &lt;ksubrahm@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problems:
1. When checking for type and gfid mismatch, if the type or gfid
is unknown because of a missing gfid handle and gfid xattr, it is
reported as a type or gfid mismatch and the heal does not complete.

2. If the source selected during entry heal has a null gfid, the same
is sent to afr_lookup_and_heal_gfid(). In this function, when we try
to assign the gfid on the bricks where it does not exist, we use that
same gfid and try to assign it on those bricks. This fails in
posix_gfid_set() since the gfid sent is null.

Fix:
If the gfid sent to afr_lookup_and_heal_gfid() is null, choose a
valid gfid before proceeding to assign the gfid on the bricks
where it is missing.

In afr_selfheal_detect_gfid_and_type_mismatch(), do not report
type/gfid mismatch if the type/gfid is unknown or not set.

Change-Id: Ia06552e4dc4a9f89cb7f5302833604bd21bbf7da
fixes: bz#1722507
Signed-off-by: karthik-us &lt;ksubrahm@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>Cluster/afr: Don't treat all bricks having metadata pending as split-brain</title>
<updated>2019-06-10T14:48:11+00:00</updated>
<author>
<name>karthik-us</name>
<email>ksubrahm@redhat.com</email>
</author>
<published>2019-06-06T05:29:42+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=1b0b869d91d4e5bedc69922128551602dc4bbc13'/>
<id>1b0b869d91d4e5bedc69922128551602dc4bbc13</id>
<content type='text'>
Problem:
We currently don't have a roll-back/undo of post-ops if quorum is not met.
Though the FOP is still unwound with failure, the xattrs remain on the disk.
Due to these partial post-ops and partial heals (healing only when 2 bricks
are up), we can end up in metadata split-brain purely from the afr xattrs
point of view, i.e. each brick is blamed by at least one of the others for
metadata. These scenarios are hit when there is frequent connect/disconnect
of the client/shd to the bricks.

Fix:
Pick a source based on the xattr values. If 2 bricks blame one, the blamed
one must be treated as the sink. If there is no majority, all are sources.
Once we pick a source, self-heal will then do the heal instead of erroring
out due to split-brain.
This patch also adds the restriction that all bricks must be up in order
to perform metadata heal, so as to avoid any metadata loss.

Removed the test case tests/bugs/replicate/bug-1468279-source-not-blaming-sinks.t
as it was doing metadata heal even when only 2 of 3 bricks were up.

Change-Id: I07a9d62f84ceda329dcab1f02a33aeed258dcb09
fixes: bz#1717819
Signed-off-by: karthik-us &lt;ksubrahm@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem:
We currently don't have a roll-back/undo of post-ops if quorum is not met.
Though the FOP is still unwound with failure, the xattrs remain on the disk.
Due to these partial post-ops and partial heals (healing only when 2 bricks
are up), we can end up in metadata split-brain purely from the afr xattrs
point of view, i.e. each brick is blamed by at least one of the others for
metadata. These scenarios are hit when there is frequent connect/disconnect
of the client/shd to the bricks.

Fix:
Pick a source based on the xattr values. If 2 bricks blame one, the blamed
one must be treated as the sink. If there is no majority, all are sources.
Once we pick a source, self-heal will then do the heal instead of erroring
out due to split-brain.
This patch also adds the restriction that all bricks must be up in order
to perform metadata heal, so as to avoid any metadata loss.

Removed the test case tests/bugs/replicate/bug-1468279-source-not-blaming-sinks.t
as it was doing metadata heal even when only 2 of 3 bricks were up.

Change-Id: I07a9d62f84ceda329dcab1f02a33aeed258dcb09
fixes: bz#1717819
Signed-off-by: karthik-us &lt;ksubrahm@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>tests: improve and fix some test scripts</title>
<updated>2019-05-09T09:24:10+00:00</updated>
<author>
<name>Xavier Hernandez</name>
<email>jahernan@redhat.com</email>
</author>
<published>2018-01-19T11:18:13+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=aee9e3d27f56e4c0c2f981f20b15189eb7ffce51'/>
<id>aee9e3d27f56e4c0c2f981f20b15189eb7ffce51</id>
<content type='text'>
Change-Id: Iceefe22af754096c599dc570d4894d14fce4deae
Updates: bz#1193929
Signed-off-by: Xavier Hernandez &lt;xhernandez@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Change-Id: Iceefe22af754096c599dc570d4894d14fce4deae
Updates: bz#1193929
Signed-off-by: Xavier Hernandez &lt;xhernandez@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>cluster/afr: Remove local from owners_list on failure of lock-acquisition</title>
<updated>2019-04-15T06:02:22+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pkarampu@redhat.com</email>
</author>
<published>2019-04-04T10:01:56+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=8239efc76b56f1f4ce16bab263f7b355d8205820'/>
<id>8239efc76b56f1f4ce16bab263f7b355d8205820</id>
<content type='text'>
When eager-lock lock acquisition fails because of, say, network failures,
the local is not removed from owners_list. This leads to an accumulation
of waiting frames, and the application hangs because the waiting frames
assume that another transaction is in the process of acquiring the lock,
since owners_list is not empty. This patch handles that case as well, and
adds asserts to make it easier to find such problems in the future.

fixes bz#1696599
Change-Id: I3101393265e9827755725b1f2d94a93d8709e923
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
When eager-lock lock acquisition fails because of, say, network failures,
the local is not removed from owners_list. This leads to an accumulation
of waiting frames, and the application hangs because the waiting frames
assume that another transaction is in the process of acquiring the lock,
since owners_list is not empty. This patch handles that case as well, and
adds asserts to make it easier to find such problems in the future.

fixes bz#1696599
Change-Id: I3101393265e9827755725b1f2d94a93d8709e923
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>afr: thin-arbiter read txn fixes</title>
<updated>2019-03-29T08:35:36+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2019-03-07T11:32:36+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=500bd0014128e6727e83b6cb77e8ac94304b8f4a'/>
<id>500bd0014128e6727e83b6cb77e8ac94304b8f4a</id>
<content type='text'>
- Fixes afr_ta_read_txn() to handle failures in the inode refresh
code-path.
- Fixes a double free issue of dict.

Note: This patch addresses post-merge review comments for commit
69532c141be160b3fea03c1579ae4ac13018dcdf

fixes: bz#1686398
Change-Id: Id5299b45b68569d47df6b73755918237a1592cb4
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
- Fixes afr_ta_read_txn() to handle failures in the inode refresh
code-path.
- Fixes a double free issue of dict.

Note: This patch addresses post-merge review comments for commit
69532c141be160b3fea03c1579ae4ac13018dcdf

fixes: bz#1686398
Change-Id: Id5299b45b68569d47df6b73755918237a1592cb4
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>test: Fix a missing a '$' symbol</title>
<updated>2019-03-13T07:37:25+00:00</updated>
<author>
<name>Mohammed Rafi KC</name>
<email>rkavunga@redhat.com</email>
</author>
<published>2019-03-13T07:35:59+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=a395395e7fa16e12b3c3d9f9ba2a7bdbf4b50d74'/>
<id>a395395e7fa16e12b3c3d9f9ba2a7bdbf4b50d74</id>
<content type='text'>
While checking a test case using EXPECT_WITHIN, the
argument is missing a '$' symbol to denote
the token as a variable in bash.
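
An illustrative before/after, using a hypothetical test line of the
same shape (the actual test differs):

    # wrong: HEAL_TIMEOUT is passed as a literal word
    EXPECT_WITHIN HEAL_TIMEOUT "^0$" get_pending_heal_count $V0
    # right: the '$' makes bash expand the variable's value
    EXPECT_WITHIN $HEAL_TIMEOUT "^0$" get_pending_heal_count $V0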

Change-Id: I5b9150acdea000b29e94cfb01d975c77f5ece3e5
fixes: bz#1688116
Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
While checking a test case using EXPECT_WITHIN, the
argument is missing a '$' symbol to denote
the token as a variable in bash.

Change-Id: I5b9150acdea000b29e94cfb01d975c77f5ece3e5
fixes: bz#1688116
Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>cluster/afr: Send truncate on arbiter brick from SHD</title>
<updated>2019-03-11T11:21:00+00:00</updated>
<author>
<name>karthik-us</name>
<email>ksubrahm@redhat.com</email>
</author>
<published>2019-03-07T16:56:49+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=48ca0c05df4cee66cf8d07e19ee2267fc9ba920b'/>
<id>48ca0c05df4cee66cf8d07e19ee2267fc9ba920b</id>
<content type='text'>
Problem:
In an arbiter volume configuration the SHD will not send any writes to the
arbiter brick even if there is a data pending marker for the arbiter brick.
If we have an arbiter setup on the geo-rep master and there are data pending
markers for the files on the arbiter brick, the SHD will not mark any data
changelog during healing. While syncing the data from master to slave, if
the arbiter brick is considered ACTIVE, there is a chance that the slave
will miss some data. If the arbiter brick is newly added or replaced, there
is a chance of the slave missing all the data during the sync.

Fix:
If there is a data pending marker for the arbiter brick, send a truncate
on the arbiter brick during heal, so that it records the truncate as the
data transaction in the changelog.

Change-Id: I3242ba6cea6da495c418ef860d9c3359c5459dec
fixes: bz#1686568
Signed-off-by: karthik-us &lt;ksubrahm@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem:
In an arbiter volume configuration the SHD will not send any writes to the
arbiter brick even if there is a data pending marker for the arbiter brick.
If we have an arbiter setup on the geo-rep master and there are data pending
markers for the files on the arbiter brick, the SHD will not mark any data
changelog during healing. While syncing the data from master to slave, if
the arbiter brick is considered ACTIVE, there is a chance that the slave
will miss some data. If the arbiter brick is newly added or replaced, there
is a chance of the slave missing all the data during the sync.

Fix:
If there is a data pending marker for the arbiter brick, send a truncate
on the arbiter brick during heal, so that it records the truncate as the
data transaction in the changelog.

Change-Id: I3242ba6cea6da495c418ef860d9c3359c5459dec
fixes: bz#1686568
Signed-off-by: karthik-us &lt;ksubrahm@redhat.com&gt;
</pre>
</div>
</content>
</entry>
</feed>
