<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/tests/bugs, branch v7.4</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>glusterd: Brick process fails to come up with brickmux on</title>
<updated>2020-03-17T06:04:40+00:00</updated>
<author>
<name>Vishal Pandey</name>
<email>vpandey@redhat.com</email>
</author>
<published>2019-11-19T06:09:22+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=4bb9b30b4d7f49d17e57fad2540d9398c83d427a'/>
<id>4bb9b30b4d7f49d17e57fad2540d9398c83d427a</id>
<content type='text'>
Issue:
1- In a cluster of 3 nodes N1, N2, N3, create 3 volumes vol1,
vol2, vol3 with 3 bricks each (one from each node)
2- Set cluster.brick-multiplex on
3- Start all 3 volumes
4- Check that all bricks on a node are running on the same port
5- Kill N1
6- Set performance.readdir-ahead for volumes vol1, vol2, vol3
7- Bring N1 up and check volume status
8- Not all brick processes are running on N1 (see the CLI sketch below).
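
A rough CLI sketch of the steps above: hostnames N1/N2/N3 are from the
report, while the brick paths and option invocations are illustrative
assumptions, not taken from the original test case.

  # create 3 volumes with one brick per node (brick paths are assumed)
  gluster volume create vol1 N1:/bricks/vol1 N2:/bricks/vol1 N3:/bricks/vol1
  gluster volume create vol2 N1:/bricks/vol2 N2:/bricks/vol2 N3:/bricks/vol2
  gluster volume create vol3 N1:/bricks/vol3 N2:/bricks/vol3 N3:/bricks/vol3

  # enable brick multiplexing cluster-wide and start all volumes
  gluster volume set all cluster.brick-multiplex on
  gluster volume start vol1
  gluster volume start vol2
  gluster volume start vol3

  # all bricks on a node should now share a single port
  gluster volume status

  # simulate killing N1 (stop glusterd and brick processes on N1),
  # then, while N1 is still down:
  gluster volume set vol1 performance.readdir-ahead on
  gluster volume set vol2 performance.readdir-ahead on
  gluster volume set vol3 performance.readdir-ahead on

  # restart glusterd on N1 and re-check; before the fix, some brick
  # processes on N1 fail to come up
  gluster volume status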

Root Cause -
Since there is a difference in volfile versions on N1 as compared
to N2 and N3, glusterd_import_friend_volume() is called.
glusterd_import_friend_volume() copies the new_volinfo, deletes the
old_volinfo and then calls glusterd_start_bricks().
glusterd_start_bricks() looks for the volfiles and sends an RPC
request to glusterfs_handle_attach(). Now, since the volinfo
has already been deleted from the priv-&gt;volumes list by
glusterd_delete_stale_volume() before glusterd_start_bricks(), and
glusterd_create_volfiles_and_notify_services() and
glusterd_list_add_order() are called only after glusterd_start_bricks(),
the attach RPC request gets an empty volfile path
and that causes the brick to crash.

Fix - Call glusterd_list_add_order() and
glusterd_create_volfiles_and_notify_services() before the
glusterd_start_bricks() call is made in glusterd_import_friend_volume().

&gt; Change-Id: Idfe0e8710f7eb77ca3ddfa1cabeb45b2987f41aa
&gt; Bug: bz#1773856
&gt; Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
(cherry picked from commit 45e81aae791da9d013aba2286af44826227c05ec)

Change-Id: Idfe0e8710f7eb77ca3ddfa1cabeb45b2987f41aa
fixes: bz#1808964
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</content>
</entry>
<entry>
<title>afr: prevent spurious entry heals leading to gfid split-brain</title>
<updated>2020-02-25T17:07:04+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2020-02-11T09:04:48+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=2c5b724c0071ae184dd4bfd441d7346106387ed5'/>
<id>2c5b724c0071ae184dd4bfd441d7346106387ed5</id>
<content type='text'>
Problem:
In a hyperconverged setup with granular-entry-heal enabled, if a file is
recreated while one of the bricks is down, and an index heal is triggered
(with the brick still down), entry self-heal was doing a spurious heal
with just the 2 good bricks. It was doing a post-op leading to removal
of the filename from .glusterfs/indices/entry-changes as well as
erroneous setting of afr xattrs on the parent. When the brick came up,
the xattrs were cleared, resulting in the renamed file not getting
healed and leading to gfid split-brain and EIO on the mount.
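
A minimal shell sketch of the scenario above; the volume name VOL, mount
point /mnt/VOL and the way the brick is brought down are illustrative
assumptions:

  gluster volume heal VOL granular-entry-heal enable

  # bring one brick down (kill its brick process), then recreate a file
  rm -f /mnt/VOL/file1
  touch /mnt/VOL/file1

  # trigger an index heal while the brick is still down; before the fix
  # the two good bricks performed a spurious entry heal and post-op here
  gluster volume heal VOL

  # bring the brick back and check; before the fix this could end in a
  # gfid split-brain on file1 (EIO on the mount)
  gluster volume start VOL force
  gluster volume heal VOL info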

Fix:
Proceed with entry heal only when shd can connect to all bricks of the replica,
just like in data and metadata heal.

fixes: bz#1804591
Change-Id: I916ae26ad1fabf259bc6362da52d433b7223b17e
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
(cherry picked from commit 06453d77d056fbaa393a137ca277a20e38d2f67e)
</content>
</entry>
<entry>
<title>server: Mount fails after reboot 1/3 gluster nodes</title>
<updated>2020-02-10T08:03:04+00:00</updated>
<author>
<name>Mohit Agrawal</name>
<email>moagrawal@redhat.com</email>
</author>
<published>2020-01-21T15:39:56+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=04b4211c16799d6a85b496b8d845e29565fb884d'/>
<id>04b4211c16799d6a85b496b8d845e29565fb884d</id>
<content type='text'>
Problem: When one server node (of a 1x3 volume) comes up after a reboot,
the client gets unmounted. The client is unmounted because it receives
an AUTH_FAILED event and calls fini for the graph. The client gets
AUTH_FAILED because the brick is not yet attached to a graph at that
moment.
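
A rough reproduction sketch using assumed names (volume VOL served by
server1..server3, mounted on a separate client); none of these names come
from the original report.

  # on the client: fuse-mount the 1x3 volume
  mount -t glusterfs server1:/VOL /mnt/VOL

  # reboot one of the three server nodes, e.g. server2
  ssh server2 reboot

  # while server2's brick is not yet attached to its brick-process graph,
  # the client can receive AUTH_FAILED and unmount /mnt/VOL; with this
  # fix the server returns ENOENT instead, so the client graph survives
  gluster volume status VOL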

Solution: To avoid unmounting the client graph, throw an ENOENT error
          from the server in case the brick is not yet attached to the
          server at the time of authenticating clients.

&gt; Credits: Xavi Hernandez &lt;xhernandez@redhat.com&gt;
&gt; Change-Id: Ie6fbd73cbcf23a35d8db8841b3b6036e87682f5e
&gt; Fixes: bz#1793852
&gt; Signed-off-by: Mohit Agrawal &lt;moagrawa@redhat.com&gt;
&gt; (cherry picked from commit &gt; f6421dff22a6ddaf14134f6894deae219948c89d)

Change-Id: Ie6fbd73cbcf23a35d8db8841b3b6036e87682f5e
Fixes: bz#1794019
Signed-off-by: Mohit Agrawal &lt;moagrawa@redhat.com&gt;
</content>
</entry>
<entry>
<title>performance/md-cache: Do not skip caching of null character xattr values</title>
<updated>2019-12-19T12:42:32+00:00</updated>
<author>
<name>Anoop C S</name>
<email>anoopcs@redhat.com</email>
</author>
<published>2019-08-10T05:00:26+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=c1cb279649b3d0c4ba88dbf0ddfd179ce6f5798f'/>
<id>c1cb279649b3d0c4ba88dbf0ddfd179ce6f5798f</id>
<content type='text'>
A null character string is a valid xattr value in a file system. But for
those xattrs processed by md-cache, it does not update its entries if the
value is null ('\0'). This results in ENODATA when those xattrs are
queried afterwards via getxattr(), causing failures in basic operations
like create, copy etc. in a specially configured Samba setup for Mac OS
clients.
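
For illustration, a rough sketch of setting and reading back a single
null-byte xattr value on a mounted volume; the mount point and xattr name
are assumptions:

  touch /mnt/VOL/file1

  # set a 1-byte value consisting of a single null character ('\0')
  setfattr -n user.test -v 0x00 /mnt/VOL/file1

  # read it back in hex; before this fix md-cache skipped caching such
  # values, and a later query could wrongly return ENODATA
  getfattr -n user.test -e hex /mnt/VOL/file1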

On the other hand, snapview-server internally sets an empty string ("")
as the value for xattrs received as part of listxattr(), and these are
not intended to be cached. Therefore we maintain that behaviour using an
additional dictionary key to prevent entries from being updated in the
getxattr() and fgetxattr() callbacks in md-cache.

Credits: Poornima G &lt;pgurusid@redhat.com&gt;

Change-Id: I7859cbad0a06ca6d788420c2a495e658699c6ff7
Fixes: bz#1785228
Signed-off-by: Anoop C S &lt;anoopcs@redhat.com&gt;
(cherry picked from commit b4b683736367d93daad08a5ee6ca95778c07c5a4)
</content>
</entry>
<entry>
<title>cluster/afr: Heal entries when there is a source &amp; no healed_sinks</title>
<updated>2019-11-14T13:18:45+00:00</updated>
<author>
<name>karthik-us</name>
<email>ksubrahm@redhat.com</email>
</author>
<published>2019-09-05T10:44:50+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=7f58624cbf88e235dc44714c5f5b00cffcab6b59'/>
<id>7f58624cbf88e235dc44714c5f5b00cffcab6b59</id>
<content type='text'>
Problem:
In a situation where B1 blames B2, B2 blames B1 and B3 doesn't blame
anything for entry heal, heal will not complete even though we have a
clear source and sinks. This happens because in
afr_selfheal_find_direction() only the bricks which are blamed by
non-accused bricks are considered as sinks. Later, in
__afr_selfheal_entry_finalize_source(), when it tries to mark all the
non-sources as sinks, it fails to do so because there won't be any
healed_sinks marked, no witness present, and there will be a source.
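
For context, a rough illustration of the mutual-blame state as seen in the
afr changelog xattrs on the brick back-ends; the volume name, brick paths
and exact xattr values here are illustrative only:

  # on B1's brick: the entry changelog blames B2 (client-1)
  getfattr -d -m 'trusted.afr.' -e hex /bricks/b1/dir
  # trusted.afr.VOL-client-1=0x000000000000000000000001

  # on B2's brick: the entry changelog blames B1 (client-0)
  getfattr -d -m 'trusted.afr.' -e hex /bricks/b2/dir
  # trusted.afr.VOL-client-0=0x000000000000000000000001

  # B3 blames nobody; before the fix, entry heal of "dir" could not
  # finalize healed_sinks and therefore never completed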

Fix:
If there is a source and no healed_sinks, then reset all the locked
sources to 0 and healed sinks to 1 to do conservative merge.

Change-Id: If40d8bc95d52a52b2730f55bdcf135109b421548
Fixes: bz#1760699
Signed-off-by: karthik-us &lt;ksubrahm@redhat.com&gt;
</content>
</entry>
<entry>
<title>afr: support split-brain CLI for replica 3</title>
<updated>2019-11-13T05:04:07+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2019-09-28T03:23:08+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=fc5a3dd8757ffc80869173e4758d068be2cf5d19'/>
<id>fc5a3dd8757ffc80869173e4758d068be2cf5d19</id>
<content type='text'>
Ever since we added quorum checks for lookups in afr via commit
bd44d59741bb8c0f5d7a62c5b1094179dd0ce8a4, the split-brain resolution
commands would not work for replica 3 because there would be no
readables for the lookup fop.

The argument was that split-brains do not occur in replica 3, but we do
see (data/metadata) split-brain cases once in a while, which indicates that
there are a few bugs/corner cases yet to be discovered and fixed.

Fortunately, commit 8016d51a3bbd410b0b927ed66be50a09574b7982 added
GF_CLIENT_PID_GLFS_HEALD as the pid for all fops made by glfsheal. If we
leverage this and allow lookups in afr when pid is GF_CLIENT_PID_GLFS_HEALD,
split-brain resolution commands will work for replica 3 volumes too.

Likewise, the check is added in shard_lookup as well to permit resolving
split-brains by specifying "/.shard/shard-file.xx" as the file name
(which previously used to fail with EPERM).
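
For reference, illustrative examples of the split-brain resolution CLI that
this change makes usable on replica 3 as well; the volume name, host, brick
path and file paths are placeholders:

  # list files in split-brain
  gluster volume heal VOL info split-brain

  # resolve by policy or by choosing a source brick
  gluster volume heal VOL split-brain latest-mtime /path/in/volume
  gluster volume heal VOL split-brain source-brick HOST:/bricks/b1 /path/in/volume

  # with sharding, individual shards can now be named too
  gluster volume heal VOL split-brain latest-mtime /.shard/shard-file.xx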

Change-Id: I3c543dea79caf7cfbc1633e9089cb1cdd2538ba9
Fixes: bz#1760791
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
(cherry picked from commit 47dbd753187f69b3835d2e42fdbe7485874c4b3e)
</content>
</entry>
<entry>
<title>tests: Fix spurious failure</title>
<updated>2019-11-06T16:59:14+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pkarampu@redhat.com</email>
</author>
<published>2019-09-11T09:11:28+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=3d702d34c94216f00ca3244eaa3ecabc34df7114'/>
<id>3d702d34c94216f00ca3244eaa3ecabc34df7114</id>
<content type='text'>
If heal from the next brick starts after the first brick completes its heal,
then opendir on the brick can change atime, leading to failure of the test.
When ctime is disabled, it is better to just check that mtime is the same
after heal.
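
A rough sketch of the kind of check involved, in the style of the tests/
framework; $B0, $V0 and EXPECT come from that framework, while the file
name and brick layout here are assumed:

  # record mtime of the file on the first brick after its heal finishes
  mtime0=$(stat -c %Y $B0/${V0}0/datafile)

  # require the same mtime on the other brick after its heal, instead of
  # comparing atime, which opendir during heal can change
  EXPECT "$mtime0" stat -c %Y $B0/${V0}1/datafile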

Backport of:

 &gt; BUG: 1751134
 &gt; Change-Id: Ia03e30fd547e6bbe85c1e299845ffa122f3a2692
 &gt; Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
(cherry picked from commit 0e37cdf271a48d3e58c212e95664a2aa34da3940)

fixes: bz#1769320
Change-Id: Ia03e30fd547e6bbe85c1e299845ffa122f3a2692
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
</content>
</entry>
<entry>
<title>ctime/rebalance: Heal ctime xattr on directory during rebalance</title>
<updated>2019-09-16T10:54:21+00:00</updated>
<author>
<name>Kotresh HR</name>
<email>khiremat@redhat.com</email>
</author>
<published>2019-07-29T13:00:42+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=0a2870b33d5d0a3cded21ac1e3071e3fde81bbb6'/>
<id>0a2870b33d5d0a3cded21ac1e3071e3fde81bbb6</id>
<content type='text'>
After add-brick and rebalance, the ctime xattr is not present
on rebalanced directories on the new brick. This patch fixes
that.
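
A rough way to observe the issue; the volume name, new brick path and
directory are placeholders, and trusted.glusterfs.mdata is assumed to be
the xattr stored by the ctime feature:

  # expand the volume and rebalance
  gluster volume add-brick VOL NEWNODE:/bricks/newbrick
  gluster volume rebalance VOL start

  # directories migrated to the new brick should carry the ctime xattr;
  # before this fix it was missing on them
  getfattr -n trusted.glusterfs.mdata -e hex /bricks/newbrick/some/dir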

Note that ctime still doesn't support consistent time across
distribute sub-volumes.

This patch also fixes the in-memory inconsistency of time attributes
when metadata is self healed.

Backport of:

 &gt; Patch: https://review.gluster.org/23127/
 &gt; Change-Id: Ia20506f1839021bf61d4753191e7dc34b31bb2df
 &gt; BUG: 1734026
 &gt; Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;
(cherry picked from commit 304640e55c0f3c6d15f4e230dc6376e4f5020fea)

Change-Id: Ia20506f1839021bf61d4753191e7dc34b31bb2df
Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;
fixes: bz#1752429
</content>
</entry>
<entry>
<title>afr/lookup: Pass xattr_req in while doing a selfheal in lookup</title>
<updated>2019-09-11T05:04:47+00:00</updated>
<author>
<name>Mohammed Rafi KC</name>
<email>rkavunga@redhat.com</email>
</author>
<published>2019-07-10T16:14:38+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=072f31bea74d9321a0a71c070265f15d4104f086'/>
<id>072f31bea74d9321a0a71c070265f15d4104f086</id>
<content type='text'>
We were not passing xattr_req when doing a name self-heal
or a metadata heal. Because of this, some xdata
was missing, which caused I/O errors.

Backport of &gt; https://review.gluster.org/#/c/glusterfs/+/23024/


&gt;Change-Id: Ibfb1205a7eb0195632dc3820116ffbbb8043545f
&gt;Fixes: bz#1728770
&gt;Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;

Fixes: bz#1749305
Change-Id: Ibfb1205a7eb0195632dc3820116ffbbb8043545f
Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
(cherry picked from commit d026f0bcfd301712e4f0671ccf238f43f2e6dd30)
</content>
</entry>
<entry>
<title>tests: fix spurious failure of bug-1402841.t-mt-dir-scan-race.t</title>
<updated>2019-09-05T04:00:59+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2019-09-04T05:57:30+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=f4e38c060bdf561af1d6eaee83aa664f1f14e8a8'/>
<id>f4e38c060bdf561af1d6eaee83aa664f1f14e8a8</id>
<content type='text'>
Problem:
Since commit 600ba94183333c4af9b4a09616690994fd528478, shd starts
healing as soon as it is toggled from disabled to enabled. This was
causing the following line in the .t to fail on a 'fast' machine (always
on my laptop and sometimes on the jenkins slaves).

EXPECT_NOT "^0$" get_pending_heal_count $V0

because by the time shd was disabled, the heal was already completed.

Fix:
Increase the number of files to be healed and make it a variable called
FILE_COUNT, in case we need to bump it up further because the machines
become even faster. Also create pending metadata heals to increase the
time taken to heal a file.
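
A rough sketch of the approach in the style of the test framework;
FILE_COUNT comes from the fix, while $M0, the loop body and file names are
illustrative:

  FILE_COUNT=500
  for i in $(seq 1 $FILE_COUNT); do
      # create files that will need healing
      dd if=/dev/urandom of=$M0/file$i bs=1k count=1
      # also change permissions so each file needs a metadata heal,
      # increasing the time the self-heal daemon takes
      chmod 0600 $M0/file$i
  done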

fixes: bz#1749155
Change-Id: I5a26b08e45b8c19bce3c01ce67bdcc28ed48198d
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
(cherry picked from commit 724c657995a2e148243eeb78c68b620c6d7714a5)
</content>
</entry>
</feed>
