<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/tests/bugs/glusterd, branch v9dev</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/'/>
<entry>
<title>tests: Fix for spurious failure for some test cases</title>
<updated>2020-04-16T11:15:51+00:00</updated>
<author>
<name>Mohit Agrawal</name>
<email>moagrawal@redhat.com</email>
</author>
<published>2020-04-10T08:15:50+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=39f4a03cbaa7b3b93a3750258607cbd91b9d4b9d'/>
<id>39f4a03cbaa7b3b93a3750258607cbd91b9d4b9d</id>
<content type='text'>
Problem: Sometimes a test case fails while creating files on the
         mount point right after mounting the volume.

Solution: After starting the volume, wait until all brick instances
          are completely up by putting an online_brick_count check
          right after the volume is started.
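
A minimal sketch of the added check, using the test framework
helpers (the expected brick count of "3" is illustrative):

  TEST $CLI volume start $V0
  EXPECT_WITHIN $PROCESS_UP_TIMEOUT "3" online_brick_count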

Change-Id: I5020e7e417539377277ca00189f9c51d2cf877a6
Fixes: #1162
Signed-off-by: Mohit Agrawal &lt;moagrawal@redhat.com&gt;
</content>
</entry>
<entry>
<title>tests: Fix spurious failure of tests/bugs/glusterd/serialize-shd-manager-glusterd-restart.t</title>
<updated>2020-04-13T05:25:40+00:00</updated>
<author>
<name>Mohit Agrawal</name>
<email>moagrawal@redhat.com</email>
</author>
<published>2020-04-09T16:25:46+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=cd77bddacb4eefb8c447c82220780626b878c3f0'/>
<id>cd77bddacb4eefb8c447c82220780626b878c3f0</id>
<content type='text'>
Problem: Sometimes 'volume status' fails after restarting glusterd
         on one cluster node.

Solution: Wait for the glusterd handshake to finish on the node
          that was down.
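
A minimal sketch of such a wait with the cluster test helpers
(the expected peer count of "3" is illustrative):

  TEST start_glusterd 1
  EXPECT_WITHIN $PROBE_TIMEOUT "3" peer_count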

Change-Id: Ib23ca41c943caf2903c61ebf42dc437c1b9d6054
Fixes: #1158
Signed-off-by: Mohit Agrawal &lt;moagrawal@redhat.com&gt;
</content>
</entry>
<entry>
<title>posix: Avoid dict_del logs in posix_is_layout_stale while key is NULL</title>
<updated>2020-04-09T04:26:28+00:00</updated>
<author>
<name>Mohit Agrawal</name>
<email>moagrawal@redhat.com</email>
</author>
<published>2020-04-06T16:28:03+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=18f70e806bcef5a3680462a4dbb7061e5eceee1c'/>
<id>18f70e806bcef5a3680462a4dbb7061e5eceee1c</id>
<content type='text'>
Problem: The key "GF_PREOP_PARENT_KEY" is populated by dht, so for a
         non-distribute volume such as 1x3 the key is not populated,
         and posix_is_layout_stale logs a dict_del message whenever
         a file is created.

Solution: To avoid the log message, check for the key before
          deleting it.
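
A sketch of the scenario that triggered the message, in .t-style
(volume layout and file name are illustrative):

  TEST $CLI volume create $V0 replica 3 $H0:$B0/${V0}{0,1,2}
  TEST $CLI volume start $V0
  TEST glusterfs -s $H0 --volfile-id=$V0 $M0
  # on a plain 1x3 replica this used to log a spurious dict_del message
  TEST touch $M0/file1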

Change-Id: I813ee7960633e7f9f5e9ad2f42f288053d9eb71f
Fixes: #1150
Signed-off-by: Mohit Agrawal &lt;moagrawal@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: Brick process fails to come up with brickmux on</title>
<updated>2020-02-20T06:39:55+00:00</updated>
<author>
<name>Vishal Pandey</name>
<email>vpandey@redhat.com</email>
</author>
<published>2019-11-19T06:09:22+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=45e81aae791da9d013aba2286af44826227c05ec'/>
<id>45e81aae791da9d013aba2286af44826227c05ec</id>
<content type='text'>
Issue:
1- In a cluster of 3 nodes N1, N2, N3, create 3 volumes vol1,
vol2, vol3 with 3 bricks each (one from each node)
2- Set cluster.brick-multiplex on
3- Start all 3 volumes
4- Check that all bricks on a node are running on the same port
5- Kill N1
6- Set performance.readdir-ahead for volumes vol1, vol2, vol3
7- Bring N1 up and check volume status
8- Brick processes fail to come up on N1 (see the repro sketch below)
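
A condensed repro sketch using the cluster test helpers (abbreviated
to a single volume; names are illustrative):

  TEST launch_cluster 3
  TEST $CLI_1 peer probe $H2
  TEST $CLI_1 peer probe $H3
  TEST $CLI_1 volume set all cluster.brick-multiplex on
  TEST $CLI_1 volume create $V0 $H1:$B1/$V0 $H2:$B2/$V0 $H3:$B3/$V0
  TEST $CLI_1 volume start $V0
  TEST kill_glusterd 1
  TEST $CLI_2 volume set $V0 performance.readdir-ahead on
  TEST start_glusterd 1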

Root Cause -
Since the volfile versions on N1 differ from those on N2 and N3,
glusterd_import_friend_volume() is called.
glusterd_import_friend_volume() copies the new_volinfo, deletes
old_volinfo and then calls glusterd_start_bricks().
glusterd_start_bricks() looks for the volfiles and sends an rpc
request to glusterfs_handle_attach(). Now, since the volinfo
has been deleted by glusterd_delete_stale_volume()
from the priv-&gt;volumes list before glusterd_start_bricks(), and
glusterd_create_volfiles_and_notify_services() and
glusterd_list_add_order() are called after glusterd_start_bricks(),
the attach RPC request gets an empty volfile path
and that causes the brick to crash.

Fix- Call glusterd_list_add_order() and
glusterd_create_volfiles_and_notify_services() before the
glusterd_start_bricks() call is made in glusterd_import_friend_volume()

Change-Id: Idfe0e8710f7eb77ca3ddfa1cabeb45b2987f41aa
Fixes: bz#1773856
Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: default options after volume reset</title>
<updated>2020-01-01T14:15:02+00:00</updated>
<author>
<name>Sanju Rakonde</name>
<email>srakonde@redhat.com</email>
</author>
<published>2019-12-25T16:26:32+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=e40d0c399333e1ef62ff104b5cd35d8a5baa75f4'/>
<id>e40d0c399333e1ef62ff104b5cd35d8a5baa75f4</id>
<content type='text'>
Problem: the default option transport.address-family disappears
from the volume info output after a volume reset.

Cause: from 3.8.0 onwards the volume option transport.address-family
has a default value, so any volume which is created has this
option set and volume info shows it in its output. But
volume reset does not handle this option.

Solution: In glusterd_enable_default_options(), we should add this
option along with the other default options. This function is called
by glusterd_options_reset() for the volume reset command.
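
A sketch of the expected behaviour once fixed (volinfo_field is
assumed here to be the test library helper; "inet" is illustrative):

  TEST $CLI volume reset $V0
  EXPECT "inet" volinfo_field $V0 'transport.address-family'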

fixes: bz#1786478

Change-Id: I58f7aa24cf01f308c4efe6cae748cc3bc8b99b1d
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</content>
</entry>
<entry>
<title>[Cli] Remove the old log rotate command.</title>
<updated>2019-12-17T11:33:16+00:00</updated>
<author>
<name>kshithijiyer</name>
<email>kshithij.ki@gmail.com</email>
</author>
<published>2019-11-30T09:55:11+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=f6e2958543246d547f1c3c3b3b080d435b4c9296'/>
<id>f6e2958543246d547f1c3c3b3b080d435b4c9296</id>
<content type='text'>
The old command for log rotate is still present; remove
it completely. Also add a test case that exercises the
log rotate command with both the old and the new syntax,
and fix test cases that use the old syntax to use the new
one.

Code to be removed:
1. In cli-cmd-volume.c from struct cli_cmd volume_cmds[]:
{"volume log rotate &lt;VOLNAME&gt; [BRICK]", cli_cmd_log_rotate_cbk,
 "rotate the log file for corresponding volume/brick"
 " NOTE: This is an old syntax, will be deprecated from next release."},

2. In cli-cmd-volume.c from cli_cmd_log_rotate_cbk():
 ||(strcmp("rotate", words[2]) == 0)))

3. In cli-cmd-parser.c from cli_cmd_log_rotate_parse()
if (strcmp("rotate", words[2]) == 0)
   volname = (char *)words[3];
else
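
With the old syntax gone, tests would use only the new form
(volume name is illustrative):

  # old, removed: volume log rotate &lt;VOLNAME&gt; [BRICK]
  # new:          volume log &lt;VOLNAME&gt; rotate [BRICK]
  TEST $CLI volume log $V0 rotate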

fixes: bz#1750387
Change-Id: I56e4d295044e8d5fd1fc0d848bc87e135e9e32b4
Signed-off-by: kshithijiyer &lt;kshithij.ki@gmail.com&gt;
</content>
</entry>
<entry>
<title>glusterd: Client Handling of Elastic Clusters</title>
<updated>2019-11-12T06:17:40+00:00</updated>
<author>
<name>Mohit Agrawal</name>
<email>moagrawal@redhat.com</email>
</author>
<published>2019-11-06T05:02:04+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=e3d97f57c8e25e8b44d3c96b09d69336ff6edb4b'/>
<id>e3d97f57c8e25e8b44d3c96b09d69336ff6edb4b</id>
<content type='text'>
Configure the list of gluster servers in the key
GLUSTERD_BRICK_SERVERS at the time of the GETSPEC RPC call,
and read the value on the client side to update the volfile
server list, so that the client can connect to the
next volfile server when the current volfile server
is down.

Updates #741
Signed-off-by: Mohit Agrawal &lt;moagrawa@redhat.com&gt;

Change-Id: I23f36ddb92982bb02ffd83937a8bd8a2c97e8104
</content>
</entry>
<entry>
<title>cli: display detailed rebalance info</title>
<updated>2019-11-12T06:17:06+00:00</updated>
<author>
<name>Sanju Rakonde</name>
<email>srakonde@redhat.com</email>
</author>
<published>2019-10-22T09:36:29+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=50b6806bb2697246bdc1b9ac5ef19af61584e010'/>
<id>50b6806bb2697246bdc1b9ac5ef19af61584e010</id>
<content type='text'>
Problem: When one of the nodes in the cluster is down,
rebalance status does not display detailed
information.

Cause: In glusterd_volume_rebalance_use_rsp_dict()
we aggregate the rsp from all the nodes into a
dictionary and send it to the cli for printing. While
assigning an index to the keys we consider all the
peers instead of only the peers which are
up. Because of this, index 1 is never assigned,
so while parsing the rsp the cli cannot find the status-1
key in the dictionary and exits without printing
any information.

Solution: The simplest fix for this without much
code change is to continue looking for the other keys
when the status-1 key is not found.
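
A sketch of the behaviour the fix restores, in cluster test terms
(node index and volume name are illustrative):

  TEST kill_glusterd 3
  # detailed per-node output should still be printed
  TEST $CLI_1 volume rebalance $V0 status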

fixes: bz#1764119

Change-Id: I0062839933c9706119eb85416256eade97e976dc
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</content>
</entry>
<entry>
<title>tests: mark tests/bugs/glusterd/quorum-value-check.t as NFS test</title>
<updated>2019-10-15T04:47:28+00:00</updated>
<author>
<name>Sanju Rakonde</name>
<email>srakonde@redhat.com</email>
</author>
<published>2019-10-14T12:46:21+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=03e1c3998f4f3b0ade7a0a84b72693da31a213c8'/>
<id>03e1c3998f4f3b0ade7a0a84b72693da31a213c8</id>
<content type='text'>
Fixes: bz#1665358

Change-Id: Iea000dd839d4e4dbef45941f97ab3725a2aa1726
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: rebalance start should fail when quorum is not met</title>
<updated>2019-10-10T15:21:39+00:00</updated>
<author>
<name>Sanju Rakonde</name>
<email>srakonde@redhat.com</email>
</author>
<published>2019-10-10T15:10:49+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs.git/commit/?id=83173908b3e3a02ebb6eb17963f0889c7ecff987'/>
<id>83173908b3e3a02ebb6eb17963f0889c7ecff987</id>
<content type='text'>
Rebalance start should not succeed if quorum is not met.
This patch adds a condition to check whether quorum is met
in the pre-validation stage.
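
A sketch of the new expectation in cluster test terms (a 3-node
cluster with 2 nodes killed; names are illustrative):

  TEST kill_glusterd 2
  TEST kill_glusterd 3
  TEST ! $CLI_1 volume rebalance $V0 start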

fixes: bz#1760467
Change-Id: Ic7d0d08f69e4bc6d5e7abae713ec1881531c8ad4
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</content>
</entry>
</feed>
